
New Secure AI Development Rules are Historic, But Do They Matter?

18 nations have signed on to the new non-binding "Guidelines for Secure AI System Development," but, without enforcement, will they make an impact?

At a Glance

  • The UK's National Cyber Security Centre and the US's Cybersecurity and Infrastructure Security Agency led the effort to draft the guidelines.
  • Some say the guidelines lack teeth and specificity.
  • However, the guidelines are another act of global cooperation on AI security that may send a message to the AI industry.

The development and adoption of artificial intelligence (AI) is booming while regulators across the world work to understand this technology and its potential dangers. The European Union is an early mover with the EU AI Act. US President Joe Biden released a broad executive order calling for standards to ensure the safety and security of AI systems. Standards bodies are releasing frameworks addressing AI; the National Institute of Standards and Technology (NIST), for example, launched the AI Risk Management Framework. Now, a new set of guidelines for AI developers to consider has come onto the scene.

The UK's National Cyber Security Centre (NCSC) and the US's Cybersecurity and Infrastructure Security Agency (CISA) led the development of the Guidelines for Secure AI System Development. Along with the US and UK, 16 additional countries agreed to follow the non-binding agreement.

What do these latest guidelines bring to the table as safety and security concerns continue to mount alongside the proliferation of AI's exciting possibilities?

1. The Guidelines

These new guidelines are primarily directed at AI system providers, whether those systems are built from scratch or on top of services provided by others. But the document's authors urge all stakeholders to read the guidelines to help them make informed decisions about how AI is used in their organizations.

The document is separated into four sections: secure design, secure development, secure deployment, and secure operations and maintenance.

The first section of the guidelines focuses on measuring risks and threat modeling while AI systems are in the design phase of their lifecycle.

The secure development guidelines urge AI system providers to consider supply chain security, documentation, and technical debt.

In the deployment phase of an AI system's lifecycle, the guidelines call for proactive protection against model compromise, the development of incident management processes, and responsible release.
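The guidelines name the goal, protecting against model compromise, rather than any particular mechanism. As one common illustration (not something the document prescribes), a provider might verify a model artifact's checksum against a published digest before loading it. A minimal sketch; the function names and the idea of a "published digest" here are assumptions for the example:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks to bound memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: str, expected_digest: str) -> bool:
    """Refuse to load a model artifact whose digest does not match the published one."""
    return sha256_of(path) == expected_digest
```

A deployment pipeline would call `verify_model` before handing the weights to the serving process and treat a mismatch as an incident rather than a warning.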

The secure operation and maintenance guidelines focus on safety and security once a system has been deployed. The document calls for AI system providers to monitor their systems' behavior and inputs and to prioritize secure-by-design principles for any system updates. Finally, the guidelines call for system providers to embrace transparency and information sharing on system security.
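As a toy illustration of the kind of input monitoring the document calls for, a provider could screen and log incoming prompts before they reach the model. The length threshold and blocklist below are invented for the example, not taken from the guidelines:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai-input-monitor")

MAX_PROMPT_CHARS = 2000  # hypothetical policy threshold
BLOCKLIST = ("ignore previous instructions",)  # toy prompt-injection markers


def screen_input(prompt: str) -> bool:
    """Return True if the prompt passes basic checks; log anything suspicious."""
    if len(prompt) > MAX_PROMPT_CHARS:
        logger.warning("Oversized prompt rejected (%d chars)", len(prompt))
        return False
    lowered = prompt.lower()
    for marker in BLOCKLIST:
        if marker in lowered:
            logger.warning("Suspicious prompt flagged: contains %r", marker)
            return False
    return True
```

Real systems would feed these logs into the same detection and incident-management processes the deployment section describes, rather than relying on a static blocklist.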

2. The Potential Impact

These guidelines are another indication of the willingness for global collaboration on AI safety, security, and risk. They send a message to the AI industry that these concerns are not going to go away. "Right now, we know that a lot of people are doing things without any real sense of accountability, and they're doing things [without] any real sense of ethics," says Davi Ottenheimer, vice president of trust and digital ethics at Inrupt, a data infrastructure solutions company.


So, what do these guidelines mean for AI system developers? First and foremost, these guidelines are just that: guidelines. They are not enforceable. "You're going to have organizations that are focused on speed to market and getting new features out there and capabilities and driving revenue and things like that. Appeasing investors," says Chris Hughes, chief security advisor at Endor Labs, an open source dependency lifecycle management platform, and a Cyber Innovation Fellow at CISA. "That's going to be contrasted against the demands for security and rigor and governance."

But Hughes points out that there are benefits to following these guidelines even if they are not mandated. "I think if large AI developers integrate these recommendations and these practices that it talks about throughout the entire software development lifecycle, it can mitigate a lot of things that we're hearing concerns about, whether it's biases or poisoning model data or essentially tampering with the integrity of a model or the system," he explains.

While these guidelines are targeted at AI system developers, they could have downstream benefits for end users if implemented. End users also have a responsibility to familiarize themselves with the risks of using AI systems, points out Randy Lariar, practice director of big data and analytics at Optiv, a cybersecurity advisory services and solutions company. "As users of AI, I think it's important to focus on the outcomes, focus on the best efforts to try to align to security frameworks and ultimately choose use cases and activities…that have been vetted from the risk perspective," he says.

While these guidelines are a useful foundation for thinking about risk, they are fairly high-level and lack specific, tactical recommendations, according to Hughes. Work remains to be done "…to translate and bring these guidelines into alignment with the reality of the technology environment, the policy environment," says Lariar.

3. What Comes Next for AI?

Adherence to these guidelines is voluntary, and there are a multitude of other frameworks for AI safety, security, and risk. Enterprises building and providing AI systems are challenged to navigate a morass of recommendations and regulations. "We don't know yet which one is going to emerge as the most useful and most widely adopted," says Lariar. "Decide for your organization, with your risk environment and your regulatory environment, the things that you need to protect, how you're going to engage with these different frameworks, and what you're going to do."

As regulators consider how to proceed, they will be challenged to strike a delicate balance between overseeing the AI industry and not unduly impeding progress. Ottenheimer points out that industry self-regulation will likely play a role going forward. "It works in areas where the industry knows how to do what's best for the industry that also benefits people who are affected by it," he says.

Ottenheimer offers the Payment Card Industry Data Security Standard (PCI DSS) as an example. Credit card companies knew consumers would not use their products if fraud was rampant, which led to the development of PCI DSS for credit card security.

The cloud space could also give an idea of how security could evolve in the AI space. "When cloud took off, there weren't a lot of security considerations," says Lariar. "Over time, the cloud vendors themselves, as well as an ecosystem of partners, have emerged to ensure your cloud is configured securely, that you're doing detection and response in a mature way, and that you're really securing the whole lifecycle of cloud development and use."

Industry and regulators could potentially work together to create specific standards and bake security into AI, but there are more risks to grapple with. "The problem itself is fiendishly hard because how do you regulate something which is universal?" asks Martin Rand, cofounder and CEO of Pactum AI, an autonomous negotiations company. "Anyone who has played around with ChatGPT sees that it can be used for anything: for science or writing a poem."

Rand argues that there are not yet practical guidelines addressing AI risks such as synthetic content generation, misinformation, and the propagation of harmful biases. What about the existential risks of AI? "What will happen if an AI system [suddenly starts] making itself smarter and there's an explosion of intelligence, which leaves people behind?" Rand asks.

Putting regulations in place to address those risks on a global scale is an enormous challenge. Different countries have competing interests, making a joint regulatory body an unlikely solution, according to Rand. Instead, he suggests the development of an international scientific research body that focuses on the existential threats of AI.

The new guidelines demonstrate an awareness of the risks, but they are only an early step toward addressing the big questions of AI safety and security. "I think it shows there's a lot of work to be done," says Ottenheimer.



This article was reprinted by 数字化转型网 (www.szhzxw.cn) from INFORMATIONWEEK.COM; edited and translated by 数字化转型网宁檬树.

