
Generative AI an Emerging Risk as CISOs Shift Cyber Resilience Strategies
Taking a long-term view of GenAI will ensure companies continue to get the benefits while staying on top of its risks.
At a Glance
- Research shows that over half are rushing in without a cohesive strategy for secure, aligned deployment of GenAI.
- “People are trying to do their jobs better and faster. But without any of the controls to keep the genie in the bottle.”
- When employees use approved GenAI tools, the company needs rules governing what data can be used with the tool.
Enterprise risk executives and IT security leaders are growing more concerned over the threat generative AI tools may pose to their organizations.
Building a clear generative AI strategy is essential to gain a leg up while avoiding long-term risks, but most companies aren’t doing this today.
Surprisingly, research shows that over half are rushing in without a cohesive strategy for secure, aligned deployment, while an August survey from research firm Gartner revealed GenAI has become a top emerging risk for enterprises.
Stan Black, chief information security officer at Delinea, explains there is such a rush to leverage the technology, it’s often putting organizations in a huge amount of risk. “Often we’re exposing credentials so that anyone who compromises and exploits those can have access to incredibly sensitive content,” he says. “For example, one vendor found their software developers were debugging code in LLMs, and then it was compromised.”
He adds one big concern is that people are trying to do their jobs better and faster, but without any of the controls to keep the genie in the bottle. “Now we have these pop-up AI engines, which are basically getting ranked high on SEO and people think they’re clicking on the real thing,” he adds. “It’s just a wolf in sheep’s clothing, established by malicious actors to get credentials and other valuable information.”
New Technology, Old Threats
These risks can often be mitigated through responsible use, rigorous security measures, awareness training, regular audits, ongoing monitoring, and adherence to ethical guidelines in the development and deployment of AI models.
However, Black notes one of the main challenges is that organizations are still putting themselves out there with exposed or easily compromised credentials, so that anyone who accesses and exploits those can have access to incredibly sensitive content, including intellectual property.
“One of the best ways to reduce this risk is to leverage multi-factor authentication to ensure credentials don’t get exploited by forcing whoever the purported user is — human, machine or hacker — to prove its identity beyond just a username and password,” he says.
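Black's point about forcing the purported user to prove identity beyond a username and password can be made concrete with a one-time-password factor. Below is a minimal RFC 6238 TOTP sketch using only the Python standard library; the 30-second timestep and 6-digit code are the common defaults, not anything the article specifies, and a real deployment would sit this behind rate limiting and proper secret storage:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6, at=None) -> str:
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // timestep)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32: str, submitted: str, window: int = 1) -> bool:
    """Accept the current code plus/minus `window` timesteps to tolerate clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, at=now + step * 30), submitted)
        for step in range(-window, window + 1)
    )
```

The constant-time `hmac.compare_digest` avoids leaking how many leading digits of a guess were correct; the drift window is a usability trade-off that should stay small.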
Suha Can, CISO of Grammarly, tells InformationWeek via email some of generative AI’s risks, like quality and accuracy issues, aren’t necessarily new. “However, generative AI is unlike other technologies we’ve seen because it can so convincingly create competent content,” he says. “And these are just the known challenges today. I can’t stress enough that we’re still learning the extent to which these technologies are capable, and we may not have identified all associated risks.”
He explains that’s why taking a responsible approach to product development from the start is hyper-important. “Setting your strategy starts with some foundational principles, like assessing your organization’s readiness and unique risk factors and defining a purposeful approach with set objectives,” he says. “It’s important to understand both the capabilities and limitations of generative AI to carefully plan your strategy while being realistic about expectations.”
Developing Procedures as Regulations Evolve
Heather Stevens, product executive at Olive Technologies, tells InformationWeek via email that enterprise risk executives should proceed with caution when introducing any tool that accesses, collects, or stores data. “Companies must have procedures to vet these tools prior to their introduction,” she says.
Enterprise risk executives should start by implementing clear rules prohibiting employees from using any unapproved web applications and tools. “It’s really another instance of shadow IT, which includes any IT-related purchases, activities or uses that the IT department is unaware of, and which has historically been a big problem in most organizations,” Stevens says.
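A simple technical backstop for such a rule is an egress allowlist: traffic is permitted only to tools on the IT-approved registry. A sketch of the check (the host names and the hard-coded registry are hypothetical; in practice the list would come from the organization's approved-application inventory):

```python
from urllib.parse import urlparse

# Hypothetical approved-tool registry; a real one would be loaded from the
# IT department's application inventory, not hard-coded.
APPROVED_HOSTS = {"approved-ai.example.com", "docs.example.com"}

def is_approved(url: str) -> bool:
    """Return True only if the tool's host (or a subdomain of it) is approved."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_HOSTS or any(
        host.endswith("." + approved) for approved in APPROVED_HOSTS
    )
```

Matching on the parsed hostname rather than a substring of the URL avoids trivially bypassing the check with a lookalike path or query string.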
When employees use approved GenAI tools, the company needs rules governing what data can — and, more importantly, cannot — be used with the tool. “But these rules shouldn’t be limited to only GenAI tools,” she adds. “They should be in place for all tools and applications used in the organization.”
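The data-use rules Stevens describes can be partially enforced by screening prompts before they reach any GenAI tool. A pattern-based sketch follows; the pattern set is illustrative only, and a real policy would be driven by the organization's own data-classification rules rather than a handful of regexes:

```python
import re

# Illustrative examples of data classes a policy might bar from GenAI prompts.
DISALLOWED = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "email":          re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn":            re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(text: str) -> tuple[str, list[str]]:
    """Redact disallowed data classes from a prompt and report which were found."""
    hits = []
    for label, pattern in DISALLOWED.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED:{label}]", text)
    return text, hits
```

Returning both the redacted text and the list of hits lets the caller choose between blocking the request outright and forwarding the sanitized version with an audit log entry.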
These execs should partner with any key stakeholders who might use GenAI tools.
Stevens says ideally, the organization has a CISO, with the infosec organization as a key stakeholder for every application that accesses and stores data or lives within the company’s network and ecosystem.
“The hardest part about privacy and regulation is that generative AI solutions are outpacing regulations faster than we’ve ever seen before,” Black says. “Until we can be faster than AI, the best advice is don’t use these tools unless you can have a level of control, do a lot of awareness and training, and then try to stay on top of regulations as they evolve.”
While all of this can understandably create a lot of concern for risk leaders, and while it can be tempting to limit or restrict an organization’s use of generative AI, from Can’s perspective this would be a mistake.
“Generative AI is already in your workplace, whether you know it or not. You can’t ignore or outrun it,” he says. “As security and risk leaders, this is our opportunity to be enablers. We must equip our businesses and teams with the platforms and policies to safely use, and get the most out of, generative AI tools.”
