
Is Your Organization Vulnerable to Shadow AI?

Who knows what evil lies in the heart of unauthorized AI? The shadow-savvy IT leader knows.

Lurking within the dark recesses of every enterprise are developers and other tech-savvy IT team members who want to push the envelope. These individuals aren’t willing to wait for IT to stamp its seal of approval on a new software tool. Working surreptitiously and without oversight, they plow forward with unsanctioned technology, regardless of the potential security implications, often sharing their innovations with unsuspecting colleagues.

When it comes to shadow artificial intelligence deployments, the stakes grow exponentially. The term shadow AI refers to AI programs used without the approval or knowledge of an organization’s leaders. For years, CIOs and IT managers have struggled with shadow IT, the use of any software tool not specifically authorized by IT bosses. “This same issue is now presenting itself in the form of AI tools,” says Tommy Gardner, CTO of HP Federal.

Shadow AI typically surfaces when scientists and software engineers jump onto new open source technologies and apply them in their work, often at first just to try them out, says Scott Zoldi, chief analytics officer at credit scoring service FICO. “Once established, it’s used consistently without being brought back into any formal AI model governance processes.” Organizations grappling with shadow AI often don’t understand who created it, how it’s being used, and for what purposes, all of which can create considerable risks.

1. What Dangers Does Unaddressed Shadow AI Pose?

Perhaps the biggest danger associated with unaddressed shadow AI is that sensitive enterprise data could fall into the wrong hands. This poses a significant risk to privacy and confidentiality, cautions Larry Kinkaid, a consulting manager at BARR Advisory, a cybersecurity and compliance solutions provider. “The data could be used to train AI models that are commingled, or worse, public, giving bad actors access to sensitive information that could be used to compromise your company’s network or services.” There could also be serious financial repercussions if the data is subject to legal, statutory, or regulatory protections, he adds.

Organizations dedicated to responsible AI deployment and use follow strong, explainable, ethical, and auditable practices, Zoldi says. “Together, such practices form the basis for a responsible AI governance framework.” Shadow AI occurs out of sight and beyond AI governance guardrails. When used to make decisions or influence business processes, it usually doesn’t meet even basic governance standards. “Such AI is ungoverned, which could make its use unethical, unstable, and unsafe, creating unknown risks,” he warns.

It may seem harmless when employees download an AI-powered program to their work laptops without permission. Concern arises, however, when that program contains an exploitable vulnerability. “If the IT team isn’t aware that these programs are in use, they can’t take appropriate preventive measures to get ahead of the issue, and company data could be compromised,” Gardner explains.
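Gardner’s point about visibility lends itself to a small illustration. The sketch below is an assumption rather than anything described in the article: it shows one way an IT team might surface unsanctioned AI tooling on a workstation by comparing locally installed Python packages against a watchlist of common AI libraries and a hypothetical IT-approved allowlist (both the AI_PACKAGES watchlist and the APPROVED set are made up for the example).

```python
# Minimal sketch: flag installed AI/ML packages that are not on an
# IT-approved list so that unsanctioned tools at least become visible.
# The package names and the APPROVED set are illustrative assumptions.
from importlib.metadata import distributions

APPROVED = {"scikit-learn"}  # hypothetical allowlist maintained by IT

# Common AI/ML distribution names to watch for (illustrative, not exhaustive).
AI_PACKAGES = {
    "openai", "anthropic", "transformers", "torch", "tensorflow",
    "langchain", "llama-cpp-python", "scikit-learn",
}

def find_unapproved_ai_packages():
    """Return installed AI/ML packages missing from the approved list."""
    installed = set()
    for dist in distributions():
        name = dist.metadata["Name"]
        if name:
            installed.add(name.lower())
    return sorted((installed & AI_PACKAGES) - APPROVED)

if __name__ == "__main__":
    for name in find_unapproved_ai_packages():
        print(f"Unapproved AI package installed: {name}")
```

An inventory like this covers only one narrow channel (Python libraries on a managed machine); browser-based AI services and standalone apps would need other controls, which is why Gardner ties the risk back to IT awareness rather than to any single tool.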

2. What Are the Most Common Sources of Shadow AI?

AI is a transformative technology with the power to fundamentally improve the way businesses use data and manage customer interactions. “When leveraging AI properly, organizations can make data-informed decisions at superhuman speed,” Zoldi says. “However, with shadow AI, creators often don’t understand or anticipate the inherent dangers that accompany AI use.”

The most common source of shadow AI is hard-working employees or teams who download one or more unauthorized apps with the noble intention of using AI to solve immediate issues or investigate new processes. Yet while using shadow AI-powered productivity tools or editing features to accomplish a task may seem relatively harmless, serious vulnerabilities can arise when a formal approval process isn’t followed, Gardner explains. “While these tools can be great options to solve a problem in the short term, they pose a potential security issue for the organization,” he notes.

3. What Defenses Are There Against Shadow AI?

Besides providing security awareness training to employees, organizations aiming to mitigate the risks associated with shadow AI should establish clear governance through acceptable use policies, Kinkaid advises. “Ensure that the use of AI is addressed within formal risk assessments,” he adds.

Zoldi stresses the importance of alerting the entire enterprise to shadow AI’s dangers. “AI can be used in any department, from legal and HR to software development, sales, business planning, customer support, and beyond.”

It’s also important for AI algorithms to be accessed through a private network supported by controlled security practices, Zoldi says. “This means, as a first-line defense, blocking access to open source code.” Users who want to leverage open source AI need to go through a controlled review process.
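The article doesn’t spell out what such a review process looks like, so the following is only a minimal sketch under assumptions: requests for open source AI components are checked against a hypothetical internal registry of already-reviewed components (the approved_ai_components.json file, its schema, and the routing messages are all invented for the example), and anything unreviewed is sent to a governance queue instead of being installed directly.

```python
# Minimal sketch of a "controlled review" gate for open source AI components.
# The registry file, its schema, and the messages are illustrative assumptions.
import json
from pathlib import Path

# Hypothetical registry maintained by the AI governance team, e.g.:
# {"approved": [{"name": "transformers", "version": "4.44.0"}]}
REGISTRY = Path("approved_ai_components.json")

def request_component(name: str, version: str) -> str:
    """Decide whether a requested open source AI component may be installed."""
    if REGISTRY.exists():
        registry = json.loads(REGISTRY.read_text())
    else:
        registry = {"approved": []}
    approved = {(c["name"].lower(), c["version"]) for c in registry["approved"]}
    if (name.lower(), version) in approved:
        return f"{name}=={version}: approved; install from the internal mirror"
    return f"{name}=={version}: not yet reviewed; submit a governance review request"

if __name__ == "__main__":
    print(request_component("transformers", "4.44.0"))
    print(request_component("some-new-llm-toolkit", "0.1.0"))
```

In practice the same allow-or-review decision could be enforced at the network layer Zoldi describes (for example, by routing package downloads through an internal mirror), with the script above standing in for that gate.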

4. What Benefits Can Shadow AI Offer?

Despite the dangers, shadow AI can offer a significant benefit once it has been detected, identified, and neutralized. “It can be useful in providing a starting point on which to build and modify AI ideas,” Kinkaid states.


