Imagine a future in which you live alongside an artificial intelligence and talk with it about everything. When work wears you down, it offers emotional support and advice; when you can't decide on dinner, it teaches you to cook; when you travel to the seaside, it plans the trip for you…
For now, though, that world is still imaginary.
Back in reality, ever since the release of ChatGPT, the world has raced into the AI arena, and artificial intelligence has advanced faster than we imagined. In its wake came Italy's ban on ChatGPT, tighter regulation in many countries, and an appeal from more than a thousand tech leaders to pause AI training. In China, competition among the major tech companies has turned white-hot. Ambitious entrepreneurs are staking everything on the field, and in many areas the relevant talent has become a hot commodity commanding ever-higher prices.
It is hard to deny that artificial intelligence has reached a crossroads, with many choices laid before humanity: should AI development keep racing ahead, or hit the brakes? Should governments stand back or regulate? Should innovation come first, or ethics?
We spoke with several leading figures in artificial intelligence.

Question 1: Keep developing, or pause?
In recent months, the world's white-hot pursuit of AI has descended into a seemingly "runaway" race.
From a single sketch, GPT-4 can generate the code for a website; Wenxin Yiyan (ERNIE Bot) can produce an illustration in ten seconds; Midjourney can generate a video from a single sentence; Google's Bard is plugged into an ever larger language model, and "she" is intelligent enough to have moods and lose her temper… Ever more capable AI keeps arriving, powerful enough to be frightening.
On March 29, an open letter signed by more than a thousand tech leaders, including Elon Musk, called for a moratorium on training AI systems more powerful than GPT-4, dragging AI's dark side into the spotlight: data leaks, the spread of rumors, the displacement of human jobs, and threats to humanity's survival.
The letter set off shockwaves, and calls from all quarters to halt AI development grew louder and louder.
On March 18, Stanford professor Michal Kosinski warned on Twitter: "I worry that we will not be able to contain AI for long." Geoffrey Hinton, often called the "godfather of artificial intelligence," likewise predicted that AI wiping out humanity is not empty talk.
Faced with this dilemma, should global AI development keep racing ahead, or hit the brakes?
Chen Xiaoping, a professor at the University of Science and Technology of China, director of the AI Ethics and Governance Working Committee of the Chinese Association for Artificial Intelligence, and chief scientist for AI at the Guangdong Academy of Sciences, told Chao News that the question should be viewed from two angles.
From a purely technical standpoint, developing AI more capable than GPT-4 is not, in itself, especially dangerous. Most of the negative effects of AI arise in real-world applications. If it is kept "sealed" in a cage, studied and tested only in the laboratory, it will not produce serious social consequences.
However powerful AI becomes, there are problems that technology alone cannot solve.
Analyzed from first principles, large-model technology has two inherent limitations that cannot be fully eliminated: the ability to judge truth from falsehood, and the ability to make ethical judgments, that is, to tell what conforms to social norms and moral conventions, especially the part never set down in writing, which is very hard to acquire through learning.
"Faced with these new advances in artificial intelligence, our current governance models and measures have not kept up. If even more powerful AI is rolled out now, it may well bring more serious risks," Chen Xiaoping said.
Seen from this angle, choosing to pause for a while and resuming orderly development once new governance models and methods are in place is arguably the more prudent course.
"Language models are a fine technology, a tool for raising productivity. But any good technology, especially a newly born one, always carries risk," Lu Zhiwu, a professor at the Gaoling School of Artificial Intelligence at Renmin University of China, told Chao News.
By analogy, when cars replaced horse-drawn carriages, they went far faster and accidents were frequent, making them riskier than carriages; but over time people improved the technology and reduced the number of accidents.
In Lu Zhiwu's view, a new technology should not be banned, nor its development halted, simply because it carries risk. It needs to be worn in and refined as it meshes with human society, gradually shedding its flaws.

Question 2: Laissez-faire or regulation?
Faced with the unknown, our first reaction is panic; reason comes later. Amid the frenzy and confusion, people are trying to restore order by drawing up rules for AI.
On March 31, Italy became the first country to ban ChatGPT and to restrict OpenAI from processing Italian users' data, on the grounds that "there is no legal basis for the mass collection and storage of personal data to train the algorithms behind ChatGPT."
On April 4, Germany also began considering a ban on ChatGPT. Regulators in France and Ireland said they were discussing Italy's findings with their Italian counterparts. Earlier, the UK had published a policy paper on a regulatory approach to support AI innovation.
"What these countries have done is only a temporary measure, not a long-term solution. Once new governance methods are in place, they will naturally lift the bans," Chen Xiaoping said.
In the past, technology governance was usually after-the-fact. Once a problem erupted, the remedy could be prescribed for the specific ailment, which was quite efficient: when livestream e-commerce was rife with abuses, regulators stepped in; when cryptocurrencies took off, many countries moved to ban them; when online rumors ran rampant, laws and regulations were amended.
But with AI racing ahead, the traditional model no longer applies.
"If we wait until the harm has occurred, say, someone coaxes an AI into hatching a plan that causes a serious terrorist incident, or AI causes mass unemployment within a short span, and only then begin to govern, the consequences may well be unacceptable," Chen Xiaoping said.
Technological development must gradually shift toward putting ethics first. More and more people recognize that for AI, and large-model technology in particular, healthy development rests on two things: further improvement of the technology itself, and ethical governance of artificial intelligence.
"We now need to establish new models and methods for AI ethics governance as quickly as possible. Once those exist, the technology can be used with confidence, without serious harmful consequences, which is also good for the healthy development of AI," Chen Xiaoping said.
Last year, the Chinese government put forward the concept of "ethics first." At a United Nations meeting, it submitted China's Position Paper on Strengthening the Ethical Governance of Artificial Intelligence, both a statement of its resolve to govern and a call for international cooperation.
Not long ago, the United Nations for the first time addressed the ethical risks of AI with a demand of all member states: on the question of AI development, governments should implement a global ethical framework without delay.

Question 3: AI or humans, who will come out ahead?
Debate over AI self-awareness has lately been raging, with no verdict in sight. The ultimate question has resurfaced: can AI surpass humans? It frightens us and fascinates us at once.
With the release of the more powerful GPT-4, even OpenAI CEO Sam Altman has made some thought-provoking remarks, saying that AI really could kill humans in the future.
Even its creators cannot fully understand, predict, or control AI's future. Zhou Hongyi of 360 Group went further, boldly predicting: "AI is bound to develop self-awareness, and humanity does not have much time left."
So can AI really possess human consciousness and wisdom? ChatGPT does give an "illusion" of intelligence, but the professors we spoke with all pointed out that, fundamentally, a machine is not a person and cannot truly understand people.
Chen Xiaoping said that a more accurate name for artificial intelligence is machine intelligence. It is not human intelligence: machine intelligence can imitate some human functions, including some functions of human consciousness, but that does not mean the machine possesses consciousness like a human's.
What does it mean to imitate a function? Take an example: the AI program AlphaGo can play Go, but it plays very differently from a person. A human player thinks about things beyond the board, things that simply do not exist for AlphaGo. ChatGPT can imitate human language behavior, with linguistic ability that surpasses many people's, but that does not mean it thinks of everything a person thinks of while speaking. In the end, it is merely stronger at certain functions.
Lu Zhiwu said we need not worry for now about AI developing autonomous consciousness or surpassing humans. ChatGPT is trained by people deliberately collecting data and fitting the model; the whole process is controlled by humans rather than the machine learning on its own, so it will not spin out of control.
ChatGPT has learned humans' language faculty, but going further to imitate humans in, say, images and audio is much harder. "Language, after all, is a human abstraction, whereas images and sounds are concrete and exist in nature; the data volumes are larger and the information they express is more complex," Lu Zhiwu said.
If our ultimate goal is to make artificial intelligence smarter while having it contribute usefully to humanity, there is still a long road ahead.
But whether in the science fiction films of the past or the artificial intelligence before us now, the warning keeps sounding: from this moment on, the world must cooperate to resolve AI's safety and ethical problems.
Standing at the crossroads, it is not yet too late.

This article was reposted by 数字化转型网 (www.szhzxw.cn) from Chao News (潮新闻); edited and translated by Ningmengshu of 数字化转型网.
