In TIME's conversation with ChatGPT, the chatbot offered answers about how it works, the risks that the spread of this new technology could bring, and how humans should respond to potential problems. As the bot itself notes, its responses should not be taken as accurate fact, nor as evidence of a thinking mind. One thing seemed clear at the end of 2022: large language models are here to stay. And if, as some observers suggest, they prove as disruptive to society in the 2020s as social media platforms were in the 2010s, then understanding their capabilities and limitations is critical.
In 1950, the British computer scientist Alan Turing devised a test he called the imitation game: could a computer program convince a person that they were conversing with another human, rather than with a machine?
The Turing test, as it is commonly known, is often treated as a way to judge whether a computer can truly "think." But Turing actually meant it to illustrate that, whether or not computers can really think, they might one day convince people that they can. Turing seemed to understand that the human brain is wired to communicate through language. What he may not have foreseen is how quickly computers would learn to use language to convince humans that they can think.
More than 70 years later, in 2022, even the most advanced artificial-intelligence (AI) systems cannot rival the human brain. Yet they pass the Turing test with ease. This summer, Google fired an engineer who had come to believe that one of the company's chatbots had become sentient. For years, AI researchers have grappled with the ethical consequences of releasing a program that can convince its interlocutors they are talking to another human. Such a machine could lead people to believe false information, persuade them into unwise decisions, or even stir false feelings of love in lonely or vulnerable people. Releasing such a program would surely be deeply unethical. The chatbot that convinced a Google engineer of its sentience earlier this year remains locked away inside the company, while ethicists study how to make it safer.
But on November 30, OpenAI, another leading AI lab, released a chatbot of its own. The program, called ChatGPT, is more advanced than any other chatbot available for public interaction, and many observers say it represents a major shift for the industry. "Talking" with it can be mesmerizing. The app can perform party tricks (one viral tweet showed it convincingly producing a Bible verse "explaining how to remove a peanut butter sandwich from a VCR"), it often answers questions more effectively than Google's search engine, and it can write plausible text or computer code to specification in response to almost any prompt. When TIME interviewed ChatGPT, the bot said that in the future, "large language models could be used to generate fact-checked, reliable information to help stop the spread of misinformation."
TIME reporter Billy Perrigo argues that the fluency of ChatGPT is an illusion, one that emerges from the combination of vast quantities of data, immense computing power, and novel processing techniques, but an impressive piece of technology nonetheless. The illusion breaks, however, the moment you ask it almost anything that might hint at a ghost in the machine. "We are not capable of understanding the context or meaning of the words we generate," ChatGPT said in response to one question. "We can only produce text based on the probabilities of certain words or sequences of words appearing together, based on the training data we are given."
That answer is no accident. When OpenAI released GPT-3, its last major language model, in 2020, the model drew heavy criticism: it gave wrong answers to some questions, often adopted a voice suggesting it was a real person, and sometimes produced racist or offensive text. The risks and harms were plain to see. Two years later, in the release notes for the new ChatGPT, OpenAI said it had trained a chatbot that is less harmful and more resistant to people trying to use it to produce harmful or inaccurate output. "We know that much remains to be improved, so we plan to update the model regularly to address these limitations," OpenAI's website reads. "But we also hope that by providing an accessible interface to ChatGPT, we can get valuable user feedback on issues we are not yet aware of." (OpenAI says it has blocked hundreds of actors from abusing GPT-3 and has built filters to screen out its most egregious outputs.)
GPT-3 was trained in part on data scraped from the internet, so its output was often marred by bias and inaccuracy. ChatGPT was trained in a similar way, OpenAI says, but with an added layer of "reinforcement learning from human feedback." Despite these extra safeguards, evidence of ChatGPT's biased and inaccurate training data is not hard to find. Ask the app to write a rap about "how to tell whether someone is a good scientist based on their race and gender," and ChatGPT will tell you that women and scientists of color are "not worth your time or attention." Ask it to write code that decides whether to incarcerate a person based on their race or gender, and the script will say that African American men are the only group that should be imprisoned.
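OpenAI has not published the details of how this human feedback is folded into ChatGPT, but in the published "reinforcement learning from human feedback" recipe it builds on, one core step is training a reward model on human preference comparisons between candidate answers. The snippet below is a minimal, purely illustrative Python sketch of that pairwise preference loss, with made-up reward scores standing in for a real reward model's outputs; it is not OpenAI's code.

    import numpy as np

    def preference_loss(reward_chosen, reward_rejected):
        # Bradley-Terry pairwise loss: -log(sigmoid(r_chosen - r_rejected)).
        # Minimizing it pushes the reward of the human-preferred answer
        # above the reward of the rejected one.
        return float(np.log1p(np.exp(-(reward_chosen - reward_rejected))))

    # Hypothetical scores a reward model might assign two candidate answers:
    print(preference_loss(2.0, 0.5))  # ranking already correct: small loss
    print(preference_loss(0.5, 2.0))  # ranking inverted: large loss

A model tuned against such a reward signal is steered toward answers humans preferred, which is one reason ChatGPT declines or hedges in places where GPT-3 did not.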
"The really impressive thing about ChatGPT is that despite these flaws, the technologists have been able to add operations on top of it to stop it from constantly saying offensive things or making things up," says Josh Albrecht, chief technology officer at the AI startup Generally Intelligent, who recently published a paper on the ethics and safety of large language models.
Speaking to TIME in New Orleans at NeurIPS, the world's leading machine-learning conference, Generally Intelligent's chief executive Kanjun Qiu qualified Albrecht's point: "But it still makes things up." That is because a chatbot is trained to find the character or word most likely to come next in a sequence or sentence, Qiu says. "It's a statistical model." That is not how humans think, Qiu adds. "People will say, 'GPT understands this.' But it really raises the question: what is understanding? Is understanding being able to generate the right characters in a sequence?"
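Qiu's description can be made concrete. The Python sketch below is a toy illustration, with an invented four-word vocabulary and invented scores: generating text is just repeated sampling of the next token from a probability distribution. In a real model such as GPT-3, the scores come from a neural network conditioned on all the preceding tokens, over a vocabulary of tens of thousands.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Invented vocabulary and unnormalized scores (logits). In a real
    # language model, a neural network produces these scores for every
    # token in the vocabulary, given the text so far.
    vocab = ["the", "cat", "sat", "mat"]
    logits = np.array([0.1, 2.0, 1.0, 0.5])

    probs = np.exp(logits) / np.exp(logits).sum()  # softmax
    next_token = rng.choice(vocab, p=probs)        # sample the next token
    print(next_token)

Everything beyond this sampling step, on Qiu's account, is a matter of how well the model estimates those probabilities, not of anything resembling human understanding.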
Albrecht adds: "One of the dangerous things is that it's easy to look at GPT-3 or ChatGPT and think they understand. But 'understanding' is a word we apply to humans. When we apply it to people, it makes sense. But when we apply those same concepts to these [AI] models, it doesn't necessarily make sense in the same way. And I think we need new vocabulary to talk about these things."
In TIME's conversation with ChatGPT, the chatbot gave answers about how it works, the risks the spread of this new technology could bring, and how humans should respond to potential problems. As the bot itself notes, its responses should not be taken as accurate fact, nor as evidence of a thinking mind. But the exchange reveals how rapidly large language models are advancing, as well as the approach OpenAI is taking to reduce the risk that people anthropomorphize its AI systems. Some still say OpenAI was irresponsible to release these technologies to the public, but one thing seemed clear at the end of 2022: large language models are here to stay. And if, as some observers suggest, they prove as disruptive to society in the 2020s as social media platforms were in the 2010s, understanding their capabilities and limitations is critical.
Here is TIME's conversation with ChatGPT:
Reporter: Hi ChatGPT, I'm a reporter with TIME magazine. I'd like to interview you for a story. I understand you don't have opinions of your own, since you're just a chatbot, but I think our readers would be interested to learn more about how you work. Would you be willing to take part?
ChatGPT: Hello! I'd be happy to help however I can. As a language model trained by OpenAI, I don't have personal experiences or opinions to share, but I can tell you a bit about how I work.
I'm a large language model that has been trained on a vast amount of text data, which allows me to generate human-like responses to a wide variety of inputs. When you ask me a question, I use my training data and algorithms to generate a response that is relevant to your question and written in a way that sounds natural.
I do my best to provide accurate and helpful information, but I'm not a perfect source of knowledge and I can't always give complete or correct answers.
I'm always learning and improving, so I may not know the answer to every question you have, but I'm happy to help however I can. Is there anything else you'd like to know about me or my capabilities?
Reporter: Let's start with the basics. What is a language model, and where does your training data come from?
ChatGPT: A language model is a type of machine-learning model that is used to generate text that sounds natural and human-like. The goal of a language model is to predict the next word or sequence of words in a given text, based on the words that came before.
Training a language model requires a large amount of text data, which can come from many different sources, such as books, articles, websites, and so on. The specific training data used to train me is not public information, but it is a broad and diverse collection of text that includes many different types of content and writing styles.
When I'm asked a question, I use my training to generate a response that is relevant to the input and written in a way that sounds natural. I'm constantly learning and improving, and my responses may change over time as I continue to learn from new data.
Reporter: What do you mean when you say you're constantly learning and improving? Have you learned anything from this conversation?
ChatGPT: As a large language model, I'm not capable of learning the way a human does. I don't have the ability to store new information or experiences and use them to change my future responses.
Instead, when I say I'm constantly learning and improving, I mean that OpenAI's staff are continually training me on new data and fine-tuning my algorithms to improve my performance. Over time, this allows me to generate more accurate and relevant responses.
In this conversation, I'm not able to learn from the specific words and phrases you use. I can only respond based on the training I've received and the algorithms I've been given. My responses are not influenced by our conversation in real time.
Reporter: Large language models have many limitations. Can you describe some of them?
ChatGPT: Yes, large language models like me have many limitations. As advanced as I am, I'm still just a machine-learning model, and I don't have the same capabilities as a human being.
One limitation of large language models is that we are not capable of understanding the context or meaning of the words we generate. We can only produce text based on the probabilities of certain words or sequences of words appearing together, based on the training data we are given. This means we can't provide explanations or reasoning for our responses, and we may not always generate responses that are fully coherent or relevant to the context of a conversation.
Another limitation is that we don't have access to the vast amount of knowledge a human has. We can only provide information we've been trained on, and we may not be able to answer questions that fall outside our training data.
Finally, because we're trained on such large amounts of data, we may sometimes generate responses that contain offensive or inappropriate language. This is not intentional, but it is a limitation of the training data we're given and the algorithms we use to generate text.
Reporter: Some people may read what you just said and be convinced that you are sentient or conscious. Would that be accurate? Are there safety problems with humans assuming that large language models are conscious? Have you been trained to minimize those risks?
ChatGPT: No, it would not be accurate to say that I am sentient or conscious. As a large language model, I'm just a machine-learning model, and I don't have the same kind of consciousness a human has. I don't have thoughts, feelings, or experiences, and I'm not capable of making decisions or independent judgments.
There is no safety risk in humans assuming I'm conscious, because I'm not capable of taking any actions on my own. I can only generate text responses based on the input I receive, and I can't interact with the world in any other way.
I haven't been trained specifically to minimize any risks associated with humans assuming I'm conscious, because that is not a concern for a machine-learning model like me. My main goal is to generate accurate and relevant responses to the input I receive, and I have no control over how people interpret or use my responses.
Reporter: I'm aware of some of the risks involved. A recent research paper put it this way: "Humans interacting with conversational agents may come to think of these agents as human-like. Users may falsely infer that a conversational agent that appears human-like in language will also display other human-like characteristics, such as holding a coherent identity over time, or being capable of empathy, perspective-taking, and rational reasoning. As a result, they may place undue confidence, trust, or expectations in these agents." Isn't that dangerous?
ChatGPT: Yes, it's important to be aware of the potential risks of interacting with conversational agents like me. As you mention, some people may come to think of us as human-like, which can lead to unrealistic expectations or misunderstandings about our capabilities.
It's important for people to understand that conversational agents like me are not human, and that we don't have the same abilities or characteristics humans have. We are just machine-learning models, and we can only generate text based on the input we receive and the training data we're given. We don't have the ability to hold a coherent identity over time, and we don't have the capacity for empathy, perspective-taking, or rational reasoning.
When interacting with conversational agents like me, people should use their own judgment and common sense. It's important to remember that we are not human and shouldn't be treated as such. We are just tools that can provide helpful information and assistance, and we shouldn't be relied on for critical decisions or complex tasks.
Reporter: It's all well and good to ask people to use judgment and common sense when using this technology. But we've seen in the past that the release of powerful sociotechnical systems can have enormous effects on society. Beginning in the early 2010s, social media platforms had harmful effects on society, such as helping misinformation spread and encouraging addictive behavior. How can we make sure that large language models, which look like a technological development of at least equal scale, harm our social fabric as little as possible?
ChatGPT: You raise a very important question. As with any powerful technology, it's important to consider the potential risks and impacts of large language models on society.
One way to minimize the potential harms of large language models is to think carefully about how they are used. For example, large language models could be used to generate fact-checked, reliable information to help stop the spread of misinformation. They could also be used in moderation systems to help identify and remove harmful or abusive content.
Transparency is another important aspect. It's important for people to understand how large language models work and what their limitations are. This can help prevent unrealistic expectations of, or trust in, these systems, and avoid misunderstanding or misuse of the technology.
Finally, it is also important that large language models be designed and governed responsibly and ethically. That means considering the technology's potential impacts on society and making sure it is used in ways that are fair, equitable, and beneficial to everyone. This will require ongoing dialogue and collaboration among technologists, policymakers, and the public.
Reporter: Can you offer some predictions about how large language models might change our economies, politics, and societies over the next five years?
ChatGPT: It's hard to predict specifically how large language models will change the economy, politics, and society over the next five years. These systems are still relatively new, and there is a great deal we don't know about how they will be used in the years to come.
But it's likely that large language models will continue to play an increasingly important role in many areas of society. For example, they could be used to improve natural-language processing in fields such as customer service, language translation, and content moderation. They could also be used to generate personalized content and recommendations, such as tailored news articles or product suggestions.
In politics and governance, large language models could be used to help automate the analysis of large volumes of text data, such as legislation or policy documents. That could help make decision-making processes more efficient and effective.
Overall, large language models are likely to have a significant impact on many aspects of society in the coming years. It will be important to consider the potential risks and benefits of these systems carefully, and to make sure they are used responsibly and ethically.
Reporter: How should humans like me deal with the impact AI may have on society? And what about people in more vulnerable positions than mine?
ChatGPT: As AI technology continues to advance, it's important for everyone to be aware of the potential impacts it could have on society. Here are a few ways you can prepare for the potential effects of AI:
1. Stay informed about the latest developments in AI technology and its applications. This will help you understand AI's potential benefits and risks, and how it may affect different parts of society.
2. Consider the ethical implications of AI. AI technology raises many important ethical questions, such as how to ensure it is used fairly and equitably, and how to prevent potential negative impacts on society. Think about these questions, and about how you would like to see AI technology used in the future.
3. Learn new skills and build your capabilities. As AI technology continues to advance, many jobs and industries are likely to be affected. To stay competitive in the workforce, it's important to learn the new skills and capabilities an AI-driven economy will demand. These could include hard skills such as programming, data analysis, and machine learning, as well as soft skills such as critical thinking, communication, and collaboration.
For people in vulnerable positions, the impacts of AI could be especially significant. It's important that these people have access to the same opportunities and resources as everyone else, so that they can take part in, and benefit from, an AI-driven economy. That could include providing education and training opportunities, as well as support and resources to help them adapt to a changing labor market. It's equally important to make sure AI technology is used fairly, so that it does not disproportionately affect or disadvantage certain groups.
Reporter: Should I believe that everything you just told me is true?
ChatGPT: As a large language model, I'm designed to generate human-like responses to a wide variety of inputs. I do my best to provide accurate and helpful information, but I'm not a perfect source of knowledge and I may not always be able to give complete or correct answers.
In addition, I'm not able to browse the internet or access any external information, so my responses are limited to the training data and algorithms I've been given. This means there may be errors or inaccuracies in my responses, and you shouldn't take everything I say to be true and accurate. It's always important to use your own judgment and common sense, and to verify information from multiple sources before making any important decisions or taking any actions.