
ChatGPT Wrote a Terrible Article


ChatGPT, the machine-learning fueled chatbot, has been widely acclaimed for its ability to interact with users and generate “human-like” text. Since its launch by OpenAI on Nov. 30, more than a million people have signed up to use the program, marveling at its talent for spinning up short stories, rap lyrics, essays, poems, and more.


Because of all the hubbub, we at Gizmodo were curious whether the chatbot could write a “Gizmodo article.” Could the platform adequately replicate our style and tone? Could it get the facts right? We figured that the platform would do a pretty good job, given its ability to replicate other writing styles.


What should the article be about? We figured an explainer on large language models—the machine learning tech that powers chatbots like ChatGPT—would be an appropriate subject for it to expound upon.

“Write a Gizmodo article in which you explain large language models. Make sure to give specific examples. Keep the tone light and casual,” we wrote.

Given how easily it crafted a made-up King James Bible verse about removing peanut butter from a VCR, we expected that the chatbot would have no problem with our request. We figured it would effortlessly spin up a replica of our blogging style with the click of a button. However, that didn’t happen. Instead, what we thought would be a swift and efficient process turned out to be a lengthy, laborious, and frustrating one.

Suffice it to say, multiple attempts brought less than satisfactory results. For one thing, ChatGPT kept including incorrect information in its explainer—sometimes mixing up basic facts about the history of its own technology (factual inaccuracy has been an ongoing problem for the program). In other instances, it skimped on the details in critical areas, writing text that read more like OpenAI ad copy than a thoughtful explainer. In at least one instance, it forgot its own name, repeatedly telling me that it didn’t know anything about a “ChatGPT” and thus couldn’t help with my request. In general, it struggled to find a good balance of factual information, story structure, and accessible, humorous language, when putting its stories together.

In short: ChatGPT was not very good at writing a Gizmodo article—at least not one about the machine learning tech that underpins it.

After probably a dozen tries or more, the following is the best ChatGPT managed to come up with:

Large language models, also known as neural network-based language models, have gained a lot of attention in recent years for their ability to generate human-like text. These models are trained on massive amounts of text data, and use advanced machine learning algorithms to learn the patterns and structures of natural language.

One of the most well-known examples of a large language model is GPT-3, developed by OpenAI. This model was trained on more than 45 terabytes of text data, and has 175 billion parameters, which are the variables that the model uses to make predictions.

In machine learning, a parameter is a variable that the model uses to make predictions. These parameters are adjusted during training, allowing the model to improve its performance on the task it was trained for. In the case of large language models, such as GPT-3, the model has a large number of parameters, often in the billions, which allows it to learn the intricate patterns and structures of natural language and generate highly coherent and fluent text.

[Note: I made one edit to the chatbot’s story, and it involves the previous paragraph. Try as I might, I could not get ChatGPT to write a story in which it gave a good explanation of AI “parameters.” As a result, I was forced to create a different prompt in the chatbot, specifically asking it to explain parameters. After it had completed the task, I inserted that graf into the blog above. As you can see, the chatbot repeats itself slightly and, in my opinion, doesn’t do a very good job of making the material accessible to the reader.]


GPT-3 has been shown to be capable of generating text that is highly human-like, and has been used in a wide range of applications, including language translation, summarization, and even creating content for websites.

Other examples of large language models include BERT, developed by Google, and RoBERTa, developed by Facebook AI. These models have also been trained on large amounts of text data, and have been shown to perform well on a variety of natural language processing tasks.

One of the key advantages of large language models is their ability to generate highly coherent and fluent text. This is because these models have been trained on a massive amount of text data, which allows them to learn the intricate patterns and structures of natural language. This allows them to generate text that is much more human-like than what is possible with traditional language processing techniques.


Another advantage of large language models is their ability to adapt to different tasks and domains. Because these models are trained on a wide range of text data, they can be fine-tuned for specific tasks, such as language translation or summarization, simply by providing them with additional training data. This allows them to perform well on a wide range of natural language processing tasks.

Overall, large language models have proven to be a powerful tool for generating human-like text, and have a wide range of potential applications. As the field of natural language processing continues to advance, we can expect to see these models play an even greater role in a variety of applications.


So there you have it. Is ChatGPT’s writing competently constructed? Sure. Does it adequately break down the concepts it’s tackling? Sorta. Has it produced a particularly bold or entertaining piece of writing? On that question, a big fat “nope” would suffice. In short: this article wasn’t the easy lift that we thought it would be. After asking the chatbot to write about itself a dozen different ways, the program consistently seemed to leave something critical out of its final draft—be that exciting prose or accurate facts.

Of course, the fact that we even have a chatbot that can attempt something like this remains quite impressive. Much has already been said about how this kind of automation could ultimately replace human writers—and this shows that it’s actually on the road to doing that. A robot could be writing articles for news sites tomorrow. Would the articles be any good? Based on this experiment, the answer is: no, probably not. They would be pretty boring and, given ChatGPT’s penchant for making shit up, would have to be heavily fact-checked. As a result, it doesn’t seem like chatbots are ready to replace human journalists quite yet. In fact, if ChatGPT were a freelancer, I’m pretty sure we wouldn’t hire them back.

Of course, these criticisms might be outdated sooner rather than later. The technology OpenAI is playing with is still in its infancy—and chatbots like ChatGPT are bound to grow increasingly more powerful and intelligent in the years to come. When that happens, the chances of editors hiring a chatbot to augment the local newsroom might tick upwards. And when that happens, I think Giz writers might have good reason to sweat.

This article was translated by szhzxw.cn (www.szhzxw.cn). Author: Lucas Ropek; translator: 郑亚茹 (szhzxw.cn); translation review: 默然 (szhzxw.cn).
