
Seeing the Other, Knowing Oneself: A Human Worker's Guide to Taking On ChatGPT

ChatGPT has been running hot in the news media lately; from its impact on academic norms to the debate over whether it can replace Google, its disruptive force is palpable. Whereas a search engine still leaves users a fair degree of initiative in retrieving information, ChatGPT is close to an all-purpose butler: it directly hands you a single answer and solution. It has accordingly triggered a widespread sense of crisis: if so, will it replace my job?

As a macro researcher who is routinely expected to "have an answer for everything", I was driven to try ChatGPT partly out of curiosity and partly out of the fear of being replaced; after all, we are practically in the same trade. I spent about two days conversing with ChatGPT at various levels, trying to work out what this "machine" can and cannot do, in order to assess two questions:

1. Once ChatGPT walks in the door, can it replace everyone in the room?

2. What traits can keep a person from being replaced by AI, and even allow them to harness it?

My exchanges with ChatGPT fell mainly into the following categories; for each, I give a star rating representing my assessment of ChatGPT's actual ability in that area:

1. Facts / Historical Facts ★★★☆☆

For most facts that are well documented in public sources, ChatGPT can produce a simple, clear summary. One thing caught my attention, though: on obscure facts, ChatGPT visibly mixes things up, attributing one thing to another, and its clear, assured phrasing sets a trap for the reader. When I asked the AI about the history of the dormitory building I lived in during graduate school, ChatGPT's account read quite convincingly, yet my own searching could not verify most of it; instead, I found evidence suggesting that ChatGPT had confused the building with another one through a mistaken keyword association. The same problem arose when I asked about the history of the building where I work. Evidently, on such questions ChatGPT is simply "improvising a fluent story".

2. Principles / Methods / Mechanisms ★★★★★

My questions here covered economics, physics, and even how to make huangmen chicken rice (黄焖鸡米饭). If you clearly pose a question that has already been thoroughly studied, you will most likely get a highly systematic answer. The answers show that ChatGPT's strength in structured writing stands out: they are concise, fluent, and organized around clear points, making them very easy for readers to grasp quickly.

3. Comparison / Evaluation ★★★★☆

Here I asked ChatGPT about different battery technology routes, monetary policy frameworks across countries, differences among dialects, and so on. Notably, when comparing concepts ChatGPT does not rely solely on conclusions from existing material; it follows a classify-then-compare line of reasoning. Be warned, however, that when comparing things with complex origins (economic models, for instance), ChatGPT seems prone to errors such as static comparison, and may miss some very important underlying factors.

(The AI's statements here contained many errors and were not satisfying.)

4. Commentary / Judgment ★★★☆☆

When asked to evaluate areas where existing research is already thorough (policy assessment, say), ChatGPT again plays to its considerable strength in structured writing. But on judgments involving specific people, policies, philosophical dilemmas, or aesthetics, the AI essentially takes an open, neutral, or official stance; for now, one apparently cannot expect it to show any clear political, philosophical, or aesthetic leanings.

5. Prediction / Real-time Analysis ★☆☆☆☆

So far, ChatGPT cannot predict the future, and refuses to. Nor can it analyze events and data as they unfold; asked such questions, it simply advises you to seek out authoritative publishers and professional opinion.

6. Creation / Construction (unrated)

This part is not something a human can grade. I can only say that AI creation may offer another aesthetic perspective; how to judge it varies from person to person. One such piece, for example, struck me as somewhat marvelous, yet its prose could hardly be called moving.

To sum up, the known characteristics of ChatGPT so far are (its strengths are obvious, so let us focus on the weaknesses):

1. The AI's answers may mix truth with falsehood or be incomplete, and are quite misleading;

2. The AI currently cannot access non-public facts and conclusions;

3. The AI cannot predict, cannot assign weights to differing viewpoints, and cannot perform timely, scenario-specific analysis;

4. The AI cannot offer subjective or value-based judgments.

Judging by the quality of ChatGPT's answers so far, I still believe that simple, readily available alternatives such as Google and Wikipedia provide better information in most cases, and that within specialized fields, professional information channels and retrieval skills remain overwhelmingly superior. I also believe that weaknesses 1, 2, and 3 above may well improve over time, but one key problem implicit in them may be hard to solve: when we receive a ChatGPT answer, we are told neither the sources behind it nor how the information was processed, nor what ChatGPT omitted along the way. At a more macro level, AI's misreadings and distortions of fact could cause broad social harm, challenging our collective memory and consensus. We need to think seriously, at both the social and the personal level, about the many "where do we go from here" questions of the AI era.

While chatting with ChatGPT, I often caught myself wondering: is the machine talking to me really thinking, or is it merely using fluent grammar to make me believe it is thinking, while in fact it may just be running probability calculations without "understanding" what it says? That would explain the flaws above, including its "talking nonsense with a straight face". I am quite a layman on the underlying technology, though, and the question seems better left to philosophical topics such as the Turing test. Still, since modern philosophy of language has shown how strongly language shapes thought, and since AI can wield linguistic frameworks so fluently, perhaps we cannot entirely refuse to grant that AI may have acquired a capacity for thought; and if so, competition between AI and humans is inevitable.
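The suspicion that the machine is "just running probability calculations" can be made concrete with a toy sketch. The Python example below is my own illustration, not how ChatGPT actually works (the tiny corpus and the `generate` helper are invented for the demo): it produces superficially fluent text purely by sampling which word tends to follow the previous one, with no notion of truth anywhere in the process.

```python
import random

# Toy "training data" standing in for a real corpus (assumption for the demo).
corpus = ("the model writes fluent text . "
          "the model predicts the next word . "
          "fluent text reads like thought .").split()

# Count bigram transitions: which words have been seen following which.
transitions = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(start, length=8, seed=0):
    """Emit text by repeatedly sampling a word seen to follow the last one.

    Nothing here checks facts or "understands" anything; the output is
    grammatical-looking only because the statistics of the corpus are.
    """
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:  # dead end: no observed successor
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Scaled up from word bigrams to neural networks trained on vast text collections, the same predict-the-next-token principle can yield the confident, well-formed answers described above, which is precisely why fluency alone is no guarantee of fact.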

In terms of output, ChatGPT and human "researchers" (of whom, unfortunately, I am one) have much in common: both offer their audience "plausibly correct answers". Given the nonlinear path AI development may take, the "plausible correctness" it can deliver will improve markedly, and we may soon see AI competing head-on with human researchers, perhaps most visibly in finance. I often hear soul-searching questions like this: is it the truth that matters, or the answer? If the latter, then researchers really do seem to have little reason to exist. But the question clearly should not be answered so fatalistically. What matters more, I think, is to ask: how will humans prove their worth in steering AI rather than being steered by it? What attitudes and abilities count? I have mulled this over for a while and have some ideas, not necessarily mature ones:

  • The ability to discover and pose questions: if answers will only grow cheaper as computing power grows, then a problem discovered and described precisely will be worth more than its answer;
  • An independent, critical logical framework: we will be confronted ever more often by false facts and flawed conclusions; a solid base of knowledge is necessary, as are independence, vigilance, and a critical attitude, along with the indispensable skill of cross-checking any piece of information;
  • The capacity for deep, original research: AI's efficient summarization and clear structured expression will make the creation and spread of shallow-knowledge content extremely efficient, and extremely redundant; the competitive edge a researcher needs is to overwhelm shallow knowledge with deep, original, substantive research;
  • The ability to weight information and make decisions: I do not doubt that AI will eventually acquire some predictive power. What sets humans apart is that we assign our own weights to information, then make decisions and act on them. Our energy will shift increasingly toward the final decision; raising the hit rate of our decisions while limiting the damage of the wrong ones is a problem humans will be less and less able to avoid;
  • Ending the corruption of language: this thought came from a long-range urban development plan I saw that had been drafted with ChatGPT, and that was itself stuffed with bureaucratic boilerplate. Rubbish in, rubbish out. We are too used to wrapping our real intentions in convoluted language and smothering new ideas with it. Now that AI offers a new mode of language use and new possibilities, can we change this "corruption of language"? Be concise and plain, and go straight to the core.

Writing this far, I am reminded of another, perhaps the most prominent, advantage human players hold: for a long time to come, most people will still prefer to trust humans over AI. Why? A human is not necessarily more correct than an AI, but a specific person can be held clearly accountable, and a person's firm commitment to values makes them vivid and likable. Most important of all (at the risk of sounding a bit grandiose), we must have the self-awareness of being human: to shoulder all the risk and responsibility, and to step bravely into the new world of coexisting with AI, this creation of ours.


This article was republished by 数字化转型网 (www.szhzxw.cn). Source: 青野有枯荣; author: 青野Tsingyeh; editing/translation: 宁檬树 of 数字化转型网.
