TED-Ed
Since the commercial AI boom that began around 2017, and especially since the debut of ChatGPT, there has been a great deal of debate both inside and outside the industry. The following excerpt from a TED talk may help us take a more level-headed view of artificial intelligence.
In the coming years, artificial intelligence (AI) is probably going to change your life, and likely the entire world. But people have a hard time agreeing on exactly how.
There’s a big difference between asking a human to do something and giving that as the 1) objective to an AI system. When you ask a human to get you a cup of coffee, you don’t mean this should be their life’s mission, and nothing else in the universe matters.
And the problem with the way we build AI systems now is that we give them a fixed objective. The algorithms require us to 2) specify everything in the objective. And if you say, “Can we fix the acidification of the oceans?” the AI system may answer, “Yeah, you could have a catalytic reaction that does that extremely efficiently, but it consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours.”
So, how do we avoid this problem? You might say, okay, well, just be more careful about specifying the objective: don’t forget the atmospheric oxygen. And then, of course, some side effect of the reaction in the ocean poisons all the fish. Okay, well, I mean, don’t kill the fish either. And then, well, what about the seaweed? Don’t do anything that’s going to cause all the seaweed to die. And on and on and on.
And the reason that we don’t have to do that with humans is that humans often know that they don’t know all the things that we care about. For example, if you ask a human to get you a cup of coffee, and you happen to be in the Hotel George Sand in Paris, where the coffee is 13 euros a cup, it’s entirely 3) reasonable to come back and say, “Well, it’s 13 euros, are you sure you want it? Or I could go next door and get one?” And it’s a perfectly normal thing for a person to do. Or, as another example, to ask, “I’m going to repaint your house. Is it okay if I take off the drainpipes and then put them back?” We don’t think of this as a terribly sophisticated capability, but AI systems don’t have it, because of the way we build them now: they have to know the full objective. If we build systems that know that they don’t know what the objective is, then they start to exhibit these behaviors, like asking permission before getting rid of all the oxygen in the atmosphere.
In all these senses, control over the AI system comes from the machine’s uncertainty about what the true objective is. And it’s when you build machines that believe with certainty that they have the objective, that’s when you get this sort of psychopathic behavior. And I think we see the same thing in humans.
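The idea that control comes from objective uncertainty can be made concrete with a toy decision model. The sketch below is purely illustrative and not from the talk; all names and numbers are invented. An agent holds beliefs over possible "true objectives" and chooses between acting immediately and asking the human first: when every plausible objective agrees the plan is fine, it acts, but when some plausible objective makes the plan catastrophic, asking becomes the better bet.

```python
# Toy illustration (hypothetical, not from the talk) of why an agent that is
# uncertain about its objective prefers to ask permission before acting.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    prob: float      # agent's belief that this is the human's true objective
    utility: float   # payoff of executing the plan under this objective

ASK_COST = 1.0       # small cost of interrupting the human with a question

def choose(hypotheses):
    """Return "ACT" or "ASK" by comparing expected values of the two options."""
    # Value of acting now: expected utility over all hypotheses.
    act_value = sum(h.prob * h.utility for h in hypotheses)
    # Value of asking first: the human reveals the true objective, and the
    # agent then acts only when the plan's utility is positive.
    ask_value = sum(h.prob * max(h.utility, 0.0) for h in hypotheses) - ASK_COST
    return "ACT" if act_value >= ask_value else "ASK"

# A certain agent: one fixed objective ("fix ocean acidity", worth +10).
print(choose([Hypothesis(1.0, 10.0)]))                             # ACT

# An uncertain agent: same plan, but a 10% chance the human also values
# atmospheric oxygen, under which the plan is catastrophic (-1000).
print(choose([Hypothesis(0.9, 10.0), Hypothesis(0.1, -1000.0)]))   # ASK
```

The deferential behavior is not programmed in directly; it falls out of the comparison once a sufficiently bad interpretation of the objective carries non-trivial probability.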
There’s an interesting story that E.M. Forster wrote, where everyone is entirely machine-dependent. The story is really about the fact that if you hand over the management of your civilization to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it. You can see “WALL-E” actually as a modern version, where everyone is enfeebled and infantilized by the machine, and that hasn’t been possible up to now.
We put a lot of our civilization into books, but the books can’t run it for us. And so we always have to teach the next generation. If you work it out, it’s about a trillion person-years of teaching and learning, and an unbroken chain that goes back tens of thousands of generations. What happens if that chain breaks?
I think that’s something we have to understand as AI moves forward. You’re not going to be able to 4) pinpoint the actual date of arrival of general-purpose AI; it isn’t a single day. It’s also not the case that it’s all or nothing. The impact is going to be increasing. So with every advance in AI, it significantly expands the range of tasks that it can do.
So in that sense, I think most experts say that by the end of the century, we’re very, very likely to have general-purpose AI. The median estimate is somewhere around 2045. I’m a little more on the conservative side; I think the problem is harder than we think.
I like what John McCarthy, one of the founders of AI, said when he was asked this question: somewhere between 5 and 500 years. And we’re going to need, I think, several Einsteins to make it happen.
1) objective n. goal, aim 2) specify v. to state explicitly
3) reasonable adj. sensible 4) pinpoint v. to identify precisely
Phrase Bank
side effect: an unintended secondary consequence
care about: to be concerned with
ask permission: to request approval