      Artificial Intelligence—Can We Keep It in the Box?

      2018-01-08 05:48:56
      英語世界 2017年9期 (English World, 2017, No. 9)
      Keywords: Vinge, intelligence, machines

      Translated by 杜焱 (Du Yan)

      We know how to deal with suspicious packages—as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?

      Exploding intelligence?

      [2] Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly.”

      [3] He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.

      [4] Was Vinge right, and if so what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)?

      AI as a low achiever

      [5] Optimists sometimes take comfort from the fact that the field of AI has a very chequered past. Periods of exuberance and hype have been mixed with so-called “AI winters”—times of reduced funding and interest, after promised capabilities fail to materialise.

      [6] Some people point to this as evidence machines are never likely to reach human levels of intelligence, let alone to exceed them. Others point out that the same could have been said about heavier-than-air flight.

      [7] The history of that technology, too, is littered with naysayers (some of whom refused to believe reports of the Wright brothers’ success, apparently). For human-level intelligence, as for heavier-than-air flight, naysayers need to confront the fact that nature has managed the trick: think brains and birds, respectively.

      [8] A good naysaying argument needs a reason for thinking that human technology can never reach the bar in terms of AI.

      [9] Pessimism is much easier. For one thing, we know nature managed to put human-level intelligence in skull-sized boxes, and that some of those skull-sized boxes are making progress in figuring out how nature does it. This makes it hard to maintain that the bar is permanently out of reach of artificial intelligence—on the contrary, we seem to be improving our understanding of what it would take to get there.

      Moore’s law and narrow AI

      [10] On the technological side of the fence, we seem to be making progress towards the bar, both in hardware and in software terms. In the hardware arena, Moore’s law, which predicts that the amount of computing power we can fit on a chip doubles every two years, shows little sign of slowing down.
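
      To put that doubling claim in concrete terms, here is a minimal sketch of the arithmetic; the starting figure and time horizon below are illustrative assumptions chosen for the example, not numbers taken from the article.

      # Illustrative Moore's-law-style projection; all inputs are assumed, example-only values.
      def projected_capacity(start: float, years: float, doubling_period: float = 2.0) -> float:
          """Capacity after `years`, assuming it doubles every `doubling_period` years."""
          return start * 2 ** (years / doubling_period)

      # A hypothetical chip with 1e10 transistors today would, on this assumption,
      # reach roughly 3.2e11 in ten years (five doublings).
      print(projected_capacity(1e10, years=10.0))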

      [11] In the software arena, people debate the possibility of “strong AI” (artificial intelligence that matches or exceeds human intelligence) but the caravan of “narrow AI” (AI that’s limited to particular tasks) moves steadily forward. One by one, computers take over domains that were previously considered off-limits to anything but human intellect and intuition.

      [12] We now have machines that have trumped human performance in such domains as chess, trivia games, flying, driving, financial trading, face, speech and handwriting recognition—the list goes on.

      [13] Along with the continuing progress in hardware, these developments in narrow AI make it harder to defend the view that computers will never reach the level of the human brain. A steeply rising curve and a horizontal line seem destined to intersect!
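
      The “rising curve meets horizontal line” point can be stated as a one-line calculation. Treating machine capability as an exponential C(t) = C0 · 2^(t/T), with T the assumed doubling period, and the human level as a roughly constant H, the two meet when C(t) = H, that is at

      \[ t^{*} = T \log_{2}\frac{H}{C_{0}}, \]

      which is finite for any positive starting point C0, however far below H it lies. (C0, H and T are placeholder symbols for illustration, not quantities given in the article.)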

      [12]現(xiàn)在,機(jī)器的性能在很多領(lǐng)域都遠(yuǎn)遠(yuǎn)超過了人類的表現(xiàn),比如國(guó)際象棋、益智游戲、飛行、汽車駕駛、金融貿(mào)易、人臉識(shí)別、語(yǔ)音識(shí)別、字跡識(shí)別等,這樣的例子不勝枚舉。

      [13]在硬件不斷發(fā)展的同時(shí),弱人工智能的這些發(fā)展讓人們更難相信“計(jì)算機(jī)永遠(yuǎn)不會(huì)達(dá)到人腦的智力水平”這一說(shuō)法。畢竟,人工智能一直在快速向前發(fā)展,如同一條急劇上升的曲線;而人類智能處于穩(wěn)定狀態(tài),是一條水平線,這兩條線似乎注定要相交。

      智能幫手,有什么不好?

      [14]如果計(jì)算機(jī)變得跟人一樣聰明,這難道不是一件好事嗎?看看弱人工智能目前取得的一系列成功,這也許正好說(shuō)明有些人態(tài)度消極是毫無(wú)根據(jù)的。歸根到底,這些應(yīng)用程序不是非常實(shí)用嗎?也許國(guó)際象棋大師的自尊心會(huì)遭到一點(diǎn)打擊,金融市場(chǎng)會(huì)出現(xiàn)些許動(dòng)蕩,但在上述領(lǐng)域中,我們并沒有看到任何災(zāi)難性事件即將發(fā)生的跡象。

      [15]事實(shí)確實(shí)如此,悲觀主義者如是說(shuō)。可是就人類未來(lái)發(fā)展而言,弱人工智能所滲透的各個(gè)領(lǐng)域?qū)θ祟惿钏斐傻挠绊懣纱罂尚 S行╊I(lǐng)域人工智能的影響大于其他領(lǐng)域。(比如,在未來(lái)十年左右,如果機(jī)器人代替人類駕駛汽車,那我們的經(jīng)濟(jì)將會(huì)發(fā)生翻天覆地的變化。)

      What’s so bad about intelligent helpers?

      [14] Would it be a bad thing if computers were as smart as humans? The list of current successes in narrow AI might suggest pessimism is unwarranted. Aren’t these applications mostly useful, after all? A little damage to Grandmasters’ egos, perhaps, and a few glitches on financial markets, but it’s hard to see any sign of impending catastrophe on the list above.

      [15] That’s true, say the pessimists, but as far as our future is concerned, the narrow domains we yield to computers are not all created equal. Some areas are likely to have a much bigger impact than others. (Having robots drive our cars may completely rewire our economies in the next decade or so, for example.)

      [16] The greatest concerns stem from the possibility that computers might take over domains that are critical to controlling the speed and direction of technological progress itself.

      Not just like us, but smarter!

      [17] It would be comforting to think that any intelligence that surpassed our own capabilities would be like us, in important respects—just a lot cleverer. But here, too, the pessimists see bad news: they point out that almost all the things we humans value (love, happiness, even survival) are important to us because we have a particular evolutionary history—a history we share with higher animals, but not with computer programs, such as artificial intelligences.

      [18] By default, then, we seem to have no reason to think that intelligent machines would share our values. The good news is that we probably have no reason to think they would be hostile, as such: hostility, too, is an animal emotion.

      [19] The bad news is that they might simply be indifferent to us—they might care about us as much as we care about the bugs on the windscreen.

      [20] People sometimes complain that corporations are psychopaths, if they are not sufficiently reined in by human control. The pessimistic prospect here is that artificial intelligence might be similar, except much much cleverer and much much faster.

      Getting in the way

      [21] By now you see where this is going, according to this pessimistic view. The concern is that by creating computers that are as intelligent as humans (at least in domains that matter to technological progress), we risk yielding control over the planet to intelligences that are simply indifferent to us, and to things that we consider valuable—things such as life and a sustainable environment.

      [22] If that sounds far-fetched, the pessimists say, just ask gorillas how it feels to compete for resources with the most intelligent species—the reason they are going extinct is not (on the whole) because humans are actively hostile towards them, but because we control the environment in ways that are detrimental to their continuing survival.

      “The pessimists might be wrong!”

      [23] Of course—making predictions is difficult, as they say, especially about the future! But in ordinary life we take uncertainties very seriously, when a lot is at stake.

      [24] That’s why we use expensive robots to investigate suspicious packages, after all (even when we know that only a very tiny proportion of them will turn out to be bombs).

      [25] If the future of AI is “explosive” in the way described here, it could be the last bomb the human species ever encounters. A suspicious attitude would seem more than sensible, then, even if we had good reason to think the risks are very small.

      [26] At the moment, even that degree of reassurance seems out of our reach—we don’t know enough about the issues to estimate the risks with any high degree of confidence. (Feeling optimistic is not the same as having good reason to be optimistic, after all.)

      What to do?

      [27] A good first step, we think, would be to stop treating intelligent machines as the stuff of science fiction, and start thinking of them as a part of the reality that we or our descendants may actually confront, sooner or later.

      [28] Once we put such a future on the agenda we can begin some serious research about ways to ensure outsourcing intelligence to machines would be safe and beneficial, from our point of view.

      [29] Perhaps the best cause for optimism is that, unlike ordinary ticking parcels, the future of AI is still being assembled, piece by piece, by hundreds of developers and scientists throughout the world.

      [30] The future isn’t yet fixed, and there may well be things we can do now to make it safer. But this is only a reason for optimism if we take the trouble to make it one, by investigating the issues and thinking hard about the safest strategies.

      [31] We owe it to our grandchildren—not to mention our ancestors, who worked so hard for so long to get us this far!—to make that effort. ■
