The past year, described as the biggest election year in recorded history, is considered to be one of the biggest years for the spreading of misinformation and disinformation. Both refer to misleading content, but disinformation is deliberately generated. Political parties have long competed for voter approval and subjected their differing policies to public scrutiny. But the difference now is that online search and social media enable claims and counterclaims to be made almost endlessly.
過去的一年被稱為有史以來最重大的選舉年,也被認為是錯誤信息和虛假信息傳播最為嚴重的年份之一。這兩種信息均為誤導性信息,不過虛假信息是人為有意制造出來的。長期以來,各黨派競相爭取選民的支持,各自的政治主張受到公眾的審視。現(xiàn)如今的不同之處在于,網(wǎng)絡搜索與社交媒體使正反主張幾乎可以無休止地提出。
A recent study in Nature highlights a previously underappreciated aspect of this phenomenon: the existence of data voids, information spaces that lack evidence, into which people searching to check the accuracy of controversial topics can easily fall. It might no longer be enough for search providers to combat misinformation and disinformation simply by using automated systems to deprioritize unreliable sources.
《自然》雜志最近刊登的一篇論文著重指出了此前該現(xiàn)象未被充分重視的一個方面:網(wǎng)絡上存在數(shù)據(jù)真空,也就是缺乏證據(jù)的信息空間,上網(wǎng)搜索以核實爭議話題準確性的人很容易掉進去。而對于網(wǎng)絡搜索服務商來說,依靠系統(tǒng)自動降低信息優(yōu)先級的辦法來打擊錯誤信息和虛假信息已不夠。
The mechanics of how misinformation and disinformation spread have long been an active area of research. According to the ‘illusory truth effect’, the more people are exposed to a claim, the more likely they are to perceive it as true, regardless of its veracity. This phenomenon pre-dates the digital age and now manifests itself through search engines and social media.
長期以來,錯誤和虛假信息的傳播機制一直是熱門研究領域。根據(jù)“真相錯覺效應”,接觸某個信息的次數(shù)越多,人們越容易相信其真實性,而不論它到底是不是真的。這種現(xiàn)象遠在數(shù)字時代之前就存在,如今通過搜索引擎和社交媒體呈現(xiàn)出來。
In their recent study, Kevin Aslett, a political scientist at the University of Central Florida in Orlando, and his colleagues found that people who used Google Search to evaluate the accuracy of news stories—stories that the authors but not the participants knew to be inaccurate—ended up trusting those stories more. This is because their attempts to search for such news made them more likely to be shown sources that corroborated an inaccurate story.
奧蘭多中佛羅里達大學的政治學家凱文·阿斯利特及其同事在最近的研究中發(fā)現(xiàn),人們利用谷歌搜索來評估一些失實新聞報道的準確性時,最終反而愈發(fā)相信這些新聞。這些報道的失實之處只有研究作者知道,參與者并不知情。這是由于用戶的搜索行為讓他們接觸到更多佐證失實報道的信息源。
Google’s algorithms rank news items by taking into account various measures of quality, such as how much a piece of content aligns with the consensus of expert sources on a topic. In this way, the search engine deprioritizes unsubstantiated news, as well as sources that carry it, in its results. Furthermore, its search results carry content warnings. For example, ‘breaking news’ indicates that a story is likely to change and that readers should come back later, when more sources are available. There is also an ‘about this result’ tab, which explains more about a news source, although users have to click on a different icon to access it.
谷歌算法通過綜合評估多項指標對新聞條目進行排序,比如新聞內(nèi)容與權(quán)威信源的一致性。通過這種方式,谷歌可以在搜索結(jié)果中降低無事實依據(jù)的新聞及此類新聞來源的優(yōu)先性。此外,搜索結(jié)果還帶有內(nèi)容警告。比如,“突發(fā)新聞”意味著新聞內(nèi)容很可能會更改,讀者應稍后再來查看,屆時會有更多的信息來源。還有一個“關(guān)于此結(jié)果”的標簽, 內(nèi)含對相關(guān)信息的更多解釋,不過用戶必須點擊另一個圖標才能訪問它。
Clearly, copying terms from inaccurate news stories into a search engine reinforces misinformation, making it a poor method for verifying accuracy. So, what more could be done to route people to better sources? Google does not manually remove content or de-rank a search result; nor does it moderate or edit content in the way that social-media sites and publishers do. Google is sticking to the view that, when it comes to ensuring quality results, the future lies in automated methods that rank results on the basis of quality measures. But there can be additional approaches to preventing people from falling into data voids of misinformation and disinformation, as Google itself acknowledges and as Aslett and colleagues show.
顯然,將失實新聞中的詞句復制到搜索引擎會強化該虛假信息,導致這種驗證準確性的方法效果不佳。既如此,還能做些什么來引導公眾獲取更優(yōu)質(zhì)的信息來源呢?谷歌不會手動刪除內(nèi)容或降低信息的優(yōu)先級,也不會像社交媒體網(wǎng)站和出版商那樣審核或編輯內(nèi)容。谷歌堅持認為,就保障搜索結(jié)果的質(zhì)量而言,將來要依據(jù)優(yōu)質(zhì)標準自動對結(jié)果排序。但正如谷歌自己所承認的,以及阿斯利特和同事們所指出的那樣,可以采取額外的措施來防止人們陷入充斥著虛假和錯誤信息的數(shù)據(jù)真空。
Some type of human input, for example, might enhance internal fact-checking systems, especially on topics on which there might be a void of reliable information. How this can be done sensitively is an important research topic, not least because the end result should not be about censorship, but about protecting people from harm.
比如,某種形式的人工介入可能會增強內(nèi)部事實核查系統(tǒng),尤其適用于可能缺乏可靠信息的主題。如何審慎地做到這一點是一個重要的研究課題,畢竟最終結(jié)果并非為了審查,而是為了保護人們免受傷害。
There’s also a body of literature on improving media literacy¹, including suggestions for more, or better, education on discriminating between different sources in search results. Mike Caulfield, who studies media literacy and online verification skills at the University of Washington in Seattle, says that there is value in exposing a wider population to some of the skills taught in research methods. He recommends starting with influential people, giving them opportunities to improve their own media literacy, as a way to then influence others in their networks.
還有很多文章談到提高媒體素養(yǎng),包括建議提供更多更好的培訓以提高人們識別網(wǎng)絡信息真偽的能力。在西雅圖華盛頓大學進行媒體素養(yǎng)和網(wǎng)絡識別技能研究的邁克·考菲爾德指出,將研究方法中的一些技能傳授給更廣泛的人群是有價值的。他建議從有影響力的人開始,給他們提供機會提高自己的媒體素養(yǎng),進而影響其社交圈中的其他人。
One point raised by Paul Crawshaw, a social scientist at Teesside University in Middlesbrough, UK, is that research-methods teaching on its own does not always have the desired impact. Students benefit more when they are learning about research methods while carrying out research projects. He also suggests that lessons could be learnt by studying the conduct and impact of health-literacy² campaigns. In some cases, these can be less effective for people on lower incomes, compared with those on higher incomes. Understanding that different population groups have different needs will also need to be factored into media-literacy campaigns, he argues.
英國米德爾斯堡提賽德大學的社會科學家保羅·克勞肖卻認為,傳授研究方法本身并不總是能達到預期的效果。學生一邊學習研究方法一邊實施研究項目時獲益更多。他指出,對健康素養(yǎng)運動過程和效果的研究也能給予經(jīng)驗教訓。某些情況下,相對于高收入人群,此類活動對低收入人群的影響更小。他認為,開展媒體素養(yǎng)運動時還要考慮到不同人群有不同的需求。
Clearly, there’s work to do. The need is urgent, because it’s possible that generative artificial intelligence and large language models will propel misinformation to much greater heights. The often-mentioned phrase ‘search it online’ could end up increasing the prominence of inaccurate news instead of reducing it.
顯然,相關(guān)工作刻不容緩,因為生成式人工智能和大型語言模型可能會讓虛假信息大幅增多。人們時常掛在嘴邊的“上網(wǎng)查查”一詞最終非但無法減少不實新聞,反而會讓其更加突出。
(譯者為“《英語世界》杯”翻譯大賽獲獎者)
¹ 指媒介使用者面對不同媒體中各種信息時,所表現(xiàn)出的信息的選擇能力、質(zhì)疑能力、理解能力、評估能力、創(chuàng)造和生產(chǎn)能力以及思辨的反應能力。
² 健康素養(yǎng),指個人獲取和理解基本健康信息和服務,并運用這些信息和服務做出正確決策,以維護和促進自身健康的能力。