Lisa Fazio
Why humans stink at finding falsehoods
Here's a quick quiz for you:
· In the biblical story, what was Jonah swallowed by?
· How many animals of each kind did Moses take on the Ark?
Did you answer “whale” to the first question and “two” to the second? Most people do… even though they're well aware that it was Noah, not Moses, who built the ark in the biblical story.
Psychologists like me call this phenomenon the Moses Illusion. It's just one example of how people are very bad at picking up on factual errors in the world around them. Even when people know the correct information, they often fail to notice errors and will even go on to use that incorrect information in other situations.
Research from cognitive psychology shows that people are naturally poor fact-checkers: it is very difficult for us to compare things we read or hear with what we already know about a topic. In what's been called an era of “fake news,” this reality has important implications for how people consume journalism, social media, and other public information.
Failing to notice what you know is wrong
The Moses Illusion has been studied repeatedly since the 1980s. It occurs with a variety of questions and the key finding is that—even though people know the correct information—they don't notice the error and proceed to answer the question.
In the original study, 80 percent of participants failed to notice the error in the question, even though they later correctly answered the question “Who was it that took the animals on the Ark?”
The Moses Illusion demonstrates what psychologists call knowledge neglect—people have relevant knowledge, but they fail to use it.
One way my colleagues and I have studied this knowledge neglect is by having people read fictional stories that contain true and false information about the world. For example, one story is about a character's summer job at a planetarium. Some information in the story is correct: “Lucky me, I had to wear some huge old space suit. I don't know if I was supposed to be anyone in particular—maybe I was supposed to be Neil Armstrong, the first man on the moon.” Other information is incorrect: “First I had to go through all the regular astronomical facts, starting with how our solar system works, that Saturn is the largest planet, etc.”
Later, we give participants a trivia test with some new questions (Which precious gem is red?) and some questions that relate to the information from the story (What is the largest planet in the solar system?). We reliably find positive effects of reading the correct information within the story—participants are more likely to answer “Who was the first person to step foot on the moon?” correctly. We also see negative effects of reading the misinformation—participants are both less likely to recall that Jupiter is the largest planet and more likely to answer with Saturn.
These negative effects of reading false information occur even when the incorrect information directly contradicts people's prior knowledge. In one study, my colleagues and I had people take a trivia test two weeks before reading the stories. Thus, we knew what information each person did and did not know. Participants still learned false information from the stories they later read. In fact, they were equally likely to pick up false information from the stories whether or not it contradicted their prior knowledge.
Can you improve at noticing incorrect info?
So people often fail to notice errors in what they read and will use those errors in later situations. But what can we do to prevent this influence of misinformation?
Expertise or greater knowledge seems to help, but it doesn't solve the problem. Even biology graduate students will attempt to answer distorted questions such as “Water contains two atoms of helium and how many atoms of oxygen?”
Many of the interventions my colleagues and I have implemented to try to reduce people's reliance on the misinformation have failed or even backfired. One initial thought was that participants would be more likely to notice the errors if they had more time to process the information. So, we presented the stories in a book-on-tape format and slowed down the presentation rate. But instead of using the extra time to detect and avoid the errors, participants were even more likely to produce the misinformation from the stories on a later trivia test.
Next, we tried highlighting the critical information in a red font. We told readers to pay particular attention to the information presented in red with the hope that paying special attention to the incorrect information would help them notice and avoid the errors. Instead, they paid additional attention to the errors and were thus more likely to repeat them on the later test.
The one thing that does seem to help is to act like a professional fact-checker. When participants are instructed to edit the story and highlight any inaccurate statements, they are less likely to learn misinformation from the story. Similar results occur when participants read the stories sentence by sentence and decide whether each sentence contains an error.
It's important to note that even these “fact-checking” readers miss many of the errors and still learn false information from the stories. For example, in the sentence-by-sentence detection task, participants caught about 30 percent of the errors. But given their prior knowledge, they should have been able to detect at least 70 percent.
Quirks of psychology make us miss mistakes
Why are human beings so bad at noticing errors and misinformation? Psychologists believe that there are at least two forces at work.
First, people have a general bias to believe that things are true. (After all, most things that we read or hear are true.) In fact, there's some evidence that we initially process all statements as true and that it then takes cognitive effort to mentally mark them as false.
Second, people tend to accept information as long as it's close enough to the correct information. Natural speech often includes errors, pauses, and repeats. (“She was wearing a blue—um, I mean, a black, a black dress.”) One idea is that to maintain conversations we need to go with the flow—accept information that is “good enough” and just move on.
And people don't fall for these illusions when the incorrect information is obviously wrong. For example, people don't try to answer the question “How many animals of each kind did Nixon take on the Ark?” and don't believe that Pluto is the largest planet after reading it in a fictional story.
Detecting and correcting false information is difficult work and requires fighting against the way our brains like to process information. Critical thinking alone won't save us. Our psychological quirks put us at risk of falling for misinformation, disinformation, and propaganda. Professional fact-checkers provide an essential service, hunting down incorrect information in public view. As such, they are one of our best hopes for zeroing in on errors and correcting them before the rest of us read or hear the false information and incorporate it into what we know of the world.