Originally written in July 2015, the following commentary on three academic papers, looking at how we can and do validate knowledge, seems relevant once again in the era of fake news, social media disinformation, and political hyper-partisanship.
The most important points are: that we do not need to endorse a source or person wholesale to trust some of what they propose as knowledge; that we may choose to defer our own judgement by endorsing, without rational grounds, some other ‘authority’; and that lived experience can be a powerful motivator for distrusting the authority of others who propose as knowledge viewpoints which contradict our own experiences.
There’s a paper on ‘Epistemic Vigilance’ by an improbably diverse range of scholars affiliated with the Central European University (sources cited at the end), examining the ‘suite of cognitive mechanisms’ involved in exercising vigilance about the veracity of information we receive. This begins with an evaluation of whether the effort of seeking validation of some piece of information is worth the expected value of that information.
Of particular significance to the topics of teaching and assessing critical thinking and analysis (CTA), and its social practice more widely, is the paper’s consideration of the philosophical treatment of epistemology (the study of the nature of knowledge), and specifically of whether ‘testimony’ can be accepted as ‘knowledge’ in itself, or whether it requires independent validation by other sources. We can probably get away with thinking of testimony as a range of verbal and non-verbal communication, like political statements, news reports, feature articles, and, especially, online discourses.
Oxford Magdalen College fellow and philosophy tutor Elizabeth Fricker, using work by Melbourne University don CAJ Coady, makes a dense and possibly unnecessarily complex argument to show that we need to establish a set of conditions to rely on testimony as knowledge rather than just noise. That may seem obvious, but it raises questions for both pedagogy and our own personal evaluation of information:
We normal adult humans share a commonsense conception of the world we live in, and of our own nature and place in it. This shared body of knowledge includes a folk physics, a folk psychology, and an elementary folk linguistics: a conception of language both as representational system and as social institution, including the characteristic roles of speaker and hearer. According to this commonsense world-picture, testimony is one of a number of causal-cum-informational processes through which we receive or retain information about the empirical world, the others being sight and our other modes of perceptual awareness, and memory.— Fricker, 1995, p 397.
In other words, socialization includes learning to adopt criteria about assessing relevance, reliability, and authority, and in an epistemological sense, we will be loath to change our minds based on testimony contradicted by memory and perceptions.
Fricker’s most interesting observation, even though she apparently disapproves of it, is that ‘a source of knowledge need not be infallible to be portrayed as yielding direct knowledge’ (1995, p 400). What this establishes philosophically is that we can trust completely, trust partially, or not trust at all the authority or credibility of the testifier, and we do the same with every discrete piece of information arising from testimony.
The importance of such validation is that it makes nonsense of any zero-sum strategy of trust. We do not need to trust a source wholesale to trust one or more pieces of information emanating from it, and we can and do dismiss information from trusted sources. This implication appears to channel Habermas’s idea of ‘communicative action’ as the effort to understand oneself in isolation as well as oneself-in-the-‘lifeworld’, an ugly translation of ‘Lebenswelt’, probably referring to that part of the world an individual actually lives in, including the influence of family, friends, professional pursuits, and personally experienced locality: the part of the world that ‘is the locus of moral-practical knowledge or relations of meaning shared in families and workplaces’, as distinct from the public sphere of political action and opinion (Love, 1995, p. 50).
Putting this back into the context of epistemic vigilance, ‘the comprehension process itself involves the automatic activation of background information in the context of which the utterance may be interpreted as relevant’ (Sperber, Clément, Heintz, Mascaro, Mercier, Origgi, & Wilson, 2010, p 374), including confirmation biases (p 378) and ideas or beliefs which are shared broadly across a society, but for which no logical or sound reason has been developed (p 380).
When epistemic authorities (religious leaders, gurus, maîtres à penser) achieve such inflated reputations, people who are then inclined to defer more to them than to any source whose reliability they have directly assessed may find themselves in the following predicament: If they were to check the pronouncements of these sources (for instance, ‘Mary was and remained a virgin when she gave birth’ or Lacan’s ‘There is no such thing as a sexual relationship’) for coherence with their existing beliefs, they would reject them. But this would in turn bring into question their acceptance of the authority of the source. A common solution to this predicament is to engage in a variant of Davidsonian ‘charitable interpretation’, and to ‘optimize agreement’ not by providing a clear and acceptable interpretation of these pronouncements, but by deferring to the authorities (or their authorised interpreters) for the proper interpretation, and thus accepting a half-understood or ‘semi-propositional’ idea (Sperber, 1985; 1997; In press). Most religious beliefs are typical examples of beliefs of this kind, whose content is in part mysterious to the believers themselves (Bloch, 1998; Boyer, 2001).— ibid, p 382.
Moreover, practising epistemic vigilance does not guarantee that we will always apply an independent evaluation to any particular piece of information. That may create the common reality that ‘even if people do not trust blindly, they at least have their eyes closed most of the time to the possibility of being misinformed’ (ibid, p 363).
The implications for education are alarming: students may accept on blind faith what they are told, understanding none, some, or all of it, without making the effort to seek independent validation. Alternatively, they may reject all or some of what they hear in the classroom because of jarring contradictions with their own sensory and social experiences. It is reasonable to expect the same dynamics to apply in society more generally, in which we can expect to see ‘an evolved conformist bias in favour of adopting the behaviour and attitudes of the majority of members of one’s community’ and that serious effort at ‘vigilance is directed primarily at information originating in face to face interaction, and not at information propagated on a larger scale’ (ibid, p 381).
Sperber et al. suggest several times that people practice miserliness in the amount of effort they will dedicate to validating information, and might do so only as far as is necessary to re-establish an equilibrium when their pre-existing assumptions are unexpectedly disturbed by new information (pp 360, 363, 374, 375, 381). This does not bode well for the teaching and assessment of CTA, nor for its practice in the wider society outside the classroom and lecture theatre.
While much of the paper was refreshingly different from bureaucratic approaches to CTA, there was a grating discontinuity in the apparently simple-minded, engineering-oriented paradigm imposed on thinking ‘processes’, described like a flowchart (p 364) using ‘processing time’ (p 374), perhaps referencing computing analogies. These don’t strike me as useful at all, because they pre-emptively deny the possibility of human thought transcending automated processes.
Human thought is not reducible to a programmable response like an automated process or algorithm because it is self-determined enough to introduce tangential thinking, and to discover in the act of considering a problem a creative new idea, solution, or train of thought. A famous example, even if it cannot be empirically verified, is Archimedes’ ‘eureka’ (‘I have found it’) moment, when he noticed that his body displaced a volume of water equivalent to its own on lowering himself into his bath. The inference is that Archimedes did not take his bath in order to pursue this idea, but that by a process of serendipity and creativity he connected hitherto unconnected ideas to create a new one.
This quality of thought is still exclusive to humans. What remains unclear is whether all humans are capable of such creativity, whether it can be taught, and whether it is a pre-requisite for, or a product of, CTA.
Not all of these sources are freely available online.
Fricker, E. (1995). Telling and Trusting: Reductionism and Anti-Reductionism in the Epistemology of Testimony. Mind, 104(414), 393-411. Retrieved from http://www.jstor.org/stable/2254797
Love, N. S. (1995). What’s left of Marx? In S.K. White (ed.) The Cambridge Companion to Habermas, pp. 46-66. New York: Cambridge University Press.
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic Vigilance. Mind & Language, 25(4), 359-393. Retrieved on 20 July 2015 from http://www.dan.sperber.fr/wp-content/uploads/Epistemic-Vigilance-published.pdf