Epistemic Vigilance

Critical thinking and analysis

Looking for evidence of metathinking about critical thinking and analysis (CTA) is heartbreaking. Once you move beyond the stream of bureaucratic fetish literature obsessed with definitions and re-definitions that have failed to advance the cause of CTA in schools and universities, there’s a real dearth of serious consideration of how it works, whether it works at all, and whether it can be taught or assessed.

That search is integral to a longer narrative essay I’m working on about the failure of CTA in the academy and in the public sphere. So coming across material that seemed to be a sidetrack, but turned out to be the mainline, was a real eye-opener.

There’s a paper on ‘Epistemic Vigilance’ by an improbably diverse range of scholars affiliated with the Central European University, examining the ‘suite of cognitive mechanisms’ involved in exercising vigilance about the veracity of the information we receive. That vigilance begins with an evaluation of whether the effort of seeking validation of some piece of information is worth the expected value to the subject.

Of particular significance to the teaching and assessment of CTA, and to its wider social practice, is a small departure in the paper into the purely philosophical consideration of epistemology, and specifically the question of whether ‘testimony’ can be accepted as ‘knowledge’ in itself, or whether it requires independent validation by other sources. In this regard we can probably get away with thinking of testimony as including a range of verbal and non-verbal communication, like news reports, articles, and, especially, online discourses.

Oxford Magdalen College fellow and philosophy tutor Elizabeth Fricker, using work by Melbourne University don C. A. J. Coady, makes a dense and possibly unnecessarily complex argument to show that we need to establish a set of conditions to rely on testimony as knowledge rather than just noise. That may seem obvious, but it raises questions for both teaching and our own evaluation of information:

We normal adult humans share a commonsense conception of the world we live in, and of our own nature and place in it. This shared body of knowledge includes a folk physics, a folk psychology, and an elementary folk linguistics: a conception of language both as representational system and as social institution, including the characteristic roles of speaker and hearer. According to this commonsense world-picture, testimony is one of a number of causal-cum-informational processes through which we receive or retain information about the empirical world, the others being sight and our other modes of perceptual awareness, and memory. (Fricker, 1995, p. 397.)

In other words, socialisation includes learning to adopt criteria for assessing relevance, reliability, and authority, and, in an epistemological sense, we will be loath to change our minds based on testimony contradicted by memory and perceptions.

Fricker’s most interesting observation, even though she apparently disapproves of it, is that ‘a source of knowledge need not be infallible to be portrayed as yielding direct knowledge’ (1995, p. 400). What this establishes philosophically is that we can trust completely, trust partially, or not trust at all the authority or credibility of the testifier, and that we do the same with every discrete piece of information testified to, in every combination of trust. The importance of such validation is that it makes a nonsense of all zero-sum strategies: we do not need to trust a source to trust one or more pieces of information emanating from it, and we can and do dismiss information from trusted sources.

This implication appears to channel Habermas’s idea of ‘communicative action’ as the effort to understand oneself and oneself-in-the-‘lifeworld’. ‘Lifeworld’ is an ugly way of translating ‘Lebenswelt’, probably referring to that part of the world an individual actually lives in, including what is proximate to family, friends, professional pursuits, and personally experienced locality: the part of the world that ‘is the locus of moral-practical knowledge or relations of meaning shared in families and workplaces’, as distinct from the public sphere of political action and opinion (Love, 1995, p. 50).

Putting this back into the context of epistemic vigilance, ‘the comprehension process itself involves the automatic activation of background information in the context of which the utterance may be interpreted as relevant’ (Sperber, Clément, Heintz, Mascaro, Mercier, Origgi, & Wilson, 2010, p. 374), including confirmation biases (p. 378) and ideas or beliefs which are shared broadly across a society but for which no good reason has been developed (p. 380).

When epistemic authorities – religious leaders, gurus, maîtres à penser – achieve such inflated reputations, people who are then inclined to defer more to them than to any source whose reliability they have directly assessed may find themselves in the following predicament: If they were to check the pronouncements of these sources (for instance, ‘Mary was and remained a virgin when she gave birth’ or Lacan’s ‘There is no such thing as a sexual relationship’) for coherence with their existing beliefs, they would reject them. But this would in turn bring into question their acceptance of the authority of the source. A common solution to this predicament is to engage in a variant of Davidsonian ‘charitable interpretation’, and to ‘optimize agreement’ not by providing a clear and acceptable interpretation of these pronouncements, but by deferring to the authorities (or their authorised interpreters) for the proper interpretation, and thus accepting a half-understood or ‘semi-propositional’ idea (Sperber, 1985; 1997; in press). Most religious beliefs are typical examples of beliefs of this kind, whose content is in part mysterious to the believers themselves (Bloch, 1998; Boyer, 2001). (p. 382.)

Moreover, even if epistemic vigilance exists as a capacity, it does not guarantee that any particular person will apply it to any particular piece of information. That may explain the common reality that ‘even if people do not trust blindly, they at least have their eyes closed most of the time to the possibility of being misinformed’ (p. 363).

The implications for education are alarming: students may accept on blind faith what they are told, without understanding any, some, or all of it, and without the effort of seeking independent validation. Alternatively, they may reject all or some of what they hear in the classroom because of jarring contradictions with their own sensory and social experiences in their Lebenswelten (lifeworlds). It is reasonable to expect the same dynamics to apply in society more generally, in which we can expect to see ‘an evolved conformist bias in favour of adopting the behaviour and attitudes of the majority of members of one’s community’ and that serious effort at ‘vigilance is directed primarily at information originating in face-to-face interaction, and not at information propagated on a larger scale’ (p. 381).

Sperber et al. suggest several times that people will practise miserliness in the effort they dedicate to validating information, and might do so only as far as is necessary to re-establish an equilibrium if their pre-existing assumptions are unexpectedly disturbed by new information (pp. 360, 363, 374, 375, 381). This does not bode well for the teaching and assessment of CTA, nor for its practice in the wider society outside the classroom and lecture theatre.

While much of the paper was refreshingly different from bureaucratic approaches to CTA, there was a grating discontinuity: the apparently simple-minded, engineering-oriented paradigm imposed on thinking ‘processes’, described as if in a flowchart (p. 364) and measured in ‘processing time’ (p. 374), perhaps referencing computing analogies. These don’t strike me as useful at all, because they pre-emptively deny the notion of human thought as transcending replicable processes.

Human thought is not reducible to a programmable response like an automated process or algorithm, because it is self-determined enough to introduce tangential thinking, and to discover in the act of considering a problem a creative new idea, solution, or train of thought. A famous example, even if it cannot be empirically verified, is Archimedes’ ‘eureka’ (‘I have found it’) moment, when he noticed that his body displaced an equivalent volume of water on lowering himself into his bath. The inference is that Archimedes did not take his bath in order to pursue this idea, but that by a process of serendipity and creativity he connected hitherto unconnected ideas to create a new one.

This quality of thought is still exclusive to humans rather than being replicable by automated process or algorithm. What remains unclear is whether all humans are capable of such creativity, whether it can be taught, and whether it is a prerequisite for or a product of CTA.


Fricker, E. (1995). Telling and Trusting: Reductionism and Anti-Reductionism in the Epistemology of Testimony. Mind, 104(414), 393-411. Retrieved from http://www.jstor.org/stable/2254797

Love, N. S. (1995). What’s left of Marx? In S. K. White (Ed.), The Cambridge Companion to Habermas (pp. 46-66). New York: Cambridge University Press.

Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic Vigilance. Mind & Language, 25(4), 359-393. Retrieved on 20 July 2015 from http://www.dan.sperber.fr/wp-content/uploads/Epistemic-Vigilance-published.pdf