
When Reliabilities Conflict

A look into the Reliabilist Account of belief justification
through the limits of a proposed principle.



by David Foss






It has been suggested, in accord with a reliabilist account of justification, that a formal expression of the conditions under which a belief is counted as justified would be as follows:

  (1) For any non-inferential belief p,
    p is justified for S iff:
    (a) p is a result of a reliable process, P1; and,
    (b) there is no procedure, P2, which conflicts with P1 more than 1% of the time and which, had it been employed, would have led S to give up p.

Before dealing explicitly with the adequacy of this attempt, some general investigation of the reliabilist account must be made. Exposing difficulties in the current proposal through counter-examples will certainly help to illuminate the reliabilist approach; but a short discursive treatment of some of the more central concepts is needed first, to introduce the project, the criticisms (against the project generally, as well as against this particular formulation), and the possible responses to them. This will help to clarify the terms involved, as well as the perceived inadequacy of prior attempts at formal explication (to which the present proposal is a modification, and a proposed improvement).

The Reliabilist project, as formulated by Alvin I. Goldman, seeks to make sense of our intuitive notions concerning the justificatory status of beliefs by directing attention to the cognitive processes by which beliefs are formed (and maintained). Justification is, on this model, fundamentally related to the degree of reliability of appropriately defined classes of processes. Presumably it is this model which correctly accounts for our intuitive understanding of such a thing as justification.

The justificational status of a belief is a function of the reliability of the process or processes that cause it, where (as a first approximation) reliability consists in the tendency of a process to produce beliefs that are true rather than false.[1]

Obviously, a notion of reliability is central here, as is a notion of process to which it applies.

Reliability, roughly speaking, is taken as the propensity of a belief-producing process to generate true beliefs. The process, in turn, is any cognitive mechanism which produces beliefs as output. This notion of process is elaborated somewhat by Goldman:

Let us mean by a ‘process’ a functional operation or procedure, i.e., something that generates a mapping from certain states — ‘inputs’ — into other states — ‘outputs’. The outputs in the present case are states of believing this or that proposition at a given moment.[2]

The designation of relevant processes is further refined by a restriction to those mechanisms which might be strictly classified as ‘cognitive’ events. Reliability, as a standard of justification, is understood as the performance of these mechanisms with respect to the generation of true belief.

A justified belief is, roughly speaking, one that results from cognitive operations that are, generally speaking, good or successful. But ‘cognitive’ operations are most plausibly construed as operations of the cognitive faculties, i.e., ‘information-processing’ equipment internal to the organism.[3]

This is not meant as a new designation as to what is to count as justified. Rather, it is proposed that this general approach will more adequately express our intuitive notion of what it means for a belief to be justified.
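
Before turning back to the proposed principle, it may help to fix the central quantity. As a crude first pass (a formulation offered here only to fix ideas, not one found in Goldman), the reliability of a process P might be written:

    \mathrm{rel}(P) = \frac{\#\{\text{true beliefs produced by } P\}}{\#\{\text{beliefs produced by } P\}}

leaving open, for now, over which class of actual or possible uses the counting is done. That openness will matter later.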

The above effort at stating the ‘non-epistemic’ conditions under which a belief would be counted as justified is a rough response to some difficulties with the theory as it has been stated elsewhere. Specifically, Goldman suggests[4]:

    S’s belief in p at t is justified if:
    (a) S’s belief in p at t results from a reliable cognitive process; and,
    (b) there is no reliable or conditionally reliable process available to S which, had it been used by S in addition to the process actually used, would have resulted in S’s not believing p at t.

A problem occurs because there seem to be many cases where an alternative cognitive process exists (ideally) concerning the belief at hand, but S’s ignorance of its divergence in such a case would not threaten the justificatory status of S’s believing p. This is most obvious in cases where the belief in question, although produced by a fairly reliable process, is false. It is important, for the sake of realism, that reliable processes sometimes result in false beliefs (so long as their propensity is in favor of truth). When a belief is false, it is generally expected that there exists some process by which the truth of the matter (or at least its indeterminacy) would become obvious.

It seems that part (b) of this attempt is therefore too strict, assuming we take reliability to be a standard external to the believer (where a reliable process is reliable in fact, and not one which is simply believed by S to be reliable), and ‘available’ is understood to include all those processes for which the believer possesses the requisite cognitive hardware (and not simply those processes the believer might be expected to be introspectively aware of). In an effort to restrict, or at least clarify, the domain of relevant processes, proposal (1) has been made.

Admittedly, principle (1), in its present formulation, lacks some of the detail needed for a full appreciation of its implications. The restriction of application to non-inferential belief is intended to indicate the principle’s role as a ‘base clause’ in the justificatory (or causal) hierarchy. The corresponding ‘step clause’ need be no more complex than an inheritance rule expressed by the wider inclusion of conditionally reliable processes (a schematic rendering is sketched below). More serious ambiguities remain in the proposed principle: most notably, a notion of ‘conflict’ which may obtain between two or more (reliable) processes must be spelled out.
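
The base/step structure just mentioned can be rendered schematically. The Python sketch below is one possible reading only; the class names are invented for illustration, and clause (b) of principle (1) is reduced to a stub:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Process:
        reliable: bool = False                 # unconditionally reliable
        conditionally_reliable: bool = False   # reliable given true inputs
        input_beliefs: List["Belief"] = field(default_factory=list)

    @dataclass
    class Belief:
        source: Process                        # the process that caused it

    def undefeated(belief: Belief) -> bool:
        # Stub for clause (b): no conflicting procedure P2 would have
        # led S to give up the belief.
        return True

    def justified(belief: Belief) -> bool:
        p = belief.source
        if p.reliable:                         # base clause: principle (1)
            return undefeated(belief)
        if p.conditionally_reliable:           # step clause: inherit from inputs
            return all(justified(b) for b in p.input_beliefs) and undefeated(belief)
        return False

None of this settles the harder question, which is what ‘conflict’ between two reliable processes amounts to.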

Presumably, the relationship between P1 and P2 is one in which both processes are reliable, i.e., they result in successful (true) beliefs most of the time. In general, it is not clear that a process must result in true beliefs 99% of the time (or even 90% of the time) in order to be counted as generally reliable. If this is accurate (and many processes are taken as reliable even when this threshold is not met), then it seems that both P1 and P2 may cause false beliefs more than 1% of the time. The issue of conflict does not speak to this type of limit on relative reliabilities (the rate of truth or falsity strictly speaking); rather, it seeks to identify the divergence of two generally compatible processes. Regardless of the actual degree of true-belief production, a valid P2 will be any (generally reliable) process whose function (as the set of either possible or actual mappings) operates over the domain of P1, but whose range (the set of caused beliefs) is, for more than 1% of the mappings, incompatible with the range of P1. Notice that it is possible on this reading that there exist two reliable processes, covering the same domain, which perform utterly distinct mappings without a conflict occurring. But a question readily arises as to whether the percentage referred to is calculated with respect to the total number of mappings of P1, of P2, of the sum or union of both, or of something else entirely. This problem will become more obvious through an apparent counter-example.
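
To make the denominator problem concrete, here is a minimal sketch (my own illustration, not anything drawn from Goldman; the mapping counts are invented) treating each process as a finite set of input-to-belief mappings and computing the conflict rate three ways:

    # Toy model: a process is a mapping from inputs to beliefs.
    # All counts are hypothetical, chosen only to expose the ambiguity.
    p1 = {f"case_{i}": "belief_a" for i in range(300)}   # P1 covers 300 cases
    p2 = {f"case_{i}": "belief_a" for i in range(100)}   # P2 covers 100 of them
    for i in range(2):                   # P2 diverges from P1 on 2 of its cases
        p2[f"case_{i}"] = "belief_b"

    shared = set(p1) & set(p2)
    conflicts = sum(1 for k in shared if p1[k] != p2[k])

    print(conflicts / len(p1))                 # ~0.7%: under the 1% threshold
    print(conflicts / len(p2))                 # 2.0%: over the threshold
    print(conflicts / len(set(p1) | set(p2)))  # ~0.7% relative to the union

Whether P2 crosses the 1% threshold of principle (1) is not settled by the facts about the two processes; it depends entirely on which denominator we choose.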

Consider Sam, who is mildly dyslexic. He has a very good memory for faces, but not names. At a social gathering, where everyone is wearing name-tags, Sam sees Lee from across the room and remembers seeing her face the day before, when (he believes) they had been briefly introduced. In addition to remembering Lee’s face, Sam is sure he remembers her name is “Elle”. From where he is standing, Sam can easily make out Lee’s name-tag. He quickly glances at it, and reads “L-E-E”. Sam concludes that his memory of her name is wrong, and believes that the woman he met the day before, who looked just like Lee (and may have even been her), was named “Lee”.

At this point, it might seem that Sam would be at least somewhat justified had he believed Lee’s name was Elle, even though he read “Lee”. He might even be intuitively justified in believing that this was not the woman he met the day before, because he remembers meeting someone named “Elle” with that face. On at least one reading of our proposed principle, he is clearly not justified in any of these beliefs.

Of course, it might be argued that Sam’s belief here is not intuitively justified. This might be claimed based upon the relative unreliability of the two central processes involved (namely, the reading performance of someone slightly dyslexic, and poor memory for names). However, in this case, it seems quite reasonable for Sam to believe that the name of the woman he takes Lee to be (namely the woman Sam had met the day before) is not “Elle”, after his quick reading of her name-tag seemed to disconfirm his apparently clear memory.

By recognizing Sam as mildly dyslexic, the reliability of his reading process (a conjunction of the sensory process of sight, with the reasoning process of word or letter identification) is admittedly diminished, but it is not undermined completely. It simply suggests that Sam sometimes “sees” the ordering or orientation of letters (or even whole words) wrongly, but usually gets it right. Mild dyslexia just indicates that Sam’s reading process makes mistakes slightly more frequently than the norm (however this is calculated).

Sam’s memory for names is clearly even less reliable (as postulated), especially under conditions when the foundational experiences are few and/or short. The process of memory-of-names-under-such-circumstances may even be recognized by Sam as being generally unreliable (e.g., he generally does not correctly remember the names of people he does not know very well), but this time Sam feels sure of it, resting in part on his generally reliable process of memory-of-faces-under-such-circumstances. Sam occasionally forms a clear memory of someone’s name, and associates that name strongly with a face; and these occasions, although rare, tend to bear out the truth.

Given this clarification of the circumstances, Sam’s belief in question should strike us as reasonable, and thereby justified (to some degree). But, by principle (1), Sam is not justified in holding the belief produced by this process (that he met “Lee” yesterday). Indeed, the process which produced the belief that he had met “Elle” (a distinct and reliable memory of her face, together with a less reliable but firmly associated memory of her name) threatens the process which produced the belief that he had met “Lee” (his reading of the name-tag, coupled with his distinct and reliable memory of her face).

What is important to notice about these two processes is their general divergence under such conditions. The reasons each process fails, when it does, are normally unrelated. Although Sam will sometimes misread because of expectations produced by memory, more often each process is used to correct the failings of the other. Because the reading process involved here is mildly dyslexic, we can reasonably assume that its failure rate approaches 5%, and that perhaps half of those failures will be concurrent with failures of the relevant memory processes (as just mentioned). So, very roughly, we might guess that there is a possibility of conflict in only about 2% of the cases where this process might be applied. In other words, for every one hundred beliefs in the range of this process, no more than approximately two will conflict directly; and the number is likely even lower, as the beliefs produced might still turn out to be compatible.[5]
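
The back-of-the-envelope arithmetic behind that guess, spelled out (the 5% failure rate and the one-half overlap are stipulations of the example, not measured values):

    # Rough estimate of Sam's potential conflict rate; figures are stipulated.
    f_read = 0.05    # failure rate of the mildly dyslexic reading process
    overlap = 0.5    # fraction of reading failures shared with memory failures
    potential_conflict = f_read * (1 - overlap)
    print(potential_conflict)   # 0.025: at most two to three conflicts per
                                # hundred beliefs, fewer once compatible
                                # divergences are excluded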

Sam comes to believe he met a woman named ‘Lee’ yesterday, by reading Lee’s name-tag quickly — a process given to a slightly greater failure-rate (than we might expect of a ‘normal’ reader) due to his mild dyslexia. The failure-rate of the whole process, comprising his memory of her face as well as his reading of her name-tag, is no less than the failure rate of the less reliable sub-process, and no more than the straight sum of the two failure rates (depending upon the degree of mutual failures).
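
That bound can be written out explicitly (the symbols are introduced here for convenience: f_mem and f_read are the two sub-process failure rates, f_both the rate of their joint failure; the bound assumes the whole process fails whenever either sub-process does):

    \max(f_{\mathrm{mem}}, f_{\mathrm{read}}) \le f_{\mathrm{whole}} = f_{\mathrm{mem}} + f_{\mathrm{read}} - f_{\mathrm{both}} \le f_{\mathrm{mem}} + f_{\mathrm{read}}

The lower bound obtains when every failure of the more reliable sub-process coincides with a failure of the less reliable one; the upper bound, when the two never fail together.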

Sam also possesses a belief process which would produce the belief that the woman he had met yesterday was named ‘Elle’. Granted, in general Sam’s memory-for-names is unreliable (i.e., its failure rate exceeds what one might consider a threshold for rational reliability, whatever that is); but under the more limited circumstances in which Sam’s memory-for-names is accompanied by a strong association with his reliable memory-for-faces, the resulting beliefs tend toward accuracy. A narrowing of the domain dramatically increases the reliability of this second process (if done correctly).
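
Put in terms of conditional reliability (a gloss, not Goldman’s notation, with θ standing in for whatever truth-ratio threshold reliability requires):

    \Pr(\mathrm{true} \mid \text{name recalled}) < \theta \quad\text{but}\quad \Pr(\mathrm{true} \mid \text{name recalled and strongly face-associated}) \gg \theta

The same token belief-formation falls under both descriptions, an early sign of the process-individuation trouble raised at the end of this paper.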

If we take the reading process as P1 and the memory process as P2, and show that they would conflict only about 2% of the time in terms of P1, then Sam should apparently give up his belief in deference to P2, even though P2 may be less reliable than P1. There is at least one problem here concerning the method of calculating the degree of conflict alone. If we choose to calculate in terms of the union of all beliefs formed by both processes, the degree of conflict will be affected dramatically, depending upon the domain(s) considered relevant.

The 2% divergence we calculated (in a very cursory manner) is confined to those cases where P1 is false. If we look across the entire domain common to both belief-causing functions, we are sure to find a higher rate of divergence. Furthermore, we might expect that as the rate of failure for P2 increases, the rate of conflict will increase. The rate of conflict will not vary directly (1:1) with the rates of failure of P1 and P2 (as a portion of the failures of each will be shared, or at least compatible), but an increase in the rate of failure of either will almost certainly increase the rate of conflict generally. So our 2% conflict rate is overly optimistic once we are forced to include the rate of failure of P2 with respect to P1, even where P1 is successful. Even if our initial calculations had revealed a rate of less than 1% (where P1 failed and P2 conflicted), the inclusion of those cases where P2 failed and P1 conflicted (any ‘correction’ of P2 by P1) could easily push the rate past 1%, especially for any P2 with a general failure rate more than twice that of P1 (or vice versa).
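
A toy calculation makes the asymmetry plain (the rates are invented, and for simplicity the two processes are assumed never to fail together, so every failure of one is a conflict with the other):

    # Hypothetical failure rates over a common domain; no shared failures.
    f1 = 0.005   # P1 errs on 0.5% of the common domain
    f2 = 0.015   # P2 errs on 1.5% (three times P1's rate)

    one_sided = f1        # count only conflicts where P1 fails: 0.5% (< 1%)
    two_sided = f1 + f2   # add cases where P2 fails and P1 'corrects' it
    print(one_sided, two_sided)   # 0.005 vs 0.02: now past the 1% threshold

On the one-sided count, P2 is no threat to P1’s deliverances; on the two-sided count, it disqualifies them, though nothing about either process has changed.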

This trouble may be due to a misreading of principle (1), which might be remedied by spelling out what sort of conflict is considered relevant. We could limit potential P2’s to those processes which have failure rates no greater than that of P1. Our re-written principle might then look as follows:

  (2) For any non-inferential belief p,
    p is justified for S iff:
    (a) p is a result of a reliable process, P1; and,
    (b) there is no procedure, P2, at least as reliable as P1, which conflicts with P1 more than 1% of the time and which, had it been employed, would have led S to give up p.

This would still be too strict. If we look at the period leading up to Sam’s reading Lee’s name-tag, it looks as if he is (intuitively) somewhat justified in believing her name is Elle. Our principle (2) would deny him any degree of justification.

A more widespread trouble seems to enter our principle (even as revised in (2)) because reliability is determined categorically by actual failure rates, while degrees of conflict are calculated by process divergence. By treating reliability categorically, while permitting a dynamic standard of conflict to influence a categorical designation of justification, the principle fails to address the normal intuitive recognition that justification is not a categorical standard. Goldman recognizes this when he indicates:

...notice that justifiedness is not a purely categorical concept, although I treat it here as categorical in the interest of simplicity. We can and do regard certain beliefs as more justified than others. Furthermore, our intuitions of comparative justifiedness go along with our beliefs about the comparative reliability of the belief-causing processes.[6]

Unfortunately, neither the principle he develops nor the principle(s) tested here account for this feature of justification. For the sake of simplicity, we seem to have lost sight of one of the most important properties of both reliability and justification. It is not clear that the principle can be adjusted to make sense of the interaction of beliefs formed by less-than-perfectly-reliable processes. As most processes qualify as ‘less than perfect’, this is a serious problem.

The inadequacy of principle (2), as well as of (1), arises out of a fatal ambiguity in the reliabilist approach: it is never clear how we ought to go about deciding which belief formations belong to a single process. Even if we could designate processes by some ‘content-neutral’ standard, we might still be able to identify sub-categories (or sub-processes) which maintain content neutrality but exhibit starkly different failure rates. Which process or sub-process is relevant in a particular attribution of justification appears to vary widely. In the end, the reliabilist approach seems to introduce more ambiguities than it claims to resolve.







Proseminar: Epistemology
PHIL-706-01, Georgetown University
Fall 1991
(© David Foss, October 18, 1991)


