I work at the intersection of epistemology with the philosophy of mind. I am especially interested in topics that in my view involve an epistemic interaction between internal and external perspectives on one’s own mind. For a sense of what I mean, note that if you believe that it will rain, then from your internal perspective it appears to be a fact about the world that it will rain. If on the other hand you merely know of another person’s belief that it will rain, then from your external perspective this appears merely to be a fact about a particular person’s state of mind, which might support that it will rain only against a background of further evidence concerning that person’s track record, reliability and so forth. These two perspectives can interact because just as you can learn evidence about another person’s beliefs—such as what she believes, and how she came to believe it—you can learn the same kind of evidence about yourself and your own beliefs. I think the interaction between these two perspectives plays a central role in a number of debates in epistemology, and my principal research project examines a few of these in particular: introspective self-knowledge, higher-order justification, the epistemology of memory and testimony, and skepticism in both a contemporary and a historical context.
For additional information on recent and upcoming research, please see my research statement.
Perceptual Justification and the Cartesian Theater: 2019, Oxford Studies in Epistemology 6 (runner-up for the 2015 Marc Sanders Prize in Epistemology).
According to a traditional Cartesian model of perception, perception does not provide one with direct knowledge of the external world. Instead, when you look out to see a red wall, what you learn first is not a fact about the color of the wall—i.e., that it is red—but instead a fact about your own visual experience—i.e., that the wall looks red to you. Recent anti-Cartesian theorists have pushed back against this traditional model, claiming that the epistemic significance of having a perceptual experience is not exhausted by your knowledge that the perceptual experience has occurred. After clarifying the motivations and central commitments of anti-Cartesian accounts of perception, I argue that the most plausible such accounts face a trilemma: Either they must license an implausible chauvinism about one’s own experience, or claim that having an experience can offer additional justification for a belief even when one knows in advance that one will have that experience, or claim that merely reflecting on one’s experiences can defeat the perceptual justification that they otherwise provide.
Is Memory Merely Testimony from One’s Former Self?: 2015, Philosophical Review 124(3): 353-392.
An important philosophical tradition treats the deliverances of one’s own internal faculties as analogous to the deliverances of external sources of testimony. Pushing back against this tradition in the special case of the deliverances of one’s own memory, I aim to highlight the broader interaction between an internal (or first-person) and an external (or third-person) perspective that one might adopt towards one’s own states of mind. According to what I call the ‘diary model’ of memory, one’s memory ordinarily serves as a means for one’s present self to gain evidence about one’s past states of mind, much as testimony from another person can provide one with evidence about that person’s states of mind. I reject the diary model’s analogy between memory and testimony from one’s former self, arguing first that memory and a diary differ with respect to their psychological roles, and second that this psychological difference underwrites important downstream epistemic differences.
Inferential Justification and the Transparency of Belief: 2016, Noûs 50(1): 184-212.
This paper critically examines currently influential transparency accounts of our knowledge of our own beliefs that say that self-ascriptions of belief typically are arrived at by “looking outward” onto the world. For example, one version of the transparency account says that one self-ascribes beliefs via an inference from a premise to the conclusion that one believes that premise. This rule of inference reliably yields accurate self-ascriptions because you cannot infer a conclusion from a premise without believing the premise, and so you cannot infer from a premise that you believe the premise unless you do believe it. I argue that this procedure cannot be a source of justification, however, because one can be justified in inferring from p that q only if p amounts to strong evidence that q is true. This is incompatible with the transparency account because p often is not very strong evidence that you believe that p. For example, unless you are a weather expert, the fact that it will rain is not very strong evidence that you believe it will rain. After showing how this intuitive problem can be made precise, I conclude with a broader lesson about the nature of inferential justification: that inferential transitions between beliefs, when justified, must be underwritten by evidential relationships between the facts or propositions which those beliefs represent.
What’s the Matter With Epistemic Circularity?: 2014, Philosophical Studies 171(2): 177-205.
If the reliability of a source of testimony—another person, a religious text, a crystal ball—is open to question, it seems epistemically illegitimate to verify the source’s reliability by appealing to that source’s own testimony. Is this because it is illegitimate to trust a questionable source’s testimony on any matter whatsoever? Or is there a distinctive problem with appealing to the source’s testimony on the matter of that source’s own reliability? After distinguishing between two different kinds of epistemically illegitimate circularity—bootstrapping and direct self-verification—I argue for a qualified version of the claim that there is nothing especially illegitimate about using a questionable source to evaluate its own reliability. Instead, it is illegitimate to appeal to a questionable source’s testimony on any matter whatsoever, with the matter of the source’s own reliability serving only as a special case.
Works in Progress
Recent discussions of memory in epistemology have largely centered on two related problems: the problem of stored beliefs and the problem of forgotten evidence. This paper presents a unified discussion of these two problems, as well as an explanation of their broader epistemic significance. The problem of stored beliefs concerns cases in which one’s current conscious belief in a proposition would be undermined if further beliefs were recalled from memory. The question is whether these further beliefs can still undermine the conscious belief even if they remain stored. Building on previous work, I argue that merely stored beliefs can undermine conscious beliefs. The problem of forgotten evidence concerns cases where one’s present belief would be undermined by evidence that one previously knew, but has since forgotten. The question is whether this evidence can still undermine one’s current belief. I argue that it can under the right circumstances. In both cases, the matter comes down to whether a belief can be undermined even if one has no way to give up the belief without doing something irrational. I argue that this is possible, and (surprisingly) consistent with a general ban on epistemic dilemmas. These debates about the epistemology of memory have broader epistemic significance because of their interaction with internalism, the view that whether an agent is justified in holding a doxastic attitude depends solely on factors internal to that agent’s first-person perspective. Even those who are sympathetic to internalism in broad outline disagree about what is genuinely internal to an agent’s first-person perspective, and cases of forgotten evidence and stored beliefs lie at the fault lines of competing views about what is internal. An important feature of the view I advocate in the paper is that it accepts core internalist principles and intuitions, but takes much more to qualify as internal than many traditional internalists have allowed.
As G. E. Moore famously observed, it seems somehow improper (or “absurd”) to assert propositions of the form <p, but I don’t believe that p>. Many theorists have thought we should go further, and say that it is irrational to believe propositions of this form as well. I respond to this further claim by offering a novel decision-theoretic explanation of the impropriety of Moore-paradoxical assertions. Unlike other prominent proposals, the one I endorse does not appeal to the communicative intentions typically associated with the speech act of assertion, but instead extends to agents who have the narrower aim to tell the truth, the whole truth, and nothing but the truth. This is important in two ways. First, it enables the proposal to explain the common intuition that it is irrational to inwardly affirm Moore-paradoxical propositions to oneself, even though in this case many of the communicative intentions associated with assertion are inapplicable. Second, the explanation of the inappropriateness of Moore-paradoxical judgments fails to carry over to Moore-paradoxical beliefs. This raises problems for views which posit an especially intimate relationship between the state of believing that p and the disposition to make a judgment that p. Moreover, it undercuts one line of support for rationalist theories of self-knowledge, which seek to understand the special knowledge we have of our own minds in terms of rationality or reasoning, in contrast to inner sense theories which seek to understand self-knowledge in broadly perceptual terms.
How should you act when your actions themselves amount to evidence about what their consequences will be? This paper proposes that one should prefer actions with a greater degree of ratifiability. This proposal handles Andy Egan’s alleged counterexamples to causal decision theory, as well as the cases that led Egan to reject the potential replacements for it that he considers. And it has more acceptable implications for many-option cases than a related suggestion from Ralph Wedgwood. Although the proposal faces some challenges raised elsewhere for Wedgwood, some of these are less serious than they seem, while others are problems for any view consistent with our intuitions about relevant examples. Perhaps you should not consider the evidential significance of your actions at all, in contrast to these intuitions. But if you should, you should do it as the present proposal instructs.
Higher-Order Evidence is the Wrong Kind of Reason (handout; draft coming soon)
Ordinary reasoning is transparent, in that the reasoner does not consider her own beliefs, or whether they are rational. Instead, the reasoner simply attends to the worldly matters that her beliefs are about. But reflectivists like Tyler Burge, Christine Korsgaard, and Declan Smithies think there is a distinct type of reflective reasoning, in which an agent revises her beliefs in light of higher-order reflections about their rationality. For example, a reflective agent might consider her evidence and judge that she is rationally required to hold a belief—and then follow through by adopting it. Reflective reasoning is alleged to have wide-ranging significance for epistemology, ethics, and the nature of rational agency. Against this widespread reflectivist picture, I claim that higher-order reflections give one the wrong kind of reason for belief. A paradigmatic example of the wrong kind of reason for belief is a moral reason. If you judge that you are morally required to believe that your friend is innocent of a crime, this arguably gives you a reason to believe it, but not the right kind of reason. I argue that the same goes for the judgment that you are rationally required to believe. In my view, it is just as irrational to believe on the grounds that belief is rationally required as on the grounds that it is morally required. Reflectivists can respond by claiming that from the agent’s first-person perspective, the question whether it is the case that p is not distinct from the question whether it is rational for her to believe p, as it is from the question whether believing p is rational for some other agent. But I argue that this claim leads to an objectionable chauvinism. When I judge that it is rational for you to believe that p, this gives me a reason to believe p only to the extent that I consider it unlikely that you are in the unfortunate position of being rationally required to believe a falsehood (because of misleading evidence, for example).
If I go on to treat the question whether it is rational for me to believe p as having a more direct bearing on whether p, then I am guilty of chauvinistically treating myself as less likely to occupy the same unfortunate position.
Intellectual Autonomy and the Cartesian Circle (draft coming eventually)
This paper explores a widely overlooked interaction between Descartes’ epistemology and his metaphysics of persons, with the aim of explaining a pair of famously puzzling features of his response to skepticism. The first is that Descartes allows the Meditator to vindicate the reliability of reason using reason, but not the reliability of other sources like sensory perception or testimony using sensory perception or testimony. The second is that Descartes is willing to grant momentary knowledge (cognitio) of a geometrical theorem to an atheist, who lacks proof of the reliability of his own cognitive faculties, and yet he denies that this knowledge can persist when the atheist is not consciously entertaining the theorem’s demonstration. The explanation of these otherwise puzzling commitments, I argue, can be found only if we go beyond the epistemology to Descartes’ views on free will and mind-body interaction. According to Descartes, the difference between knowledge and true belief is that the knower is in control of whether he assents to the truth, whereas the believer is dependent on external events working out in his favor. Descartes furthermore thought that the mind is an immaterial substance which interacts with a physical machine, and that our freedom and responsibility extend only to what is internal to the mind. The internal includes the faculties of reason and intellectual memory, but sensory perception, imagination, and sensory memory require the mind to interact with the brain and sensory organs in a manner roughly analogous to a homunculus using a surveillance camera and a diary. For illustration, when one demonstrates that God exists using reason, one’s knowledge that God exists remains intact even when one is not consciously attending to the demonstration.
But when one appeals to mental images in demonstrating a geometrical theorem, as on my reading the atheist geometer does, the images are recorded “offline” in one’s brain, like a diagram tucked away in one’s diary. Because the geometrical demonstration itself is stored externally, standing geometrical knowledge depends on knowledge of the reliability of the faculties used to arrive at the demonstration. And so the atheist geometer, who lacks knowledge of their reliability, can know the geometrical theorem only when he is consciously attending to the demonstration itself.