Tuesday, 6 October 2015

PERFECT Year Two: Lisa

The second year of the ERC-funded project PERFECT (logo above) has just started, and it is time to look back at what we have done so far and make plans for the future.

What we have done so far

Ema, Magdalena, Michael and I have had a very busy time, delivering talks, writing papers, and sponsoring a series of really interesting, interdisciplinary events, including a public engagement event on Sight, Sound and Mental Health for the Arts and Science Festival 2015, a Delusion lunchtime seminar with experts on delusion formation, and a session on the Function of Delusions as part of the Royal College of Psychiatrists Annual Congress.

We had three papers published open access: a review paper on costs and benefits of realism and optimism in Current Opinion in Psychiatry, a paper on the ethics of delusional belief in Erkenntnis, and a review paper on the nature and development of delusions in Philosophy Compass. Many more are in progress!

We continued to disseminate our work on the blog, created an Imperfect Cognitions playlist on YouTube highlighting our network members' work, and launched a free app for iOS and Android, called PERFECT, with information about events, links to relevant sites, videos, blog posts, links to papers, and some interactive features (e.g., "Ask PERFECT"!).

What I plan to do next

Ema, Magdalena, and Michael will tell you about their own plans in the next few Tuesday blog posts, and I will say something about mine here. 

I am increasingly interested in what irrational cognitions mean for agency. Overly optimistic beliefs ("I am a talented football player" when I am just a mediocre one) and explanations that are not grounded in evidence ("I offered the job to Jim and not to Julie because he was better prepared" when my selection was based on implicit biases due to gender stereotypes) are good examples of cognitions that do not seem to get us any closer to the truth but play some role in helping us achieve other goals, some of which turn out to be epistemically worthwhile.

Monday, 5 October 2015

A Tale of Two Optimists

In our everyday conception, an optimist is someone who looks at the bright side and expects good things to happen. The psychological literature distinguishes between different kinds of optimism: dispositional optimism and unrealistic optimism (also known as the optimism bias). Which of these types of optimism corresponds to our lay conception and how are they related to each other? Do these types of optimism differ in their effects? 

Dispositional optimism is conceptualized as a general tendency to expect good outcomes and as a fairly stable personality trait. The optimism bias, on the other hand, is supposed to be a cognitive bias whereby people overestimate the likelihood of encountering specific positive events and underestimate the likelihood of encountering specific negative events. This unrealistic estimate can be either comparative or absolute. In the first case, we rate our own prospects as better than those of comparable others; in the second, we overestimate our prospects relative to the actual likelihood.

While these two types of optimism sometimes coincide, dispositional optimism does not predict unrealistic optimism and vice versa (Shepperd 2002). Nevertheless, there are some intriguing parallels between trait optimism and unrealistic optimism: both seem to require the perception that we exert control over events. Trait optimism is linked to an optimistic explanatory style, which stresses the possibility of personal control over events (Forgeard et al. 2012). Unrealistic optimism is frequently associated with the illusion of control, the illusion that we control events to an extent that we in fact do not. Furthermore, a correlation between trait optimism and optimism bias in belief updating for predictions concerning oneself, compared to predictions for others, has been observed (Kuzmanovic et al. 2015).

Both dispositional optimism and unrealistic optimism have been claimed to have beneficial consequences for individuals (Carver et al. 2010; Taylor and Brown 1994). But findings are far more mixed in the case of unrealistic optimism: especially in the literature on health and optimism, unrealistic optimism has been linked to dangerous complacency regarding health risks.

Importantly, in contrast to unrealistic optimism, dispositional optimism is not conceptualized as intrinsically unrealistic, though it may be if it coincides with unrealistic optimism. This may seem puzzling at first; we might think that dispositional optimism and the optimism bias are just two ways of measuring the same thing: one looks at general expectations for the future, the other at specific ones. We would expect specific positive expectations to produce a positive general outlook, and the general expectation to manifest itself in specific predictions.

But a generally positive outlook and specific expectations need not be linked in this way: ‘The optimist thinks we live in the best of all possible worlds, the pessimist fears he may be right.’ As this wonderful joke illustrates, optimism does not necessarily entail error, it is just as much about the spin we put on events.

A general expectation of good things happening to me is much harder to disappoint than a very specific prediction. If I overestimate my chances of getting a specific job, I may end up being disappointed when I don't. If I expect 'more good things than bad things to happen to me', then I might focus on the valuable feedback I got when I was rejected, and therefore have something to add to my stock of good things happening to me despite experiencing rejection. This flexibility of outlook and evaluation in general optimism may make it more beneficial to individuals, as it does not set them up for disappointment in the way very specific unrealistically optimistic expectations can.

Thursday, 1 October 2015

Epistemic Utility Theory: Interview with Richard Pettigrew

In this post I interview Richard Pettigrew (in the picture above), who is Professor in the Department of Philosophy at the University of Bristol and is leading a four-year project entitled "Epistemic Utility Theory: Foundations and Applications", also featuring Jason Konek, Ben Levinstein, Chris Burr and Pavel Janda. Ben Levinstein left the project in February to join the Future of Humanity Institute in Oxford. Jason Konek left the project in August to take up a tenure-track post at Kansas State University. They have been replaced by Patricia Rich (PhD, Carnegie Mellon) and Greg Gandenberger (PhD, Pitt; postdoc at LMU Munich).

LB: When did you first become interested in the notion of epistemic utility? What inspired you to explore its foundations and applications as part of an ERC-funded project?

RP: It all started in my Masters year, when I read Jim Joyce's fantastic paper 'A Nonpragmatic Vindication of Probabilism' (Philosophy of Science, 1998, 65 (4): 575-603). In that paper, Joyce wished to justify the principle known as Probabilism. This is a principle that is intended to govern your credences or degrees of belief or partial beliefs. Probabilism says that your credences or degrees of belief should obey the axioms of the probability calculus. Joyce notes that there already exist justifications for that principle, but they all appeal to the allegedly bad pragmatic consequences of violating it -- if your credences violate Probabilism, these arguments show, they'll lead you to make decisions that are guaranteed to be bad in some way.

The Dutch Book argument, as put forward by Frank Ramsey and Bruno de Finetti, is the classic argument of this sort. As his title suggests, Joyce seeks a justification that doesn't appeal to the pragmatic problems that arise from non-probabilistic credences. He's interested in identifying what is wrong with them from a purely epistemic point of view. After all, suppose I violate Probabilism because I believe that it's going to rain more strongly than I believe that it will rain or snow. It may well be true that these credences will have bad pragmatic consequences -- they may well lead me to buy a £1 bet that it will rain for more than I will sell a £1 bet that it will rain or snow, for instance, and that will lead to a sure loss for me.
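The sure loss in this rain example can be made concrete with a little arithmetic. Below is a minimal sketch; the specific credence values (0.7 for rain, 0.5 for rain-or-snow) are my own hypothetical illustration, not figures from the interview, but they reproduce the structure Pettigrew describes: rain entails rain-or-snow, so any probability function must assign the disjunction at least as much probability as the disjunct.

```python
# Hypothetical credences violating Probabilism: since rain entails
# rain-or-snow, the disjunction should get at least as much credence.
cr_rain = 0.7          # my credence that it will rain
cr_rain_or_snow = 0.5  # my credence that it will rain or snow (too low)

stake = 1.0  # £1 bets; a fair price for a £1 bet at credence c is £c

# I buy the rain bet at £0.70 and sell the rain-or-snow bet at £0.50,
# so I am £0.20 down before the weather is even settled.
net_from_prices = -cr_rain * stake + cr_rain_or_snow * stake

payoffs = []
for rain, snow in [(True, True), (True, False), (False, True), (False, False)]:
    payoff = net_from_prices
    if rain:
        payoff += stake  # the bet I bought pays out to me
    if rain or snow:
        payoff -= stake  # the bet I sold pays out to its buyer
    payoffs.append(round(payoff, 2))

print(payoffs)  # negative in every possible world: a Dutch Book
```

Whatever the weather does, the bettor ends up at least £0.20 worse off, which is exactly the guaranteed loss the Dutch Book argument turns into a charge of irrationality.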

But there also seems to be some purely epistemic flaw in my credences. Joyce wishes to identify that flaw. To do this, he identifies a notion of purely epistemic utility for credences. This is supposed to be a measure of how good a set of credences is from a purely epistemic point of view; how valuable they are, epistemically speaking. His thought is that we value credences according to their accuracy. Very roughly, a credence in a true proposition is more accurate the higher it is, whereas a credence in a falsehood is more accurate the lower it is.

So if I believe that it will rain more strongly than you do, and it does rain, then I will be more accurate than you, because I have a higher credence in a truth. Joyce then proves a startling result: he shows that, if your credences violate Probabilism -- that is, if they do not satisfy the axioms of the probability calculus -- then there are alternative credences in the same propositions that are guaranteed to be more accurate than yours. However the world turns out, the alternative credences will be more accurate; you know a priori that they outperform yours from the point of view of accuracy. From this, he infers that non-probabilistic credences must be irrational. This is his nonpragmatic vindication of Probabilism.
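The dominance result can be checked by hand in the simplest case. The sketch below uses the Brier score (squared error) as the inaccuracy measure and two hypothetical credences of my own choosing; Joyce's theorem itself covers a broad class of inaccuracy measures, and this is just one worked instance of it.

```python
# Accuracy dominance for two propositions, Rain and Not-Rain, measured
# by the Brier score: lower inaccuracy = more accurate.
def brier(credences, world):
    """Sum of squared distances from each credence to the truth value (1/0)."""
    return sum((truth - c) ** 2 for c, truth in zip(credences, world))

incoherent = (0.7, 0.5)  # credences in (Rain, Not-Rain); 0.7 + 0.5 != 1
coherent = (0.6, 0.4)    # nearest probabilistic credences (Euclidean projection)

worlds = [(1, 0), (0, 1)]  # it rains / it doesn't
scores = {w: (round(brier(incoherent, w), 2), round(brier(coherent, w), 2))
          for w in worlds}
print(scores)  # the coherent credences score strictly lower in both worlds
```

In both possible worlds the probabilistic credences (0.6, 0.4) are strictly more accurate than the incoherent pair (0.7, 0.5): they dominate it, which is the sense in which the incoherent credences are a priori defective.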

So that was the starting point for me. I read about it in my Masters year and played around with the framework for a few years after that. But it was only when I started talking with Hannes Leitgeb about the project that things took off. We wrote two papers together proposing alternative accuracy-based justifications for Probabilism and other norms ('An Objective Justification of Bayesianism I: Measuring Inaccuracy', 2010, Philosophy of Science, 77: 201-235; 'An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy', 2010, Philosophy of Science, 77: 236-272). And since then, this topic has been the main focus of my research for five years. I find so many aspects of Joyce's argument and technical result compelling. For one thing, the conclusion of the argument seems to lie far from the premises, so it seems to make real philosophical progress -- from an account of how to assign epistemic value or utility to credences, we get a very specific and mathematically precise norm. 

Another appealing feature of the argument, and the one that launched the project I'm currently exploring, is that it suggests a way in which we might argue for other credal principles. Joyce's argument essentially has two premises. The first is an account of epistemic utility: the epistemic utility of a credence is its accuracy. The second is a principle of decision theory: it is the dominance principle, which says that it's irrational to pick an option when there is an alternative option that is guaranteed to have greater utility -- that is, it is irrational to pick a dominated option. Using a mathematical theorem -- which shows that the options that aren't dominated are precisely the probabilistic sets of credences -- he derives Probabilism from these two premises. But the dominance principle is just one of many decision principles. Thus, a natural question emerges: which principles for credences follow from the other decision principles?

Tuesday, 29 September 2015

Debunking Dualist Notions of Near-Death Experiences

This post is by Hayley Dewe, pictured above. She is a PhD student from the School of Psychology at the University of Birmingham. Her research is based in The Selective Attention and Awareness laboratory, directed by Jason Braithwaite. Her research focuses on the neurocognitive correlates of anomalous (hallucinatory) experience, specifically pertaining to the ‘self’, embodiment, and consciousness.

In this post I will briefly discuss the extraordinary phenomena of Near-Death Experiences (NDEs), and highlight key arguments raised in my recent paper, co-authored with Jason Braithwaite, which explores how findings from neuroscience can help debunk dualist notions of NDEs (Braithwaite & Dewe 2014; published in The (UK) Skeptic magazine).

NDEs are striking experiences that typically occur when one is close to death or exposed to life-threatening situations of intense physical and/or emotional danger (the term was coined by Moody 1975, Life after Life. New York: Bantam Books). This unusual experience includes a variety of aberrant components, such as: sensations of peace and vivid imagery, bright flashes of light, the sensation of travelling through a dark tunnel towards a bright light, a disconnection from the physical body (a shift in perspective: the Out-of-Body Experience), and the sensation of entering a light / visions of an 'afterlife' (Greyson 1980).

From a parapsychological (or survivalist / supernatural) perspective, NDEs are understood as mystical and spiritual experiences that expose the individual to another world (or afterlife). This is taken as evidence for the survival of bodily death (i.e. dualism); that the mind/consciousness is not dependent on the brain (Parnia and Fenwick 2002; van Lommel et al. 2001).

In stark contrast is the scientific/neuroscience perspective. Here, it is argued that NDEs are hallucinatory phenomena, generated by a disinhibited and highly confused, dying brain (known as the ‘dying brain account’; Blackmore 1996; Braithwaite 2008; Jansen 1990).

There are two important arguments pertaining to the scientific account that I would like to raise here, alongside a host of logical fallacies and methodological discrepancies within the parapsychological literature (discussed at length in Braithwaite & Dewe 2014; Braithwaite 2008). First, to our knowledge, there appears to be no objective study validating the presence of an entirely inactive human brain with the simultaneous occurrence of an NDE. This should be of principal concern to survivalists: how can the NDE be taken as a glimpse of an afterlife, or as evidence for dualist notions of life (or mind) surviving brain death, if no such evidence actually exists? Further, even if there were evidence of a completely inactive brain and a subsequent recollection of an NDE, evidencing their simultaneous occurrence would be extremely problematic. How could one pinpoint the precise time frame in which the NDE components occurred? The NDE itself may well have occurred before levels of brain activity became 'inactive' (or 'flattened'), or it may even have been experienced and recalled afterwards, during recovery.

Secondly, no component of the NDE is actually unique to the 'near-death' experience. The visual perceptions that are reported, such as flashes of light, or shifts in perspective (i.e. the OBE), can and do occur in a variety of contexts, not only when one is close to death. For instance, OBEs reportedly occur in 12% of the general population (Blackmore 1984). Therefore, one needn't be 'near to death' to experience NDE phenomena, and we consequently suggest that dualist / survivalist arguments based on NDEs are flawed.