London's mayor, Boris Johnson, got himself in hot water last week with the following politically incorrect remark:
Like it or not, the free market economy is the only show in town. ... No one can ignore the harshness of that competition, or the inequality that it inevitably accentuates; and I am afraid that violent economic centrifuge is operating on human beings who are already very far from equal in raw ability, if not spiritual worth.
Whatever you may think of the value of IQ tests, it is surely relevant to a conversation about equality that as many as 16 per cent of our species have an IQ below 85, while about 2 per cent have an IQ above 130. The harder you shake the pack, the easier it will be for some cornflakes to get to the top.
And for one reason or another – boardroom greed or, as I am assured, the natural and god-given talent of boardroom inhabitants - the income gap between the top cornflakes and the bottom cornflakes is getting wider than ever.
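Those IQ figures, for what it's worth, are just what the conventional scale predicts: IQ tests are normed to a normal distribution with mean 100 and standard deviation 15, so both percentages fall straight out of the normal CDF. A quick sanity check, using nothing but Python's standard-library error function:

```python
from math import erf, sqrt

def norm_cdf(x, mean=100.0, sd=15.0):
    """CDF of a normal distribution, computed via the error function."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

below_85 = norm_cdf(85)           # one standard deviation below the mean
above_130 = 1.0 - norm_cdf(130)   # two standard deviations above the mean

print(f"IQ below 85:  {below_85:.1%}")   # 15.9% -- "as many as 16 per cent"
print(f"IQ above 130: {above_130:.1%}")  # 2.3%  -- "about 2 per cent"
```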
Johnson was attacked for daring to say out loud that there's a correlation between IQ and income/wealth inequality. Most commentators cited a 2007 paper by Jay Zagorsky that supposedly refutes Johnson's claim.
But I'm not so sure. I think most commentators privately agree with Johnson, and their mock outrage is simply meant to inflame their readers.

First, you should read the Zagorsky paper yourself and reach your own conclusion. Contrary to reports, Zagorsky's analysis actually shows a strong correlation between IQ and income/wealth before he fudges the numbers by "controlling for other variables". In doing so, he muddies the result. Let me explain. Imagine a study that shows that men are, on average, physically stronger than women, then controls for shirt size, and comes to the conclusion that women are just as strong as men! Since shirt size is probably linked to the outcome being measured (physical strength), by controlling for it you falsely skew the result.
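The shirt-size fallacy is easy to demonstrate with a toy simulation (every number below is invented purely for illustration): generate a real group difference, let the "control" variable track the outcome itself, then compare the raw gap with the within-stratum gap.

```python
import random

random.seed(0)  # deterministic toy data

# Invent a population in which sex drives strength, and shirt size
# largely tracks strength itself (i.e. it's a proxy for the outcome).
people = []
for _ in range(10_000):
    male = random.random() < 0.5
    strength = random.gauss(60 if male else 45, 8)
    shirt = round(strength / 10)   # shirt size is driven by strength
    people.append((male, strength, shirt))

def mean(xs):
    return sum(xs) / len(xs)

# Raw comparison: a large gap.
overall_gap = (mean([s for m, s, _ in people if m])
               - mean([s for m, s, _ in people if not m]))

# "Controlling for shirt size": compare men and women within each
# shirt-size stratum, then average the within-stratum gaps.
within_gaps = []
for size in sorted({sh for _, _, sh in people}):
    men = [s for m, s, sh in people if m and sh == size]
    women = [s for m, s, sh in people if not m and sh == size]
    if men and women:
        within_gaps.append(mean(men) - mean(women))
controlled_gap = mean(within_gaps)

print(f"raw gap:          {overall_gap:.1f}")    # close to the true 15
print(f"'controlled' gap: {controlled_gap:.1f}")  # a small fraction of that
```

Because shirt size is itself caused by strength, stratifying on it throws away most of the very difference being measured. That is the sense in which controlling for a variable downstream of the cause can make a real effect vanish.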
Second, I believe Johnson is admirable in trying to identify root causes of inequality, regardless of political correctness. If inequality in income and wealth results from differences in IQ (that are beyond your control), let's do something about it, not attack the messenger. I believe that other innate qualities like temperament, energy-level and drive are more important than IQ in explaining natural inequality, but the point remains. If we accept that natural inequality is true, it justifies more income re-distribution and higher taxes on the wealthy to support the poor. To say otherwise and fall back on political correctness just makes inequality worse.
1. Introduction

In this paper I introduce the idea of a higher-order modal logic—not a modal logic for higher-order predicate logic, but rather a logic of higher-order modalities. “What is a higher-order modality?”, you might be wondering. Well, if a first-order modality is a way that some entity could have been—whether it is a mereological atom, or a mereological complex, or the universe as a whole—a higher-order modality is a way that a first-order modality could have been. First-order modality is modeled in terms of a space of possible worlds—a set of worlds structured by an accessibility relation, i.e., a relation of relative possibility—each world representing a way that the entire universe could have been. A second-order modality would be modeled in terms of a space of spaces of (first-order) possible worlds, each space representing a way that (first-order) possible worlds could have been. And just as there is a unique actual world which represents the way that things actually are, there is a unique actual space which represents the way that first-order modality actually is.

One might wonder what the accessibility relation itself is like. Presumably, if it is logical or metaphysical modality that is being dealt with, it is reflexive; but is it also symmetric, or transitive? Especially in the case of metaphysical modality, the answer is not clear. And whichever of these properties it may or may not have, could that itself have been different? Could at least some rival modal logics represent different ways that first-order modality could have been?

To be clear, the idea behind my proposal is not just that some things which are possible or necessary might not have been so at the first order, as determined by the actual accessibility relation, but also that the actual accessibility relation, and hence the nature or structure of actual modality, could have been different at some higher order of modality.
Even if the accessibility relation is actually both symmetric and transitive, perhaps it could (second-order) have been otherwise: There is a (second-order) possible space of worlds in which it is different, where it fails to be symmetric, or transitive. We must, therefore, introduce the notion of a higher-order accessibility relation, one that in this case relates spaces of first-order worlds. The question then arises as to whether that relation is symmetric, or transitive. We can then consider third-order modalities, spaces of spaces of spaces of possible worlds, where the second-order accessibility relation differs from how it actually is. I can see no reason why there should be a limit to this hierarchy of higher-order modalities, any more than I can see a reason why there should be a limit to the hierarchy of higher-order properties. There will thus be an infinity of orders, one for each positive integer, and each order will have an accessibility relation of its own. To keep things as clear as possible, a space of first-order points (i.e., of possible worlds) shall be called a galaxy, a space of second-order points, a universe, and a space of any higher order, a cosmos. However, to keep things as simple as possible, in what follows I will deal with but a single cosmos at a time, and hence will not deal with modalities higher than the third order.

The accessibility relation is not the only thing that might be thought to vary between spaces of worlds: Perhaps the contents of the spaces can vary as well. While I presume that the contents of the worlds themselves remain constant—it makes doubtful sense to suppose that in one space some entity e exists in a world w and in another space e doesn’t exist in that same world w—we may suppose that different spaces may differ as to which worlds they contain, just as different worlds may differ as to which objects they contain. Thus we might have a higher-order analogue of a variable-domain modal logic.
There seem, then, to be three ways in which spaces can differ: First, as to the properties of the accessibility relation; second, as to which worlds the relation relates; and third, as to which worlds or spaces are parts of their domains.

The paper will be structured as follows. In Section 2 I provide some reasons why one might want to pursue this kind of project in the first place. In Section 3 I outline the syntax and semantics of my proposed logic. Section 4 covers semantic tableaux for this system; and after giving the rules for their construction, I construct a few of them myself to establish some logical consequences of the system and give the reader a feel for how it works. In Section 5 I outline a potential application of my framework to the metalogic of modal logics. In Sections 6, 7 and 8 I explore some of its potential philosophical implications for areas besides logic, namely the philosophy of language; metaphysics, including the metaphysics of modality, the philosophy of time, and laws of nature; and finally the philosophy of religion, before concluding the paper in Section 9.
The Center for Ethics and Public Affairs at the Murphy Institute at Tulane University has extended the application deadline for residential Faculty Fellowships for the 2014-2015 academic year. The new application deadline is February 15, 2014.
Rutgers University will be hosting a five-day metaphysics summer school for graduate students, running May 19th-23rd, 2014, and featuring Karen Bennett, Shamik Dasgupta, Laurie Paul, Jonathan Schaffer, and Ted Sider.
All local (NY/NJ area) graduate students are invited to attend.
Non-local graduate students must apply to attend, by sending the following to email@example.com by January 10, 2014:
• A single page cover letter
• A curriculum vitae
• A writing sample on any topic in metaphysics
• A brief letter of recommendation (which need be no more than one paragraph), sent from a professor familiar with your work
Applicants will be notified by February 1, 2014. Housing and possibly some limited financial support will be available for non-local graduate students.
You can find the link, and the call for papers, here.
If your first name begins with V, W, X, Y, or Z, it's your special month, the one in which we especially encourage you to post on PEA Soup. Please pay heed to the calendar of events when doing so, in order to give any planned events a few days to breathe.
On behalf of Russ Shafer-Landau I'm very pleased to announce a Call for Papers for the 2nd annual Marc Sanders Prize in Metaethics.
The winner of the prize will receive $8,000, present his or her paper at the next Wisconsin Metaethics Workshop, and have the winning paper included in a forthcoming volume of Oxford Studies in Metaethics.
More information below the fold.
The Marc Sanders Prize in Metaethics
In keeping with its mission of encouraging and recognizing excellence in philosophy, The Marc Sanders Foundation seeks to highlight the importance of ongoing support for the work of younger scholars. As part of this commitment, the Foundation has dedicated resources to an ongoing essay competition, designed to promote excellent research and writing in metaethics on the part of younger scholars. More information about the Sanders Prize in Metaethics, and the Marc Sanders Foundation, can be found at http://www.marcsandersfoundation.org/sanders-prizes/metaethics/.
The Marc Sanders Prize in Metaethics is an annual essay competition open to scholars who are within fifteen years of receiving a Ph.D. or students who are currently enrolled in a graduate program. Independent scholars may also be eligible, and should direct inquiries to the Editor of Oxford Studies in Metaethics, Russ Shafer-Landau, at firstname.lastname@example.org. The award for the prizewinning essay is $8,000, and winning essays will be published in Oxford Studies in Metaethics. The recipient of the award will be expected to present his or her paper at the Annual Wisconsin Metaethics Workshop, Sept 12-14, 2014, at the University of Wisconsin-Madison. More information about the Workshop can be found at https://sites.google.com/site/wiscmew/.
Submitted essays must present original research in metaethics. Essays should be between 7,500 and 12,000 words. Since winning essays will appear in Oxford Studies in Metaethics, submissions must not be under review elsewhere. To be eligible for this year’s prize, submissions must be received, electronically, by March 1, 2014. Refereeing will be blind; authors should omit remarks and references that might disclose their identities. Receipt of submissions will be acknowledged by e-mail. The winner will be determined by a committee of members of the Editorial Board of Oxford Studies in Metaethics and will be announced in early June.
All inquiries should be directed to the Prize Administrator, Russ Shafer-Landau, at email@example.com.
There's an exciting new theory in cognitive science. The theory began as an account of message-passing in the visual cortex, but it quickly expanded into a unified explanation of perception, action, attention, learning, homeostasis, and the very possibility of life. In its most general and ambitious form, the theory was mainly developed by Karl Friston -- see e.g. Friston 2006, Friston and Stephan 2007, Friston 2009, Friston 2010, or the Wikipedia page on the free-energy principle.
Unfortunately, Friston isn't very good at explaining what exactly the theory says. The unifying principle at its core is called the free-energy principle. It says that "any self-organizing system that is at equilibrium with its environment must minimize its free energy" (Friston 2010). Both perception and action are then characterized as serving this goal of minimizing free energy.
Let's have a closer look. The term "free energy" originally comes from thermodynamics, but Friston's usage has its home in variational Bayesian methods for approximating intractable integrals in machine learning and statistics.
The basic idea is actually quite simple. Suppose we have a probability distribution P that we would like to conditionalize on some data d. Call the target distribution P'. If the distributions are moderately complex, computing P' turns out to be infeasible. What we can do instead is focus on a restricted class of computationally simple distributions and find the distribution Q from within that class that is most similar to P'. For many purposes, Q will do as an approximation to the target distribution P'.
The similarity between Q and P' is commonly measured by the "Kullback-Leibler divergence", which can be written in two ways:

KL(Q||P') = Σ_x Q(x) log(Q(x)/P'(x)) = Σ_x Q(x) log(Q(x)/P(x,d)) - (-log P(d)).
The first term in the second formulation is called the (variational) free energy of Q (relative to P and d); the second term, -log P(d), is called the surprise or surprisal of d (relative to P).
Note that Q doesn't occur in -log P(d). So when we look for the function Q that minimizes KL(Q||P'), we can equivalently look for the function Q that minimizes free energy. Given suitable restrictions on the class of functions from which Q is chosen, this can be done very efficiently by an iterative algorithm described e.g. on these useful slides by Kay Brodersen.
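The decomposition is easy to verify numerically. Below is a minimal sketch with an invented three-state model (all probabilities are arbitrary): for any candidate Q, free energy differs from KL(Q||P') by exactly the surprise -log P(d), which doesn't depend on Q, so minimizing the one minimizes the other.

```python
import math

# An invented toy model: hidden state x in {0, 1, 2}, one observed datum d.
prior = {0: 0.5, 1: 0.3, 2: 0.2}        # P(x)
likelihood = {0: 0.9, 1: 0.4, 2: 0.1}   # P(d | x), for the d actually observed

joint = {x: prior[x] * likelihood[x] for x in prior}   # P(x, d)
p_d = sum(joint.values())                              # P(d)
posterior = {x: joint[x] / p_d for x in joint}         # P'(x) = P(x | d)

def kl(q, p):
    """Kullback-Leibler divergence KL(q || p)."""
    return sum(q[x] * math.log(q[x] / p[x]) for x in q)

def free_energy(q):
    """Variational free energy of q, relative to the joint P(x, d)."""
    return sum(q[x] * math.log(q[x] / joint[x]) for x in q)

surprise = -math.log(p_d)

# For any candidate approximation Q: F(Q) = KL(Q || P') + surprise.
q = {0: 0.6, 1: 0.3, 2: 0.1}
assert abs(free_energy(q) - (kl(q, posterior) + surprise)) < 1e-12

# Hence F is an upper bound on surprise, attained exactly at the posterior.
assert free_energy(q) >= surprise
assert abs(free_energy(posterior) - surprise) < 1e-12
```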
How is all this relevant to cognitive science? Well, when our brain receives sensory input, it has to come up with a plausible hypothesis about the state of the world. Bayesian methods are ideally suited to that task, and there are good reasons for thinking that the information processing in our perceptual systems does in fact follow broadly Bayesian principles. But since the ideal Bayesian method (conditionalization) is computationally intractable, our brain can only implement approximations. One hypothesis is that it implements the iterative algorithm of variational approximation. (Let's call this the variational hypothesis about perception.) Minimizing free energy then plays an important role in our perceptual architecture.
The variational hypothesis is not the only game in town. It could turn out that our brain uses some other techniques (MCMC, say) to approximate the Bayesian ideal. However, there's a sense in which our brain would still minimize free energy. For no matter how the brain arrives at its posterior probability Q, approximating the Bayesian ideal means getting close to the Bayesian posterior P', and getting close to P' means minimizing free energy.
(Here is a more complicated way of making essentially the same point, that's popular among Friston & friends. Recall from the above definition of KL(Q||P') that divergence equals free energy minus surprise. Moving surprise to the left-hand side of the equation, we see that free energy equals divergence plus surprise. And since the divergence can't be negative, free energy is an upper bound on surprise. Now Bayesian conditionalization on d turns a prior probability P into a posterior P' such that P'(d)=1 and therefore -log P'(d)=0. So the output of conditionalization is a distribution that, in a sense, minimizes surprise. (Strictly speaking, what's minimized is of course the posterior surprise -log P'(d), not the prior surprise -log P(d) that figures in our formula above.) Any approximation to conditionalization will generally yield a posterior Q that also assigns high probability to the input d. So it will also reduce surprise. And so, all else equal, it will reduce free energy.)
To sum up, the proposal that a perceptual system minimizes free energy can be understood either as a substantive conjecture on how the system implements approximately Bayesian inference (the variational hypothesis) or as a rather less substantive conjecture that merely says that the system (somehow or other) implements approximately Bayesian inference. It's a bit misleading to express the weaker conjecture in terms of free energy rather than (say) surprise, since non-variational approximations to conditionalization don't directly involve free energy. But let that pass.
So far, we've only looked at perception. Let's turn to action. Here the free-energy approach is often advertised as a radical departure from traditional views insofar as it no longer appeals to goals in its explanation of action. Instead, action comes out as just another method for minimizing free energy.
The basic idea seems to go roughly as follows. Suppose my internal probability function Q assigns high probability to states in which I'm having a slice of pizza, while my sensory input suggests that I'm currently not having a slice of pizza. There are two ways of bringing Q into alignment with my sensory input: (a) I could change Q so that it no longer assigns high probability to pizza states, (b) I could grab a piece of pizza, thereby changing my sensory input so that it conforms to the pizza predictions of Q. Both (a) and (b) would lead to a state in which my (new) probability function Q' assigns high probability to my (new) sensory input d'. Compared to the present state, the sensory input will then have lower surprise. So any transition to these states can be seen as a reduction of free energy, in the unambitious sense of the term.
Action is thus explained as an attempt to bring one's sensory input into alignment with one's representation of the world.
This is clearly nuts. When I decide to reach out for the pizza, I don't assign high probability to states in which I'm already eating the slice. It is precisely my knowledge that I'm not eating the slice, together with my desire to eat the slice, that explains my reaching out.
There are at least two fundamental problems with the simple picture just outlined. One is that it makes little sense without postulating an independent source of goals or desires. Suppose it's true that I reach out for the pizza because I hallucinate (as it were) that that's what I'm doing, and I try to turn this hallucination into reality. Where does the hallucination come from? Surely it's not just a technical glitch in my perceptual system. Otherwise it would be a miraculous coincidence that I mostly hallucinate pleasant and fitness-increasing states. Some further part of my cognitive architecture must trigger the hallucinations that cause me to act. (If there's no such source, the much discussed "dark room problem" arises: why don't we efficiently minimize sensory surprise (and thereby free energy) by sitting still in a dark room until we die?)
The second problem is that efficient action requires keeping track of both the actual state and the goal state. If I want to reach out for the pizza, I'd better know where my arms are, where the pizza is, what's in between the two, and so on. If my internal representation of the world falsely says that the pizza is already in my mouth, it's hard to explain how I manage to grab it from the plate.
A closer look at Friston's papers suggests that the above rough proposal isn't quite what he has in mind. Recall that minimizing free energy can be seen as an approximate method for bringing one probability function Q close to another function P. If we think of Q as representing the system's beliefs about the present state, and P as a representation of its goals, then we have the required two components for explaining action. What's unusual is only that the goals are represented by a probability function, rather than (say) a utility function. How would that work?
Here's an idea. Given the present probability function Q, we can map any goal state A to the target function Q^A, which is Q conditionalized on A -- or perhaps on certain sensory states that would go along with A. For example, if I successfully reach out for the pizza, my belief function Q will change to a function Q^A that assigns high probability to my arm being outstretched, to seeing and feeling the pizza in my fingers, etc. Choosing an act that minimizes the difference between my belief function and Q^A is then tantamount to choosing an act that realizes my goal.
This might lead to an interesting empirical model of how actions are generated. Of course we'd need to know more about how the target function Q^A is determined. I said it comes about by (approximately?) conditionalizing Q on the goal state A, but how do we identify the relevant A? Why do I want to reach out for the pizza? Arguably the explanation is that reaching out is likely (according to Q) to lead to a more distal state in which I eat the pizza, which I desire. So to compute the proximal target probability Q^A we presumably need to encode the system's more distal goals and then use techniques from (stochastic) control theory, perhaps, to derive more immediate goals.
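As a minimal illustration of the target function (the states, numbers, and goal below are all made up): Q^A is just Q conditionalized on the goal event A, so acting to close the gap between one's belief function and Q^A is acting to make the goal states likely on one's own evidence.

```python
# Current belief function Q over coarse world states (hypothetical numbers).
Q = {
    "pizza on plate":   0.7,
    "pizza in my hand": 0.2,
    "pizza eaten":      0.1,
}

# Goal event A: the states in which I've got the pizza.
A = {"pizza in my hand", "pizza eaten"}

# Q^A: conditionalize Q on A -- zero out non-A states, renormalize the rest.
mass_A = sum(p for state, p in Q.items() if state in A)
Q_A = {state: (p / mass_A if state in A else 0.0) for state, p in Q.items()}

print(Q_A)
# {'pizza on plate': 0.0, 'pizza in my hand': 0.666..., 'pizza eaten': 0.333...}
```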
That version of the story looks much more plausible, and much less revolutionary, than the story outlined above. In the present version, perception and action are not two means to the same end -- minimizing free energy. The free energy that's minimized in perception is a completely different quantity than the free energy that's minimized in action. What's true is that both tasks involve mathematically similar optimization problems. But that isn't too surprising given the well-known mathematical and computational parallels between conditionalizing and maximizing expected utility.
Now you may rightly wonder what any of this has to do with the free-energy principle -- that "any self-organizing system that is at equilibrium with its environment must minimize its free energy". It is certainly not true that any self-organizing system at equilibrium (whatever that means) must employ variational Bayesian methods to process sensory input. Nor is it true that it must encode proximate goals by a probability function. There are other models of action and perception, including non-probabilistic models that don't involve any kind of free energy, not even in a derivative and unambitious sense.
Here is how Friston seems to think it all hangs together.
Consider the frequency with which biological systems of a given type are in a physical state of a given type (where a "state" includes extrinsic relations to the environment). We can think of these frequencies as a probability measure P over physically possible states. This distribution P -- call it the population distribution -- will usually be concentrated on a small region in the state space. This requires an explanation, since it isn't what normally happens to physical systems when left on their own. In fact, biological systems seem to make an active effort to remain within their typical region. For example, when our body temperature gets unusually low, we generally do things (shiver, put on more clothes, turn up the heating) that prevent the temperature from decreasing further. One might even say that it's a defining feature of biological systems that they actively make sure to be in states with high P-probability, and therefore low P-surprise. (If P is high, -log P is low.)
Now whatever mechanism ensures that a system maintains a high-P state must be sensitive to the system's actual present state. It needn't "know" the exact present state, but it might at least encode a probability measure Q over possibilities concerning the present state. The goal of the mechanism is then to bring Q close to P -- in other words, to minimize the free energy of Q relative to P.
Above we saw that action can perhaps be modelled as a process of minimizing the difference between our internal representation Q and a target distribution Q^A. As biological systems, we have to minimize the difference between Q and P. This suggests that our ultimate target distribution is none other than the population distribution P! The free energy that's minimized in action is thus essentially the same free energy that must be minimized by all biological systems.
Perception enters the picture in two ways. First, minimizing the difference between Q and P won't be conducive to the goal of maintaining high-P states unless Q is fairly accurate about the present state. Perception helps to render Q accurate. More importantly, recall that perception works by minimizing the free energy of Q relative to P', where P' is some prior probability distribution P conditionalized on sensory data. What is this prior distribution P? Maybe it's once again the population distribution P! This would mean that the free energy that's minimized in perception is, after all, closely related to the free energy that's minimized in action, and to the free energy that we have to minimize qua biological organisms.
In this picture, there aren't just abstract mathematical parallels between action, perception and homeostasis. There is a biological imperative to minimize a certain form of free energy, and both action and perception are means to this end.
That's the unified theory, as far as I can tell. (As I said, I find Friston hard to read.)
Is it a plausible theory? I don't think so. First of all, even if we accept that biological systems have to maintain states with high P-value (the free-energy principle), this provides no good reason for thinking that the mechanisms that achieve this can be usefully modelled as minimizing the free energy of an internal representation Q relative to the population distribution P. In particular, the free-energy principle lends no support to the variational hypothesis about perception, nor to any specific hypothesis about the generation of acts. In short, the free-energy principle is far too unspecific to serve as the basis of an interesting, unified theory of cognition.
Moreover, the idea that the very same type of free energy gets minimized in action and perception is extremely implausible, as we saw above. It is also implausible that even one of these quantities has any direct connection to the population distribution P. For one thing, we often find ourselves in states whose past population probability is zero, so our prior probability had better not coincide with that population probability. More importantly, the distribution that serves as the prior P in sensory updates is arguably not fixed, but modified by earlier experience. So it can't always coincide with the rigid population distribution.
Things get a lot worse when we look at the supposed connection between the population distribution P and an organism's ultimate goals. Let's set aside the question whether there's a sensible model of action in which ultimate goals can be represented in a probability function. The more immediate problem is that typicality and desirability, while correlated, are by no means the same. Many insects have thousands of offspring almost all of which die at a very early age. But they don't actively attempt to die. In the other direction, mating events are often extremely rare in the population distribution, and yet individuals actively seek them out. More generally, there is just no reason to believe that evolution would make organisms seek out states to the extent that they are common, rather than to the extent that they promote fitness.
Free energy minimization might play an interesting role in perception. A computationally similar process might play a role in the generation of action. But the kinds of free energy that are minimized in the two cases would be very different. Neither of them has any plausible connection to population distributions.
We're very pleased to announce the first PEA Soup discussion of a paper published in Politics, Philosophy & Economics. We will kick off with A.J. Julius's piece 'The Possibility of Exchange', which is available here. The discussion will open on Monday, December the 16th. Victor Tadros has kindly agreed to write a critical précis, and Richard Arneson, Stephanie Collins, Mark Reiff, Nicos Stravoupalos and Andrew Williams will take part in the discussion. We're looking forward to a great conversation!
The FDA recently banned 23andMe from marketing (if not selling) its DNA testing kits.
There are two issues here. First is an individual's right to know his or her private genetic information. Second, who has the right to make medical claims on those gene variants?
In the first case, I believe an individual's right to know their personal genetic signature (i.e. gene variants) should be inviolate and absolute. The FDA has no right to censor that. And second, no government agency, especially the FDA, has the ability to regulate all scientific claims (made by individuals, universities, corporations, doctors, etc) on each of our 20,000 human genes (and millions of variants thereof, most of which are not disease related).
In the next few years, we'll each carry around a memory stick with a complete copy of our personal genetic sequence. We'll be able to search websites anywhere in the world to find out what people think about the function of each gene variant. Why not allow anyone to make claims on genetic function, and allow individuals to choose? We can handle it.
Many scientists also find the FDA's decision questionable:
[According to] Misha Angrist, an assistant professor at the Institute for Genome Sciences and Policy at Duke, ... with DNA sequencing becoming cheaper and easier, the F.D.A. would ultimately fail in keeping people from having access to their own genetic information.
“Is the only pathway for me to get access to the contents of my cells via some guy in a white coat?” he said. “F.D.A. clearly thinks the answer is yes. I find that disappointing and shortsighted and naïve.”
Don't let this important human right be taken away. Sign the protest here and here. There's an important precedent at stake. The holy grail of personal genetic information is not 23andMe's current $99 gene chip technology, which measures only some of the genetic variability among us, but the full-genome sequence which currently costs over $1000, and measures 3 billion potential genetic variations among us.
With that information, we'll not only discover the underlying genetic mechanisms of disease, but also of personality traits and human potential. How will the FDA stop scientists from making claims on gene function, once full sequencing is available?
(The irony here is that the FDA recently approved a next-generation sequencing service. So why pick on 23andMe now? Just ask them to separate their testing kits from their claims, and the whole designation as a regulated medical device becomes moot.)
UPDATE!! SCIENTIFIC PROOF THAT A SUBSTANTIVE PROPERTY OF FINAL VALUE IS A PHILOSOPHER'S FANTASY (AND THAT EVEN PHILOSOPHERS DON'T TRULY BELIEVE IT).
(Sorry for the sensationalism, but suddenly one needs to compete for an audience around here! And I think I "buried the lead" in my original post.)...
I thought I’d take this opportunity to present one of the crazier ideas I’ve been working on. In the spirit of Philippa Foot’s “Morality as a System of Hypothetical Imperatives” (1972) and like-minded philosophers, I’ve argued (e.g. in my forthcoming book, Confusion of Tongues) that thin normative words like ‘good’ and ‘ought’ are essentially relativized to ends or goals (what I call an “end-relational” theory). So any logically complete sentence of the form ‘p is good’ is implicitly relativized to some relevant end: ‘p is good [for e]’, which I’ve interpreted as meaning roughly that p promotes/raises the probability of e, or: e is more likely given p than given not-p.
This view encounters an obvious objection from final value: judgments about what is good “for its own sake”. What should an end-relational theory say about ‘good for its own sake’?

Whereas this locution is often taken as expressing a kind of goodness that is nonrelational, a compositional treatment of the expression itself suggests something different: a kind of goodness that is relativized reflexively. ‘For its own sake’ in general seems to mean for itself. For example, ‘He did it for his own sake’ means that he did it for himself, not that he didn’t do it for anybody. So might we suppose that to be good for its own sake is to be good for itself; i.e. p is good for its own sake iff p is good for p, or: p raises the probability that p?
This sounds crazy, of course: it’s true of any proposition p whatsoever that p is more likely if p than otherwise, but surely not everything is good for its own sake. But I suggest it becomes plausible with one further tweak. We don’t call something (instrumentally) “good”, simpliciter/sans phrase, just because it is good for something or other, but only if it is good for something that is relevantly desired or valued. I’ll capitalize (e.g. ‘p is Good’) to indicate this kind of use. Plausibly, something is said to be “good for its own sake” in case it is judged to be Good, and there is a question about the basis for its Goodness. Of things that are judged Good, some are Good because they promote something else that is relevantly desired, but others are Good because the relevantly desired thing they promote is themselves. So to be Good, for its own sake, is to be the object (or perhaps, constitutive of the object) of relevant intrinsic desire. Note that this fits with what Plato and Aristotle say: intrinsically valuable Goods are those which are desired for their own sake and not for the sake of anything else.
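For readers who like it compact, the proposal can be sketched in probabilistic notation (my formalization for illustration, not the author's own):

```latex
% End-relational goodness: p is good for end e iff p raises the probability of e
\mathrm{Good}(p, e) \iff P(e \mid p) > P(e \mid \neg p)

% Reflexive case: 'good for its own sake' as goodness relativized to itself
\mathrm{GoodForItsOwnSake}(p) \iff \mathrm{Good}(p, p)
  \iff P(p \mid p) > P(p \mid \neg p)

% Since P(p | p) = 1 and P(p | not-p) = 0, the condition holds trivially for
% every p; hence the restriction to relevantly desired ends (the capitalized
% 'Good' below) is what does the real work.
```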
Most likely you’ll think this is still pretty crazy. But there’s some striking empirical evidence in its favor. An implication of this theory is that nothing can be Bad, Worse, Worst, Better, or Best for its own sake. (Because nothing can promote itself more or less than anything else promotes itself). Initially I thought this looked like a fatal problem, but then it struck me that those expressions did all seem bad to me. So I checked google usage data, and found the following:
            ‘It is ____ for its own sake’    ‘It is intrinsically ____’
‘good’      4,600,000                        992,000
‘bad’       1                                480,000
‘better’    1                                420,000
‘best’      0                                5
‘worse’     0                                27,000
‘worst’     0                                2
(Note: searching on the shorter string ‘bad for its own sake’ yields many more results, but I found that the majority of these seemed to be variants either of ‘desiring what is bad for its own sake’, where ‘for its own sake’ is qualifying the desiring rather than the badness, or of ‘good/bad for its own sake’, where ‘bad’ is plausibly only included for completeness/as an afterthought.)
This is a striking pattern of use that demands an explanation. My hypothesis predicts it, and I take this as strong evidence that the hypothesis is correct. The usage for ‘intrinsically ___’ shows that it can’t be explained by a general focus on goodness over other kinds of relations. (I take it that to be “intrinsically bad” is to be Bad in virtue of intrinsic properties.)
So I’m interested in what people think. Is the evidence compelling, or are there other good explanations available for this pattern?
It is an interesting fact about many of our most important choices, such as the choice of what kind of education to pursue, whether and whom to marry, and whether to have children – for short, life choices – that they transform us in ways we can’t fully anticipate, so that the person who lives with the consequences of the choice won’t be quite the same as the person who makes the choice. Recently, L.A. Paul has argued in a stimulating paper that the existence of such transformative experiences causes serious trouble for rational decision-making. I’ll grant here that her argument is more or less successful to the extent that the phenomenal quality of our experiences is central to the value of a choice or preference among options.

Alas, I think that precisely when it comes to the life choices that are associated with transformative experiences, phenomenal quality is relatively insignificant. After all, there is more to prudential value than positive experiences. Consequently, we make our life choices not on the grounds of what we expect it to be like to lead a certain kind of life, but rather on the grounds of how it will shape our relationships, roles, and identities, in short, the story of our life, and we have good reason to do so. As I prefer to put it, life choices are not experience-regarding, but primarily story-regarding, and should be such. Insofar as we can reliably compare the value of, or form preferences between, relationships and activities, and estimate their likelihood given our choices, we can after all make approximately rational self-interested life choices. Even if Mary can't know what it will be like for her to have a child, she can ask herself whether having a child would best promote connecting her to something larger than herself or best express her love and commitment to her partner, and use the answers as a basis for rational choice.
Here is an argument that is suggested by Paul’s paper:
I say this argument is suggested by Paul’s paper. She doesn’t quite make it, since she is very explicit that the conclusion is that “you cannot rationally choose to have a child based on what you think it will be like to have a child” (p. 20) – that is, she doesn’t say there is no way to make a rational choice. But it is clear that she thinks that making the decision on a phenomenal basis (what it is like to have a child) is a) the ordinary, common sense way of making it, and b) the rationally preferable way of making it, as long as you don’t take into account moral or, say, environmental considerations, or instrumental reasons like the need for an heir or having more hands at the farm, because the value of the phenomenal outcome is likely to “swamp” the value of non-phenomenal outcomes. (For detail of her actual argument, see the appendix below.) So the popular press is not entirely wrong in claiming she argues "there is no rational way to decide to have children—or not to have them".
Alas, this pessimistic conclusion rests on premise 2, which is false, especially when it comes to life choices. What motivates the premise is either the thought that what is objectively valuable for us are above all or exclusively positive experiences, or that what we subjectively care about are above all or exclusively positive experiences. Since neither of these claims is true, our at least partial ignorance of what outcomes that involve transformative experiences are like for us doesn’t threaten the possibility of making approximately rational self-interested life choices (which is not to say that it is easy to make them).
The standard way of arguing against the notion that prudential choices are or should be exclusively experience-regarding, as I will say, is by appeal to thought experiments like Nozick’s Experience Machine. As I have emphasized, the relevant comparison is between a qualitatively identical outcome in the Experience Machine and in reality. It is illustrative, I believe, that it is precisely when it comes to outcomes of life choices that the intuition against plugging in is strongest. Would I be indifferent between actually going to see Gravity or eating a nice meal, and having the perfect illusion of doing so? Yes. Would I be indifferent between actually having and interacting with my two children or doing my job, and having the perfect illusion thereof? Certainly not. What matters to me about being with my children and doing my job is neither exclusively nor centrally the quality of associated experiences.
So how do we, and should we, go about making life choices? On Paul’s picture, recall, the way people ordinarily decide whether to have a child is by asking themselves how they would feel if they had one, or more broadly what it would be like for them if they had a child. I am, first of all, highly skeptical of whether the empirical claim is true. It doesn’t even remotely match my own experience of making the choice to have children, nor does it reflect what conversations with friends suggest.
But the important thing is what actually makes a difference to the prudential value of the outcome of either having or not having children. Here are some of the non-experiential things you choose when you decide to have a child. You choose to be a father or mother, and potentially a grandfather or grandmother, a link in the chain of generations from the unknown past to the uncertain future. You choose to take on certain obligations and responsibilities towards the child, whatever he or she will be like. You choose to bring into the world and nurture a creature who is for a long time dependent on you. You choose to bind yourself to your partner in a new way: you will always have something distinctively in common, someone who shares some of the unique traits of both of you, both by biological transmission and (typically) by way of the example you give and the way you bring her up. This may be an expression of a deep kind of love. In short, you choose identities, roles, commitments, and relationships. (Including being a childless woman, for example.) These are outcomes that you can know in advance, regardless of how the actual experience transforms or would transform you.
And these outcomes have value for people independently of the associated experiences. Why? That’s not a simple question. I believe that it has to do with the fact that we are not just subjects of experience but also active agents shaping our lives, so that certain exercises of our agency are good for us independently of the experiences they involve. Subjectively, some people simply have preferences for having certain roles, relationships, or activities, regardless of or in addition to what they are like for them. They may feel their lives are wasted or rudderless without them, or take pride in doing them well. When it comes to objective value, there are many competing non-experiential theories. A perfectionist might say that the roles and relationships involved in having children are good for you to the extent that they enable the development and exercise of your essential human capacities. (In that case, you are better off not having children when doing so would hinder such development and exercise.) An objective list theorist would ask which kind of life is likely to involve higher levels of achievement, personal relationships, and self-respect (Fletcher 2013), or whatever is on the list.
My own view is that the non-experiential value of being a parent and a co-parent depends on whether it makes your life more (objectively) meaningful, which in turn is, roughly speaking, a matter of whether it continues your story in the direction of making a lasting positive difference in the world in a way that builds on your past efforts and makes use of your distinctive abilities. On this view, whether it is prudentially rational for you to have children depends centrally on what else you could be doing with respect to realizing some objective value (whether moral, alethic, or aesthetic), as well as what you have done up to now and are able to do. To be sure, it also depends in part on whether children would make you happy, but that is not a paramount value that would “swamp” others, but a relatively minor consideration, unless there is some good reason to think it would make you either extremely happy or unhappy.
So I prefer to say that the self-interested choice of whether to have a child, or go to college, for another example, should rationally be centrally story-regarding. Insofar as I’m abstracting from moral considerations, I should ask myself whether the story of my life is likely to be better if I make one choice rather than the other. This is by no means an easy calculation – nobody says that making even approximately rational life choices is simple. But if it is difficult, it is not because you can’t predict your experiences, but because it’s hard to predict exactly how your relationships and projects are transformed.
So: I reject the conclusion of the argument suggested by Paul’s paper, since what it is like to have a child is not a central prudential consideration when it comes to deciding to have a child. Contrary to Paul, I don’t believe common sense makes this error either, though the issue would need to be settled by proper empirical research. The result is a kind of a dilemma for Paul’s thesis about the implications of transformative experience. On the one hand, when it comes to ordinary choices for which the phenomenal quality of outcomes is paramount, we are in a fairly good position to know what the phenomenal quality will be (for example, I’ll get better food at Café Paradiso than at McDonald’s down the road). On the other hand, when it comes to life choices, the phenomenal quality of outcomes is indeed hard to tell, but it is not among the most important considerations either. We don’t have to set aside our preferences between lives to make a rational choice, since (and as long as) those preferences are above all for relationships, identities, and activities rather than experiences.
Is Paul committed to the pessimistic conclusion of the argument I claim is suggested by her paper? Perhaps not, but she comes close when she emphasizes the importance of phenomenal quality in making personal choices, and then denies it can be the rational basis when it comes to transformative experiences. For example, she says that “it seems appropriate to frame the decision [of whether to have children] in terms of making a personal choice, one that carefully weighs the value of one’s future experiences” (p. 2), and even more strongly that
not only is the phenomenal outcome what it’s like to have your own child a relevant outcome of your choice, it’s an outcome whose value might swamp the other outcomes. In other words, even if other outcomes are relevant, the value of the phenomenal outcome, when it occurs, might be so positive or so negative that none of the values of the other relevant outcomes matter. (p. 17)
Paul considers the possibility that other outcomes might outweigh the phenomenal ones, but dismisses it with the claim that “What is much more likely is that the value of what it is like to have the child will swamp the other outcomes.” (p. 17n28) So it is clear that she doesn’t just think that one potential basis for making a prudentially rational choice is missing when it comes to transformative experiences, but that the very possibility of such choice is severely threatened, because phenomenal quality is of paramount importance. To be sure, Paul does grant that someone can decide rationally, as long as she “ignore[s] what she personally thinks about whether she wants to have a child” (p. 20), but given what she says about the importance of what it’s like to have a child, I read this as gesturing towards making the choice on the basis of something other than one’s personal preferences or intrinsic interests, such as the kind of considerations that motivated some pre-modern parents, or moral reasons. At the very least, the claim about the phenomenal value of an outcome swamping non-phenomenal values is in severe tension with the claim that it is possible to make a rational choice on the basis of non-phenomenal values.
(Thanks to David Killoren for comments on a draft of this blog.)
I am pleased to introduce this month's featured philosopher: me. Please join me in welcoming me.
[Added Monday morning 18 November by Shoemaker: Because of some random spamming difficulties, all comments will now be moderated. Please be patient, as comments must now be read and approved prior to being published.]
Well, thanks, it's nice to be here at PEA Soup, and to be the second consecutive JLD featured in as many months.
Once I was a metaethicist. I wanted to understand what is at issue when people (including professional moral philosophers) seem to disagree in their fundamental judgments, and try to figure out what the truth is, or act as if there were a truth to find. Back in the day I defended a kind of contextualist relativism (not knowing its proper name at the time I called it “Speaker Relativism”). I did a bunch of work on expressivism, largely defending it from objections. Eventually I became convinced that expressivism had the advantages of relativism without a main cost. But then I changed my mind again and decided there isn’t much difference between the views once each side has done its best to minimize its own costs. So that’s one reason I am now a metametaethicist: I’m trying to figure out how to understand what is at issue when metaethicists seem to disagree about fundamental things.
Outside of metaethics, I’ve been interested in the foundations of decision theory, and also in the way decision theory and other methods of economics can be used in philosophical ethics. In the same vein, I have defended the (apparently -- I didn't expect it to be) provocative claim that any moderately plausible ethical view can be reconstructed as a practically equivalent consequentialist view. I think one reason some people don’t like this idea is that they think it’s supposed to show that consequentialism is in some way superior to its rivals. But that’s not what I meant. The reason it’s useful, if indeed it is, is that ‘consequentializing’ a theory (that is, reformulating it as a consequentialist theory with the same deontic verdicts) exposes the interesting differences among theories, abstracting away from superficial differences.
I’m happy to discuss any of this. (I have also been thinking about a new tool for consequentializers, suggested by Doug Portmore in his book, namely: centering value on worlds. But I’m still a little confused about this so I might just post something about it separately later.) To provide a little focus, let me say what I’ve written recently and what I’m interested in right now.
First, some things about the new quietist non-naturalism bother me. One is that this sort of view has nothing good to say about how practical normative judgment is related to choice and action. One needn’t be a stark raving internalist to think that judgments about what you have reason to do or what you ought to do have some special relation to your practical decisions. What is that relation? I think the new non-naturalists want to say that it is a relation of rationality: it is irrational for you to judge that you ought to do something and then have no tendency at all to do it. Right. But why is this irrational? Kantians have an excellent answer. (Their answer is false, but I mean it would be a great answer if true.) So do Humean constructivists. Some kinds of Aristotelians. Expressivists. Even reductive naturalists! But non-naturalists, whether quietist or loudist, don’t seem to have anything to say at all.
Glad I got that off my chest.
Second, I’ve become worried that a certain kind of problem that besets non-naturalist realists will also be a problem for quasi-realists. In “Metaethics and the problem of creeping minimalism” I endorsed the idea that what makes a quasi-realist quasi is the absence of ethical (or other normative) material in the explanation of what it is to make an ethical judgment or assert an ethical proposition. Suppose (as I’m still inclined to do) this is right. Now there’s a new problem. How is the quasi-realist going to explain the Cosmic Coincidence between our normative judgments and the normative truth? We are not infallible, by any means, but we are remarkably accurate. What explains our accuracy? Quasi-realists accept that we are accurate, so it seems they accept something that needs an explanation… but they don’t have one. (By the way, this is obviously related to a complaint of Sharon Street’s, but I only mean to be picking up one element of her challenge, and not the overtly epistemic part – epistemology is hard.) Why not? Does it have to do with the fact that normative beliefs are not explained by the elements of their contents? And roughly the same issue arises for the explanation of supervenience, I think. The idea that expressivists have a good explanation of supervenience was an illusion.
Okay, that's probably already longer than the ideal PEA Soup posting, so I'll stop, and welcome comments and discussion.
We are very pleased to begin our announced Ethics discussion of Erich Hatala Matthes' piece, “History, Value, and Irreplaceability," which can be found open access here. Carolyn Korsmeyer, professor of philosophy at the University at Buffalo (SUNY), will open the discussion with the critical précis below the fold. Here now is Korsmeyer. Thanks to everyone for participating, and here's to a great discussion!
Critical Précis of Erich Hatala Matthes' “History, Value, and Irreplaceability," by Carolyn Korsmeyer
Some philosophers target the property of being irreplaceable as the primary factor involved in the historical value we attribute to old artifacts. Erich Hatala Matthes argues that the chief importance of historical artifacts is not their irreplaceability—which he considers a “merely contingent” feature—but their ability to put us in touch with the past in a way that nothing else can. I am in complete agreement with the sentiment of this conclusion, and I also concur that there is a strong aesthetic element at work in the experience of old things. At the same time, I wonder if irreplaceability can be expunged quite so thoroughly from among the features of objects deemed historically valuable.
Irreplaceability seems to unite keepsakes, heirlooms, artworks, historical documents, national treasures, and relics. With keepsakes and heirlooms, however, the value of an object depends upon its relation to an individual or to a small group of individuals, and it is important to distinguish between (merely) personal value and historical value. We know when something has personal value because it is important to us; therefore, perhaps historical value is grounded on what is important to many. But this can’t be the correct account, because sometimes there is longstanding public neglect of something important, about which everyone ought to care; so historical value is poorly grounded if it only depends on a lot of people regarding objects in a certain way. Hence the claims made for the intrinsic value of an irreplaceable particular, as G.A. Cohen and others have argued. Matthes examines this position and reveals the vulnerabilities of irreplaceability as a defining characteristic of historical value.
Matthes addresses the relationship between irreplaceability and what he calls the “historical mode of evaluation.” He scrutinizes exactly what it means to claim that an object is irreplaceable, noting how frequently that feature is simply assumed rather than demonstrated. He examines in useful detail various versions of “meaningful” irreplaceability and the conditions under which an object “rationally resists” replacement. The strategic progress of his argument builds a case for the weakness of irreplaceability as a criterion for historical value. While one might think that replaceability pertains only to things with instrumental value—nails, milk bottles, engines—Matthes points out that objects are meaningfully replaceable under circumstances that do not always rule out things of historical value, since it may be the case that another thing of the same sort will serve as well. He demonstrates that the notion of irreplaceability is more complex than at first it seems, and he is persuasive that irreplaceability neither tracks historical value nor illuminates historical significance.
Equally compelling is the way that he disentangles the notion of intrinsic value from the value of particulars, a point that provocatively targets Cohen’s thesis about historical value. Matthes observes that even things with intrinsic value, such as beautiful flowers, may be replaceable by others of their kind. One of the most astute features of Matthes’ essay is the way he probes intuitions that previously seemed relatively impervious to challenge, and I find especially persuasive his arguments that undermine the intuitive link between irreplaceability, intrinsic value, and the value of particular objects.
The irreplaceability thesis is additionally problematic, Matthes claims, because in a sense every object has its own unique history and to that (usually trivial) extent is irreplaceable, but not every object is a candidate for historical significance. Properties deemed “historical” are normative features that only things of special and lasting importance possess. Anyone might cherish a keepsake for its singular history, regard it as irreplaceable as such, and at the same time grant that it has no historical value at all. The result is the “proliferation problem,” namely, that there is no way to limit irreplaceability to the zone of objects it is intended to illuminate. Matthes is certainly correct that if we make irreplaceability the key to historical value there is no end to the objects that will pile up.
He concludes that issues involving replaceability are general features of evaluation that do not usefully single out objects for their historical significance. As he sums up this part of his conclusion: “Whether or not you would have good reason to accept a replacement for a valued object is irrelevant to explaining the specific character of objects we value for their histories” (p. 26). He casts that specific character in different terms, positing that at the root of historical value is the ability of an object to connect us with the past in an especially immediate and intimate way. Here is how he evocatively describes the “emotional resonance” of old things that we can touch and hold:
The historical properties of a memento or heirloom allow you to hold the past in your hand. This phenomenon is all the more remarkable when it pushes beyond the boundaries of our own life and allows us to connect with persons and events from the distant past… it can also offer a sense of unity with the significant moments that have shaped both the earth and ourselves (p. 28).
I endorse this idea wholeheartedly. But we may still inquire: Does the ability to put us in intimate touch with the past usefully supplant the notion of irreplaceability as the key to historical value? Matthes seems to grant that the historical properties of objects cannot be “replicated,” but he argues that from that fact the “stronger thesis” of irreplaceability cannot be inferred (pp. 26-27). However, let’s pursue replication further. What sorts of historical properties do we have in mind? Are they (a) the distinctive discernible features of old things? (b) The features that an object originally possessed when it was made? (c) The relational property of having been owned or used by a particular person? (d) The property of having endured a long time? These variations foreground a difference between an object that has historical value and an object that is valued for its history. While it may seem that these two phrases express essentially the same idea, their difference advises reconsideration of irreplaceability and the nonfungible sentiments and affections with which we recognize unique or special objects.
Replacement is the removal of one thing and substitution of another. Sometimes the new thing might be quite different from the old, as with the replacement of an old church with a new apartment building. Replication, however, is an effort to make a second thing that is just like the first, and sometimes replacement with a replica is done in order to preserve an aspect of historical value, namely, features of objects that permit a glimpse of life long ago. Objects of great age are almost never handed down to us intact. They require repair and restoration, which sometimes includes replacement of damaged parts with (ideally) indiscernible replicas in order to restore an object to its (apparent) original condition (as with a and b above). Otherwise, we might not be able even to recognize an object as a thing of its kind. The result is that many of the artifacts that we preserve and value—including those that acquaint us with what life in the past was like—are not exactly as they were at the time of their making. (Indeed, after many repairs the ontological issue of whether the same thing is still extant will eventually arise.) Only one sort of feature is utterly nonreplicable, and that is being the very thing that has that history—having been made at a specific time, having endured sequences of wear and tear, having been touched by its maker and those who followed (c and d above).
As Matthes notes, replaceability varies with the relationship an object bears to those who cherish it. Consider, for example, mourning brooches, which for over a century were popular pieces of memorial jewelry that featured tiny woven mats of the hair of a loved one now departed. Because they are relatively common, usually not costly, and pretty in more or less the same sorts of ways, to a collector of such items today one mourning brooch may be replaceable by another without loss of historical value. But of course to the original owner, such as the widow of a fallen soldier, the only acceptable brooch would have been one that contained the hair of her husband. And the reason for this rather obvious claim is that he and only he was the nonfungible intentional object of her emotional attachment, and nonfungibility transfers to the token of his hair. In other words, valuing objects for their historical properties and valuing objects for the history they embody are not quite the same. The latter are the sort with the strongest claims for irreplaceability, and they are also the objects that most compellingly entice us to hold the past in the ways that Matthes so vividly describes.
(Incidentally, here is an uneasy sidebar for philosophers to ponder: Some psychologists analyze the sort of value summed up in the idea of “holding the past” as rooted in a type of “magical thinking” whereby things once touched indelibly retain the effects of that contact. With such objects, “their history, which may not be represented in their appearance, endows them with important properties.” This seems to be at work in the distinction between valuing an object for its historical significance, and valuing an object because of the history that it embodies. While I don’t necessarily endorse the diagnosis of magical thinking, we might well query the notion of “rational” replacement that figures in discussions of irreplaceability.)
In elaborating the distinction between things with historical value and those valued for their histories, I have unfortunately reopened the proliferation problem, because the attachments formed with the latter do not screen the significant from the merely personal. What to preserve and what to discard are not only theoretical but also practical problems confronted by anyone cleaning out an attic. The conditions that bestow historical significance on things are circumstantial and sometimes serendipitous, and as such are likely to resist formulation as a general principle. Therefore, I wonder if the proliferation problem has a philosophical solution.
Paul Rozin and Carol Nemeroff, “Sympathetic Magical Thinking: The Contagion and Similarity ‘Heuristics’,” in Heuristics and Biases: The Psychology of Intuitive Judgment, Thomas Gilovich, Dale Griffin, and Daniel Kahneman, eds. (Cambridge, UK: Cambridge University Press, 2002): 202. See also Paul Bloom, How Pleasure Works (New York: W.W. Norton, 2010) and Matthew Hutson, The 7 Laws of Magical Thinking (London: Penguin, 2012).
Suppose I say (*), with respect to a particular gambling occasion.
(*) A gambler lost some of her savings. Another lost all of hers.
There is an implicature here that the first gambler, unlike the second, didn't lose all her savings. How does this implicature arise?
On the standard account of scalar implicatures, we should consider certain alternatives to the uttered sentences. In particular, I could have said 'A gambler lost all of her savings' instead of 'A gambler lost some of her savings'. If true, this alternative would have been more informative. Since I chose the weaker sentence, you can infer that I wasn't in a position to assert the stronger sentence. Assuming I am well-informed, you can further infer that the stronger sentence is false.
But in the context of (*) this explanation makes no sense. For the second sentence in (*) entails that the stronger alternative to the first sentence ('a gambler lost all of her savings') is true. So you can hardly conclude that I wasn't in a position to utter that alternative.
One might suggest that we should consider not just alternatives to the individual sentences in (*), but to (*) as a whole. If I had known that both gamblers lost all of their savings, I would have chosen 'two gamblers lost all of their savings' instead of the weaker (and more complex?) (*). Since I didn't, you can infer that only one of the gamblers lost all their savings.
That might work for (*). But I don't think it will do as a general solution. Couldn't I utter just the first sentence of (*), 'a gambler lost some of her savings', without thereby suggesting that there is no gambler who lost all of her savings?
Perhaps a more promising idea is that when we compute the implicature in (*), we hold fixed the gambler at issue. You might reason as follows:
The speaker said of some gambler that she lost some of her savings; it would have been more informative to say that she lost all of her savings; so the speaker probably doesn't think that this is true; so the gambler at issue probably didn't lose all of her savings.
This might even fit the standard account of scalar implicatures, provided we treat the indefinite 'a gambler' in (*) not as a quantifier but as a referring expression, as suggested e.g. in Kamp's DRT or Heim's File Change Semantics. On these accounts, the logical form of 'a gambler lost some of her savings' is something like 'x is a gambler and x lost some of her savings', where 'x' is a free variable that gets existentially closed only at the level of the discourse. When computing scalar implicatures, the alternatives should plausibly involve the same variable. In particular, since I didn't use the alternative 'x is a gambler and x lost all of her savings', you can infer 'x is a gambler and x lost some but not all of her savings'.
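The contrast between negating the existentially closed alternative and negating the variable-sharing alternative can be made vivid with a toy calculation. Again, this is my own sketch, not a piece of any formal semantics in the literature; the world is just a dict from gamblers to the fraction of savings lost.

```python
# Contrast two ways of computing the implicature for (*).
world = {'g1': 0.5, 'g2': 1.0}   # first gambler lost some; second lost all

# Sentence-level alternative: 'a gambler lost all of her savings',
# existentially closed. In the context of (*) it is true (g2 makes it so),
# so negating it would be the wrong implicature:
exists_all = any(v == 1.0 for v in world.values())
assert exists_all    # the quantified alternative cannot be negated here

# Variable-sharing alternative: hold the gambler at issue fixed (x = 'g1')
# and negate 'x lost all of her savings' for that same x:
x = 'g1'
implicature = world[x] > 0 and world[x] < 1.0
assert implicature   # 'g1 lost some but not all' -- the right result
```

Per-variable negation targets the gambler at issue, which is exactly what the standard quantified alternative cannot do in the context of (*).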
Unfortunately it isn't clear to me that this still works in more recent versions of dynamic semantics, where the logical form of 'a gambler' is taken to include a quantificational element.
Is there any literature on this?
On December 6-7 at NYU, we are hosting a conference on "The Brain Mapping Initiatives: Foundational Issues". The conference is devoted to foundational issues raised by recent brain mapping initiatives, such as the BRAIN initiative, the Human Brain project, the Human Connectome Project, and the Allen Brain Atlas. What can mapping the brain tell us about the human mind? What are the ethical implications? These issues will be discussed by leading cognitive neuroscientists and philosophers, including Cori Bargmann, Patricia Churchland, Nita Farahany, Sean Hill, Gary Marcus, Anthony Movshon, Anders Sandberg, Walter Sinnott-Armstrong, Rafael Yuste, and Anthony Zador. Anyone who is interested is welcome. You can register (free but required) via the conference page.
The conference is co-sponsored by the NYU Center for Mind, Brain, and Consciousness and the Center for Bioethics. The Center for Mind, Brain, and Consciousness is a new center co-directed by Ned Block and me, devoted to foundational issues in the mind-brain sciences. It will be organizing many interdisciplinary events in the coming years, including annual conferences on foundational topics and regular debates at NYU. We will also be appointing post-doctoral fellows. In the short term, one of the Bersoff Fellowships currently being advertised at NYU will be assigned to the Center. Applications from people with interests in these areas are welcome. (The deadline is November 1; sorry about the short notice!)
I haven’t updated this for a while, have I? So it’s time for some updates.
Last weekend I was at a workshop on social epistemology at Arche. Miriam Schoenfield presented this great paper. I gave a paper that was somewhat derivative of Jennifer Lackey’s work on generative testimony. (Well, perhaps more than somewhat – I’ll post it if I decide I really had anything interesting or original to say.) I had to miss some papers so I could come back to America to work. But I did hear two interesting papers by Alvin Goldman and Jennifer Lackey on group belief. And I was wondering whether anyone had defended the following idea for how to define the beliefs of a group in terms of the beliefs of the group members.
First, use some kind of credal aggregation function to get a group credence function out of the individual group members’ credences. This could be arithmetic averaging, or (better) one of the more complicated functions that Ben Levinstein discusses in his thesis. Second, draw on one’s favourite theory of credal reductionism to define group beliefs in terms of group credences. My favourite such theory is interest-relative, and it’s possible that some propositions could be interesting to the group without being interesting to any member of the group, so this view wouldn’t be totally reductive.
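A crude way to see the shape of the two-step proposal is this sketch. It is entirely my own simplification: arithmetic averaging stands in for the aggregation step, and a bare threshold stands in for a genuinely interest-relative reduction of belief to credence (which it is not).

```python
# Step 1: aggregate member credences in a proposition by arithmetic
# averaging (a placeholder for more sophisticated aggregation rules).
def group_credence(member_credences):
    """Arithmetic average of the members' credences in a proposition."""
    return sum(member_credences) / len(member_credences)

# Step 2: reduce group belief to group credence. A fixed threshold is a
# crude stand-in for an interest-relative theory, on which the threshold
# would vary with what is at stake for the group.
def group_believes(member_credences, threshold=0.9):
    """Threshold-based reduction of group belief to group credence."""
    return group_credence(member_credences) >= threshold

# Three members with credences 0.95, 0.9, 0.97 in some proposition:
assert group_believes([0.95, 0.9, 0.97])     # average 0.94 >= 0.9
assert not group_believes([0.95, 0.9, 0.4])  # average 0.75 < 0.9
```

On the interest-relative version, the threshold parameter would itself be fixed by the group's interests rather than set in advance, which is where the non-reductive element enters.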
This approach seems fairly simple-minded, but it does seem to avoid some of the problems that arise for other views in the literature. Hopefully I’ll get some time to read Christian List and Philip Pettit’s book on Group Agency, and see how the credence-first approach compares to theirs.
This will be the ninth biennial Young Epistemologist Prize (YEP) to be awarded. To be eligible, a person must have a Ph.D. obtained by the time the paper is submitted, but not earlier than ten (10) years prior to the date of the conference. Thus, for the Rutgers Epistemology Conference of 8-10 May 2015, the Ph.D. must have been awarded between May 8, 2005, and November 10, 2014.
The author of the prize winning essay will present it at the Rutgers Epistemology Conference and it will be published in Philosophy and Phenomenological Research. The winner of the prize will receive an award of $1,000 plus all travel and lodging expenses connected with attending the conference.
The essay may be in any area of epistemology. It must be limited to 6,000 words (including footnotes but not the bibliography). Please send two copies of the paper as email attachments in .pdf format to:
One copy must mask the author’s identity so that it can be evaluated blindly. The second copy must be in a form suitable for publication. The email should have the subject: “YEP Submission.” The email must be sent by 8 pm (EST) on November 10, 2014. The winner of the prize will be announced by February 16, 2015.
By submitting the essay, the author agrees not to submit it to another publication venue prior to February 16, 2015, and agrees i) to present the paper at the Rutgers Epistemology Conference, ii) to have it posted on the conference webpage, and iii) to have it published in Philosophy and Phenomenological Research.
All questions about the Young Epistemologist Prize should be sent to YEP@philosophy.rutgers.edu.
Sadly, it seems relevant to post another reminder about recent changes to the UK visa rules. Since 2012, it has been impossible to (successfully) apply for a UK work visa if you have worked in the UK at any time in the previous 12 months. This affects a lot of people who have rolling part-time positions in the UK. But what I hadn’t realised is that it is also hurting people moving between full-time jobs in the UK. And that’s a much more serious concern.
So in case you need (or will need) a UK work visa, and your would-be employer hasn’t kept up with all the visa changes that the Lib Dem/Tory government has brought in, it is very important to be aware of this rule.
I very much hope that the rule will be scrapped after the 2015 election; it seems to be causing harm without any obvious benefit. But I don’t think it would be a good idea to plan around that. For one thing, Labour might not win, or at least not win in their own right. (And I think we should act as if the Liberal Democrats support the policies that the government they partially constitute has introduced.) For another, new governments are often sadly tardy in fixing the mistakes of past governments, so even a Labour win doesn’t mean things start getting better that week. So even if your current UK visa expires after 2015, I’d start thinking about what you plan to do next, on the assumption that you can’t apply for another visa without a 12-month gap in employment.
This Year’s Theme:
Animal and Environmental Philosophy
The Philosophy Department of Bowling Green State University welcomes high-quality paper submissions from graduate students that engage issues in animal and environmental philosophy, broadly construed.
Keynote Speaker: Paul B. Thompson
W.K. Kellogg Chair in Agricultural, Food and Community Ethics
Michigan State University
In a paper I’m preparing, I argue that concrete moral details may influence judgments of moral responsibility in determinist circumstances through processes that are neither errant nor affective (Sorry, Shaun and Josh!). I tested a competing hypothesis—that ordinary judgments of moral responsibility vary in large part due to cognitive, effable differences in usage and conception—by providing 116 undergraduates with an abstract compatibilist case and asking them to evaluate moral responsibility, and then also asking them the extent to which the criminal’s level of moral responsibility would depend on the nature of the crime.
I was frankly surprised by just how strongly the evidence favored this dependence recognition principle (DRC). Over half of the variance in attributions of moral responsibility (r² = .582) could be explained by individual differences in attitude toward the DRC. Those who thought moral responsibility to be “extremely dependent” or “absolutely dependent” on the nature of a crime attributed just as much moral responsibility in the abstract case (M = 6.07) as those in a contrast group of 114 participants given a concrete case (M = 6.54), adapted from this gorgeous and fascinating story in the New Yorker (which is way more interesting than this here blog post, trust me). Meanwhile, respondents who thought moral responsibility was “not at all” or “a little” dependent on the nature of a crime gave the incompatibilist responses to the abstract case that we have come to think of as typical (M = 2.30).
I think any interpretation of these results should acknowledge concerns about ceiling effects and sample size, and I would not be surprised if there were a small but significant abstract/concrete effect in a larger sample. But there was enough power to reveal that whatever variance in abstract/concrete effects the DRC is incapable of explaining, it is far less than the variance that the DRC can explain—at least in this one case. I am very interested in whatever critical feedback analytic philosophers, experimental philosophers, and psychologists might be able to provide. I see several shortcomings in this initial exploration, but I’m sure there are many more I have missed—and maybe even a few positives I missed, too! So I’d love your comments, no matter how complex or how blunt.
The provisional programme for the 4th Workshop of Experimental Philosophy Group UK is now available on our website: http://sites.google.com/site/experimentalphilosophygroupuk/fourth-workshop
There are also a few Analysis Trust student bursaries available - these offer up to 50% of the cost of registration and accommodation. Email firstname.lastname@example.org, giving your name, affiliation and the cost of your accommodation, and register here: http://shop.bris.ac.uk/browse/product.asp?compid=1&modid=1&catid=661 to apply for a bursary.
Registration will close on 5 September at the latest.
This event is a BIRTHA Conference, and we are grateful to the University of Bristol, the Mind Association and the Analysis Trust for their support.
Here are two new articles, both shortish reply pieces. Neither has been finalized yet, but final versions need to be submitted soon, so any comments in the next few days are welcome.