A place in a sentence is extensional if words with the same extension can always be substituted into it without changing the truth-value of the whole sentence. (That definition is a little too crude in about three ways, but bear with me.) A place in a sentence is intensional, in one sense of “intensional”, when words that necessarily share the same extension can always be substituted into it without changing the truth-value of the whole sentence.
It has become increasingly clear since the 1970s that we need to carve meanings more finely than by “intensions” in the sense associated with the specification above. Call a place in a sentence hyperintensional when even expressions that necessarily share an extension cannot always be substituted into it without changing the truth-value of the whole sentence. Call the sorts of intensions employed, for example, by Richard Montague possible worlds intensions. Handling belief clauses by insisting that anyone who believes something believes everything necessarily equivalent to it has always caused problems. Once we accept that names are rigid designators, allowing their substitution in all sorts of representational and psychological contexts causes trouble: the Sheriff of Nottingham can be hunting for Robin Hood without hunting for Robin of Locksley, or so it seems.
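The three substitution conditions can be put schematically. The notation here is mine, not the post's, and it glosses over the crudeness flagged above: \(\phi(\alpha)\) is a sentence with expression \(\alpha\) in the relevant place, and \(\mathrm{ext}(\alpha)\) is \(\alpha\)'s extension.

```latex
% Schematic substitution conditions (illustrative notation):
\begin{align*}
\text{extensional place:} \quad
  & \mathrm{ext}(\alpha)=\mathrm{ext}(\beta)
    \ \text{ guarantees }\ \phi(\alpha)\leftrightarrow\phi(\beta) \\
\text{intensional place:} \quad
  & \Box\bigl(\mathrm{ext}(\alpha)=\mathrm{ext}(\beta)\bigr)
    \ \text{ guarantees }\ \phi(\alpha)\leftrightarrow\phi(\beta) \\
\text{hyperintensional place:} \quad
  & \text{even }\Box\bigl(\mathrm{ext}(\alpha)=\mathrm{ext}(\beta)\bigr)
    \ \text{ does not guarantee }\ \phi(\alpha)\leftrightarrow\phi(\beta)
\end{align*}
```

The Sheriff case fits the third line: “Robin Hood” and “Robin of Locksley” necessarily co-refer, yet substituting one for the other inside “is hunting for...” can change truth-value, or so it seems.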
There seem to be places outside our psychological talk that require hyperintensionality. Talk of entailment in the sense of logical consequence, for example: it does not logically follow from apples being red that all bachelors are unmarried, let alone that water is H2O, even though it does follow that either apples are red or apples are not red. Use of counter-possible conditionals is another example: two conditionals can have necessarily false antecedents but differ in truth-value. Talk about moral obligation and permission seems to be hyperintensional, as anyone struggling with substituting logical equivalents in the scope of deontic operators may have seen. I’m just back from a conference in Colorado where people were insisting that “in virtue of”, “because”, and other explanatory expressions were hyperintensional. (Benjamin Schnieder, Gideon Rosen and Kit Fine were three in particular.) Once you look around you see quite a bit of hyperintensionality.
There’s a piece of rhetoric I associate with Richard Sylvan about this. He was fond of suggesting that there would be a move from using possible-worlds intensions to using hyperintensional resources that would parallel the move made from extensionalism to possible-worlds intensionalism. In the nineteen-sixties, the big goal was to be able to do philosophy of language while treating language extensionally: think of Davidson’s project in particular, though Quine was also a big booster of the extensionalist program. I guess it was typical of that project to assign extensions to categories of expressions, and then have some syncategorematic expressions that operated on extensions to yield other extensions. (E.g. “all” did not get an extension, but (All x)(Fx) operated on the extension of “F” to yield a sentence-extension, i.e. a truth-value.)
There are still people trying to carry out that extensionalist project, but it has come under increasingly severe attack since the early 1970s. (And maybe earlier: I think Carnap might be an important precursor here, along with Prior, and perhaps many others.) The extensional programme was not very satisfying in its treatment of propositional attitude reports, entailment, normative discourse such as the use of “ought”, and a number of other areas. But the star witness against the extensional programme was modal vocabulary. Treating “necessarily” extensionally does not get you very far, and after Saul Kripke popularised possible-worlds semantics for “necessarily”, the floodgates started to open. Richard Montague and David Lewis were among the vanguard of those arguing for a systematic, intensional treatment of natural language, arguing that it handled all sorts of constructions that extensional treatments faced serious difficulty with.
The intensions that Montague and Lewis relied upon were set-theoretic constructions out of possible worlds and possible individuals. (Not just sets of possibilia or functions from possibilia to possibilia, but also sets of those sets, functions from those functions to other functions, etc. etc.) The Montague project of trying to handle all of language with these possible-worlds intensions is alive and well today: I take Robert Stalnaker to be one of its prominent contemporary philosophical defenders, though I haven’t scrutinised his recent work to see if any weakening has happened.
But I think that project is doomed. There is too much work that needs to be done that requires hyperintensional distinctions, and those trying to hold the line that everything can be done with possible-worlds intensions will look as outdated in thirty years as the extensionalists look to the intensionalists today.
Of course, even if we decided we wanted to do more justice to hyperintensional phenomena than standard possible-worlds semantics does, we have several options about how to go on from here. The response that is perhaps closest to the standard possible-worlds tradition is to let the semantic value of a piece of language be a pair of a possible-worlds intension plus some kind of constituent tree, which serves as a logical form or otherwise conveys information about the internal linguistic structure of the expression. Alternatively, we could let the semantic value of a complex expression be a tree whose nodes are possible-worlds intensions: Lewis discusses this way of going, for example, in On the Plurality of Worlds, pp. 49-50.
Another response that is close to the possible-worlds tradition is to use impossible worlds as well as possible ones. Since things that do not vary across possible worlds can vary across impossible worlds, impossible worlds give us finer-grained distinctions. If we allow logically impossible worlds, we can even get the effect of places in sentences where substitution of logical equivalents fails, since for example the worlds where (p or not-p) obtains need not be the ones where (q or not-q) obtains. I take it that semantics using situations instead of worlds is often a close cousin of this.
More radical responses to hyperintensionality include moving to an algebraic semantics, such as the sort advocated by George Bealer. Even these can be seen as successors to the possible-worlds tradition, since the structures of the algebras are often inspired by the structural relationships possible-worlds intensions stand in to each other. No doubt philosophers will come up with other approaches too - some revert to talking about Fregean senses and functions on them, though whether this is much more than a cosmetic difference from algebraic approaches I’m not sure.
Why does this matter for metaphysics? Well, one immediate reason it matters is that the metaphysics of language had better be able to cope with hyperintensionality and hyperintensions. Disputes in the philosophy of language often spill over into the metaphysics of meaning, of truth (or at least of truth-conditions), of propositions, and so on.
A connected reason is that respect for hyperintensionality might go along with more warmth towards hyperintensional entities. We may be less likely to smile on the demand that properties that necessarily have the same instances are identical, for example. This in turn may motivate rejecting the picture of properties as sets of their actual and possible instances. Indeed, set theory might be of less use in metaphysics in general once we want to individuate things hyperintensionally.
There are other ways the hyperintensional turn could affect metaphysics. It might make us more sympathetic to impossible worlds, for example: I’ve argued elsewhere that counter-possible conditionals give us a good reason to postulate impossible worlds. It might make us think that some relational predicates are not associated with relations, or maybe are associated with finer-grained relata than they appear to be associated with: see Carrie Jenkins’s post about grounding. Modal analyses of hyperintensional pieces of language seem unappealing, since modal analyses are normally only intensional not hyperintensional. I could go on.
So, metaphysicians, join the hyperintensional revolution! You have nothing to lose but your coarse grains!
Thursday, March 26, 2009
Monday, March 23, 2009
Presentism, causation and truthmakers for the past
I’m working on both causation and the truthmaker objection to presentism, and it seems to me that it might be possible to kill two birds with one stone. What follows is the basic idea, and I’d love to hear your thoughts.
Suppose that presentism is true. What is the nature of causation? It’s the relation between what and what? Or, more relevantly, between when and when? Since, according to presentism, the past does not exist, either causation is a relation between nothing and something in the present, or causation is simultaneous, or causation is not a relation at all. The first option seems dubious. A two-place relation (I’m ignoring contrastivism, for the moment) has two relata, after all, not one.
What, then, about the second option? C. B. Martin defends this view in The Mind in Nature—or, at any rate, that’s my understanding of what Martin defends. But it’s not clear how to make sense of causal processes on this view. (Persistence intuitively has causal constraints; how are we to make sense of these constraints if all causation is simultaneous?)
The third option seems to me the route to go. Here’s an initial proposal: Causation is a fact about presently existing (Armstrongian) states of affairs, or tropes if you have them. It is a fact about e, say, that c brought it about. Suppose, however, that existentialism is true, so that if x does not exist, there are no singular propositions about x. If c is a state of affairs and the particular that is a non-mereological constituent of c no longer exists, then the fact that c caused e is the fact about e that something c-like brought it about. If c is a trope no longer instantiated and the instantiation condition is true, so that uninstantiated properties do not exist, then too causation is the fact that something c-like brought about e.
How are we to understand “something c-like”? Here’s one proposal: Properties either are, or of necessity confer, causal powers, so we can understand “something c-like” as “something with the following causal powers profile...” (Of course the Neo-Humeans can’t really accept this view, but how many Neo-Humeans are presentists?)
What should we say about the fact in question, that e was brought about by something c-like? It might be a property of the world, as in Bigelow’s “Presentism and Properties.” It might be a property of e. Or it might not be a property, but a fact grounded in something else. Or a primitive fact about e.
Whatever answer one gives here seems also to be an answer to the objection to presentism from truthmakers about the past. Hence the presentist, so long as they can offer a theory about the nature of the fact that e was brought about by something c-like, can kill two birds with one stone, a theory of causation and a response to the truthmaker objection.
Here’s an initial proposal. Take property instances to be tropes. Then, with certain other assumptions about tropes, events can be understood as tropes. So trope c caused trope e. That turns out to be a fact about e: that it was brought about by c. Since I’m inclined to accept both existentialism and the instantiation condition, this will turn out to be the fact, about e, that it was brought about by something c-like. The fact is a basic truth, and e alone is its truthmaker. This is analogous to e’s also being, in virtue of either being or of necessity conferring causal powers, (part of) the truthmaker for counterfactuals describing what objects with e would do in various circumstances. It is a truthmaker for future truths and for the past truth about c.
One further claim, and we have a theory of truthmakers for the past. These basic causal facts about tropes are cumulative. So the fact that e was brought about by c is the fact that e was brought about by something c-like, which was brought about by something..., which was brought about by something..., and so on. As long as there is a causal chain from some present state of affairs to every past state of affairs, there is a present truthmaker for every past state of affairs.
Tropes carry with them their entire causal history and their entire power profile, and so are truthmakers for past and future truths. Present property instances do a lot of work on this view, but that’s about what we should have expected given presentism.
Friday, March 20, 2009
Like Matteo, I've been thinking about haecceitism. I claim it is best pronounced the way Kaplan liked it: Hex'-ee-i-tis-m. I'm not sure why I care about this, but I do.
There is a lot of older stuff in the metaphysics literature (Black, Adams, etc.) and a lot of more recent work in the philosophy of physics literature (Saunders, Ladyman, French and Krause, etc.) and the two discussions don't have a lot of points of contact. (Disclaimer: I'm going to read Katherine Hawley's paper on the PII in the next week or so; perhaps this will join the two debates together a bit more for me.)
Here are a few things I find confusing. (1) The physics people often seem to run together (sometimes on purpose) epistemological issues about indiscernibility with metaphysical ones. The fact that two particles are indistinguishable for us seems to entail, to some, that they are indistinguishable simpliciter. I'm not clear on what, if any, metaphysical lessons can be learned when we take this sort of strong empiricist stance. (2) Haecceitism is often not well enough defined. Sometimes it means that objects have primitive thisnesses (following Adams) but sometimes it just means objects are (or could be) primitively different. This is an important distinction. The more minimal kind of haecceitism, which I'd argue doesn't even deserve the name, just says we can have objects that are perfect duplicates but nevertheless differ. They don't differ because they have special thisnesses, because they don't have thisnesses. They just differ even without differing in their (non-identity-based) properties. (3) Saunders and others want to sidestep the PII by denying that bosons are objects. They are some other sort of entity. But how is this supposed to help with anything metaphysically interesting? I always took the PII to apply to things of any sort.
Finally, a note: physicists often make claims about particles being identical when they really mean they are of the same kind. Argh!
Tuesday, March 17, 2009
From Borderline Tables to Count Indeterminacy
Imagine assembling a two-piece table: the top (T) is being affixed to the base (B). There are points in the assembly process at which T and B are just beginning to be fastened together and at which, intuitively, it is vague whether they compose anything. Ted Sider’s (riff on Lewis’s) argument from vagueness purports to show that (despite appearances) there can’t be borderline cases of composition. Here’s the argument: If it could be indeterminate whether some things compose something, then it could be indeterminate how many things there are (e.g., whether there are just two things – T and B – or three things – T, B, and a table). But there can’t be count indeterminacy. So there can’t be borderline composition. And (moreover) if there can’t be borderline composition, then composition must be unrestricted.
There are ways of blocking the argument, but they’re pretty nasty (e.g., nihilism, sharp cut-offs, ontic vagueness, relativism). So many prefer just to accept the conclusion, that composition is unrestricted (“universalism”). Some don’t even think they’re biting a bullet here because (for one reason or another) they think that universalism is innocuous.
I want to convince you that universalists aren’t out of the woods yet. Let’s grant that T and B definitely compose *something*: a mereological fusion (or MF for short). But surely T and B are, at the very least, a borderline case of composing a *table*. Even universalists should admit that. But here’s the rub. No table is identical to any MF. Tables can survive the annihilation of certain of their parts; MFs can’t. So if T and B don’t compose a table, there are three things: T, B, and the MF. If they do compose a table, there are four things: T, B, the MF, and the table. Since they’re a borderline case of composing a table, they’re a borderline case of composing something other than an MF. In which case it’s indeterminate whether there are three or four things. In which case there’s count indeterminacy. In which case the argument from vagueness fails.
Friends of the argument from vagueness need to find some way to block this argument from borderline tables to count indeterminacy. And they need to find a way of doing this that doesn’t undercut the argument from vagueness. As far as I can tell, friends of the argument from vagueness have two options. Both involve resisting the move from T and B’s being a borderline case of composing a table to their being a borderline case of composing something other than an MF. And both involve finding something that definitely exists and is definitely composed of T and B and that itself is a borderline case of being a table.
Option #1: They definitely don't compose anything other than an MF. Here the idea is that there is definitely only one thing composed of T and B – namely, the MF – and the MF itself is a borderline case of being a table. To get this response to work, you’re going to need some way of defusing the sort of Leibniz’s Law argument I gave above for the distinctness of MFs and tables. Here are some of the tasty options: you can deny that tables can survive the loss of parts, you can say (a la Burke) that the original MF ceases to exist when its parts come to be arranged tablewise, or (like Lewis and Sider) you can go for a counterpart-theoretic account.
Option #2: They definitely do compose something other than an MF. Here the idea is that there is a further thing composed of T and B which (unlike the MF) has a “tablish” modal profile, but it’s nevertheless indeterminate whether this further thing is a table, e.g., because it’s indeterminate whether its parts are sufficiently stuck together to count as a table. There are unprincipled ways of taking this line, e.g., by saying that exactly one modally table-like entity conveniently springs into existence as soon as the grey area begins, but let’s set those aside. The only principled way of taking this line (as far as I can tell) is to accept bazillionthingism (a.k.a., plenitude, explosivism, absolutism), on which there are a bazillion things, with different modal profiles, occupying the region that’s filled by T and B. In that case, there isn’t count indeterminacy. There are exactly a bazillion and two things: T, B, and the bazillion things composed of them.
So friends of the argument from vagueness are going to get saddled with some sort of non-innocuous commitment: either bazillionthingism or else one of the revisionary packages needed to block the Leibniz’s Law arguments. Some (e.g., Lewis and Sider) have already chosen their poison. But nobody escapes unscathed.
Thursday, March 5, 2009
Haecceities and Haecceitism
Haecceities are primitive identities in a world. Haecceitism has to do with primitive trans-world identities (allowing for de re differences between worlds without qualitative differences). My question regards the connection between the two. Prima facie, possession of a haecceity implies haecceitistic differences between worlds. However, it has been pointed out that identity might be primitive with respect to one set of conditions but not others (Legenhausen (1989)), and it has been shown that the two things can be kept distinct (Lewis (1986)); in fact haecceitism can be true even if there are no haecceities. Adams (1979), one of the main recent proponents of primitive thisness, thinks haecceitism is also true, but feels compelled to provide an argument for it, additional to the existence of haecceities. Is counterpart theory the only way to believe in haecceities but not in haecceitism? Is it relevant whether haecceities are considered to be genuine properties (Duns Scotus), or just ‘aspects’ of things only separable via conceptual distinction (Ockham and other Scholastics, Adams himself)?
Generalising, it seems four possibilities are allowed; and if one introduces the distinction between moderate and extreme forms of haecceitism and/or anti-haecceitism (that is, as I understand it, primitive identity with or without essentialist constraints on the one hand, and non-primitive identity without or with the Identity of the Indiscernibles on the other) probably even more (8? I am not sure about extreme anti-haecceitism with primitive identities, maybe it only requires the Identity of the Indiscernibles to be a contingent truth). But which combinations are really possible/plausible? What conditions do they require exactly? What are people's intuitions/preferences?
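The four base combinations can be set out in a table; the example occupants are my glosses on the references already mentioned, not a survey of the literature:

```latex
% Haecceities (primitive intra-world thisness) versus haecceitism
% (primitive trans-world identity): the four base combinations.
\begin{tabular}{l|cc}
                      & \text{haecceitism}                 & \text{anti-haecceitism} \\
\hline
\text{haecceities}    & \text{Adams (1979)}                & \text{via counterpart theory?} \\
\text{no haecceities} & \text{the option Lewis (1986) notes} & \text{e.g.\ with the PII} \\
\end{tabular}
```

The moderate/extreme distinctions then subdivide one or both columns further, which is where the count of eight (or more) positions comes from.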
Wednesday, March 4, 2009
Barker on Chance and Cause II
In my last post, I discussed Barker's CC1. I said I'd leave discussion of CC2 for later, and here it is. CC2, recall, is this principle:

CC2: If at a time t, there is a non-zero chance of e and e obtains, then at least some of the conditions at t that determine the chance of e at t caused e.

Of this principle, Barker says 'Unlike CC1, CC2 is bound to be controversial'; given our discussion of CC1, I guess this makes CC2 really controversial!
And indeed we found it objectionable. The easiest way to see it is to rehearse Humphreys' problem for propensity theories: if chances are probabilities, Bayes' theorem entails that in general if Ch(e|c) is non-trivial (i.e., not zero or one), then Ch(c|e) will be non-trivial. And this looks weird if this is conceived of as a conditional chance in line with CC2; if e occurs, then it looks like at the time of e, c will generally have some non-trivial chance, and e will be a condition which determines the chance of c but doesn't cause it. In general, as Barker notes, effects are evidence for causes, and so give their causes a probability, which cannot be a chance consistently with CC2 unless there is far more backwards causation than usually thought.
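The Bayesian point can be made explicit. Here the chance function Ch at the relevant time is treated as a probability, which is just the assumption Barker will go on to reject:

```latex
% Bayes' theorem, applied to chance construed as a probability:
Ch(c \mid e) = \frac{Ch(e \mid c)\, Ch(c)}{Ch(e)}
```

If 0 < Ch(e|c) < 1 and 0 < Ch(c) < 1, then Ch(e and c) = Ch(e|c)Ch(c) > 0, so Ch(c|e) > 0; and provided there is also some chance of e occurring without c, Ch(e and c) < Ch(e), so Ch(c|e) < 1. The inverse conditional chance is thus non-trivial in general, and by CC2 it threatens to make e a cause of the earlier c.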
Barker doesn't opt for the idea that backwards causation is widespread. His primary response is that past-directed probabilities, like those that effects give to causes and that appear in inverse conditional probabilities, 'are not real chances'. And if they aren't real chances, then CC2 won't give us 'bogus backward causation'.
Now of course any counterexample can be defined away, which is in effect what Barker does here. But this isn't completely ad hoc, since he does offer an argument. Barker appeals to this principle:
Now, when some assumptions collectively lead to an absurdity, we are only required to reject some one of them, not any particular one. But it seemed to us that Barker had clearly chosen the wrong one: it is RC that has to go, not the assumption that chances are probabilities. I can't imagine even those who defend the counterfactual chance-raising view of causation liking RC as a way of expressing what's right about it.
But let's say we do accept Barker's way out. If chances aren't probabilities, then what are they? About this I really am in the dark. They can't be the things that govern credences, since Lewis' arguments in 'A Subjectivist's Guide to Objective Chance' suggest that whatever function it is that regulates credence will be a probability function. They won't have much to do with frequencies, since past conditional frequencies will approximate the past probabilities which aren't the past chances, according to Barker. They won't obey the Basic Chance Principle of Bigelow, Collins, and Pargetter—or indeed many of the platitudes that circumscribe the conceptual role of chance that Jonathan Schaffer has recently outlined. (It won't meet these platitudes both through failing to be a probability, and because CC1 and the existence of backwards causation entail the existence of backwards chances, inconsistent with many of these platitudes, notably Schaffer's Realization Principle, Future Principle, and Lawful Magnitude Principle.) Maybe Barker-chance meets other platitudes; but will it be genuinely chance if it doesn't meet these platitudes or something like them? It looks like only a probability can play the chance role.
One last thing: in his discussion of apparently spontaneous uncaused events, Barker makes the point that even in those cases the structure of the entities involved can be the cause. He discusses a case of radioactive decay; the decay is, he says, caused by the structure of the element that decays. Fine; but he then says that if the decay does not occur, it is not caused by the structure of the element. This I didn't see: it seems to me that the chance of decay is fixed by the structure, so why not say it causes the lack of decay just as much as the decay? Barker says 'one could not say that there was no decay because [the element] was present'—but why not?
CC2: If at a time t, there is a non-zero chance of e and e obtains, then at least some of the conditions at t that determine the chance of e at t caused e.

Of this principle, Barker says 'Unlike CC1, CC2 is bound to be controversial'; given our discussion of CC1, I guess this makes CC2 really controversial!
And indeed we found it objectionable. The easiest way to see why is to rehearse Humphreys' problem for propensity theories: if chances are probabilities, Bayes' theorem entails that in general if Ch(e|c) is non-trivial (i.e., not zero or one), then Ch(c|e) will be non-trivial too. And this looks weird if that inverse probability is conceived of as a conditional chance in line with CC2: if e occurs, then at the time of e, c will generally have some non-trivial chance, and e will be a condition which determines the chance of c but doesn't cause it. In general, as Barker notes, effects are evidence for causes, and so give their causes a probability, which cannot be a chance consistently with CC2 unless there is far more backwards causation than usually thought.
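The arithmetic behind Humphreys' problem is just Bayes' theorem, and a toy chance function makes it concrete. (The numbers below are an arbitrary illustration of mine, not anything from Barker; any non-degenerate joint distribution would do.)

```python
# A toy joint chance distribution over a cause c and an effect e.
# The particular numbers are arbitrary; any non-degenerate choice works.
p = {
    (True, True): 0.4,    # c and e both occur
    (True, False): 0.1,   # c without e
    (False, True): 0.2,   # e without c
    (False, False): 0.3,  # neither
}

def marginal(index, value):
    return sum(pr for outcome, pr in p.items() if outcome[index] == value)

def conditional(a_index, a_value, b_index, b_value):
    # P(A | B) = P(A & B) / P(B), the ratio definition
    joint = sum(pr for outcome, pr in p.items()
                if outcome[a_index] == a_value and outcome[b_index] == b_value)
    return joint / marginal(b_index, b_value)

ch_e_given_c = conditional(1, True, 0, True)   # forward: Ch(e | c) = 0.4/0.5
ch_c_given_e = conditional(0, True, 1, True)   # inverse: Ch(c | e) = 0.4/0.6

# Humphreys' point: once Ch(e | c) is non-trivial, the inverse
# conditional probability Ch(c | e) is generally non-trivial too.
assert 0 < ch_e_given_c < 1 and 0 < ch_c_given_e < 1
```

Read through CC2, that non-trivial inverse probability would make the effect a chance-fixing condition for its cause, which is exactly the 'bogus backward causation' at issue.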
Barker doesn't opt for the idea that backwards causation is widespread. His primary response is that past-directed probabilities, like those that effects give to causes and that appear in inverse conditional probabilities, 'are not real chances'. And if they aren't real chances, then CC2 won't give us 'bogus backward causation'.
Now of course any counterexample can be defined away, which is in effect what Barker does here. But this isn't completely ad hoc, since he does offer an argument. Barker appeals to this principle:
RC: Where c and e occur, if the chance at tc of e would have been lower, had c not obtained, then if there is no redundant causation in operation, c caused e.

RC basically expresses the counterfactual chance-raising account of causation, without the usual restriction to non-backtracking counterfactuals. As such, even when e is prior to c, RC still holds; so if there were widespread backwards chances, there would be widespread backwards causation. This is absurd; so Barker rejects the assumption that these backwards probabilities are chances.
Now, when some assumptions collectively lead to an absurdity, we are only required to reject some one of them, not any particular one. But it seemed to us that Barker had clearly chosen the wrong one: it is RC that has to go, not the assumption that chances are probabilities. I can't imagine even those who defend the counterfactual chance-raising view of causation liking RC as a way of expressing what's right about it.
But let's say we do accept Barker's way out. If chances aren't probabilities, then what are they? About this I really am in the dark. They can't be the things that govern credences, since Lewis' arguments in 'A Subjectivist's Guide to Objective Chance' suggest that whatever function it is that regulates credence will be a probability function. They won't have much to do with frequencies, since past conditional frequencies will approximate the past probabilities, which aren't the past chances, according to Barker. They won't obey the Basic Chance Principle of Bigelow, Collins, and Pargetter, or indeed many of the platitudes that circumscribe the conceptual role of chance that Jonathan Schaffer has recently outlined. (They fail these platitudes both by failing to be probabilities and because CC1 and the existence of backwards causation entail the existence of backwards chances, which are inconsistent with many of these platitudes, notably Schaffer's Realization Principle, Future Principle, and Lawful Magnitude Principle.) Maybe Barker-chance meets other platitudes; but will it be genuinely chance if it doesn't meet these platitudes or something like them? It looks like only a probability can play the chance role.
One last thing: in his discussion of apparently spontaneous uncaused events, Barker makes the point that even in those cases the structure of the entities involved can be the cause. He discusses a case of radioactive decay; the decay is, he says, caused by the structure of the element that decays. Fine; but he then says that if the decay does not occur, it is not caused by the structure of the element. This I didn't see: it seems to me that the chance of decay is fixed by the structure, so why not say it causes the lack of decay just as much as the decay? Barker says 'one could not say that there was no decay because [the element] was present'—but why not?
Tuesday, March 3, 2009
Defining Measures Over Spaces Richer Than The Continuum
Cian Dorr asked an interesting question in the comments of my previous post:
I wonder how you would do physics in a spacetime, finite volumes of which contain more than continuum many points? The physical theories of spacetime I'm familiar with are all based in ordinary differential geometry, which is about finite-dimensional manifolds, which by definition are locally isomorphic to R^n. I don't know how you'd begin to define, e.g., the notion of the gradient of a scalar field, if you were trying to work in something bigger.
I'm probably not comfortable enough with the maths to come up with elegant mathematical treatments of spaces with cardinalities higher than the continuum, but here's one way of doing it, though it is a bit kludgy. Let us start with a space that has 2^continuum many points. Instead of defining the fields, measures, etc. over points, define them over equivalence classes of points, where the equivalence classes contain 2^continuum-many points each. The distance measure needed to get the right predictions in the physics we do, for example, need not distinguish all the points "around" a given point. (I'll talk in this entry about "space", though the remarks will carry over straightforwardly to spacetime.)
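A finite toy stand-in may make the construction vivid. This is my own sketch, not anything from the post: each "point" is a pair of a coordinate and an extra tag (both illustrative assumptions), points sharing a coordinate form one equivalence class, and the metric is defined over the classes, so it cannot see the tags.

```python
# A finite toy model (illustrative only): each "point" is a pair (x, tag).
# Points with the same x form one equivalence class; the metric is
# defined over the classes, so points within a class are indistinguishable.

def eq_class(point):
    x, _tag = point
    return x  # the class is labelled by the coordinate its members share

def distance(p, q):
    # the metric only consults the equivalence classes, never the tags
    return abs(eq_class(p) - eq_class(q))

p1 = (0.0, "a")
p2 = (0.0, "b")   # a distinct point in the same class as p1
p3 = (3.0, "c")

assert distance(p1, p2) == 0.0                        # classmates are metric-identical
assert distance(p1, p3) == distance(p2, p3) == 3.0    # distance depends only on class
```

In the construction proper each class would contain 2^continuum-many points rather than two tagged ones, but the structural point is the same: the quantities of the theory are functions of the classes, not of the individual points.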
You might wonder what right these so-called points have to be called "points" if e.g. a distance metric does not distinguish them. Why aren’t the equivalence classes better candidates to be identified as the "points"? But there are a few answers available: maybe there's also a more discerning and more natural function F from points to somethings that does distinguish the points inside our equivalence classes, and our distance measure is a crude abstraction from F that is good enough for practical purposes. Or maybe the natural relation in the area is not a function on equivalence classes, but a relation between points that is uniform across these equivalence classes: that is, when we have two equivalence classes C and D, then if any member of C stands in R to any member of D, then every member of C stands in R to every member of D. Or it should be easy enough to come up with other marks of distinction that the members of the equivalence classes have that make them better candidates to count as points than the classes - maybe the members are the ultimate parts of spacetime, for example, or maybe we have general reasons for thinking classes can’t be points.
So the members of the equivalence classes might be the genuine points, and arbitrary fusions of them might have a good claim to be regions, thus giving us more than Beth2 regions. But the physical theories need not operate very differently - it just turns out that where we treated our mathematical physics as defining fields, measures, etc. over points, it should instead be treated as defining those quantities over equivalence classes of points. Of course, we are left with the question of what the fundamental physical relationships are that we are modelling with our functions, but I hope I’ve said enough to indicate that there are a number of options here.
If the model I have just given works, then it will be trivial to carry out a similar procedure to generate models of larger spaces: simply ensure that the equivalence classes contain N points, where N can be any cardinal you like. It does not work as smoothly once each equivalence class has more than set-many points, though that raises quite different sorts of problems.
These models mimic standard physics, and for some purposes we might want to see what sorts of models of higher-cardinality spaces we could come up with that exploit the extra structure of those spaces to produce more complicated “physical” structures. But the sort of model I described should be enough to raise an epistemic question I alluded to in my previous post. If physics as we do it would work just as well in these richer spaces, why be so sure that we are not in one of these spaces? I think some appeal to simplicity or parsimony is good enough to favour believing we are not in such a higher-cardinality space. But I seem to be more of a fan of parsimony than a lot of people. This case might be another one to support the view that many physicists implicitly employ parsimony considerations in theory choice, perhaps even considerations of quantitative parsimony!