Abstracts

Guillermo Badia (University of Otago)
Paraconsistent Metaphysics
Edward Zalta has proposed a theory of abstract objects which is remarkably powerful as a framework for interpreting metaphysical theories. Even positions like JC Beall's Really Full-Blooded Platonism turn out to be interpretable in Zalta's theory. Nevertheless, the system has some limitations that one might want to overcome. I will present a paraconsistent version of the theory that does exactly that. It can be roughly put as follows: any abstract object you might think of (even the wildest ones with circular or genuinely contradictory descriptions) already exists. One of my aims is to elucidate whether a paraconsistent logician is better off with a paraconsistent metaphysics.

Sam Baron (University of Western Australia)
Co-Author Jonathan Tallant
On Razor Revision
Ockham’s razor asks that we not multiply entities beyond necessity. The razor is a powerful methodological tool, enabling us to articulate reasons for preferring one theory to another. There are those, however, who would modify the razor. Schaffer (2010a: 313—our italics), for one, tells us that, ‘I think the proper rendering of Ockham’s razor should be “Do not multiply fundamental entities without necessity”’. Our aim, here, is to challenge such re-workings of Ockham’s razor. We claim that the revised razor generates the wrong results: either because it would have us posit entities that do no explanatory work, or else because it is in tension with existing scientific practice.

John Bigelow (Monash University)
Co-Author Sarah-Jane Dempsey
Prior to Relativity
Arthur Prior thought that nothing exists that is not present. But he was also painfully aware that this obvious truth is inconsistent with a widespread philosophical interpretation of Einstein’s theories of Relativity. He thought, however, that the obvious truth of presentism is not inconsistent with Relativity theory itself, properly understood as a theory in physics rather than as a theory in metaphysics. Relativity theory tells us which things will appear to be present according to our measurements and calculations from any given ‘frame of reference’. But it does not tell us which of these appearances corresponds to reality. Presentism is inconsistent with the metaphysical claim that none of these appearances correspond to reality. But it is not inconsistent with the physical theory that restricts itself to descriptions of how things appear from different frames of reference, making no claim about which of these appearances corresponds to reality. We will argue that Prior was right. And we will bolster his position by adding further reasons for thinking that there is indeed a truth of the matter concerning which things are (non-relatively) present, even if the physical theories of Relativity entail that we can never ‘verify’ this truth simply from calculations based on measurements made with measuring rods and clocks. There may be reasons for thinking there is a truth of the matter concerning such things, even if Relativity theory can never tell us what precisely that truth is – as Einstein himself acknowledged.

John Bishop (University of Auckland)
Co-Author Ken Perszyk (Victoria University of Wellington)
A Euteleological Conception of Divinity
This paper explores a non-personal conception of divinity arising from the observation that the kind of total explanation offered in theism involves interpreting reality as existing for the sake of an ultimate purpose. In certain ways, it involves a development of our view as set out in our paper ‘Divine Action Beyond the Personal OmniGod’, in Jonathan Kvanvig (ed.), Oxford Studies in Philosophy of Religion, vol. 5, pp. 1-17. The paper concludes with a suggested response to the obvious objection that theist scriptures, prayers and liturgies make sense only if God is rightly understood as a supreme person; we emphasise the importance of distinguishing what is compelling for religious psychology from what is required for religious understanding.

John Bishop (University of Auckland)
The Intrinsic Dispositionalist Account of the Ability to Act
This paper is a discussion of an impressive recent book on the free will and determinism debate: Kadri Vihvelin, Causes, laws and free will: Why determinism doesn’t matter (Oxford University Press 2013). I argue that, though the compatibilist may indeed be committed to the view that the capacity for free action is an intrinsic disposition, more is needed to defeat the libertarian agent-causationist, who will hold that intrinsic dispositions of the sort that can constitute the relevant capacity cannot be realised in a purely event-causal order. (The paper should be intelligible to those who have not read Vihvelin's book.)

Joshua Black (University of Sheffield)
Peirce and Sellars on the Problem of Universals
Charles Peirce and Wilfrid Sellars, both central figures in the pragmatist tradition, appear to take radically different positions on the problem of universals. Peirce takes ‘extreme scholastic realism’ to be both necessary for a consistent pragmatism, and tacitly assumed by natural science. Sellars is no less insistent that a thorough-going nominalism is required by naturalism. This difference is all the more striking against a deep background of methodological agreement. In this paper I explore the prospects for a reconciliation between the two based on their shared commitment to process metaphysics. According to Rescher, the move from a substance to a process ontology disrupts traditional solutions to the problem of universals. Roughly, processes are like universals in so far as they can be repeated, and like concrete particulars in so far as they are causally potent. I argue that Peirce and Sellars share this kind of process-based approach to the problem of universals. On my view, their clashing terminology is a result of emphasising either the universal-like, or the particular-like, features of processes. If I am correct, the apparently radical difference between Peirce and Sellars on the problem of universals is merely apparent. I conclude the paper by considering some significant, but more tractable, differences between the two that should motivate further inquiry.

Ben Blumson (National University of Singapore)
Impossible Fiction
The following three theses are individually plausible, but jointly inconsistent:
(1)  Everything fictional is imaginable
(2)  Everything imaginable is possible
(3)  Something fictional is impossible
In this paper I will try to resolve this trilemma by arguing that (1) almost everything fictional is imaginable, (2) almost everything imaginable is possible and so (3) almost nothing fictional is impossible. I will also argue against Derek Matravers’ attempts to use this trilemma to argue against accounts of fiction in terms of imagination.

Tracy Bowell (University of Waikato)
On Engaging with Others: A Wittgensteinian Approach to (some) Problems with Deeply Held Beliefs
My starting point for this talk is a problem in Critical Thinking pedagogy – the difficulty of bringing students to a point where they are able, and motivated, critically to evaluate their own deeply held beliefs. I first interrogate the very idea of a deeply held belief, drawing upon Wittgenstein’s idea of a framework belief - a belief that forms part of a ‘scaffolding’ for our thoughts - or of a belief that functions as a hinge around which other beliefs pivot. I then examine the role of deeply held beliefs, thus conceived, in our ways of being in the world, exploring the extent to which engagement with others whose deeply held beliefs differ significantly from ours may be possible through imaginative ‘travel’. Finally, I reflect upon the extent to which these imaginative moments also offer up opportunities for critical reflection upon our own deeply held beliefs and, thus, the possibility of changing or adapting those beliefs.

Justin Bruner (Australian National University)
Indices are Strategically Complex
One of the major puzzles in evolutionary theory is how communication and information transfer are possible when the interests of those involved conflict. Simply put, communication is possible if there are substantial costs associated with sending a signal or, alternatively, if signals themselves are difficult, if not impossible, to fake (such signals are often known as indices). In this second case, perfect information transfer seems inevitable if there are physical constraints that limit an individual's signal repertoire, effectively making bluffing an impossibility. This, I shall argue, is incorrect. Indices by no means guarantee information transfer. I demonstrate the existence of a so-called pooling equilibrium and discuss why the traditional argument for perfect information transfer (Frank's full disclosure principle) does not hold in all cases. Finally, I attempt to outline under what conditions signaling constraints will evolve.

Douglas Campbell (University of Canterbury)
Is a de-extinct mammoth a genuine mammoth, or just a forgery?
Synthetic biology is advancing in leaps and bounds, raising pressing philosophical and ethical questions as it does so. Projects to resurrect a number of extinct species, including the Pyrenean ibex, the mammoth, the passenger pigeon and the gastric brooding frog, are now well underway, and the first live birth of a de-extinct creature (a Pyrenean ibex) has occurred. It is plausible that the techniques being developed could be used, within a few decades, to repopulate New Zealand’s forests with such lost birds as the huia, or even, maybe, the moa. The methods of synthetic biology also show considerable promise as a way of restoring genetic vigor to endangered species like the kakapo and the black robin, which are suffering from severe inbreeding depression. One objection to the use of these technologies is that the resulting creatures are artificial, rather than natural, and not genuine members of the species from whose genetic material they are derived. I argue that this objection fails. My argument turns on an analysis of the particular ‘species’ concept relevant to the aims and purposes of wildlife conservation.

Philip Catton (University of Canterbury)
Philosopher, Tohunga, Engineer
Notwithstanding the contrasts between contemporary philosophers and contemporary engineers, tohunga — who are the knowledge specialists in Māori traditional culture — bear some resemblances to contemporary philosophers and some resemblances to contemporary engineers. Tohunga also make art, perform experiments, mentor apprentices, and, as bards, preserve and broaden oral memory arts. Tohunga are not traditionally the top political leaders, but traditional Māori society nevertheless looks sharply to them in its decision-making. Tohunga help organize and make wise what others do. Whether to call tohunga ‘scientists’ or thence to call the specialist reaches of Mātauranga Māori ‘science’ is on many counts an unhelpful question. So also, it is unhelpful to ask whether tohunga should be called simply ‘philosophers’ or likewise simply ‘engineers’. You do not do cross-cultural study well if you seek a single synonym in your language for the Māori word ‘tohunga’. For this very reason, however, it is instructive to consider the nature and importance of the station of tohunga in Māori traditional societies and then to look back upon contemporary philosophers. New Zealand philosophers are, in the face of their nation’s official biculturalism, unusually strongly invited to do just this. For they ought to ask themselves what it would be to dialogue well with tohunga. And if philosophers could discover what it would be to dialogue well with tohunga, then, in the new perspective they would win for looking back onto themselves, it would become clear why they and engineers have drawn so far apart from one another.

Kasper Christensen (Victoria University of Wellington)
On Logical Necessity in Kripke Semantics
Kripke semantics has been criticised in the literature, primarily by Cocchiarella and Hintikka, for failing to deliver an adequate semantics when L is understood as logical necessity. The reason lies in the interaction of the truth-clause for claims of the form Lp and the assumption that W can be a set of any size. Since Lp is true if and only if p is true at all w in W, models in which W does not contain all logically possible worlds do not give the correct meaning of L. In my paper I use Etchemendy's distinction between interpretational and representational semantics to argue that the criticism should not be accepted as it stands.
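A minimal sketch of the truth-clause at issue (my rendering, assuming the usual clause for logical necessity, with no accessibility relation):

\[ \mathcal{M}, w \models Lp \iff \mathcal{M}, w' \models p \ \text{ for every } w' \in W \]

On this clause, if W omits even one logically possible world at which p fails, Lp can come out true throughout the model, which is just the gap the criticism exploits.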

Tim Dare (University of Auckland)
Predictive Risk Modeling and Child Maltreatment: Ethical Challenges
The New Zealand Government commissioned a study to find out whether it is possible to use administrative data held in its social welfare databases to identify children at risk of maltreatment. The researchers developed a predictive risk modeling (PRM) tool using an algorithm with significant capacity to ascertain and stratify children’s risk of experiencing maltreatment in the future, generating a risk score that could be sent to frontline staff or agencies for response. (The study report can be seen at http://www.msd.govt.nz/about-msd-and-our-work/publications-resources/research/vulnerable-children/index.html). The potential benefits of the Vulnerable Children PRM are considerable and are of obvious moral value. However, the application of predictive risk modeling to child maltreatment also has very clear ethical risks and costs, including those generated by predictable false positives, by the possible stigmatization of already vulnerable populations, by the probable use of data without consent, by predictable resource allocation issues the tool will raise, and by difficulties in designing and implementing effective interventions. This paper asks whether these ethical costs can be ameliorated or completely addressed, and whether those that cannot be addressed are outweighed by the benefits that might be delivered by the tool.


Ramon Das (Victoria University of Wellington)
Evolutionary debunking arguments and partners in guilt
A large number of ‘evolutionary moral debunking’ arguments have been published in the last few years.  Such arguments claim that considerations from evolutionary biology and/or evolutionary psychology undermine our ordinary moral beliefs.  This undermining takes different forms depending on the type of debunking argument.  Some philosophers, such as Sharon Street, argue that such considerations tell in favour of an antirealist position in ethics.  Others, such as Richard Joyce, argue that they show that none of our moral judgments are epistemically justified.  As far as I can tell, however, none think that their evolutionary debunking arguments can be extended to show that our non-moral beliefs are similarly undermined: they deny that their arguments have relevant ‘partners in guilt’ in the non-moral domain.  In particular, they deny that the relevant evolutionary debunking considerations apply to our scientific beliefs, or to our ordinary beliefs about the external world.  In this paper I argue that they are wrong to think this.  For all the moral debunkers have shown, moral and non-moral beliefs are partners in guilt (or innocence) and stand or fall together.

Greg Dawes (University of Otago)
Religion and Science: A Case of Differing Epistemic Norms
In discussions regarding science and religion, there is much talk about the ‘conflict thesis’. But there is also confusion about what this entails. Is the alleged conflict between (a) bodies of doctrine, (b) communities of practitioners, (c) traditions of thought and practice, (d) distinctive modes of thought, or (e) differing conceptions of how one attains knowledge? Option (e) seems the most defensible. Scientific and religious communities share a common epistemic goal, but disagree about how it is to be attained. This gives rise to differing, perhaps incompatible, epistemic norms. Is the resulting conflict avoidable? Probably not, given a traditional conception of religious authority.

Zhuo-Ran Deng (University of Canterbury)
Co-Authors: Douglas Campbell and Jack Copeland
The Inconceivable Popularity of Conceivability Arguments
Three major anti-physicalist arguments purporting to show that consciousness isn’t part of the physical world are Descartes’ argument from the distinctness of mind and body, Saul Kripke’s modal argument against identity theory, and David Chalmers’ zombie argument against psychophysical supervenience. All three arguments share the general form of a ‘conceivability’ argument. This paper describes a formal fallacy in this general form. I will show that all three arguments fail because conceivability arguments can be ‘mirrored’. That is, they can be turned around to prove a conclusion that directly contradicts the conclusion they are designed to prove.

Sidney Diamante (University of Auckland)
At Arm's Length: The Octopus Nervous System and the Unity of Consciousness
Octopuses are the most sophisticated invertebrates, behaviourally and neurally. The highly developed octopus nervous system endows the octopus with complex cognitive capacities comparable to those of vertebrates. In an intriguing evolutionary development, its components are highly autonomous, and much of the information processing and motor control takes place peripherally (in the arms), not in the central brain. Such a division of cognitive labour thus makes the octopus an excellent case study in embodied cognition. This paper examines the implications of the octopus nervous system for embodied cognitive architectures.

Antony Eagle (University of Adelaide)
The Epistemic Significance of Agreement
We do not, generally, treat the judgements of our epistemic peers as mere indicators of the truth – as if people are like more-or-less reliable thermometers. Given this, we need to say something about what we ought to do when we find out that a peer agrees (or disagrees) with us about something. I argue for the following claim: if what we find out is that a peer agrees with our evaluation of how the evidence bears on P, then we should respond by becoming more confident that we were correct in evaluating how the evidence bears on P. This increased confidence can be modelled by making use of Skyrms' notion of resilience. A case can also be made that we should respond to peer disagreement by becoming less resilient in our credences. I make this case, then situate my proposal within the extensive recent literature on peer disagreement.

Marinus Ferreira (University of Auckland)
How Many Conventions of Language are There?
In David Lewis’s seminal account, language is conventional, and Ruth Millikan's competing account agrees. But we may distinguish between two different ways this could be: 'conventional monism', where each language is constituted by one convention, e.g. to speak English rather than Welsh within a community; or 'conventional particularism', where there is a convention for every substantial part of the language, i.e. one for every word, one for every phrase construction, and so on. Monism isn’t appealing, since most things we say about conventions are about fine-grained particular conventions. However, particularism seems like an unprincipled hodge-podge of a theory, since it is a mystery how we could be guided by thousands (or hundreds of thousands, or millions) of different and significant conventions at the same time. In response to this problem, I present 'nested conventions', where there is a multitude of conventions of language but they are in a structured relationship with each other. This proposal is relatively straightforward for Millikan to accommodate, but here I demonstrate how we can allow for nested conventions using Lewisian conventions, giving some reasons to prefer this account to Millikan's. I propose that there are narrower conventions selecting a behaviour from one set of alternatives, with a wider convention over a set of alternatives of a higher order, such that the same behaviour counts as conforming both to the narrower and the wider convention. I develop an example where the way we construct verb phrases in English is a convention about the arrangement of noun phrases, verbs, and verb-modifiers, each of which is a set of nested conventions of its own.

Ben Fraser (Australian National University)
Co-Author Tim Dean (UNSW)
Moral mismatch
Evolutionary mismatch occurs when a trait evolves in one environment, the environment changes, and in the new environment the trait has a detrimental effect. A classic case of mismatch is our evolved 'sweet tooth'. Here, we consider an evolutionary mismatch hypothesis about morality. We identify the evidential burden such a hypothesis must shoulder. We also explore the implications of moral mismatch. Do we need to go on some kind of 'moral diet'?

Michael Gilchrist (Victoria University of Wellington)
Reflections on Deflationary Metaontology
In his 2009 article, ‘The Deflationary Metaontology of Thomasson’s Ordinary Objects’, Jonathan Schaffer presents a number of challenges to Amie Thomasson’s neo-Carnapian approach to ontology. I review these (and related) challenges, construct some possible replies, and then consider what light this imaginary dialogue sheds on the broader deflationary project in ontology.

Patrick Girard (University of Auckland)
Going to the cricket at the end of the world
Several scenarios have been proposed as counter-examples to counterfactual analysis based on comparative similarity of worlds. The Doomsday switch: If I pressed the Doomsday switch, that would be the end of the Universe. But a world in which some random object flies to the rescue of the world and miraculously thwarts me is more similar than a cataclysmic world. Going to the cricket: If it wasn’t raining today, we would have gone to the cricket. But unbeknownst to us, the trains aren’t working today, and we would not have been able to go to the cricket anyway. These are two among several scenarios devised against the intelligibility of comparative similarity of worlds. I will argue that there is a systematic resolution of those issues based on ceteris paribus logic. But beware, I mean ceteris paribus logic as formalised in preference logic for game theory, not the confused notion of ceteris paribus as some ad hoc patchwork for laws in the philosophy of science.

Roderic Girle (University of Auckland)
Robinson Crusoe and the Epistemologists
Why do we need the distinction between knowledge and belief? After all, we can assert things, agree or disagree with what others say, indicate our degree of agreement or disagreement, say whether we think something is correct or incorrect, use any one of a myriad of 'modal' operators such as "certain", "definite", "probable", "possible" and "necessary". So, why do we need the distinction? This paper will look at giving an answer by considering the contrast between Technical and Social Intelligence. Technical Intelligence is sometimes seen as the Robinson Crusoe model of intelligence, of an isolated agent solving problems. Social Intelligence is sometimes seen as the model where Robinson Crusoe is no longer an isolated agent but has been joined by Man Friday, and maybe Woman Thursday, male child Wednesday and female child Tuesday. Once Robinson Crusoe becomes a member of a society, he needs his Social Intelligence to solve far more than technical problems. Language acts as a framework for understanding social situations; but above all, for manipulating such situations in the game of survival and reproduction. In this context, the terms "know" and "believe" have their place in the strategies for the manipulation of social life and for the assertion of authority, for face and mana. Austin began to grasp this with his social performance epistemology. This paper explores this social dimension further than Austin would have been comfortable with.

Mike Grimshaw (University of Canterbury)
“I am not cut out for the life of the regular ministry” A.N. Prior and theology as illuminating illusion
In 1936 Arthur Prior withdrew from study as a theological student at Knox College, Dunedin, stating: “I am not cut out for the life of the regular ministry.” Yet Prior continued to read, write and think theologically for many years. In a diary entry for March 25 1942 he writes: ‘Theology is an illusion, but it is an illusion that is somehow “close to life”, and the study of theological systems illuminates real problems in some way, and that’s why I’m interested in it. The theologian’s “story of man’s redemption” is not just fiction, but real history distorted’. What this might mean is discussed in reference to recent work on Prior and theology by Jakobsen (2014, 2012), Halse (2012), Ujvasri (2012) and Uckelman (2012), and the recently available 37 papers by Prior in the Nachlass of A.N. Prior (www.priorstudies.org).

Katie Hamilton (Victoria University of Wellington)
What Is Philosophical Expertise?
One main challenge to experimental work in philosophy is the expertise argument: because philosophers are specifically trained to think about philosophical problems, they have superior intuitions about them. If this is the case, then examining the intuitions of non-philosophers, or the ‘folk’, may be considered philosophically uninformative. In this paper I will look at what it takes to have ‘expert’ intuitions, and whether the intuitions of philosophers are in fact more reliable sources of evidence to answer philosophical questions.

Cory Johnson (University of Otago)
Prior's Leibnizian Logic E
At the end of his life, Prior turned to focus on logic as it relates to the notion of the 'self'. In constructing such a system, Prior thought that, more than anything, his 'egocentric logic' E could shine light on the Monadology of Leibniz. At that time, though, Prior admitted that he didn't take E as seriously as he took his tense logics. My goal is to show that a logic such as E, contrary to Prior's final feelings, indeed has great worth, especially if adopted by philosophers of consciousness who are currently arguing for forms of first-personism.

Anton Killin (Victoria University of Wellington)
Prior's revolutionary theory of nations
In one of his earliest publications, “The Nation and the Individual”, Prior applies the analytic method to the concept 'nation'. Prior distinguishes three analyses of nation and considers the implications of each for understanding the relations between individuals and nations. I will briefly outline each of these analyses, and focus on Prior’s preferred analysis. I will argue that it is best interpreted as an early formulation of revolutionary fictionalism. Philip Pettit has recently argued against construing group agents (e.g. nations) as fictions, advocating a strong realist position. I weigh in on this debate.

Ladislav Koreň (University of Hradec Králové)
Rationality of norm-driven choice
Issues concerning the nature, function and impact of social norms and rules on human behaviour and interaction have been a hot topic of interdisciplinary research and debate in recent decades. One of the most crucial questions — at least for philosophers — concerns the relation between norm-driven behaviour and rationality. This paper focuses on the relation between norm-driven (rule-governed) choice and Rational Choice (RC) as an important model of rational agency. Supporters of RC often maintain that, from the formal point of view, the RC-apparatus can accommodate normative motives (e.g. normative preferences being given a due weight in agents' consistently ordered all-in preferences, mathematically represented by appropriate utility functions). To this, however, it might be retorted that, even if RC can be used to reconstruct a norm-driven choice, its reconstructions distort or hide from us particularly vital aspects of it. For one thing, normative motives are often action- or process-oriented, whereas RC-reconstructions are outcome-oriented. For another thing, norm-driven choice is often a contextually more efficient process of problem-solving, saving us time and lowering the costs of information gathering and computational processing, compared to calculative choice that would have us compare and weigh feasible options in the light of available information about the structure (constraints, causal/probabilistic relations) of the choice-situation (though RC-modellers often demand only that choice of behaviour be modelled as if agents were reasoning in this demanding manner). In light of this, it might be suggested that the rationality of norm-driven choice is more adequately conceived of as a bounded and problem-solving rationality of a sort. Indeed, one of the most intriguing arguments in support of this approach is that norms (rules) help us to avoid or escape traps of purely calculative (or strategic) rationality. I shall focus precisely on this point, summarizing and discussing what I take to be the most interesting evidence for it (e.g. cases of norm-driven choice that are consistent with the spirit of bounded-adaptive rationality yet hardly explicable on the assumption that actors reasoned into them calculatively or strategically).

Fred Kroon (University of Auckland)
Wherefore Causal Descriptivism
When philosophers talk about the reference of proper names they will often begin by declaring their allegiance to the new Kripkean orthodoxy, an amalgam of anti-descriptivism about both semantic content and reference-determination, referentialism about content, and something like a causal-historical theory of reference-determination (the anti-descriptivism in this package is non-negotiable, the others less so). Causal descriptivists reject this package (Frank Jackson’s a priori argument for descriptivism in ‘Reference and Description Revisited’ is perhaps the most famous of the arguments for such a position), but it is fair to say that their brand of descriptivism about reference has not come close to dislodging Kripkean orthodoxy. In my talk I review some of the reasons.

Vladimir Krstić (University of Auckland)
Fallis was almost right about lying
Saint Athanasius was rowing on a river when the pursuers – who did not recognize him – asked, “Where is the traitor Athanasius?” “Not far away”, the Saint replied, and rowed past them. This is usually seen as an example where a Saint managed to avoid lying in a crisis. He did not say anything he did not believe; rather, he implicated that he was not the traitor Athanasius. Because implicatures are not part of asserted propositions, the Saint did not lie. Building on the insights that generated Don Fallis’s 2013 account, I argue that Saint Athanasius did lie. Fallis admirably argued that if a person is lying, she (a) must say something that she does not believe to be true and she (b) must represent herself as believing that thing. Contrary to him, I argue that conditions (a) and (b) should be substituted with a single condition: ‘a liar must assert something he recognizes as false’. This account is based on two revised notions: the notion of implicatures and the notion of assertions. I will argue that some of the conventional linguistic tools that we count as implicatures are conventional designators or word-theories and, as such, are parts of assertions. Therefore, more than what was said should be considered when judging whether a person is lying. In the case under scrutiny, the Saint asserts ‘Not-me is not far away’, which is an assertion constituted out of two sub-assertions, ‘not-me’ and ‘not far away’. The Saint is lying because he warrants the truth of ‘[the traitor is] not me’. The second important feature of my account is based on the claim that a person can sincerely assert something she does not believe to be true. Asserting S need not necessarily involve believing that S is true. Rather, asserting S concerns recognizing that S is true and warranting that S is true. Thus, lying should be evaluated with respect to whether the person recognizes her assertion as true and, if she does not, whether she warrants that assertion as true.

Christopher Lean (Australian National University)
Hierarchy in Ecology and its implications for Biodiversity
The hierarchical structure of biological entities has received considerable philosophical attention since William Wimsatt’s influential discussion of how organisms can be multiply decomposed along different ‘theoretic perspectives’ (Wimsatt, 1972). This enquiry has yielded considerable advances in our understanding of the relationship between the entities studied by different biological disciplines. However, there has not been a similar examination of the ways ecologists describe ecological systems. I aim to rectify this omission by outlining how the different research programs in ecological science yield different non-isomorphic sets of variables. Crucially, there are distinct explanatory variables within ecological science: for example, the movement of energy and material resources in a system (ecosystem ecology), functional compositional relations of assemblages (community ecology), and the spatial configuration of organisms (population ecology). These variables have a nested structure within their local disciplines (vertical composition) and we can attempt to translate the identity relations of variables between disciplines (horizontal identity). Understanding the relationship between the variables of ecological science impacts both our concept of biodiversity and our ability to preserve biodiversity.

Manuel Lechthaler (University of Otago)
Exclusion Arguments for Eliminativists
Exclusion arguments are used by eliminativists (Merricks, Kim, …) to argue against the existence of objects of a certain kind (composite objects, mental objects, …). The basic idea of these arguments is that those objects – if they were to exist – do not cause any events, because all the causal work is done by objects of another kind (subatomic partless particles, physical objects, …) and in general, events are not overdetermined, i.e. they are not caused by more than one sufficient cause. In this paper, I will present the general form of exclusion arguments and discuss some of their problematic features: the use of Alexander's Dictum ("to be real is to have causal powers"), and the genuine difference between the examples used to motivate the exclusion of general overdetermination and the examples of overdetermination used in the special versions of the exclusion argument.

Catherine Legg (University of Waikato)
Idealism Operationalized: Charles Peirce's Theory of Perception
This paper begins by outlining Hume's understanding of perception, according to which ideas are copies of impressions, which are thought to constitute a foundational confrontation with reality. This understanding is contrasted with Peirce's theory of perception, according to which percepts give rise to perceptual judgements, but a perceptual judgement is not a copy but an index (or 'true symptom' - just as a weather-cock indicates the direction of the wind) of the percept. Percept and perceptual judgement are thereby able to mutually inform and correct one another in rich ways, as the perceiver develops mental habits of interpreting their surroundings.

Matthew Macdonald (Victoria University of Wellington)
Epistemic Circularity
Establishing that a belief source is trustworthy by trusting it is an ‘epistemically circular’ procedure. Such procedures are generally viewed with suspicion. Arguments which exhibit epistemic circularity, like those which exhibit logical circularity, seem dialectically ineffective against an interlocutor. Moreover, allowing epistemically circular procedures to provide justification would be to risk allowing various bogus belief sources to justify affirmations of their own trustworthiness. Nonetheless, epistemic circularity is very difficult to avoid. I’ll argue that unless some epistemically circular procedures can provide justification, inquiry into the nature of justification is a doomed enterprise.

James Maclaurin (University of Otago)
Is Biodiversity a Natural Kind?
One might think of ‘biodiversity' as (1) a term that picks out a feature of living systems that allows us to explain and categorise them much as atomic number does for chemical systems or (2) a term which describes a set of methodological approaches to an otherwise disparate group of scientific problems or (3) a term that picks out the variety of features of biological systems that are ultimately of political, ethical, or economic significance or (4) a term of folk biology that provides psychological but not philosophical motivation for certain important conservation endeavours or (5) some combination of (1) - (4). Recognising that there is unsettled philosophical debate about the nature and significance of natural kinds, this paper argues that it is nonetheless useful to ask the question ‘Is biodiversity a natural kind?’

Duncan Martin (Victoria University of Wellington)
Concrete action guidance for virtue oriented ethical theories: A critique of Sandler's solution
One problem for virtue oriented ethical theories is that they do not seem to be able to provide specific guidance in concrete situations or on concrete issues. To remedy this issue, Ronald Sandler claims that we need to abandon ethical principles which define right action in terms of the characteristic behaviour of virtuous persons (qualified agent principles), and instead define right action in terms of how well a given action hits the target of the operative virtues (target centred principles). This approach is thought to sidestep a number of related problems, and to provide the basis for specific action guidance, through the application of operative virtues and v-rules to concrete situations. In response, I defend two related claims. First, I argue that we can successfully avoid the problems associated with qualified agent principles of right action, without needing to abandon such principles altogether. Second, I argue that the proposed solution does not depend upon the principle of right action offered by Sandler.

Cei Maslen (Victoria University of Wellington)
Causation, Absences and the Prince of Wales
In this paper, I defend a counterfactual approach to causation by absences from some recent criticisms due to Sartorio (Sartorio 2010). I also identify some new issues for alleged cases of simultaneous causation.

John Matthewson (Massey University)
When less mechanistic information produces better mechanistic explanations
Mechanistic explanation can be characterised as conveying information about the mechanism that underlies some particular phenomenon. Given this characterisation, we might think that any given mechanistic explanation will be improved by the addition of more detailed information regarding the relevant mechanism(s). This view is often at least implicitly held in the literature regarding mechanistic explanation, but some commentators have questioned whether increased mechanistic detail will always generate a better mechanistic explanation. For example, Levy and Bechtel (2013) argue that explanations may sometimes be *enhanced* through the omission of such detailed information. I agree with Levy and Bechtel in principle, but their claim will only go through if additional mechanistic detail at least sometimes imposes a cost in explanatory power. And once we outline what is required for such a trade-off to hold, it is not clear how often the appropriate circumstances will arise. Time permitting, I will also consider what this issue might show us about the status of mechanistic explanation as a distinctive type of explanation.

Samuel Mawson (Monash University)
Co-Author Michael Selgelid
Benefits for Whom? Measuring the Value of Genetic Selection
When facing a choice regarding which potential person to bring into existence, many people have the intuition that it is morally preferable to create someone with a higher, rather than a lower quality of life. Parfit’s arguments in ‘Reasons and Persons’ (1984) demonstrate that the task of developing a satisfactory utilitarian principle that accounts for this intuition is fraught with difficulty. Both average and total utility principles produce unacceptable conclusions. This problem arises in a range of contexts, including decision-making about which health interventions ought to be publicly funded. Such decisions are often informed by utilitarian comparisons of the amount of benefit that different interventions can be expected to provide. Some health interventions affect which future persons will come to exist. For example, Preimplantation Genetic Diagnosis (PGD) allows embryos to be screened prior to implantation via IVF and can be used to avoid the selection of embryos that carry harmful genetic diseases. In order to quantify the benefit that an intervention such as PGD provides, it is necessary to determine how the impersonal benefit of causing healthier persons to be born should be weighed against the person-affecting benefit of making existing persons better off. I will argue that using different, but commensurable, metrics to calculate these two kinds of benefits would allow policy makers to generate a comprehensive account of the benefit that an intervention provides, without incurring any of the problems faced by total or average utility principles. I will provide an account of how these two metrics can be constructed in order to yield a satisfactory resolution to Parfit’s Puzzle.

Jenny McMahon (University of Adelaide)
A Cinematic Realism
Can we derive new ways of thinking from film (new classifications or revisions), or is film merely an expression of new models which only emerge within more determinate environmental and community interactions? Film as fiction is experienced in an indeterminate way in the sense that it does not engage the kind of means-end dynamic in our actions and projects which produces new or revised classifications. However, consideration will be given to film’s capacity to present new modes of thought in virtue of new connotations, attitudes and outlooks. The argument will draw upon theories of perception according to which there is a constructive aspect to the perceptual object, and borrow from Stanley Cavell concerning the constructivist nature of what we take to be real, while attempting to uncouple constructivism and relativism. The “realism” which anchors the paper refers to a structure of belief concerning the parameters of our intentions; that is, belief constrained by its effectiveness in the world for satisfying inter-subjective requirements. As such, there is scope for a robust sense of cinematic realism if film could be said to provide metaphors, tropes, models and other heuristics for understanding and, in turn, shaping experience. Testing this possibility with further insights from Cavell, it will be argued that film can actualise the constructed and universal nature of reality like no other artistic medium.

Alex Miller (University of Otago)
Morality Without Moral Facts?
In their paper “Morality Without Moral Facts” (in J. Dreier (ed.) Contemporary Debates in Moral Theory (Blackwell 2005)), Terry Horgan and Mark Timmons outline a metaethical view which they call “cognitivist expressivism”. According to Horgan and Timmons, cognitivist expressivism avoids the metaphysical and epistemological commitments of moral realism, while avoiding the sorts of problems faced by other views opposed to moral realism, such as error theories and non-cognitivism. Horgan and Timmons claim that cognitivist expressivism “plausibly accommodates the various deeply embedded assumptions of ordinary moral thought and discourse”. I will argue that cognitivist expressivism fails to deliver the metaethical goods promised by Horgan and Timmons.

Nicholas Munn (University of Waikato)
Prisoners, voting.
In 2010, New Zealand joined the UK in preventing all criminals serving prison sentences from voting. This restrictive position on criminal disenfranchisement is indefensible both in theory and in practice. Here I argue against any criminal disenfranchisement, and in doing so analyse the particular problems of blanket disenfranchisement such as has been introduced in New Zealand.

Tim Oakley (La Trobe University)
How to release yourself from an obligation
In some cases, you may release someone from some obligation they have to you. For instance you may release them from a promise they made to you, or an obligation to repay money they have borrowed from you. But most take it as clear that if you have an obligation to someone else, you cannot in any way release yourself from that obligation. I shall argue the contrary. The issue is important because one standard problem for the idea of having duties to oneself relies on the impossibility of self-release. The argument (the “Release Argument”) is that a duty to oneself would be a duty from which one could release oneself, but that is an absurdity, and so there can be no duties to oneself. This argument is to be rejected because a duty from which one can release oneself is perfectly possible, and such release occurs quite properly from time to time. (Some people will know that I presented a paper defending duties to oneself a year ago, at the Auckland conference, and criticised the Release Argument then. That criticism proved to be flawed. The arguments of the present paper are new.)

Graham Oddie (University of Colorado)
Fitting attitudes and solitary goods
The fitting attitude approach to value embraces a schema for matching up values with attitudes in a systematically pleasing way. For something to have a certain value it is necessary and sufficient that it be fitting, or appropriate, or good, or obligatory (or something) to take a certain attitude to bearers of that value. The FA schema provides a tight connection between value and responses to value, while preserving the necessary distance between values and actual responses to their manifestations. The idea seems obvious for certain thick evaluative attributes: the delightful is whatever it is fitting to take delight in; the shameful whatever it is fitting to be ashamed of, and so on. But it also seems rather plausible for the thin evaluative attributes—goodness and badness and betterness—although here the appropriate attitudes do not lie as close to the surface of the specification of the values. I will use the now customary terminology of favoring as a convenient placeholder for the fitting attitude, whatever it is. So the good is what it is fitting to favor, the bad what it is fitting to disfavor, the better what it is fitting to favor more. There are three main challenges to the fitting attitudes account. The first and most widely discussed challenge concerns the wrong kinds of reasons (WKR for short). The WKR challenge can, I think, be met, though only by adopting a notion of fittingness that jettisons the possibility of a FA reduction of the axiological realm to the non-axiological. Here I focus on two much less discussed challenges: Bykvist’s Solitary Goods objection, and the Isomorphic Response objection.

Graham Oddie (University of Colorado)
Autonomy, entanglement and Prior’s gem
Arthur Prior discovered many logical gems but the one at issue here is his famous argument against the logical autonomy of ethics—the thesis that one cannot derive genuinely normative (axiological or deontic) conclusions from premises all of which are non-normative (or natural). We could call the denial of autonomy entanglement. Prior starts with a clearly natural (non-normative) proposition, NAT, and a genuinely normative proposition, NORM. Consider the following inference:
(*)  NAT therefore NAT v NORM. 
Either NAT v NORM is a normative proposition or it isn’t. If it is, then (*), if valid, is a violation of autonomy. But if NAT v NORM isn’t a normative proposition then, on the assumption that the negation of a natural proposition is natural, we have the following violation:
(**)  ~NAT,  NAT v NORM therefore NORM.
There are many sophisticated analyses of what is going on here which attempt to preserve No-Ought-From-Is from Prior’s gem. I offer a very simple framework and an associated analysis that preserves the rational kernel of the Humean thesis, while also vindicating Prior’s insight that there is a kind of entanglement at the level of propositions. While this is fine as far as it goes, there is a different argument for a deeper entanglement at the level of reality, one that does threaten the Humean orthodoxy.

Jerome Agboola Odebunmi (University of Waikato)
The Consequence of Dewey's Naturalism for Epistemology
One of the objections raised by John Dewey against traditional and modern epistemology is that it is speculative. According to him, many epistemologists of these periods were preoccupied with presuppositions about the nature of knowledge and conceptual analyses of epistemological terms such as truth and justification. Dewey argues that these epistemologists ended up with many theories about what knowledge “ought to be” at the expense of what knowledge actually is. In addition, he contends that an understanding of why we need to acquire knowledge, or of how knowledge is inherently instrumental, offers a more promising focus. A good theory of knowledge, according to him, must make reference to concrete human situations, which he called “transactions” with nature. It is on this ground that he contrasts “experimental knowledge” (which he endorses) with what he called “speculative” or propositional knowledge. This paper examines how and to what extent Dewey’s epistemology is experimental and free from presuppositions.

Glen Pettigrove (University of Auckland)
Re-Conceiving Character: The Social Ontology of Humean Virtue
Contemporary ethicists conceive of character as a stable, enduring state that is internal to the agent who possesses it. This conception is reflected in the verbs they commonly use in association with ‘character’: One ‘manifests’, ‘expresses’, ‘reflects’, or ‘reveals’ one’s character. The aim of my project is to show that for writers in the 17th and 18th centuries, character is neither stable nor exclusively internal. Early modern authors are much more likely to write of ‘giving’ a character, ‘drawing’ a character, ‘deserving’ a character, and ‘maintaining’ a character. I shall argue that character, as they conceive of it, is fragile and has a social, rather than an individual, ontology.

Charles Pigden (University of Otago)
NOFI and Non-Cognitivism: the History of a Mistake
Just fifty years ago it was the reigning orthodoxy among philosophers that Hume’s No-Ought-From-Is passage (henceforth NOFI) implies, or at least strongly suggests, non-cognitivism. Why was this error so widespread? This is mostly a matter of recent history, though the confusion goes back to Bentham. To begin with, philosophers have confounded the Logical and the Semantic Autonomy of Ethics: the view that you can’t get moral conclusions from non-moral premises by logic alone, and the view that you cannot get moral conclusions from non-moral premises by logic plus analytic bridge principles (since there are no such principles). This has sometimes led to the view that Semantic Autonomy can be derived from Logical Autonomy, an error nearly made by the young Bertrand Russell and definitely made by the young Arthur Prior. If Semantic Autonomy is ‘derived’ from Logical Autonomy and then combined with the view that statements are only factually meaningful if they are analytically entailed by observation statements of some kind, it is easy to arrive at the conclusion that moral judgements lack factual content, and to think that Hume’s NOFI implies it. Popper added to the confusion in his Open Society and ‘What Can Logic Do for Philosophy?’, arguing that ‘it is impossible to derive a sentence stating a norm … from a sentence stating a fact’, a claim he supposed was supported by logic. I endeavor to untangle these confusions, restoring Hume’s NOFI to meta-ethical impotence.

Charles Pigden (University of Otago)
No-Ought-From-Is and Prior’s Paradox after Fifty-four years
Once upon a time philosophers were generally agreed that you can’t get an Ought from an Is, though there was some unclarity as to what this meant. Did it mean that you can’t get moral conclusions from non-moral premises by logic alone (Logical Autonomy), or that you cannot get moral conclusions from non-moral premises by logic plus analytic bridge principles (Semantic Autonomy)? But in 1960 Arthur Prior dropped a bombshell, arguing that it is possible to derive moral conclusions from non-moral premises by logic alone. In this paper I discuss the rival attempts to reformulate and prove some version of No-Ought-From-Is despite Prior’s counterexamples, the general idea being that you can't derive substantively moral conclusions from non-moral premises by logic alone. Do these rival solutions to Prior’s Paradox suggest anything of meta-ethical consequence about the nature of moral judgments? The answer, I suggest, is ‘no’, since all of them rely on some version of the conservativeness of logic, involving inference barriers between kinds of propositions that need not be ethical.

Martin Prior (SOAS, University of London)
From Polish Notation to Tense Logic: Logic and Language
In this paper I would like, as both a linguist and the son of Arthur Prior, to talk about how my father's work, perhaps mainly subliminally, affected my own work. And as a linguist my perceptions are by no means uncritical. Polish notation is about the syntactic representation of ideas, and natural language displays a number of classes of word-order, for example verb-subject-object combined with prepositions, not unlike Polish notation, and subject-object-verb combined with postpositions, not unlike Reverse Polish Notation. But then natural language often goes against such notations with hidden operators like multiplication in 'xy'. Over the question of tense, in the so-called Prior-Reichenbach controversy, logicians have tended to favour Prior, and linguists Reichenbach. I say they are complementary, and it is important for both disciplines to be aware of this.
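A simple illustration of the analogy (my example, not the paper's): in Łukasiewicz's Polish notation the operator comes first, in Reverse Polish it comes last, so with K as the conjunction operator,

\[ p \land q \quad\text{is written}\quad Kpq \ \text{(Polish)} \quad\text{or}\quad pqK \ \text{(Reverse Polish)}, \]

the former paralleling verb-initial word order with prepositions, the latter verb-final order with postpositions.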

Matteo Ravasio (University of Auckland)
Smashing a Perfectly Good Guitar – Why we don't like to see musical instruments being misused and destroyed
Certain contemporary musical pieces require the destruction or the misuse of musical instruments. Moreover, some rock and punk musicians have a habit of mistreating or smashing guitars during their live shows. Our instinctive reaction to this is a negative one: even when we might accept it as serving a higher aesthetic purpose, the destruction of an instrument is seen as something blameworthy. Although the reason for our gut reactions might seem straightforward, further scrutiny seems to reveal something peculiar in them. Stephen Davies has argued that we respect musical instruments because we treat them like 'honorary persons'. His view on the issue will be presented and an alternative account suggested.

Paul Redding (University of Sydney)
Hegel’s weakly inferentialist, pluralist logic
With his “inferentialist” approach to semantics, Robert Brandom has shed light on one dimension of Hegel’s strange-sounding claim that the syllogism is the “truth” of the judgment: we cannot have a conception of what is actually said in a judgment without understanding its relations to other judgments in patterns of inference. In this paper I argue that on examination Hegel’s inferentialism is weaker than Brandom’s own, in that for Hegel inferential relations are necessary but not sufficient for a judgment to have content. This in turn, I suggest, is linked to the pluralist nature of his logic—that is, that Hegel employs two distinct senses of logical consequence. Finally, I suggest an alternative way of thinking of the dynamics of Hegel’s logic by weakening what Brandom discusses in terms of the notion of doxastic commitment to the attitude of presupposition.

Adriane Rini (Massey University)
The Logic of 'Logic and the Basis of Ethics'
Arthur Prior’s Logic and the Basis of Ethics (1949) was so well received at Oxford that it led to an invitation to Prior to travel from New Zealand to Oxford to give the Locke Lectures, which he did in 1956. In spite of the impact that Logic and the Basis of Ethics had in its day, it is now almost conspicuously understudied. This paper focuses on several questions. How did an unknown young man from New Zealand get his 111-page book picked up by OUP? Why did the book make an initial splash? Why did it slip so suddenly from discussion? And, what is it that’s doing the real work in Prior’s book? As his title announces, it seems that the answer is logic. But what exactly did this mean to Prior?

Denis Robinson (University of Auckland)
Available Options: ingenuity, imagination, and skill as handmaidens of rationality
This exploratory paper will survey a schematic commonality between some very different kinds of situation. In each kind of situation it is possible to raise questions about the rationality of belief or action, or about the rightness of actions of diverse kinds, where rightness may relate teleologically to the optimal achievement of a desired result, or may be moral rightness. Broadly, in these diverse cases, we may wish to optimize our chances of arriving at truth, succeeding in our actions, or doing the right thing. Characterizations of rational choice of belief or action, or of suitable rules – of inference, of weighing probabilities, of achieving an end, or of morality – are important parts of many philosophical attempts to say how human cognitive powers may best be employed to achieve such optimization. But there are other human powers or capacities which also deserve to be called, broadly speaking, “cognitive”, yet which are neither part of “rationality” as often understood, nor exercises in choosing, formulating or following rules.

Sagar Sanyal (University of Melbourne)
Value in ecological economics
The conceptual part of the talk picks out one way in which a type of economy or a part of the environment may be valuable, and situates this in relation to use value, exchange value, and the debates about instrumental/intrinsic value, and anthropocentric/non-anthropocentric value. The political economy part of the talk adopts an eco-socialist perspective to explore how an interest in this value creates reasons to interfere in global capitalism. Specifically, labour-intensive agricultural economies of the rural South and forest environments (often in countries of the South) are valuable in the ecological economy sense. Yet these economies do not generate enough exchange value or use value for those who derive a livelihood from them to escape poverty within the current framework of international trade and finance. We have a reason to move finance capital, productive capital, commodities and services to these regions even if this is not reciprocated by a future return flow of exchange value or use value. I briefly discuss implications for capitalism if this is to be a lasting feature of the global ecological economy, rather than a one-off transfer.

Vanessa Scholes (Open Polytechnic / Kuratini Tuwhera)
Stereotypes and discrimination
Philosophers have not reached a consensus on how to specify the moral wrong(s) of discrimination. There are various philosophical accounts of when discriminatory actions are bad or wrong, and what makes them so. Agent-based accounts consider the moral wrong of discrimination to be located in the false or prejudiced beliefs or attitudes of the agent. Expressivist accounts specify the moral wrong in the objective meaning expressed by the action. Consequentialist accounts consider the moral wrong to be a matter of the consequences for the people affected by discrimination. These accounts tend to focus on 'taste' discrimination, which involves treating people differently on the basis of a personal taste or prejudice for or against a particular group. I am interested in the ethics of 'statistical' discrimination - applying a group statistic to assess an individual and treat them differently, without personal tastes or prejudices. I will draw on the literature on stereotypes to argue that this discrimination ought to feature more prominently in accounts of the moral wrong(s) of discrimination.

Kim Shaw-Williams (Australian National University)
Towards a New View of Human Origins: The Bipedal Wetlands Foraging Hypothesis
I argue the evolution of bipedalism was triggered by the entry of Early Miocene African catarrhine primates into an omnivorous wetlands foraging dietary niche, which required habitual wading when searching for small aquatic/semi-aquatic fauna and flora. This is the socio-ecological niche our ancestral stem lineage of above-branch or plantigrade, non-suspensory climbing large apes, as represented by Ardipithecus ramidus, has always occupied. Since we were committed omnivores, alloparenting was part of our resource-based sociality, as it is in other extant primate omnivores—for instance the Barbary “ape” Macaca sylvanus. From the Mid-Miocene onwards this lineage began diverging into the suspensory-climbing frugivorous/herbivorous lineages leading to Pongo, Gorilla, Australopithecus and Pan. The originally more frugivorous australopithecines became as secondarily derived towards suspensory arboreal locomotion as Gorilla. As Miocene/Pliocene open wetland/savannahs expanded they entered a mainstay sedge-harvesting niche. Hence they became megadont bipedal waders, but retained arboreal adaptations. There is good evidence of a Mid-Pliocene non-arboreal, obligate biped: a nomadic post-Ardipithecan lineage of omnivorous wetland/savannah foragers sympatric to the more specialized australopithecines. Therefore our immediately pre-Homo Pliocene ancestors retained a good precision grip and an alloparenting sociality. They were therefore much better social learners than extant apes and the australopithecines.

Paul Silva (Monash University)
How Etiological Information Yields Defeat
Suppose you learn that your particular upbringing is partially causally responsible for your religious (or political or moral) beliefs, and that you would not have had those particular beliefs had you grown up in some other community (even if you were exposed to the same evidence). Intuitively, there are some cases where gaining this kind of etiological information about your beliefs can, all by itself, defeat your justification. However, some have argued that such information about the causes of your beliefs cannot defeat your justification. I argue that this is mistaken. The argument turns out to have surprisingly far-reaching consequences for epistemology.

Hartley Slater (University of Western Australia)
Prior’s Anti-Platonism
Prior’s posthumously published book Objects of Thought is in two sections, and thereby on two different kinds of object: propositions, and the individuals that some propositions are about. I have written before about the fear Prior had of a platonic understanding of propositions, i.e. as the referents of ‘that’-clauses.  The criticisms I have levelled in this area have implications for his operator analysis of expressions like ‘It is necessary that p’ as opposed to the predicative analysis which is required of their equivalents ‘That p is necessary’.  I have also written before about Prior’s attitude to a platonic understanding of individuals, i.e. as the referents of Russell’s ‘logically proper names’. Prior had a close knowledge of theories of individuals, including that of Russell.  However, when he formulated his tense logic Q, in Time and Modality, he presented a non-Russellian account of individuals, making out that they were physical objects that might exist at one time but not at another.  So they were not abstract objects having a necessary existence.  In this paper I examine Prior’s anti-Platonism more generally, in the context of the development of twentieth century logic.

Jay Sloss (Rangi Ruru Girls School)
Co-Author: Richard Tweedie (Hagley Community College)
Support your local philosophy teacher. Philosophy in Secondary Schools: What we do and what’s missing. What is the National Certificate of Educational Achievement (NCEA) framework? How does NCEA Philosophy work?
In 2009 Alain Badiou and Slavoj Žižek published a book of two essays titled Philosophy in the Present. The book examined the question of whether philosophy should interfere in the world. Badiou (2009) argued that “philosophy confronts thinking as choice, thinking as decision” (5). If Philosophy plays some kind of role in decision-making (in terms of elucidating the fundamental choices of thought), how much of that role should be played out in NZ secondary schools?
The NZ Association for Philosophy Teachers (secondary), NZAPT, argues that young people require some basic logical and ethical tools if they are to interrogate choices of thought. Currently, direct access to these tools is denied them within the National Certificate of Educational Achievement (NCEA) framework. The Ministry of Education’s apparent commitment to values surrounding thinking (in terms of the NZ Curriculum’s ideological vision of “confident, connected, actively involved and lifelong learners”) stalls when thinking itself cannot be formally assessed. There are currently no real options for the assessment of reasoned (secular) ethical judgements within the NZ Curriculum. Practical ethics can be assessed, but there remains a blind spot when it comes to specific assessment and examination of the bases and principles of ethics. NZAPT has been working to remedy this situation with the introduction of dedicated NCEA Philosophy Achievement Standards. However, we have been caught in what a Ministry of Education spokesperson in 2012 termed a “chicken or egg thing…where there is this tension that if only they had NCEA achievement standards they would be moving”. NZAPT has a number of secondary school programmes in place despite the Ministry’s unwillingness to resolve the so-called chicken-egg ‘thing’. This presentation will briefly overview what is currently happening with Philosophy education in NZ secondary schools, as well as what is currently not able to happen because of oversights within the NZ Curriculum. The basics of NCEA will be outlined, along with a call to action for university philosophy departments to join the movement to expand the range of philosophy taught in NZ secondary schools.

Nicholas Smith (University of Auckland)
Motives and Right Action
In this paper, I defend the view that the motives of an action can have a bearing on the rightness of that action. Along the way, I discuss the widely accepted distinction between the rightness and the goodness of action, as well as the relation between what is done and why it is done when an action is performed. Finally, I consider some examples from Steven Sverdlik designed to show that motives are rightness-relevant, and defend the cogency of these examples against important objections.

Kim Sterelny (Australian National University)
Farewell to Content
This paper defends a neo-Dennettian view of the psychosemantics program, that is, of the attempt to develop a reductive naturalist theory of mental content. It takes the teleosemantic program to be the current flagship version of reductive naturalism, and appeals to recent work on human cognitive and social evolution (including my own) to argue that there are fundamental problems in trying to extend a teleosemantics of animal cognition to human cognition. But it also argues (admittedly more sketchily) that this should not be seen as a threat to a naturalistic conception of human cognitive and communicative life.

Christine Swanton (University of Auckland)
Nietzsche, Depth Psychology, and the Objectivity of Ethics
This paper defends an interpretation of Nietzsche’s discussion (GM Essay 1) of the “slave revolt” in morality (involving a revaluation of values) in terms of resentment as a depth phenomenon, more specifically as a form of defence. This interpretation is at odds with Poellner’s interpretation of the revaluation in terms of self-deception. There are implications for the understanding of Nietzsche’s contribution to moral theory. On the interpretation defended, Nietzsche can be read as an objectivist (suitably understood) about virtue and vice: vice in particular is expressive of resentment, reactive, and weak. However, according to Bernard Williams and on orthodox interpretations of Nietzsche, understandings of virtue and vice through thick concepts such as those Nietzsche discusses (e.g. patience, humility, justice) are relative to insider perspectives, so a form of relativism looms. How should we deal with this problem?

Fox Swindells (University of Canterbury)
A Capabilities Solution to Enhancement Regulation
Human enhancements have the potential to dramatically change individuals' capabilities, and will lead to serious harms if unregulated. However, it is unclear how states should act to mitigate these harms. I argue that the capabilities approach provides a useful metric for determining what action states should take regarding each enhancement technology. According to the capabilities approach, states are responsible for ensuring that citizens are able to be and do certain things that are essential to human life. I therefore consider the impact of a range of enhancements on individuals' abilities to be and do certain things, in order to determine what action a state should take regarding each technology. For some enhancements, such as life extension, the capabilities approach will require the state to ensure their availability to its citizens. Others will require a vastly different approach, such as banning clearly harmful enhancements. A number of technologies pose neither harm nor obvious benefit, and the state may not need to regulate them at all. However, there remain some enhancements where it is extremely unclear how the state should act, such as a technology that removed the need to sleep but required the user to accept a shorter lifespan in exchange. Exploring these examples will allow me to demonstrate that the capabilities approach can provide valuable, realistic advice to guide public policy in response to enhancement technologies.

Koji Tanaka (University of Auckland)
Tonking a Modern Conception of Logic
Prior to Prior, it was widely held that analytically valid inferences are those whose validity arises solely from the meanings of the expressions occurring in them. In 'The Runabout Inference-Ticket', A.N. Prior takes this view to task, introducing the logical operator 'tonk' to call it into question. I will show that Prior's discussion applies more generally to a modern conception of logic - the conception that was explicitly formulated by David Hilbert and that most logicians have uncritically accepted ever since - according to which a system of logical inferences can be recognised as such if it is internally coherent. I will lay out what is involved in such a view and how Prior's tonk operator can show it to be problematic.
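For readers unfamiliar with 'tonk', a standard gloss (not drawn from the abstract itself): Prior defines the connective purely by rules, pairing the introduction rule of disjunction with the elimination rule of conjunction:

    A ⊢ A tonk B      (tonk-introduction)
    A tonk B ⊢ B      (tonk-elimination)

Chaining the two rules licenses an inference from any premise A to any conclusion B whatsoever, so if rules alone conferred meaning and validity, every inference would count as analytically valid.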

John Thrasher (Monash University)
Stability and the Price of Compliance
Rawls argued that deliberators in the original position choose principles of justice in ideal theory characterized, minimally, by an assumption of “strict compliance.” Many have wondered how the principles selected in “ideal theory” are meant to be applied in non-ideal theory, characterized by partial compliance. I argue that ideal theory is problematic, but not for the commonly held reasons. The problem instead comes from segmenting the choice of principles into multiple ideal stages before proceeding to non-ideal theory. Choice in the original position is segmented into at least two stages: one where the principles are chosen (P) and another where they are evaluated in terms of relative stability (S). Since the first stage includes an assumption of strict compliance, information crucial to evaluation in the second stage is thereby excluded. Choosers in the first stage are selecting principles without knowing the price they will have to pay, in terms of relative stability, for those principles in the second stage. Segmenting the choice in this way, I argue, makes choice in the original position path-dependent, because the principles selected when the stages are ordered one way (P→S) will almost certainly not be the same principles that are selected if the order is reversed (S→P). This arises from a common and well-known property of segmented choices that lack external criteria of consistency. The result, I argue, is that the justification of particular principles depends crucially on the order of choice and the information available at any given stage. Path-dependence in the choice process makes the choice itself non-rationalizable and unstable under full publicity. Importantly, this problem arises entirely at the level of ideal theory. The crucial problem is removing information about relative compliance costs during the selection of principles. I argue that this problem can be solved by modelling the original position as a one-stage choice that includes stability and compliance information.

Lachlan Douglas Walmsley (Macquarie University)
The Origin of Decoupled Representation
Kim Sterelny (2003) develops an account of the evolution of human-like beliefs, known as decoupled representations. According to Sterelny, cognitive maps of the physical or social environment may be the origin of decoupled representations. In this paper, I propose that the origin of decoupled representations is found earlier than Sterelny claims. This proposal is motivated by Wayne Christensen’s (2010) criticism of Sterelny’s account, in which Christensen argues that empirical evidence indicates that decoupled representation evolves much earlier than Sterelny claims—as early as the evolution of the nervous system. I defend Sterelny’s account against Christensen’s criticisms on the grounds that Christensen misinterprets many of Sterelny’s claims. However, although Christensen’s evidence does not show that decoupled representation evolves as early as he suggests, Sterelny’s account struggles to explain some of that evidence unless the process of decoupling begins earlier than the account indicates. I argue that robust tracking, another system of behavioural control in Sterelny’s account, is the origin of decoupled representation.

Zach Weber (University of Otago)
Co-Authors Guillermo Badia (University of Otago) and Patrick Girard (University of Auckland)
What is an inconsistent truth table?
There are nowadays many non-classical logics, some of which have been argued for as the correct logic. And yet it is almost always presupposed that when we are talking about a logic we must be working in a classical metatheory: grammar and truth tables are taken to be 'classically behaved'. Reliance on classical logic is pretty peculiar, though, if one takes seriously the rhetoric that often accompanies non-classical logics. A reason for the classical fallback is simply pragmatic: no one really knows what, for example, a fully paraconsistently constructed truth table looks like. The purpose of this paper is to show the answer.
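For orientation, a standard example presented in the usual classical style (the abstract does not name a particular system, so the choice of LP here is an assumption): the truth tables for the paraconsistent Logic of Paradox (LP) use three values, t (true only), b (both true and false) and f (false only), with t and b designated:

    ¬ |          ∧ | t  b  f        ∨ | t  b  f
    t | f        t | t  b  f        t | t  t  t
    b | b        b | b  b  f        b | t  b  b
    f | t        f | f  f  f        f | t  b  f

Conjunction and disjunction are the minimum and maximum under the ordering f < b < t, and negation maps b to itself. The paper's question is what such tables look like when the metatheory used to construct them is itself paraconsistent.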

Sean Welsh (University of Canterbury)
The Moral Architecture of a Service Robot: Moral Forces, Value Circuits and Patient Necessity
A "moral force" (Jackson, 1992) inclines or compels a cognitive agent to make a decision to act. In a complex decision there may be a range of moral forces "pushing" and "pulling" (i.e. supporting and opposing) a given action. One might regard moral forces as a subset of a more general notion of "cognitive forces" that influence decisions that are ultimately based in physical forces as they apply to body, brain and environment. Such cognitive forces might be detected and transmitted by what I term value circuits. Such circuits motivate biological agents such as human beings. Value circuits, as I define them, start with sensors and end with reportable elements of experience (qualia) that inform or motivate decisions to act. The moral cognition in Arkin (2009) is vigorously debated mostly with a view to banning its application in lethal autonomous weapons systems. Relatively few papers present a detailed critique of its limitations. Arkin (2009) can be characterized as "single circuit" moral cognition for a military robot. This paper seeks to outline what more robust and adaptable "multiple circuit" moral cognition in a civilian robot might look like. I envisage such a robot would be a "service robot" (Scherer, 2010) that might implement ethics based on the needs of human patients along moral lines outlined in Reader (2007) and technical lines partly derived from Arkin (2009) but more complex. In particular, they would be augmented with a robotic implementation of value circuits.

Daniel Wilson (University of Auckland)
Fine Art’s Esteem and the Buck Passing Theory of Value
In his 2014 book, Beyond Art, Dominic McIver Lopes argues that there is no artistic value that is common to all the (fine) arts. In this paper, I argue—contra Lopes—that art esteem is a value that is inextricably linked to art practices and even to our conception of fine art. The structure of this paper is as follows. I begin by showing that Lopes’ arguments for the claim that there is no characteristically artistic value fail to secure that conclusion. I then present my positive argument and evidence in support of the existence of institutionalised art esteem. In the final section I examine the philosophical import of art esteem. I comment on its nature, its role with respect to art’s function (I suggest) of rewarding deep artistic appreciation, and its connection to the evaluative conception of art.