Owen C. King
Current Research

I work in two areas of ethics. First, at the intersection of normative ethics and metaethics, I examine kinds of value, like well-being, that pertain to individual persons and their lives. Second, in applied ethics, I investigate the moral issues raised by new computing technology, especially machine learning systems.


Well-being and Related Kinds of Value

In short, the central thesis of my dissertation was this: When philosophers and other people have talked about well-being, more than one kind of value has been in play. In my dissertation, I worked to tease apart these different kinds of value. I'm presently working on a few related papers, all of which share the goal of exposing the structure of the conceptual landscape around well-being. The payoff is greater clarity in our thinking about what benefits people. This, in turn, may make us (both as philosophers and citizens) more effective in crafting policies that promote the greatest good.

"Pulling Apart Well-being at a Time and the Goodness of a Life" Ergo 5.13, 349-370. (2018)

I argue that we must distinguish between a person's well-being at a time and the goodness of her life as a whole. Frequently, the concept of well-being and the concept of a good life are used interchangeably. And, even when they are distinguished, it is commonly assumed that the relationship between them is straightforward. I argue that this is a mistake. Of course it is true that the goodness of a person's life partly depends on her well-being at the moments of her life. But the goodness of a life also depends on facts other than momentary well-being. Although others have noted this and hence argued that the goodness of a life cannot simply be the sum of the well-being in the life, I show that the same considerations support a much stronger conclusion: We have no guarantee that increases in well-being, even holding all else equal, will result in a better life on the whole. The result is that we have at least two distinct concepts of what is good for a person, which ought to be theorized and assessed independently.

"The Good of Today Depends Not on the Good of Tomorrow: A Constraint on Theories of Well-Being" [under review]

This article addresses three questions about well-being. First, is well-being future-sensitive? I.e., can present well-being depend on future events? Second, is well-being recursively dependent? I.e., can present well-being (non-trivially) depend on itself? Third, can present and future well-being be interdependent? The third question combines the first two, in the sense that a yes to it is equivalent (given some natural assumptions) to yeses to both the first and second. To do justice to the diverse ways we contemplate well-being, I consider our thought and discourse about well-being in three domains: everyday conversation, social science, and philosophy. This article’s main conclusion is that we must answer the third question with no. Present and future well-being cannot be interdependent. The reason, in short, is that a theory of well-being that countenances both future-sensitivity and recursive dependence would have us understand a person’s well-being at a time as so intricately tied to her well-being at other times that it would not make sense to consider her well-being an aspect of her state at particular times. It follows that we must reject either future-sensitivity or recursive dependence. I ultimately suggest, especially in light of arguments based on the assumptions of empirical research on well-being, that the balance of reasons favors rejecting future-sensitivity.

"De Se Pro-Attitudes and Distinguishing Personal Goodness" [under development]

Think of well-being and the goodness of a person's life as species of what is good for a person—i.e., personal goodness, in contrast to goodness simpliciter. And consider the class of response-dependence theories of personal goodness that say, roughly, that what is good for a person (in some way) is what she is disposed to desire or to favor under certain conditions. A prima facie problem for these theories is that not everything a person is disposed to desire or to favor seems good for her. A person may desire preservation of remote wetlands. But, if the wetlands are preserved, that is not in any straightforward sense good for her; it does not increase her well-being or improve her life, especially if she is unaware of it. The solution I advance is that the desires that are relevant to personal goodness have a special sort of content: they have de se or essentially indexical content. If this is right, it shows us something distinctive about what is good for persons; it shows how well-being, goodness of a life, and the like, are bound up with a distinctively first-personal kind of thinking.


Computing Ethics and Data Ethics

Although I have a wide variety of interests in computing ethics—including issues about privacy, intellectual property, and the changing character of labor—my current work is focused on ethical issues for machine learning. In traditional computing, programmers formulate and code the logical statements that classify data and make decisions. Machine learning is a fundamentally different approach: instead of programmers writing the logic of classification and decision, the computer generates this logic from the data (the training data) it is given.
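
To make the contrast concrete, here is a minimal sketch in Python (using scikit-learn; the data, labels, and threshold are invented purely for illustration). The first function is the traditional approach: a human writes the decision rule. The second block is the machine learning approach: a model induces its own rule from labeled examples.

    # Traditional computing: a programmer writes the classification logic by hand.
    def approve_loan_by_rule(income, debt):
        return income - debt > 20000  # a human chose this rule and threshold

    # Machine learning: the logic is induced from labeled training data.
    from sklearn.tree import DecisionTreeClassifier

    training_data = [[50000, 10000], [30000, 25000], [80000, 5000], [20000, 15000]]
    past_decisions = [1, 0, 1, 0]  # 1 = approved, 0 = denied

    model = DecisionTreeClassifier()
    model.fit(training_data, past_decisions)

    # The learned rule reflects whatever patterns (or biases) the data contain.
    print(model.predict([[40000, 12000]]))

Whatever regularities the past decisions exhibit, defensible or not, become the system's logic.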

An ethical issue for machine learning emerges when we notice that judgments and inferences, even when issuing from a reliable mechanism, may not be ethically neutral. For example, judgments based on certain stereotypes may be objectionable, even when accurate. Humans tend to be guided and constrained by a sense of discretion when making these judgments, but machine learning systems have no such guidance. This gives rise to some morally worrisome scenarios—particularly, I think, when machine learning systems are tasked with predicting, classifying, or describing what people are thinking. I am currently working on three papers related to this sort of worry.

"Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems" In On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence, Berkich & d’Alfonso (eds.), Springer, 265-282. (2019)

Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification, on the basis of photographic evidence, of the intentions with which a person acted. Such inferences are liable to be morally objectionable because of a particular way in which they are presumptuous. After elaborating this moral concern, I explore the possibility that carefully procuring the training data for image recognition systems could ensure that the systems avoid the problem. The lesson of this paper extends beyond the particular case of image recognition systems and the challenge of responsibly identifying a person’s intentions. Reflection on this case demonstrates the importance (as well as the difficulty) of evaluating machine learning systems and their training data from the standpoint of moral considerations that are not encompassed by ordinary assessments of predictive accuracy.

"Presumptuous Attributions of Aims, Artificial Social Cognition, and Conformity" [under review]

This article examines the ethical dimensions of artificial systems' attributions of mental states (particularly aims, such as intentions and desires) to humans. As such, this article is an inquiry into the ethics of artificial social cognition. The focus will be presumptuous attributions of aims—here understood as aim attributions based crucially on the premise that the person in question will have aims like those of superficially similar people. As I will point out, though seldom theorized, this sort of presumptuousness is a moral concern with which we are already acquainted. The technologies in focus will be recommender systems based on collaborative filtering, systems that are now commonly used to automatically recommend products to consumers. Examination of these systems demonstrates that they quite naturally attribute aims presumptuously. I argue that unreflective adoption of such systems is morally undesirable, in part because a foreseeable consequence of such adoption is an unwarranted, yet self-perpetuating, inducement of conformity.
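
To illustrate the inference pattern at issue, here is a toy sketch of user-based collaborative filtering in Python (the users, items, and ratings are invented; this is my gloss, not an excerpt from the paper). The system predicts what a person will want solely from the recorded preferences of similar users; that is precisely the premise that her aims resemble those of superficially similar people.

    import math

    # Invented ratings: users x items, on a 1-5 scale.
    ratings = {
        "ann":  {"book_a": 5, "book_b": 4, "book_c": 1},
        "ben":  {"book_a": 4, "book_b": 5, "book_c": 2},
        "cara": {"book_a": 1, "book_b": 2, "book_c": 5},
        "dana": {"book_a": 5, "book_c": 1},  # dana has not rated book_b
    }

    def similarity(u, v):
        # Cosine similarity over the items both users have rated.
        shared = set(ratings[u]) & set(ratings[v])
        num = sum(ratings[u][i] * ratings[v][i] for i in shared)
        den = (math.sqrt(sum(ratings[u][i] ** 2 for i in shared))
               * math.sqrt(sum(ratings[v][i] ** 2 for i in shared)))
        return num / den if den else 0.0

    def predict(user, item):
        # Infer this user's preference from similar users' recorded preferences.
        others = [u for u in ratings if u != user and item in ratings[u]]
        pairs = [(similarity(user, u), ratings[u][item]) for u in others]
        total = sum(w for w, _ in pairs)
        return sum(w * r for w, r in pairs) / total if total else None

    # The system attributes to dana a preference resembling ann's and ben's.
    print(predict("dana", "book_b"))  # roughly 4.1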

"Self-fulfilling Prophecies and Feedback Loops in Automated Prediction" (with Mayli Mertens) [under development]

A self-fulfilling prophecy (SFP) may be defined as a prediction that is partially responsible for its own fulfillment. This simple definition is correct, but not especially informative. It does little to help us diagnose SFPs or notice the conditions where they are likely to occur. This paper is part of a project to develop and apply a theory of SFPs, especially in the context of automated prediction. In this paper, we first lay out our theory of SFPs. This includes a discussion of the nature of prediction, and an articulation of two necessary conditions for a prediction to be self-fulfilling. According to the different ways that a prediction might meet these conditions, we distinguish three types of SFPs. With our theory in hand, we examine two cases: automated product recommendation, as by recommender systems, and recommendation of sources of information.
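
As a toy illustration of the feedback loop (my own sketch, not the theory developed in the paper), consider a recommender that predicts which of two items a user will choose, where the recommendation itself raises the predicted item's chance of being chosen and each observed choice feeds back into future predictions. The numbers below are invented.

    import random

    random.seed(0)

    # Invented baseline: the user is genuinely indifferent between the items.
    true_pref = {"item_x": 0.5, "item_y": 0.5}
    counts = {"item_x": 1, "item_y": 1}   # smoothed record of observed choices
    BOOST = 0.3  # how much being recommended raises an item's chance of choice

    for _ in range(1000):
        # Predict (and recommend) the item chosen more often so far.
        predicted = max(counts, key=counts.get)
        p = min(1.0, true_pref[predicted] + BOOST)
        other = "item_y" if predicted == "item_x" else "item_x"
        chosen = predicted if random.random() < p else other
        counts[chosen] += 1  # the outcome becomes evidence for future predictions

    # Despite genuine indifference, the early prediction has largely fulfilled
    # itself: the recommended item dominates the record of choices.
    print(counts)

The prediction is partially responsible for its own fulfillment: remove the recommendation's influence (set BOOST to zero) and the asymmetry disappears.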

"Artificial Social Cognition" [under development]

The goal of this paper is to articulate ethical principles to guide the development, training, and deployment of artificial systems intended to discover and describe the mental states (especially preferences and intentions) of individual persons. Such systems are currently under development and are already being deployed by the likes of Facebook, Amazon, and Google, as well as many smaller entities. The main philosophical thrust is that, issues about accuracy aside, there is a distinction to be drawn between ethically appropriate and ethically inappropriate evidential bases for drawing inferences about a person. Failure to rely only on an appropriate basis may amount to a failure to respect a person as an autonomous individual. This point has profound ethical implications for how computer systems ought to handle our data in the coming decades.


Computing and Professionalism

I am especially interested in one issue in computing ethics that, unlike those addressed in the papers just described, is not directly related to machine learning. The issue is whether and under what conditions computing should be considered a profession, in the way that medicine, engineering, librarianship, and law are professions.

"Anti-features, the Developer-User Relationship, and Professionalism" [under development]

This paper is an attempt to develop some conceptual resources—particularly the concept of an anti-feature—helpful for articulating ethically significant aspects of the relationship between software developers and end-users. After describing many examples of anti-features and considering several definitions proposed by others, I explain what all anti-features have in common. Roughly, an anti-feature is some software functionality that (1) is intentionally implemented, (2) is not intended to benefit the user, and (3) makes the software worse from the standpoint of the intended user. (This makes anti-features distinct from both features and bugs.) I argue that, if we are to consider software development a profession, a condition on a person's having the status of professional software developer is that she be unwilling to implement anti-features.
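
As a schematic illustration of clauses (1) through (3), here is an invented example in Python (mine, not the paper's): a document editor whose developer deliberately disables export for non-paying users.

    from dataclasses import dataclass

    @dataclass
    class User:
        has_paid: bool

    def export_document(text: str, user: User) -> bytes:
        # Anti-feature: the refusal below (1) is intentionally implemented,
        # (2) is not intended to benefit the user (it exists to push upgrades),
        # and (3) makes the software worse from the intended user's standpoint.
        # It is neither a feature (it serves no user) nor a bug (the program
        # behaves exactly as designed).
        if not user.has_paid:
            raise PermissionError("Export is disabled in the free version.")
        return text.encode("utf-8")  # stand-in for real document rendering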