Owen C. King
Current Research

I work in two areas of ethics. First, at the intersection of normative ethics and metaethics, I examine kinds of value, such as well-being, that pertain to individual persons and their lives. Second, in applied ethics, I investigate the moral issues raised by new computing technology, especially machine learning systems.

Well-being and Related Kinds of Value

In short, the central thesis of my dissertation was this: When philosophers and other people have talked about well-being, more than one kind of value has been in play. In my dissertation, I worked to tease apart these different kinds of value. I'm presently working on a few related papers, all of which share the goal of exposing the structure of the conceptual landscape around well-being. The payoff is greater clarity in our thinking about what benefits people. This, in turn, may make us (both as philosophers and citizens) more effective in crafting policies that promote the greatest good.

"Pulling Apart Well-being at a Time and the Goodness of a Life" Ergo 5.13, 349-370. (2018)

I argue that we must distinguish between a person's well-being at a time and the goodness of her life as a whole. Frequently, the concept of well-being and the concept of a good life are used interchangeably. And, even when they are distinguished, it is commonly assumed that the relationship between them is straightforward. I argue that this is a mistake. Of course it is true that the goodness of a person's life partly depends on her well-being at the moments of her life. But the goodness of a life depends also on facts other than momentary well-being. Although others have noted this and hence argued that the goodness of a life cannot be simply the sum of the well-being in the life, I show that the same considerations support a much stronger conclusion: We have no guarantee that increases in well-being, even all else equal, will result in a better life on the whole. The result is that we have at least two distinct concepts of what is good for a person, which ought to be theorized and assessed independently.

"Well-Being and Life Stories" [under development]

I argue for a limit on what can count in our assessments of well-being. In particular, I show that the value of features of a person's life narrative—including meaningfulness in her life, insofar as meaningfulness depends on her life story—cannot be fully encompassed by assessments of well-being. The problem, in short, is that a person's levels of well-being may figure prominently in her life's narrative and may be one determinant of meaningfulness. Since narrative and meaningfulness depend on well-being, if well-being also depended on narrative and meaningfulness, well-being would depend on itself in an objectionably circular way. The result, then, is a further delineation of well-being, in contrast to distinct (though related) dimensions of evaluation.

"De Se Pro-Attitudes and Distinguishing Personal Goodness" [under development]

Think of well-being and the goodness of a person's life as species of what is good for a person—i.e., personal goodness, in contrast to goodness simpliciter. And consider the class of response-dependence theories of personal goodness that say, roughly, that what is good for a person (in some way) is what she is disposed to desire or to favor under certain conditions. A prima facie problem for these theories is that not everything a person is disposed to desire or to favor seems good for her. A person may desire preservation of remote wetlands. But, if the wetlands are preserved, that is not in any straightforward sense good for her; it does not increase her well-being or improve her life, especially if she is unaware of it. The solution I advance is that the desires that are relevant to personal goodness have a special sort of content: they have de se or essentially indexical content. If this is right, it shows us something distinctive about what is good for persons; it shows how well-being, goodness of a life, and the like, are bound up with a distinctively first-personal kind of thinking.

Computing Ethics and Data Ethics

Although I have a wide variety of interests in computing ethics—including issues about privacy, intellectual property, and the changing character of labor—my current work is focused on ethical issues for machine learning. In traditional computing, programmers formulate and code the logical statements that classify data and make decisions. Machine learning is a fundamentally different approach. Instead of programmers writing the logic of classification and decision, the computers generate this logic on the basis of the data (the training data) they are given.
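The contrast between these two approaches can be made concrete with a minimal sketch. The scenario, data, and threshold-learning rule below are invented for illustration; real machine learning systems induce far more complex logic, but the structural point is the same: in one case a programmer writes the decision rule, and in the other the rule is generated from training data.

```python
# Traditional approach: a programmer writes the classification rule by hand.
def hand_coded_classifier(income):
    # The threshold 50 is chosen and coded by the programmer.
    return "approve" if income >= 50 else "deny"

# Machine learning approach: the rule (here, a single numeric threshold)
# is generated from labeled training data rather than written by hand.
def learn_threshold(training_data):
    """Pick the threshold that best separates the labeled examples."""
    best_threshold, best_correct = None, -1
    for t in sorted(income for income, _ in training_data):
        correct = sum(
            ("approve" if income >= t else "deny") == label
            for income, label in training_data
        )
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

# Hypothetical training data: (income, label) pairs.
training_data = [(20, "deny"), (35, "deny"), (60, "approve"), (80, "approve")]
learned_t = learn_threshold(training_data)  # the "logic" comes from the data

def learned_classifier(income):
    return "approve" if income >= learned_t else "deny"
```

On this toy data the learned threshold lands at 60, so the two classifiers disagree about some inputs; which rule the system applies is determined entirely by the examples it was trained on, which is why the quality and character of training data matter morally in the ways discussed below.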

An ethical issue for machine learning emerges when we notice that judgments and inferences, even when issuing from a reliable mechanism, may not be ethically neutral. For example, judgments based on certain stereotypes may be objectionable, even when accurate. Humans tend to be guided and constrained by a sense of discretion when making these judgments, but machine learning systems have no such guidance. This gives rise to some morally worrisome scenarios—particularly, I think, when machine learning systems are tasked with predicting, classifying, or describing what people are thinking. I am currently working on three papers related to this sort of worry.

"Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems" [under review]

Consider new, advanced image recognition systems—systems which are capable of generating natural language descriptions of what is happening in photographs. The outputs of these systems are only as good as the data on which they are trained. Unfortunately, that training data sometimes encodes overly broad generalizations about persons' personalities and intentions. If systems have been trained on such data, then they may describe people in ways that are presumptuous and fail to respect them as individuals. To help avoid this problem, I offer guidelines for the procurement of training data for these sorts of systems.

"Presumptuous Attributions of Aims, Artificial Social Cognition, and Conformity" [under revision]

This article explores some of the ethical dimensions of the attribution, by humans or artificial systems, of aims (like intentions or desires) to human agents. The focus is on presumptuous attributions of aims: attributions based crucially on the premise that one person will have aims like those of superficially similar people. The moral issues here are seldom theorized because, in ordinary interpersonal contexts, the issues are relatively minor and easily addressed. However, the capability of new artificial systems to automatically attribute aims to humans dramatically changes the situation, in two ways. First, aims can now be inferred and attributed with unprecedented speed and scope. Second, artificial systems may perform these attributions without us noticing, thus bypassing the respect that we customarily grant each other. Thus multiplied and embedded, a relatively minor moral concern becomes major. We begin with some detailed examples, to bring the main moral issues into view. The examples serve as touchstones for discussion of two central concepts: over-specific aim attribution and presumptuous aim attribution. Modern collaborative filtering recommender systems (like the systems that intelligently recommend products and services) quite naturally attribute aims over-specifically and presumptuously. A foreseeable consequence of widespread use of these systems is an unwarranted, yet self-perpetuating, inducement of conformity. The paper concludes with a more general examination of the moral significance of presumptuous aim attribution.

"Artificial Social Cognition" [under development]

The goal of this paper is to articulate ethical principles to guide the development, training, and deployment of artificial systems intended to discover and describe the mental states (especially preferences and intentions) of individual persons. Such systems are currently being developed and deployed by the likes of Facebook, Amazon, and Google, as well as many smaller entities. The main philosophical thrust here is that, issues about accuracy aside, there is a distinction to be drawn between ethically appropriate and ethically inappropriate evidential bases for drawing inferences about a person. Failure to rely only on the appropriate basis may amount to a failure to respect a person as an autonomous individual. This point has profound ethical implications for how computer systems ought to handle our data in the coming decades.

"The Value of Free and Open Source Training Data" [under development]

This paper starts from a relatively old idea in computing ethics: that, at least for some applications, there is a strong moral case favoring the creation and use of free and open source software. I examine how this plays out in the context of machine learning systems. With traditional software, the ability to scrutinize, adjust, and adapt the program's underlying logic is sometimes crucial. With a machine learning system, the underlying logic tends to be inscrutable, because it has been produced algorithmically from training data instead of written by a human programmer. Hence, the rationale for the accessibility of source code in traditional software suggests an analogous rationale for making training data for machine learning systems free and accessible. I lay out this case and characterize the sorts of practical scenarios in which it is most pressing.

Computing and Professionalism

I am especially interested in one issue in computing ethics that, unlike the issues addressed in the papers just described, is not directly related to machine learning. The issue is whether, and under what conditions, computing should be considered a profession, in the way that medicine, engineering, librarianship, and law are professions.

"Anti-features, the Developer-User Relationship, and Professionalism" [under development]

This paper is an attempt to develop some conceptual resources—particularly the concept of an anti-feature—helpful for articulating ethically significant aspects of the relationship between software developers and end-users. After describing many examples of anti-features and considering several definitions proposed by others, I explain what all anti-features have in common. Roughly, an anti-feature is some software functionality that (1) is intentionally implemented, (2) is not intended to benefit the user, and (3) makes the software worse from the standpoint of the intended user. (This makes anti-features distinct from both features and bugs.) I argue that, if we are to consider software development a profession, a condition on a person having the status of professional software developer is that she be unwilling to implement anti-features.