One of my research programs explores the normative dimensions of emotion regulation. For example, suppose you become angry at someone who insulted you. How should you manage this emotion? Are there reasons for you to diminish its intensity or extinguish it altogether? Are there sometimes reasons to increase its intensity, or to provoke yourself (or others) to anger?
I use conceptual resources from philosophy of emotion as the foundation for another research program that investigates the ethics of affective artificial intelligence. What is at stake when we outsource emotional labor to AI? Are we deceived when we have emotional reactions to chatbots designed to simulate social interactions?
Feel free to contact me if you are interested in reading or discussing any of the papers here.
AI Emotion Recognition and Affective Injustice (With Michael Dale.)
In press at Erkenntnis.
Abstract: Artificial intelligence can now recognize our emotions using algorithms that interpret our facial expressions. This technology is used to help assess an applicant’s interview performance, an individual’s potential for criminal behavior, whether a student is paying attention during an online class, and more. Even assuming that such technology could reliably recognize human emotions, it cannot assess whether an emotion is apt, which matters for how we ought to treat someone. Specifically, we argue that such uses of AI Emotion Recognition constitute a form of affective injustice, which occurs when someone’s emotions are treated unjustly. We hope to draw attention to this issue so that designers and proponents of AI Emotion Recognition recognize the principled limitations of the technology in its current state.
How Anger Helps Us Possess Reasons for Action
Forthcoming in Philosophical Quarterly.
Abstract: I argue that anger helps us possess reasons to intervene against others. This is because fitting anger disposes us to intervene against others in light of reasons to do so. I propose that anger is a presentation of reasons that seems to rationalize such interventions, in much the same way that perceptual experience is a presentation of reasons that seems to rationalize our judgments about our environment. In this way, anger can help us possess reasons that make specific actions rational to perform. Moreover, the significance of anger to practical rationality informs how we should regulate anger, especially the anger of others. Along these lines, I argue that it is wrong to prevent anger to the extent that this prevents someone’s possession of reasons to intervene, and it is right to provoke anger to the extent that this enables someone’s possession of reasons to intervene.
Mind Design, AI Epistemology, and Outsourcing (With Susan Schneider and Garrett Mindt.)
Forthcoming in Social Epistemology.
Abstract: From brain-machine interfaces to neural implants, present and future technological developments are not merely tools; they will change human beings themselves. Of particular interest is human integration with AI. In this paper, we focus on enhancements that enable us to outsource our own epistemic work to AI. How does outsourcing epistemic work to enhancements affect the authorship of, and responsibility for, the final product of that work? We argue that, in the context of performing and reporting research, outsourcing does not diminish one’s responsibility for mistakes in the final product, in contrast to a recent position expressed by Coeckelbergh and Gunkel, who hold that concerns about authorship obscure the ethical and social issues here. Moreover, we suggest that this responsibility may sometimes be shared between oneself and the group that designed, marketed, and sold the enhancement. Our investigation does not aim to settle these issues but instead demonstrates their urgency and develops frameworks for understanding them.
There Are No Irrational Emotions
In Pacific Philosophical Quarterly, Vol. 103, Issue 2 (2022).
Abstract: Folk and philosophers alike debate whether particular emotions are rational. However, these debates presuppose that emotions are eligible for rationality. Drawing on examples of how we manage our own emotions through strategies such as taking medication, I argue that the general permissibility of such management demonstrates that emotions are ineligible for rationality. It follows that emotions are never rational or irrational. Since neither perception nor emotion is eligible for rationality, this reveals a significant epistemic continuity between them, lending support to perceptual views of emotion.
The Phenomenology of Encounters with Value
Forthcoming in The Journal of Philosophy of Emotion.
Abstract: Sophie Grace Chappell’s Epiphanies: The Ethics of Experience characterizes encounters with value as passive experiences similar to perceptual experience, with a special focus on epiphanies as significant encounters with value that may revolutionize our worldview. This lends itself to an attractive value epistemology that assimilates some knowledge of value into a model provided by perceptual knowledge. In opposition to Chappell, I argue that some encounters with value are active: sometimes, we experience value as a creation of our own agency. In my view of encounters with value, value is not always given to us through experience as if it were from the external world. I develop a sketch of how this view expands upon Chappell’s value epistemology and understanding of epiphanies.
Frogs or Sleepwalkers? Sycophants or Props? A Reply to Schneider on Chatbot Epistemology
In Social Epistemology Review and Reply Collective, Vol. 14, Issue 10: 120–127.
Abstract: Susan Schneider argues that AI chatbots pose a threat to human autonomy that is difficult for users to detect. Like the proverbial frog in the pot of water that is slowly heating to a deadly boil, we are unaware of the developing threat to our autonomy. Contrary to Schneider, I argue that we are keenly aware of the potential impact of AI chatbots, more so now than in the future if (or when) our society widely adopts and accepts them. I propose that, rather than think of ourselves as frogs in the slowly boiling pot, we are like sleepwalkers before our slumber, drawing upon Langdon Winner’s influential concept of technological somnambulism. Then I complicate Schneider’s view about how AI chatbots will influence our autonomy by suggesting that, rather than think of AI chatbots as hallucinating sycophants, we can (and sometimes do) treat them as props for fictional stories, and our emotional responses to fiction are not necessarily irrational or deceiving. Ultimately, I propose that we transform epistemic and ethical debates about chatbots into practical questions about how we (as a society) want to relate to and use them.
Planescape: Torment as Philosophy: Regret Can Change the Nature of a Man
In The Palgrave Handbook of Popular Culture as Philosophy (2024).
Abstract: In the video game Planescape: Torment, players assume the role of the Nameless One, an immortal being who suffers from amnesia. By making choices for the Nameless One, players decide not only what happens to him but also how his moral character develops. In this way, Planescape: Torment invites its players to consider “what can change the nature of a man.” In the game’s canonical ending, the Nameless One regrets the great harm he inflicted on others, and he gives up his immortality to amend his wrongdoing. Thus, the game holds that it is regret that can change someone’s moral character for the better. A defense of this claim about regret can be found in Aristotle’s view that one must practice virtuous actions in order to develop the moral virtues. The alignment system of Planescape: Torment demonstrates a similar connection between action and character: the Nameless One improves his moral character by taking selfless actions. Since regret motivates one to practice virtuous action to make amends for one’s wrongdoing, regret enables one to develop virtue and, thereby, better moral character. Although Spinoza argues that we should avoid feeling regret because it makes us miserable, Planescape: Torment suggests that the painfulness of regret is what makes it an effective source of motivation to practice virtuous actions.
USS Callister and Non-Player Characters (With Russ Hamer.)
In Black Mirror and Philosophy: Dark Reflections (2019).
Abstract: This chapter explores the ethics of Robert Daly's actions in the episode “USS Callister”. We consider issues of privacy raised by his stealing his co-workers’ DNA in order to scan them into the game, as well as the ethics of how he treats the digital avatars of his co-workers within the game. Examining Daly's actions from a few different approaches, we argue that his treatment of his co-workers’ avatars is very likely immoral, though ultimately we cannot know without knowing Daly's thoughts. Finally, we end the chapter with some considerations about video games in general and the ways in which we must act when we play them.
(Draft) "AI-Generated Writing and Emotional Intimacy"
(Draft) "Emotional Absences, Epistemic Sentimentalism, and Evaluative Gaslighting"
(Early Stage) "AI Companions and the Paradox of Fiction"
(Early Stage) "Completionism as Video Gaming Vice"
(Early Stage) "Emotional Endorsement"