Is it wrong to feed plutonium to beagles for fun?
And how can you prove it?
I’d like to think that it is, in fact, wrong to feed plutonium to beagles for fun[1]. That being said, how would you go about proving that to be the case?
The Realm of Facts
Morality is a nebulous concept, especially so in a situation like this. If we hope to convince a plutonium-wielding villain to voluntarily choose not to subject our lovable beagles to a radioactive breakfast, there are a few ways we could go about doing so. First, we can try to rely on the objective facts we can perceive in our environment.
Beagles do not like eating plutonium
Plutonium is bad for the health of beagles
Most beagles do not eat plutonium
These statements can be a first line of defense because they’re things we can assume everyone mutually agrees on. The downside, however, is that they lack the power to actually change anything. Regardless of whether plutonium is bad for beagles, the villain has their own desires, which override the raw facts of the situation. For example, take Case A: “it is both the case that plutonium is bad for beagles and that I want to feed plutonium to beagles.” Or Case B: “it is both the case that plutonium is good for beagles and that I want to feed plutonium to beagles.” In either case, someone can only be swayed to change their behavior insofar as their personal sentiments lead them to care about the facts at hand. One person might have a sentiment that leads them to consider the nutritional and health value of what they feed to a dog, but that doesn’t prevent another person from simply not caring.
The Realm of Sentiments
So, maybe we should instead appeal to the selfish desires of the plutonium-wielding villain. If it’s really just their sentiments in play, we can try to convince them that they will personally suffer in some way if they go through with their plot.
You’ll have to wield plutonium in order to feed it to the beagles, and wielding plutonium is always bad for your health
If you feed plutonium to these beagles, other beagles might take note and gang up against you in the future
Plutonium is really expensive, so you could choose to sell it and spend that money on something more fun
This seems to have more of a bite to it — we can see how these types of statements can persuade people. Despite that, there are a few clear flaws. First, there’s still a similar problem to the one we encountered in the earlier case: what if they just don’t care? You can say that plutonium is expensive, but what if they don’t care about money as much as they care about their nuclear plot? You can say that it’s bad for their health, but what if they don’t care about their own health?
Furthermore, there’s something of a surface-level problem: doesn’t it feel strange to resolve a moral situation in purely self-interested terms? We, as beagle-lovers, are likely against the plutonium plot because of an intuitive sense of wrongness. While it may be pragmatic to do whatever it takes to save the beagles, even if that means appealing to the selfish nature of the villain, there’s a sense in which it feels wrong to even frame the issue in terms of things like personal finances or someone’s desire to protect themselves. We might think that we shouldn’t have to be making this argument at all. So, what’s left?
The Realm of Objective Morality
If we wish to truly resolve the situation in a satisfying manner, the best we can hope for is an appeal to an objective form of morality. In this case, we aren’t just saying that it would be unsafe, inconvenient, or so on to go through with the plutonium plot. The moral claim is stronger: even if it’s the case that the villain really wants to go through with it, they should be able to rationally understand that it would be wrong to do so. There are various ways to go about this:
God has chosen to make it wrong to feed plutonium to beagles, and God’s moral command is absolute
You must not treat others in a way that you would not wish to be treated. Therefore, if you would not want a beagle to feed plutonium to you, you must not feed plutonium to the beagle
It is categorically wrong to treat any rational being as a mere means to your own ends. Beagles are rational, and it is therefore wrong to use them as a means to the end of your plutonium plot[2]
This type of statement goes beyond an appeal to emotions, circumstances, preferences, and so on. If we can establish some sort of universal moral law, then we can prove that certain acts are moral or immoral. Maybe the villain disagrees, but when we say something is objectively wrong, we are saying that it doesn’t matter if anyone disagrees. However, we still run into the same problem we’ve seen again and again: what if they just don’t care?
We can think of this issue in terms of a spectrum. On one end you have people who don’t give any moral consideration to beagles, and on the other end you have people who give them every moral consideration. We could imagine that there’s some point on that line where someone has just enough care for beagles that they are forced to reckon with any moral arguments. Maybe they don’t care enough to actually change their behavior, but someone with any sense of care would have to at least give the beagles some type of consideration. So the question becomes: what do we do about everyone below the caring threshold?
I see three broad options: convincing them, ignoring them, or punishing them. The first is the most idealistic, and perhaps the least likely to happen in real life. Maybe we could present them with the facts, or sway their hearts with poems, or make them read Tom Regan’s The Case for Animal Rights[3]. If that doesn’t work, we could be forced to try to ignore them. You might see this in the language of compromise: “let’s just agree to disagree”, “that’s just my opinion”, or “it’s okay if you choose to do that, but I won’t support it”. But if we truly think that it’s not just personally disagreeable, but wrong, to feed plutonium to dogs, can we really choose to ignore them? Moral claims are strong for a reason: if we truly think something is wrong, then it would be strange to act as if nothing is wrong.
This leaves us with one option: punishing them, or otherwise intervening ourselves to save the beagles. Maybe you build up a society and make a legal system that has laws that put people in prison if they feed plutonium to beagles. Maybe you enforce your belief through the use of cultural norms and social punishment: only weird sickos would feed plutonium to a beagle, so we should banish anyone who does so from our society. Maybe we take direct action and use force to pry the beagles away from the villain. In a way this solves the problem, but it’s still somewhat unsatisfying. Sure, we were able to save the beagles, but how do we argue that our morality is objectively true while the other person’s isn’t?
The Realm Beyond Ethics
Each of the example arguments so far exists within the realm of ethics: the study of moral phenomena. The whole conversation about beagles and plutonium is argued through moral claims and arguments: “is it wrong to do X?”, “in what cases is it permissible to do Y?”
I’d argue that in trying to solve the issue of plutonium and dogs (or any specific moral claim), we miss a large part of the conversation entirely. This is where metaethics comes in.
Roughly speaking, we can divide the questions we can ask about moral debates into two camps. There are first order questions, which might ask which party is in the right and why. Then there are second order questions, which ask what the parties are doing when they engage in the debate[4]. Here are some examples of metaethical questions:
Can moral statements be true or false in the same way as factual statements?
What type of psychological state does a moral statement express?
Do moral judgments necessarily require a motivation to act as that judgment prescribes?
Because these sorts of questions are so abstract, they tend to be unconsciously ignored during moral conversations. When we engage in a moral disagreement, we tend to jump straight to throwing facts at each other, hoping that if the other person just learns this one thing, they’ll realize they were in the wrong and have a change of heart. But I’d argue that this often misses a step. Before we even argue about the facts of the situation, we should ask ourselves: what does it even mean to say that something is wrong? Otherwise, we run the risk that we both agree on the statement “X is wrong” even though we mean completely different and contradictory things by it.
I’d argue that a large part of the confusion surrounding moral debate is because of a faulty assumption: that moral statements express beliefs. It makes intuitive sense that saying “X is wrong” expresses a belief in the fact that X is indeed wrong, and that we can evaluate such statements in terms of true and false. So why would someone possibly argue that saying “X is wrong” doesn’t express any beliefs, and that it can’t even be true or false? That question is best left for a future post.
[1] The linked study doesn’t say that they did it for fun, but I have my hunches.
[2] Kant did not include non-human animals in his conception of the categorical imperative, but I think he would have if he talked to a smart enough dog.
[3] Or perhaps Singer’s Animal Liberation if they seem to care about means way more than ends.
[4] Here I’m using the definition given in Alexander Miller’s An Introduction to Contemporary Metaethics.