Utilitarian Dumbledore

Catlady (Rita Prince Winston) catlady at wicca.net
Sun Dec 9 02:44:51 UTC 2007


No: HPFGUIDX 179728

Pippin Foxmoth wrote in
<http://groups.yahoo.com/group/HPforGrownups/message/179496>:

<< I don't think Dumbledore is a Utilitarian. The statement
"Dumbledore would have been happier than anybody to think there was a
little more love in the world" (HBP ch 29) gives the game away.
Apparently, that's Situational Ethics in a nutshell. (snip) Situational
ethics differs from Utilitarianism in that it recognizes Agape,
absolute unconditional unchanging love for all peoples, as the only
law. >>

I've never read up on Situational Ethics. Your description sounds like
"Love, and do what you will", which I think comes from Augustine of
Hippo. That maxim has been called a form of antinomianism, but it
*doesn't* mean 'do whatever you want, because you're guaranteed to be
saved anyway'. It means that if you love (God, our fellow humans, a
higher good) enough, then your heart will naturally and automatically
want to do good things and be repulsed by bad things.

(By the way, how did Dumbledore become famous for pronouncements that
Love is more powerful than Dark Magic -- how did he come to believe
that Love is so powerful -- when it seems he himself barely ever felt
love before he met Harry? An infatuation with Grindelwald (whether
romantic or purely intellectual). Putting up with Doge, maybe even
showing him a bit of affection from time to time. It was Aberforth,
not Albus, who went on about what a sweet kid Ariana had been, so
Albus's guilt over being the indirect and possibly direct cause of her
death isn't proof that he loved his little sister.)

But Utilitarianism, as proposed by Jeremy Bentham, holds that the most
ethical course of action is the one that results in the greatest good
for the greatest number. This is opposed to all 'absolutist' (to
borrow a word from the thread on Existentialism) systems, in which one
must always follow a rule regardless of the horrible consequences:
never tell a lie, even if a would-be armed murderer asks you where his
intended victim is hiding.

[Interrupting my response to Pippin to respond to what Sharon wrote in
<http://groups.yahoo.com/group/HPforGrownups/message/179533>:

<< A deontological ethic is all about duties, rights and obligations,
and universal moral principles. >>

That's what I called 'absolutist' above. 'Deontological' is the
correct technical term but I don't understand it as a word. "Ontology"
is the area of philosophy that discusses what it means to 'be' or to
'exist', so is 'De-ontology' the study of how to stop existing?

<< For example, Immanuel Kant, famous German philosopher, claimed that
universal moral principles were "categorical imperatives" -- meaning
they apply to everyone all the time -- and the first categorical
imperative is that we should only act as if it would be OK for
everyone to act as we are. In other words, if you can't honestly say
that it would be OK for everyone to do what you are about to do, then
you shouldn't do it because it would be unethical/immoral. >>

And I used, above, the same example that everyone uses: if it is never
okay to lie, does that mean it's not okay to lie to the Gestapo agent
who asks you where Anne Frank is hiding?

[[Pippin Foxmoth offered in
<http://groups.yahoo.com/group/HPforGrownups/message/179607> a version
of Hillel's statement, saying << But in theory, evil is doing to our
fellow beings that which is hateful to us, and good is learning not to
do it. >>

I find that totally non-deontological. It's not based on a set of
rules like don't hit people and always offer to share your food. It's
based on how people feel ('hateful' is a word about feelings) about
what is done to them. How people feel is a result, not a rule.
'Minimize how much you make people suffer' is not that far from
'Maximize how much you make people happy'. ]]

Back to Sharon:
<< Gryffindors are supposed to be deontological--that is, bound by
their duty to others, hence the displays of courage etc in the face of
difficult situations. (snip) So a Gryffindor, such as Harry, should
understand that there are guiding principles for his conduct, which
tell him what he should do in any particular ethical dilemma. (snip)
Now Harry doesn't always abide by this Gryffindor-ish morality. in
fact, he often takes it upon himself to break the rules for a higher
cause >>

I don't believe that 'follow the rules' is a universal moral
principle. Look at the rules instituted by the Carrows, like
practising the Cruciatus Curse on students who were in detention.

I'm a little torn between wanting to say that 'the greatest good for
the greatest number' IS a universal moral principle, and listing
examples of what some people have said are universal moral principles:
Don't lie. Don't kill. Kill the murderers. Forgive those that harm
you. Take revenge on those that harm you. Protect people who are in
danger from people stronger than them. Help the police catch fugitives.

I'm not at all convinced that Gryffindors are supposed to be
deontological, with the possible exception of 'always show courage,
even if being cautious would protect many people besides yourself,
even if running away would lead your enemy into a trap'. The rest of
'Gryffindor chivalry' (which I think was best shown by a RAVENCLAW,
Michael Corner, who rescued a first-year who had been chained up by
the Carrows) seems to me more a matter of taking action to achieve
chivalrous results than a list of rules about what actions to take.

<< Harry (snip) often takes it upon himself to break the rules for a
higher cause, which is utilitarian thinking. (snip) Slytherins are a
paradigm of utilitarian thinking -- they do what they have to do to
achieve their ends. >>

Breaking a rule to achieve the greatest good for the greatest number
IS Utilitarian thinking (see my above comment to Pippin). Doing
something -- anything -- to achieve an end which produces more harm
(even spread among many people) than good is NOT Utilitarian thinking.
Especially if the good achieved is only good for oneself and one's
favorite few people, because self-interest (shout out to Betsy Hp!) is
one of the greatest opponents to ethical behavior. In fact, I think I
could argue that self-interest, rage, and stupidity are the *only*
opponents to ethical behavior.]

Back to my response to Pippin Foxmoth's
<http://groups.yahoo.com/group/HPforGrownups/message/179496>:

I think Lizzyben may have been *exactly* right in calling Dumbledore's
motives Utilitarian. Back in OoP, he indicated that his desire to
vanquish Voldemort was for the sake of multitudes of unspecified
people whom Voldemort would harm if able to do so, and in order to
meet that goal, he *should* be quite willing to sacrifice Harry (and,
as it happens, himself and members of the Order and even innocent
bystanders). The life and freedom of many people outweigh the life
and freedom of a few, and also outweigh absolutist rules against lying
to your allies and setting up an ambush of your allies.

There's nothing non-Utilitarian about saying that Remus should take on
a little extra unhappiness (an unwanted marriage) in order to give a
lot of happiness to Tonks and a little happiness to many people,
Tonks's friends who want to see her happy. My objection is not to
increasing Remus's unhappiness, but to the prediction that marrying
someone who doesn't want to marry you will result in long-term happiness.

The Bentham theory has always made sense to me, altho' I'm aware that
there are many philosophical and practical difficulties with it. One
set of difficulties is whether one person getting 150 points of
benefit really outweighs 100 people each losing 1 point of benefit --
is that really fair to those harmed? 
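That trade-off can be made concrete with a toy calculation. A minimal sketch, with all the utility "points" invented purely for illustration, of how a simple Bentham-style sum comes out, and what it ignores:

```python
# Toy Bentham-style aggregation (all utility numbers are invented).
# One person gains 150 "points" of benefit; 100 people each lose 1 point.
gains = [150]        # the single big winner
losses = [-1] * 100  # the hundred small losers

# The plain utilitarian sum says "do it": net utility is positive.
net_utility = sum(gains) + sum(losses)
print(net_utility)   # 50

# But the sum is blind to distribution: count how many people are harmed.
harmed = sum(1 for u in gains + losses if u < 0)
print(harmed)        # 100 (of 101 people affected)
```

The sum comes out positive, yet 100 of the 101 people affected end up worse off -- which is exactly the fairness objection.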

The question leads to the obvious practical difficulty of measuring
benefit and harm so they can be compared. For one thing, there are
all kinds of different benefits and harms; for another, the same
benefit or harm is not worth exactly the same to each person. Which
leads back to the
philosophical difficulty of *defining* benefit and harm. Which harms
you more: stating verbally that you worship Jupiter and Juno and
Minerva and Mars and the Emperor as gods, or being killed for refusing
to say it?

And when one is trying to decide what action to take, one doesn't
actually *know* what the consequences will *be*, never mind classing
them as benefit or harm and trying to measure them.

All of us take some actions because of the expected results. Most of
us set alarm clocks in the expectation that they will go off at the
set time, and we will then get up. All of us avoid stepping in a
doggy-mess that we see on the sidewalk in the expectation that
stepping into it will cause some to stick to the shoe (or foot). Many
of us take care (or even write and read a budget) to make sure that we
don't spend so much on diamonds or antique harmonicas that we don't
have enough to pay next month's rent.

But we all know that our expectations could be wrong.

Weeks and weeks ago, there were some posts asserting that Dumbledore's
moral/ethical failure was that he tried to act to achieve future
results instead of acting only on the values of the present moment,
because it is very arrogant for a human to think that he can predict
the results of an action. I think that was the first time that I heard
that offered as a philosophical rather than a practical problem with
Utilitarianism.

My own value system views looking toward the future as a duty (more
than merely a virtue), not as a sin. Budgets and diets and planting a
garden are all done looking toward a likely tho' not certain future
outcome. So I certainly don't agree that looking toward future
outcomes is a reason not to try to achieve good outcomes.




