Excerpt from An Illustrated Book of
Bad Arguments by Ali Almossawi
Argument from Consequences
Arguing from consequences is speaking for or against the truth of a statement by appealing to the
consequences it would have if true (or if false). But the fact that a
proposition leads to some unfavorable result does not mean that it is false.
Similarly, just because a proposition has good consequences does not all of a
sudden make it true. As history professor and author David Hackett Fischer puts
it, “It does not follow that a quality which attaches to an effect is
transferable to the cause” [Fischer].
In the case of good consequences, such an argument may appeal to an
audience’s hopes, which at times take the form of wishful thinking. In the case
of bad consequences, the argument may instead play on an audience’s fears. For
example, take Dostoevsky’s line, “If God does not exist, then everything is
permitted.” Discussions of objective morality aside, the apparent grim
consequences of a purely materialistic world say nothing about whether or not
it is true that God exists.
One should keep in mind that such arguments are faulty only when they are
used to support or deny the truth of a statement, and not when they deal
with decisions or policies [Curtis].
For example, a politician may logically oppose raising taxes for fear that it
would adversely impact the lives of his constituents.
This fallacy is one of many in this book that can be termed a red
herring, because it subtly redirects the discussion away from the original
proposition—in this case, to the proposition’s result.
Straw Man
To “put up a straw man” is to intentionally caricature a person’s
argument with the aim of attacking the caricature rather than the actual
argument. Misrepresenting, misquoting, misconstruing, and oversimplifying an
opponent’s position are all means by which one can commit this fallacy. The
straw man argument is usually more absurd than the actual argument, making it
an easier target to attack. It may also lure the other person toward defending
the more ridiculous argument rather than their original one.
For example, a skeptic of Darwinism might say, “My opponent is trying to
convince you that we evolved from chimpanzees who were swinging from trees, a
truly ludicrous claim.” This is a misrepresentation of what evolutionary
biology actually claims, which is that humans and chimpanzees shared a common
ancestor millions of years ago. Misrepresenting the idea is much easier than
refuting the evidence for it.
Appeal to Irrelevant Authority
An appeal to authority is an appeal to one’s sense of modesty,
which is to say, an appeal to the feeling that others are more knowledgeable [Engel],
which may often—but of course not always—be true. One may reasonably appeal to pertinent
authority, as scientists and academics typically do. A vast majority of the
things that we believe in, such as atoms and the solar system, are on reliable
authority, as are all historical statements, to paraphrase C. S. Lewis. An
argument is more likely to be fallacious when the appeal is made to an irrelevant
authority, one who is not an expert on the issue at hand. A similar appeal
worth noting is the appeal to vague authority, where an idea is
attributed to a faceless collective. For example, “Professors in Germany showed
such and such to be true.”
One type of appeal to irrelevant authority is the appeal to ancient
wisdom, in which a belief is assumed to be true just because it originated
some time ago. For example, “Astrology was practiced in ancient China, one of
the most technologically advanced civilizations of the day.” This type of
appeal often overlooks the fact that some things are idiosyncratic and change
naturally over time. For example, “We do not get enough sleep nowadays. Just a
few centuries ago, people used to sleep for nine hours a night.” There are all
sorts of reasons why people might have slept longer in the past. The fact that
they did is insufficient evidence for the argument that we should do so today.
Equivocation
Equivocation exploits the ambiguity of language by changing the meaning
of a word during the course of an argument and using the different meanings to
support an ill-founded conclusion.
(A word whose meaning is maintained throughout an argument is described as
being used univocally.) Consider the following argument: “How can you be
against faith when you take leaps of faith all the time: making investments,
trusting friends, and even getting engaged?” Here, the meaning of the word
“faith” is shifted from a spiritual belief in a creator to a willingness to
undertake risks.
This fallacy is commonly invoked in discussions of science and religion,
where the word “why” may be used equivocally. In one context, it is a word that
seeks cause, which as it happens is the main driver of science, and in
another it is a word that seeks purpose, which deals with morality and
other realms where science may well have no answers. For example, one might
argue: “Science cannot tell us why things are. Why do we exist? Why be moral?
Thus, we need some other source to tell us why things happen.”
False Dilemma
A false dilemma is an argument that presents a limited set of two
possible categories and assumes that everything in the scope of the discussion
must be an element of that set.
Thus, by rejecting one category, you are forced to accept the other. For
example, “In the war on fanaticism, there are no sidelines; you are either with
us or with the fanatics.” In reality, there is a third option: one could very well be neutral; a fourth: one may be against both; and even a fifth: one may empathize with elements of both.
In The Strangest Man, Paul Dirac’s biographer recounts a parable
that physicist Ernest Rutherford once told his colleague Niels Bohr: A man
bought a parrot from a pet store, only to bring it back because it didn’t talk.
After several such visits, the store manager eventually said, “Oh, that’s
right! You wanted a parrot that talks. Please forgive me. I gave you the parrot
that thinks” [Farmelo].
Rutherford was clearly using the parable to illustrate the genius of the silent
Dirac, but one can imagine how someone might use such a line of reasoning to
suggest that a person is either silent and a thinker or talkative
and an imbecile.
Not a Cause for a Cause
This fallacy assumes a cause for an event where there is no evidence that
one exists. When two events occur one after the other (or simultaneously), this
may be by coincidence, or due to some other unknown factor. One cannot conclude
that one event caused the other without evidence. “The recent earthquake
was because we disobeyed the king” is not a good argument.
This fallacy has two specific types: “after this, therefore because of
this” (post hoc ergo propter hoc) and “with this, therefore because of
this” (cum hoc ergo propter hoc). With the former, because one event
preceded another, it is said to have been the cause. With the latter, because
an event happened at the same time as another, it is said to have been the
cause. In various disciplines, this is known as confusing correlation
with causation.
Here is an example paraphrased from comedian Stewart Lee: “I can’t say
that, because in 1976 I did a drawing of a robot and then Star Wars came
out, they must have copied the idea from me.” And here is another that I
recently saw on an online forum: “The hacker took down the railway company’s
website, and when I checked the train schedule, what do you know, they were all
delayed!” What the poster failed to realize is that trains can be late for all
kinds of reasons, so without any kind of scientific control, the inference that
the hacker was the cause is unfounded.
Appeal to Fear
This fallacy plays on the fears of an audience by imagining a scary
future that would be of their making if some proposition were accepted. Rather
than provide solid evidence that the proposition would lead to a certain
conclusion (which might be a legitimate cause for fear), such arguments rely on
rhetoric, threats, or outright lies. For example, “I ask all employees to vote
for my chosen candidate in the upcoming election. If the other candidate wins,
he will raise taxes and many of you will lose your jobs.”
Here is another example, drawn from the novel The Trial: “You
should give me all your valuables before the police get here. They will end up
putting them in the storeroom, and things tend to get lost in the storeroom.”
Here, although the argument is more likely a threat, albeit a subtle one, an
attempt is made at reasoning. Blatant threats or orders that do not
attempt to provide evidence should not be confused with this fallacy, even if
they exploit one’s sense of fear [Engel].
When an appeal to fear proceeds to describe a series of terrifying events
that will occur as a result of accepting a proposition—without clear causal
links between them—it becomes reminiscent of a slippery slope argument.
And when the person making the appeal provides one and only one alternative to
the proposition under attack, it becomes reminiscent of a false dilemma.
Hasty Generalization
This fallacy is committed when one forms a conclusion from a sample that
is either too small or too special to be representative. For example, asking
ten people on the street what they think of the president’s plan to reduce the
deficit can in no way be said to gauge the sentiment of the entire nation.
Although convenient, hasty generalizations can lead to costly and
catastrophic results. For instance, it may be argued that an engineering
assumption led to the explosion of the Ariane 5 rocket during its first
test flight: The control software had been extensively tested with the previous
model, Ariane 4—but unfortunately these tests did not cover all the
possible scenarios of the Ariane 5, so it was wrong to assume that the
data would carry over. Signing off on such decisions typically comes down to
engineers’ and managers’ ability to argue, hence the relevance of this and
similar examples to our discussion of logical fallacies.
There is another example in Alice’s Adventures in Wonderland,
where Alice infers that, since she is floating in a body of water, a railway
station, and thus help, must be close by: “Alice had been to the seaside once
in her life, and had come to the general conclusion, that wherever you go to on
the English coast you find a number of bathing machines in the sea, some children
digging in the sand with wooden spades, then a row of lodging houses, and
behind them a railway station” [Carroll].
Appeal to Ignorance
This kind of argument assumes a proposition to be true simply because
there is no evidence proving that it is false.
Hence, absence of evidence is taken to be evidence of absence. Carl Sagan gives
this example: “There is no compelling evidence that UFOs are not visiting the
Earth; therefore UFOs exist” [Sagan].
Similarly, before we knew how the pyramids were built, some concluded that,
unless proven otherwise, they must have been built by a supernatural power. But
in fact, the “burden of proof” always lies with the person making a claim.
More logically, and as several others have put it, one should ask what is
likely based on evidence from past observation. Which is more likely: that an object flying through space is a man-made artifact or a natural phenomenon, or
that it is aliens visiting from another planet? Since we have frequently
observed the former and never the latter, it is more reasonable to conclude
that UFOs are probably not aliens visiting from outer space.
A specific form of the appeal to ignorance is the argument from
personal incredulity, where a person’s inability to imagine something leads
them to believe that it is false. For example, “It is impossible to imagine
that we actually landed a man on the moon, therefore it never happened.”
Responses of this sort are sometimes wittily countered with, “That’s why you’re
not a physicist!”
No True Scotsman
This argument comes up after someone has made a general claim about a
group of things, and then been presented with evidence challenging that claim.
Rather than revising their position, or contesting the evidence, they dodge the
challenge by arbitrarily redefining the criteria for membership in that group.
For example, someone may posit that programmers are creatures with no
social skills. If someone else comes along and repudiates that claim by saying,
“But John is a programmer, and he is not socially awkward at all,” this may
provoke the response, “Yes, but John isn’t a true programmer.” Here, it
is not clear what the attributes of a programmer are; the category is not as
clearly defined as that of, say, people with blue eyes. The ambiguity allows
the stubborn mind to redefine things at will.
The name of this fallacy was coined by Antony Flew in his book Thinking about Thinking. There, he gives the following example: Hamish is reading the
newspaper and comes across a story about an Englishman who has committed a
heinous crime, to which he reacts by saying, “No Scotsman would do such a
thing.” The next day, he comes across a story about a Scotsman who has
committed an even worse crime. Instead of amending his claim about Scotsmen, he
reacts by saying, “No true Scotsman would do such a thing” [Flew].
Genetic Fallacy
A genetic fallacy is committed when an argument is either devalued
or defended solely because of its origins. In fact, an argument’s history or
the origins of the person making it have no effect whatsoever on its validity.
As T. Edward Damer points out, when one is emotionally attached to an idea’s
origins, it is not always easy to disregard those feelings when evaluating the
argument’s merit [Damer].
Consider the following argument: “Of course he supports the union workers
on strike; he is, after all, from the same village.” Here, the argument
supporting the workers is not being evaluated based on its merits; rather,
because the person behind it happens to come from the same village as the
protesters, we are led to infer that his position is worthless. Here is another
example: “As men and women living in the twenty-first century, we cannot
continue to hold these Bronze Age beliefs.” Why not, one might ask. Are we to
dismiss all ideas that originated in the Bronze Age simply because they came
about at that time?
Conversely, one may also invoke the genetic fallacy in a positive sense,
by saying, for example, “Jack’s views on art cannot be contested; he comes from
a long line of eminent artists.” Here, the evidence used for the inference is
as lacking as in the previous examples.
Guilt by Association
Guilt by association is used to discredit an argument for proposing an
idea that is shared by some socially demonized individual or group. For
example, “My opponent is calling for a healthcare system that would resemble
that of socialist countries. Clearly, that would be unacceptable.” Whether or
not the proposed healthcare system resembles that of socialist countries has no
bearing whatsoever on whether it is good or bad; it is a complete non sequitur.
Another argument, which has been repeated ad nauseam in some societies,
is this: “We cannot let women drive cars because people in godless countries
let their women drive cars.” Essentially, what these examples try to argue is
that some group of people is absolutely and categorically bad. Hence, sharing
even a single attribute with that group would make one a member of it, which
would then bestow on one all the evils associated with that group.
Affirming the Consequent
One of several valid formal arguments is known as modus ponens
(the mode of affirming) and takes the following form: If A then C, A; hence
C. More formally: A ⇒ C, A ⊢ C. A is called the antecedent and C the consequent, and they
form two premisses and a conclusion. For example:
Premiss (If A then C): If water is boiling at sea level, then its temperature is at least 100°C.
Premiss (A): This water is boiling at sea level;
Conclusion (C): hence its temperature is at least 100°C.
Such an argument is sound in addition to being valid.
Affirming the consequent is a formal fallacy that takes this form: If A then C, C; hence A.
The error lies in assuming that because the consequent is true, the antecedent
must also be true, which in reality need not be the case.
For example, “People who go to college are successful. John is
successful, hence he must have gone to college.” Clearly, John’s success could
be a result of schooling, but it could also be a result of his upbringing, or
perhaps his eagerness to overcome difficult circumstances. Generally, because
schooling is not the only path to success, one cannot say that a person
who is successful must have received schooling.
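The contrast between the valid form and the fallacious one can also be made precise with a proof assistant. The following is a minimal sketch in Lean 4, offered only as an illustration: modus ponens is provable for arbitrary propositions A and C, whereas the pattern of affirming the consequent admits a counterexample when A is false and C is true.

-- Modus ponens (valid): from "if A then C" and A, conclude C.
theorem modus_ponens (A C : Prop) (h : A → C) (a : A) : C := h a

-- Affirming the consequent (invalid): "if A then C" and C do not yield A.
-- Counterexample: take A := False and C := True; both premisses hold, yet A fails.
example : ¬ (∀ (A C : Prop), (A → C) → C → A) :=
  fun h => h False True (fun f => f.elim) True.intro

The first theorem type-checks because the premisses guarantee the conclusion; the second shows that no such guarantee exists for the reversed form, which is exactly why John's success tells us nothing certain about his schooling.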
Appeal to Hypocrisy
Also known by its Latin name, tu quoque, meaning “you too,” this
fallacy involves countering someone’s argument by pointing out that it
conflicts with his or her own past actions or statements [Engel].
Thus, by answering a charge with a charge, it diverts attention from the
argument at hand to the person making it. This characteristic makes the fallacy
a particular type of ad hominem attack. Of course, just because someone
has been inconsistent about his position does not mean that his position cannot
be correct.
On an episode of the topical British TV show Have I Got News for You,
a panelist objected to a protest in London against corporate greed because of
the protesters’ apparent hypocrisy, pointing out that while they professed to
be against capitalism, they continued to use smartphones and buy coffee.
Here is another example, from Jason Reitman’s movie Thank You for
Smoking, where a tu quoque–laden exchange is ended by the
smooth-talking tobacco lobbyist Nick Naylor: “I’m just tickled by the idea of
the gentleman from Vermont calling me a hypocrite when this same man, in one
day, held a press conference where he called for the American tobacco fields to
be slashed and burned, then he jumped on a private jet and flew down to Farm
Aid where he rode a tractor onstage as he bemoaned the downfall of the American
farmer.”
Slippery Slope
A slippery slope argument attempts to discredit a proposition by
arguing that its acceptance will undoubtedly lead to a sequence of events, one
or more of which are undesirable.
Although the sequence of events may be possible, with each transition occurring with some probability, this type of argument assumes that every transition is inevitable, while providing no evidence in support of that assumption.
This fallacy plays on the fears of an audience and is related to a number of
other fallacies, such as the appeal to fear, the false dilemma,
and the argument from consequences.
For example, “We shouldn’t allow people uncontrolled access to the
internet. The next thing you know they will be frequenting pornographic
websites, and soon enough, our entire moral fabric will disintegrate and we
will be reduced to animals.” As is glaringly clear, no evidence is given, other
than unfounded conjecture, that internet access implies the disintegration of a
society’s moral fabric. Moreover, the argument presupposes certain things about
people’s behavior within the society.
Appeal to the Bandwagon
Also known as the appeal to the people, this argument uses the
fact that many people (or even a majority) believe in something as
evidence that it must be true. This type of argument has often impeded the
widespread acceptance of a pioneering idea. For example, most people in
Galileo’s day believed that the sun and the planets orbited around Earth, so
Galileo faced ridicule for his support of the Copernican model, which correctly
puts the sun at the center of our solar system. More recently, physician Barry Marshall
had to take the extreme measure of dosing himself with H. pylori
bacteria in order to convince the scientific community that it can cause peptic ulcers, a theory that was initially widely dismissed.
Advertisements frequently use this method to lure people into accepting
something solely because it is popular. For example, “All the cool kids use
this hair gel; be one of them.” Although becoming a “cool kid” is an enticing
offer, it does nothing to support the imperative that one should buy the advertised
product. Politicians also use similar rhetoric to add momentum to their
campaigns and influence voters.