The first book of rationality
This is part 1 of 6 in my series of summaries. See this post for an introduction.
Part I: Map and Territory
This part introduces the Bayesian ideas of
belief, evidence, and rationality. Ideally, our theories should function like
maps for navigating the world – these are proper
beliefs. In practice this isn’t always the case, due to cognitive bias.
Biases come in different
forms. A “statistical bias” is when you learn from consistently
unrepresentative samples of data, leading to worse predictions. A cognitive bias is a systematic error in
how we think – which can skew our beliefs so that they less accurately
represent the facts, or skew our decision-making so that we less reliably
achieve our goals. When the word “bias” is used in this text, it will mostly
refer to cognitive bias. Rationality
is about forming the best beliefs and making the best choices you can with the
evidence you have and the situation you are in. It is therefore the project of
overcoming cognitive bias.
The idea of rationality
studied by mathematicians, psychologists and social scientists is different
from the Hollywood stereotype of the “rationalist” who suppresses all emotion
and intuition. In reality, there are cases where it helps to take your feelings
into account when deciding, and to avoid overthinking things. A rationalist
would use “System 1” (fast implicit cognition) processes when they are
reliable, while keeping in mind that gut intuition alone isn’t a reliable
guide for when to trust System 1. Hence the need for “System 2” (slow explicit
cognition) processes.
The sciences of mind tell us
that biases often result when our brains use shortcuts known as cognitive heuristics to guess at the
right answer. They often work, but can also lead us to make predictable
mistakes. Researchers have discovered many faces of human bias, including
people’s tendency not to notice their own biases! To do better, we need a
systematic understanding of why good reasoning works and of how the brain falls
short of it. This is the approach of “Rationality: From AI to Zombies”.
1. Predictably Wrong

This chapter talks about cognitive bias in general, as a feature of our minds’ structure.
Rationality refers to the art of making one’s mental
map correspond to the territory (epistemic
rationality) and achieving one’s values (instrumental rationality). In other words, epistemic rationality
deals with systematic methods of improving the accuracy of our beliefs and finding
out the truth, like Bayesian probability theory. Instrumental rationality deals
with systematic methods of making the world more as we’d like it to be and
accomplishing our goals, for example by using decision theory. We use these
methods because we expect them to systematically work, but they are not
inherent to the concept of rationality. And being rational as a human involves
more than just knowing the formal theories; it is not realistic for people to
fully obey the mathematical calculations of the Bayesian laws underlying
rational belief and action. Nevertheless, we can still strive to be less wrong and win more.
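To make the two formal tools named above concrete, here is a minimal sketch (with made-up numbers, not from the book): a Bayesian update standing in for epistemic rationality, and an expected-utility comparison standing in for instrumental rationality.

```python
# Minimal sketch, hypothetical numbers: update a belief with Bayes' rule,
# then choose the action with the higher expected utility.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after observing the evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Epistemic rationality: revise P(rain today) after seeing dark clouds.
p_rain = bayes_update(prior=0.3, p_evidence_if_true=0.8, p_evidence_if_false=0.2)

# Instrumental rationality: pick the action with the higher expected utility.
utility = {("umbrella", "rain"): 0, ("umbrella", "dry"): -1,
           ("no umbrella", "rain"): -10, ("no umbrella", "dry"): 1}

def expected_utility(action):
    return p_rain * utility[(action, "rain")] + (1 - p_rain) * utility[(action, "dry")]

best_action = max(["umbrella", "no umbrella"], key=expected_utility)
print(round(p_rain, 2), best_action)   # 0.63 umbrella
```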
Contrary to popular belief, rationality does not oppose all emotion. It can either
intensify or diminish feelings, depending on how the world is, since our
emotions arise from our models of reality. We are allowed to care and feel
strongly, as long as those feelings are not opposed by the truth. For example, a rational
belief that something good has happened, leads to rational happiness. When
something terrible happens, it is rational to feel sad. Remember: “That which
can be destroyed by the truth should be”, as P.C. Hodgell said, but also “that
which the truth nourishes should thrive.”
Three main reasons why we seek out truth (i.e. epistemic rationality) are: firstly, for curiosity; secondly,
for practical utility (knowing how to effectively achieve goals); and thirdly, for
intrinsic value. Pure curiosity, as an emotion, is not irrational (see above).
Instrumentally, you have a motive to care about the truth of your beliefs regarding
anything you care about. In this case, truth serves as an outside verification
criterion. Treating rationality as a moral duty, however, can be problematic if
it makes us overly dogmatic in our approach to proper modes of thinking.
Nonetheless, we still need to figure out how to avoid biases.
Biases are obstacles to
truth produced by the shape of our mental machinery, and insofar as we care about
truth, we should be concerned about such obstacles. Our brains are not
evolutionarily optimized to maximize epistemic accuracy. Biases don’t arise
from brain damage or adopted moral duties, but are universal in humans. Note
that a bias is not the same as a mistake (i.e. an error in cognitive content,
like a flawed belief), but can result in mistakes. The truth is a narrow
target, and the space of failure is much wider; thus what we call a “bias” is
less important than improving our truth-finding techniques.
The availability
heuristic is when we judge the probability or frequency of an event by ease
of retrieval from memory (i.e. how easily examples of it come to mind). By not
taking into account that some pieces of evidence are more memorable or easier
to come by than others, this leads to biased estimates. Therefore we tend to
underestimate the likelihoods of events that haven’t occurred recently (and may
even consider them “absurd”) while we overestimate the frequency of events that
receive more attention. For example, homicide is less common than we perceive,
and we neglect the risk of major environmental hazards. Selective reporting by
the media is a major cause of this.
The conjunction
fallacy is when we assume that a more detailed description is more
plausible, even though probability theory says that the joint probability of X and Y must be equal to or less than the
probability of Y. In mathematical terms: P(X&Y) ≤
P(Y). For example, the set of
accountants who play jazz is a subset of all people who play jazz. Yet
psychology experiments have found that subjects on average rate the statement
“Linda is a bank teller and is active in the feminist movement” as more
probable than “Linda is a bank teller.” This is partly
because subjects use a judgment of
representativeness to judge the probability of something by how typical it
sounds.
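A quick numeric check, with invented probabilities for illustration, shows why the conjunction can never beat either conjunct:

```python
# Hypothetical numbers for the Linda problem: whatever values you pick,
# the conjunction can never be more probable than either conjunct.
p_bank_teller = 0.05
p_feminist_given_bank_teller = 0.2

p_feminist_bank_teller = p_bank_teller * p_feminist_given_bank_teller  # 0.01
assert p_feminist_bank_teller <= p_bank_teller   # P(X & Y) <= P(Y), always
```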
To avoid the conjunction
fallacy, you have to notice the word “and”, and then penalize the probability
of the scenario. Feel every added detail as a burden. For each independent detail,
ask where it came from, and how you know it.
The planning
fallacy is when people underestimate how long a project will take to finish
(and ignore potential unexpected catastrophes). We tend to envision that
everything will go as expected and overestimate our efficiency at achieving a
task. Reality usually delivers results somewhat worse than our planned “worst
case”. This helps explain why so many construction projects go over budget. It
can be countered by an outside view –
asking how long it took to finish broadly similar projects in the past. Avoid
thinking about your project’s special unique features.
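As a rough sketch of the outside view (with invented past-project figures), reference class forecasting might look like this:

```python
# Outside-view sketch: estimate from how long broadly similar past projects
# took, instead of reasoning about this project's special features.
import statistics

past_overruns = [1.3, 1.8, 1.1, 2.4, 1.5]    # hypothetical actual/planned ratios
planned_weeks = 10                            # the inside-view plan

outside_view_weeks = planned_weeks * statistics.median(past_overruns)
print(outside_view_weeks)                     # 15.0, not 10
```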
The illusion
of transparency is when we expect others to understand what we mean by our
words, because it seems obvious to us. We know what our own words mean. Yet our
words are more ambiguous than we think: we overestimate how often others
understand our intentions, and we underestimate differences in interpretation. Experimental
results have confirmed this: for example, when given ambiguous sentences to
speak in front of an audience (like “the man is chasing a woman on a bicycle”),
subjects thought that they were understood in 72% of the cases, but they were
actually understood in 61% of the cases.
We are biased to underestimate inferential distances (the length of a chain
of reasoning needed to communicate an explanation to another person), because
in the ancestral environment you were unlikely to be more than one inferential
step away from anyone else. Everyone then had shared background knowledge – not
so today. Thus we tend to explain only the last step of an argument and not
every step that must be taken from our listeners’ premises. This could be why
scientists today have difficulty in communicating new knowledge about
complicated subjects to outsiders. A clear argument should start from what the
audience already accepts.
Unlike mice or other animals, we humans can
think about our thinking processes, reason about our reasoning, and make
separate mental buckets for the map and territory. We can understand how our
eyes and visual cortexes enable us to see light, and thus we can distinguish
between senses and reality and recognize optical illusions. We can even apply
reflective corrections to our systematic errors. (Science is one such process.)
So although our minds are flawed lenses, this ability to see our own flaws and apply
second-order corrections to biased first-order thoughts makes us more powerful.
2. Fake Beliefs

In this chapter we look at ways that one’s expectations can come apart from one’s professed beliefs. Errors come not only from our minds’ structure, but can also take the form of our minds’ contents, such as bad habits and bad ideas that have been invented or evolved.
Not all beliefs are directly about sensory
experiences, but beliefs should “pay rent” in anticipations of experience. Always
ask which experiences to anticipate and which not to anticipate (instead of
“what statements should I believe?”). For example, if you believe phlogiston is
the cause of fire, then what do you expect to see happen because of that? What
does this belief not allow to happen? Beliefs may posit causes behind
sensory experience, but if they lose their connection to anticipated experience they end up floating
(detached from reality). Arguments about floating networks of belief can go on
forever. It may sound like two people are disagreeing over whether a piece in
an art museum is “great art”, but they probably do not differ in terms of
anticipated experiences: both would predict lots of artists talking about it
and being influenced by it, and also predict that most casual museum visitors
would not call it beautiful.
Yudkowsky writes a parable about a society of
people forced to live underground for centuries. In this world, the sky is only
a legend, and people are divided over whether this “sky” is blue or green. Taking
on a belief has acquired social implications, and the result is a variety of
compromises to truth-seeking. The Green faction and Blue faction vote
differently and have at times been violent toward each other. One day, an
explorer in the upper caverns discovers an opening to the sky, and various
citizens finally see the sky’s true color. However, not all react the same way.
The scientists are the ones who are excited to explore and learn about this new
outside world.
Belief-as-anticipation can diverge from
cognitive behavior that protects floating propositional beliefs or self-image.
This manifests as “belief in belief”: believing that you ought to believe.
Hence people can anticipate in advance which experimental results they will
need to excuse; for example, someone claims that there is a dragon in their
garage, yet they avoid falsification. If you look for the dragon, the person
will say “it’s an invisible dragon!” Can we hear heavy breathing? “It’s an
inaudible dragon!” and so on. This indicates that some part of their mind knows
what’s really going on. They do not anticipate as if the dragon were real, but
they may honestly believe that they
believe there is a dragon – perhaps because they think it virtuous or
beneficial to believe. In addition, the person will deny that they merely believe
in belief, since believing in belief isn’t considered virtuous.
When somebody’s anticipations get out of sync
with what they believe they believe, point out how their hypothesis is
vulnerable to falsification. For example, imagine someone says “I don’t believe
artificial intelligence is possible because only God can make a soul.” You can
reply that either their religion allows for AI to happen, or that our building
an AI would disprove their religion. One possible outcome is that they will backpedal
or say “let’s agree to disagree on this” – but Aumann’s Agreement Theorem shows that if two honest Bayesian rationalists
with common priors disagree, at least one is doing something wrong. (Ideally, the
two agents would update on each other’s beliefs until they reach agreement.)
When parents can’t verbalize an object-level
justification or want their child to stop asking questions (e.g. about
religion), they appeal to “adulthood”, saying things like “you’ll understand
when you’re older”. But “adulthood” is more about peer acceptance than about
being right. Beware errors in the style of maturity! Dividing the world up into “childish” and
“mature” is not a useful way to think, because nobody is done with maturing. And the stars in the night sky
are much older than any of us, and future intergalactic civilizations may
consider us infants by their standards.
Often people display
neutrality or suspended judgment as a way to signal maturity or wisdom, as if
they were above the conflict. They fear being embarrassingly wrong or losing
their reputation for standing above the fray. However, neutrality is still a
definite judgment, and like any judgment it can be wrong! Truth is not handed
out in equal parts before a dispute, and refusing to take sides is seldom the
right course of action. To care too much about your public image is to limit
your true virtue. Prioritize your responsibilities on the basis of limited
resources, not wise pretensions.
The claim that religion is a “separate
magisterium” or metaphor which cannot be proven or disproven is a lie. Religion
originally made claims about the world, but the evidence was stacked against
it, so religion made a socially-motivated retreat to commitment (belief in
belief). In the past, religions made authoritative claims about everything,
from science to law, history, government, sexuality and morality. Only recently
have religions confined themselves to ethical claims in an attempt to be non-disprovable,
and people still see religion as a source of ethics. But since ethics has
progressed over time, we should expect the ethical claims in ancient scripture
to be flawed as well.
Some people proudly flaunt scientifically
outrageous beliefs (like pagan creation myths) – not to persuade others or
validate themselves, but either to
profess or to cheer for their side (akin to marching naked at a pride
parade). This is even weirder than “belief in belief”. These people aren’t
trying to convince themselves that they take their beliefs seriously, but they
just loudly cheer their beliefs, like shouting “Go Blues!”
Anticipation-controlling beliefs are proper
beliefs, while professing, cheering, and belief-in-belief can be considered
improper beliefs.
Another type of improper belief is belief-as-clothing: beliefs that fail to
control anticipated experiences can function as group identification (like
uniforms do). Once you identify with a tribe (whether a sports team or
political side), you passionately cheer for it, and you can’t talk about how
the Enemy realistically sees the world. For example, it is considered
Un-American in Alabama to say that the Muslim terrorists who flew into the
World Trade Center saw themselves as brave and altruistic. Identifying with a
tribe is a very strong gut-level emotional force, and people will die for it.
If a normal-seeming statement is not followed
by specifics or new information, then it’s likely an applause light telling the audience to cheer. For example: “We need
a democratic solution!” Words like “democracy” or “freedom” are often applause
lights that are used to signal conformity and dismiss difficult problems,
because no one disapproves of them. This also applies to people talking about “balancing
risks and opportunities” or solving problems “through a collaborative process”
without following it up with specifics. Such statements do not have
propositional meaning.
3. Noticing Confusion

This chapter provides an explanatory account of how rationality works: why it is useful to base one’s behavior on rational expectations, and what it feels like to do so.
Like time and money, your anticipation is a
limited resource which you must allocate as best you can, by focusing it into
whichever outcome actually happens. That way, you don’t have to spend as much
time concocting excuses for other outcomes. Post-hoc theories that “explain”
all possible outcomes equally well (as used by TV pundits to explain why bond
yields always fit their pet market theory) don’t focus uncertainty. But if you
don’t know the outcome yet (no one can foresee the future), then you should
spend most of your time on excuses for the outcomes you anticipate most.
Evidence is an event entangled by chains of
cause-and-effect with a target of inquiry, such that it correlates with
different states of the target. The event should be more likely if reality is
one way than if reality is another. For example, your belief about your
shoelaces being untied is the outcome of your mind mirroring the state of your
actual shoelaces via light from the Sun bouncing off your shoelaces and striking
the retina in your eye, which triggers neural impulses. If the photons ended up
in the same physical state regardless of whether your shoelaces were tied or
untied, the reflected light would not be useful evidence about your shoelaces. If
your eyes and brain work correctly, then your beliefs will end up entangled with
reality, and be contagious (i.e. your beliefs themselves would be evidence).
Beliefs that are not entangled with reality are not knowledge but blind faith.
This also means that you should, at least conceivably, be able to believe otherwise given
different observations.
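To make “entanglement” concrete, here is a small sketch with hypothetical numbers: an observation only counts as evidence if its probability differs depending on how the world actually is.

```python
# An observation is evidence about something only if its probability differs
# depending on the state of the world (likelihood ratio != 1).
def likelihood_ratio(p_obs_if_true, p_obs_if_false):
    return p_obs_if_true / p_obs_if_false

# Working eyes: "I see untied laces" is far more likely if they really are untied.
print(likelihood_ratio(0.99, 0.01))   # 99.0 -- entangled, so it shifts belief

# Photons that end up in the same state either way carry no information.
print(likelihood_ratio(0.5, 0.5))     # 1.0 -- no entanglement, no update
```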
We may accept someone’s personal testimony or
hearsay as Bayesian rational evidence, but legal evidence has to meet
particular standards (so that it doesn’t get abused by those with power), and
scientific evidence must take the form of publicly reproducible generalizations
(because we want a reliable pool of human knowledge). This is different still
from historical knowledge, which we cannot verify for ourselves. Science is
about reproducible conditions rather than the history of a particular experiment.
Predictions based on science, even if not yet tested, can be rational to
believe. In a way, scientific evidence and legal evidence are subsets of
rational evidence.
You require evidence to form accurate beliefs,
but how much depends on (a) how
confident you wish to be; (b) how a
priori unlikely the hypothesis seems; and (c) how large the
hypothesis-space is. Thus if you want to be very confident, and you are
considering one hypothesis out of many, and that hypothesis is more implausible
than the others, you will need more evidence. The quantity of evidence can be
measured using mathematical bits,
which are the log base ½ of probabilities. For example, an event with a 12.5%
chance conveys log½(0.125) = 3 bits of information when it happens.
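The arithmetic can be checked directly; in these units, independent pieces of evidence simply add their bits:

```python
import math

def bits(probability):
    """Information conveyed when an event of this probability happens."""
    return -math.log2(probability)     # the same as log base 1/2 of p

print(bits(0.125))                     # 3.0 bits, matching the 12.5% example
print(bits(0.5))                       # 1.0 bit: one fair coin flip's worth
print(bits(0.5 ** 10))                 # 10.0 bits: ten independent flips' worth
```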
You need an amount of evidence equivalent to
the complexity of the hypothesis just to locate it and single it out for
attention in the space of possibilities. Before Albert Einstein’s theory of
General Relativity was experimentally confirmed by Sir Arthur Eddington,
Einstein was confident that the observations would match his theory. This
suggests that Einstein already had enough evidence at hand when he first
formulated the hypothesis. From a Bayesian perspective, he wasn’t as arrogant
as he seemed.
Occam’s Razor is often phrased as “the simplest
explanation that fits the facts”, but what does it mean for a theory to be
complex? With formalisms of Occam’s Razor, the complexity of descriptions is
measured by Solomonoff Induction (the
length of the shortest computer program that produces the description as
output) or Minimum Message Length
(the shortest total message as a function of a string describing a code plus a
string describing the data in that code). It is not measured in English
sentences. To a human, “Thor” feels like a simpler explanation for lightning
than Maxwell’s Equations, but that’s because we don’t see the full complexity
of an intelligent emotional mind. Of course, it is easier to write a computer
program that simulates Maxwell’s Equations than one simulating Thor.
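A toy illustration of the Minimum Message Length idea (the coin data and the 7-bit parameter cost are invented): a model’s total cost is its own description length plus the length of the data encoded under it, so extra complexity must pay for itself in better compression.

```python
import math

def data_bits(flips, p_heads):
    """Bits needed to encode the flip sequence under a given coin model."""
    return sum(-math.log2(p_heads if f == "H" else 1 - p_heads) for f in flips)

flips = ["H"] * 75 + ["T"] * 25        # hypothetical data: 75 heads, 25 tails

# Model A: "fair coin" -- nothing extra to state, but the data fit poorly.
mml_fair = 0 + data_bits(flips, 0.5)           # 100.0 bits

# Model B: "biased coin, p = 0.75" -- pay (say) 7 bits to state the parameter,
# but the data become cheaper to encode.
mml_biased = 7 + data_bits(flips, 0.75)        # about 88.1 bits

print(mml_fair, mml_biased)   # the extra parameter pays for itself here
```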
Belief is easier than disbelief because we
believe instinctively and require conscious effort to disbelieve. But a
rationalist should be more confused by fiction than by reality, for a model
that fails to constrain anticipation (by permitting everything and forbidding
nothing) is useless. If you are equally good at explaining any outcome, you
have zero knowledge. When trying to explain a story, pay attention to the
feeling of “this feels a little forced.” Your feeling of confusion is a clue –
don’t throw it away. Either your model is false or the story is wrong.
During the Second World War, California
governor Earl Warren argued that the very absence of sabotage by
Japanese-Americans was a sign that a Fifth Column (subversive group)
existed. But the absence of a Fifth Column is more likely to produce an
absence of sabotage! Absence of proof is not proof of absence. However, in
Bayesian probability theory, absence of evidence is evidence of absence. If something being present increases your
probability of a claim being true, then its absence must decrease it. Mathematically,
P(H|~E) < P(H) < P(H|E) where E is evidence for hypothesis H. Whether the
evidence is strong or weak depends on how much more probable the hypothesis
makes the evidence than its negation does (the likelihood ratio).
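A small Bayes-rule check with invented numbers makes the inequality concrete: if sabotage would raise P(Fifth Column), then its absence must lower it, though perhaps only weakly.

```python
# Hypothetical numbers: H = "a Fifth Column exists", E = "sabotage is observed".
p_h = 0.2                  # prior
p_e_given_h = 0.5          # a Fifth Column sometimes produces sabotage
p_e_given_not_h = 0.05     # sabotage is rare without one

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)          # 0.14
p_h_given_e = p_e_given_h * p_h / p_e                           # ~0.71
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)           # ~0.12

assert p_h_given_not_e < p_h < p_h_given_e   # absence of E is evidence against H
```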
Because P(H) = P(H|E)P(E) + P(H|~E)P(~E), the expected value of the posterior
probability must equal the prior probability. Thus for every expectation of evidence there is an
equal and opposite expectation of counterevidence. If you are about to make an
observation, then you must, on average, expect to be exactly as confident
afterwards as when you started out. So a true Bayesian cannot seek out evidence
to confirm their theory, but only to test their theory. Ignoring this is like
holding anything an accused witch does or says as proof against her.
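Using the same invented Fifth Column numbers, we can verify conservation of expected evidence: the probability-weighted average of the two possible posteriors is exactly the prior.

```python
# Continuing the hypothetical Fifth Column numbers above:
p_h, p_e = 0.2, 0.14
p_h_given_e, p_h_given_not_e = 0.1 / 0.14, 0.1 / 0.86

expected_posterior = p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e
print(round(expected_posterior, 10))   # 0.2 -- exactly the prior
```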
People often dismiss social science findings as
“what common sense would expect”, because hindsight
bias (overestimating how predictable something was) makes it too easy to
retrofit these findings into our models of the world. Yet experiments have
found that subjects can rationalize both a statement and its opposite. For
example, subjects rate the supposed finding that “people in prosperous times
spend a larger portion of their income than during recessions” as what they
would have expected; but also rate the opposite statement as what they would
have expected! Thus hindsight bias leads us to undervalue the surprisingness of
scientific findings and the contributions of researchers. It prevents us from
noticing when we are seeing evidence that doesn’t fit what we really would have expected.
4. Mysterious Answers

This chapter asks whether science resolves the problems of irrationality for us. Scientific models aim to explain phenomena and are based on repeatable experiments. Science has an excellent track record compared to speculation, hearsay, anecdote, religion, appealing stories, and everything else. But do we still need to worry about biases?
Imagine a square plate of metal placed next to
a hot radiator; if you place your hand on the plate and feel that the side
adjacent to the radiator is cool and the distant side is warm, why do you think
this happens? Saying “because of heat conduction” is not a real explanation
unless it constrains anticipation (more so than “magic”). Yet physics students
may profess it (rather than measure anything, or admit that they don’t know)
and mistakenly think they’re doing science. Normally you’d anticipate the side
of the plate next to the radiator to feel warmer; in this case, the plate was
just turned around. What makes something a real explanation is not the literary genre of the words you use, or
whether it sounds “scientific” enough, but whether it can explain only the
observations that actually happened. A fake
explanation is one that can explain any observation.
Verbal behavior is not intrinsically right or
wrong, but it gets you a gold star from the teacher. In schools, students are
expected to memorize the answers to certain questions (the “teacher’s
password”). For example, if the teacher asks “what is light made of?” the
student replies “waves!” But this is a word, not a proper belief or hypothesis.
Instead of guessing the password, we need to train students to anticipate
experiences and learn predictive models of what is and isn’t likely to happen –
or else they’ll get stuck on strange problems and refuse to admit their
confusion.
The X-Men movies use words like “genetic code”
to signify the literary genre of science. Some people talk about “evolution” to
wear scientific attire and identify themselves with the “scientific” tribe. But
they don’t ask which outcomes their model prohibits (or realize they even need
a model) and therefore they are not using real
science. You don’t understand the phrase “because of evolution” unless it
constrains your anticipations. Likewise, you shouldn’t automatically reject an
idea (like smarter-than-human artificial intelligence) just because it sounds
like science fiction.
It is easy for us to think that a theory
predicts a phenomenon when it was actually fitted to a phenomenon. Hindsight
bias makes fake causality hard to
notice. “Phlogiston” was used to explain why fire burned hot and bright, but it
made no advance predictions and could explain anything in hindsight (thus it
was a fake explanation). In terms of causal Bayes
nets, phlogiston double-counts the evidence. This means that if you have a directed
acyclic graph with an arrow from the Phlogiston node to the “fire is hot and
bright” node, then counting the forward message from cause to effect and also
the backward message from fire being hot back to the Phlogiston node is
contaminating the forward-prediction of phlogiston theory. You must separate
forward and backward messages, and count each piece of evidence only once! To
ensure your reasoning about causality is correct, write down your predictions
in advance.
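Here is a toy numeric sketch of the double-counting mistake (the probabilities are invented): performing the legitimate backward update on “fire is hot”, and then counting that same observation again as though it were independent support.

```python
# Two-node net, hypothetical numbers: Phlogiston -> "fire is hot and bright".
p_phlogiston = 0.5
p_hot_given_phlogiston = 0.9
p_hot_given_no_phlogiston = 0.5

def update_on_hot_fire(prior):
    """One legitimate backward (Bayesian) update on observing hot fire."""
    numerator = p_hot_given_phlogiston * prior
    return numerator / (numerator + p_hot_given_no_phlogiston * (1 - prior))

once = update_on_hot_fire(p_phlogiston)   # ~0.64: the honest posterior
twice = update_on_hot_fire(once)          # ~0.76: the same observation counted again

print(once, twice)
# Feeding the backward message back in as if it were an independent forward
# prediction makes phlogiston look better confirmed than it really is.
```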
A semantic stopsign (also known as a cognitive traffic signal or curiosity-stopper)
is a failure to consider the obvious next question. Certain words and phrases
can act as “stopsigns” to thinking. For example, saying “God!” in response to
the paradox of the First Cause, or “Liberal Democracy!” in response to risks
from emerging technologies, without further query. They are not actual
explanations and don’t help to resolve the issue at hand, but they act as
markers saying “don’t ask any questions”. What distinguishes a stopsign is not
strong emotion, but how the word is used to halt the question-and-answer chain.
Before biochemistry, the theory of Vitalism
postulated a mysterious substance, élan
vital, to explain the mystery of living matter. But this was a
curiosity-stopper, not an anticipation-controller. It may feel like an explanation, but given “élan vital!” as the answer,
the phenomenon is just as mysterious and inexplicable as it was before. When
something feels like a mystery, that is a fact about your state of mind, not
the phenomenon. Mystery is a property of questions, not answers. Ignorance
exists in the map, not in the territory. Don’t cherish your ignorance, as Lord
Kelvin did regarding life! Kelvin, a vitalist, proudly made biology into a
sacred mystery beyond ordinary science.
The theory of “emergence” (how complex systems
arise from the interaction of simple elements) has become popular nowadays, but
it’s just a mysterious answer to a mysterious question. It’s fine to say “X
emerges or arises from Y” if Y is a specific model with internal moving parts, but
saying “Emergence!” as if it were an explanation in its own right does not
dissolve the mystery. After learning that a property is emergent, you aren’t
able to make any new predictions. It’s like saying, “It’s not a quark!” –
because every phenomenon above the
quark-level is “emergent”. It functions like a fake explanation or semantic
stopsign.
Complexity can be a useful concept, but it
should not be used as a fake explanation to skip over the mysterious part of
your model. Too often people assume that adding complexity to a system they
don’t understand (e.g. intelligence) will improve it. If you don’t know how to
solve a problem, adding complexity won’t help, because saying “complexity!”
doesn’t concentrate your probability mass. It’s better to say “magic” (or “I
have no idea”) as a placeholder to remind yourself that there is a gap in your
understanding.
What rule governs the sequence 2, 4, 6…? Many
of us test our hypotheses by looking for data that fits the hypothesis rather
than looking for data that would disconfirm our hypothesis. In the Wason 2-4-6
task, positive bias leads subjects to
confirm their hypothesis, rather than try to falsify it by testing negative
examples. Thus, only 20% correctly guess the rule. Subjects may test 4, 6, 8 or
10, 12, 14 and hypothesize “numbers increasing by two”, when the rule actually
is three numbers in ascending order. Remember that the strength of a hypothesis
is what it can’t explain!
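A small sketch of the task (the probe triplet is just an example): every positive test of the narrow hypothesis also satisfies the true rule, so only a negative test can tell them apart.

```python
# Wason 2-4-6 sketch: positive tests of "increasing by two" cannot falsify it,
# because every such triplet also fits the true rule "ascending order".
def fits_my_hypothesis(t):      # "numbers increasing by two"
    return t[1] - t[0] == 2 and t[2] - t[1] == 2

def fits_true_rule(t):          # "any three numbers in ascending order"
    return t[0] < t[1] < t[2]

positive_tests = [(4, 6, 8), (10, 12, 14), (97, 99, 101)]
print(all(fits_true_rule(t) for t in positive_tests))    # True: "yes" every time

probe = (1, 2, 3)   # a negative test: my hypothesis says this should fail
print(fits_my_hypothesis(probe), fits_true_rule(probe))  # False True -- informative!
```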
You can’t fight fire with fire, nor random
chaos with randomness. The optimal strategy when facing uncertainty is still to
think lawfully and rationally. If tasked to predict a random sequence of red
and blue cards where 70% are blue, the best you can do is to predict blue on
every trial (and not 70% of the
time!). Yet in an experiment, the subjects acted like they could predict the
sequence, and they guessed red about 30% of the time. A random key does not
open a random lock just because they are “both random”. Faced with an
irrational universe, throwing away your rationality won’t help.
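A quick simulation of this setup (the trial count and seed are invented) shows why always guessing the majority color wins:

```python
import random
random.seed(0)

p_blue, trials = 0.7, 100_000
cards = ["blue" if random.random() < p_blue else "red" for _ in range(trials)]

always_blue = sum(card == "blue" for card in cards) / trials
probability_matching = sum(
    card == ("blue" if random.random() < p_blue else "red") for card in cards
) / trials

print(always_blue)            # ~0.70
print(probability_matching)   # ~0.58 (= 0.7*0.7 + 0.3*0.3), reliably worse
```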
To get things right, “Traditional Rationality”
(which lacks Bayesian probability theory and experimental psychology) is not
enough, and just leads you to different kinds of mistakes, such as ignoring
prior likelihoods or committing the conjunction fallacy. Traditional Rationality
says you can formulate hypotheses without a reason to prefer them to the status
quo, as long as they are falsifiable. But this can waste a lot of time. It
takes a lot of rationality to avoid
making mistakes. The young Eliezer Yudkowsky erred in predicting that neurons
were exploiting quantum gravity.
We fail to learn the historical lesson of
science, which is that mundane phenomena used to be mysterious until science
solved them. Solving a mystery should make it feel less confusing! Yet alchemy
seemed reasonable at the time. Unfortunately we don’t personally experience
history. It seems to us now that biology, chemistry and astronomy are naturally
the realm of science, but if we had lived through their discoveries and watched
mysterious phenomena reduced to mundane, we would be more reluctant to believe
the next phenomenon is inherently mysterious.
We sometimes generalize from fictional evidence
(for example using the Terminator
movies as true prophecies of AI), while failing to be sufficiently moved by
actual historical evidence. Our brains aren’t well-equipped to translate dry
historical facts into experiences. Perhaps we should imagine living through
history, to make humanity’s past mistakes more available (and to be less
shocked by the future). Imagine watching mysteries be explained, watching
civilizations rise and fall, and being surprised over and over again. Perhaps
then we’ll stop falling for Mysterious Answers.
When you encounter something you don’t
understand, for example if you don’t know why it rains, you have at least three
options: you can Ignore the issue and avoid thinking about it; you can
try to Explain it (which sometimes takes a while, and the explanation
itself may require an explanation); or you can embrace and Worship the
sensation of mysteriousness – which is akin to worshipping your confusion and
ignorance. There are more ways to worship something than lighting candles
around an altar.
“Science!” is often used as a curiosity-stopper
rather than a real explanation. For example, why does flipping a switch turn
the light bulb on? “Electricity!” Although science does have explanations for
phenomena, it is not enough to simply appeal to “Science!” (this is like “God
did it!”). Yet for many people, noting that science has an answer is enough to
make them no longer curious about how something works. But if you can’t do the calculations that
control your anticipation, why should the fact that someone else knows diminish your curiosity? Be intrigued by the
world’s puzzles!
To test your understanding, ask whether you
would be able to regenerate the knowledge for yourself if it were deleted from
your memory. If not, it’s probably a floating belief or password, and you
haven’t really learned anything. If you don’t have enough experience to
regenerate beliefs when they are deleted, then do you have enough experience to
connect that belief to anything at all? Make the source of your thoughts part of you. If the knowledge is entangled
with the rest of the world, this method will allow you to apply it to other
domains and update it when needed. Thus, being “truly part of you”, it will
grow and change with the rest of your knowledge. When you find and absorb a
fountain of knowledge, see what else it can pour.
Interlude: The Simple Truth
This essay by Yudkowsky (hosted on his website yudkowsky.net) is an allegory on the nature of knowledge and belief.
Some people would say that the notion of “truth” is naïve, but it would not be
wise to abandon the concept of truth. So what do we mean by “X is true”?
Imagine that you are a
shepherd in an era before recorded history or formal mathematics, and you want
to track your sheep. How could you tell, without spending hours looking,
whether there are any sheep grazing in the pasture or whether they are all
safely enclosed in the fold?
You could drop a pebble into a
bucket each time a sheep leaves the enclosure, and take one pebble out of the
bucket for each sheep that returns. When no pebbles are left in the bucket, you
can stop searching for the night.
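The bucket is nothing more than a running tally that mirrors the pasture; here is a minimal sketch of the mechanism (not from the essay itself):

```python
# The bucket as a tally that tracks the pasture, whether or not anyone
# "believes" in pebbles. Hypothetical day of sheep traffic:
pebbles = 0
events = ["leaves"] * 5 + ["returns"] * 5

for event in events:
    pebbles += 1 if event == "leaves" else -1   # pebble in / pebble out

print(pebbles == 0)   # True: every sheep is back, stop searching for the night
```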
In this analogy, the “sheep”
refer to reality, the “bucket” is your belief, and the “pebble level” is its
degree of truth-tracking. Beliefs are the thingies that determine your
predictions, and reality is the thingy that determines your experimental
results.
The pebble-and-bucket method
works whether or not you believe in it, because it makes the sheep control the
pebbles via interacting with things that interact with the pebbles, so there is
a chain of cause-and-effect. Likewise, the correspondence between reality and
your beliefs comes from reality controlling your beliefs, not the other way
around. And just like you wouldn’t have evolved a mouth if there were no food,
you wouldn’t need beliefs if there were no reality. If you think that your
beliefs alter your personal reality, you will still fall when you step off a
cliff.
Does two sheep plus two sheep really
ultimately equal four sheep? Yes. The simple and obvious answer isn’t always
the best choice, but sometimes it really is.
So hopefully the question seems trivial to you, instead of being a deep
mystery.