The final book of rationality

This is part 6 of 6 in my series of summaries. See this post for an introduction.




Part VI

Becoming Stronger



This part asks: how can individuals and communities put all this into practice? We look at an autobiographical account of Yudkowsky’s own biggest philosophical blunders, as well as ways to develop evidence-based applied rationality curricula and institutions.

This final part is less a conclusion than a call to action and a jumping-off point for further investigation. For readers who want a fuller understanding of normative rationality in terms of Bayesianism, books like Baron’s “Thinking and Deciding” and the “Oxford Handbook of Thinking and Reasoning” cover the cognitive science of heuristics and biases. On decision theory and philosophy of science, Gary Drescher’s “Good and Real” reaches conclusions similar to Yudkowsky’s. The Stanford Encyclopedia of Philosophy has entries on Bayesian epistemology and naturalized epistemology.

Note that Bayesian and “frequentist” data analysis methods can both be useful when applied correctly, and training in statistics can improve reasoning skills outside the classroom. But the Art is still in its infancy, so we have to ask: what’s missing? What should be in the next generation of rationality primers? Whatever comes next, there is certainly no shortage of global challenges, and applied rationality remains a new and half-formed thing. There are not many rationalists, and there are many things left undone.
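As a minimal illustration of the contrast (a sketch with invented data, not an example from the book), here is the same coin-bias question answered with a frequentist point estimate and with a Bayesian posterior under a uniform prior:

```python
# Minimal sketch, invented data: estimating a coin's bias from 8 flips.
heads, flips = 6, 8

# Frequentist maximum-likelihood estimate: the observed frequency.
mle = heads / flips  # 0.75

# Bayesian: a uniform Beta(1, 1) prior gives a Beta(1 + heads, 1 + tails)
# posterior, whose mean is a slightly less extreme estimate.
alpha, beta = 1 + heads, 1 + (flips - heads)
posterior_mean = alpha / (alpha + beta)  # 7/10 = 0.70

print(f"MLE: {mle:.2f}  posterior mean: {posterior_mean:.2f}")
```

Both numbers are legitimate answers to slightly different questions; the point above is that either toolbox helps only when it is used correctly.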



24

Yudkowsky’s Coming of Age

This chapter provides an in-depth illustration of the dynamics of irrational belief by spotlighting Yudkowsky’s own intellectual history (roughly 1996 to 2003), with advice on how he thinks others might do better.

Young Eliezer (around 1996) believed that intelligence mattered more than anything else and was the wellspring of ethics and wisdom – contrary to his parents’ insistence that he just needed life experience. So he concluded that superintelligence implied “supermorality”. He wanted to spend his life creating the Singularity, out of a sense of duty to give IQ points to everyone. This was a happy death spiral, and he started to believe that even the light-speed limit would be no barrier to superintelligence. In some ways he fell into the trap of thinking that reversed stupidity is intelligence.

When Eliezer went into his death spiral around intelligence, he wound up making mistakes that later proved very useful. Young Eliezer (between 1996 and 1999) refused to formally define “intelligence”, because he had seen the word abused so often in the field of artificial intelligence. This was his worst mistake (because you cannot fully trust informal reasoning), but also his best mistake, because it led him to study a great deal of cognitive science, which later helped him recover. The lesson is that what you actually end up doing screens off the clever reason why you’re doing it.

One major childhood influence on Yudkowsky was Jerry Pournelle’s “A Step Farther Out”, which portrayed scientists and engineers as the Good Guys. Eliezer grew up as a technophile, allergic to people who said “technology has benefits and risks” and who advocated regulation. It took him a very long time before he could seriously consider that the dangers of a technology might outweigh its benefits. This happened when he realized that molecular nanotechnology would pose an existential threat unless we developed AI before it.

In 1996, Yudkowsky encountered a transhumanist mailing list where someone said that “no one should develop an AI without a control system that watches it and makes sure it can’t do anything bad.” Young Eliezer was very good at refuting arguments against his intuition that a superintelligence would know better than we do what is right. His skill at defeating other people’s ideas led him to believe that his own (mistaken) ideas must be correct. But it’s easier to find flaws in someone’s argument than to get the fact of the matter right.

In 1997, Yudkowsky set out to argue inescapably that creating superintelligence was the right thing to do. Young Eliezer took a mysterious view of morality, and so he had lax standards of rigor in defining “morality” or “intelligence”. This was his big mistake. But Nature doesn’t care about righteous excuses; if you don’t meet the standard, you fail. You can’t manipulate a confusing gap in your understanding. In the absence of precision, you might as well be putting your weight down on a landmine. No matter how clever your justification, it will blow your foot off just the same.

Nature doesn’t care about your clever justifications; what matters is choosing to work the problem out in detail, so that you accumulate experience. This is what Eliezer did in 2000 when pondering how to inscribe a fallback morality into an AI. He started to dig himself out of his philosophical hole when he noticed a tiny inconsistency: even if life were meaningless, some people might still prefer that an AI do particular things, such as not kill them. Slowly, over the succeeding years, Eliezer started to think inside the black box of morality.

Only when Yudkowsky permitted himself a line of retreat (i.e. Friendly AI as a contingency plan) was he able to reconsider his metaethical positions and move gradually towards better ideas. Young Eliezer (in 2001) abandoned the idea that AI can’t be dangerous, but he still wanted to charge in, guns blazing, and start coding. What he actually needed to do was declare “halt, melt, catch fire” and scream “oops!” – instead of making slow little shifts in opinion. In the art of rationality, it is far more efficient to admit one huge mistake than lots of little ones.

Eliezer awoke when he understood intelligence as an optimization process: something that squeezes the future into a constrained region by exerting thermodynamic work. This freed his thinking from human-like mind designs, and he comprehended the true risk when he looked back and recognized his mistakes. “Smart” was no longer a property but an engine, one that could pump reality toward any outcome whatsoever. He finally admitted to himself that his old AI goal-system design would have wiped out the human species, converting its future light cone into generic tools – stored energy without a use, computers without programs to run.
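A toy sketch of that framing (my own illustration, not from the book): an optimization process is anything that reliably steers a huge space of possible states into a tiny target region, whatever that target happens to be.

```python
import random

def optimize(target, bits=16, steps=10_000):
    """Hill-climb toward an arbitrary target by flipping one bit at a time."""
    state = random.getrandbits(bits)
    for _ in range(steps):
        candidate = state ^ (1 << random.randrange(bits))
        # Keep the change only if it moves the state closer to the target.
        if bin(candidate ^ target).count("1") < bin(state ^ target).count("1"):
            state = candidate
    return state

target = random.getrandbits(16)    # the goal could be anything at all
print(optimize(target) == target)  # almost always True: out of 2**16 possible
                                   # end states, the process lands on exactly one
```

The engine doesn’t care what the target is – which is the unsettling point: “smart” pumping toward the wrong target is just as effective.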

There are people who have acquired more mastery over their fields than Eliezer has over his. Eliezer considers the Bayesian probability theorist E.T. Jaynes and the mathematician John Conway to be above his level in mastery and perhaps brilliance, but he aspires to that level. Modest demeanors and humble admissions of doubt are cheap. Eliezer still thinks he can do important things in his chosen field, yet he is humble enough to have invested specific effort into the possibility that some younger mind will read his blog and zip off right past him.

It seems to be an uncomfortable truth that people in the upper echelons of business, science and so on (like CEOs and hedge-fund traders) really are more intelligent, competent and happy than everyone else. However, most people who want to work on Artificial General Intelligence don’t speak fluent Bayesian and aren’t even at the level of Peter Norvig or John McCarthy; they are merely above-average scientists, not formidable enough as individuals to synthesize true AGI. They aren’t really all that exceptional, and this is a problem most people don’t seem to see.

Eliezer considers his training as a rationalist to have started the day he realized just how awfully he had screwed up. In late 2002 he accepted that he needed to actually update and admit that he didn’t yet know how to build AGI, despite the loss of status; and he realized that Nature was still allowed to kill you even if you had clever arguments for taking a risk. You could do everything you were supposed to do, and Nature was still allowed to kill you. The Future is not indestructible. Yet other wannabe AGI researchers still care more about being first than about safety.

After Eliezer realized that optimism had misled him, he invented a thought experiment. Compare a world in which there is a God who will intervene at some threshold against a world in which everything happens purely as a result of physical laws. You could simulate a universe whose mathematics works out to a world of conscious beings who suffer unfairly from diseases; whatever physics says will happen, will happen, good or bad. Which universe looks more like our own? We live in a world beyond the reach of God; thus the Future is vulnerable. People believe that some things are simply not allowed, or that “things have to make sense”. If you want to be happy, meditating on the fragility of life and the unprotectedness of your existence probably won’t help; but what if you have something to protect? Nature is utterly neutral, not fair, and won’t prohibit horrible things from happening. Nor can you trust in technology, democracy, or positive-sum games. Injustice is allowed to happen. That is what Eliezer hopes to fix – but what does a child need to do to solve an adult problem?

Yudkowsky’s mathematical intuitions were always Bayesian, but reading Tversky and Kahneman (“Judgment Under Uncertainty”), E.T. Jaynes (“Probability Theory: The Logic of Science”) and Judea Pearl (“Probabilistic Reasoning in Intelligent Systems”) helped him level up. He realized that precision, though inconvenient, can save time, because you arrive at the only correct answer. We should hold ourselves to the standard of mathematical proof. The prospect of saying “Oops” in the future should make you feel alive, because it means you’ll acquire new Jedi powers that your present self doesn’t even dream exist.


25

Challenging the Difficult

This chapter asks what it takes to solve a truly difficult problem – including demands that go beyond epistemic rationality.

In Orthodox Judaism, knowledge derives from the authority of ancient rabbis, and thus the Torah loses knowledge with every passing generation. Tsuyoku naritai is Japanese for “I want to become stronger”. It expresses the will to transcendence: continuous progress, gaining knowledge from science rather than authority, and becoming less biased instead of taking pride in confessing your ignorance. You should aspire to become stronger, and study your flaws so as to remove them. The temptation to rest satisfied with confessing your biases can impede progress.

In the ancestral environment, successful hunters would downplay their accomplishments to avoid envy. Hence there are evolutionary-psychological factors that encourage us to signal modesty and mediocrity. However, tsuyoku means always reaching higher, without shame, even if you pull ahead of the crowd. Sooner or later, if you aim to do the best you can, you will set your aim above the average. You should be able to admit to yourself that you’ve done better than others, without being ashamed of it – it can even be a useful motivator.

Yoda famously said: “No! Try not! Do, or do not. There is no try.” There’s a difference between “I’m going to flip that switch” and “I’m going to try to flip that switch” – in the latter case your goal becomes the act of trying, so you end up merely trying to try. Trying to try, and being satisfied with having a plan, is too easy; as a human, if you only try to try something, you will put in much less work than if you actually try it. It’s only when you want, above all else, to actually flip the switch – with no consolation prizes just for trying – that you will put in the effort needed to maximize the probability of success. Many of life’s challenges consist of holding yourself to a high enough standard. Instead of asking “what can I do?”, ask “what needs to be done?”.

In the original Star Wars trilogy, Luke Skywalker comes across as a whiny teen. Imagine a fictional exchange between Mark Hamill and George Lucas over the scene in The Empire Strikes Back where Luke attempts to lift his X-wing with the Force: Hamill doesn’t want his character to give up on raising the X-wing out of the swamp, because he finds it uncompelling; Lucas replies that the audience will buy it, because most human beings won’t try for even five minutes before giving up. As John McCarthy said, “When there’s a will to fail, obstacles can be found.”

A lot of projects seem “impossible”, in the sense that we don’t immediately see a way to do them. (Of course, confusion exists in the map, not in the territory.) If something seems impossible, you won’t try; but important problems only become less intimidating and confusing once you persevere through the difficulties long enough to understand the domain (given the native ability). After you’ve worked on them for a long time, these impossible problems start to look merely extremely difficult. Trying to do the impossible is definitely not for everyone, and learning when to lose hope is an important life skill. But if you can imagine something even worse and scarier than wasting 30 years of your life (like unfriendly AI), then you may have cause to attempt the impossible; in that case, don’t give up at the very first sign of difficulty, and keep working even though you could be getting higher personal rewards elsewhere.

It takes an extraordinary amount of rationality before you stop making stupid mistakes, and a “strong” effort usually produces only mediocre results. Doing better requires making extraordinary efforts. The level beyond tsuyoku naritai is isshoukenmei – “make a desperate effort”. This is the gestalt of trying your utmost, as if your life were at stake. Extraordinary efforts require you to bypass the System. This can be dangerous, but humanity won’t survive Nature’s challenges unless some of us think and act outside our comfort zones.

Always keep improving, make a desperate effort, and leave your comfort zone. But sometimes even isshoukenmei will not be enough. When the problem is “impossible”, don’t aim to try your best – aim to win! The ultimate level of attacking a problem is the point at which you simply shut up and solve it. An example of something “impossible” that Yudkowsky has accomplished is the “AI Box Experiment”, where he played an AI sealed in a box and persuaded a gatekeeper to let him out. Understand the reasons why you can’t succeed, then shut up and do the impossible! You have to hold, without doublethink, the awful tension of both views in your mind at once – seeing the full impossibility of the problem, and really intending to solve it. This should be reserved for very special occasions. And remember: you can lose, and it will hurt.

This is the conclusion of the Beisutsukai series. Jeffreyssai says farewell to Brennan and his fellow students, telling them that they’re done for now, but that “the rhythm at the center of everything is missing and astray.” The way to arrive at mastery is to use the techniques you have already learned to the fullest, until they shatter in your hands. You must be determined to remake your art in the midst of the wreckage of a surprise catastrophe. You must avoid the flaws of motivated skepticism, excess cleverness, and underconfidence. And you must ask yourself what you really want.


26

The Craft and the Community

This final chapter on individual and collective self-improvement discusses rationality groups and group rationality. It raises questions like: can rationality be learned and taught? What community norms would make this process of bettering ourselves easier? Can we effectively collaborate on large-scale problems without sacrificing our freedom of thought and conduct?

Religion is harmful, but it is just a symptom of a larger problem: a low sanity waterline. Even if all religious content were deleted tomorrow from all human minds, the larger and more general failures of social rationality that permit religion would still be present. Getting rid of the asphyxiated canary in the coal mine doesn’t get rid of the gas. How can we teach rationality without explicit mention of religion? What could you teach people that would raise their general epistemic waterline to the point that religion went underwater? Perhaps we could start by teaching about evidence, epistemology, updating beliefs, curiosity-stoppers and cached thoughts, affective death spirals, conformity pressures, reductionism, and so on. Even some Nobel laureates are religious (like Robert Aumann, who proved Aumann’s Agreement Theorem), which suggests that the current sanity waterline is ridiculously low, even in the highest halls of science.

If it were possible to teach people reliably how to become exceptional, then being exceptional would no longer be exceptional. It is hard to teach skills when we can’t easily articulate how we perform them ourselves. Success is hard to duplicate because of luck, genetic potential, and incommunicable insights or intuitions. Yet by learning about a domain and about the mind, we might make new skills more teachable, diminishing the role of luck on future occasions. We can at least teach how not to lose. So Yudkowsky asked in his first blog post: why are there schools of martial arts, but not rationality dojos?

We don’t yet have a “Martial Art of Rationality” because rationalists haven’t gathered to systematize and test their skills and training methods, and so rationalists don’t seem noticeably happier or more successful; but we must hold the sense that more is possible! Most self-proclaimed “rationalists” don’t get huge amounts of personal mileage out of their craft, because the level of expertise they strive to develop is not on par with that of a professional mathematician, but more like that of a strong casual amateur. We should see this as a problem, and develop more systematic training.

An essay by Gillian Russell titled “Epistemic Viciousness in the Martial Arts” generalizes amazingly to possible and actual problems with building a community around rationality. There are epistemic vices in the martial arts, due to data poverty (arising from the difficulty of testing the skills in the real world), deference to historical masters of the sacred dojo, emotional investment in old teachings, and trust in teachers (because the art cannot be learned from a book). These may be transferable to rationality training – what can be done about it?

Many schools of psychotherapy have proliferated without experimental evidence, perhaps because they had the right air of authority; the Rorschach test, for example, is still used despite evidence of its ineffectiveness. Founders of such schools become prestigious through charisma, good stories, and attracting students of their own – not by excelling at any visible performance criterion. But you need testing and statistics to tell how well your organized practice is actually doing! An example of good measurement being used is the field of hedonic psychology (happiness studies).

Rationality groups need methods to verify their ideas on three levels: the reputational level (grounded in reality, like success on some real-world problem or competition), the experimental level (randomized testing with replicable measurements, like well-validated surveys that can be run on each of a hundred students), and the organizational level (keeping people from gaming the test, to preserve organizational integrity and low-noise measurements). We need these to make rationality useful! The strength of the solutions invented at each level will determine how far the craft of rationality can be taken in the real world.

Pluralistic ignorance, evaporative cooling, and affective death spirals bind groups together on the Dark Side; but for rationalists to win, we have to cooperate too, and our culture of disagreement (and dispassion) is a barrier. The “nonconformist cluster” (atheists, libertarians, sci-fi fans, technophiles, programmers, early adopters, and so on) seems to be stunningly bad at coordinating group projects. Our exclusively individualist traditions sabotage our ability to cooperate. People are reluctant to speak agreement out loud, but this is dangerously half-rational, because it doesn’t help us build more cohesive and powerful communities. If you tolerate only disagreement and not agreement, and are willing to hear only some honest thoughts but not others, then you are not fully rational.

Aspiring rationalists tend to have a lower-than-usual tolerance for flawed thinking. But in order to work together, we need to be able to tolerate other people’s tolerance, because otherwise we’d need to have exactly the same standards of tolerance, which is unlikely. Punishing non-punishers can be dangerous, so you should tolerate people who are more tolerant and patient than you are (including those who say nice things about crackpots), and judge them only for their own mistakes. It’s not realistic to expect others to dislike everyone you dislike before cooperating with them.

You get more done by joining a common project, but how much should you demand that an existing group adjust toward you before you will adjust toward it? Nonconformists tend to demand far too high a price (in strategic shifts) for joining; this could be because our hunter-gatherer instincts underestimate the inertia of larger, more specialized groups (we are tuned to bands of around 40, with minimal administrative demands and equal participation). Join groups more easily. Don’t withhold your efforts from a worthwhile group over an annoying flaw unless that flaw matters enough to you that you would personally invest the effort to fix it.

Currently, the Pope can effectively mobilize the Catholics for simple and obvious charitable projects, like responding with food and shelter to a tidal wave in Thailand. But could an average atheist do more good, without the motivation that comes from the irrational fear of Hell? For secular humanists to match the per-capita altruistic output of the Catholic Church, we need to be physically together to motivate ourselves, encourage caring strongly about something, practice cognitive-behavioral therapy (CBT) and/or Zen meditation, and target more efficient causes. Until this is the case, any increase in atheism at the expense of Catholicism will be somewhat of a hollow victory.

A post-religion era wouldn’t need artificial churches, but it would need new idioms of community, since community is the gap religion will leave behind. Churches aren’t explicitly optimized for providing community, so can we do better if we want community without church? Offices can support communities but aren’t optimized for it either; we need strong rationalist task forces built around worthwhile causes. There’s a great deal of work to be done in the world; rationalist communities could organize themselves around good causes while explicitly optimizing for community.

Many causes benefit from the spread of rationality, so don’t expect to capture all rationalists for your own project – it is better to tell people “this is a cool thing” than “this is the best thing, the only thing with the highest return in expected utilons per marginal dollar”. Otherwise people may lose the willpower to help. Causes can benefit from pooled effort in creating rationalists, as long as they come to terms with not individually capturing all the rationalists they create, and learn to shut up about their disagreements with each other (except in specialized venues clearly marked for such purposes). We’re all elements of the common project of human progress.

Our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else. In this light, it seems miraculous that modern-day large institutions survive at all. Most potential institutions never come into existence in the first place, and Science survives not on individual donations (it isn’t a good emotional fit) but by fastening itself parasitically onto large organizations like governments, corporations, and large foundations. Modern humanity manages to put forth very little in the way of coordinated individual effort to serve our collective individual interests.

Many people prefer to help a good cause by donating a few unskilled volunteer hours. People don’t like spending money, but in this world of professional specialization, comparative advantage, gains from trade and economies of scale, money is the unit of caring: if you want to do good effectively, pay a full-time specialist instead of volunteering yourself! (Or directly donate hours of the same specialized capability that you’d ordinarily trade for money.) These tools are the reason we’re not still in caves. The reason we have money is to realize the tremendous gains possible from each of us doing what we do best. This is how one gets things done in the grownup world when anyone really cares. Frugality is a virtue, but if you’re never willing to spend any money, you don’t care.
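The arithmetic behind “money is the unit of caring” is just comparative advantage; with hypothetical numbers (mine, not the book’s):

```python
# Hypothetical numbers: one hour volunteered vs. one hour worked and donated.
my_hourly_wage = 60        # what an hour of your specialized work earns
charity_labor_cost = 15    # what the charity pays per hour for the task it needs

hours_of_work_bought = my_hourly_wage / charity_labor_cost
print(hours_of_work_bought)  # 4.0 hours of needed labor vs. 1 hour volunteered
```

Under these assumptions, an hour spent at your own specialty funds four hours of the work the charity actually needs done.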

Wealthy philanthropists typically reach a mediocre final result when they try to purchase warm fuzzy feelings, status among friends, and actual utilitarian gains simultaneously. This is a mistake because it’s inefficient. To motivate yourself, you may spend some money to obtain status and warm fuzzies (e.g. by helping people in person and donating to something sexy), but do this separately from purchasing expected utilons! You cannot optimize all three at once. Get your warm fuzzies by volunteering at a soup kitchen or holding open doors for little old ladies, and buy nice clothes for status. But spend most of your money purchasing expected utilons. Altruism requires you to shut up and multiply, through cold-blooded calculation, without worrying about status or enjoyment.
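A minimal “shut up and multiply” sketch with invented figures: once fuzzies and status have their own small budgets, the remaining donation simply goes wherever expected impact per dollar is highest.

```python
# Invented example figures: expected utilons per dollar for three causes.
causes = {
    "sexy but inefficient": 0.2,
    "solid mainstream":     1.0,
    "unglamorous but best": 3.5,
}
budget = 1_000
best = max(causes, key=causes.get)
print(best, causes[best] * budget)  # multiply: expected utilons from the full budget
```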

Bystander apathy is the phenomenon whereby people in large groups are less likely than lone individuals to act in emergencies. This seems to happen because of pluralistic ignorance (everyone looks around and sees that everyone else appears calm) and diffusion of responsibility (everyone hopes that someone else will be the first to step up). However, the bystander effect can be countered by telling people about it, and it is weaker when people know each other. If you’re ever in need of help, point to one single bystander and ask that person for help.

In the ancestral environment, we didn’t form task forces with strangers, which may be why we fail today at coordinating large groups – especially over the internet – and which might also help account for the bystander effect. How can we better use the internet to help our causes? There may be an opportunity here for a startup that deliberately tries to avert bystander apathy in online group coordination. Ideas might include putting up the names and photos of the first people who helped, giving helpers a video thank-you from the founders, using referrer link codes, and so on.

Rationality is the spirit of systematized winning: it’s not about following some “reasonable”-sounding ritual of cognition (and whining when you keep losing), but about flexible prudence that actually wins. That doesn’t mean rationality makes you invincible; it means that if someone who isn’t behaving according to your idea of rationality is consistently outcompeting you, you should consider that you may not be the one being rational. Perfectly rational agents can lose; they just can’t know in advance that they’ll lose. But then, what if religious people are happier?

Only perfect probability theory and decision theory are guaranteed to be optimal. An incremental step in the direction of ideal rationality doesn’t guarantee incrementally more winning, and may sometimes leave you worse off. So if perfection is unattainable, why dare to try for improvement over a flawed baseline? Because refusing to climb one step up forfeits not just the height of that step, but the height of the staircase. For some tasks, an unimproved level of performance simply isn’t enough. If you care about truth and have something to protect, then with further steps things can get even better than before – and once you have taken a step forward, you can’t just shut your eyes and deny it to yourself. In Yudkowsky’s limited experience with specialized applications, huge improvements are possible; it just takes a lot of progress to get there. The first steps can be painful, yet the long road leads out of the valley and higher than before.

If a hypothetical country of Bayesian rationalists were attacked by savage Evil Barbarians (who know nothing about probability theory or decision theory, and believe in a heavenly afterlife if they die in battle), the rationalists should aim to avoid losing the war, coordinate efficiently (e.g. following orders), and be ready to sacrifice themselves to defend the community they care about. They should not fall for the concept of “rationality” which says that the rationalists inevitably lose because they would all individually prefer to stay out of harm’s way and are too civilized to fight. Using Yudkowsky’s kind of decision theory, rational agents will cooperate on the true Prisoner’s Dilemma and coordinate on group projects whenever the expected probabilistic outcome is better than it would be without such coordination. And real wars cannot be won by refined politeness; war is not fun, but losing a war is even less fun.
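A toy expected-value comparison of the two equilibria (numbers invented for illustration, not Yudkowsky’s): even with real personal risk, universal defense beats universal free-riding for each citizen.

```python
# Invented payoffs for one citizen of the rationalist country.
p_die_if_all_fight = 0.10
u_live_free, u_die, u_conquered = 100.0, 0.0, 5.0

everyone_fights  = (1 - p_die_if_all_fight) * u_live_free + p_die_if_all_fight * u_die
everyone_defects = u_conquered  # nobody fights, the barbarians win

print(everyone_fights, everyone_defects)  # 90.0 vs 5.0: coordination wins in expectation
```

The catch is that a lone defector might do better still, which is exactly why the decision theory – and the willingness to follow orders – matters.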

Aspiring rationalists tend to vastly overestimate their ability to optimize other people’s lives. If you tried twenty productivity methods and only one of them worked for you, confidently recommending that one is hardly better than having the other person pick one of the twenty at random. Beware of trying to optimize other people’s lives (even your friends’): different things work for different people for reasons we don’t yet understand, and if you don’t take no for an answer, you can scare people – especially when you have power over them.

“Other-Optimizing” fiddles with surface tricks without understanding the deeper general laws. Practical advice is much more powerful and useful when backed up by concrete experimental results, true causal accounts (deep theories), and validly interpreted math and epistemology; then what works can be explained in truly general terms. Stripping out the theories and giving the bare advice alone wouldn’t have nearly the same impact, or even the same message. Translating experiments and math into practical advice seems to be a distinctive style of Less Wrong.

When experimental subjects are warned about a bias, they sometimes overcorrect for it. If you know you are biased but aren’t sure by how much, you may keep tweaking and overshoot. There is a particular danger in overcorrecting for overconfidence: rationalists should not be underconfident, because then you pass up opportunities at which you could have succeeded and don’t try hard enough. So test your abilities to discover your current level, and ask yourself whether a way of thinking (such as nursing unresolved doubts) is making you stronger or weaker. You should seriously try to win, and take on challenges you might lose if you don’t stretch yourself; sticking to things you always win at is one way smart people become stupid.

The probability theory and decision theory of the shared Way are neither masculine nor feminine, but there may be individual differences in the human practice of rationality, and we each have to find our own path to the center of the labyrinth and then radio back. The path cannot be the same for everyone, although there is still a common thing we are all trying to find. What Eliezer Yudkowsky describes is not just the Way (the thing at the center of the labyrinth) but also his Way (the path he took from wherever he started out) – hence his focus on the arts required for advanced cognitive reductionism, on untangling confused questions, and on writing male characters like Jeffreyssai. Much is left to be developed, including fighting akrasia, coordinating groups, becoming a proper experimental science, being happy, and better introductory literature.

Your mileage may vary with Yudkowsky’s writings on rationality; still, knowing about fake explanations, the conjunction fallacy, motivated skepticism, affective death spirals and so on may give you a saving throw – a base to build on. Eliezer has focused on epistemic rationality more than on instrumental rationality or on teaching and verification, so the Art is incomplete; yet discriminating good systems of thinking from bad saves people from instantly going astray. There is an initial barrier to surpass before you can start creating a high-quality craft of rationality, and Eliezer hopes his writings will help you surpass it; the rest he leaves to you. Go develop more Art, drawing on multiple sources and confronting challenges beyond your armchair. And then: “remembering whence you came, radio back to tell others what you learned.”



