Wrong Models

Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.

— George Edward Pelham Box

I have seen some ultrafinitists go so far as to challenge the existence of \(2^{100}\) as a natural number, in the sense of there being a series of “points” of that length. There is the obvious “draw the line” objection, asking where in \(2^1, 2^2, 2^3, \ldots, 2^{100}\) do we stop having “Platonistic reality”? Here this … is totally innocent, in that it can easily be replaced by 100 items (names) separated by commas. I raised just this objection with the (extreme) ultrafinitist Yessenin-Volpin during a lecture of his. He asked me to be more specific. I then proceeded to start with \(2^1\) and asked him whether this is “real” or something to that effect. He virtually immediately said yes. Then I asked about \(2^2\), and he again said yes, but with a perceptible delay. Then \(2^3\), and yes, but with more delay. This continued for a couple of more times, till it was obvious how he was handling this objection. Sure, he was prepared to always answer yes, but he was going to take \(2^{100}\) times as long to answer yes to \(2^{100}\) than he would to answering \(2^1\). There is no way that I could get very far with this.

Philosophical Problems in Logic [PDF] by Harvey M. Friedman (source)

Civilization advances by extending the number of important operations which we can perform without thinking of them.

— Alfred North Whitehead

The first author must state that his coauthor and close friend, Tom Trobaugh, quite intelligent, singularly original, and inordinately generous, killed himself consequent to endogenous depression. Ninety-four days later, in my dream, Tom’s simulacrum remarked, “The direct limit characterization of perfect complexes shows that they extend, just as one extends a coherent sheaf.” Awakening with a start, I knew this idea had to be wrong, since some perfect complexes have a non-vanishing \(K_0\) obstruction to extension. I had worked on the problem for 3 years, and saw this approach to be hopeless. But Tom’s simulacrum had been so insistent, I knew he wouldn’t let me sleep undisturbed until I had worked out the argument and could point to the gap. This work quickly led to the key results of this paper. To Tom, I could have explained why he must be listed as a coauthor.

Higher Algebraic K-Theory of Schemes and of Derived Categories [PDF] by Robert Wayne Thomason

What we need is imagination, but imagination in a terrible strait-jacket. We have to find a new view of the world that has to agree with everything that is known, but disagree in its predictions somewhere, otherwise it is not interesting. And in that disagreement it must agree with nature.

The Character of Physical Law by Richard Feynman

My word for the elusive aspect of human thought still lacking in synthetic imitations is “slippability”. Human thoughts have a way of slipping easily along certain conceptual dimensions into other thoughts, and resisting such slippage along other dimensions. A given idea has slightly different slippabilities — predispositions to slip — in each different human mind that it comes to live in.

Metamagical Themas by Douglas Hofstadter

Once the finished work exists, scholars looking at it may seize upon certain qualities of it that lend themselves easily to being parametrized. Anyone can do statistics on a work of art once it is there for the scrutiny, but the ease of doing so can obscure the fact that no one could have said, a priori, what kinds of mathematical observables would turn out to be relevant to the capturing of stylistic aspects of the as-yet-unseen work of art.

Metamagical Themas by Douglas Hofstadter

I believe that every true theorist is a kind of tamed metaphysicist, no matter how pure a ‘positivist’ he may fancy himself. The metaphysicist believes that the logically simple is also the real. The tamed metaphysicist believes that not all that is logically simple is embodied in experienced reality, but that the totality of all sensory experience can be ‘comprehended’ on the basis of a conceptual system built on premises of great simplicity. The skeptic will say that this is a ‘miracle creed’. Admittedly so, but it is a miracle creed which has been borne out to an amazing extent by the development of science.

Ideas and Opinions by Albert Einstein

I have deliberately used the word “marvel” to shock the reader out of the complacency with which we often take the working of this mechanism for granted. I am convinced that if it were the result of deliberate human design, and if the people guided by the price changes understood that their decisions have significance far beyond their immediate aim, this mechanism would have been acclaimed as one of the greatest triumphs of the human mind. Its misfortune is the double one that it is not the product of human design and that the people guided by it usually do not know why they are made to do what they do. But those who clamor for “conscious direction”—and who cannot believe that anything which has evolved without design (and even without our understanding it) should solve problems which we should not be able to solve consciously—should remember this: The problem is precisely how to extend the span of our utilization of resources beyond the span of the control of any one mind; and therefore, how to dispense with the need of conscious control, and how to provide inducements which will make the individuals do the desirable things without anyone having to tell them what to do.

The Use of Knowledge in Society by Friedrich Hayek

Thus if a machine is made for this purpose it must in some cases fail to give an answer. On the other hand if a mathematician is confronted with such a problem he would search around and find new methods of proof, so that he ought eventually to be able to reach a decision about any given formula. This would be the argument. Against it I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.

Alan Turing, Lecture at the London Mathematical Society [PDF]

In science, if you know what you are doing, you should not be doing it. In engineering, if you do not know what you are doing, you should not be doing it.

The Art of Doing Science and Engineering by Richard Hamming

Too much time is wasted because of the assumption that methods already in existence will solve problems for which they were not designed; too many hypotheses and systems of thought in philosophy and elsewhere are based on the bizarre view that we, at this point in history, are in possession of the basic forms of understanding needed to comprehend absolutely anything.

The View from Nowhere by Thomas Nagel

Consider, for instance, such words as “backlog,” “burnout,” “micromanaging,” and “underachiever,” all of which are commonplace in today’s America. I chose these particular words because I suspect that what they designate can be found not only here and now, but as well in distant cultures and epochs, quite in contrast to such culturally and temporally bound terms as “soap opera,” “mini-series,” “couch potato,” “news anchor,” “hit-and-run driver,” and so forth, which owe their existence to recent technological developments. So consider the first set of words. We Americans living at the millennium’s cusp perceive backlogs of all sorts permeating our lives — but we do so because the word is there, warmly inviting us to see them. But back in, say, Johann Sebastian Bach’s day, were there backlogs — or more precisely, were backlogs perceived? For that matter, did Bach ever experience burnout? Well, most likely he did — but did he know that he did? Or did some of his Latin pupils strike him as being underachievers? Could he see this quality without being given the label? Or, moving further afield, do Australian aborigines resent it when their relatives micromanage their lives? Of course, I could have chosen hundreds of other terms that have arisen only recently in our century, yet that designate aspects of life that were always around to be perceived but, for one reason or another, aroused little interest, and hence were neglected or overlooked.

Analogy as the Core of Cognition [PDF] by Douglas Hofstadter

The inductivist or Lamarckian approach operates with the idea of instruction from without, or from the environment. But the critical or Darwinian approach only allows instruction from within — from within the structure itself.

I contend that there is no such thing as instruction from without the structure. We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error. As Ernst Gombrich says, ‘making comes before matching’: the active production of a new trial structure comes before its exposure to eliminating tests.

The Myth of the Framework by Karl Popper, from The Beginning of Infinity by David Deutsch

Compared to organizational problems, technical problems are straightforward. Distributed systems are considered hard because real systems might drop something like 0.1% of messages, corrupt an even smaller percentage of messages, and see latencies in the microsecond to millisecond range. When I talk to higher-ups and compare what they think they’re saying to what my coworkers think they’re saying, I find that the rate of lost messages is well over 50%, every message gets corrupted, and latency can be months or years.

I could do that in a weekend! by Dan Luu

The issue is not that the underlying rules are wrong so much as that they are irrelevant – rendered impotent by principles of organization.

A Different Universe by Robert B. Laughlin

They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

The Bitter Lesson by Richard Sutton

The idea of course supports the related notion that the first essential for consciousness is change, particularly noncyclic, irregular or startling change. And such a concept explains why mere knowledge, which is generally static even when not instinctive or inherited, is seldom very conscious (depending on your definition) while learning, which always involves change, is much more likely to be conscious (by any definition).

The Seven Mysteries of Life: An Exploration of Science and Philosophy by Guy Murchie

I have yet to see any problem, however complicated, which, when you look at it in the right way, did not become still more complicated.

— Poul Anderson

(How, asked Wittgenstein, can one define “games”? Is there any property all games must share in order to be games? Or are games linked rather in a network of similarities, akin to family resemblance, where some family members have the same nose, some the same walk, some the same temperament?)

— The Mind-Body Problem by Rebecca Goldstein

When things get complicated enough, you’re forced to change your level of description. To some extent that’s already happening, which is why we use words such as “want”, “think”, “try”, and “hope,” to describe chess programs and other attempts at mechanical thought. Dennett calls that kind of level switch by the observer “adopting the intentional stance.” The really interesting things in AI will only begin to happen, I’d guess, when the program itself adopts the intentional stance toward itself!

A Coffeehouse Conversation by Douglas Hofstadter

The purpose of the learning is not so much in finding a classifying rule (for example, a hyperplane), but rather in finding a feature space in which such separation is possible. After a “good” transformation of the image space into a feature space has already been found, there is practically no question of finding a classifying rule. It has been automatically found by that time.

Pattern Recognition by Mikhail M. Bongard quoted in Solving Bongard Problems With Deep Learning
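
A toy illustration of Bongard’s point, sketched here in Python with NumPy rather than taken from his book: the XOR pattern cannot be separated by any linear rule in the raw plane, but after mapping each point to the single feature x1·x2, the classifying rule has, in his words, been found automatically; it is just a threshold at zero.

```python
import numpy as np

# XOR-like data: the label is 1 exactly when the two coordinates share a sign.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# No single line in the raw (x1, x2) plane separates these labels.
# Map instead to the one-dimensional feature space phi(x) = x1 * x2 ...
phi = X[:, 0] * X[:, 1]

# ... and the "classifying rule" is already there: a threshold at zero.
predictions = (phi > 0).astype(int)
print("accuracy in feature space:", (predictions == y).mean())  # 1.0
```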

Which reals could possibly be “missing” from our universe? Every real you can name—42, π, √e, even uncomputable reals like Chaitin’s Ω—has to be there, right? Yes, and there’s the rub: every real you can name. Each name is a finite string of symbols, so whatever your naming system, you can only ever name countably many reals, leaving 100% of the reals nameless.

The Complete Idiot’s Guide to the Independence of the Continuum Hypothesis by Scott Aaronson
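
Spelling out the counting behind Aaronson’s remark, in notation of my own choosing: names are finite strings over some fixed alphabet \(\Sigma\), and

\[
\Sigma^{*} \;=\; \bigcup_{n=0}^{\infty} \Sigma^{n}, \qquad |\Sigma^{*}| = \aleph_0,
\qquad\text{while}\qquad |\mathbb{R}| = 2^{\aleph_0} > \aleph_0 ,
\]

so any naming system, being a map from finite strings to reals, singles out at most countably many of them; the nameable reals then form a set of Lebesgue measure zero, which is the sense in which 100% of the reals go nameless.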

West didn’t seem to like many of the fruits of the age of the transistor. Of machines he had helped to build, he said, “If you start getting interested in the last one, then you’re dead.” But there was more to it. “The old things, I can’t bear to look at them. They’re clumsy. I can’t believe we were that dumb.” He spoke about the rapidity with which computers became obsolete. “You spend all this time designing one machine and it’s only a hot box for two years, and it has all the useful life of a washing machine.” He said, “I’ve seen too many machines.” One winter night, at his home, while he was stirring up the logs in his fireplace, he muttered, “Computers are irrelevant.”

— The Soul of a New Machine by Tracy Kidder

Pursuing what he called “what’s-the-earliest-date-by-which-you-can’t-prove-you-won’t-be-finished scheduling,” West had promised his bosses that Eagle would be debugged and brought to life by April.

Flying Upside Down by Tracy Kidder

Back at Data General, one day during the debugging, his weariness focused on the logic analyzers and the small catastrophes that come from trying to build a machine that operates in billionths of a second. He went away from the basement of Building 14 that day, and left this note in his cubicle, on top of his computer terminal: “I’m going to a commune in Vermont and will deal with no unit of time shorter than a season.”

Flying Upside Down by Tracy Kidder

What I find however is that there are a base of unspoken intuitions that underlie expert understanding of a field, that are never directly stated in the literature, because they can’t be easily proved with the rigor that the literature demands. And as a result, the insights exist only in conversation and subtext, which make them inaccessible to the casual reader.

Why Deep Learning Works Even Though It Shouldn’t

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.

— John von Neumann (source) (or with just one)

I might say that I wasn’t for one moment suggesting that computers were not a good thing. I am very impressed with what is being done. The question is not that. The question is, is there a danger, particularly within the topics of this symposium on serendipity? I mean are they really going to make it even more and more difficult to have the serendipitous discoveries? You may believe that we only have to go on investigating in more detail the things we already know about in the universe. I don’t believe that. Do you really believe that the discoveries of the last twenty five years that we have been privileged to witness are the only new things that have to be discovered about the universe? I don’t! And the real thing that worries me is that if we go on channeling the research to what can be done by complex computerized equipment and data handling, are we going to miss the new things that are still waiting to be found?

— Bernard Lovell, Impact of Computers on Radio Astronomy, in Serendipitous Discoveries in Radio Astronomy [PDF]

And so, once again, what looks like a technical problem—function naming—turns out to be deeply, personally human, to require human social skills to resolve effectively. I hate that.

Naming from the Outside In by Kent Beck

I think I was probably mostly worse at things than I thought. The VC’s had a point when they said people remember how you made them feel more than what you said.

Managing programmers is tough. That’s one reason I don’t miss IT, because programmers are very unlikable people. They’re not pleasant to manage. In aviation, for example, people who greatly overestimate their level of skill are all dead. You don’t see them as employees. J.F.K., Jr., is not working at a Part 135 charter operation because he’s dead. It’s not that he was a bad pilot; it’s just that his level of confidence to level of skill ratio was out of whack, and he made a bunch of bad decisions that led to him dying, which is unfortunate.

In aviation, by the time someone might be your employee, probably their perceived skill and their actual skill are reasonably in line. In IT, you have people who think, “I’m a really great driver. I’m a really great lover, I’m a great programmer”. But where are the metrics that are going to prove them wrong? Traffic accidents are very infrequent, so they don’t get the feedback that they are a terrible driver because it’s so unlikely that they’ll get into an accident. A girlfriend leaves them—well, it was certainly her deep-seated psychological problems from childhood. Their code fails to ship to customers. It was marketing’s fault!

If a software company dies, you can blame the marketing people. Programmers almost all walk around with a huge overestimate of their capabilities and their value in an organization. That’s why a lot of them are very bitter. They sit stewing at their desks because the management isn’t doing things their way. They don’t understand why they get paid so little. It is tough to manage these folks. But on the other hand, there are better and worse ways to do it. If you want to ensure that the customer gets high-quality code and that the product is high-quality, you have to step on these younger folks’ egos and say, “No, that’s not the way to do it.” The question is, how harsh can you be? I could have been kinder and gentler for sure.

— Philip Greenspun in Founders at Work: Stories of Startups’ Early Days by Jessica Livingston

Young man, in mathematics you don’t understand things. You just get used to them.

— John von Neumann

Barabási and his team pointed out that scale-free networks also embody a compromise bearing the stamp of natural selection: They are inherently resistant to random failures, yet vulnerable to deliberate attack against their hubs. Given that mutations occur at random, natural selection favors designs that can tolerate haphazard insults. By their very geometry, scale-free networks are robust with respect to random failures, because the vast majority of nodes have few links and are therefore expendable. Unfortunately, this evolutionary design has a downside. When hubs are selectively targeted (something that random mutation could never do), the integrity of the network degrades rapidly—the size of the giant component collapses and the average path length swells, as nodes become isolated, cast adrift on their own little islands.

Evidence for this predicted mix of robustness and fragility is manifested in the resilience of living cells. In a study of the network of protein interactions in yeast, Barabási’s group found that the most highly connected proteins are indeed the most important ones for the cell’s survival. They reached this conclusion by cleverly combining information from two different databases. First they looked at the connectivity data, where two proteins are regarded as linked if one is known to bind to the other. This interaction network follows a highly inhomogeneous, scale-free architecture, with a few kingpin proteins mediating the interactions among many more poorly connected peons. Then Barabási’s team correlated the connectivity data with the results of systematic mutation experiments, in which biologists had previously deleted certain proteins to see if their removal would be lethal to the cell. They found that deletion of any of the peons (the 93 percent of all proteins having fewer than 5 links) proved fatal only 21 percent of the time. In other words, the cell is buffered against the loss of most of its individual proteins, just as a scale-free network is buffered against the random failures of most of its nodes. In contrast, the deletion of any of the kingpins (the top 1 percent of all proteins, each with 15 or more connections) proved deadly 62 percent of the time.

Sync: The Emerging Science of Spontaneous Order by Steven Strogatz
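
The robust-yet-fragile pattern Strogatz describes is easy to reproduce in a quick simulation. Below is my own sketch using networkx rather than the yeast data, with the graph size and the 5% removal fraction chosen arbitrarily: grow a Barabási–Albert scale-free graph, then compare what random failures and targeted hub removal do to the giant component.

```python
import random
import networkx as nx

def giant_component_fraction(g, original_size):
    """Fraction of the original nodes still inside the largest connected component."""
    if g.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(g)) / original_size

N = 2000
random.seed(0)
g = nx.barabasi_albert_graph(N, 2, seed=0)  # a few highly connected hubs, many "peons"

k = int(0.05 * N)  # remove 5% of the nodes in each scenario

# Random failure: delete k nodes chosen uniformly at random.
g_fail = g.copy()
g_fail.remove_nodes_from(random.sample(list(g_fail.nodes()), k))

# Targeted attack: delete the k highest-degree nodes (the hubs).
g_attack = g.copy()
hubs = [node for node, _ in sorted(g_attack.degree(), key=lambda kv: kv[1], reverse=True)[:k]]
g_attack.remove_nodes_from(hubs)

print("giant component after random failure :", giant_component_fraction(g_fail, N))
print("giant component after targeted attack:", giant_component_fraction(g_attack, N))
```

Typically the randomly damaged graph stays almost fully connected while the attacked one fragments far more, mirroring the peons-versus-kingpins asymmetry in the protein data.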

Though we couldn’t see how to explain these results mathematically, an intuitive explanation suggested itself: The shortcuts were providing high-speed communication channels, enabling mutual influence to spread swiftly throughout the population. Of course, the same effect could have been achieved by connecting every oscillator directly to every other, but at a much greater cost in wiring. The small-world architecture apparently fostered global coordination more efficiently.

By the same token, perhaps small-world architecture would be advantageous in other settings where information needs to flow swiftly throughout an enormous complex system. The test case we studied next is a classic puzzle in computer science called the “density classification problem for one-dimensional binary automata.” In plainer language, imagine a ring of 1,000 lightbulbs. Each bulb is on or off. At each time step, each bulb looks at its three neighbors on either side, and using some sort of clever rule (to be determined), it decides whether to be on or off in the next round. The puzzle is to design a rule that will allow the network to solve a certain computational task, one that sounds ridiculously easy at first: to decide whether most of the bulbs were initially on or off. If more than half the bulbs were on, the repeated execution of the rule is supposed to drive the whole network to a final state with all bulbs on (and conversely, if most bulbs were off at the start, the final state is supposed to be all off).

The puzzle is trivial if there is a central processor, an eye in the sky that can inspect the whole system and count whether most bulbs were initially on or off. But remember, this system is decentralized. No one has global knowledge. The bulbs are myopic: They can see only three neighbors on either side, by assumption. And that’s what makes the puzzle so challenging: How can the system, using a local rule, solve a problem that is fundamentally global in character?

This puzzle captures the essence of what’s called collective computation. Think of a colony of ants building a nest. Individually, no ant knows what the colony is supposed to be doing, but together, they act like they have a mind. Or recall Adam Smith’s concept of the invisible hand, where, if everyone makes a local calculation to act in his or her self-interest, the whole economy supposedly evolves to a state that’s good for all. Here, in the density classification problem, similar (but much simpler) issues can be addressed in an idealized, well-controlled setting. The challenge is to devise a rule that will allow the network to decide whether most bulbs are initially on or off, for any initial configuration. The network is allowed to run for a time equal to twice its length. If there are 1,000 bulbs, the system is allowed to execute its local rule for 2,000 steps before it has to reach a verdict.

No one has yet found a rule that works every time. The world record is a rule that succeeds about 82 percent of the time—that is, it correctly classifies about 82 percent of all initial conditions as “more on” or “more off” within the allotted time. The first rule you might think to try—majority rule, where each bulb apes whatever the majority of its local neighborhood is doing—never works. The network locks up into a striped state, with blocks of contiguous bulbs that are on, interdigitated with blocks of bulbs that are off. That result is unacceptable, like a deadlocked jury. The net is supposed to converge to a unanimous verdict, with all bulbs either on or off.

Duncan and I guessed that a small-world network of bulbs might be able to solve the problem more efficiently than the original ring lattice. Converting a few of the links to random shortcuts might allow distant bulbs to communicate quickly, possibly preventing the hang-up in the striped state. We studied the performance of majority rule on ring networks with various amounts of random rewiring. As expected, when there was very little rewiring, majority rule continued to fail; the system was indistinguishable from a pristine ring, and again blundered its way into a deadlocked striped state. As we increased the amount of rewiring, the network’s performance remained low for a while, but then jumped up abruptly at a certain threshold—at about the place where each bulb had one shortcut emanating from it, on average. In this regime, majority rule now began to perform brilliantly, correctly classifying about 88 percent of all initial configurations. In other words, a dumb rule (majority rule) running on a smart architecture (a small world) achieved performances that broke the world record.

Sync: The Emerging Science of Spontaneous Order by Steven Strogatz
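
The lightbulb experiment is small enough to sketch directly. This is my own reconstruction in Python of the setup as described, not Watts and Strogatz’s code; in particular, rewiring each ring link to a random bulb with some fixed probability is an assumption standing in for their shortcut construction, and the 0.55 initial density and 0.15 rewiring probability are arbitrary choices.

```python
import random

def run_majority_rule(n=1000, shortcut_prob=0.0, seed=0):
    """Synchronous majority rule on a ring of n bulbs.

    Each bulb reads itself plus three neighbors on either side (7 cells, so no
    ties); with probability `shortcut_prob` a neighbor slot is rewired to a
    randomly chosen bulb, a crude stand-in for small-world shortcuts. Returns
    True if the network reaches the correct unanimous verdict within 2n steps."""
    rng = random.Random(seed)

    # Build each bulb's neighborhood: offsets -3..+3 on the ring, some rewired.
    neighborhoods = []
    for i in range(n):
        neigh = []
        for offset in range(-3, 4):
            if offset != 0 and rng.random() < shortcut_prob:
                neigh.append(rng.randrange(n))   # long-range shortcut
            else:
                neigh.append((i + offset) % n)   # ring link (offset 0 is the bulb itself)
        neighborhoods.append(neigh)

    state = [rng.random() < 0.55 for _ in range(n)]  # slight initial "on" majority
    majority_on = sum(state) * 2 > n

    for _ in range(2 * n):  # "allowed to run for a time equal to twice its length"
        state = [sum(state[j] for j in neighborhoods[i]) * 2 > len(neighborhoods[i])
                 for i in range(n)]
        if all(state) or not any(state):
            break

    return (all(state) and majority_on) or (not any(state) and not majority_on)

print("pure ring       :", run_majority_rule(shortcut_prob=0.0))   # usually deadlocks in stripes
print("small-world ring:", run_majority_rule(shortcut_prob=0.15))  # usually reaches the right verdict
```

Averaged over many seeds and initial densities, this reproduces the qualitative jump in performance once each bulb has roughly one shortcut, though not the precise 82 and 88 percent figures.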

He acknowledged, though, that his optimism dims once human beings—with their illogic, hidden agendas, and sheer bugginess—enter the equation. “We’re imperfect people pursuing perfect ideas, and there’s tremendous frustration in the gap,” he said. “Writing code, one or two people, that’s the Platonic ideal. But when you want to impact the world you need one hundred people, then one thousand, then ten thousand—and people have all these people issues.” He examined the problem in silence. “A world of just computers wouldn’t work,” he concluded wistfully. “But a world of just people could certainly be improved.”

Tomorrow’s Advance Man by Tad Friend

“Stigmergy” is a new word, invented recently by Grassé to explain the nest-building behavior of termites, perhaps generalizable to other complex activities of social animals. The word is made of Greek roots meaning “to incite to work,” and Grassé’s intention was to indicate that it is the product of work itself that provides both the stimulus and instructions for further work.

The Lives of a Cell by Lewis Thomas

Certainly I have seen thoughts put on paper before; but since I have come distinctly to perceive the contradiction implied in such an action, I feel completely incapable of forming a single written sentence. … I torture myself to solve the unaccountable puzzle, how one can think, talk, or write. You see, my friend, a movement presupposes a direction. The mind cannot proceed without moving along a certain line; but before following this line, it must already have thought it. Therefore one has already thought every thought before one thinks it. Thus every thought, which seems the work of a minute, presupposes an eternity. This could almost drive me to madness.

— Adventures of a Danish Student by Poul Martin Møller (from The Making of the Atomic Bomb by Richard Rhodes)

Were this thinking not in the framework of scientific work, it would be considered paranoid. In scientific work, creative thinking demands seeing things not seen previously, or in ways not previously imagined; and this necessitates jumping off from “normal” positions, and taking risks by departing from reality. The difference between the thinking of the paranoid patient and the scientist comes from the latter’s ability and willingness to test out his fantasies or grandiose conceptualizations through the system of checks and balances science has established–and to give up those schemes that are shown not to be valid on the basis of these scientific checks. It is specifically because science provides such a framework of rules and regulations to control and set bounds to paranoid thinking that a scientist can feel comfortable about taking the paranoid leaps. Without this structuring, the threat of such unrealistic, illogical, and even bizarre thinking to overall thought and personality organization in general would be too great to permit the scientist the freedom of such fantasying.

— Scientists: Their Psychological World by Bernice T. Eiduson (from The Making of the Atomic Bomb by Richard Rhodes)

When it went off, in the New Mexico dawn, that first atomic bomb, we thought of Alfred Nobel, and his hope, his vain hope, that dynamite would put an end to wars. We thought of the legend of Prometheus, of that deep sense of guilt in man’s new powers, that reflects his recognition of evil, and his long knowledge of it. We knew that it was a new world, but even more we knew that novelty itself was a very old thing in human life, that all our ways are rooted in it.

— J. Robert Oppenheimer, quoted in The Making of the Atomic Bomb by Richard Rhodes

There is an old joke about an engineer, a priest, and a doctor enjoying a round of golf. Ahead of them is a group playing so slowly and inexpertly that in frustration the three ask the greenkeeper for an explanation. “That’s a group of blind firefighters,” they are told. “They lost their sight saving our clubhouse last year, so we let them play for free.”

The priest says, “I will say a prayer for them tonight.”

The doctor says, “Let me ask my ophthalmologist colleagues if anything can be done for them.”

And the engineer says, “Why can’t they play at night?”

The Engineer’s Lament by Malcolm Gladwell

“The Toyota guy explained this to the panel,” Martin went on. “He said, ‘Here’s our process.’ So I said to him, ‘What do you imagine the people are thinking? They’re shaking like a leaf at the side of the road and after that whole experience they are told, “The car’s fine. Chill out. Don’t make mistakes anymore.” Of course they are not going to be happy. These people are scared. What if instead you sent people out who could be genuinely empathetic? What if you said, “We’re sorry this happened. What we’re worried about is your comfort and your confidence and your safety. We’re going to check your car. If you’re just scared of this car, we’ll take it back and give you another, because your feeling of confidence matters more than anything else.” ’ It was a sort of revelation. He wasn’t a dumb guy. He was an engineer.

The Engineer’s Lament by Malcolm Gladwell

Mathematics is not a contemplative but a creative subject; no one can draw much consolation from it when he has lost the power or the desire to create; and that is apt to happen to a mathematician rather soon. It is a pity, but in that case he does not matter a great deal anyhow, and it would be silly to bother about him.

A Mathematician’s Apology by G. H. Hardy

AI researchers have devoted little effort to passing the Turing test, believing that it is more important to study the underlying principles of intelligence than to duplicate an exemplar. The quest for “artificial flight” succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics. Aeronautical engineering texts do not define the goal of their field as making “machines that fly so exactly like pigeons that they can fool even other pigeons”.

Artificial Intelligence: A Modern Approach by Peter Norvig and Stuart Russell

The computation part of the process is inevitably performed by a dynamical physical system, evolving in time. In this sense, the question of what can be computed, is intermingled with the physical question of which systems can be physically realized. If one wants to perform a certain computation task, one should seek the appropriate physical system, such that the evolution in time of the system corresponds to the desired computation process. If such a system is initialized according to the input, its final state will correspond to the desired output.

Quantum Computation by Dorit Aharonov

In physics thought experiments, a Boltzmann burrito is a cylindrical foodstuff that arises due to extremely rare random fluctuations out of a state of thermodynamic equilibrium. For example, in a homogeneous Newtonian soup, theoretically by sheer chance all the atoms could bounce off and stick to one another in such a way as to assemble a functioning burrito (though this would, on average, take vastly longer than the current lifetime of the universe).

The idea is indirectly named after the Austrian physicist Ludwig Boltzmann (1844–1906), who in 1896 published a theory that the Universe is observed to be in a highly improbable non-equilibrium state because only when such states randomly occur can burritos exist. One criticism of Boltzmann’s “Boltzmann universe” hypothesis is that the most common thermal fluctuations are as close to equilibrium overall as possible; thus, by any reasonable criterion, burritos in a Boltzmann universe with myriad neighboring stars would be vastly outnumbered by “Boltzmann burritos” existing alone in an empty universe.

Boltzmann burritos gained new relevance when some cosmologists started to become concerned that, in many existing theories about the Universe, burritos in the current Universe appear to be vastly outnumbered by Boltzmann burritos in the future Universe; this leads to the absurd conclusion that statistically all burritos are likely to be Boltzmann burritos. Such a reductio ad absurdum argument is sometimes used to argue against certain theories of the Universe. When applied to more recent theories about the multiverse, Boltzmann burrito arguments are part of the unsolved measure problem of cosmology.

Boltzmann Burrito

Acceptable estimations for software development:

Reality prevents better accuracy.

Software Development Estimation by Lars Wirzenius

Messrs. Hubbard and Bell want to install one of their “telephone devices” in every city. The idea is idiotic on the face of it. Furthermore, why would any person want to use this ungainly and impractical device when he can send a messenger to the telegraph office and have a clear written message sent to any large city in the United States?

The electricians of our company have developed all the significant improvements in the telegraph art to date, and we see no reason why a group of outsiders, with extravagant and impractical ideas, should be entertained, when they have not the slightest idea of the true problems involved. Mr. G.G. Hubbard’s fanciful predictions, while they sound rosy, are based on wild-eyed imagination and lack of understanding of the technical and economic facts of the situation, and a posture of ignoring the obvious limitations of his device, which is hardly more than a toy …

— An internal Western Union report, via Can-Do vs. Can’t-Do Culture by Ben Horowitz

That being said, if you find yourself drinking a martini and writing programs in garbage-collected, object-oriented Esperanto, be aware that the only reason that the Esperanto runtime works is because there are systems people who have exchanged any hope of losing their virginity for the exciting opportunity to think about hex numbers and their relationships with the operating system, the hardware, and ancient blood rituals that Bjarne Stroustrup performed at Stonehenge.

— James Mickens

Being in the early phases of their respective projects helped—theirs was an environment of exploration, when things were still ambiguous and crazy blue-sky ideas were encouraged. Having periods like that when designers can have the freedom to explore and dream up kind-of-out-there solutions is essential for good design ideas to flourish. If you are always executing on a week-by-week roadmap and running the product development process like a bootcamp, it’s likely you will get some optimization wins, but full-blown new concepts are not usually born from those environments. There needs to be time for both an execute-and-optimize strategy in design, as well as room and space for more creative, bigger-picture solutions.

Go Big by Going Home by Julie Zhuo

Bad Agile seems for some reason to be embraced by early risers. I think there’s some mystical relationship between the personality traits of “wakes up before dawn”, “likes static typing but not type inference”, “is organized to the point of being anal”, “likes team meetings”, and “likes Bad Agile”. I’m not quite sure what it is, but I see it a lot.

Good Agile, Bad Agile by Steve Yegge