Monday, October 25, 2010

Making Me Guess

It is often tricky when you are asked to estimate some odd quantity, like how many people dressed in panda suits there were at a party. The person asking is usually hoping to surprise you with the large deviation between your guess ("what you'd expect") and the actual quantity. But you, the guesser, can't help but take into account the fact that they are asking you at all, which they presumably wouldn't be doing unless the quantity to be guessed were extremely unlikely and otherwise surprising.

So, the supposedly naive guesser is forced to either guess especially high and annoy the asker, or guess low and appease the asker. Guessing high makes you look like a smartass, while guessing low is tedious.

As a young lad, I would never really know how to choose between guessing high and low, and the choice often brought on some anxiety. But now I usually just explain the whole situation, indicating how being asked to guess has instantaneously shifted my probability distribution. Incidentally, I no longer have any friends. 
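
For the quantitatively inclined, here is a minimal sketch of that shift, with every number invented for illustration: the prior says panda-suit counts are almost always small, while the hypothetical likelihood says people only bother asking you to guess when the true count is surprising. Conditioning on being asked drags the expected count upward.

```python
import numpy as np

counts = np.arange(21)           # candidate panda-suit counts, 0 through 20
prior = np.exp(-counts)          # made-up prior: small counts overwhelmingly likely
prior /= prior.sum()

# Hypothetical likelihood: the chance anyone bothers to ask "guess how many!"
# grows with how surprising the true count is.
p_asked = 1 - np.exp(-0.3 * counts)

posterior = prior * p_asked      # Bayes' rule, up to normalization
posterior /= posterior.sum()

print(f"prior mean:     {(counts * prior).sum():.2f}")      # about 0.6
print(f"posterior mean: {(counts * posterior).sum():.2f}")  # about 2.0 -- being asked shifted it
```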

(Inspired by a representative convo with Brittany)

Sunday, October 24, 2010

Trade Off #14: Precision vs Simplicity


When you describe something, the more precisely your model explains the given data, the more complicated that model must be. Don't believe me? Lo, behold these examples, then:
  • In describing the path up to my apartment, I could say "there are stairs", or I could say "there are fourteen stairs"; vagueness is less precise but it is also simpler. The bottom line is that having to walk up any number of stairs is too many.
  • In fitting a model to data, one can explain more variance by including more free parameters, at the cost of complication. There are plenty of ways to punish a model for having additional parameters and thus make it earn each one through explanatory ability; a quick sketch follows this list. (see here and here)
  • The failure of humans to adequately trade off precision and simplicity in certain contexts--as when we judge the probability of X and Y together to be greater than the probability of X alone, the conjunction fallacy--is one of our well-documented cognitive biases. (see here)
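To make the second bullet concrete, here is a minimal sketch of one common penalty scheme, AIC, applied to toy data. Everything here--the data, the candidate models, the noise level--is invented for illustration; the point is just that each extra parameter must buy enough extra fit to pay its penalty.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)  # toy data

for degree in range(1, 8):
    coeffs = np.polyfit(x, y, degree)          # fit a polynomial of this degree
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, degree + 1                  # k = number of free parameters
    sigma2 = (resid ** 2).mean()               # ML estimate of the noise variance
    log_lik = -0.5 * n * np.log(sigma2)        # Gaussian log-likelihood, up to a constant
    aic = 2 * k - 2 * log_lik                  # each parameter costs 2 points
    print(f"degree {degree}: residual variance {sigma2:.4f}, AIC {aic:.1f}")
```

Higher degrees always shrink the residuals, but past some point the improvement no longer covers the penalty and the AIC turns back up. Criteria like BIC differ mainly in the price they charge per parameter.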
There are some well-known incidents in the history of science that at first glance appear to be exceptions to this trade-off. For example, Kepler's idea of elliptical planetary orbits eliminated the need for astronomers to model extra epicycles, both simplifying and adding precision to our understanding of planetary motion.

But in the view of this committee, these precision-enabling paradigm shifts are especially complicated, involve the shifting of assumptions at a fundamental level, and only seem simple in distant hindsight. That's one reason why they are so hard to come by.

(photo of spiral galaxy, which Johannes Kepler probably would have marveled at, goes to NASA's Marshall Center)

Being Skeptical Of Your Skepticism

If you notice one particular flaw in an author's fact checking or reasoning, to what extent do you discount the rest of what that author claims?

If you hold a narrative view of others' actions--X did Y because she is a Z--then you will tend to assume that the author is a liar who cannot be trusted.

But truth-seeking, like any other human tendency, falls on a spectrum. (A psychopath holds down one extreme, while Abe Lincoln, god bless his honest soul, holds down the other.) So that binary categorization is almost surely wrong.

Yet, we are heavily biased towards categorizing people and against seeing the full spectrum. Sometimes this bias is due to laziness, as labeling others as liars liberates us from the effort of actually understanding their claims. But less trivially, labeling others allows us to feel like part of a more exclusive group, a group that would presumably never commit such an error.

Surely, we must downshift our faith in an author's other claims somewhat upon finding that they have made a mistake. But remember the high prior probability that the author is merely fallible and doesn't differ much in degree of truth-seeking from the rest of us. Now, if they get two things wrong...
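
A toy calculation makes the point (all rates and counts invented): compare a typical fallible author who errs on 5% of checkable claims against a genuinely careless one who errs on 30%, under a prior that says careless authors are rare.

```python
from math import comb

def binom_pmf(k, n, p):
    # probability of k errors among n independent claims, each wrong with probability p
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

P_FALLIBLE, P_CARELESS = 0.95, 0.05    # made-up prior over author types
N_CLAIMS = 20                          # suppose we fact-checked 20 claims

for errors in (1, 2, 3):
    like_f = binom_pmf(errors, N_CLAIMS, 0.05)   # fallible: errs 5% of the time
    like_c = binom_pmf(errors, N_CLAIMS, 0.30)   # careless: errs 30% of the time
    post_c = P_CARELESS * like_c / (P_FALLIBLE * like_f + P_CARELESS * like_c)
    print(f"{errors} error(s) in {N_CLAIMS} claims -> P(careless) = {post_c:.3f}")
```

With these numbers, even three errors in twenty claims leaves the careless hypothesis below ten percent, and a single error actually fits the fallible author better than the careless one.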

Bottom Line: While it's healthy to be skeptical, it's healthier still, for the body if not the ego, to be appropriately skeptical of your own skepticism.

Tuesday, October 19, 2010

Zuckerberg On Reward And Interest

He discusses The Social Network here. Precisely what he says is that Hollywood "can't wrap their head around the idea that someone might build something because they like building things." And... trigger the applause lights.

The reality is somewhere in the middle. To intimate that the motivation for building facebook was purely intrinsic--which he in fact intimates--goes too far; social esteem from the Peter Thiels and Sean Parkers of the world had to be a motivating factor. But murky middle grounds are much harder to convey in a movie or a sound bite. Harder to convey in a blog post, too.

Sunday, October 17, 2010

Trade Off #13: Robustness vs Fragility


One of the more ridiculous scenes in Star Wars: A New Hope is when one measly torpedo fired into a thermal exhaust port sets off a chain reaction that destroys the entire Death Star. Even granting that the Empire doesn't consider a small one-man fighter to be any threat, and even though the shot is ostensibly "one in a million," buying into the whole fiasco requires some boyish naïveté.

But outside of Hollywood, it's surprising how many systems behave similarly. Designs built to maintain function despite large perturbations of one type are often highly vulnerable to perturbations from a different angle. It seems that optimizing for robustness to expected deviations generally comes at the expense of increased fragility to unexpected ones; a numerical sketch after the list makes this concrete. Examples:
  • Forest buffer zones designed to protect against particular types of fires can be defeated by unexpected types of fires that, for example, arrive from a different direction. (see diagram here)
  • A Boeing 777 relies on complex flight-control computers that can compensate for variation in cargo distribution or atmospheric conditions, but it is vulnerable to an electrical outage or computer error in a way that a simpler plane would not be. (see here)
  • Genetic regulatory networks are built to sense changing environments and maintain function across them, but a mutation that rewires the internal connections of such a network is almost always lethal. (see here)
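Control theory offers one classical way to put numbers on this: the "waterbed effect," whereby pushing a feedback loop's sensitivity to disturbances down at the frequencies you design for forces it up somewhere else. Here is a minimal sketch; the plant (an integrator with a small time delay) and the two gains are arbitrary choices for illustration.

```python
import numpy as np

tau = 0.1                              # a small feedback delay, in seconds
w = np.logspace(-2, 3, 20000)          # frequency grid, rad/s

for k in (2.0, 8.0):                   # both gains keep this loop stable
    L = k * np.exp(-1j * w * tau) / (1j * w)   # loop transfer function
    S = 1.0 / (1.0 + L)                        # sensitivity to disturbances
    low = np.abs(S[w < 0.5]).max()     # response to slow, anticipated bumps
    peak = np.abs(S).max()             # worst-case amplification anywhere
    print(f"gain {k}: |S| below 0.5 rad/s <= {low:.3f}, peak |S| = {peak:.2f}")
```

The higher-gain loop rejects the slow disturbances it was tuned for about four times better, but its worst-case amplification of disturbances elsewhere is nearly twice as large.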
Some have argued that evolution has selected for suboptimal designs in biological systems precisely because they are less vulnerable to unexpected events. If this is broadly true, it would be worth keeping in mind as we go about designing the institutions that shape our market interactions.

On the other hand, this trade-off is a fairly recent idea; it's not particularly well-defined, and it will be important to see what consensus develops around it before we draw too many implications. Still, as far as this committee is concerned, robustness vs fragility is indeed canonical.

(Photo comes from flickr user Scott Beale)

Saturday, October 9, 2010

Trade Off #12: Protection vs Freedom


Preventing deleterious forces from harming individuals typically comes at the cost of constraining the actions of those individuals in some way. Thus we come to the common trade off between protection and freedom. Examples:
  • When a herd of prey animals is large enough, it stands a chance of fighting off a given predator. Thus prey tend to aggregate, lowering their freedom but increasing their probability of survival; a toy calculation follows this list. (see here)
  • Work is one way to trade freedom now for protection from various exogenous forces in the future. Like preparing for the zombie apocalypse. (see here)
  • Economic interventions that increase freedoms, like organ donation markets, are typically argued against on the basis of protecting individuals from exploitation. (see here)  
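For the first bullet, a toy calculation with every number invented: suppose one predator attacks each season, a herd of n animals drives it off with probability n/50 (capped at one), and otherwise a single member, chosen at random, is taken.

```python
def season_survival(n):
    # probability a given herd member survives one season (toy numbers)
    p_drive_off = min(1.0, n / 50)          # bigger herds can repel the predator
    return 1 - (1 - p_drive_off) * (1 / n)  # otherwise the risk is diluted across n members

for n in (1, 5, 25, 100):
    print(f"herd of {n}: survival over 10 seasons ~ {season_survival(n) ** 10:.2f}")
```

Survival climbs steeply with herd size; the cost, left unpriced in this toy model, is the freedom to graze where you please.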
This is one of the more politically charged trade offs, which I have naturally been avoiding like a beaker of Yersinia pestis. But I have found it useful to frame these political debates in terms of trade offs: doing so lets you see the other person's side, even when the values informing your actual positions on the trade off differ.

(Credit for photo of Harlech Castle goes to theroamincatholic)

Thursday, October 7, 2010

Your Relationship With Your Former Self

Fernando Pessoa considers this question in The Book Of Disquiet,
I often find texts of mine that I wrote when I was very young--when I was seventeen or twenty. And some have a power of expression that I do not remember having then. Certain sentences and passages I wrote when I had just taken a few steps away from adolescence seem produced by the self I am today, educated by years and things. I recognize I am the same as I was. And having felt that I am today making a great progress from what I was, I wonder where this progress is if I was then the same as I am today. 
Pessoa realized he was underestimating his former self after reading his old writing. This makes sense. It's harder to construct a personal narrative of growth when the sentences showing that you used to be just as sweet remain visible, instead of diffusing into infinity like spoken words, or being lost in the synaptic puncta of the cortex, like most thoughts. 

So with the masses leaving digital footprints in tweets and status updates, will we all soon find it more difficult to believe in our redemption stories? As the world freaks out about others peering into its private life, perhaps the person we should be most concerned about finding our innermost thoughts is ourselves, in the future. Our syntax might seem a little too tight, our inner monologues a little too kindred.

This is one of the questions I ponder as I scroll through old posts on a rainy evening. And my other question is... was I more alive then, than I am now?

Monday, October 4, 2010

Trade Off #11: False Alarm vs Oversight


We can divide mistakes into two forms. The first type is a false alarm, in which you overestimate the likelihood that an event will occur; the second is an oversight, in which you underestimate it. Suppressing the probability of an oversight will make a false alarm more likely, and vice versa; a short numerical sketch after the examples makes this concrete. Plenty of examples; I'll just give three:
  • Statisticians make a distinction between "type one" errors, rejecting a null hypothesis when it is actually true, and "type two" errors, failing to reject a null hypothesis when it is actually false. If the null hypothesis is that a given event will not happen, then type one errors can be thought of as false alarms, and type two errors as oversights.
  • A lifeguard can choose to pay less attention to each momentary dip under water, and thus lower his stress from false alarms. But he does so at the cost of increasing the risk of an oversight--not noticing when someone has been underwater too long.
  • Rhodopsin switches conformational states in response to photon exposure. We can think of a false alarm as rhodopsin changing states when no photon has hit it, and an oversight as rhodopsin failing to switch states despite photon exposure. Evolution seems to have strongly selected for minimizing false alarms over minimizing oversights. (That is, oversights still occur ~30% of the time; see here)
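Here is the promised sketch, a textbook signal-detection setup with invented numbers: a noisy reading is Gaussian with mean 0 when nothing is happening and mean 2 when the event occurs, and you cry "event" whenever the reading crosses a threshold. Sliding the threshold only trades one kind of mistake for the other.

```python
import math

def normal_cdf(x):
    # standard normal cumulative distribution function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

SIGNAL_MEAN = 2.0   # invented separation between "nothing" and "event" readings

for threshold in (0.5, 1.0, 1.5, 2.0):
    false_alarm = 1 - normal_cdf(threshold)              # noise crosses the line
    oversight = normal_cdf(threshold - SIGNAL_MEAN)      # event fails to cross it
    print(f"threshold {threshold}: false alarms {false_alarm:.2f}, "
          f"oversights {oversight:.2f}")
```

Raising the threshold suppresses false alarms and inflates oversights; no threshold drives both to zero. You only get to choose which mistake you would rather make.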
This is one of the more solid members of the canon and can be used to explain a lot. Some people choose to ignore it--saying, for example, that a lifeguard should always pay as much attention to his swimmers as possible. But anyone who has been a lifeguard for long knows this is nigh impossible, that the stress builds up, and at some point you have to make trade offs.

(Above photo credit goes wholly to flickr user Abhijit Patil)