Religion, GDP and Happiness

The percentage of a country's population that feels religion plays an important role in their lives (x-axis) is negatively correlated with per capita GDP (y-axis, in log scale):

Here are the heatmaps, first for religion:

and per capita GDP (PPP):

From the heat maps above, several countries in Africa and Asia stand out as having low per capita GDP and high importance of religion, while all of Scandinavia and Australia stand out as places with high per capita GDP and low importance of religion. The US is somewhat of an outlier, with the highest per capita GDP in the world (~$35k) and about 65% of the population reporting that religion plays an important role in their lives.

For completeness' sake, here is the heat map for life satisfaction, or long-term happiness:

Combining data on religion and GDP with data on long term happiness, we get roughly the following picture:

  1. Long term happiness correlates positively with per capita GDP.
  2. Long term happiness correlates negatively with importance of religion.
  3. Importance of religion correlates negatively with per capita GDP.

My guess is that the causal structure among the three variables, Religion (R), Happiness (H) and per capita GDP, is the following:

When per capita GDP is high, the basic needs of life are met, people are relatively comfortable, and therefore long term happiness is high; but for the same reason, very few people feel compelled to ask fundamental questions of the type that people turn to religion for. This creates a negative correlation between religion and happiness. In the causal structure above, religion and happiness are related, but given per capita GDP, they are independent.
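One rough way to check the 'independent given per capita GDP' claim is to compare the raw religion-happiness correlation with the partial correlation after controlling for log GDP; if the causal picture above is right, the latter should be close to zero. Below is a minimal sketch of that computation. The file name country_data.csv and its column names are hypothetical placeholders for whatever country-level dataset is actually used.

[sourcecode language="python"]
# Minimal sketch: does controlling for log(GDP per capita) remove the
# religion-happiness correlation? The CSV file and its column names are
# hypothetical placeholders, not a real dataset.
import numpy as np
import pandas as pd

df = pd.read_csv("country_data.csv")  # assumed columns: religion, happiness, gdp_per_capita
R = df["religion"].values
H = df["happiness"].values
G = np.log(df["gdp_per_capita"].values)

def residualize(y, x):
    # Residuals of y after regressing out x (with an intercept term).
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

print("corr(R, H)           =", np.corrcoef(R, H)[0, 1])
print("corr(R, H | log GDP) =", np.corrcoef(residualize(R, G), residualize(H, G))[0, 1])
[/sourcecode]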

The Super Investors of Graham-and-Doddsville

The Super Investors of Graham-and-Doddsville is a neat little 13-page article by Warren Buffett that appeared in 1984 in a Columbia Business School magazine. In it, Buffett talks about the value-investing method of his teachers Graham and Dodd of Columbia University.

Main takeaways from the article:

I'm convinced that there is much inefficiency in the market. These Graham-and-Doddsville investors have successfully exploited gaps between price and value. When the price of a stock can be influenced by a "herd" on Wall Street with prices set at the margin by the most emotional person, or the greediest person, or the most depressed person, it is hard to argue that the market always prices rationally. In fact, market prices are frequently nonsensical.

  • Graham made a distinction between investing and speculating:

An investment operation is one which, upon thorough analysis, promises safety of principal and a satisfactory return. Operations not meeting these requirements are speculative.

  • The main tenet of Graham and Dodd's school of investing is to look for inconsistencies between the current market price of a company and its true value. This requires analyzing a company's balance sheet and activities in order to come up with a fair value. When the market price drops because of a silly reason, rather than a change in the company's value, value investors argue that it will eventually rebound back. An example would be a company's stock price nosediving if its CEO is found to be involved in a personal scandal and is about to be fired. When no value investing opportunities present themselves, value investors prefer to sit with cash and wait, rather than invest in risky investments. 
  • Value investors look for a margin of safety, that is a significant difference between the market price and the value.

... You don’t try and buy businesses worth $83 million for $80 million. You leave yourself an enormous margin. When you build a bridge, you insist it can carry 30,000 pounds, but you only drive 10,000 pound trucks across it. And that same principle works in investing.

  • Value investors wait for the right opportunity to invest in a security with a large margin of safety. In the absence of such opportunities, they prefer to sit on cash. Mohnish Pabrai, another student of the Graham-and-Dodd school, says of Charlie Munger, also of the Graham-and-Dodd school and Buffett's partner at Berkshire Hathaway:

You have to be like a man standing with a spear next to a stream. Most of the time he's doing nothing. When a fat juicy salmon swims by, the man spears it. Then he goes back to doing nothing. It may be six months before the next salmon goes by.

  • Buffett argues that the value investing school of Graham and Dodd has produced a remarkably large crop of disciples who have performed extremely well with that method, adding that this could not have been pure chance. 
  • Ridiculing complex mathematical models, Buffett has this to say about the practice of using multiple variables and seasonal trends in making 'investment' decisions:

I always find it extraordinary that so many studies are made of price and volume behavior, the stuff of chartists. Can you imagine buying an entire business simply because the price of the business had been marked up substantially last week and the week before? Of course, the reason a lot of studies are made of these price and volume variables is that now, in the age of computers, there are almost endless data available about them. It isn't necessarily because such studies have any utility; it's simply that the data are there and academicians have worked hard to learn the mathematical skills needed to manipulate them. Once these skills are acquired, it seems sinful not to use them, even if the usage has no utility or negative utility. As a friend said, to a man with a hammer, everything looks like a nail.

  • Risk and reward are not always positively correlated, a case in point being value investing, where risk and reward are negatively correlated. I thought this was a significant point -- most discussions on investing implicitly assume that risk and reward are positively correlated.

Sometimes risk and reward are correlated in a positive fashion. If someone were to say to me, "I have here a six-shooter and I have slipped one cartridge into it. Why don't you just spin it and pull it once? If you survive, I will give you $1 million." I would decline -- perhaps stating that $1 million is not enough. Then he might offer me $5 million to pull the trigger twice -- now that would be a positive correlation between risk and reward! The exact opposite is true with value investing. If you buy a dollar bill for 60 cents, it's riskier than if you buy a dollar bill for 40 cents, but the expectation of reward is greater in the latter case. The greater the potential for reward in the value portfolio, the less risk there is.

  • It appears to me that 'investors' who are looking for a gain on a specified short time scale (say 1 month) are more constrained than those willing to wait longer (i.e. value investors). The comparison is akin to a company only looking to show great quarterly results, versus a company willing to spend on R&D that will mature only over the next 5-10 years. This line of thinking suggests that professional money managers (the majority of whom are not value investors, according to Buffett) are less consistent in their performance because they fall in the former category, since they must show their performance quarterly, or annually. What if the pressure to post a performance number every quarter or year-end were absent?
  • It struck me that it requires careful analysis of a company's records and activities to figure out its fair value. If a significant fraction of market participants make investment decisions without doing this analysis (Graham calls this speculation, as opposed to investment), this amounts to more herd mentality in the market, and by Buffett's own quote above, this would lead to deviations from the efficient markets hypothesis. In a sense, the success of value investing relies on others not being value investors. Addressing this thought, Buffett ends the article befittingly:

... some of the more commercially minded among you may wonder why I am writing this article. Adding many converts to the value approach will perforce narrow the spreads between price and value. I can only tell you that the secret has been out for 50 years, ever since Ben Graham and Dave Dodd wrote Security Analysis, yet I have seen no trend toward value investing in the 35 years that I've practiced it. There seems to be some perverse human characteristic that likes to make easy things difficult. The academic world, if anything, has actually backed away from the teaching of value investing over the last 30 years. It's likely to continue that way. Ships will sail around the world but the Flat Earth Society will flourish. There will continue to be wide discrepancies between price and value in the marketplace, and those who read their Graham & Dodd will continue to prosper.

Now, all I need to do is figure out how to come up with the true value of a company.

Creativity is Just Connecting Things

Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesize new things. And the reason they were able to do that was that they’ve had more experiences or they have thought more about their experiences than other people. (...)

Unfortunately, that’s too rare a commodity. A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem. The broader one’s understanding of the human experience, the better design we will have.

— Steve Jobs

Lack of price discovery in the Healthcare Market

Price discovery is

... the process of determining the price of an asset in the marketplace through the interactions of buyers and sellers.

Buyers and sellers of a good or service set the price by expressing how much they are willing to buy or sell for. When a very large number of buyers and sellers interact in this way, the final price of the good or service settles to one that incorporates all this information. However, in the healthcare market, this mechanism is absent.

When you are sick and you visit a doctor, you do not get to ask up front what you will be charged for seeing the doctor! Instead, you present your health insurance card to the receptionist, thereby disclosing information about your ability to pay, and creating information asymmetry. The other party (i.e. the healthcare provider) can now select a price taking into account the information you have just disclosed, and it does so after you have made use of its services, rather than before. This is akin to getting to know the price of a haircut at a barber shop after you have had the haircut. There is no price discovery mechanism in the healthcare industry in the US. I recommend:

  1. Providers should not be allowed to see the insurance coverage a patient has. Each patient should simply be treated. Claims and payments should be handled by a third party, such as a government agency, that acts as a mediator between the hospital and the patient.
  2. Providers should be mandated to disclose a menu of charges, like a restaurant or a barber.
  3. Providers should charge the same fee, irrespective of whether a patient has insurance or not, or how much insurance coverage she has.

Ostensibly, providers check a patient's insurance card in order to make sure that the patient can pay for the services she is about to purchase. Instead, this function should be performed by a 'lookup' over the internet on a server run by a government agency: Input = 'name of the medical procedure', Output = 'covered or not'. This will prevent the provider from learning about the extent of coverage available to the patient. Not knowing this will force providers to assess fees more honestly, and to compete with each other more realistically, bringing down the costs borne by patients. I think this will also prevent providers from passing on large unnecessary fees to insurance companies.
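As a sketch of what the lookup in recommendation 1 might look like from the provider's side (the endpoint URL, the JSON fields, and the opaque patient token below are hypothetical placeholders, not an existing government API):

[sourcecode language="python"]
# Hypothetical coverage-lookup client: the provider learns only whether a
# procedure is covered, never the extent of the patient's coverage.
# The URL and JSON fields are illustrative assumptions, not a real API.
import json
from urllib import request

def is_covered(procedure_name, patient_token):
    payload = json.dumps({"procedure": procedure_name, "patient": patient_token}).encode()
    req = request.Request(
        "https://coverage.example.gov/lookup",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["covered"]  # a single boolean: covered or not

# Example: the provider gets a yes/no answer and nothing more.
# print(is_covered("MRI, lower spine", patient_token="opaque-id-123"))
[/sourcecode]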

A formal mechanism-design formulation of the problem would be interesting to tackle, with four players: patients, providers, insurance companies, and the government.

Understanding Happiness

Happiness is as hard to define as it is to achieve. Everybody wants to be happy. Even masochists. I think it is best to use  a non-constructive definition:

Happiness is the goal that drives all human actions and desires.

If long term happiness is everybody's ultimate goal, then it is worth learning how to achieve long term happiness. In fact, if being happy is the ultimate goal (as opposed to, say, being wealthy), then our education system should also be teaching us how to be happy over a lifetime, rather than purely technical or vocational skills. Simple GDP growth does not imply an increase in the happiness of a society -- as indicated by data from the last ~40 years in the US, comparing per capita GDP and happiness levels:

[Plot: US per capita GDP and average happiness levels over the last ~40 years]

While per capita GDP has risen more or less steadily, happiness levels have remained more or less stagnant in the last ~40 years.

Should countries develop public policy with the goal of making a society happier, rather than with the goal of increasing GDP? I think it is an idea worth exploring (Scandinavian countries seem to rank highest in the world in happiness scores, despite high taxes). The government of Bhutan came up with the Gross National Happiness index, which measures the average life satisfaction of the citizens in a country.

[Map: world map of happiness]

This correlates well with health, access to education, and wealth (GDP per capita). At any given time, the relationship between the average happiness of a country and its per capita GDP seems to be log-linear, meaning that happiness is roughly linear in the log of per capita GDP.

[Plot: average happiness vs. per capita GDP (log scale), by country]

This is because, in order to increase the happiness level of a society by 1 unit, the increase in wealth required is proportional to the current wealth. For example, if the required increase in personal wealth for a group with per capita income of $1000 is $x, then it is $10x for a group with per capita income of $10,000.
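In symbols (my notation): if $latex H$ is average happiness and $latex G$ is per capita GDP, the log-linear relationship says roughly $latex H \approx a + b \log G$ for constants $latex a, b$ fit from the data. Raising $latex H$ by one unit then requires multiplying $latex G$ by the fixed factor $latex e^{1/b}$, i.e. an absolute increase proportional to current wealth, which is exactly the $1000-versus-$10,000 example above.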

Near the end of this talk, Daniel Kahneman says that in a study done with the Gallup organization, he found that:

Below an income of … $60,000 a year, people are unhappy, and they get progressively unhappier the poorer they get. Above that, we get an absolutely flat line. … Money does not buy you experiential happiness, but lack of money certainly buys you misery.

Kahneman distinguishes between two types of happiness: that of the experiencing self and that of the reflecting self. It is possible to be happy in the experiencing self but have a poor happiness score when reflecting on a long time frame in the past, and vice-versa. For the type of happiness that measures life satisfaction in retrospect, there is no flat line -- i.e. it continues to increase with increasing wealth. I don't find this too surprising. It is the difference between short term and long term happiness. It is easy to be happy in the short term at the expense of the long term. On the other hand, tolerating displeasure during hard work in the present can have a huge payoff in long term happiness in the future.

In this TED talk, Dan Gilbert showcases his research that shows that happiness can be synthesized by individuals. So happiness is not some finite resource that needs to be distributed among people, instead one can simply choose to be happy, despite seemingly adverse conditions. This is fascinating, because it provides experimental evidence that happiness has to do not just with our external circumstances (such as GDP per capita), but also with how we process information in our minds. Several religions have the concept of expressing gratitude. The act of being grateful basically synthesizes happiness out of thin air.

Defining Intelligence

Intelligence is a slippery thing to define. The following definition recently struck me:

That which allows one to maximize happiness over long term.

I like this definition because it is short (cf. MDL, Occam's Razor), it makes logical sense, and it carries a lot of meaning without going into the details of how to be intelligent. It is logical to me because of the following argument: suppose a person is allowed to live two versions of his life starting from some fixed point. All events and circumstances in the two versions are the same except for the actions taken by the person. Then he can be said to be more intelligent in the version of his life in which he achieves greater happiness over the rest of his life.

Intelligence is needed in order to understand what actions will make us happy, for how long, and whether those actions will have any effects on our future happiness. Making decisions to maximize cumulative happiness is certainly a non-trivial task. Sometimes one must put oneself through short-term adversity (e.g. graduate school at little or no stipend, or an athlete undergoing gruelling training for a race) to be able to do well later. Sometimes, one decides to undertake an action that provides short term happiness, but at the cost of long term happiness. It takes intelligence to learn to avoid such behaviour in the future.

Modern definitions of intelligence from the scientific and psychology community are incredibly long-winded [Wikipedia]

A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on," "making sense" of things, or "figuring out" what to do.

and:

Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person's intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.

The same Wikipedia page also lists various different definitions given by researchers. The long-windedness of these definitions is somewhat excusable as an attempt to be all-inclusive and general. But in the end, the notion of intelligence is a man-made model, invented to try and explain phenomena. I think a focus on happiness as the central phenomenon to be explained goes a long way in simplifying our understanding of intelligence.

Average age of first-time Mothers

The average age of first time mothers in the developed countries of the world has been rising for the last ~40 years.

Here is another plot that shows the rate of occurrence of Down Syndrome, a chromosomal defect, as a function of the age of the mother at the time of child birth.

The curve really starts to shoot up at 30. In the UK, the average age of a first time mother is 30 years. It is well known that the fertility rate in women decreases after the age of 30 and drops rapidly after 35. Older mothers are likely to find it harder to have a baby and if they do, then they run a higher risk of chromosomal defects. Given the possibilities of all these negative consequences, the increase in the average age is a bit disturbing. It seems like there is a hidden cost to more women working and for longer.

Why is it that women are waiting longer before having their first born despite the risks? Most of my hypotheses (of which more than one, or none, may be true) have to do with women working:

  • Having invested significantly in an education, a greater number of women are entering the workforce, with the desire to be financially independent.
  • There is greater financial pressure in families for women to work.
  • The policies of workplaces in these countries are not favourable to childbirth. I can see this being true in the US, but I doubt it holds for Western European countries, which I know have policies favorable to childbirth.

One source of further information is the following map, showing the absolute increase, in years, in the age of a first time mother, over the last 40 years, state by state in the US:

This number is highest in the North Eastern states of NY, NJ, MA, CT, etc. The intensity of the colors in the map above correlates well with population density, and with economic activity in general (meaning more working women). Here are two more plots I came across in a US-based study done by P&G, which suggest that, at least in the US, employer policies may be responsible.

What would mothers value most from their employers?

How much guilt do mothers feel about work-life balance?

What is the biggest problem in the world?

I have been posing this question to friends and acquaintances (and to myself) in one form or another for a while now. The answers I have received have varied significantly. I am not the first to pose this question of course. Here is one of several online polls, posing the same question, with 700+ responses so far. Here are some others. Some of the responses I have received personally and gathered from various online postings like the ones above, in no particular order include:

Environmental change; Poverty, hunger, clean drinking water; The P vs NP problem; Ego; War; Communication between people; Lack of tolerance; Jobs, the economy; Ignorance; Fear; Greed; Lack of genuine love, hatred; Religion; Racism; Moral decline; Energy shortage; Sin; Drugs; Terrorism; Apathy, lack of empathy; Anger; Pollution; Love for money; Politics; Forgetfulness of God; Overpopulation, limited world resources; Toxic waste; Consumerism; Death; Selfishness; HIV; All -isms: nationalism, sexism, racism...; Cancer; Envy.

One of the problems with the way the question is posed above is that it does not specify what 'problem' and 'biggest' mean.

Define problem. We will define problem as 'That which brings suffering to humans'.

Define biggest.  Biggest could mean 'one that affects the largest number of people', 'the scientific problem that would create the biggest impact if solved', or 'one with the greatest economic impact', etc. I am interested in a specific version of this question, in which 'biggest' means 'most fundamental', i.e. one which can be said to be a root cause of many other problems.

Causal structure. A natural question to pose in order to move in the direction of getting an answer to my version of 'biggest problem' is: how many degrees of freedom are really present in the above responses (and what are they)? That is, are they all independent problems, or do they stem from a relatively small set (1-2) of root causes (with the others being effects)? For example, lack of tolerance and energy shortage can be said to be causes of war. It is also clear that not all the problems listed above are at the same level of generality -- some seem intuitively more abstract or fundamental than others. For example, war seems more in the realm of effects or symptoms, compared to say anger, fear or greed. In other words, even though they are all problems, some of the items in the list above are really effects rather than causes, and I am interested in the causes. To restate the question properly:

What is the true causal structure of the world problems?

Here is a small toy example of what I mean by causal structure:

An arrow from A to B indicates 'A causes B'. In the above example, energy shortage is stated to be a cause of war, and lack of tolerance is also stated as a cause of war. Also, once energy shortage is taken into account as a cause of war, war is not directly caused by overpopulation or consumerism. In other words, overpopulation and consumerism do lead to war, but only through energy shortage.

One correct answer. What strikes me most about the restated question above is that there must exist a definite answer to it. That is, there is an objective reality associated with the question. The causal structure is not a matter of subjective opinion; there is one true structure of cause and effect. I am not claiming that the number of independent root causes at the very top of the causal structure is 1 (though perhaps this is the case). All I am saying is that there is one definite causal structure. The 'one correct answer' aspect is interesting because, while it is arduous to build a causal structure, checking whether a proposed structure makes sense should be much easier.

I am looking for this causal structure. I think that gaining an understanding of the causal structure can be more insightful than an understanding of each of the problems in isolation [1]. If you think you have a causal structure for even part of the list of problems above, please write to me or leave me a comment. If you contact me with a proposed causal structure, please use the following format:

Cause1 -> Effect1
Cause2 -> Effect2

and so on, with one cause-effect pair per line. For the above toy example, this would be:

Overpopulation -> Energy shortage
Consumerism -> Energy shortage
Energy shortage -> War
Lack of tolerance -> War

Think of this as a jigsaw puzzle in which the problems are the blocks (feel free to pick whatever set of problems you want, from the above list or otherwise; the more complete the set, the better), and one has access to as many arrows as needed (the fewer the arrows, the better).
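If it helps, here is a minimal sketch of how a submitted causal structure could be encoded and sanity-checked; the edge list is just the toy example above, and networkx is one convenient library choice, not a requirement.

[sourcecode language="python"]
# Sketch: encode a proposed causal structure as a directed graph and run
# basic sanity checks (e.g. no cycles). The edges are the toy example above.
import networkx as nx

edges = [
    ("Overpopulation", "Energy shortage"),
    ("Consumerism", "Energy shortage"),
    ("Energy shortage", "War"),
    ("Lack of tolerance", "War"),
]

g = nx.DiGraph(edges)
assert nx.is_directed_acyclic_graph(g), "a causal structure should not contain cycles"

# Root causes are the nodes with no incoming arrows.
roots = [node for node in g.nodes if g.in_degree(node) == 0]
print("root causes:", roots)
[/sourcecode]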

_____

Notes

[1] I think this may be true in general. In middle school I recall homework and exam questions in various subjects asking us to fill in the blanks or match entries in column A with the entries in column B. I feel explaining the causal structure between a set of things would make a very instructive exercise in school because it would force a student to think.

In the eye of the Beholder

Why is the picture on the right more appealing than the one on the left?

What is it that we find more interesting about the picture on the right, compared to the one on the left? The picture on the left contains more information. So we are certainly not looking for more information. One might say we don't know how to interpret the image on the left as anything familiar, even though we recognize it as television static. A more precise answer is given by Jurgen Schmidhuber, who argues convincingly that:

Artists (and observers of art) get rewarded for making (and observing) novel patterns: data that is neither arbitrary (like incompressible random white noise) nor regular in an already known way, but regular in a way that is new with respect to the observer's current knowledge, yet learnable (that is, after learning, fewer bits are needed to encode the data).

This explains the pictures on top. The picture on the left is not compressible because it is a matrix of uniformly random 0/1 pixels. The Monet on the right evokes familiar feelings, and yet adds something new. I think what Schmidhuber is saying is that the amount of compressibility should neither be too little, nor too much. If something is not very compressible, then it is too unfamiliar. If something is too compressible, then it is basically boring. In other words, the pleasure derived first increases and then decreases with the compressibility, not unlike this binary entropy curve.
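(For reference, the binary entropy curve mentioned above is $latex H(p) = -p \log_2 p - (1-p) \log_2 (1-p)$: it rises from 0 at $latex p = 0$, peaks at $latex p = 1/2$, and falls back to 0 at $latex p = 1$, the same rise-and-fall shape being suggested here for pleasure as a function of compressibility.)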

Let us ask the same question again for the following pair of images (you have to pick one over the other):

My guess is that most people will find the image on the right more appealing (it is for me at least). Please drop me a comment with a reason if you differ. When I look at the image on the right, it feels a little more familiar, there are some experiences in my mind that I can relate to the image - for example looking straight up at the sky through a canopy of trees (white = sky, black=tree leaves), or a splatter of semisolid food in the kitchen.

In order for an object to be appealing, the beholder must have some side information, or familiarity with the object, beforehand. I learnt this lesson the hard way. About 2 years ago, I gave a talk at a premier research institution in the New York area. Even though I had received compliments when I'd given this talk at other venues, to my surprise, this time the audience almost slept through my talk. I learnt later that I had made the following mistake: in the abstract I'd sent to the talk's organizer, I had failed to signal that my work would likely appeal to an audience of information theorists and signal processing researchers. My audience had ended up being a bunch of systems researchers. The reason they dozed through my talk was that they had just a bit less than the required background to connect the dots I was showing them.

It is the same with cultural side information or context -- the familiar portion of the object allows the observer to latch on. The extra portion is the fun. Without the familiar, there is nothing to latch on to. The following phrases suddenly take on a precise quantifiable meaning:

  • "Beauty lies in the eyes of the beholder": the beholder carries a codebook that allows her to compress the object she is observing. Each beholder has a different codebook, and this explains 'subjective taste'.
  • "Ahead of its time": Something is ahead of its time if it is very good but does not have enough of a familiar portion to it, to be appreciated by the majority of observers.

I can think of lots of examples of art forms that deliberately incorporate partial familiarity into them -- e.g. music remixes, Bollywood story lines. Even classical music first establishes a base pattern and then builds on top of it. In this TED talk, Kirby Ferguson argues that all successful creative activity is a type of remix, meaning that it builds upon something familiar.

Takeaways:

  1. When writing a paper or giving a talk, always make sure the audience has something familiar to latch on to first. Otherwise, even a breakthrough result will appear uninteresting.
  2. Ditto for telling a story or a joke, DJing music at a party, or building a consumer facing startup. Need something familiar to latch on to.
  3. In some situations it may be possible to gauge the codebook of the audience (e.g. having a dinner party conversation with a person you just met), to make sure you seem neither too familiar, nor too obscure.

Climate change and The Burglar's Stopping Problem

The climate change hypothesis is that global changes in climate, leading to a significantly higher number of severe weather events, are predominantly man-made, and in particular that the release of greenhouse gases such as carbon dioxide into the atmosphere is a leading cause. After conveniently escaping the national spotlight in the US during the presidential campaigns, climate change has once again appeared in the news, thanks to Hurricane Sandy. Munich-Re, the reinsurance giant, released a report, somewhat presciently, on Oct 17, that says:

Nowhere in the world is the rising number of natural catastrophes more evident than in North America. The study shows a nearly quintupled number of weather-related loss events in North America for the past three decades, compared with an increase factor of 4 in Asia, 2.5 in Africa, 2 in Europe and 1.5 in South America.

Unambiguously proving that man-made climate change has a role to play in a specific event such as Sandy is more of an ideological debate than a statistical exercise. And so there are many, many people who can boldly claim that man-made climate change is fiction.

A few industrialized nations are responsible for the bulk of CO2 emissions.

Some of these nations have refused to ratify the Kyoto Protocol, which calls for reduction in CO2 emissions. No points for guessing which colors in the map below denote countries that have not ratified the treaty.

(Brown = No intention to ratify. Red = Countries which have withdrawn from the Protocol. Source: Wikipedia)

Most of the world apparently believes in man-made climate change. When will these other countries wake up? I can't help but think of the following stopping problem:

Taken from G. Haggstrom (1966) “Optimal stopping and experimental design”, Ann. Math. Statist. 37, 7-29.

A burglar contemplates a series of burglaries. He may accumulate his larcenous earnings as long as he is not caught, but if he is caught during a burglary, he loses everything including his initial fortune, if any, and he is forced to retire. He wants to retire before he is caught. Assume that returns for each burglary are i.i.d. and independent of the event that he is caught, which is, on each trial, equally probable. He wants to retire with a maximum expected fortune. When should the burglar stop?
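A back-of-the-envelope sketch of the answer's shape, in my notation rather than Haggstrom's: suppose the burglar is caught on any given attempt with probability $latex q$ and the mean return per burglary is $latex \mu$. If he plans $latex n$ burglaries, his expected final fortune is $latex (1-q)^n (F_0 + n \mu)$, where $latex F_0$ is his initial fortune. Comparing $latex n+1$ burglaries against $latex n$, it pays to continue only while $latex q (F_0 + n\mu) \leq (1-q)\mu$, so the burglar should retire once his accumulated fortune exceeds roughly $latex (1-q)\mu / q$.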

Traffic Lights: Regulation vs. free-markets

In the aftermath of Hurricane Sandy, many parts of the NY/NJ area have sustained power outages, and as a result, traffic lights in these areas are not functional. This requires drivers to treat a traffic junction as a multi-way stop sign. This got me thinking: what if, in place of traffic lights, we had just stop signs everywhere, and the rule was that the next car to go is the car at the head of the longest queue? I believe this is an optimal scheduling policy in a certain sense (it provides an optimal throughput x delay product -- that is, for a given average delay at the intersection, it would provide the highest rate of cars going through [1]). In this policy, each driver is trusted to follow the scheduling policy faithfully. For argument's sake, I am ignoring (1) the time spent by each driver having to figure out which queue is the longest at each step, (2) how the driver at the head of each queue gets information about the length of each queue, and (3) the loss in efficiency incurred by slowing down and starting.

Compared to this self-enforced scheduling policy, traffic lights can be very suboptimal. You know this if you have ever stood at a red light waiting for it to turn green while the street with the green signal has no traffic. Why then do we have traffic lights? The problem is that under the self-enforcing scheduling policy, there will be some drivers who free-load, i.e. they will not obey the rule and will simply take the turn for themselves, even if the turn belongs to someone else according to the scheduling rule. Further, when this happens, it will often result in collisions between the free-loader and the rightful owner of the turn. This is why traffic lights are necessary, even though they come at the expense of reduced overall efficiency.
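Here is a toy simulation sketch of the 'serve the head of the longest queue' rule against a fixed-cycle light, under the same idealizations listed above. The arrival rates and the light's cycle length are arbitrary illustrative choices, not calibrated to real traffic.

[sourcecode language="python"]
# Toy comparison of two policies at a two-street intersection: serve the head
# of the longest queue vs. a fixed-cycle traffic light. One car crosses per
# time step; arrival rates and the cycle length are arbitrary choices.
import random

def simulate(policy, steps=100000, arrival_rates=(0.3, 0.1), cycle=20):
    queues = [0, 0]
    total_waiting = 0  # summed queue lengths; proportional to total delay (Little's law)
    for t in range(steps):
        for i, rate in enumerate(arrival_rates):
            if random.random() < rate:
                queues[i] += 1
        if policy == "longest-queue":
            served = max(range(2), key=lambda i: queues[i])
        else:  # fixed-cycle light: street 0 gets the first half of each cycle
            served = 0 if (t % cycle) < cycle // 2 else 1
        if queues[served] > 0:
            queues[served] -= 1
        total_waiting += sum(queues)
    return total_waiting / steps  # average number of cars waiting

random.seed(0)
print("avg cars waiting, longest-queue rule:", simulate("longest-queue"))
print("avg cars waiting, fixed-cycle light: ", simulate("fixed-light"))
[/sourcecode]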

There is a nice lesson embedded here that speaks to the need for government regulation by way of analogy: regulation is necessary to enforce fairness and safety by preventing freeloaders and accidents, even though a free market might provide higher overall benefit if everyone was guaranteed to behave properly. Therefore regulation is the price we must pay, in the form of reduced overall benefit, to counter the fact that not all market participants behave according to the rules when left to themselves.

EDIT 1: The loss in overall utility when all participants are allowed to act selfishly, compared to the state where each participant acts for the overall good of the set of all participants, is called the price of anarchy. This is different from (but related to) the loss in overall utility from the imposition of regulations. A simple 2-player prisoner's dilemma can exhibit the price of anarchy: all participants are worse off if allowed to act selfishly, compared to the overall optimum for the 2 players. In the traffic light example, when players act selfishly, they create unfairness and also end up endangering everyone (including themselves, but perhaps they don't realize this bit). Hence the utility derived by each participant is lower, compared to if they all cooperated perfectly.
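For concreteness, take a prisoner's dilemma with payoffs (3, 3) for mutual cooperation, (1, 1) for mutual defection, and (5, 0) / (0, 5) when exactly one player defects (numbers chosen purely for illustration). Defection dominates for each player, so selfish play lands both at mutual defection with a total utility of 2, while the cooperative optimum yields 6; the price of anarchy here is the gap of 4 units (or a ratio of 3, depending on how one prefers to measure it).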

EDIT 2: Regulation can be thought of simply as a mechanism designed to improve the utility received by players beyond what it would be in anarchy, by changing the (rules of the) game a little. Regulation typically doesn't take the system to the overall optimal (which corresponds to perfectly cooperating players in the original game) of the original game. The 'price of regulation' ( = utility of overall optimum - that achieved by regulation) should be less than the price of anarchy (= overall optimum - state achieved by anarchy). Modern day regulators need to be really good at mechanism design!

EDIT 3: Perfect cooperation can be unstable against defection by free loaders [2], because the utility a player derives by unilaterally defecting is greater than that obtained by cooperating. If everyone is well aware of the risk of an accident upon defecting, then this can serve as a disincentive, because the utility from defecting, after factoring in the probability of an accident, may no longer make defecting worthwhile. This suggests that simply increasing awareness of the risks that misbehavior imposes on the misbehaving player might improve the overall equilibrium a bit. Of course, this requires that the defector bear extra personal risk.

___

[1] I know this because it holds true for scheduling packet transmissions in a class of communication networks [citation required].

[2] I experienced free loaders first hand during the last few days after Sandy in 2 different contexts: people going out of turn at road intersections, and people trying to break into the long line at a gas station.

Non-linearities in Life

Conventional wisdom says that the marginal gains accrued per unit increase in effort are diminishing. In some cases this does not hold, and the gains seem to be super-linear, rather than sublinear, in the amount of effort. Consider for a moment that you are driving your car, let's call it Car 1, down a long straight road that has a series of traffic lights. A second car, let's call it Car 2, is driving near you at a speed that is only very slightly greater than yours. The other car is inching ahead of your car very slowly. At time t, the distance by which Car 2 is ahead of you is d(t).

As you near the next traffic signal, the light turns amber and you begin to slow down. But the driver of the other car doesn't feel the need to slow down, and instead makes it through the light without stopping, while you wait at the traffic signal. The gap between the two cars has suddenly increased. If the two cars were to maintain the same small difference in speeds throughout the road, slowing and stopping only at traffic lights, then at time t, the distance d(t) between them will be a non-linear function, quite different from the linear function that it would have been had there been no traffic lights. What do you think d(t) would look like?

It's not hard to reason out the shape of d(t). At any given time, either both cars are moving, in which case they have a relative velocity of v2-v1, or Car 1 is stopped at a signal (relative velocity of v2), or Car 2 is stopped at a signal (relative velocity of -v1). So the plot of d(t) versus t would be piecewise linear, with one of 3 possible slopes in any segment. In the figure below, each segment is labelled with which car(s) is moving during that segment.

Over a long enough time t, d(t) will be significantly above the dotted line, i.e. the value of d(t) that one would expect if there were no traffic lights. Therefore, in the long term, Car 2, which started out with a small velocity advantage, ends up far, far ahead of Car 1, thanks to non-linearities introduced by the traffic lights.
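For anyone who wants to generate d(t) numerically, here is a quick sketch; the speeds, light spacing, and red/green durations are arbitrary illustrative numbers.

[sourcecode language="python"]
# Sketch: distance d(t) between two cars with slightly different speeds on a
# road with evenly spaced, synchronized lights. All numbers are arbitrary.
v1, v2 = 19.0, 20.0                        # m/s; Car 2 is slightly faster
spacing, green, red = 500.0, 40.0, 30.0    # a light every 500 m; 40 s green, 30 s red
dt, T = 0.1, 600.0

def stopped_at_red(x, t):
    # True if the next light ahead of position x is red and within stopping range.
    next_light = (int(x // spacing) + 1) * spacing
    phase = t % (green + red)
    return phase >= green and (next_light - x) < 5.0

x1 = x2 = 0.0
d = []
t = 0.0
while t < T:
    if not stopped_at_red(x1, t):
        x1 += v1 * dt
    if not stopped_at_red(x2, t):
        x2 += v2 * dt
    d.append(x2 - x1)
    t += dt
# d now holds samples of d(t); plotting it shows the piecewise-linear,
# super-linear growth described above.
[/sourcecode]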

I have often mused about the similarity between the scenario above and other aspects of life, such as career growth. Just like the traffic lights, there are a number of thresholds in real life, e.g. cut-off marks for the A+ grade in exams, a threshold for being accepted at a prestigious university or in a job interview, or the difference between getting and not getting an award. Two individuals who differ by only a small amount will, over the course of a long time interval, find themselves on different sides of these thresholds often enough. Thus, small differences in skill level and/or determination between very similar workers get amplified, leading to drastic differences in their achievements over a lifetime. Natural processes around us, including interactions between humans, are highly non-linear, and my suspicion is that humans do not accurately perceive the extent of the non-linearities. The fact that long term benefit can be superlinear in effort is a significant realization, because it means that putting in just a little bit of extra effort -- e.g. an extra hour at work daily, an extra skill, extra effort in networking or maintaining relationships -- can have a disproportionately positive effect in the long term. I'll end with a passage from Richard Hamming's 1986 lecture at Bell Labs titled 'You and Your Research' that touches upon this:

.. Now for the matter of drive. You observe that most great scientists have tremendous drive. I worked for ten years with John Tukey at Bell Labs. He had tremendous drive. One day about three or four years after I joined, I discovered that John Tukey was slightly younger than I was. John was a genius and I clearly was not. Well I went storming into Bode's office and said, ``How can anybody my age know as much as John Tukey does?'' He leaned back in his chair, put his hands behind his head, grinned slightly, and said, ``You would be surprised Hamming, how much you would know if you worked as hard as he did that many years.'' I simply slunk out of the office!

What Bode was saying was this: "Knowledge and productivity are like compound interest.'' Given two people of approximately the same ability and one person who works ten percent more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity - it is very much like compound interest. I don't want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime. I took Bode's remark to heart; I spent a good deal more of my time for some years trying to work a bit harder and I found, in fact, I could get more work done.

One hour extra per day turns out to be equal to an extra ~1.5 months of 8-hour work days per year!
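(The arithmetic, counting a month as roughly 30 days: 1 extra hour/day x 365 days ~ 365 hours ~ 46 eight-hour work days ~ 1.5 months.)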

Stopping Problems

Optimal stopping problems frequently pop up in real life decision making. I have been collecting a bunch of stopping problems, with the aim of clustering similar problems together, and finding general solution strategies to specific flavors of problems. Here is my (growing) list:

  1. Hiring, Marriage, Parking, Home selling: You are to hire an employee from a pool of N candidates. Each candidate has a fixed integer as their rank, but the rank of a candidate becomes known only after the interview. After each interview, you must decide whether to hire that candidate or to interview the next one. On moving to the next candidate, you lose the opportunity to hire the one you just interviewed. If you hire none of the first N-1 candidates, you must hire the Nth. When should you stop interviewing? Other versions of the same problem: Marriage: Replace candidates with potential mates. House selling: The goal is to sell a house at maximum profit. Parking: You look for a parking spot so as to park your car nearest to the entrance of a supermarket. You can either choose an empty spot or pass up on it in the hope of getting a better one. Other versions: (i) N is potentially infinite, but there is a cost proportional to the amount of time you take to decide. (ii) In the house selling problem, instead of ranks, one observes a real number as the score of a candidate. The goal is to stop at the candidate with as high a score as possible.
  2. Bug finding, proof-reading: Review of a piece of code or writing has a certain cost per review $C. The number of bugs is a random number with some known distribution. An undetected bug costs $B. Each time a review is done, each bug present has an iid chance of being detected and fixed, with some known probability p. When to stop reviewing?
  3. Gaussian process excursions: You are given a Gaussian process X(n), n = 0, 1, 2, ... such that X(n) is Gaussian distributed with mean zero, and a known covariance function K(i,j) between X(i) and X(j). You are allowed to observe a maximum of N successive samples of X(n), and at each step decide whether to stop or continue. The reward upon stopping at $latex n_1 \leq N$ is $latex X(n_1)$. This problem generalizes problem #1, by allowing observations to be correlated rather than IID.
  4. A drunkard can walk either 1 step west or 1 step east at each time step, with equal probability. When must she stop, in order to maximize her expected distance from where she started? What if the number of steps is limited to N? Another version of the same problem: You have a shuffled deck of cards and you must draw cards from it, one at a time. A red card gives you $1 and a black card makes you $1 poorer. When should you stop picking cards in order to maximize profit?
  5. You are to toss an unbiased coin repeatedly. When you stop tossing, your reward is the fraction of tosses that came up heads. When do you stop tossing? What if you are allowed only N tosses?
  6. You are to throw an unbiased die a maximum of N times. When you stop throwing, your reward is the number on the upturned face of the die. How do you decide when to stop? This is a bit like the version of the Hiring problem in #1 above in which each candidate has a score rather than just a rank.
  7. Two players A and B play a game. Player A writes down two different integers chosen from the set {1, 2, ..., 100} on two separate pieces of paper and places them face downward on a table. Player B's task is to pick one of the pieces of paper, and either stop if she believes it is the larger of the two numbers, or pick the second one if she believes the second one is larger. What should be the strategy of player B? What should be the strategy of player A if her goal is to make player B lose?
  8. A discrete time stochastic process that produces iid random variables $latex X_1, X_2,... $ from a known distribution changes, at a certain time T, to a different distribution. The task is to detect the change as quickly as possible after it has occurred, assuming that the cost of detection is proportional to the time elapsed since the change and that there is a fixed cost to a false alarm.
  9. [One armed Bandit] As a doctor, you have two possible treatments for a certain disease. Treatment A is known to work with probability p_A, and treatment B has an unknown probability of working, but with some prior distribution over that probability. Given N patients to be treated sequentially, how do you decide which treatment to use, if the goal is to maximize the number of patients cured? This is called a 1-armed bandit problem because of the following equivalent problem: A casino slot-machine has two levers. Pulling a lever provides the player with either $1 or $0. The lever on the left is known to have a probability of p_left of providing $1 in winnings, while the lever on the right has an unknown probability. How do you go about picking levers if you have N tries? If you have infinitely many tries? These are stopping problems because, once it becomes clear from the trials of the unknown treatment B that sticking with the known treatment A is the better bet, no new information is gained and it remains optimal to continue using treatment A thenceforth. The problem therefore boils down to one in which one need only find a time at which to switch from treatment B to treatment A.

Finite number of steps

It turns out that stopping problems are non-trivial to solve when the number of tries is infinite (e.g. Question #2 above). But when the number of tries is finite, dynamic programming is often useful in arriving at an optimal strategy.

For a simple example of how DP can help find an optimal stopping strategy, let us take Question #6 with N=3. A finite horizon provides a way to reason about the problem at each step by working backwards. After having tossed the die 2 times, it makes sense to toss the die a 3rd time only if the expected outcome of the 3rd toss is better than the 2nd toss. Since tosses are IID, the expected outcome of the 3rd toss is 3.5, and therefore it makes sense to go into the 3rd toss only if the 2nd toss is a 1, 2 or 3, but not if it is a 4, 5, or 6. Applying the same logic, working backwards one stage: after having thrown the die the first time, it makes sense to toss a 2nd time only if the expected reward from continuing the game is higher than the present reward. The expected reward from going into the 2nd throw can be computed by considering two disjoint cases: the game stops with the 2nd throw (which happens when it shows a 4, 5 or 6), or the game continues into the 3rd throw. This gives: (3/6)((4+5+6)/3) + (3/6)(3.5) = 2.5 + 1.75 = 4.25. Therefore the 1st throw is good enough to end the game with only if it is > 4.25, that is, if it is a 5 or a 6.
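The same backward induction is easy to code up for a general horizon N; here is a small sketch, using exact fractions just to keep the arithmetic clean.

[sourcecode language="python"]
# Backward induction for the die game: with k throws still allowed, throw once
# and keep the face only if it beats the value of continuing optimally.
from fractions import Fraction

def die_game_values(N):
    v = Fraction(0)   # value of the game with 0 throws left
    values = []
    for _ in range(N):
        # One more throw allowed: keep the face if it beats the continuation value v.
        v = Fraction(sum(max(Fraction(face), v) for face in range(1, 7)), 6)
        values.append(v)
    return values  # values[k-1] = expected reward with k throws left

print([float(x) for x in die_game_values(3)])  # [3.5, 4.25, ~4.67]
[/sourcecode]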

Problem #1 has been shown to have a relatively simple solution, which was pretty surprising when I first came across it. The solution, for reasonably large values of N, is: observe the first ~37% of the candidates without stopping, and then pick the first one that beats all the ones before it. The intuition is that observing some of the candidates without stopping provides information about the quality of candidates, which provides a baseline for making a decision. Stopping too soon is suboptimal because you do not have sufficient information to make an informed decision. Stopping too late is suboptimal because you do not have much choice left. The ~37% ( = 1/e) rule is provably optimal and finds the best candidate about 37% of the time in repeated trials. A proof is readily available by googling.
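To see the ~37% figure empirically, here is a quick simulation sketch of the rule (the values of N and the number of trials are arbitrary choices):

[sourcecode language="python"]
# Simulate the classical secretary rule: skip roughly the first N/e candidates,
# then hire the first one who beats everyone seen so far.
import math
import random

def secretary_trial(N):
    ranks = list(range(N))          # 0 is the best candidate
    random.shuffle(ranks)
    k = int(N / math.e)             # observe-only phase
    best_seen = min(ranks[:k]) if k else N
    for candidate in ranks[k:]:
        if candidate < best_seen:
            return candidate == 0   # hired this one; success iff it is the best
    return False                    # forced to take the last candidate (not the best)

N, trials = 100, 100000
wins = sum(secretary_trial(N) for _ in range(trials))
print("success rate:", wins / trials)   # should come out close to 1/e ~ 0.37
[/sourcecode]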

Habit Formation, Part 2

This post builds upon an earlier post on modeling habit formation. To follow this post, please read that one first. Go on, I'll wait. The urn model in that post is not really a complete description of habit formation, because the limit distribution of the system (i.e., the fraction of white balls after a very large number of draws) is completely determined by the initial configuration of the urn (i.e., the number of white and black balls). In reality, habits are formed from repeated actions, and our actions are based on not just our inclinations but also the conscious choices that we make, which can sometimes go against our inclinations. Free will is missing from this model. There is no way to break a bad habit if one starts out with one.

Therefore, let us alter the basic urn model slightly to capture choice: At each step, before picking a ball, we explicitly declare which color we are interested in for that pick: a white ball ( = follow the desirable habit) or a black ball ( = follow the undesirable habit). Suppose we choose to aim for a white ball. A ball is then sampled from the urn at random, and if a white ball does indeed come up, then:

  1. A payoff of $W is received (We wanted white and we got white, so we are happy to the extent of $W).
  2. The white ball is placed back into the urn, along with an extra white ball (Reinforcement).

But if we decide to pick white and a black ball comes up, then there is no payoff, and the black ball is placed back in the urn without any extra ball. However, if we choose black, a black ball is guaranteed to show up; we get a payoff of #B, where # is a different currency from $, and the black ball is placed back along with an extra black. The extra black ball makes picking a white ball harder the next time a white is desired.

Consider the implications of this model:

  1. Suppose there are very few white balls compared to black. A white ball would rarely show up. With the decision to pick a white, most of the time there will be no payoff and no reinforcement. But with the decision to pick a black ball, there is a guaranteed payoff of #B units, at the cost of making it harder to pick a white later on.
  2. As the bad habit is reinforced, the chance of reverting to the good habit never dies away completely, but diminishes as the number of black balls grows in comparison to the whites. ('It's never too late, but it does get harder').
  3. The purpose of the different currencies is to model the fact that the pleasure obtained from following a bad habit is qualitatively different from the one obtained from following a good one. Since most bad habits are based on short-term pleasure at the expense of long term benefit (e.g. cigarette smoking, binge eating/drinking, putting off exercise), currency # may correspond to lots of short term kicks, while $ may correspond to long term well being.

Short-term thrills

I believe human behavior is fundamentally driven by a desire to be happy. However, different individuals can have very different ideas about what will make them happy. Since most bad habits result from a focus on short term happiness, at the expense of long term happiness, we would do well to make the following tweak: # and $ can be added to give a total payoff, but each unit of # currency is worth only $latex \epsilon < 1$ unit, 1 iteration after it was earned. The other currency, $, however, does not decay with time. The question is what behavior emerges when the objective is simply maximization of total payoff? It is intuitive that the answer will depend upon whether the game has a finite horizon or an infinite horizon. Since players have finite lifetimes, it is appropriate to use a finite time horizon, but since a person does not know when he/she will die, the horizon must be a life expectancy with some uncertainty (variance). In other words, nobody knows when they will die, but everybody knows they will die. The only time it makes logical sense to pick a black ball is if the player believes his remaining lifetime is so small that the expected cumulative gain from deciding to pick whites is smaller than the expected cumulative gain (including the decay effect) from picking blacks. Of course this depends upon how he spent his past -- i.e. whether or not he built up a high fraction of whites.

Breaking a bad habit

Suppose a person starts out with a predominant inclination towards an undesirable habit and wishes to switch to the good habit. It is clear that if she always chooses to aim for the white ball, then she will eventually succeed in building up a larger number of white balls and a good $ balance. But she will have to go through many tries with no payoff before that happens. In practice, this requires determination, and she may be tempted to pick the black ball because it is so easily available.

Suppose $latex D \in [0,1]$ is the fraction of times that she will decide to aim for a white ball, i.e. the player's determination. It is intuitive that a larger value of $latex D$ would help her switch to the good habit in fewer iterations. It would be interesting to see if there is a threshold value of $latex D$ below which the bad habit is never broken. In particular, I expect there to be a threshold value $latex D(w, N-w)$, below which the bad habit is never broken, w.p. 1, where $latex w$ is the number of whites in the urn and $latex N-w$ is the number of blacks.
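Here is a minimal simulation sketch for exploring this threshold question. It tracks only the ball counts; the payoffs and the epsilon decay from the previous section are left out, since they do not affect whether the urn eventually turns mostly white.

[sourcecode language="python"]
# Sketch: how does the long-run fraction of white balls depend on the
# determination D? Each step: with probability D aim for white (reinforced
# only if a white is actually drawn); otherwise pick black, which is always
# reinforced. Initial counts and the number of steps are arbitrary choices.
import random

def simulate(D, whites=1, blacks=10, steps=100000):
    for _ in range(steps):
        if random.random() < D:
            # Aim for white: reinforcement only if a white ball comes up.
            if random.random() < whites / (whites + blacks):
                whites += 1
        else:
            # Give in: a black ball is guaranteed and reinforced.
            blacks += 1
    return whites / (whites + blacks)

random.seed(1)
for D in (0.2, 0.5, 0.8, 0.95, 1.0):
    print("D = %.2f: final fraction of whites ~ %.2f" % (D, simulate(D)))
[/sourcecode]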

Game over when wealth < 1 unit

Now let us further suppose that, in the course of playing the game, a player can never have less than 1 currency unit. In other words, the game stops if the player's total currency falls to zero. All other rules are the same: 1 unit earned from a white ball, 1 unit from a black ball, but units earned from black balls decay to $latex \epsilon$ of their value every time slot. Deciding to pick a black guarantees a black, and an extra black goes in. Deciding to pick a white provides a white with a probability proportional to the fraction of whites in the urn, and an extra white goes in. With the new rule stipulating 'game over' when the player's wealth falls below 1, the player is incentivized to pick a black if she ever approaches 1 unit of wealth. This is meant to model the inclination of individuals with low self-worth and happiness levels to engage in activity that yields short term happiness (alcohol, drugs, binge eating, etc.) at the expense of long term prospects.

Prime Factorization Trees

A positive integer can be visualized as a tree, structured by its prime factors. I wrote a little program which takes a positive integer as input, and creates a tree layout that shows its prime factors visually. Here's the tree for 70:

The tree is drawn recursively, starting with a root node at the center of a circular layout. The degree of the root node is the largest prime factor. The remaining prime factors are then added (with multiplicity) in decreasing order, each factor growing the tree by one more level. Finally, the leaf nodes are displayed as filled black circles. As seen from the tree above, the prime factorization of 70 is 7 * 5 * 2.

Here is the layout for 700 = 7 * 5 * 5 * 2 * 2:

This method of displaying numbers has been proposed and implemented by others on the internet, with much prettier results. I couldn't resist quickly trying it for myself using a graph visualization library, since the library can do all the heavy lifting. I used Graphviz (developed at AT&T Labs), which I often use for visualizing network data. I also wanted to see whether the underlying tree structure revealed by the edges in the graph makes the prime factorization more evident.

Here are a few more trees:

243 = 3^5:

43 (a prime):

128 = 2^7:

121 = 11 * 11:

625 = 5^4:

2310 = 11 * 7 * 5 * 3 * 2:

Code gist (without the prime factorization implementation and the graph library):

[sourcecode language="python"]
import random
import sys

# Graphviz attributes: invisible 'support' nodes for the internal levels of the
# tree, filled black circles for the leaves.
whitenode = ' [fixedsize=True, width=0.1, height=0.1, color = white, fontcolor = white, shape = circle]'
blacknode = ' [color = black, style = filled, fontcolor = black, shape = circle]'

nodenames = set()

def createnode(x):
    # x == 1: support node (invisible), x == 0: real node (black leaf)
    name = random.random()
    while name in nodenames:             # make sure node names are unique
        name = random.random()
    nodenames.add(name)
    if x == 1:
        print('"' + str(name) + '"' + whitenode)
    else:
        print('"' + str(name) + '"' + blacknode)
    return '"' + str(name) + '"'

def createedge(x, y, t):
    # x and y are nodes; t is the type (0 = last stage of the tree, 1 = all other stages)
    if t == 1:
        print(x + '--' + y + ' [color="#DDDDDD"]')
    if t == 0:
        print(x + '--' + y + ' [color=grey]')
    return x + '--' + y

n = int(sys.argv[1])                     # the positive integer to visualize
pf = factorization(n)                    # keys are prime factors, values are their
                                         # multiplicities (factorization() is assumed
                                         # to be defined elsewhere)
factors = []
for fac in pf:
    for multiplicity in range(0, pf[fac]):
        factors.append(fac)
factors = sorted(factors, reverse=True)

print('graph G{')
print('overlap = false')

edgeset = set()
rootnode = createnode(1)
currentset = set()
currentset.add(rootnode)

for i in range(0, len(factors)):
    f = factors[i]
    nodetype = 1
    if i == len(factors) - 1:
        nodetype = 0                     # last factor: the new nodes are leaves
    newnodeset = set()
    for eachnode in currentset:
        for j in range(0, f):            # each current node gets f children
            newnode = createnode(nodetype)
            newnodeset.add(newnode)
            edgeset.add(createedge(eachnode, newnode, nodetype))
    currentset = newnodeset

print('}')
[/sourcecode]

For prime factorization, I used this implementation.
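The gist assumes a factorization(n) helper that returns a dict mapping each prime factor to its multiplicity. In case the linked implementation is not handy, a minimal trial-division version (mine, not the one linked above) would look something like this:

[sourcecode language="python"]
def factorization(n):
    """Return a dict mapping each prime factor of n to its multiplicity,
    e.g. factorization(700) == {2: 2, 5: 2, 7: 1}. Plain trial division."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                            # whatever remains is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors
[/sourcecode]

The script writes Graphviz source to standard output; saving it to a file (say tree.dot) and rendering with something like neato -Tpng tree.dot -o tree.png should give layouts similar to the ones above, since neato honors the overlap = false hint.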

Learning using Puzzles

Mathematical and logic puzzles are a great way to learn. I think they should be used to a greater extent in school and college education than they are, because they:

  1. Create an excitement in the mind of the student/solver.
  2. Force the solver to think about what method to use to attack the problem. This is much closer to real-world situations, where one is not told what area a problem belongs to. Tech companies recognize this when they use puzzles in job interviews as a way to observe how a candidate thinks.
  3. Over time, allow students to recognize patterns in problems and the best ways to attack them, so that when they are faced with a new, real-world problem, they have a good idea of what to try first.

I am certainly not the first one to note the educational value of puzzles. Martin Gardner, whose puzzles I have tremendously enjoyed, has himself advocated for puzzle-based learning.

Each puzzle is usually based on one (sometimes two, but very rarely more than two) fundamental area of math: probability, number theory, geometry, critical thinking, etc. For example, take the following puzzle (try these puzzles before reading the answers at the end of the post):

[1] By moving just one of the digits in the following equation, make the equation true:

$latex 62 - 63 = 1$

It should be clear that this puzzle is part math and part logic.

Often there is more than one way to arrive at the solution -- e.g. using a geometric visualization for an algebraic problem in 2 or 3 variables. For example, take the following puzzle:

[2] A stick is broken up into 3 parts at random. What is the probability that the three parts form a triangle?

One may choose to solve the above problem using algebra, but it can be visualized geometrically too.

Speaking of geometric methods, there is also something to be said for solving puzzles inside one's head, rather than using pen and paper. I feel it adds to the thrill because it can be more challenging. It is also fun to visualize objects and their relationships in mathematical terms, completely inside one's head. The following is an example of a pretty hard puzzle, which was solved entirely inside one mathematician's head:

[3] Peter is given the product of two integers, both of which are between 2 and 99, inclusive. Sally is given only their sum. The following conversation takes place between them:

P: I cannot figure out the two numbers from the product alone.

S: I already knew that you couldn't have figured them out.

P: Oh, now that you say that, I know what the two numbers are.

S: I see. Now that you say you know what they are, I also know what they are.

What are the two integers?

___

ANSWERS:

[1] $latex 2^6 - 63 = 1$

[2] Without loss of generality, assume the stick has unit length. Since the three lengths add up to 1 and each lies in $latex [0,1]$, they can be treated as a random point $latex (x,y,z)$ in 3D space lying on the 2-dimensional triangular simplex with vertices at (1,0,0), (0,1,0) and (0,0,1). The problem then boils down to determining what fraction of this triangle's area corresponds to tuples that can form a triangle. The triangle inequality requires $latex x + y > z$, and similarly for the other two pairs; since $latex x + y + z = 1$, these conditions are equivalent to $latex x < \frac{1}{2}$, $latex y < \frac{1}{2}$ and $latex z < \frac{1}{2}$. A little mental visualization reveals that the feasible region is the smaller equilateral triangle whose vertices are the midpoints of the simplex's edges, which has one-fourth of its area. The answer therefore must be $latex \frac{1}{4}$.
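As a quick sanity check (mine, not part of the original answer), one can break the stick at two uniformly random points and count how often all three pieces are shorter than 1/2; the estimate should come out close to 0.25:

[sourcecode language="python"]
import random

def triangle_fraction(trials=1000000):
    """Break a unit stick at two uniform random points; the three pieces form a
    triangle iff every piece is shorter than 1/2. Returns the observed fraction."""
    hits = 0
    for _ in range(trials):
        u, v = sorted((random.random(), random.random()))
        pieces = (u, v - u, 1 - v)
        if max(pieces) < 0.5:
            hits += 1
    return hits / trials

print(triangle_fraction())   # should print something close to 0.25
[/sourcecode]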

[3] The answer can be found in this PDF of a note written by EW Dijkstra, who solved the problem in his head.

Document as a Database

The problem with conventional text documents is that you cannot easily query them. Suppose I have a 20-page piece of text explaining various things, perhaps with mathematical equations, plots, etc. In order to figure something out, I need to patiently read through the document until I have my answer. When time is short, and usually time is short, I will skip some parts in a hurry to get to what I need. The skipped parts might contain intermediate facts or definitions, making it impossible for me to understand what I am looking for even when I do find it. This happens because information is presented in the document in an order decided by the author. Perhaps this is the best order. But for a specific query (e.g. what is the main result of this paper?), it would be nice to have a way to do better than patiently reading the whole document. The best we have right now is Ctrl+F on words and phrases, which just doesn't cut it for queries of reasonable complexity (e.g. 'Why did the GDP of Germany double so quickly?' while reading an article on the history of the German economy).

The computer doesn't understand the document :-(

It would be pretty useful to somehow model documents as databases that can support complex queries, especially when the documents are large, like books. Perhaps I am asking for the holy grail of NLP? Surely, if IBM can build a Jeopardy-winning Watson, it should be possible to build a question-answering service for individual documents, with outside information permitted where it helps.