As everyone by now knows, Nassim Nicholas Taleb is a Lebanese-American essayist, scholar, statistician, former trader and risk analyst whose books Fooled by Randomness, The Black Swan and Antifragile deal with problems of randomness, probability and uncertainty. The Sunday Times has rated The Black Swan as one of the 12 most influential books since World War II. Taleb now serves as a professor at several universities, including as distinguished professor of risk engineering at New York University’s Tandon School of Engineering.

Taleb is back with another book on a theme that is close to his heart and which he has been expounding for the past few years in his articles and Twitter comments: skin in the game (SITG). The idea is simple. If you grab the upside (of your actions or decisions), you should bear the downside too. His argument is that the chances of informed action and prediction improve if we better comprehend the multiple causes of ignorance in everything from health and safety to politics and sports. One of those causes is that, in an opaque system fraught with uncertainty, one set of players gets the opportunity to hide risk: to benefit from the upside when things go well but not pay for the downside (Black Swan events) when their luck runs out.
Taleb has made a natural progression from randomness to extreme events and now to what he calls hidden asymmetries. Economic literature focuses on incentives as encouragement or deterrent, but not on the disincentives that would weed out of the system the incompetent and nefarious risk-takers who inflict harm on others. An unskilled forecaster who is shielded from the financially harmful consequences of his wrong forecasts will continue to contribute to the build-up of risk in the system. But if he faces a disincentive, if the forecaster pays for his wrong forecasts, he would eventually be removed from forecasting. That disincentive is, simply, skin in the game.
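The payoff asymmetry is easy to see in a toy simulation. The sketch below is not from the book; the bet, the 20 per cent bonus share and the other numbers are illustrative assumptions. It models a manager who repeatedly takes a bet with a small, frequent gain and a rare, large loss: without skin in the game he pockets a share of every gain and pays nothing on losses; with it, losses come out of his own pocket and he is soon removed.

```python
import random

random.seed(7)

def risky_bet():
    # A strategy that looks good most of the time but hides tail risk:
    # +1 with 95% probability, -30 with 5% probability (negative expected value).
    return 1.0 if random.random() < 0.95 else -30.0

def simulate(skin_in_the_game, rounds=1000, share=0.2):
    # The manager starts with a small personal stake; clients bear the bets.
    manager, clients = 10.0, 0.0
    for n in range(1, rounds + 1):
        outcome = risky_bet()
        clients += outcome
        if outcome > 0:
            manager += share * outcome     # bonus on the upside
        elif skin_in_the_game:
            manager += share * outcome     # claw-back on the downside
        if manager <= 0:                   # wiped out: removed from the game
            return n, manager, clients
    return rounds, manager, clients

for sitg in (False, True):
    lasted, mgr, cl = simulate(sitg)
    print(f"SITG={sitg}: lasted {lasted} rounds, "
          f"manager {mgr:+.1f}, clients {cl:+.1f}")
```

In repeated runs, the manager without skin in the game typically ends the full thousand rounds richer while his clients bleed; the one who shares the losses is eliminated long before he can do comparable damage.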
You can have as good a risk management system as you want, but nothing beats an inherent, internal system of risk control exercised by participants who know that they will have to pay for their own mistakes.
This idea is as old as the hills. Taleb was born in what he archaically calls the Levant, a collection of cities on the eastern Mediterranean seaboard. The Levant is a short distance from the principal centres of several older civilisations (Sumerian, Babylonian, Ottoman, Persian, Greek and Roman), and Taleb, proud of this lineage, has always drawn his anecdotes and inspiration from these ‘ancients’. In this book, too, he assembles a lot of historical evidence to show that the ancients were fully aware of this incentive to hide risks and countered it with very simple but potent heuristics, or rules of thumb.
About 3,800 years ago, Hammurabi’s code specified that if a builder builds a house and the house collapses and kills the owner, the builder shall be put to death: the best risk management rule ever. What the ancients understood very well was that the builder will always know more about the risks than the buyer, and can hide sources of fragility and improve his profitability by cutting corners. The foundation is the best place to hide such things. The builder can also fool the inspector, for the person hiding risk has a large informational advantage over the one who has to find it. Taleb also notes that Hammurabi’s law is not necessarily literal: damages can be ‘converted’ into monetary compensation.
Over the centuries, the idea of SITG has appeared in many forms. The rule under Lex Talionis is “An eye for an eye, a tooth for a tooth.” The 15th Law of Holiness and Justice is “Love your neighbour as yourself.” Isocrates, a Greek rhetorician, and Hillel the Elder, a famous Jewish religious leader and scholar, both said: “Do not do unto others what you would not have them do unto you.” Philosopher Immanuel Kant’s Formula of the Universal Law is: “Act only in accordance with that maxim through which you can at the same time will that it become a universal law.”
What are the modern-day applications of SITG? The best place to apply it is to policy-makers and politicians (though it is unclear who would apply it, since laws and policies are framed by them). One of the most important points Taleb makes is that this idea is not ‘scalable’. You cannot make the players in a national government have SITG, and international organisations are even further removed from it. SITG works best in small, local, personal and decentralised systems, where participants are typically kept in check by the shame of harming others with their mistakes. In a large, national, multinational, anonymous and centralised system, the sources of error are less visible and pinpointing accountability is not easy. When such a system fails, everybody except the culprit ends up paying for it, leading to national and international indebtedness loaded onto future generations, or ‘austerity’ imposed, again, by large global organisations run by those without SITG, such as the World Bank and the International Monetary Fund. This naturally means that Taleb is dead against ‘big government’.

After policy-makers come corporate managers, especially the finance people, the ‘agents of capitalism’, for whom Taleb has special scorn. The manager who loses money does not return his bonus, let alone share in the losses. Taleb has never concealed his hatred for economists, quantitative modellers and policy wonks. These people face no disincentive and are never penalised for their errors. So long as they please the journal editors or produce cosmetically sound ‘scientific’ papers, their work is considered fine. So we end up using models, such as portfolio theory and similar methods, without even a remote empirical or mathematical justification. The solution is to prevent economists from teaching practitioners, simply because there is no mechanism to remove them from the system when they cause risks that harm others. Again, this brings us back to decentralisation: a system where policy is decided at the local level by smaller units, with no need for economists.
Then come the predictors, who rarely pay for their wrong predictions, even as the false precision of their quantification encourages people to take more risks. The solution is to ask, and take into account only, what the predictor has done (what he holds in his own portfolio, if he is a stock adviser) or is committed to doing in the future. Finally, to deal with warmongers, Taleb approvingly quotes Ralph Nader, an American activist who fought many battles for the consumer, and to whom Taleb dedicates the book. Nader has proposed that those who vote in favour of war should subject themselves (or their own kin) to the draft. Beyond all ‘isms’ and theories of how institutions should be run, what we really want is accountability for decision-makers who influence our lives. That is what SITG offers. But there are huge practical problems in applying this approach in real life. Who will write the rules of SITG? And wouldn’t human beings, being what they are, find ways to game the system to save their own skin?
Most importantly, Taleb’s strong, combative, haughty and thin-skinned personality comes through on every page. When he is not delving into SITG theory, he is either mocking someone (academics like Steven Pinker, Thomas Piketty and Nobel laureate Richard Thaler, journalists, bureaucrats, social science departments, corporate managers, Monsanto and other GMO companies, large corporations, and so on) or praising himself. The number of people, disciplines and institutions he dismisses as charlatans and fools is astoundingly vast, and his rudeness is breathtaking.
This book is nowhere in the same league as Fooled by Randomness and The Black Swan. There are also sweeping generalisations (for example, about the change in public mood in the UK, India and the US) that just don’t ring true. All this is a pity, because the core SITG idea is relevant, though of limited applicability.