That’s where the heading of this piece comes from: in trading jargon, when someone holds a ‘long gamma’ position, any movement in price is good news. In other words, a long gamma position is one that benefits from volatility and non-linearity. Excessive planning and smoothing are attempts to force something predominantly non-linear into an easy linear graph, a simplification that distorts dangerously.
Taleb thus argues that depriving political and economic systems of natural volatility (non-linearity), that is, making things artificially smooth, harms them by leaving them unprepared when the big one strikes. Take the turkey example. A turkey fattened for 1,000 days imagines that life and the butcher love it. The turkey, its friends and family have absolutely no reason, for 1,000 days, to doubt this. On the 1,001st day, the Black Swan strikes. The turkey’s most dangerous mistake was to believe that the absence of evidence of harm meant the absence of harm.
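The turkey’s inference problem can be sketched in a few lines. This is a minimal illustration, not Taleb’s own code; the daily-gain figures and the size of the day-1,001 shock are invented for the example:

```python
import random

random.seed(7)

# 1,000 days of feeding: every observation is positive, so any estimate
# built from the past (mean daily gain) looks reassuringly stable.
feedings = [random.gauss(100, 5) for _ in range(1000)]
mean_gain = sum(feedings) / len(feedings)

# The turkey's forecast for day 1,001 is just its past average.
forecast = mean_gain

# Day 1,001: the Black Swan. Nothing in the sample anticipated it.
day_1001 = -10_000  # hypothetical payoff of the butcher's visit

print(f"forecast: {forecast:+.1f}, reality: {day_1001:+}")
```

Every statistic the turkey could compute from its history points the wrong way; the event that matters contributes zero observations until it happens.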
Monthly Archives: August 2013
Can we define as a necessary condition for charlatan as someone who gives advice but doesn’t have downside from it…
Can we define, as a necessary condition for being a charlatan, someone who gives advice but has no downside from it, no skin in the game?
We can’t call someone a charlatan merely for “being wrong”, as error is part of the scientific enterprise. The Popperian criterion of falsifiability, unfortunately, doesn’t work in a complex system. But skin in the game works well, as self-harm would eventually keep falsity in check.
via Can we define as a necessary condition… – Nassim Nicholas Taleb | Facebook.
The law of large numbers under fat tails (Micro Mooc)
(Explains some of the bullshit in social science)
https://www.youtube.com/watch?v=80ekenKK_jE
Law of Large Numbers and Fat Tails, Technical Note #3
Micro Mooc #3. The law of large numbers is too slow under fat tails. This is a simplified (but technical) presentation of a segment of “Probability and Risk …
via The law of large number under fat tails (Micro… – Nassim Nicholas Taleb.
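The point of the video — that the law of large numbers works much more slowly under fat tails — can be checked with a quick Monte Carlo. This is an illustrative sketch, not from the lecture; the tail index 1.2 and sample sizes are arbitrary choices:

```python
import random
import statistics

random.seed(42)

def pareto(alpha):
    # Inverse-CDF sampling: x = u^(-1/alpha), support [1, inf), tail index alpha
    return random.random() ** (-1.0 / alpha)

def spread_of_sample_means(draw, n, trials=200):
    """Std. dev. of the sample mean across independent experiments of size n."""
    means = [statistics.fmean(draw() for _ in range(n)) for _ in range(trials)]
    return statistics.stdev(means)

n = 1000
thin = spread_of_sample_means(lambda: random.gauss(0, 1), n)
fat = spread_of_sample_means(lambda: pareto(1.2), n)  # finite mean, infinite variance

# At n = 1000 the Gaussian sample mean has settled down;
# the Pareto sample mean is still jumping around.
print(f"thin-tailed spread of means: {thin:.3f}")
print(f"fat-tailed  spread of means: {fat:.3f}")
```

For the Gaussian, the spread shrinks like 1/√n; for the Pareto with infinite variance, a single large draw keeps dragging the sample mean around long after n = 1,000.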
On the Difference between Binary Prediction and True Exposure With Implications for Forecasting Tournaments and Prediction Markets by Nassim Nicholas Taleb, Philip E. Tetlock :: SSRN
On the Difference between Binary Prediction and True Exposure With Implications for Forecasting Tournaments and Prediction Markets
Nassim Nicholas Taleb
NYU-Poly; Université Paris I Panthéon-Sorbonne – Centre d’Economie de la Sorbonne (CES)
Philip E. Tetlock
University of California, Berkeley – Organizational Behavior & Industrial Relations Group; University of Pennsylvania – Management Department
June 25, 2013
Abstract:
There are serious differences between predictions, bets, and exposures that have a yes/no type of payoff, the “binaries”, and those that have varying payoffs, which we call the “vanilla”. Real world exposures tend to belong to the vanilla category, and are poorly captured by binaries. Vanilla exposures are sensitive to Black Swan effects, model errors, and prediction problems, while the binaries are largely immune to them. The binaries are mathematically tractable, while the vanilla are much less so. Hedging vanilla exposures with binary bets can be disastrous — and because of the human tendency to engage in attribute substitution when confronted by difficult questions, decision-makers and researchers often confuse the vanilla for the binary.
Number of Pages in PDF File: 7
Keywords: Predictions, Risk, Decision, Judgment and Decision Making, Fat Tails
working papers series
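The abstract’s central claim — binaries are largely immune to tail events while vanilla exposures are not — can be demonstrated numerically. The fat-tailed distribution below is a hypothetical mixture (1% of draws have 20× the scale), not one used in the paper, and the strike of 3 is arbitrary:

```python
import random

random.seed(1)

def thin():
    return random.gauss(0, 1)

def fat():
    # Hypothetical fattened tail: same centre, but 1% of draws have 20x the scale
    return random.gauss(0, 20) if random.random() < 0.01 else random.gauss(0, 1)

K, N = 3.0, 200_000

def binary(draw):
    """Yes/no bet: pays 1 if X > K, regardless of by how much."""
    return sum(draw() > K for _ in range(N)) / N

def vanilla(draw):
    """True exposure: pays X - K when X > K, so magnitude matters."""
    return sum(max(draw() - K, 0.0) for _ in range(N)) / N

b_thin, b_fat = binary(thin), binary(fat)
v_thin, v_fat = vanilla(thin), vanilla(fat)

print(f"binary payoff:  {b_thin:.5f} -> {b_fat:.5f}")
print(f"vanilla payoff: {v_thin:.5f} -> {v_fat:.5f}")
```

Fattening the tail multiplies the binary payoff by a modest factor, but the vanilla payoff by a far larger one: the binary caps what a tail event can pay, while the vanilla is exposed to its full magnitude — which is why hedging a vanilla exposure with binary bets can be disastrous.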
Another Pinker statistical fallacy, which can teach students how NOT to look at risks and mix random variables…
Another Pinker statistical fallacy, which can teach students how NOT to look at risks and mix random variables of different tail properties, or confuse types of estimators. This afternoon, to kill time on a long flight, I decided to look for scientisto-journalistic fallacies, so I went to Steven Pinker’s Twitter account. I immediately found one there. Heuristic: go to Pinker. He promotes there a WSJ article to the effect that “terrorism kills far fewer people than falls from ladders”; the article was written by the war correspondent Ted Koppel and is very similar to Pinker’s Better Angels thesis.
Now let’s try a bullshit-detecting probabilistic reasoning.
A- Falls from ladders are thin-tailed, and the estimate based on past observations should hold for the next year with astonishing accuracy. They are subject to strong bounds, etc. It is “impossible” to have, say, more than 1% of a country’s population dying from falls from ladders in the same year; the chances are less than 1 in several trillion trillion trillion years. Hence the journalistic statement about the risk converges to the scientific statement.
B- Terrorism is fat-tailed. Your estimate from past data has monstrous errors. A record of the people who died in the last few years has very, very little predictive power for how many will die the next year, and is biased downward. One biological event can decimate the population. It may be “reasonable” to claim that terrorism is overhyped, that our liberty is more valuable, etc. I believe so. But the comparison here is a fallacy, and sloppy thinking is dangerous. Worse, Koppel compares terrorism today to terrorism 100 years ago, when a terrorist could inflict only very limited harm.
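The contrast between A and B can be made concrete with a toy simulation. The distributions and parameters below are invented for illustration (Gaussian yearly ladder deaths, a Pareto with tail index 1.1 for terrorism), not fitted to real data:

```python
import random

random.seed(3)

YEARS = 50

# Thin tail (falls from ladders): yearly deaths fluctuate mildly around a mean.
ladders = [random.gauss(300, 20) for _ in range(YEARS)]

# Fat tail (terrorism, hypothetical Pareto, tail index ~1.1):
# most years are quiet, but one event can dominate the whole record.
terror = [random.random() ** (-1 / 1.1) for _ in range(YEARS)]

def max_share(xs):
    """Share of the total contributed by the single worst year."""
    return max(xs) / sum(xs)

print(f"ladders: worst year = {max_share(ladders):.1%} of all deaths")
print(f"terror:  worst year = {max_share(terror):.1%} of all deaths")
```

For the thin-tailed series, no single year dominates, so the past average is a good forecast; for the fat-tailed one, a single year can account for most of the total, so the historical average before that year was a severe underestimate.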
via Another Pinker statistical fallacy, which can… – Nassim Nicholas Taleb.