Thursday, February 7, 2013

A question for Nassim Taleb fans

I read an interesting article last night, detailing a public exchange between Daniel Kahneman and Nassim Taleb.
[E]ach man was asked to write a biography of seven words or less. Taleb described himself as: “Convexity. Mental probabilistic heuristics approach to uncertainty.” Kahneman apparently pleaded with the moderator to only use five words, which were: “Endlessly amused by people’s minds.” Not surprisingly these two autobiographies are descriptive of the two men’s bodies of work. Much of the discussion at this event, however, was not about making decisions under uncertainty, but a sort of tit for tat, with Kahneman asking probing questions and making pointed observations of Taleb. Little of the Nobel laureate’s [i.e. Kahneman's] work was discussed.
It would seem that Kahneman had Taleb on the back foot at various times during the exchange, pointing out (among other things) that the latter's framing of situations suffered from a clear "anchoring" bias. 

The above article also reminded me of a lingering question that I have about Taleb's work -- not least because it relates to the type of research that made Kahneman famous (i.e. the limits of heuristics in the face of statistical problems). Having failed to get any response to my query on Twitter, I'd like to try to flesh it out here.

Let me state up front that I have yet to read, in full, any of Taleb's books. (They are waiting patiently on my Kindle.) However, I have read several chapters from them and, moreover, a number of the articles that Taleb has penned in different media outlets -- for instance, this essay for Edge magazine, which seems to sum up his position nicely.

So, I'm reasonably confident that I know where Taleb is coming from. I should also say that I think some of his points are very well made, such as the "inverse problem of rare events" -- basically, that it is incredibly difficult to gauge the impact of extremely rare events precisely because they occur so infrequently. We lack the very observations that are needed to build up a decent idea of the probability distribution of their associated impact. As Taleb explains in the Edge essay: "If small probability events carry large impacts, and (at the same time) these small probability events are more difficult to compute from past data itself, then: our empirical knowledge about the potential contribution -- or role -- of rare events (probability × consequence) is inversely proportional to their impact."[*]
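
To see why this matters in practice, here is a minimal Monte Carlo sketch of that logic. To be clear, this is my own illustration rather than anything from Taleb's books, and the Pareto distribution and all parameter choices below are arbitrary:

```python
# A toy illustration (not Taleb's own model): estimating the contribution
# of tail events, P(X > t) * E[X | X > t], from finite samples of a
# heavy-tailed loss distribution. The distribution and all parameters
# are arbitrary choices made purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_obs, n_histories = 1_000, 2_000   # observations per "history", number of histories
alpha = 1.5                          # Pareto tail index (smaller alpha = heavier tail)

# Each row is one simulated history of n_obs losses from a Pareto(alpha)
# distribution with minimum 1 (numpy's pareto() is shifted, hence the +1).
losses = rng.pareto(alpha, size=(n_histories, n_obs)) + 1.0

for t in [2, 10, 50, 250]:
    # Empirical contribution of events beyond threshold t in each history:
    # the mean of X * 1{X > t}, i.e. P_hat(X > t) * E_hat[X | X > t].
    contrib = np.where(losses > t, losses, 0.0).mean(axis=1)
    rel_spread = contrib.std() / contrib.mean()
    print(f"threshold {t:>3}: estimate varies across histories by ~{rel_spread:.0%}")
```

For moderate thresholds the estimated contribution is fairly stable across simulated histories; for the most extreme thresholds it swings wildly. In other words, the bigger the potential impact, the less our data can tell us about it -- exactly the "inverse" relationship described in the quote above.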

My reading of Taleb also leads me to think that he more or less regards everyone as blind to "black swan" (low probability, high impact) events. If that is true, however, I'm wondering how he squares that notion with the consistent empirical finding that people tend to overestimate the likelihood of low probability, high impact events. (And vice versa for more common, low impact events.) Consider the following chart, for example, which was originally produced in a seminal study by Lichtenstein et al. (1978):

[Figure: Relationship between judged frequency and actual number of fatalities per year for 41 causes of death.]
What we see here is that people have a clear tendency to overstate -- by several orders of magnitude -- the relative likelihood of death arising from "unusual and sensational" causes (tornadoes, floods, etc.). The opposite is true for more mundane causes of death like degenerative disease (diabetes, stomach cancer, etc.).
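
To get a feel for the pattern in the chart, here is a toy numerical sketch. Studies in this literature typically report a log-log compression of roughly the form judged ≈ a * actual^b with b < 1; the constants and "actual" death counts below are made up purely for illustration:

```python
# A made-up illustration of the "compression" finding: if judged
# frequency scales as a * actual**b with b < 1, then rare causes get
# overestimated and common causes underestimated. The fit parameters
# and the "actual" death counts are hypothetical, not the study's data.

a, b = 100.0, 0.45   # hypothetical compression parameters (b < 1)

actual_deaths = {    # hypothetical deaths per year
    "botulism":        5,
    "tornado":        90,
    "diabetes":   40_000,
    "stroke":    150_000,
}

for cause, actual in actual_deaths.items():
    judged = a * actual ** b
    print(f"{cause:<9} actual={actual:>7,} judged~{judged:>7,.0f} "
          f"ratio={judged / actual:6.2f}")
```

With any b < 1, the judged/actual ratio crosses 1 somewhere in the middle of the frequency range, so rare, sensational causes come out overestimated and mundane ones underestimated -- the same qualitative pattern as in the chart.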

Similarly, have a look at Table 2 (p. 19) in this follow-up study by the same authors, where various groups of people were asked to rank the relative risks of different technologies. We clearly see a discrepancy between the opinions of experts and those expressed by laymen. For example, nuclear power is perceived as far riskier by members of the general public than by those familiar with the actual number of fatalities and diseases brought on by this technology.
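
For what it's worth, a simple way to quantify that kind of divergence is a rank correlation between the two orderings. The rankings below are invented for illustration (rank 1 = riskiest) and are not the paper's data:

```python
# A sketch of quantifying lay-vs-expert disagreement over risk rankings
# with a Spearman rank correlation. The rankings are hypothetical
# (rank 1 = riskiest), not the data from Table 2 of the study.
from scipy.stats import spearmanr

items = ["nuclear power", "motor vehicles", "smoking",
         "pesticides", "x-rays", "swimming"]

lay_rank    = [1, 2, 4, 3, 6, 5]   # hypothetical lay ranking
expert_rank = [6, 1, 2, 5, 4, 3]   # hypothetical expert ranking

rho, p_value = spearmanr(lay_rank, expert_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
# Note the nuclear power reversal: riskiest for laypeople, least risky
# for the (hypothetical) experts, dragging the correlation down.
```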

Now, Taleb might respond by saying that these are exactly the type of misleading comparisons he is talking about! He could argue that the "actual" observed fatalities are not necessarily an accurate representation of the underlying risks. After all, a single major event could significantly alter the average number of deaths from any particular cause (e.g. a nuclear meltdown)...

Well, perhaps, but I'm not totally convinced. For one thing, that says very little about the flipside of this problem, which is the degree to which "normal" causes of death are underestimated -- both in absolute terms and relative to more sensational outcomes. Second, by now we have accumulated decent data on numerous low-probability events that have occurred (rare as they are), from outbreaks of plague to massive natural disasters. Third, even disregarding my previous points, it doesn't seem at all obvious to me that the public is guilty of consistently underplaying the role of black swan events. Indeed, if anything, they appear to be using a heuristic that causes them to significantly overestimate the likelihood of rare events... perhaps as a way of adjusting for the -- unquantifiable? -- impact that these outcomes could have if they do occur?

To restate my question, then, to those of you who know Taleb better than I do: Does he ever integrate (or reconcile) his thesis that people are blind to black swan events with the empirical evidence that people consistently overestimate the likelihood of low probability, dramatic outcomes?

UPDATE: This post appears to have provoked Taleb's ire in somewhat amusing fashion. See follow-up here.
UPDATE 2: Second follow-up and some big name support of my basic point here.

___
[*] This type of unquantifiable uncertainty happens to be a big area of research in the climate change literature -- in particular, the 'dismal theorem' proposed by Marty Weitzman, whom I have mentioned numerous times before on this blog. See here for more.

2 comments:

  1. You are missing the point. In the complex domain, the very presumption that the researchers you cite can compute the true risk of tail events is what Taleb is calling into question. This would preclude any "empirical" study comparing perceived to actual risk, since it is the actual risk that is on shaky empirical grounds. Your casual assumption that "by now we have accumulated decent data" is just wrong, since most of the missing data is by definition in the tails.

    A further problem with this kind of research in particular: how people respond when asked to estimate risk (i.e. report mental salience, according to Kahneman) != how people actually live their lives. Do you honestly think people take the same amount of care to avoid botulism exposure as they do avoiding electrocution in daily life (as the data you cite would necessitate if empirically sound)?

    You should make it a rule to actually read an author's work before discussing your "reading" of him.

    1. "You should make it a rule to actually read an author's work before discussing your 'reading' of him."

      You are doing little to dispel the notion of an insufferable Taleb clique. So now it is impossible to pose a question on the blogosphere -- phrased in extremely polite terms I might add -- without having read someone's entire corpus? I was very up-front about not having read his books in full. However, I also pointed out that I have read a number of his other writings. The Edge essay that I quoted from, in particular, is typically cited as the most concise exposition of his view.

      Speaking of reading, you may want to go through my post again. My intention was hardly to invalidate his conceptual point about accurately computing damages in the tail. I understand the statistical mechanics very well and have used this very argument numerous times in discussing climate mitigation (as indicated in the footnote of the post).

      What I am interested in, however, is how or whether Taleb reconciles these particular findings -- narrow as they may be -- with his broader view. In particular, does he think people are exhibiting an efficient heuristic in "over"stating the relative likelihood of death by tornado versus stroke (for example)? You can see this follow-up post for reasons that I don't think he provided a satisfactory answer to that question.
