A while ago, Fast Company published an article about a new study from Princeton's Department of Mechanical and Aerospace Engineering, claiming that "Facebook will lose 80% of users by 2017". The article was duly shared around the world across multiple channels, reaching millions of people. Look carefully, though, and you'd find that the statistics were based exclusively on Google search volume for the word 'Facebook'. Critically, that search data does not capture mobile usage, which accounts for the majority of Internet traffic, including a rise to over 100 million mobile Facebook users at the time. Such insight was quietly set aside to make way for the attention-grabbing headline.
We see many market reports and much professional comment that, one would assume, are valid and considered. However, I see a continual trend of hidden information residing behind colourful charts that are widely quoted and used as a basis for investment of time, energy or money. In The Black Swan, Taleb calls this 'silent evidence'.
Are we seeing the full picture? Often, circumstance determines the answer: sub-editors routinely strip out the subtleties that surround what is written. Our consumption of information has to fit our increasingly 'bite-size', 'instant satisfaction' habits, but I fear this comes at the expense of real truth.
In its most basic form, silent evidence is easy to spot. For example, if I were to prove to you that sober drivers cause more accidents than drivers under the influence of alcohol, would you conclude that it is safer to drive whilst under the influence?
The statistic, as presented, may even be true. What is missing is that drivers under the influence are a relatively small minority, yet they account for a disproportionately high number of accidents per driver.
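To make the arithmetic concrete, here is a minimal sketch using invented, purely illustrative numbers (1,000 drivers, 5% of them impaired): the sober majority can cause more accidents in total while still being far safer per driver.

```python
# Hypothetical numbers, purely for illustration: 1,000 drivers on the road,
# 5% of whom are driving under the influence.
sober_drivers = 950
impaired_drivers = 50

# Assume the sober majority causes 60 accidents and the impaired minority 40.
sober_accidents = 60
impaired_accidents = 40

# The headline comparison: sober drivers cause more accidents in total.
assert sober_accidents > impaired_accidents

# The silent evidence: the accident *rate* per driver tells the opposite story.
sober_rate = sober_accidents / sober_drivers          # ~0.06 accidents per driver
impaired_rate = impaired_accidents / impaired_drivers  # 0.80 accidents per driver

print(f"Sober rate: {sober_rate:.2f}, impaired rate: {impaired_rate:.2f}")
# prints: Sober rate: 0.06, impaired rate: 0.80
```

The absolute count favours the misleading claim; the per-driver rate, over ten times higher for the impaired group, is the context the headline leaves out.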
This trap is very common, and there are a number of ways in which information can be misleading, including:
- False data – an easy one, just plain downright lies
- Bad sampling – often seen where a very small segment of people is asked a question and the resulting percentages are scaled up across a much larger population
- Predictive questions – a modern-day media classic is "would you like adverts on your mobile device?" This is predominantly asked when the required result is a resounding 'no'. If you want the answer to include more 'yes' responses, you would swap the word 'adverts' for "useful content that would make your life better?" This leads to a major skew towards the positive. Either way, the questions have predictable answers
- Misleading selections – commonly where a snapshot of real data is used which intentionally omits preceding periods that would harm the impact. For instance, if you wanted to show an upturn in advertising spend but only three months in the year showed an increase, you wouldn't show the downturn that came before, only the growing months (which may well be recovering just a fraction of the earlier loss)
- Self-adjusted rankings – the editorial right to remove any justification of a ranking. Whatever industry you're in, you may have seen companies who claim to be the "World's Number 1". Surely there can only be one, right? But on closer inspection you find that the omitted information is the part that defines exactly what the ranking conditions are. Is it in terms of revenue, profit, employee numbers or experience of the CEO? We are only shown the juicy bits, and the terms and conditions are nowhere to be seen
- Limiting qualifiers – one of my favourites and similar to self-adjusted rankings. This is where you word a statistic in a way that the result is essentially fixed. For instance: “The brown bear is the largest land predator in the world”. The word ‘predator’ rules out elephants which are bigger but aren’t predators, while the word ‘land’ rules out various whales which are predators but don’t live on land. The statement is built for the brown bear to dominate
- Percentage accentuation – so common. Take a company making a bunch of people redundant. If the company has 100 staff and gets rid of 20, in the interests of making the statistic sexier, the headline would be "Company lays off 20% of entire workforce!" because 20 people doesn't sound anywhere near as dramatic as 20%. However, in a company of 1 million, 20% is still quite sexy, but nothing sounds as big as "Company lays off 200,000 people!" The liberal insertion of exclamation marks is my own, of course…
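Some of these tricks, bad sampling in particular, are easy to demonstrate in a few lines of code. The sketch below (illustrative only, assuming a true 'yes' rate of 50% in the population) repeatedly polls 10 people versus 1,000 and shows how wildly the small-sample percentages swing before someone scales them across an entire population.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

TRUE_RATE = 0.5  # assumed true share of 'yes' answers in the population

def poll(sample_size, trials=1000):
    """Run repeated polls and return the (min, max) of the estimated percentages."""
    estimates = []
    for _ in range(trials):
        yes = sum(random.random() < TRUE_RATE for _ in range(sample_size))
        estimates.append(100 * yes / sample_size)
    return min(estimates), max(estimates)

small = poll(10)    # headlines built from 10 respondents
large = poll(1000)  # a more sensibly sized sample

print(f"n=10:   estimates ranged from {small[0]:.0f}% to {small[1]:.0f}%")
print(f"n=1000: estimates ranged from {large[0]:.1f}% to {large[1]:.1f}%")
```

With only 10 respondents, individual polls can land tens of percentage points either side of the truth; the same question asked of 1,000 people stays within a narrow band. Any single small-sample result is essentially a dice roll dressed up as a statistic.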
In summary, publishers have a responsibility to promote accurate and contextually detailed data, and viewers have an opportunity to dig deeper. As information spreads so quickly in this ultra-connected world, the misrepresentation of truth reframes what 'truth' is – especially when those in a position of authority relay information that is believed on sight.
To quote an unknown source discussing statistics:
“86% of statistics are made up on the spot and the remaining 24% are mathematically flawed.”
Taken from 28 Thoughts – see 'books' on the menu.