Are fortune tellers good at foretelling the future? In most cases, no, and often they get the exact opposite. What about stock market fortune tellers, the ones who predict the future of stocks?
Predictors of the stock market are more common than ever. Open Twitter, news channels, WhatsApp groups, or newspapers and you'll find a flood of predictors asserting where markets will go or how stocks will fare in the near future (I'm surprised by the exact numbers and the confidence, or hubris).
If prediction doesn't work, why are so many people predicting? The problem is not who predicts or what they predict; the problem is in prediction itself. Let's uncover some basic flaws in prediction.
The first flaw is not taking into account the outside view, which Daniel Kahneman outlines in his book "Thinking, Fast and Slow". The time to complete a project, estimated from the inside view, is almost always distorted, and in most cases the project takes longer than expected. Such problems arise when we think we know more than we actually do; this is the expert fallacy.
A case of prediction error outlined in Nassim Taleb's book The Black Swan: the Sydney Opera House was supposed to open in 1963 at a cost of AU$7 million, yet it opened ten years later at an estimated cost of AU$104 million. The causes of the delays were outside anything people had imagined.
I fell into the trap of predicting without accounting for a black swan, or an unknown, myself. When I took a loan in 2017, I predicted I would close it by 2019. For two years things went as planned, but in early 2019 I had to accept that it would be delayed by another year, for a cause I had never imagined back in 2017.
Stock market predictors make the same mistake. When predicting a company's future or other economic outcomes, we never account for black swan events that lie outside our models and can prolong or distort an outcome, because they are too abstract to come up with. We overestimate a low-probability event that is told to us and underestimate a high-probability event that is not.
Ex: Death in a terrorist attack seems scarier and more likely than death in a car accident, yet the odds of dying in a car accident are far higher.
Ex: In 2019-2020, people predicted a set of stocks would keep rising and companies would keep outperforming, but what happened was an event outside our predictive capability (the coronavirus), which put our spreadsheets to rest for a while. This outside event, a black swan, is what makes prediction so difficult, and it is why I never place bets purely on the outcomes of my predictions: I know there are unknown unknowns.
Learn to build in error rates. If you predict a stock price will be 100 in 6 months, you should also state an error rate, say +/- 10%, meaning the stock could land anywhere between 90 and 110. If after 6 months the stock price is around 30, the realized error is 70%. That's how accurately the prediction has actually played out.
This helps you see things more objectively and plan for various scenarios. Narrow framing in your predictions distorts your ability to see outside events.
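The error-rate bookkeeping above can be sketched in a few lines of code; the numbers mirror the example (predict 100 in 6 months, a +/- 10% band, an actual price of 30):

```python
# Sketch of the error-rate bookkeeping described above.

def prediction_band(predicted: float, error_rate: float) -> tuple:
    """Return the (low, high) range implied by a prediction and an error rate."""
    return predicted * (1 - error_rate), predicted * (1 + error_rate)

def realized_error(predicted: float, actual: float) -> float:
    """How far the actual outcome landed from the prediction, as a fraction."""
    return abs(actual - predicted) / predicted

low, high = prediction_band(100, 0.10)
print(round(low, 2), round(high, 2))        # the "around 90 or 110" band
print(round(realized_error(100, 30), 2))    # the 70% realized error, as 0.7
```

Stating the band up front forces you to write down how wrong you're prepared to be; checking the realized error afterwards tells you how your predictions actually perform over time.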
The second flaw in prediction: the more information you have, the more confident you become, yet the less accurate your predictions turn out to be.
Let’s see how the second flaw manifests.
In 1974, Paul Slovic — a world-class psychologist, and a peer of Nobel laureate Daniel Kahneman — decided to evaluate the effect of information on decision-making. This study should be taught at every business school in the country. Slovic gathered eight professional horse handicappers and announced, “I want to see how well you predict the winners of horse races.” Now, these handicappers were all seasoned professionals who made their livings solely on their gambling skills.
Slovic told them the test would consist of predicting 40 horse races in four consecutive rounds. In the first round, each gambler would be given the five pieces of information he wanted on each horse, which would vary from handicapper to handicapper. One handicapper might want the years of experience the jockey had as one of his top five variables, while another might not care about that at all but want the fastest speed any given horse had achieved in the past year, or whatever.
Finally, in addition to asking the handicappers to predict the winner of each race, he asked each one also to state how confident he was in his prediction. Now, as it turns out, there were an average of ten horses in each race, so we would expect that by blind chance (random guessing) each handicapper would be right 10 percent of the time, and that his confidence in a blind guess would be 10 percent.
So in round one, with just five pieces of information, the handicappers were 17 percent accurate, which is pretty good, 70 percent better than the 10 percent chance they started with when given zero pieces of information. And interestingly, their confidence was 19 percent — almost exactly as confident as they should have been. They were 17 percent accurate and 19 percent confident in their predictions.
In round two, they were given ten pieces of information. In round three, 20 pieces of information. And in the fourth and final round, 40 pieces of information. That’s a whole lot more than the five pieces of information they started with. Surprisingly, their accuracy had flatlined at 17 percent; they were no more accurate with the additional 35 pieces of information. Unfortunately, their confidence nearly doubled — to 34 percent! So the additional information made them no more accurate but a whole lot more confident. Which would have led them to increase the size of their bets and lose money as a result.
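To make the cost of that confidence gap concrete, here is a small illustrative sketch of my own (not part of Slovic's study): if a bettor sizes bets in proportion to confidence while the true edge never improves, growing confidence simply scales up the expected loss. The 15% house take is an assumed, parimutuel-style figure.

```python
# Illustrative sketch (my own assumption, not from Slovic's study):
# if bet size scales with confidence but true accuracy is flat, any
# negative edge is amplified in proportion to the confidence.

EDGE = -0.15  # assumed expected return per dollar wagered (track takeout)

def expected_loss(bankroll: float, confidence: float) -> float:
    """Stake a fraction of bankroll equal to confidence; return expected P&L."""
    stake = bankroll * confidence
    return stake * EDGE

round1 = expected_loss(1000, 0.19)  # 19% confident, 5 pieces of info
round4 = expected_loss(1000, 0.34)  # 34% confident, 40 pieces of info

print(round(round1, 2))  # expected loss at 19% confidence
print(round(round4, 2))  # nearly double the expected loss at 34% confidence
```

The extra information bought no accuracy, so the only thing it changed was the stake, and with it, the size of the expected loss.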
Beyond a certain minimum amount, additional information only feeds — leaving aside the considerable cost of and delay occasioned in acquiring it — what psychologists call “confirmation bias.” The information we gain that conflicts with our original assessment or conclusion, we conveniently ignore or dismiss, while the information that confirms our original decision makes us increasingly certain that our conclusion was correct.
Nassim Taleb, in The Black Swan, writes:
“Forecasting by bureaucrats tends to be for anxiety relief rather than for adequate policy making.”
Predicting in any complex environment is very difficult, especially in stock markets. Anchoring bias is another pitfall in prediction: the predictor anchors on a variable and, even more dangerously, doesn't update that variable even when there is no evidence of a positive outcome.
To avoid the trap of falling for fortune tellers or forecasters in any complex area with many moving parts, it's always good to consider the following:
1. Consider the forecaster's previous forecasts and their outcomes, along with error rates. In most forecasts, the error deviates more than the forecast itself.
2. Look at whether the forecast includes alternative scenarios. If the forecaster is narrow framing, it's often anchoring meant to reduce the apparent randomness in the prediction.
3. Be the fox: keep an open mind about forecasts and don't anchor on a single set of predictions.
4. Empirical studies show that experts such as economists are often no better than a random person at prediction. Don't fall prey to the expert fallacy or authority bias.
5. The more confident you are and the more information you have, the lower the accuracy of your prediction is likely to be.
6. It's always good to get a second opinion, even if it's from an expert.
Ex: A doctor who says there is evidence of a disease could be wrong when it's actually a false positive (you are told you have the disease when you don't).
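The false-positive point can be made precise with Bayes' rule. The numbers below are assumed purely for illustration (1% prevalence, a test with 90% sensitivity and 95% specificity), not taken from any study:

```python
# Bayes' rule with assumed, illustrative numbers: when a disease is rare,
# P(disease | positive test) can be surprisingly low.

prevalence  = 0.01   # assumed: 1% of people have the disease
sensitivity = 0.90   # assumed: P(positive test | disease)
specificity = 0.95   # assumed: P(negative test | no disease)

true_positives  = prevalence * sensitivity            # sick and flagged
false_positives = (1 - prevalence) * (1 - specificity)  # healthy but flagged

p_disease_given_positive = true_positives / (true_positives + false_positives)
print(round(p_disease_given_positive, 3))  # roughly 0.154
```

Under these assumptions, a positive result means only about a 15% chance of actually having the disease; most positives are false, which is exactly why a second opinion is worth getting.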