My research situates itself in the field of judgmental forecasting. But what is judgmental forecasting exactly?
Let’s look at the two words separately. First, consider forecasting, which is usually applied in an operational context. Let me give an example. How many cans of soup does a company need to stock in summer versus in winter? What will be the rise in sales when the company launches a national campaign for laundry detergent? These numbers are all projections, predictions, or in other words, forecasts.
The first rule of forecasting is that forecasts are always wrong! Kind of depressing, isn’t it? The job of the prediction model, and of the forecaster, is to make that error as small as possible. This brings us straight to the ‘judgmental’ part: a prediction can be made by a computer model, by a human expert, or by a combination of both. It’s the latter that is most popular. Usually, a forecaster lets a computer model run its course, often based on historic sales data, and then evaluates the output of the model. In the case of the national campaign, for instance, the computer model may underestimate the number of bottles of detergent that will be sold. The forecaster can then adjust the forecast upwards to account for their knowledge of the campaign.
But unfortunately, that is where it often goes wrong. People often adjust when they shouldn’t, and ‘fiddle around’ with the model output. A variety of explanations have been offered for this. Maybe they just want to show that they are paying attention to the task at hand. Maybe they feel their job is only warranted if they contribute something to the model output. Maybe it’s about ownership and control. The truth of the matter is, forecasters adjust. And they often harm forecast accuracy in doing so. Why? Because their adjustments are often based on biases rather than on knowledge unknown to the model. For instance, we as people are incurably overoptimistic, so we tend to adjust our sales forecasts upwards. Or we adjust for political reasons: showing the boss what amazing figures we’ll make this quarter.
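To make this concrete, here’s a toy sketch in Python. All numbers are made up for illustration: a model that doesn’t know about the campaign underestimates sales, an adjustment based on genuine campaign knowledge shrinks the error, and a purely overoptimistic adjustment makes the forecast worse than the model it started from.

```python
# Toy illustration with made-up numbers: comparing a pure model forecast,
# a knowledge-based adjustment, and an overoptimistic adjustment.

def mean_absolute_error(forecasts, actuals):
    """Average absolute gap between forecasts and realised sales."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Hypothetical weekly detergent sales during a national campaign.
actuals = [120, 135, 150, 140]
model = [100, 102, 101, 103]          # model is unaware of the campaign
informed = [f + 35 for f in model]    # adjustment based on campaign knowledge
optimistic = [f + 80 for f in model]  # overoptimistic 'gut feeling' adjustment

print(mean_absolute_error(model, actuals))       # 34.75: model misses the lift
print(mean_absolute_error(informed, actuals))    # 8.25: informed adjustment helps
print(mean_absolute_error(optimistic, actuals))  # 45.25: bias makes things worse
```

The point of the sketch: an adjustment only pays off when it carries information the model doesn’t have; adjusting out of optimism alone can leave you worse off than not touching the forecast at all.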
So what is my research about? I run online experiments where I look at the adjustments people make and compare them to the statistical forecasts. Or I offer the forecaster multiple forecasts and ask them to choose the best one. There are also studies running on trust in forecasts, on the effect of risk information in personal savings forecasts, on algorithm aversion and algorithm appreciation, on overoptimism in intrapreneurship, and so on. I refer you to the research menu to explore my different publications and ongoing research. And as always, if you’re interested in collaboration, go to the contact form and drop me a message!
Recently, I’ve been getting more and more invested in the idea of algorithm aversion and algorithm appreciation as motivations of our behaviour when working with computer models. Do we have a natural aversion to models, because we expect them to be perfect while they are not? Or do we appreciate their computational power? A mystery to be solved…