SAS for Finance

Forecasting and data analysis techniques with real-world examples to build powerful financial models

Product type: Paperback
Published: May 2018
Publisher: Packt
ISBN-13: 9781788624565
Length: 306 pages
Edition: 1st
Author: Harish Gulati

Table of Contents

Preface
1. Time Series Modeling in the Financial Industry
2. Forecasting Stock Prices and Portfolio Decisions using Time Series
3. Credit Risk Management
4. Budget and Demand Forecasting
5. Inflation Forecasting for Financial Planning
6. Managing Customer Loyalty Using Time Series Data
7. Transforming Time Series – Market Basket and Clustering
8. Other Books You May Enjoy

Good versus bad forecasts

Forecasting plays a fundamental role in ensuring the success and future viability of an organization. The famous camera and film company Kodak, for example, failed to forecast the growth of digital photography and has since fallen behind its peers. Coca-Cola famously launched its revamped New Coke in response to a surge in Pepsi's popularity in the 1980s; the company miscalculated the appeal of the new formula and branding, and soon reverted to its original recipe, rebranding it Classic Coke. As these examples show, most business decisions involve estimates or forecasts. These decisions can range from opening a new manufacturing facility to launching a new product range, opening more stores, or launching a new mobile application.

Since forecasting is so integral to the whole business process, it is important to make the forecast as accurate as possible. You will seldom see a forecast that exactly matches the observed outcome; nor is it rare to see forecasts miss their objective by a wide margin. There are various statistical measures, which we will cover in upcoming chapters, that help us assess the probable success of a model's forecast. For now, let's assess the following factors that define the quality of the forecasts produced:

  • Subject area: Forecasting accuracy depends on what is being forecasted and how the forecast is applied. In the earlier example of a spacecraft being readied for launch, the weather forecast cannot be wrong by a significant margin, as the cost of getting it wrong is high. If you are simply planning a picnic, however, the cost of getting the weather forecast wrong is of a completely different order.
  • Consistency: One of the most difficult tasks in the corporate world is getting peers in various teams to trust the model that generates the forecasts. An inconsistent model won't convince others to support it through budget allocation or to use its output consistently. Any model needs testing, and ideally validation data, before it is formally signed off. If a model is replacing a manual process, it may be a good idea to conduct a pilot or parallel run in which the model and the manual process both act as inputs. This can help overcome teething problems when making the model operational, and may also highlight any concerns before the model gets a bad reputation.
  • Error margins: In most cases, the brief for any forecasting model is to get it right, but right to within what percentage? Is there an acceptable tolerance? And should a model be right all the time? A model isn't a crystal ball, and there will be instances when forecasts miss the mark by more than a reasonable margin. When sharing forecasts, a modeler should provide a confidence level that implies a margin for error. The business's expectation of the error margin may be different, so it is best to discuss this before building the model (a minimal sketch of one common error measure appears after this list).
  • Rare events: In hindsight, rare events are relatively simple to model. You will probably see a big crest or trough in a time series line plot and will therefore be able to identify a significant event. If it's a rare event, creating a dummy variable should help smooth out its effect (see the second sketch after this list). If another rare event occurs in the future, the model might not be able to deal with it effectively; there is always a possibility of this happening, and the impact of rare events is unpredictable. A modeler should be able to interpret an event's effects and communicate with stakeholders about the relationship between the event and their forecast, as well as whether the model needs recalibrating.
  • Judgment versus modeled forecasts: In any large organization, there will be individuals who think they can predict a scenario better than a model. They might deal with the forecasted scenario more closely than the modeler, have an inherent bias towards an outcome, or feel that the model doesn't take into account factors they consider important. Rather than taking a skeptical view of such individuals, a modeler should try to engage them and see whether the model can learn from their experience.
  • Volatile environment: Will a model's performance be the same in both volatile and stable periods? After the Lehman crisis, the interest rates of major central banks nosedived. The two outcomes such a model would probably forecast in this environment are that a central bank will hold rates or revise them downwards. Once the US Federal Reserve starts revising rates upwards, other central banks might follow. Will the model be able to correctly forecast the rate rise and effectively predict this alternative scenario? A model that incorporates good explanatory variables should ideally be able to predict rate rises and accommodate other scenarios; however, not all models continue to perform well, and any deterioration may prompt a recalibration or rebuild.
  • Assessing period: Some models are built for a one-off objective, whereas others are built into business as usual (BAU) processes and will generate forecasts for years to come. The benchmark for judging forecasts as good or bad should depend on the assessing period. Another aspect to note is the period's length: a weather model might be accurate when forecasting a day ahead but not a month ahead, for example, and the monthly model might need different variables and methodology, making the daily model unfit for that purpose. Similarly, a modeler should build separate models for predicting the risk of a customer defaulting at a point in time versus at any time in the next 12 months. A regulator might also require that certain businesses and models are validated every few months or years to ensure they remain fit for purpose.
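
To make the error-margin discussion concrete, here is a minimal sketch of one common accuracy measure, the mean absolute percentage error (MAPE). The dataset FCST_EVAL and its variables ACTUAL and FORECAST are hypothetical names used purely for illustration; the measures we will actually rely on are covered in upcoming chapters.

    /* Minimal sketch: compute MAPE for a set of forecasts.
       FCST_EVAL, ACTUAL, and FORECAST are hypothetical names,
       and ACTUAL is assumed to be nonzero throughout. */
    data fcst_errors;
        set fcst_eval;
        /* absolute percentage error for each observation */
        ape = abs(actual - forecast) / abs(actual) * 100;
    run;

    /* the mean of the absolute percentage errors is the MAPE */
    proc means data=fcst_errors mean;
        var ape;
    run;

The resulting MAPE can then be compared against the tolerance agreed with the business before the model was built, rather than debating error margins after the fact.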
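
And here is a minimal sketch of the dummy-variable treatment of a rare event mentioned above. The dataset SALES_TS, the variables MONTH_DT and SALES, and the event date are all hypothetical; any modeling procedure that accepts an indicator variable would work in the same way.

    /* Minimal sketch: absorb a one-off event with a dummy variable.
       SALES_TS, MONTH_DT, SALES, and the event date are hypothetical. */
    data sales_ts_adj;
        set sales_ts;
        time_index + 1;                          /* simple linear trend term */
        rare_event = (month_dt = '01MAR2009'd);  /* 1 in the event month, else 0 */
    run;

    /* regress sales on the trend plus the event dummy so the spike
       is absorbed by the dummy rather than distorting the trend fit */
    proc reg data=sales_ts_adj;
        model sales = time_index rare_event;
    run;
    quit;

Because the dummy soaks up the event's effect, the remaining coefficients describe the underlying series; the caveat, as noted above, is that this tells the model nothing about the next rare event.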