Model development and implementation for current expected credit losses (CECL), as well as subsequent model validations, have been top of mind for financial institutions since the Financial Accounting Standards Board (FASB) first released the new standard in June 2016. A large portion of this effort involved gathering and cleaning data for the models or, in some cases, relying on peer data because of data gaps or a lack of access to more accurate data. Some financial institutions were able to capture historical data within their models covering the past 20 years, which included the Great Recession. Yet even with 20 years of data, many financial institutions could not have truly prepared for how the industry would be affected in a worst-case scenario precipitated by a worldwide pandemic.
The Pandemic’s Impact on the Model Approach
While CECL models can be constructed in various ways (transition matrix, probability of default/loss given default (PD/LGD), open pool/vintage, etc.), many models, especially regression models, rely heavily on certain predictor variables, the most common being unemployment. The year 2020 saw extreme movements in unemployment figures at a far more staggering rate than was observed during the Great Recession. Additionally, many institutions relied more heavily on national unemployment data than on state unemployment data. Although national unemployment data is likely more correlated with losses than any available state data, state unemployment data could have been useful when stress testing specific portfolios or evaluating specific industries to help support modeling decisions. For institutions that relied only on national unemployment rates, the pandemic drove unemployment up faster than any stressed scenario, an outcome not included in any training dataset. Moreover, stressing to such an extreme (i.e., beyond the severity of the Great Recession) was not a common practice in development.
The unprecedented impact of government restrictions during the pandemic was also likely not factored into model defaults. For example, if borrowers accepted government assistance through the Paycheck Protection Program (PPP) and the Coronavirus Aid, Relief, and Economic Security (CARES) Act, how would that assistance distort their actual performance? Many CECL models did not take into consideration the need to flag accounts in forbearance or how such accommodations might mask a borrower’s or loan’s true performance.
While the pandemic’s impact was unpredictable, financial institutions have learned that the quantitative inputs of many CECL models, such as the predictor variables in development data, were unfortunately not stressed enough. Stressing various forecast scenarios and/or reasonable and supportable (R&S) periods was also not a common practice during development. Institutions may have used statistics to identify which macroeconomic variables are highly correlated with their losses, but the pandemic highlighted the need for additional analysis to evaluate the sensitivity of those macroeconomic variables or, for models that use risk ratings to evaluate the probability of default, the sensitivity of the risk ratings themselves.
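The kind of sensitivity analysis described above can be sketched with a toy example. The logistic PD model and its coefficients below are purely hypothetical, not drawn from any institution's actual model; the point is only to show how shocking an unemployment predictor beyond historically observed levels reveals how steeply modeled default probabilities can move:

```python
import math

# Hypothetical logistic PD model: PD = sigmoid(B0 + B1 * unemployment_rate).
# Coefficients are illustrative only, not from any real institution's model.
B0, B1 = -5.0, 0.35

def pd_at(unemployment_pct: float) -> float:
    """Modeled probability of default for a given unemployment rate (percent)."""
    return 1.0 / (1.0 + math.exp(-(B0 + B1 * unemployment_pct)))

# Stress from a benign baseline, past the Great Recession peak (~10%),
# up to pandemic-era levels (~14.7% in April 2020).
for u in (4.0, 10.0, 14.7):
    print(f"unemployment {u:5.1f}% -> modeled PD {pd_at(u):.3f}")
```

Because the logistic curve is nonlinear, a shock well beyond the training range can move PDs far more than a linear extrapolation from historical stress results would suggest, which is exactly why stressing only to Great Recession severity understated the risk.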
Future Considerations for CECL Models
As institutions continue to emerge from the pandemic, there are key considerations for improving CECL models, stress testing and model validation.
Financial institutions can consider establishing benchmarks for their models, such as realized lifetime loss rates, the Federal Reserve’s COVID-19 stress testing results for certain portfolios, allowance for credit losses (ACL) benchmarks from the quarterly disclosures of comparable banks, working peer groups, etc. Institutions will also need to decide how to incorporate or exclude certain data points going forward, including whether to use data collected in 2020, such as information about the impact of government assistance, borrower and loan performance and unemployment, to model a future crisis. Roll rate challenger models could also be developed to help guide certain qualitative adjustments and aid in decision-making.
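A roll rate challenger model of the kind mentioned above can be sketched simply. The delinquency buckets and roll rates below are hypothetical placeholders; a real challenger would estimate the rates from the institution's own historical migration data:

```python
# Illustrative roll rate challenger: balances migrate through delinquency
# buckets each period according to average historical roll rates.
# Bucket names and rates are hypothetical.
BUCKETS = ["current", "30dpd", "60dpd", "90dpd", "loss"]

# ROLL_RATES[i][j]: fraction of bucket i balances rolling to bucket j
# in the next period (each row sums to 1.0).
ROLL_RATES = [
    [0.96, 0.04, 0.00, 0.00, 0.00],
    [0.40, 0.30, 0.30, 0.00, 0.00],
    [0.15, 0.10, 0.30, 0.45, 0.00],
    [0.05, 0.00, 0.05, 0.40, 0.50],
    [0.00, 0.00, 0.00, 0.00, 1.00],  # loss is an absorbing state
]

def roll_forward(balances, periods):
    """Project bucket balances forward the given number of periods."""
    n = len(BUCKETS)
    for _ in range(periods):
        balances = [
            sum(balances[i] * ROLL_RATES[i][j] for i in range(n))
            for j in range(n)
        ]
    return balances

start = [1_000_000.0, 50_000.0, 20_000.0, 10_000.0, 0.0]
projected = roll_forward(start, 8)
print(f"projected cumulative loss after 8 periods: {projected[-1]:,.0f}")
```

Comparing the challenger's projected loss against the champion model's output gives a concrete reference point for sizing, or challenging, a qualitative adjustment.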
The pandemic also highlighted the need for institutions to conduct more stress testing of their models’ qualitative component, as well as to quantify the various qualitative factors. For example, one option could be to consider the best and stressed eight quarters (or any appropriate number of quarters) to help anchor basis point adjustments in the qualitative component. Institutions could also consider the impact of COVID-19-related restrictions on commercial real estate, the future of travel, social distancing and increased remote/virtual work environments. Qualitative adjustments should be supported by economic data such as a real estate index or housing index, and the basis point adjustment should be tied to a quantitative measurement that supports both the movement in a qualitative factor and the total amount of basis points allocated to that factor.
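The best/stressed eight-quarter anchoring idea can be illustrated with a short sketch. The quarterly charge-off rates below are invented for illustration; the technique simply takes the lowest and highest rolling eight-quarter averages from history as the floor and ceiling for a qualitative basis point adjustment:

```python
# Hypothetical quarterly net charge-off rates, in basis points.
quarterly_nco_bps = [12, 15, 9, 22, 48, 95, 130, 88, 40, 25, 18, 14, 11, 10, 30, 60]

WINDOW = 8  # eight quarters, per the anchoring approach described above

def rolling_avgs(rates, window):
    """All rolling window-quarter average loss rates over the history."""
    return [sum(rates[i:i + window]) / window
            for i in range(len(rates) - window + 1)]

avgs = rolling_avgs(quarterly_nco_bps, WINDOW)
best, stressed = min(avgs), max(avgs)
print(f"qualitative adjustment anchored between {best:.1f} and {stressed:.1f} bps")
```

Anchoring the adjustment to observed best and stressed periods gives the basis point allocation the kind of quantitative support the paragraph above calls for, rather than leaving it as pure judgment.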
Institutions should regularly evaluate the qualitative-to-quantitative ratio (how much of the allowance estimate comes from each component) to make sure the qualitative component, which may include management adjustments, does not mask deficiencies within the quantitative model that may warrant redevelopment. Consider whether assumptions are still in line with model expectations, or whether they are being overlooked because the reserve relies too heavily on the qualitative component rather than the quantitative calculation. The qualitative component cannot simply be an overlay without adequate support, and any additional overlay or management adjustment beyond the qualitative factors themselves should be monitored.
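A ratio check of this kind is straightforward to monitor. The dollar amounts and the 40% threshold below are hypothetical; each institution would set its own tolerance for how much of the allowance the qualitative component may drive before triggering a review:

```python
# Hypothetical allowance components, in dollars. The 40% threshold is
# illustrative; institutions set their own tolerance.
QUAL_SHARE_THRESHOLD = 0.40

def qualitative_share(quantitative: float, qualitative: float) -> float:
    """Fraction of the total allowance driven by the qualitative component."""
    total = quantitative + qualitative
    return qualitative / total if total else 0.0

share = qualitative_share(quantitative=8_200_000, qualitative=6_100_000)
print(f"qualitative share of allowance: {share:.0%}")
if share > QUAL_SHARE_THRESHOLD:
    print("threshold exceeded: review whether the quantitative model "
          "warrants redevelopment")
```

Tracking this share each quarter makes it visible when management adjustments begin to carry the reserve, which is the warning sign the paragraph above describes.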
Institutions must develop better ongoing monitoring processes, especially stress testing, and report those monitoring results to the governing risk committees, as well as to model risk management. Even though much time may have been devoted to developing the model and selecting an approach, consider how your CECL model defines default criteria, a key assumption on which the model is based. Is the definition of default too narrow, and should it be reevaluated to consider accommodations now that the world is living through a pandemic? Conversations should also take place regarding model vendors and the reliance on those third parties for stress testing; consider whether regular stress testing can be performed in-house for better quality control, cost efficiency and more frequent performance analysis.
The precision, accuracy and stability of the model may have been considered during development, but model accuracy and robustness should also be reevaluated frequently to help inform model overlays. Any trends tied to sweeping model overlays should be measured and tracked, since model results should not rely heavily on overlays. An established process should be in place for determining when an overlay is justified or required and when it may be masking other issues.
DHG Can Help
The COVID-19 pandemic has shifted how financial institutions approach their implementation of the CECL standard, as well as how and when they disclose results and when the models should be validated, which may require the help of a trusted advisor. For more information about CECL model validation and potential improvements and accommodations for your financial institution, reach out to us at email@example.com.
About DHG Financial Services
DHG Financial Services professionals provide you with in-depth industry knowledge and a wide range of advisory, assurance and tax services to address issues facing your industry in today’s challenging environment.