Plansmith Blog

Dynamic Decay Rates: Worthy Challenge or Futile Pursuit?

Posted by Dave Wicklund on 1/6/25 9:11 AM

Over the past several years, the banking industry has seen seismic shifts in deposits as trillions of dollars in government stimulus were released into the economy, followed by a period of dramatic increases in market rates, which in turn drove massive amounts of low-yielding balances into higher-paying CDs and non-bank investment products.

As we all know, that movement of funds has not only taken a toll on earnings, but it has left us, and examiners, questioning the stability of those remaining deposit balances. That deposit stability is also a key factor/assumption in interest rate risk modeling. That assumption is commonly referred to as a “decay rate,” and it is primarily intended to capture the natural attrition of deposit account customers over time.
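To make the term concrete, here is a minimal sketch (in Python, with purely hypothetical balances and rates, not any particular vendor’s or institution’s methodology) of how a single static decay rate gets applied in a model: it is simply the fraction of remaining balances assumed to run off each year.

```python
# Minimal illustration (hypothetical numbers): how a single static decay rate
# translates into projected runoff of a non-maturity deposit (NMD) balance.

def project_nmd_balances(starting_balance: float,
                         annual_decay_rate: float,
                         years: int) -> list[float]:
    """Project year-end NMD balances assuming a constant annual decay rate.

    annual_decay_rate is the fraction of the remaining balance assumed to
    run off each year (e.g., 0.15 means 15% attrition per year).
    """
    balances = []
    balance = starting_balance
    for _ in range(years):
        balance *= (1.0 - annual_decay_rate)
        balances.append(round(balance, 2))
    return balances

# Example: $100 million of MMDA balances with a hypothetical 15% decay rate.
print(project_nmd_balances(100_000_000, 0.15, 5))
# -> [85000000.0, 72250000.0, 61412500.0, 52200625.0, 44370531.25]
```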

Given the importance of these assumptions, we’ve started to see some examiners suggest that decay rates should not be “static” across all model shock scenarios, but, rather, vary based on other factors such as mergers, competition, alternative investments, reputation, and general economic conditions.

The upside is that we can pretty quickly refute that idea for factors such as mergers, competition, and reputation, since we’re supposed to be modeling interest rate risk, and those items have nothing to do with interest rate movements. However, “economic conditions” and “alternative investments” arguably can be related to movements in market rates (i.e., interest rate risk).

Unfortunately, that opens up a whole other set of questions and challenges: how should decay rates change (i.e., be made dynamic) across an essentially infinite number of possible rate scenarios, each with its own economic conditions and its own set of alternative investments available in the market?

At the most basic level, one could argue for using dynamic decay rates in parallel shock scenarios (as we typically do for loan or MBS prepayment speeds), but for those of us who actually have to do those studies, I’d ask, “How do you do that?” There are so many different ways to do a decay study, and getting good historical data can be a significant challenge, so how do you actually do the math and come to any meaningful conclusions? Even though we’ve just lived through a relatively extreme rising-rate environment, and any given financial institution could look at its own changes in non-maturity deposit (NMD) levels over that period, how do you account for all of the factors that likely drove that customer behavior? Most significantly: did the institution raise its NMD rates to keep deposits, what was it paying on CDs, and what CD (or even MMDA) specials were available in the market (or even online nationwide)? The simple answer is, “you can make a qualitative adjustment to the math,” but that adjustment would then have to be supportable as well.
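To illustrate why even the “simple” version of the math isn’t simple, here is one stripped-down cohort approach as a hypothetical Python sketch (not a recommended or complete methodology): track the balances of accounts open at a start date and back into an implied annual decay rate from how much of that cohort survives. Notice that nothing in this arithmetic separates rate-driven behavior from ordinary attrition, which is exactly the problem described above.

```python
# Simplified cohort sketch (hypothetical numbers): measure how much of a
# starting cohort's balance survives to a later date and back into an
# implied annual decay rate. Real studies must also deal with data gaps,
# rate-driven balance shifts, and qualitative adjustments.

def implied_annual_decay(cohort_start_balance: float,
                         cohort_surviving_balance: float,
                         years_elapsed: float) -> float:
    """Solve (1 - d)^years = surviving / start for the annual decay rate d."""
    survival_ratio = cohort_surviving_balance / cohort_start_balance
    return 1.0 - survival_ratio ** (1.0 / years_elapsed)

# Hypothetical example: accounts holding $80 million on 12/31/2019 still
# hold $50 million of those original balances on 12/31/2024 (5 years later).
d = implied_annual_decay(80_000_000, 50_000_000, 5)
print(f"Implied annual decay rate: {d:.1%}")  # roughly 9%
```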

And even if we get to some meaningful dynamic decay assumption for rising rates, what about falling rate environments? Outside of 2020/2021 (which was clearly abnormal because of trillions of dollars in stimulus cash being dumped into the economy), most financial institutions won’t be able to go back to the last falling rate environment of 2007/2008 and get meaningful data at the customer account level to do a good study. Moreover, even if you’ve come up with different decay rates for all of the rising and falling rate scenarios, what kind of adjustments would be made for all the possible non-parallel shock scenarios?
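For what it’s worth, here is what such a “dynamic” assumption set tends to look like mechanically (all numbers below are hypothetical): a table of scenario-specific multipliers on the base decay rate, plus some interpolation rule for in-between shocks. Every entry in that table is exactly the kind of number that would have to be derived and defended, and non-parallel scenarios still aren’t covered.

```python
# Hypothetical illustration of scenario-dependent decay assumptions: a table
# of decay multipliers keyed to parallel rate shocks, with linear
# interpolation for shocks that fall between the defined points.

BASE_ANNUAL_DECAY = 0.12  # hypothetical base-case decay rate

DECAY_MULTIPLIERS = {
    -200: 0.80,  # falling rates: deposits assumed stickier
    -100: 0.90,
       0: 1.00,
    +100: 1.15,  # rising rates: deposits assumed to run off faster
    +200: 1.30,
    +300: 1.45,
}

def scenario_decay(shock_bps: int) -> float:
    """Look up (or linearly interpolate) the decay rate for a parallel shock."""
    if shock_bps in DECAY_MULTIPLIERS:
        return BASE_ANNUAL_DECAY * DECAY_MULTIPLIERS[shock_bps]
    lower = max(k for k in DECAY_MULTIPLIERS if k < shock_bps)
    upper = min(k for k in DECAY_MULTIPLIERS if k > shock_bps)
    weight = (shock_bps - lower) / (upper - lower)
    mult = DECAY_MULTIPLIERS[lower] + weight * (DECAY_MULTIPLIERS[upper] - DECAY_MULTIPLIERS[lower])
    return BASE_ANNUAL_DECAY * mult

print(f"{scenario_decay(+200):.3f}")  # 0.156
print(f"{scenario_decay(+150):.3f}")  # 0.147 (interpolated)
```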

It’s certainly fair to entertain the idea of dynamic decay rates, and good arguments can be made that they aren’t static across all rate environments, but, in reality, the hurdles of getting good data, using an applicable methodology, and making realistic and supportable qualitative adjustments are high enough as it is. I’ve seen so many bad studies over the years that I have very little faith you’d end up with anything truly predictive. In the end, you just end up making up numbers and hoping they satisfy your examiners and auditors.

So, that’s enough negativity. What’s the answer? I think you continue to push for good standard decay studies that are well documented and regularly updated, and then be sure that those decay rates (and other key model assumptions) are subject to rigorous and regular sensitivity testing. Institutions should also be sure to conduct alternate scenario modeling when they are considering material strategic initiatives and/or when changes to their balance sheets are expected to occur. At a minimum, all institutions should run a stress or alternate scenario at least once a year in which they materially shorten their decay rates, so they at least see what the impact on EVE would be if, for any reason, their NMDs become significantly less stable.
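For those wondering what that stress looks like mechanically, here is a rough sketch (hypothetical balances, decay rates, and a single flat discount rate; not a full EVE calculation) that compares the present value of projected NMD runoff under a base decay assumption versus a materially shortened one.

```python
# Rough sketch (hypothetical inputs, not a full EVE model): materially shorten
# the NMD decay assumption and compare the present value of projected runoff.

def pv_of_runoff(balance: float, annual_decay: float,
                 discount_rate: float, horizon_years: int) -> float:
    """Present value of annual runoff cash flows plus the terminal balance."""
    pv = 0.0
    remaining = balance
    for t in range(1, horizon_years + 1):
        runoff = remaining * annual_decay
        remaining -= runoff
        pv += runoff / (1.0 + discount_rate) ** t
    # Treat whatever is left at the horizon as running off at that point.
    pv += remaining / (1.0 + discount_rate) ** horizon_years
    return pv

balance = 100_000_000
base = pv_of_runoff(balance, annual_decay=0.10, discount_rate=0.04, horizon_years=10)
stress = pv_of_runoff(balance, annual_decay=0.25, discount_rate=0.04, horizon_years=10)
print(f"Base PV:   {base:,.0f}")
print(f"Stress PV: {stress:,.0f}")
print(f"Change:    {stress - base:,.0f}")
```

Because the shorter decay assumption pulls the deposit cash flows forward, the discounted value of the liability rises, which, all else equal, reduces EVE. A real model would also reflect the rate paid on the deposits and the full discount curve; the point here is only to see how sensitive the result is to the stability assumption.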

If you need help with decay studies, deposit trend reviews, or sensitivity testing, reach out to us. We can take a look at some options to be sure that your IRR management program not only meets regulatory expectations, but also helps you better manage risk and plan for changing market rate environments.

Give us a call or email us at advisory@plansmith.com to discuss your organization's individual needs.

Topics: interest rate risk management, IRR, asset liability management
