When It Comes to Market Liquidity, What If the Private Dealing System Is Not “The Only Game in Town” Anymore? (Part 1)

A Tribute to Value Investing

“Investors persist in trading despite their dismal long-run trading record partly because the argument seduces them that because prices are as likely to go up as down (or as likely to go down as up), trading based on purely random selection rules will produce neutral performance… Apparently, this idea is alluring; nonetheless, it is wrong. The key to understanding the fallacy is the market-maker.”

–Jack Treynor (writing under the pseudonym Walter Bagehot) in “The Only Game in Town.”


By Elham Saeidinezhad | Value investing, also called “value-based dealing,” is suffering its worst run in at least two centuries. The COVID-19 pandemic intensified a decade of struggles for this popular strategy of buying cheap stocks in often unpopular enterprises and selling them when the stock price reverts to “fundamental value.” Such a statement might be a mere nuisance for followers of the Capital Asset Pricing Model (CAPM). For liquidity whisperers, however, such as “Money Viewers,” this development flags a structural shift in the financial market: the capital market is moving away from a private dealing system towards a public one. In this future, the Fed, a government agency, would be the market liquidity provider of first resort, even in the absence of systemic risk. As soon as there is a security sell-off or a hike in the funding rate, it will be the Fed, rather than Berkshire Hathaway, that uses its balance sheet and expands the monetary base to purchase cheap securities from the dealers and absorb the trade imbalances. The resulting expansion in the Fed’s balance sheet, and in its monetary liabilities, would also alter the money market. The excess reserves floating around could transform the money market, and the payment system, from a credit system into a money-centric one. In Part 1, I lay out the theoretical reasons that blind CAPM disciples to such a brave new future. In Part 2, I will explain why value investors are singing their farewell song in the market.

Jack Treynor, initially writing under the pseudonym Walter Bagehot, developed a model to show that security dealers rely on value-investing funds to provide continuous market liquidity. Security dealers are willing to supply market liquidity at any time because they expect value-based dealers’ support during a market sell-off or upon hitting their finance limit. A sell-off occurs when a large volume of securities is sold and absorbed into the balance sheets of security dealers in a short period of time. A finance limit is the point at which a security dealer’s access to funding liquidity is curtailed. In these circumstances, security dealers expect value investors to act as market liquidity providers of near-last resort by purchasing the dealers’ excess inventories. It is such interdependence that makes a private dealing system the pillar of market-liquidity provision.

In CAPM, however, such interconnectedness is neither required nor recognized. Instead, CAPM asserts that the risk-return tradeoff determines asset prices. However, this seemingly pure intuition has generated real confusion. The “type” of risk that produces return has been the subject of intense debate, even among the model’s founders. Sharpe and Schlaifer argued that market risk (the covariance) is the essential insight of CAPM for stock pricing. They reasoned that all investors have the same information and the same risk preferences. As long as portfolios are diversified enough, there is no need to price security-specific risks because the market has already reached equilibrium; prices already reflect the assets’ fundamental value. For John Lintner, on the other hand, it was more natural to abstract from business-cycle fluctuations (or market risk) and focus on firm-specific risk (the variance) instead. His stated rationale was to abstract from the noise introduced by speculation. The inconsistency of the empirical evidence with equilibrium, and the need to acknowledge the speculators’ role, were probably why Sharpe later shifted away from his equilibrium argument. In his later work, Sharpe derived his asset pricing formula from the relationship between the return on an individual security and the return on any efficient portfolio containing that security.
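For reference, the textbook statement of the CAPM pricing relation makes the distinction between the two notions of risk explicit (this is the standard formula, not a reconstruction of any one founder’s derivation):

```latex
E[R_i] = R_f + \beta_i \,\bigl(E[R_m] - R_f\bigr),
\qquad
\beta_i = \frac{\mathrm{Cov}(R_i, R_m)}{\mathrm{Var}(R_m)}
```

Only the covariance of a security’s return with the market portfolio is priced; the residual, security-specific variance is assumed to be diversified away, which is precisely the point on which Sharpe and Lintner initially appeared to differ.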

CAPM might be confused about the kind of risk that matters most for asset pricing, but its punchline is clear: liquidity does not matter. The model’s central assumption is that all investors can borrow and lend at a risk-free rate, regardless of the amount borrowed or lent. In other words, liquidity provision is given, continuous, and free. By assuming free liquidity, CAPM disregards any “finance limit” for security dealers and, as a matter of logic, downplays the importance of value investing. In the CAPM, security dealers have constant and free access to funding liquidity. Therefore, there is no need for value investors to backstop asset prices when dealers reach their finance limit, a situation that would never occur in CAPM’s world.

Jack Treynor and Fischer Black partnered to emphasize value-based dealers’ importance in asset pricing. In this area, both men continued to write for the Financial Analysts Journal (FAJ): Treynor, under the pseudonym Walter Bagehot, thought through the economics of the dealer function in his “The Only Game in Town” paper, and Black responded with his visionary “Toward a Fully Automated Stock Exchange.” At the root of this lifelong dialogue lies a desire to clarify a dichotomy inside CAPM.

Fischer Black, despite his belief in CAPM, argued that “noise,” the notion that market prices deviate from fundamental value, is a reality that CAPM, built on the idea of market efficiency, must reconcile itself with. He offered the now-famous opinion that we should consider stock prices to be informative if they are between “one-half” and “twice” their fundamental values. The mathematician Benoit Mandelbrot supported such an observation. He showed that individual asset prices fluctuate more widely than a normal distribution allows. Mandelbrot used this finding, later known as the problem of “fat tails” or too many outliers, to call for “a radically new approach to the problem of price variation.”
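A minimal simulation sketch, purely illustrative and not drawn from Mandelbrot’s own analysis, shows what “too many outliers” means in practice: a heavy-tailed return distribution produces far more extreme moves than a normal distribution with the same standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Normally distributed daily returns (the benchmark assumption behind CAPM-style models)
normal_returns = rng.normal(loc=0.0, scale=0.01, size=n)

# Heavy-tailed returns: a Student-t with 3 degrees of freedom, rescaled to the
# same standard deviation, produces far more extreme outliers.
df = 3
t_returns = rng.standard_t(df, size=n) * 0.01 / np.sqrt(df / (df - 2))

for name, r in [("normal", normal_returns), ("student-t(3)", t_returns)]:
    share_beyond_4sd = np.mean(np.abs(r) > 4 * r.std())
    print(f"{name:>12}: share of |return| > 4 sd = {share_beyond_4sd:.5f}")
```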

From Money View’s perspective, both the efficient market hypothesis and Mandelbrot’s “fat tails” hypothesis capture part of the data’s empirical character. CAPM, rooted in the efficient market hypothesis, captures arbitrage trading, which is partially responsible for asset price changes. Similarly, fat tails, or wide fluctuations in asset prices, are just as permanent a feature of the data. In other words, in the world of Money View, arbitrage trading and constant deviations from fundamental value go together as a package and as a matter of theoretical logic. Arbitrageurs connect different markets and transfer market liquidity from one market to another. At the same time, despite what CAPM claims, their operation is not “risk-free”: it exposes them to certain risks, including liquidity risk. As a result, when arbitrageurs face risks that are too great to ignore, they reduce their activities and generate trade imbalances in different markets.

Security dealers who make markets in those securities are the entities that absorb these trade imbalances on their balance sheets. If this process continues, their long position eventually pushes them to their finance limit: the point at which it becomes too expensive for security dealers to finance their inventories. To compensate for the risk of reaching this point, and to deter potential sellers, dealers cut their prices dramatically. These dramatic price cuts are what show up in the data as Mandelbrot’s “fat tails.” At this point, dealers stop making the market unless value investors intervene to support the private dealing system by purchasing a large number of securities, or block trades. In doing so, they become market liquidity providers of last resort. For decades, value-based dealers used their balance sheets and capital to purchase these securities at a discount, hold them for a long time, and sell them back into the market when prices returned to fundamental value. The problem is that the value-investing business, the private dealing system’s pillar of stability, is collapsing. In recent decades, value-oriented stocks have underperformed growth stocks and the S&P 500.
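A stylized sketch of this mechanism is below. The numbers, the linear price-concession rule, and the hard finance limit are all assumptions chosen for illustration; Treynor’s model is not specified this way.

```python
from typing import Optional

# Stylized dealer pricing, illustrative only: the bid the dealer quotes falls
# as inventory builds toward an assumed finance limit, at which point quoting
# stops and a value investor would have to absorb the block.

FUNDAMENTAL_VALUE = 100.0
FINANCE_LIMIT = 1_000       # maximum inventory the dealer can fund (assumed)
PRICE_SENSITIVITY = 0.02    # price concession per unit of inventory (assumed)

def dealer_bid(inventory: float) -> Optional[float]:
    """Return the dealer's bid, or None once the finance limit is hit."""
    if inventory >= FINANCE_LIMIT:
        return None         # dealer stops making the market
    return FUNDAMENTAL_VALUE - PRICE_SENSITIVITY * inventory

for inv in (0, 250, 500, 900, 1_000):
    bid = dealer_bid(inv)
    print(f"inventory={inv:>5}: bid={'withdrawn' if bid is None else f'{bid:.2f}'}")
```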

The approach of favoring bargains — typically judged by comparing a stock price to the value of the firm’s assets — has a long history. But in the financial market, nothing lasts forever. In the equilibrium world imagined by CAPM, any deviation from fundamental value must offer an opportunity for “risk-free” profit somewhere. It might be hard to exploit, but profit-seeking arbitrageurs will always be “able” and “willing” to do so as a matter of logic. The Fischer Black–Jack Treynor dialogue, and their recognition of the dealer function, is a crucial step away from pure CAPM and reveals an important fallacy at the heart of this framework. Like any model based on the efficient market hypothesis, CAPM abstracts from the liquidity risk that both dealers and arbitrageurs face.

Money View pushes this dialogue even further and asserts that at any moment, security prices depend on dealers’ inventories and their daily access to funding liquidity, rather than on security-specific risk or market risk. If Fischer Black was a futurist, Perry Mehrling, the founder of “Money View,” lives in the “present.” For Fischer Black, CAPM would become true in the “future,” and he devoted his life to realizing this ideal future. Perry Mehrling, on the other hand, regards the existing arrangement, in which overnight funding liquidity enables the private dealing system to provide continuous market liquidity, as already an ideal system. As value investing declines, Money View scholars should start reimagining the prospects for market liquidity and asset pricing outside the sphere of the private dealing system, even though, sadly, it is a future that neither Fischer nor Perry was looking forward to.

Elham Saeidinezhad is Term Assistant Professor of Economics at Barnard College, Columbia University. Previously, Elham taught at UCLA, and served as a research economist in the International Finance and Macroeconomics research group at the Milken Institute, Santa Monica, where she investigated the post-crisis structural changes in the capital market as a result of macroprudential regulations. Before that, she was a postdoctoral fellow at INET, working closely with Prof. Perry Mehrling and studying his “Money View.” Elham obtained her Ph.D. from the University of Sheffield, UK, in empirical Macroeconomics in 2013. You may contact Elham via the Young Scholars Directory.

Can Algorithmic Market Makers Safely Replace FX Dealers as Liquidity Providers?

By Jack Krupinski


Financialization and electronification are long-term economic trends and are here to stay. It is essential to study how these trends will alter the world’s largest market: the foreign exchange (FX) market. In the past, electronification expanded access to FX markets and diversified the demand side. More recently, technological developments have started to change the FX market’s supply side, away from the traditional FX dealing banks towards principal trading firms (PTFs). Once the sole providers of liquidity in FX markets, dealers are facing increased competition from PTFs. These firms use algorithmic, high-frequency trading to leverage speed as a substitute for balance sheet capacity, which traditionally determined FX dealers’ comparative advantage. Prime brokerage services were critical in allowing such non-banks to infiltrate the once impenetrable inter-dealer market. Paradoxically, traditional dealers were the very institutions that offered prime brokerage services to PTFs, allowing them to use the dealers’ names and credit lines while accessing trading platforms. The rise of algorithmic market makers at the expense of small FX dealers is a potential threat to long-term stability in the FX market, as PTFs’ resilience to shocks is mostly untested. The PTFs’ presence in the market, and the resulting narrow spreads, could create an illusion of free liquidity during normal times. During a crisis, however, that illusion will evaporate, and the lack of enough dealers in the market could increase the price of liquidity dramatically.

In normal times, PTFs’ presence could create an “illusion of free liquidity” in the FX market. The increasing presence of algorithmic market makers would increase the supply of immediacy services (a feature of market liquidity) in the FX market and compress liquidity premia. Because liquidity providers must compete directly for market share on electronic trading platforms, the price of liquidity would be compressed to near zero. This phenomenon manifests as a narrower inside spread when the market is stable. The FX market’s electronification makes it artificially easier for buyers and sellers to search for the most attractive rates. Simultaneously, PTFs’ activity makes market-making more competitive and reduces dealers’ profitability as liquidity providers. The inside spread represents the price that buyers and sellers of liquidity face, and it also serves as the dealers’ profit incentive to make markets. As a narrower inside spread makes every transaction less profitable for market makers, traditional dealers, especially the smaller ones, must either find new revenue sources or exit the market.
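To make the mechanism concrete, the toy example below computes the inside spread from a set of hypothetical quotes; the dealer and PTF quotes are invented for illustration only. Adding a competitor with tighter quotes can only narrow the inside spread, and with it the compensation for making markets.

```python
# Illustrative only: the inside spread is the gap between the highest bid and
# the lowest offer across all liquidity providers. A new competitor with
# tighter quotes can only narrow it, squeezing market makers' compensation.

quotes = {
    "dealer_A": (99.90, 100.10),   # (bid, ask) -- hypothetical quotes
    "dealer_B": (99.92, 100.08),
}

def inside_spread(book):
    best_bid = max(bid for bid, _ in book.values())
    best_ask = min(ask for _, ask in book.values())
    return best_ask - best_bid

print(f"inside spread, dealers only: {inside_spread(quotes):.2f}")

# A PTF joins with tighter quotes, posted at high frequency.
quotes["ptf_C"] = (99.97, 100.03)
print(f"inside spread, with PTF:     {inside_spread(quotes):.2f}")
```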

During a financial crisis, such as the post-COVID-19 turmoil in the financial market, such developments can lead to extremely high and volatile prices. The increased role of PTFs in the FX market could push smaller dealers out of the market. Reduced profitability forces traditional FX dealers to adopt a new business model, but small dealers are mostly unable to make the necessary changes to remain competitive. Because a narrower inside spread reduces dealers’ compensation for providing liquidity, their willingness to carry exchange rate risk has correspondingly declined. Additionally, the post-GFC regulatory reforms reduced the balance sheet capacity of dealers by requiring larger capital buffers. Scarce balance sheet space has increased the opportunity cost of dealing.

Further, narrower inside spreads and the increased cost of dealing have encouraged FX dealers to offer prime brokerage services to leveraged institutional investors. The goal is to generate new revenue streams through fixed fees. PTFs have used prime brokerage to access the inter-dealer market and compete against small and medium dealers as liquidity providers. Order flow internalization is another strategy that large dealers have used to increase profitability. Rather than immediately hedging FX exposures in the inter-dealer market, dealers can wait for offsetting order flow from their client bases to balance their inventories—an efficient method to reduce fixed transaction costs. However, greater internalization reinforces the concentration of dealing in just a few large banks, as smaller dealers do not have the order flow volume to internalize a comparable percentage of trades.

Algorithmic traders could also intensify the riskiness of the market for FX derivatives. Compared to the small FX dealers they are replacing, algorithmic market makers face greater risk from hedging markets and exposure to volatile currencies. According to Mehrling’s FX dealer model, matched book dealers primarily use the forward market to hedge their positions in spot or swap markets and mitigate exchange rate risk. On the other hand, PTFs concentrate more on market-making activity in forward markets and use a diverse array of asset classes to hedge these exposures. Hedging across asset classes introduces more correlation risk—the likelihood of loss from a disparity between the estimated and actual correlation between two assets—than a traditional forward contract hedge. Since the provision of market liquidity relies on dealers’ ability to hedge their currency risk exposures, greater correlation risk in hedging markets is a systemic threat to the FX market’s smooth functioning. Additionally, PTFs supply more liquidity in EME currency markets, which have traditionally been illiquid and volatile compared to the major currencies. In combination with greater risk from hedging across asset classes, exposure to volatile currencies increases the probability of an adverse shock disrupting FX markets.
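The extra risk from cross-asset hedging can be illustrated with a back-of-the-envelope variance calculation. The volatilities and correlations below are assumed values, not estimates for any particular currency pair: the hedge ratio is sized on an estimated correlation, and the residual risk blows up when the realized correlation turns out to be lower.

```python
import math

# Illustrative only: residual risk of a hedge when the correlation used to
# size it differs from the correlation that is actually realized.

sigma_x = 0.02   # volatility of the FX exposure (assumed)
sigma_y = 0.02   # volatility of the hedging instrument (assumed)

def residual_vol(rho_assumed, rho_realized):
    """Volatility of the hedged position when the hedge ratio is sized on rho_assumed."""
    h = rho_assumed * sigma_x / sigma_y            # minimum-variance hedge ratio
    var = sigma_x**2 + h**2 * sigma_y**2 - 2 * h * rho_realized * sigma_x * sigma_y
    return math.sqrt(max(var, 0.0))

# Forward-style hedge: correlation is close to one and stays there.
print(f"forward hedge:     {residual_vol(0.99, 0.99):.4f}")
# Cross-asset hedge: sized on a 0.9 correlation, but only 0.6 is realized.
print(f"cross-asset hedge: {residual_vol(0.90, 0.60):.4f}")
```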

While correlation risk and exposure to volatile currencies have increased, the new FX market makers lack the safety buffers that help traditional FX dealers absorb shocks. Because the PTF market-making model uses high transaction speed in place of balance sheet capacity, there is little buffer to absorb losses from an adverse exchange rate movement. Hence, algorithmic market makers are even more inclined than traditional dealers to pursue a balanced inventory. Since market liquidity, particularly during times of significant imbalances in supply and demand, hinges on market makers’ willingness and ability to take inventory risks, a lack of risk tolerance among PTFs harms market robustness. Moreover, the algorithms that govern PTF market-making tend to withdraw from markets altogether after aggressively offloading their positions in the face of uncertainty. This destabilizing feature of algorithmic trading catalyzed the 2010 Flash Crash in the stock market. Although the Flash Crash lasted only 30 minutes, flighty algorithms’ tendency to prematurely withdraw liquidity has the potential to spur more enduring market dislocations.

The weakening inter-dealer market will compound any dislocations that may occur as a result of liquidity withdrawal by PTFs. When changing fundamentals drive one-sided order flow, dealers will not internalize trades, and they will have to mitigate their exposure in the inter-dealer FX market. Increased dealer concentration may reduce market-making capacity during these periods of stress, as inventory risks become more challenging to redistribute in a sparser inter-dealer market. During crisis times, the absence of small and medium dealers will disrupt the price discovery process. If dealers cannot appropriately price and transfer risks amongst themselves, then impaired market liquidity will persist and affect deficit agents’ ability to meet their FX liabilities.

For many years, the FX market’s foundation has been built upon a competitive and deep inter-dealer market. The current phase of electronification and financialization is pressuring this long-standing system. The inter-dealer market is declining in volume due to dealer consolidation and competition from non-bank liquidity providers. Because the new market makers lack the balance sheet capacity and regulatory constraints of traditional FX dealers, their behavior in crisis times is less predictable. Moreover, the rise of non-bank market makers like PTFs has come at the expense of small and medium-sized FX dealers. Such a development undermines the economics of dealers’ function and reduces dealers’ ability to normalize the market should algorithmic traders withdraw liquidity. As the FX market is further financialized and trading shifts to more volatile EME currencies, risks must be appropriately priced and transferred. The new market makers must be up to the task.

Jack Krupinski is currently a fourth-year student at UCLA, majoring in Mathematics/Economics with a minor in statistics. He pursues an actuarial associateship and has passed the first two actuarial exams (Probability and Financial Mathematics). Jack is working to develop a statistical understanding of risk, which can be applied in an actuarial and research role. Jack’s economic research interests involve using “Money View” and empirical methods to analyze international finance and monetary policy.

Jack is currently working as a research assistant for Professor Roger Farmer in the economics department at UCLA and serves as a TA for the rerun of Prof. Mehrling’s Money and Banking Course on the IVY2.0 platform. In the past, he has co-authored blog posts about central bank digital currency and FX derivatives markets with Professor Saeidinezhad. Jack hopes to attend graduate school after receiving his UCLA degree in Spring 2021. Jack is a member of the club tennis team at UCLA, and he worked as a tennis instructor for four years before assuming his current role as a research assistant. His other hobbies include hiking, kayaking, basketball, reading, and baking.

Are the Banks Taking Off their Market-Making Hat to Become Brokers?

“A broker is foolish if he offers a price when there is nothing on the offer side good to the guy on the phone who wants to buy. We may have an offering, but we say none.” –Marcia Stigum


Before the slow but eventual repeal of Glass-Steagall in 1999, U.S. commercial banks were institutions whose mission was to accept deposits, make loans, and trade exempt securities. In other words, banks were Cecchetti’s “financial intermediaries.” The repeal of Glass-Steagall allowed banks to enter the securities-dealing arena so long as they became financial holding companies. More precisely, the Act permitted banks, securities firms, and insurance companies to affiliate with investment bankers. Investment banks, also called non-bank dealers, were allowed to use their balance sheets to trade and underwrite both exempt and non-exempt securities and to make markets in both capital market and money market instruments. Becoming a dealer brought significant changes to the industry. First, unlike traditional banks, investment banks, or merchant banks as the British call them, can engage in activities that require considerably less capital. Second, the profit comes from quoting different bid-ask prices and underwriting new securities, rather than from earning fees.

However, the post-COVID-19 crisis has accelerated an existing trend in the banking industry. Recent transactions highlight a shift in the power balance away from the investment banking arm and market-making operations. In the primary markets, banks are expanding their brokerage role to earn fees. In the secondary market, banks have started to transform their businesses and diversify away from market-making activities into fee-based brokerage lines such as cash management, credit cards, and retail savings accounts. Two of the underlying reasons behind this shift are “balance sheet constraints” and declining credit costs, which reduced banks’ profits as dealers and improved their fee-based businesses. From the “Money View” perspective, this shift in banks’ activities away from market-making towards brokerage has repercussions. First, it adversely affects the state of “liquidity.” Second, it creates a less democratic financial market, as it excludes smaller agents from the benefits of the financial market. Finally, it disrupts payment flows, given the credit character of the payments system.

When a bank acts as a broker, its income depends on fee-based businesses such as monthly account fees and fees for late credit card payments, unauthorized overdrafts, mergers, and issuing IPOs. These fees are independent of the level of interest rates. A broker puts together potential buyers and sellers from his sheet, much in the way that real estate brokers do with their listing sheets and client listings. Brokers keep lists of the prices bid by potential buyers and offered by potential sellers, and they look for matches. Goldman, Merrill, and Lehman, all big dealers in commercial paper, wear their agent hat almost all the time when they sell commercial paper. Dealers, by contrast, take positions themselves by expanding their balance sheets. They earn the spread between bid and ask prices (or interest rates). When a bank puts on its hat as a dealer (principal), the dealer is buying for and selling from its own position. Put another way, in a trade, the dealer is the customer’s counterparty, not its agent.
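As a sketch of the difference, the snippet below mimics the broker’s side of the business: it only matches a list of bids against a list of offers and never takes a position. The names and prices are hypothetical.

```python
# Illustrative only: a broker keeps lists of bids and offers, looks for a
# match, and earns a fee; it never takes the securities onto its own books.

bids = [("buyer_1", 99.95), ("buyer_2", 99.90)]      # hypothetical client bids
offers = [("seller_1", 99.97), ("seller_2", 99.93)]  # hypothetical client offers

def find_match(bids, offers):
    """Return the first crossing bid/offer pair, or None if nothing crosses."""
    for buyer, bid in sorted(bids, key=lambda q: -q[1]):
        for seller, ask in sorted(offers, key=lambda q: q[1]):
            if bid >= ask:
                return buyer, seller, round((bid + ask) / 2, 4)
    return None

print(find_match(bids, offers))   # ('buyer_1', 'seller_2', 99.94)

# A dealer, by contrast, would quote its own bid and ask and carry the
# resulting position on its balance sheet until an offsetting trade arrives.
```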

Moving towards brokerage activity has adverse effects on liquidity. Banks are maintaining their dealer role in the primary market while abandoning the secondary market. In the primary market, part of the banks’ role as market makers involves underwriting new issues. In this market, banks act as one-sided dealers: because the bank only sells the newly issued securities, it does not provide liquidity. In the secondary market, however, banks act as two-sided dealers and supply liquidity. Dealer banks supply funding liquidity in the short-term money market and market liquidity in the long-term capital market. The mission is to earn spreads by constantly quoting bids and offers at which they are willing to buy and sell. Some of these quotes are to other dealers; in many sectors of the money market, there is an inside market among dealers.

The money market, as opposed to the bond market, is a wholesale market for high-quality, short-term debt instruments, or IOUs. In the money market, dealing banks make markets in many money market instruments. Money market instruments are credit elements that lend elasticity to the payment system. Deficit agents, who do not have adequate cash at the moment, have to borrow from the money market to make the payment. Money market dealers expand the elasticity daily and enable the deficit agents to make payments to surplus agents. Given the credit element in the payment, it is not stretching the truth to say that these short-term credit instruments, not the reserves, are the actual ultimate means of payment. Money market dealers resolve the problem of managing payments by enabling deficit agents to make payments before they receive payments.

Further, when dealers trade, they usually do not even know who their counterparty is. If banks become brokers, however, they need to “fine-tune” quotes because it matters who is selling and who is buying. Brokers prefer to trade with big investors and reduce their ties with smaller businesses. This is what Stigum called “line problems.” She explains that if, for example, Citi London offered to sell 6-month money at the bid rate quoted by a broker, and the bidding bank then told the broker that its bid was off and that it had forgotten to call, the broker would be committed to completing the bid by finding Citi a buyer at that price, or by selling Citi’s money at a lower rate and paying Citi a difference equal to the dollar amount Citi would lose by selling at that rate. Since brokers operate on thin margins, a broker would not be around long if she often got “stuffed.” Good brokers take care to avoid errors by choosing their counterparties carefully.

After the COVID-19 pandemic, falling interest rates, lower overall demand for credit, and regulatory requirements that limit the use of balance sheets have reduced banks’ profits as dealers. In the meantime, the banks’ fee-based businesses, which include credit card late fees, public offerings, and mergers, have become more attractive. The point to emphasize here is that the brokerage business does not include providing liquidity and making markets. Dealer banks, on the other hand, generate revenues by supplying funding and market liquidity in the money and capital markets. Further, brokers tend to trade only with large corporations, while dealers’ decisions to supply liquidity usually do not depend on who their counterparty is. Finally, the payment system is much closer to an ideal credit payment system than to an ideal money payment system. In such a system, the liquidity of money market instruments is the key to a well-functioning payment system. Modern banks may wear one of two hats, agent (broker) or principal (dealer), in dealing with financial market instruments. The problem is that only one of these hats allows banks to make markets, facilitate the payment system, and democratize access to the credit market.


The Paradox of the Yield Curve: Why Is the Fed Willing to Flatten the Curve but Not Control It?

“From long experience, Fed technicians knew that the Fed could not control money supply with the precision envisioned in textbooks.” –Marcia Stigum


By Elham Saeidinezhad – In the last decade, monetary policy has wrestled with the problem of low inflation and has become a tale of three cities: the interest rate, asset purchases, and the yield curve. The fight to reach the Fed’s inflation target started with lowering the overnight federal funds rate to a historically low level. The so-called “zero lower bound” restriction pushed the Fed towards alternative policy tools, including large-scale purchases of financial assets (“quantitative and qualitative easing”). This policy had several elements: first, a commitment to massive asset purchases that would increase the monetary base; second, a promise to lengthen the maturity of the central bank’s holdings and flatten the yield curve. In combination with low inflation (actual and expected), however, such actions have translated into persistently low real interest rates at both the long and short ends of the yield curve and, at times, an inversion of the yield curve. The “whatever it takes” large-scale asset purchasing programs of central banks were pushing long-term yields into clearly negative territory. Outside the U.S., and especially in Japan, central banks stepped up their fight against deflation by adopting a new policy called yield curve control, which explicitly puts a cap on long-term rates. Even though the Fed has so far resisted following in the Bank of Japan’s footsteps, yield curve control is the first move towards building the world that “Money View” re-imagines for central banking. Yield curve control enables the Fed to assume its “dealer of last resort” role and increase its leverage over the yield curve, a private dealer territory, without creating repeated dislocations in the private credit market.

To understand this point, let’s start by translating monetary policy’s evolution into the language of Money View. In traditional monetary policy, the Fed uses its control of reserves (at the top of the hierarchy of money) to affect credit expansion (at the bottom of the hierarchy). It also controls the fed funds rate (at the short end of the term structure) in an attempt to influence the bond rate of interest (at the long end). When credit is growing too rapidly, the Fed raises the federal funds target to impose discipline on the financial market. In normal times, this would immediately lower money market dealers’ profits. This kind of dealer borrows in the overnight funding market in order to lend in the term (i.e., three-month) market. The goal is to earn the liquidity spread.

After the Fed implements contractionary monetary policy, money market dealers compensate for the higher financing cost by raising the term interest rate by the full amount (and perhaps a bit more to compensate for anticipated future tightening as well). This term rate is the funding cost for another kind of dealer, the security dealer. Security dealers borrow from the term market (the repo market) to lend to the long-term capital market. Such operations involve the purchase of securities, which requires financing. A higher funding cost implies that security dealers are willing to hold their existing security inventories only at a lower price, which raises long-term yields. This chain of events sketches a monetary policy transmission that happens through the yield curve. The point to emphasize here is that in determining the yield curve, the private credit market, not the Fed, sets rates and prices. The Fed has only some leverage over the system.
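The chain can be put into a stylized calculation. The spreads, coupon, and maturity below are assumed numbers, and the linear pass-through is a deliberate simplification; the point is only the direction of the mechanism: a higher overnight rate raises the term funding rate, which lowers the price at which security dealers will hold inventory.

```python
# Stylized pass-through, illustrative only: a hike in the overnight rate raises
# the term funding rate money market dealers charge, which raises security
# dealers' carry cost and lowers the price at which they will hold inventory.
# All spreads, the coupon, and the maturity are assumed numbers.

def term_rate(overnight, liquidity_spread=0.25, anticipation=0.10):
    """Money market dealer's term lending rate, in percent."""
    return overnight + liquidity_spread + anticipation

def inventory_price(term_funding, coupon=3.0, maturity=10, term_premium=0.50):
    """Price of a par-100 coupon bond at the security dealer's required yield."""
    y = (term_funding + term_premium) / 100
    return sum(coupon / (1 + y) ** t for t in range(1, maturity + 1)) + 100 / (1 + y) ** maturity

for overnight in (0.25, 1.25):
    t = term_rate(overnight)
    print(f"overnight {overnight:.2f}% -> term {t:.2f}% -> bond price {inventory_price(t):.2f}")
```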

After the GFC, as rates hit the zero lower bound, the Fed started to lose its leverage. In a very low interest rate environment, preferences shift in favor of money and against securities. One way to put it is that surplus agents become reluctant to “delay settlement” and cut their credit market investment. They do not want promises to pay (i.e., to hold securities); they want money instead. In this environment, to keep making markets and providing liquidity, money market and security dealers, who borrow to finance their short-term and long-term inventories, respectively, must be able to buy time. During this extended period, prices are pushed away from equilibrium. When market makers face this kind of trouble, they turn to the banks for refinancing. After the GFC, however, very low interest rates meant that the banks themselves ran into trouble.

In a normal crisis, as the dealer system absorbs the imbalances caused by the shift in preferences onto its balance sheet, the Fed tries to do the same thing and take the problem off the balance sheet of the banking system. The Fed usually does so by expanding its own balance sheet. The Fed’s willingness to lend to the banks at a rate lower than they would lend to each other makes it possible for the banks to lend to the dealers at a rate lower than they would otherwise charge. Putting a ceiling on the money rate of interest thus indirectly puts a floor on asset prices. In a severe crisis, however, this transmission usually breaks down. That is why, after the GFC, the Fed used its leverage to put a floor on asset prices directly by buying them, rather than indirectly by helping the banks to finance dealers’ purchases.

The fundamental question is whether the Fed has any leverage over the private dealing system when interest rates are historically low. The Fed’s advantage is that it creates reserves, so there can be no short squeeze on the Fed. When the Fed helps the banks, it expands reserves, and hence the money supply grows. We have seen that the market makers are long securities and short cash. What the Fed does is backstop those short positions by shorting cash itself. However, the Fed’s leverage over the private dealer system is asymmetric. The Fed’s magic mostly works when the Fed decides to increase elasticity in the credit market; it has lost its alchemy for creating discipline in the market when needed. When rates are already very low, credit contraction happens neither quickly nor easily if the Fed increases rates by a few basis points. Indeed, only if the Fed raises rates high enough can it get some leverage over this system and cause a credit contraction. Short of an aggressive rate hike, the dealer system increases the spread slightly, but not enough to change the quantity of credit supplied. In other words, the Fed’s actions do not translate automatically into a chain of credit contraction, and the Fed does not have control over the yield curve. The Fed knows that, and that is why it has entered into large-scale asset purchasing programs. But it is tactful yet minimal purchases of long-term assets, rather than massive ones, that can restore the Fed’s control over the yield curve. Otherwise, the Fed’s actions could push long-term rates into negative territory and lead to a constant inversion of the yield curve.

Yield curve control aims at controlling interest rates along some portion of the yield curve. Its design combines elements of interest rate policy and of the asset purchasing program. Like interest rate policy, it targets short-term interest rates; like the asset purchasing program, it aims at controlling the long-term interest rate. Mainly, however, it incorporates the essential elements of a “channel” or “corridor” system. The policy targets longer-term rates directly by imposing interest rate caps on particular maturities. As in a “corridor” system, the target for the long-term yield would typically be set within a bound created by a target price that establishes a floor for the long-term assets. Because bond prices and yields are inversely related, this price floor also implies a yield ceiling for the targeted maturities. If bond prices (yields) of the targeted maturities remain above (below) the floor (ceiling), the central bank does nothing. If prices fall (yields rise) through the floor (ceiling), however, the central bank buys targeted-maturity bonds, increasing demand and the bonds’ price. This approach requires the central bank to use this powerful tool tactfully rather than massively: it intervenes to purchase assets only when interest rates at the targeted maturities are higher than the target rates. Such a strategy reduces the central bank’s footprint in the capital market and prevents the yield curve inversions that have become a typical episode since the GFC.
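A minimal sketch of such a reaction rule is below, assuming a single targeted maturity and a hypothetical 0.25 percent cap; it is not a description of the Bank of Japan’s actual operating procedure.

```python
# A minimal sketch of a yield-curve-control reaction rule, with a single
# targeted maturity and a hypothetical cap (assumed numbers).

YIELD_CAP = 0.25   # percent, cap on the targeted maturity

def ycc_action(market_yield):
    if market_yield <= YIELD_CAP:   # price at or above the implied floor
        return "do nothing"
    return "buy targeted-maturity bonds until the yield falls back to the cap"

for y in (0.10, 0.25, 0.40):
    print(f"targeted yield {y:.2f}%: {ycc_action(y)}")
```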

The “paradox of the yield curve” is that the Fed’s hesitation to adopt yield curve control for longer-term rates contradicts its own reasoning behind the introduction of a corridor framework to control the overnight rate. Once the FOMC determines a target interest rate, the Fed sets the discount rate above the target and the interest-on-reserves rate below it. These two rates form a “corridor” that contains the market interest rate; the target rate is often (but not always) set in the middle of this corridor. Open market operations are then used as needed to change the supply of reserve balances so that the market interest rate stays as close as possible to the target. A corridor operating framework can help a central bank achieve a target policy rate in an environment in which reserves are anything but scarce and the central bank has used its balance sheet as a policy instrument independent of the policy interest rate.
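The corridor arithmetic itself is simple. The rates below are hypothetical, and the clamping rule is a deliberate simplification of how arbitrage by banks keeps the market rate inside the band.

```python
# Illustrative corridor arithmetic with hypothetical rates: the discount rate
# caps the market rate (banks can always borrow from the Fed) and interest on
# reserves floors it (banks will not lend below what idle reserves earn).

DISCOUNT_RATE = 2.50   # ceiling, percent
IOR_RATE = 1.50        # floor, percent
TARGET_RATE = 2.00     # often, but not always, the midpoint

def effective_rate(unconstrained_market_rate):
    """Market rate clamped inside the corridor."""
    return min(max(unconstrained_market_rate, IOR_RATE), DISCOUNT_RATE)

for r in (1.10, 2.00, 3.10):
    print(f"market pressure at {r:.2f}% -> trades at {effective_rate(r):.2f}%")
```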

In the world of Money View, the corridor system has the advantage of enabling the Fed to act as a value-based dealer, or as Mehrling put it, a “dealer of last resort,” without massively purchasing assets and constantly distorting asset prices. The value-based dealer’s primary role is to put a ceiling and a floor on asset prices when the dealer system has already reached its finance limit. Such a system can effectively stabilize the rate near its target. Stigum made clear that standard economic theory has no perfect answer to how the Fed gets leverage over the real economy. The question is why the Fed is willing to embrace frameworks that flatten the yield curve but is hesitant to adopt yield curve control, which explicitly puts a cap on long-term rates.


Is the New Chapter for the Monetary Policy Framework Too Old to Succeed?

“Money will not manage itself.” –Walter Bagehot


By Elham Saeidinezhad – At this year’s Jackson Hole meeting, the Fed announced a formal shift away from its previously articulated longer-run inflation objective of 2 percent towards achieving inflation that averages 2 percent over time. The new accord aims at addressing the shortfalls created by a low “natural rate” and persistently low inflation. More or less, all academic debates in that meeting were organized as arguments about the appropriate quantitative settings for a Taylor rule. The rule’s underlying idea is that the market tends to set the nominal interest rate equal to the natural rate plus expected inflation. The Fed’s role is to stabilize long-run inflation by changing the short-term federal funds rate whenever inflation deviates from the target. The Fed believes that the recent secular decline in natural rates relative to the historical average has constrained the federal funds rate. The expectation is that by tolerating a temporary overshoot of the longer-run inflation objective, following periods when inflation has been persistently below 2 percent, the Fed can keep inflation and inflation expectations centered on 2 percent, address the framework’s constant failures, and restore the magic of central banking. However, the enduring problem with Taylor rule-based monetary policy frameworks, including this recent one, is that they ask the Fed to overlook lasting trends in the credit market and to focus only on developments in the real economy, such as inflation or past inflation deviations, when setting short-term interest rates. Rectifying such blind spots is what Money View scholars were hoping for when the Fed announced its intention to review the monetary policy framework.
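For readers who want the benchmark in front of them, a generic Taylor rule takes the form below; the coefficients are left symbolic, with Taylor’s original 1993 formulation setting both to 0.5 and the inflation target to 2 percent.

```latex
i_t = r^{*} + \pi_t + a\,(\pi_t - \pi^{*}) + b\,(y_t - y^{*})
```

Here \(i_t\) is the policy rate, \(r^{*}\) the natural rate, \(\pi_t\) inflation, \(\pi^{*}\) the inflation target, and \(y_t - y^{*}\) the output gap. The point in the text is that no term in this expression refers to credit market conditions.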

The logic behind the new framework, known as the average inflation targeting strategy, is that undershooting inflation makes achieving the target unlikely in the future, as it pushes inflation expectations below the target. This being the case, after a long period of inflation undershooting the target, the Fed should act to undo the undershooting by overshooting the target for some time. The Fed sold forecast (or average) targeting to the public as a better way of accomplishing its mandate than the alternative strategies because the new framework makes the Fed more “history-dependent.” Translated into Money View language, however, the new inflation-targeting approach only delays the process of imposing excessive discipline on the money market when the consumer price index rises faster than the inflation target, and of providing excessive elasticity when prices grow more slowly than the inflation target.
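A minimal sketch of the makeup logic is below. The rolling-window formula and the five-year example are assumptions chosen for illustration; the Fed has not committed to any specific averaging window or formula.

```python
TARGET = 2.0   # percent, the longer-run inflation objective

def makeup_inflation(past_inflation, target=TARGET):
    """Inflation next period that would bring the whole window's average back to target."""
    window = len(past_inflation) + 1
    return window * target - sum(past_inflation)

# After five years of 1.5 percent inflation, a 2 percent *average* requires a
# temporary overshoot, not merely a return to 2 percent.
print(makeup_inflation([1.5] * 5))   # -> 4.5
```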

From the Money View perspective, the idea that interest rate setting should not take account of private credit market trends will undermine central banking’s power in the future, as it has done in the past. The problem we face is not that the Fed has failed to follow an appropriate version of the Taylor rule. Rather, and most critically, these policies tend to abstract from the plumbing behind the wall, namely the payment system, by disregarding the credit market. Such a bias may not have been significant in the old days, when the payment system was mostly a reserve-based system. In that old world, even though it was mostly involuntary, the Fed used to manage the payment system through its daily interventions in the market for reserves. In the modern financial system, however, the payment system is a credit system, and its quality depends on the level of elasticity and discipline in the private credit market.

The long dominance of the economics and finance views means that modern policymakers have lost sight of the Fed’s historical mission to manage the balance between discipline and elasticity in the payment system. Instead of monitoring the balance between discipline and elasticity in the credit market, the modern Fed attempts to keep the bank rate of interest in line with an ideal “natural rate” of interest, introduced by Knut Wicksell. In the Wicksellian world, in contrast to the Money View, securing the continuous flow of credit through the payment system is not part of the Fed’s mandate. Instead, the Fed’s primary function is to ensure it does not choose a “money rate” of interest different from the “natural rate” of interest (the profit rate on capital). If the money rate is lower, the differential creates an incentive for new capital investment, and the new spending tends to cause inflation. If prices are rising, the money rate is too low and should be increased; if prices are falling, the money rate is too high and should be decreased. To sum up, Wicksellians do not consider private credit to be intrinsically unstable; inflation, instead, is viewed as the source of inherent instability. Further, they see no systemic relation between the payment system and the credit market, as the payment system simply reflects the level of transactions in the real economy.

The clash between the standard economic view and the Money View is a battle between two different world views. Wicksell’s academic way of looking at the world had clear implications for monetary policy: set the money rate equal to the natural rate and then stand back and let markets work. Unfortunately, the natural rate is not observable, but missed payments and higher costs of borrowing are. From the Money View perspective, the Fed should use its alchemy to strike a balance between elasticity and discipline in the credit market to ensure a continuously functioning payment system. The Money View barometer for the credit market cycle is asset prices, another observable variable. Since a crash can occur in commodities, financial assets, and even real assets, the Money View does not tell us which assets to watch. It does emphasize, however, that assets that are not supported by a dealer system (such as residential housing) are more vulnerable to changes in credit conditions. These assets are the most likely to become overvalued on the upside and to suffer the most extensive correction on the downside. A central bank that understands its role as setting interest rates to meet inflation targets tends to exacerbate this natural tendency toward instability. Such policymakers could create unnaturally excessive discipline when credit conditions are already tight, or vice versa, while looking for a natural rate of interest.


Is our Monetary System as Systemic and International as Coronavirus?

By Elham Saeidinezhad | The coronavirus crisis has sparked different policy responses from different countries. The common thread among these reactions is that states are putting globalization on pause. Yet the re-establishment of central bank swap lines is making “money,” chiefly Eurodollars, the first thing to become more global in the wake of the coronavirus outbreak. This is not an unexpected phenomenon for those of us armed with insights from Perry Mehrling’s “Money View” framework. The fact that the monetary system is inherently international explains why the Fed reinstalled its standing U.S. dollar liquidity swap line arrangements with five other central banks just after it lowered its domestic federal funds target to zero percent. However, the crisis also forces us to see global dollar funding through a lens closer to home: the Eurodollar market, at its core, is a domestic macro-financial linkage. In other words, its breakdown is a source of systemic risk within communities, as it disrupts the two-way connection between the real economy and the financial sector. This perspective clarifies the Fed’s reactions to the crisis at hand. It also helps us understand the recent debate in the economics profession about the future of central bank tools.

The Great Financial Crisis of 2008-09 confirmed the vital importance of advancing our understanding of macro-financial linkages. The coronavirus crisis is testing this understanding on a global scale. Most of the literature highlights the impact of sharp fluctuations in long-term fundamentals, such as asset prices and capital flows, on the financial positions of firms and the economy. In doing so, economists underestimate the effects of disturbances in the Eurodollar market, which provides short-term dollar funding globally, on real economic activities such as trade. These miscalculations, which flow from economists’ tendency to treat money as a veil over the real economy, could be costly. Foreign banks play a significant role in the wholesale Eurodollar market, raising US dollar financing for their clients. These clients, usually multinational corporations, are part of a global supply chain that covers different activities, from receiving an order to producing the final goods and services. Depending on their financial positions, these firms either wish to hold large dollar balances or to receive dollar-denominated loans. The deficit firms use the dollar funding to make payments for their purchases. The surplus firms, on the other hand, expect to receive payments in dollars after selling their products. The interconnectedness between the payment system and global supply chains causes the Eurodollar market to act as a bridge between the real economy and the financial sector.

The coronavirus outbreak is putting a strain on this link, both domestically and globally: it disrupts the supply chain and forces every firm along the chain to become a deficit agent in the process. The supply chain moves products or services from one supplier to another and is essentially the sum of all firms’ sales. These sales (revenues) are, in effect, a measure of payments, the majority of which occur in the Eurodollar market. A sharp shock to sales, as a result of the outbreak, reduces firms’ ability to make payments. When output is not being shipped, the producer of final goods in China does not have dollar funding to pay the suppliers of intermediate products. As a result, firms in other countries do not have dollars either. The trauma that the coronavirus crisis injects into manufacturing and other industries thus leads to missed payments internationally. Missed payments will turn more firms into deficit agents. This includes the banks, which are lower down in the hierarchy, and the central banks, which are responsible for relaxing the survival constraint for the banking system. By focusing on the payments system and the Eurodollar market, we were able to see the “survival constraint” in action.

The question for monetary policy is how far the central bank decides to relax that survival constraint by lowering the bank rate. This is why central banks, including the Fed, are reducing interest rates to zero percent. However, the ability to relax the survival constraint for banks further down in the hierarchy also depends on the ability of foreign central banks to inject dollar funding into their financial systems. The Fed has therefore re-established the dollar swap line with five other major central banks. The swap lines are standing facilities and serve as a vital liquidity backstop to ease strains in global funding markets. The point to hold on to here is that the U.S. central bank sits at a level in the hierarchy above other central banks.

Central banks’ main concern is missed payments of U.S. dollars, as they can deal with missed payments in local currency efficiently. In normal circumstances, the fact that non-U.S. central banks hold foreign exchange reserves enables them to intervene in the market seamlessly if private FX dealers are unable to do so. In these periods, customer-led demand causes some banks to have a natural surplus position (more dollar deposits than loans) and other banks to have an inherent deficit position (more dollar loans than deposits). FX dealers connect the deficit banks with the surplus banks by absorbing the imbalances into their balance sheets. Financial globalization has enabled each FX dealer to resolve the imbalance by doing business with some U.S. banks, but it seems more natural all around for them to do business with each other. During this crisis, however, even U.S. banks have started to feel the liquidity crunch due to the negative impacts of the outbreak on financial conditions. When U.S. banks pull back from market-making in the Eurodollar market, there will be a shortage of dollar funding globally. Traditionally, in these circumstances, foreign central banks assume the role of the lender of last resort and lend dollars to both banks and non-banks in their jurisdiction. However, the severity of the coronavirus crisis is creating a growing risk that such intermediation will fracture, as speculators and investors alike have become uncertain of the size of foreign central banks’ dollar reserve holdings.

To address these concerns, the Fed has re-established swap lines to lend dollars to other central banks, which then lend them to banks. The swap lines were originally designed to meet the funding needs of banks during 2008. However, these swap lines might be inadequate to ease the tension in the market. The problem is that the geographic reach of the swap lines is too narrow. The Fed has swap lines only with the Bank of Canada, the Bank of England, the Bank of Japan, the European Central Bank, and the Swiss National Bank. The reason is that the 2008-09 financial crisis severely affected banks in these particular jurisdictions. But the breadth of the current crisis is more extensive, as every country along the supply chain is struggling to get dollars. In other words, the Fed’s dollar swap lines should become more global, and the international hierarchy needs to flatten.

To ease the pressure of missed payments internationally, and to prevent a systemic risk outbreak domestically, the Fed and its five major central bank partners have coordinated action to enhance the provision of liquidity via the standing U.S. dollar liquidity swap line arrangements. These tools help to mitigate the effects of strains on the supply chain, both domestically and abroad. Such temporary agreements have been part of central banks’ set of monetary policy instruments for decades. The main lesson from the coronavirus outbreak for central bank watchers is that swap lines and central bank collaboration are here to stay — indeed, they should become more expansive than before. These operations are becoming a permanent tool of monetary policy as financial stability becomes a more natural mandate of central banks. As Zoltan Pozsar has recently shown, the supply chain of goods and services is the reverse of the dollar funding payment system. Central banks’ collaboration prevents this hybridity from becoming a source of systemic risk, both domestically and internationally.


This piece was originally part of the “Special Edition Roundtable: Money in the Time of Coronavirus” on the JustMoney.org platform.

Elham Saeidinezhad is a lecturer in Economics at UCLA. Before joining the Economics Department at UCLA, she was a research economist in the International Finance and Macroeconomics research group at the Milken Institute, Santa Monica, where she investigated the post-crisis structural changes in the capital market as a result of macroprudential regulations. Before that, she was a postdoctoral fellow at INET, working closely with Prof. Perry Mehrling and studying his “Money View.” Elham obtained her Ph.D. from the University of Sheffield, UK, in empirical Macroeconomics in 2013. You may contact Elham via the Young Scholars Directory.

Austerity in the UK: Senseless and Cruel

As the UK recorded its first current budget surplus in 16 years, the IMF was quick to use this development as sufficient proof to declare the austerity measures imposed by the UK government in the aftermath of the financial crisis a success. To the IMF, the UK’s case of eliminating its budget deficit while avoiding a prolonged recession, and faring better than other European countries, supports the case for further austerity.

However, this overly simplistic interpretation disregards the long-term structural problems that the UK economy is facing, does not acknowledge the active role played by the Bank of England (BoE) in mitigating the crisis, nor does it attempt to understand what is behind the growing voter discontent that led to the Brexit vote. Furthermore, given that the austerity measures have been linked to 120,000 deaths, it seems rather odd to celebrate this approach.

While at first glance one might think the UK economy is in pretty good shape, with low unemployment and continuous growth for the last eight and a half years, a closer look at the data reveals a less optimistic picture. As outlined in this report from the Center for Economic and Policy Research (CEPR), which I co-authored with Mark Weisbrot, the UK economy is facing some serious challenges.

The last decade has failed to deliver any improvement in living standards for most households, with the real median income of working-age households barely returning to its pre-recession level this year. Retired households have fared somewhat better, yet they remain under threat as a target for further spending cuts. While increased employment has meant that household incomes reached their pre-recession levels, real hourly wages have not. To make matters worse, a widely cited decline in the gender pay gap is due to a larger drop in male wages rather than to rising female wages.

One of the most striking and unusual aspects of the recovery is that poverty, by some measures, has actually increased for people of working age. After accounting for housing costs, the percentage of people aged 16–64 with income below the poverty threshold has risen to 21 percent in 2015/16, from 20 percent in 2006/07.

In terms of productivity growth, which is the engine of rising living standards, the past decade has been the worst for the UK since the 18th century. The slowdown in productivity growth means that GDP per person is about 20 percent lower than it would have been if the prior growth trend continued. The problem of slow productivity growth is directly linked to low investment levels in the UK, which has the lowest rate of gross capital formation amongst G7 countries.
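To see how a shortfall of that size accumulates, a rough back-of-the-envelope calculation helps (the annual growth gap used here is purely illustrative and is not a figure from the CEPR report): a persistent shortfall of roughly 2.3 percentage points per year in per-capita growth, compounded over a decade, leaves output per person about 20 percent below its trend path.

```latex
% Purely illustrative: the annual growth gap (about 2.3 pp) is chosen
% so that the level gap after ten years matches the ~20 percent figure.
\[
\frac{Y^{\text{actual}}_{t+10}}{Y^{\text{trend}}_{t+10}}
  = \left(\frac{1+g_{\text{actual}}}{1+g_{\text{trend}}}\right)^{10}
  \approx \left(\frac{1}{1.0226}\right)^{10} \approx 0.80,
\]
% i.e., GDP per person ends up roughly 20 percent below where the
% pre-crisis trend would have put it.
```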

The UK currently finds itself in an economy where demand is lagging and the prospect of Brexit brings significant uncertainty about the future. This is an environment that is unlikely to attract major private investment, especially in the areas where it is most needed. There is a clear need for public investment and spending that can grow the economy and improve living standards. More austerity might seem to reduce the government’s deficit now, but its price will ultimately be paid through lost output and slower growth.

The negative feedback from the fiscal tightening was undoubtedly mitigated by the expansionary monetary policy conducted by the BoE, which also explains why the UK was able to withstand austerity without deepening its recession and fared better than countries in the eurozone. The BoE began lowering its Bank Rate in October 2008, eventually bringing it down to 0.5 percent. The rate was cut further, to 0.25 percent, in the aftermath of the Brexit vote, only to be raised back to 0.5 percent at the end of 2017.

The most important step taken by the BoE was its Quantitative Easing program, launched in March 2009, to buy bonds and keep long-term interest rates in the UK low. The European Central Bank (ECB) only took similar steps for euro-denominated sovereign bonds in July 2012.

While the IMF portrays the UK’s net public debt-to-GDP ratio as unsustainably high (it was 80.5 percent in 2017), this assessment is largely arbitrary, especially given the UK’s specific circumstances. The burden of the public debt is best measured by the interest payments on the debt relative to the size of the economy, since the principal is generally simply rolled over. At present, net interest payments on the debt are about 1.8 percent of GDP, significantly lower than in the 1980s, when interest payments were generally above 3 percent of GDP annually, and in the 1990s, when they were between 2 and 3 percent per year.
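A quick calculation using the figures cited above (these are the numbers from the text, not new data) makes the point concrete: an interest bill of 1.8 percent of GDP on a net debt stock of 80.5 percent of GDP implies an average interest rate on the debt of only about 2.2 percent.

```latex
% Implied average interest rate on the UK's net public debt,
% using the 2017 figures cited above.
\[
\text{implied average rate}
  = \frac{\text{net interest payments}/\text{GDP}}{\text{net debt}/\text{GDP}}
  = \frac{1.8\%}{80.5\%} \approx 2.2\%\ \text{per year}.
\]
```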

It is essential to note that financial markets recognize there is little risk to holding UK bonds, and the UK government can currently borrow at negative real interest rates. Given that the UK issues bonds in its own currency, investors understand there is no risk of default.

There are many public investments that would yield a positive real rate of return by increasing the productivity of the economy. Thus, given the current circumstances, it seems rather absurd to focus on reducing the debt rather than growing the economy.

There is no doubt that Brexit is one of the major challenges the UK faces. However, particularly in this context of uncertainty, macroeconomic policies play an essential role. Unnecessary fiscal and monetary tightening poses an immediate threat to economic progress and to the UK’s ability to improve the living standards of its residents.

Imposing austerity on an economy in which incomes have not recovered from the last recession, productivity growth has slowed dramatically, investment is lacking overall, and the government can finance its spending at negative real interest rates is senseless and cruel.

For more details, graphs, and complete sources check out the full report.

 

Behind Optimism in the Economy, the Fed Fears the Next Recession

After a two-day meeting concluding on Wednesday, Federal Reserve officials voted to keep interest rates unchanged at a target range of 0.25 to 0.50 percent. According to Chairwoman Janet L. Yellen’s press conference, members of the Federal Open Market Committee (FOMC) feel they are closing in on the Fed’s statutory mandate—to foster maximum employment and price stability—and they consider that the case for a rate increase before the end of the year is still strong.

Their optimistic view of the economy is based on economic growth picking up its pace in the second half of the year, mainly supported by household spending and what Ms. Yellen described as “solid increases in household income.” Meanwhile, the labor market has been tightening, and some Fed officials consider the low unemployment rate to be at its full-employment value, or at least “pretty close to most FOMC participants’ estimates of its longer-run equilibrium value,” to use Ms. Yellen’s words. Even though inflation remains below the Fed’s target, given current economic growth and an improving labor market, we will see a pickup soon after “transitory influences holding down inflation fade.”

“We’re generally pleased with how the U.S. economy is doing,” expressed Ms. Yellen.

However, after presenting such an optimistic economic outlook, it seems odd that Fed officials lowered their projections of GDP growth for 2016. The median growth projection for the year is now 1.8 percent, down from 2.0 percent in June. In short, it seems contradictory that policymakers believe the case for an interest rate increase has “strengthened” while at the same time revising down their growth projections for the third time this year.

Sending a strong signal in August that an interest rate hike might be coming, and then pulling back, made market participants question the Fed’s and Ms. Yellen’s credibility. But again, during the press conference following the meeting, Ms. Yellen said that an interest rate increase is due before the end of the year if “we simply stay in the current course.” So which one is it? Is the economy strong enough to operate with higher rates, or not?

I think the conflicting message can be partly explained by noting two things. First, Chair Yellen’s remarks at Jackson Hole last month. That speech was about the monetary policy toolkit the Fed has at its disposal to respond to future economic downturns. Among the tools, however, Ms. Yellen did not include the alternative of negative interest rates. Whether or not negative rates are a good idea is not the point. The point is that she declared that “doing so was impossible,” sending a strong message that the Fed is very much constrained by the zero lower bound on nominal rates. Then, this past Wednesday, Ms. Yellen reiterated that the zero lower bound is a “concern,” saying that monetary policy action has “less scope than [she] would like to see or expect [them] to have in the long run.”

Ms. Yellen’s remarks at Jackson Hole made it clear that the Fed trusts that, whenever the next downturn hits, “conventional interest rate reductions” will be its first line of defense. However, in order to make those reductions, rates cannot stay where they are right now; the Fed needs some room for maneuver. For example, during the past nine recessions the FOMC cut the fed funds rate by an average of 5-1/2 percentage points. That means the Fed is currently about 5 percentage points short of what it would need in order to reduce rates by that much if an average recession were to hit the economy.
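The arithmetic behind that shortfall is straightforward (this is simply the comparison stated above, written out): with the target range topping out at 0.50 percent, an average easing cycle of about 5-1/2 percentage points cannot be delivered before rates reach zero.

```latex
% Room for conventional cuts versus the average easing cycle of the
% past nine recessions, using the figures cited above.
\[
\underbrace{0.50\%}_{\text{top of current target range}}
  \;-\;
  \underbrace{5.5\ \text{pp}}_{\text{average recession-era cut}}
  \;=\; -5.0\%,
\]
% i.e., the Fed is roughly 5 percentage points short of the room it
% would need for an average-sized sequence of rate cuts.
```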

Second, as Ms. Yellen noted during the Q&A, monetary policy operates with long and variable lags—in other words, the implementation and effects of new monetary policies normally take some time. For this reason, she argued, the principle of acting in a forward-looking manner is so important: acting ahead of time, based on projections and forecasts, before a threat materializes. In this regard Ms. Yellen stated, “I’m not in favor of the ‘whites of their eyes’ sort of approach. We need to operate based on forecasts.” Moreover, the Chairwoman has repeatedly stated that any adjustments in the stance of monetary policy will be “gradual,” taking the form of a succession of small increases. Thus, it would be inconsistent to keep rates unchanged when the Fed’s forecasts of inflation are pretty much on target for 2017 and 2018, and when monetary adjustments will be gradual and will need some time to be implemented and to take effect.

It seems as if the Fed fears its inability to respond to the next crisis, and those fears could be weighing heavily on its consideration of a rate hike. If that is the case, then the good news, for the Fed, is that it has its favorable outlook for the economy and the forward-looking principle to justify the hike.

So, if there are no negative surprises, it is very likely that before the end of the year the fed funds rate will increase from its current target range of 0.25-0.50 percent to the 0.50-0.75 percent range. The next Fed meeting will take place just a week before the election. Even though the Fed is not supposed to play politics, policymakers will not want to rattle the markets right before polls open. So November’s meeting is not likely to be the one; all bets are on December.

 

Illustration by Heske van Doornen