Opinion


More to the financial crisis than just subprime

Christopher Papagianis
Jul 12, 2012 15:24 EDT

Arpit Gupta, a Ph.D. student in finance at Columbia University’s Graduate School of Business, contributed to this column.

Just as the recession in the early 2000s became linked with the bursting of the tech bubble, for many the financial crisis in 2008 has been synonymous with the blow-up of subprime mortgages.

But there was more to 2008 than that.

Gary Gorton, an economist at Yale, recently published an analysis that shows how well some subprime mortgage-backed securities have performed over the past few years – a very counterintuitive conclusion. Citing one of his graduate students, Gorton explains that AAA/Aaa-rated subprime bonds issued in the peak bubble years (when mortgage underwriting was arguably the weakest in history) were only down 0.17 percent as of 2011. In other words, the highly rated subprime bonds – or toxic assets so associated with the financial crisis – have experienced only minimal losses since the bubble popped.

Of course, that bond statistic ignores the numerous costs borne by the federal government in response to the crisis. For example, the mortgage giants Fannie Mae and Freddie Mac required a bailout of more than $150 billion, and the Federal Reserve dropped interest rates to historic lows and held them there, in part so that millions of homeowners could refinance their mortgages (often into new mortgage products that were also backed by taxpayers through another government program). That is, it’s important to acknowledge that subprime mortgage products have done relatively well partly because of post-crisis government interventions that were costly for taxpayers.

Nevertheless, the subprime bonds’ better-than-expected performance can help us think through the broader role of highly rated securities in the financial crisis. In particular, it helps focus our attention away from the assets themselves (as it now appears that some subprime securities held up surprisingly well) and toward how the assets were financed using leverage and risky holding structures.

***

Consider the production process for mortgage securities during the crisis. To convert bundles of poorer-quality mortgages into valuable securities, banks made use of waterfall structures in securitization. Any future losses on subprime mortgages would first go to the lower-rated tranches of the securities. The highest-rated tranches – given AAA status – would lose money only in the event of extraordinary mortgage losses that had already wiped out the lower-rated tranches. Generally, lower-rated tranches accounted for 20 to 25 percent of the securitization. This meant that if mortgages defaulted and only 50 percent of the value was recovered through foreclosure, then between 40 percent and 50 percent of the mortgages in a pool would have to default for the AAA noteholder to suffer any losses. While defaults and loss severities on subprime loans have been bad, they have not been this extreme in most cases, which is why the highly rated subprime bonds seem to have escaped serious losses so far – declining in value only a small amount, as Gorton suggests.
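
For readers who want to see the arithmetic, here is a minimal sketch of the waterfall math described above. The 20-to-25 percent buffer and 50 percent recovery figures come from the column; the code itself is only illustrative:

```python
# Break-even default rate for the AAA tranche of a simple securitization.
# Pool loss = default_rate * loss_severity; the AAA tranche is hit only once
# pool losses exceed the subordination (the junior buffer) beneath it.

def breakeven_default_rate(subordination: float, loss_severity: float) -> float:
    """Pool default rate at which cumulative losses exhaust the junior tranches."""
    return subordination / loss_severity

for sub in (0.20, 0.25):
    rate = breakeven_default_rate(subordination=sub, loss_severity=0.50)
    print(f"{sub:.0%} buffer, 50% severity -> {rate:.0%} of loans must default")
# 20% buffer -> 40% of the pool must default; 25% buffer -> 50%
```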

If the performance of these mortgage pools holds up in the future, then one surprising post-crisis conclusion may be that the securitization waterfall structure actually worked. In most instances, the construction of mortgage-backed securities left a sufficient buffer in place to protect the highest-rated tranches from serious losses.

Of course, the much bigger problems lay in the quality of that buffer – the lower-rated pieces of subprime mortgage-backed securities. These were generated in many cases almost as a waste by-product of the securitization process. As the housing bubble burst, it was the holders of these assets that suffered massive losses, since they were in the first loss position.

While it was originally difficult to find willing buyers for the lower-rated pieces of subprime mortgage-backed securities, issuers eventually combined and repackaged them into derivative products called collateralized debt obligations. It was these products – including the so-called synthetic variety, which relied on credit default swaps – that proved to be the real problem. Many of them were held by structured investment vehicles (often sponsored by banks), and they were one of the reasons financial institutions faced insolvency during the crisis.

Underlying the demand for CDO products was the phenomenon previously discussed: the universal hunger for highly rated financial products. Overwhelming demand for AAA-rated securities induced banks to create new financial instruments that effectively stretched the definitional bounds of what was truly a quality or safe asset. These structured products wound up accounting for a large fraction of the losses stemming from subprime mortgages.

***

The problem with securities during the financial crisis wasn’t just how they affected the asset side of the balance sheet. Rather, the greatest fallout appears to have resulted from how these (and other similar) securities were funded: through short-term loans in the wholesale market.

Prior to the financial crisis, in the so-called shadow banking system, banks came to use securities – highly rated MBS and asset-backed commercial paper – for the purposes of short-term borrowing and lending. The higher the security was rated, the greater its collateral value, which allowed the bank to secure more funding on better terms.

The onset of the financial crisis led to large-scale downgrades of many of the securities that were used as collateral. Even though a lot of these securities did not end up experiencing large credit losses over time (per Gorton), they did suffer huge declines in market value (at least initially).

Many mortgage securities were used in overnight lending relationships known as repo – between banks and other financial institutions. The downgrading of mortgage-backed securities triggered larger margin calls, which led to trading losses and, eventually, asset fire sales. In short, these assets could no longer support the loans secured against them; their collateral value fell and, effectively, there was a “run” on major aspects of the financial system as lenders demanded their money back. The resulting losses – from the collapse of trading arrangements, not of the underlying securities – wound up bankrupting major financial participants like Bear Stearns and Lehman Brothers.
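
A hypothetical sketch of the mechanics helps here: when repo lenders raise the haircut on a security, the same collateral suddenly supports less borrowing, and the gap must be met with cash or asset sales. All the numbers below are assumptions for illustration, not figures from the column:

```python
# How a repo haircut increase forces deleveraging, even before credit losses.

def funding_gap(assets: float, old_haircut: float, new_haircut: float) -> float:
    """Cash a repo borrower must raise when lenders demand a bigger haircut."""
    old_loan = assets * (1 - old_haircut)  # borrowing the collateral supported before
    new_loan = assets * (1 - new_haircut)  # borrowing it supports after the downgrade
    return old_loan - new_loan

gap = funding_gap(assets=100.0, old_haircut=0.02, new_haircut=0.20)
print(f"Per $100 of MBS collateral, the borrower must raise ${gap:.0f} in cash")
# -> $18 per $100 of collateral, before any decline in the collateral's market value
```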

Recent research by Northwestern economist Arvind Krishnamurthy and colleagues has found that similar problems with asset-backed commercial paper were actually far greater in scope. The resulting collapse of credit in other areas of the financial sector, such as money market mutual funds, subsequently fueled the recession.

The lesson is that it wasn’t just the product that was the issue – fragile financing mechanisms were really the key driver in the financial crisis. If financial intermediaries had held their asset positions with less leverage or with longer-duration borrowings, they would have been able to ride out market gyrations. Instead, reliance on leverage and short-term funding forced banks into costly sales and drove widespread insolvency.

The mix of leverage and the shared interdependence on extremely short-term wholesale funding markets (comprising both repos and commercial paper) turned what were initially relatively small losses into tens of trillions in lost output globally.
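
To see why leverage turns relatively small losses into existential ones, consider a minimal sketch (the leverage ratios below are illustrative assumptions, not figures from the column):

```python
# At an assets-to-equity ratio of L, an asset decline of 1/L wipes out equity.

def loss_that_wipes_equity(leverage: float) -> float:
    """Asset decline (as a fraction) that exhausts equity at a given assets-to-equity ratio."""
    return 1 / leverage

for lev in (10, 20, 30):
    print(f"{lev}x leverage -> a {loss_that_wipes_equity(lev):.1%} asset drop erases all equity")
# 10x -> 10.0%; 20x -> 5.0%; 30x -> 3.3%
```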

In many respects, the pre-crisis shadow banking system resembled the pre-Depression-era banking system, which was also prone to fragility and frequent crises. The problem back in the 1930s was partially addressed through the use of deposit insurance, which limited the potential for bank runs (though with the cost of boosting moral hazard).

***

Regulators today face two options in dealing with this complex financial system. One option would be to further encourage financial complexity but offer sufficient insurance (along the lines of the FDIC deposit guarantee) to financial institutions. However, this would expose taxpayers to future losses that could rise into the trillions, meaning the guarantee itself might not be big enough to prevent a crisis in the first place. Additionally, it would further fuel moral hazard.

Another option would be to recognize the systemic fragility and work to combat the underlying sources. The first of these is the widespread reliance on “safe assets,” which itself is partially a regulator-driven phenomenon. As we’ve seen, regulators’ preference for seemingly safe assets incentivizes market participants to create and transform risky assets into new products that can be passed off as safe. Additionally, no asset is truly safe from losses to begin with, and doubling down on that fiction simply raises the stakes when default finally happens.

The second core reform should be to restructure the liability system of systemically important financial institutions. Maturity mismatch, to the extent it happens, should take place in traditionally regulated commercial banking institutions. Firms should be free to pursue real financial innovation, so long as their actions do not result in demands for bailouts or contagious financial losses for others.

A review of this narrative suggests a rather stark picture of the reality of the modern financial system. Rather than the financial crisis being a one-off result of a historically anomalous housing boom, it increasingly appears that the central problem was a financial system so levered and dependent on near-term financing that relatively small losses could spark big problems. Absent reform, look for this pattern to return.

COMMENT

Had this crisis merely been caused by subprime loans, then subprime borrowers would have lost their homes, which would have been reabsorbed into the market with negligible damage to the economy. The derivatives took a minimal risk and raised it to the power of near infinity which someone mistook for meaning no risk. The power of exponents is by definition, exponential.

Posted by Greenspan2

The next crisis – and the decline of ‘safe assets’

Christopher Papagianis
Jul 2, 2012 18:20 EDT

The policy response that perhaps best connects the U.S. financial crisis and the still brewing eurozone problem is that regulators have endeavored to make financial institutions more resilient. Policymakers on both sides of the Atlantic have focused on increasing financial institutions’ capital and liquidity positions to try to limit future bank failures and systemic risk. Both goals are served by increasing banks’ holdings of “safe assets” that are easily sold and retain value across different global economic environments.

But what if there simply aren’t enough safe assets to go around? After all, safe assets aren’t only being gobbled up in the name of financial stability. Today, the global investor universe is undoubtedly more risk-averse and is naturally hungrier for these same stores of value. From insurers to pension funds, the demand for safe assets – and the corresponding dearth of supply – has led to strange, if not ominous, distortions in the market.

For example, over the last few years there have been numerous periods where the yields for short-term U.S. and German sovereign debt have turned negative. The real yields, or the amount earned after adjusting for inflation, on front-end Treasury notes are currently less than -1 percent. This means investors have been putting aside their search for yield, willing to lock in (small) losses with their new purchases because there were very few alternative and liquid markets where they could park their money on better terms.
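
To make the arithmetic concrete, here is a minimal sketch using the exact Fisher relation; the nominal yield and inflation figures are assumed for illustration, not drawn from market data:

```python
# Real (after-inflation) yield on a bond, via the exact Fisher relation.

def real_yield(nominal: float, inflation: float) -> float:
    """(1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal) / (1 + inflation) - 1

# Assumed inputs: a 0.2% nominal yield on a short Treasury note, 2% inflation.
print(f"{real_yield(nominal=0.002, inflation=0.020):+.2%}")  # roughly -1.76%
```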

In a recent report, the IMF explores this growing tension between the supply and demand of safe assets – and the takeaways are nothing short of frightening.

Let’s start with what are considered safe assets today – even though, of course, this categorization is crude and likely to be overly inclusive. There is sovereign debt (with debt ratings above A/BBB), investment grade corporate debt, gold, and also highly rated securitizations and covered bonds. The IMF estimates that there are approximately $75 trillion of these safe assets in the market today.

With banks, pension funds, insurance companies, sovereign wealth funds and central banks all gorging on these assets to differing degrees, real prices have been on a tear. But surely the market will eventually adjust as the supply trajectory for these assets grows over time to meet the new demand, naturally releasing some of the pressure on prices?

A close look at just the narrative around government bonds reveals the extent of the problem ahead and why the market shouldn’t be counted on to self-correct. Back in 2007, before the housing bubble popped in the U.S., roughly 70 percent of the sovereign debt for the world’s most advanced countries was rated AAA. Today, this rating applies to only 50 percent of these countries, a drop affecting approximately $15 trillion in “safe” sovereign assets.

The fluid crisis in Europe is one reason that a snapshot of countries’ credit profiles is of limited value, at least for the foreseeable future. The IMF projected that more than a dozen countries could fall from the class of safe asset issuers in the coming years. It concluded that by 2016 the total pool of safe assets might fall by 16 percent, or more than $9 trillion. In short, global fiscal retrenchment – here in the U.S. and across Europe – is expected to be slow and painful.

Some might argue that a drop in the supply of safe assets means that buyers will have to move down the safety scale and purchase assets that are only a little bit riskier. But this only makes the system more vulnerable. If the financial crises of the last few years have taught one consistent lesson, it’s that there really isn’t such a thing as a truly safe asset. Practically everyone now appreciates that there is no hidden part of the globe where an investor can buy an asset that doesn’t contain at least some amount of credit, inflation, currency or market risk.

There are at least two problems with the theory that the global economic crises have corrected all the past flaws in how risk is measured and priced, particularly with regard to once-heralded safe assets like sovereign bonds or mortgage-related securities.

First, it overlooks just how ingrained the concept of safe assets is in the global financial regulatory architecture. The problem in the euro zone banking system is partly due to banks holding their home government’s debt for regulatory and central bank funding purposes. Banks and other financial institutions are increasingly asked to use high-quality collateral as margin in derivatives trades. Reforms in this area make sense since derivative bets were insufficiently capitalized – but the impact of new rules, like margin requirements, on the demand for safe assets and on credit availability has to be acknowledged.

Second, all the regulatory efforts that seek to reduce leverage in the financial system not only presuppose the existence of safe assets but also assume that what is and is not a safe asset can be known with any degree of certainty.

Japan’s government debt, for example, is still considered to be among the safest in the world, despite a gross debt-to-GDP ratio of over 200 percent. How can one know if or when market participants will come to regard Japan’s debt with the same apprehension with which they regard Spain’s today?

On the supply side, there are very few bright spots. Sure, a greater number of emerging countries will join their more developed brethren over time, but few analysts expect a flurry of new emerging economies to start issuing AAA-rated sovereign debt in the near term. For some countries, building the required legal institutions and financial architecture to make this transition will take years, if not decades.

The private sector used to be a prime provider of safe assets through production channels like securitization. Private-sector issuance in the U.S. alone has declined by more than $3 trillion since 2007. There are obvious tensions between the desire for additional debt issuance and its impact on the safety of the issuer. The future of the housing government-sponsored enterprises (GSEs) also looms large given the importance of their debt and mortgage-backed securities as collateral.

While some are rightfully calling for a rebalancing of the U.S. mortgage market away from the government (since it is guaranteeing practically all new mortgages today), others are worried that this transition will lead to less government-backed MBS issuance – an important component of the current global supply of safe assets.

Eventually, the labored search for safe assets will drive prices to the point where investors have to settle for riskier assets. With interest rates expected to stay close to zero for some time (reflecting a world of slow growth and increased financial stress), the market is only becoming more susceptible to ripple effects from sudden price drops that turn safe assets into unsafe ones almost overnight – assets that may then no longer satisfy a key regulatory requirement. As the IMF puts it: “Demand-supply imbalances in safe asset markets could also lead to more short-term volatility jumps, herding, and cliff effects” – and even fuel new asset bubbles.

Ratings downgrades on U.S. and European sovereign securities teach us that what is a safe asset one day can be almost toxic the next. Building a regulatory architecture on these assets becomes dangerous because the transition from “safe” to “toxic” is likely to come at the same time that a bank’s dependence on the asset’s safety is greatest. As we have seen, the biggest panics are those that involve what were presumed to be safe assets, like the short-term commercial paper of Lehman. By hinging regulation on them again, the world seems to be tempting fate.


COMMENT

What you did say is that the financial world is getting more dangerous. Globalization has its advantages and its pitfalls. What happens in Podunk has ripple effects on the rest of the globe.

Posted by ptiffany

Why not enact an ‘intelligent’ national infrastructure plan?

Christopher Papagianis
Jun 19, 2012 17:12 EDT

There are about 1 billion cars on the world’s roads today. By mid-century, forecasts have that number climbing to 4 billion. Meanwhile, Congress is mired in a debate over whether to pass a new highway bill. Senator Barbara Boxer, a chief negotiator of the pending bill, lamented recently that she was “embarrassed for the people of this country” that this measure had not been enacted. After all, she said, passing highway bills used to be as popular and as important as “motherhood and apple pie.”

As with all previous highway bills, proponents generally wrap their arguments in projections for new jobs, or rhetoric that links fresh infrastructure spending to unclogging the arteries of commerce. For the president, a highway bill fits his campaign theme of getting America back to work. In a recent speech in Cleveland, the president issued a call to “rebuild America” and to do “some nation-building here at home.” The main obstacle remains how to pay for new spending and investment.

Flashback to 1998 and 2005: Those were the last years Washington enacted “highway bills,” or measures to reauthorize federal infrastructure spending programs. Now that the economy is sputtering in 2012, many would like to see Congress pull a page from the playbooks of those years. The taxpayer price tags for the ’98 and ’05 multiyear highway bills were $218 billion and $286 billion, respectively. Count President Obama as part of today’s infrastructure-stimulus choir, as he has proposed a $556 billion six-year bill.

Harvard Professor Edward Glaeser argues: “America’s infrastructure needs intelligent reform, not floods of extra financing or quixotic dreams of new moon adventures or high-speed railways to nowhere.”

U.S. policymakers would be wise to take a moment this summer to reflect on whether the national strategy they are contemplating for infrastructure investment properly prioritizes performance and leverages technology.

Federal and state spending on transportation has grown faster than inflation for decades, yet the broader system’s performance has continued to deteriorate. The future of infrastructure in the U.S. is about achieving system performance – like attacking problems such as road congestion – rather than always adding raw capacity.

Over the last five or so years, an alternative vision for the future of infrastructure has unfolded, one that views travelers as customers who prioritize an efficient commute and a transportation system that’s safe. This recast framework has been enabled, in part, by the emergence of new tools to measure travelers’ objectives and system deficiencies. Private investment is also starting to flow to develop the new underlying technologies and creative new business models.

While the infrastructure grid has long had cameras to help spot accidents causing delays, the pervasiveness of smartphones, new GPS technologies and other sensors (those in and above ground) has exponentially added to the data pool.

Among the top complaints from driving customers are congestion, traffic delays and overly long commutes. New startups are developing applications to help cities do everything from identifying potholes faster to spotting in almost real time the fender bender that is slowing down traffic. The fresh focus on performance has also led to straightforward tech ideas like flexible screens that can be erected quickly at the scene of an accident to stop the rubbernecking by nearby travelers that causes congestion.

New services like SFPark, Parkmobile and Streetline are seeking to transform the conventional parking meter. They use apps, linking data from wireless sensors (either embedded in or tacked onto the parking spot pavement), to match parking availability with consumer location and demand.

With the explosion of data in and around our transportation infrastructure, large companies have also set their sights on developing analytical platforms for cities and other urban planners. Cisco’s “Smart + Connected Communities” initiative and IBM’s “Smarter Cities” vision are leading the way. The tagline for Smarter Cities lays out the broader premise: “that the world is becoming more interconnected, instrumented, and intelligent, and this constitutes an opportunity for new savings, efficiency, and possibility for progress.”

Over the last couple of years IBM helped design the first-ever citywide clearinghouse for infrastructure data in Brazil, called the Operations Center of the City of Rio. What makes this center unique is that it has integrated practically all of the city’s major information or response-related departments and agencies so that there is “a holistic view” of how the city is functioning in real time, 365 days a year.

As the New York Times reported in a profile on the Rio center earlier this year, these platforms are being utilized not only by cities but also by smaller organizations like the Miami Dolphins, who want to more efficiently manage the traffic around their new stadium. Schools are another good example. Everyday Solutions, a relatively new startup, provides a Web-based utility that monitors travel times and ridership rates and helps parents track the school bus their kids are on. (For more examples, check out Fast Company’s top 10 list of most innovative companies in transportation.)

Academia is also advancing both tech research and deployment: Check out Carnegie Mellon’s Traffic21 and Singapore-MIT Alliance for Research and Technology, or SMART.

Transportation’s basic units – cars, trucks and buses – are facing a frontier of change that will transform them into intelligent vehicles. Earlier this year at the 2012 Mobile World Congress in Barcelona, Ford Motor Co executive Bill Ford shared his “Blueprint for Mobility,” which lays out how transportation can change over the next decade. The auto company is investing in platforms that take advantage of the increasing number of sensors in and around vehicles as well as vehicle-to-vehicle communication initiatives, including accident warning or prevention systems.

Sebastian Thrun’s vision for self-driving, or “semiautonomous,” cars has the potential to improve mobility, and more important, safety. Over the last 10 years, more than 350,000 people have lost their lives on American roads. Thrun and his colleagues at Google X Lab have developed working prototypes that can travel thousands of miles without a driver behind the wheel. The cars can travel on highways, merge at high speeds and navigate city streets, eliminating the thousands of little decisions that drivers make that contribute to congestion and accidents. The self-driving car, with its ability to communicate with other vehicles and utilize precision technology, offers the potential to circumvent many of these problems.

Given that this sector is just starting to sprout up on its own, perhaps the federal government should stay on the sidelines in the near term to avoid stifling innovation. Yet just last year Google helped Nevada draft the nation’s first state law to allow self-driving cars on its roads (with preset conditions like requiring human co-pilots).

For Bill Ford, the opportunities on the more immediate horizon are quite clear. Cars could become “a billion computing devices” or “rolling collections of sensors” and be made part of one large data network to “advance mobility, reduce congestion, and improve safety.” Sure, the benefits might be realized more quickly with the right help from the government. But if the value proposition exists and infrastructure customers start to demand better performance, this new vision may already be inevitable.

COMMENT

OneOfTheSheep,
I agree with you on the advantages, and preference by the people of the auto-centric culture.
But I think you are a bit harsh on @upstarter.
The size of the economic pie has been increased by government involved programs such as the transcontinental railroads, the Interstate Highway system, rural electrification etc.
Our economic system is, and should be, a cooperation between private and public sectors.
As they say, if you want a true free market system, free from government controls and protections, move to Somalia.

Posted by BJacobian

State GDPs are evidence that Republicans may retake the Senate

Christopher Papagianis
Jun 7, 2012 13:14 EDT

The recent jobs and GDP numbers released by the government were a broad disappointment, and plenty of analysts have discussed the implications of the data. Yet, most of the analysis has focused on two dimensions – whether it’s now more or less likely that Congress or the Fed will act on either the fiscal or monetary fronts to try to boost the economic recovery.

The consensus is that the odds are marginally higher now that the Fed will signal something stimulative at its next meeting on June 19-20, while Congress is still hopelessly deadlocked, and the economy will have to show significantly more weakness for this dynamic to change.

However, there is a third dimension: the state level. The state-by-state GDP numbers out this week suggest that the probability that Republicans will take the Senate is rising. The weak economic growth numbers in some battleground states imply that Republicans could pick up several key U.S. Senate seats and probably take back the majority for the first time since 2006.

The Bureau of Economic Analysis reported yesterday that the U.S. real GDP by state grew 1.5 percent in 2011 after a 3.1 percent increase in 2010.

First, some quick facts about the current Senate composition. Democrats have a 53-47 majority (which includes two “independents” who caucus with the Democrats).

A flip in control of the Senate has the potential to dramatically alter the framework for the policy negotiations around the looming federal deficit and the end-of-year “fiscal cliff”. Since Republicans are expected to hold their majority in the House, a full Republican Congress would be more likely to extend the current tax rates and maintain significant spending cuts while blocking President Obama’s plans to increase taxes (should he win in November), including those on capital and savings.

There are 33 Senate seats up for election this November. Democrats hold 23 of them, and 7 of those are “open,” meaning that the incumbent isn’t running for re-election. By comparison, Republicans have only 3 open seats (Maine, Texas and Arizona) among the 10 they are defending.

If Republicans: 1) hold Texas, Arizona and Massachusetts; 2) lose Maine to an independent who caucuses with the Democrats; and 3) snatch all six of the remaining vulnerable Senate seats (Virginia, Florida, North Dakota, Nebraska, Montana and Missouri), then Republicans would emerge with a 52-48 majority.

So, how did each of these states do in terms of the percent change in real GDP in 2011? Here is how the 2011 numbers compare with 2009 and 2010 by region:

[Chart: percent change in real GDP by state, grouped by region, 2009-2011]

The answer addresses a question that many prospective voters will be asking themselves come November – whether their local economy is improving or not.

The new 2011 GDP stats in these three states probably won’t move the forecast needle – with Texas and Arizona staying Republican and Maine flipping to an independent caucusing with Democrats:

Texas: Highest quintile with 3.3 percent GDP growth in 2011.

Arizona: Second-lowest quintile with 1.5 percent.

Maine: Lowest quintile with -0.4 percent.

How about the six states where Democrats might lose or are most vulnerable:

Virginia: Second-lowest quintile with 0.5 percent growth.

Florida: Second-lowest quintile with 0.5 percent growth.

North Dakota: Highest quintile with 7.6 percent growth. (A recent boom in the mining sector contributed greatly to this figure.)

Nebraska: Lowest quintile with 0.1 percent growth.

Montana: Lowest quintile with zero growth.

Missouri: Lowest quintile with zero growth.

The weak state-by-state GDP numbers suggest that Senate candidates, particularly Democrats across the country (including President Obama – who will also be battling in these states), are more likely to be on the defensive regarding the economy and the underwhelming recovery.

These inferences from state growth should be tempered, however. It’s certainly possible that if the economy improves on a national basis or the president benefits from a second wave of youth voters in November, akin to that of 2008, the impact will end up trickling down the ballot to the Senate contests.

But it’s difficult to be optimistic about the broader backdrop this summer, as the EU unravels and investment is expected to slow down in the second half of the year as the fiscal cliff approaches. The prospect for a significant improvement in the economy before November appears to be rather low at this point.

COMMENT

Why would it matter who sits in the Senate? They all work for the same paymasters. Just as all the current lot do. Policy does not change with one Party government. That is one reason we are now breaking the longest war record formerly set by the Vietnam War.

Nothing will change, at least for the better. Plant a garden.

Posted by usagadfly

What could hold back the start of a recovery in housing this year?

Christopher Papagianis
May 31, 2012 12:27 EDT

This post is adapted from the author’s testimony at a recent hearing before the U.S. Senate Banking, Housing, and Urban Affairs Committee.

Many of the major negative housing trends that have dominated headlines since the crisis are now well off their post-crisis peaks. While prices are only flat to slightly down year-over-year, there is finally some optimism – probably for the first time in more than three years. But before we get ahead of ourselves, let’s examine some of the economic fundamentals and also assess the policy and regulatory headwinds that are still blowing from Washington.

New delinquencies are trending lower on a percentage basis. The decline in home prices also appears to be leveling off or approaching a bottom on a national basis. Data from CoreLogic suggests that house prices have increased, on average, across the country over the first three months of 2012 when excluding distressed sales. Even the numbers from the Case-Shiller Index out this week suggest that a floor in home prices has been reached.

There is also a relative decline in the supply of homes for sale. The chart below shows how the existing stock of homes for sale is now approaching a level equal to five to six months of sales. This is a very promising development. According to the Commerce Department, the housing inventory fell to just over five months of sales in the first quarter, the lowest level since the end of 2005.

In short, the level of housing supply today suggests that the market is close to equilibrium, which implies house prices should rise at a rate consistent with rents. Market analysts often look at a level above or below six months of sales as either favoring buyers or sellers, respectively. It’s not surprising then that the recent stabilization of home prices nationally has occurred as the existing inventory, or supply level, has declined.
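
For the curious, the metric itself is simple to compute. Here is a minimal sketch; the inventory and sales figures below are illustrative assumptions, not actual Commerce Department data:

```python
# Months of supply: inventory divided by the monthly sales pace, with six
# months as the conventional dividing line between buyers' and sellers' markets.

def months_of_supply(homes_for_sale: float, monthly_sales_pace: float) -> float:
    """Inventory divided by the monthly sales pace."""
    return homes_for_sale / monthly_sales_pace

supply = months_of_supply(homes_for_sale=2_500_000, monthly_sales_pace=450_000)
label = "favors buyers" if supply > 6 else "favors sellers"
print(f"{supply:.1f} months of supply ({label})")
# Near six months is roughly equilibrium; these inputs yield about 5.6 months.
```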

A couple of important caveats should be kept in mind, however. First, almost any discussion of national inventory trends can gloss over regional problems, or acute supply challenges in individual state markets. Second, the transaction data around home sales suggests that any near-term demand-supply equilibrium is occurring at an extremely low transaction volume. In essence, weak demand for single-family homes appears to have eclipsed supply as the housing market’s main challenge moving forward.

Consider that homes are more affordable than they have been in decades.

The National Association of Realtors Home Affordability Index measures the “affordability” of a median-income family purchasing a median-priced home (using a 20 percent downpayment for a 30-year fixed rate mortgage). All of which is to say that house prices look low on a historical, user-cost basis.
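
As a rough illustration of what such an index captures – this is a simplified sketch, not NAR’s exact methodology, and the price, rate and income-rule inputs are assumptions – consider:

```python
# Monthly payment on a median-priced home and the income needed to "qualify."

def monthly_payment(price: float, down: float, annual_rate: float, years: int = 30) -> float:
    """Standard fixed-rate amortization payment on the financed balance."""
    principal = price * (1 - down)
    r = annual_rate / 12          # monthly rate
    n = years * 12                # number of payments
    return principal * r / (1 - (1 + r) ** -n)

# Illustrative 2012-era inputs: $180,000 price, 20% down, 4% 30-year rate.
pay = monthly_payment(price=180_000, down=0.20, annual_rate=0.04)
qualifying_income = pay * 12 / 0.25   # assumed rule of thumb: payment <= 25% of income
print(f"Monthly payment ${pay:,.0f}; qualifying income ${qualifying_income:,.0f}")
```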

So, this raises the question: Why are home sales still so depressed?

One major reason: tight lending and underwriting standards. Earlier this month, Federal Reserve Chairman Ben Bernanke commented on this trend by reviewing information from the latest Senior Loan Officer Opinion Survey on Bank Lending Practices (SLOOS).

Most banks indicated that their reluctance to accept mortgage applications from borrowers with less-than-perfect records is related to “putback risk” – the risk that a bank might be forced to buy back a defaulted loan if the underwriting or documentation was judged deficient in some way.

Federal Reserve Governor Elizabeth Duke also gave a speech earlier this month on this theme, providing even more detail on the conclusions from the April SLOOS:

  • Compared with 2006, lenders are less likely to originate government-sponsored enterprise (GSE)-backed loans when credit scores are below 620, regardless of whether the downpayment is 20 percent or not.
  • Lenders reported a decline in credit availability for all risk-profile buckets except those with FICO scores over 720 and high downpayments.

When the lenders were asked why they were now less likely to offer these loans:

  • More than 84 percent of respondents who said they would be less likely to originate a GSE-eligible mortgage cited the difficulty of obtaining mortgage insurance as a factor.
  • More than 60 percent of lenders pointed to the risks of higher servicing costs associated with delinquent loans or that the GSEs might require them to repurchase loans (i.e., putback risk).

Another important market development to acknowledge is that lenders can’t keep up with demand, particularly with regard to mortgage refinancings. Anecdotal evidence suggests that some lenders are simply struggling to process all the loan applications coming their way. Part of the problem appears to be the structural shift in the market toward full and verified documentation of income and assets, which has lengthened the processing time for mortgage applications.

But if lenders and servicers don’t have enough capacity, why are they not just hiring more staff or upgrading their infrastructure so they can handle more loans or business? This seemingly innocent question is really important. Don’t market participants still perceive this business as profitable long term with a comparatively good return on investment when viewed against other business lines?

Governor Duke’s conclusion is spot-on. Lenders or servicers are hesitating in the near term because they just don’t have a good sense of how profitable the housing finance-and-servicing business will be over the medium-to-long term.

And that’s because of the policy questions that haunt the housing sector. There is perhaps no other major industry that faces more micro-policy uncertainty than housing today. Putting aside broader GSE reform, these uncertainties can be grouped into two buckets: servicing and underwriting.

On the servicing side, federal regulators are in the process of establishing new industrywide rules governing their behavior, changing how servicers get compensated and altering the way the business itself can be valued if it’s part of a broader bank balance sheet.

On the underwriting side, the Dodd-Frank law pushed regulators to try to finalize very complicated rules governing who should be able to qualify for certain types of mortgages (i.e., ability to pay standards), including those that are bundled into mortgage-backed securities.

All of these actions will affect the future of house prices, as credit terms and mortgage availability are intimately linked to the user-cost of housing generally.

The urgency to resolve all of this uncertainty is all the more important because while there are clear short-term impacts on the market, there are also potential long-term consequences. For example, if lenders decide to hold off on making new near-term investments in their mortgage business, the long-term potential of a full rebound in housing may be diminished as the existing or legacy infrastructure and skills can be expected to atrophy further.

Mortgage servicers are not in business to lose money. Moreover, the total volume of resources devoted to performing this function – employees, investment in computers and telecommunications infrastructure, legal compliance officers, sales staff – is not static. It adjusts upward and downward based on perceived opportunities, expected future revenues and government involvement.

Some big investments are not being made because of concerns that regulations will impose costs on the industry that cannot be recovered through servicing fees or other revenue streams. Here there have been a few positive developments recently, suggesting that at least some investors are willing or able to take on the aforementioned headwinds.

Non-bank and specialty mortgage servicers like Nationstar and Ocwen are buying up mortgage servicing rights (MSRs) from the large banks. Home prices also appear to have reached the point where investors can buy properties and rehabilitate them for less than it would cost to construct them brand-new. This trend helped spark some fresh investments in late 2011, which has generated some modest momentum for 2012. In the first quarter of this year, GDP growth was 2.2 percent, with residential investment contributing 0.4 percentage point of that figure.

But so much policy uncertainty still looms.

No one really knows who the ultimate purchasers of mortgages are likely to be five years from now. Since the ultimate holders of mortgages – currently Fannie Mae and Freddie Mac, on behalf of the government – are the servicers’ client base, the current lack of clarity on who or what is likely to fund mortgages in the future has obvious ripple effects on servicers and all other professions exposed to mortgage finance.

A similar phenomenon is casting a shadow over the mortgage insurance industry. The difficulties in obtaining mortgage insurance are constraining lenders from selling to Fannie and Freddie, even if they have found buyers and are willing to originate the loans. Several mortgage insurance companies have failed in recent years; others are no longer offering insurance on a forward-looking basis and are just managing their existing exposures.

Resolving even some of the uncertainty holds by far the greatest potential for responsibly helping the housing market moving forward. It’s just too bad that there is an election in November, since it means policymakers and regulators can be expected to dither out of fear of upsetting a particular interest group before votes are cast.

For Washington, JPMorgan’s big failure can be an opportunity

Christopher Papagianis
May 16, 2012 15:25 EDT

In light of JPMorgan Chase’s bad derivatives trades, the media’s spotlight has appropriately turned to the pending Volcker Rule. That’s the moniker for the still-under-development regulation that would restrict big banks from making speculative bets with their own money, including hedging strategies that span their entire portfolio. Proponents say banks shouldn’t be able to do this, since banks hold consumer deposits that are effectively guaranteed by taxpayers and since taxpayers could be forced to bail out a foundering bank if it’s deemed too big to fail. On the other hand, the underlying law for the Volcker Rule, the Dodd-Frank financial reform law, specifically exempts or allows hedging related to individual or “aggregated” positions, contracts or other holdings, which may very well have covered JPMorgan’s recent trade.

Inside the Beltway, a fresh dispute is now emerging between regulators and policymakers on whether the current draft of the Volcker Rule can even apply to scenarios like JPMorgan’s, given how explicit Dodd-Frank is on this topic. On the one hand, there is the Office of the Comptroller of the Currency, which is starting to argue that these JPMorgan trades would likely have been exempted from the not-yet-final Volcker Rule. But then there are policymakers (namely, Senator Carl Levin) who are trying to make the case that Congress never intended for the law’s language to be interpreted so broadly.

While the details around JPMorgan’s failed trading strategy are still emerging, there is an even more interesting backdrop to consider – whether JPMorgan Chase and other banks are still too big to fail. It was only a week ago that the Senate Banking Committee held a hearing where Paul Volcker, Thomas Hoenig and Randall Kroszner testified on “Limiting Federal Support for Financial Institutions.” While they each expressed different viewpoints, it was newly installed FDIC Director Hoenig who made the most news. He used the stage to discuss a paper he wrote in May 2011 on “Restructuring the Banking System to Improve Safety and Soundness.”

In broad strokes, Hoenig doesn’t think that the “too big to fail” (TBTF) problem has been adequately addressed. His conclusion is that the TBTF banks are effectively too big to manage and too complex to understand, and should be made smaller by defining what is and isn’t an “allowable activity.” For Hoenig, “banks should not engage in activities beyond their core services of loans and deposits if those activities disproportionately increase the complexity of banks such that it impedes the ability of the market, bank management, and regulators to assess, monitor, and/or control bank risk taking.”

Hoenig’s plan is bold, to say the least, and even Senator Bob Corker joked at one point that maybe it should be called the Hoenig Rule.

Volcker actually spoke first at the hearing and alluded to Hoenig’s plan when the committee asked if anything should be done about the great increase in concentration at the largest banks (before and through the crisis years):

I don’t know how to break up these banks very easily. But some of the things we’re talking about – reduced trading, for instance – will reduce the overall size of the bank reasonably. Some of the restraints on derivatives will reduce their off-balance sheet liabilities significantly.

So they are at least modest steps. There is a provision in the law they cannot grow beyond certain limits by merger or acquisition. So there are some limits here. But if you say – asked me whether I prefer a banking system that had less concentration, I would. But I – I think we can live more or less with what we have.

While Volcker thinks we can more or less live with this ongoing TBTF problem, the broader public and even some regulators aren’t so sure. For example, the acting head of the FDIC, Martin Gruenberg, recently gave an important speech that was intended to persuade the market that existing tools will work when the next crisis hits. As a Dow Jones reporter put it, “regulators are looking to chip away at the tacit understanding that the government will step in to save top financial institutions seen as vital to the economy or banking system.”

This is where the JPMorgan story provides a potentially revealing case study on just how much progress has actually been made over the past few years. Josh Rosner, managing director at Graham Fisher & Co and co-author of the great book Reckless Endangerment, posed a very interesting question during a recent television interview:

Can we really talk about there being a free market when at the end of the day you’ve got institutions that are in fact Too Big to Fail? … One of the questions I would ask is, did JPMorgan’s counterparties demand more collateral from them in the face of these exposures the way they [JPM] did against Lehman right before its failure?

If the answer is no… then clearly everyone assumes that JPM is always money good because it is Too Big to Fail … so we’re not talking about regular risk taking behavior of firms that can win or lose or succeed or fail. We’re talking about a specific subset of firms. I keep coming back to when are we going to actually address that issue.

Right now, there doesn’t appear to be any (public) evidence that counterparties demanded more collateral as JPMorgan revealed the position details or the extent of its losses. Then again, perhaps it’s just that a $2 billion-plus loss isn’t viewed by the market as that big a deal when the balance sheet of the firm in question is about 1,000 times larger (and is still generating profits)?

Look for the Senate to stitch all these themes together in the next couple of weeks when it holds its first hearing since JPMorgan’s disclosure. Remember, the aforementioned Senate hearing with Volcker and Hoenig occurred just before the JPMorgan story broke. Senators will surely want to examine whether this derivatives loss is a (potential) public policy problem because it occurred at a bank, or because it occurred at a $2 trillion-plus bank. Context is very important. Risk taking is not “bad” unless it occurs inside an institution that cannot fail, and cannot fail because the government, on behalf of taxpayers, won’t let it.

For now, however, it’s hard to see how any new Dodd-Frank-related legislation moves through the divided Congress, especially in an election year. This means that it’s the regulators and the signals they send out to the market that will matter most for gauging when, how and in what form the Volcker Rule gets finalized. That said, one hopes this JPMorgan situation will spark a fresh debate about the shortcomings of Dodd-Frank and specifically on the most important systemic problem post-crisis – ending too big to fail, once and for all.

COMMENT

The shortcomings of the Dodd-Frank bill are a direct result of the extensive lobbying by the banks during its formation and passage. Most of the financial institutions lobbying to water down any regulation spent more money influencing legislation than they paid in US Federal Income taxes. In other words, US taxpayers are footing the bill for Wall Street to continue its speculative behavior.

Volcker is always pragmatic and realizes you have to work with what you have. A sudden change in today’s banking structure (downsizing, splitting up, etc.) would destabilize an already tenuous situation. However, Volcker is absolutely on the right track that if you limit derivative trading and off-balance sheet use you are effectively downsizing these institutions and the risk they entail.

Frankly, having been on Wall Street for over 30 years, I think it is time that we completely rethink the use of derivatives and futures in our financial markets. No longer are these instruments used as a hedge (protecting your collateral) but as speculative bets on price directions of various asset classes. Furthermore, the leverage that is entailed in derivative trading and the lack of pricing transparency in non-exchange traded derivatives is a recipe for boom/bust.

Most of Congress and the White House do not understand the derivative market. Now we see that even the gurus of JP Morgan don’t fully understand these markets – and they are supposed to be the experts. However, the money from hedge funds and banks into the political coffers is clouding good judgement and any real regulation.

When the vast majority of voters are clearly behind regulation and controls on speculators, it is a clear example of how weak democracy is in the USA.

Posted by Acetracy

Making sense of what comes next in Greece

Christopher Papagianis
May 9, 2012 18:03 EDT

Analysts are scrambling to interpret the voting results from Greece’s first election since the crisis began in late 2009, hoping to accurately gauge the political risk that a new parliament in Greece will successfully (and meaningfully) renegotiate the previous austerity accords. At stake is the ongoing debt-financing support from the International Monetary Fund, European Commission and European Central Bank. Already the triumvirate has warned that it will not release the next loan disbursement unless the new Greek government details next month how it will achieve budgetary savings of more than 11 billion euros for 2013 and 2014.

Here are a few important guideposts to keep in mind as the news out of Greece develops over the next few weeks.

1. Another election is likely, which means general fears about fresh instability will remain elevated over the next month. The two major political parties (Conservative New Democracy and Socialist Pasok) that endorsed the austerity pacts over the last few years lost big in the election. Combined, a bunch of smaller, and in some cases fringe, political groups (including the Neo-Nazi Golden Dawn party) won more than 60 percent of the popular vote. In all, Sunday’s results left Greece with its most fragmented parliament since democracy was restored in the country back in the 1970s. The consensus view is that it will be difficult, if not impossible, for these diverse factions to form a coalition government. If that happens, a new “do-over” election will have to take place, though probably not before the middle of June.

2. The debate around Greece, and specifically whether it will exit the euro, is about to get louder. By some estimates, almost three-quarters of the Greek population wants to keep the euro. Ironically, this most recent election in Greece was not really framed for voters as a binary choice between either abandoning austerity or maintaining the euro. Parties on both the left and right chose instead to emphasize that they represented fresh leadership alternatives that could still deliver win-win fiscal policy solutions (i.e., avoiding much of the scheduled pain from austerity). In many respects, the best way to characterize the election results at this stage is that votes were cast not for specific new policies (with a full appreciation of the consequences) but more out of general frustration with the incumbent parties. Corruption and other party-specific (rather than policy-specific) factors loomed large as well. Expect the commentariat to examine this tension within public opinion, which may help clarify whether holding on to the euro while avoiding all near-term austerity is an untenable position.

3. The election results appear to have been driven by younger, not older, voters. Overall, it appears that voter turnout was low (exactly how low is unclear). Younger voters were the most energized. Older voters, who probably have the most to lose from a disorderly exit from the euro, did not turn out. In part, this dynamic looks as if it helped fuel the rise of third- or fourth-tier parties. A lot of the post-election attention is appropriately focused on Alexis Tsipras, the 38-year-old politician who ushered his Syriza party to a second-place finish behind the New Democracy Party. Syriza got a big boost from younger voters in this election, and many analysts believe that will pay off in any future election. If – or, more likely, when – there is another election next month, a key trend to watch will be whether there is any rebound in support for the New Democracy and Pasok parties, since voter turnout patterns will probably normalize.

***

Given all this uncertainty and the growing risk that Greece may exit the euro over the next year, why hasn’t the market reaction been more dramatic? After all, Citigroup has a new forecast arguing that it now believes the likelihood of a “Grexit” is between 50 percent and 75 percent.

It appears as though the dominant view in the market right now is that Europe is in a better situation to absorb or muddle through a Greek exit than it was six months ago, when Greece last threatened a referendum on its EU membership. Analysts are generally pointing to a combination of factors: Many investors have already fled Greece or been forced to take losses, and European bank balance sheets have already adjusted to losses on Greek assets.

The risk that even more capital might leave Greece is, along with the deficit, one of the biggest constraints on the country’s next coalition government, whoever joins it.

This year, Greece is expected to run a primary deficit of 1 percent of GDP. A primary deficit means the government needs to borrow just to fund basic government services, before any interest payments. Next year, the primary account is projected to be in balance, and the deficit is set to consist entirely of interest payments. At that point, incentives for policymakers may change, since they would no longer be beholden to creditors to finance general government services. (Of course, they would still need creditors to finance the interest payments on their debt.) But in the meantime the IMF, European Commission and European Central Bank will still have leverage over Greece’s fate.
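
A toy example makes the distinction concrete (all figures are assumed and expressed in percent of GDP, not actual Greek budget data):

```python
# Primary balance vs. overall balance: the primary balance excludes interest,
# so a country at primary balance needs creditors only for its interest bill.

def primary_balance(revenue: float, noninterest_spending: float) -> float:
    """Budget balance excluding interest on existing debt."""
    return revenue - noninterest_spending

def overall_balance(revenue: float, noninterest_spending: float, interest: float) -> float:
    """Headline balance: the primary balance minus the interest bill."""
    return primary_balance(revenue, noninterest_spending) - interest

# A 2013-style scenario per the column: primary account balanced, so the
# entire deficit is the interest bill.
print(primary_balance(revenue=45.0, noninterest_spending=45.0))                 # 0.0
print(overall_balance(revenue=45.0, noninterest_spending=45.0, interest=5.0))   # -5.0
```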

As Greece approaches its next election, it will be important to track the campaign rhetoric. A coalition government that unites in opposition to austerity will also be saying goodbye to the euro and hello to a new Greek currency. That could cause depositors to flee Greek banks. In the event of a disorderly exit from the EU, the new Greek drachma would likely be worth much less than a euro. Depositors, suspecting that such a devaluation is coming, would have every incentive to exchange a euro on deposit in Greece for a euro on deposit in Germany before the unlimited 1-for-1 conversion ends. Otherwise, their money would lose much of its value. That’s the kind of mass exodus Greece cannot afford.

To avoid this scenario, the new government – whenever it’s installed – will have to take concerted steps to make membership in the currency union seem permanent. Right now, it’s hard to see how a new coalition government that would roll back austerity, while also instilling the necessary confidence in the currency union to stave off a summer flight of capital, could be formed. Painful as austerity may seem, at this stage, all of the alternatives appear to be much worse.

PHOTO: Supporters of the extreme-right Golden Dawn party raise flares as they celebrate poll results in Thessaloniki, northern Greece, May 6, 2012. Golden Dawn is set to become the most extreme right-wing group to sit in parliament since Greece returned to democracy after the fall of a military junta in 1974.   REUTERS/Grigoris Siamidis

COMMENT

so gvmavros, you and your husband epirat still cannot give us the truth?

so we use google instead, literally

Posted by scyth3

Is Uncle Sam ever truly an investor?

Christopher Papagianis
May 2, 2012 15:48 EDT

Last week, a debate erupted about whether the government’s massive Troubled Asset Relief Program (TARP) made or lost taxpayers money. Assistant Secretary for Financial Stability Timothy Massad and his colleagues at the Treasury Department argue that TARP is going to end up costing a lot less than originally expected and may even end up turning a profit for taxpayers. Breakingviews Washington columnist Daniel Indiviglio scoffs at this, arguing that TARP “looks more like a loss of at least $230 billion.”

While the two sides are miles apart on their calculations (and it is important to examine why), their disagreement reflects a broader philosophical dilemma that deserves more attention. It concerns whether the U.S. government should be held to the same standards as private investors. Put another way, should policymakers adopt the same analytical approach that private-market participants use to evaluate or measure the prospective return on new investments? The answer has important consequences for defining the roles for the public sector and private enterprise – and particularly how the U.S. government accounts for all of its trillions in direct loan programs and loan guarantees.

Let’s start by using TARP as a case study. The calculation Treasury uses is simple: If a bank that received a TARP capital injection pays back the original amount, then the taxpayer broke even. If some interest or dividend income (i.e., on the government’s ownership stake from the injection) is generated, then the taxpayer likely made a profit on the investment.

Indiviglio takes a different approach, arguing that Treasury’s “fuzzy math wouldn’t fly with any sensible portfolio manager.” He insists that the government needs to factor in the cost of money and its value over time.
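
The difference between the two methods is easiest to see with a small worked example, using made-up cash flows rather than actual TARP figures (the 10 percent rate is the “more conservative” one Indiviglio cites below):

```python
# Sketch: money-in-money-out vs. present-value accounting.
# The cash flows are hypothetical, not actual TARP figures.

investment = 100.0             # capital injected at time 0
cash_back = [3.0, 3.0, 108.0]  # dividends in years 1-2; principal plus dividend in year 3

# Treasury-style arithmetic: nominal dollars in vs. dollars out.
nominal_gain = sum(cash_back) - investment  # +14.0 -> a "profit"

# Investor-style arithmetic: discount each cash flow for the time value of money.
discount_rate = 0.10
present_value = sum(cf / (1 + discount_rate) ** (t + 1)
                    for t, cf in enumerate(cash_back))
npv = present_value - investment            # about -13.7 -> a loss in today's dollars

print(f"Nominal gain: {nominal_gain:+.1f}")
print(f"Net present value at {discount_rate:.0%}: {npv:+.1f}")
```

The same dollars that "break even" in nominal terms show a loss once each payment is valued in today’s dollars – which is the whole dispute in miniature.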

The crux of the argument is whether the government’s investment strategy should be evaluated the same way a private investor would evaluate a potential investment. Massad confronted this head-on in his rebuttal to Indiviglio (delivered through Politico’s Morning Money):

“The [Indiviglio] piece doesn’t look at the math correctly in light of the purpose of, and need for, the TARP investments. The government isn’t a hedge fund and nor should it have acted like one. We made these investments to help put out the financial fire and prevent a second Great Depression. And it’s certainly good news for taxpayers that we’re going to get most if not all of the money back…”

While the “money-in-money-out” approach has obvious intuitive appeal, there are actually ways of demonstrating its limitations. One stems from the Congressional Oversight Panel (COP) for TARP, which looked at this issue back in 2008 and 2009. The COP commissioned a valuation project to determine whether the Treasury received a fair value price for its investments. The COP found that “additional information about the value of the TARP transactions could be derived by comparing those transactions to three large transactions involving private sector investors that were undertaken in the same time period.”

  • Berkshire Hathaway purchased an interest in Goldman Sachs (September 2008)
  • Mitsubishi UFJ announced an investment in Morgan Stanley (September 2008)
  • Qatar Holdings LLC purchased a stake in Barclays (October 2008)

While COP noted these private investments were not perfect analogs, comparing these transactions with the government’s investments did reveal that “unlike Treasury, private investors received securities with a fair market value … of at least as much as they invested, and in some cases, worth substantially more.” The COP valuation report concluded that: “Treasury paid substantially more for the assets it purchased under the TARP than their then-current market value.”

Here is the key table from the COP report:

This table shows that the government injected capital into these institutions at a 28 percent premium (on average) to what private investors were willing to pay; the table’s 22 percent figure is the subsidy rate, or the percentage of the purchase price that went directly to bank management and shareholders. Note that these figures were calculated after taking into account any boost in value the financial firms got from the announcement that they would be receiving support under TARP’s Capital Purchase Program.
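
As a back-of-the-envelope check on how those two figures relate, take the COP’s 22 percent average subsidy rate and apply it to a hypothetical $100 injection:

```python
# Back-of-the-envelope relation between the COP's premium and subsidy figures.
# The $100 injection is hypothetical; 22% is the COP's average subsidy rate.

amount_paid = 100.0
subsidy_rate = 0.22  # share of the purchase price effectively given away
fair_value_received = amount_paid * (1 - subsidy_rate)  # $78 of securities

premium_over_fair_value = amount_paid / fair_value_received - 1
print(f"Premium paid over fair value: {premium_over_fair_value:.0%}")  # ~28%
```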

At its most basic level, Massad’s argument is fairly circular: what Treasury did was rescue the financial system, which was good because it rescued the financial system. That is to say, the capital injections through TARP broke even, on average, largely because the injections themselves stabilized the financial system. But this valuation debate is not about whether the government should have injected capital into these institutions – that is taken as a given. The question is whether the capital should have been injected on such concessionary terms. Sure, Warren Buffett didn’t have enough capital to rescue the entire financial system, but why couldn’t the government have driven the same bargain he did?

Indiviglio concludes his argument on TARP by reinforcing this key point:

Even using a more conservative discount rate of 10 percent would still leave the loss at over $190 billion. The U.S. Treasury isn’t a hedge fund, so was willing to invest poorly for the bigger, unquantifiable return delivered by stability. But rather than try and obscure the painful price tag of its rescue, it should be emphasizing that avoiding a global meltdown was worth the cost.

This passage also identifies the key variable – the discount rate – that determines the true cost of the program. Most people don’t know this, but an “only in Washington” law (the Federal Credit Reform Act) requires that, to project the costs of a federal loan program, official scorekeepers discount the expected loan cash flows using risk-free U.S. Treasury interest rates. There is no adjustment for “market risk” – the likelihood that loan defaults will be higher during times of economic stress, and therefore more costly.
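
To see how much the discount-rate choice matters, consider a stylized sketch; the loan terms, default expectations and rates below are illustrative assumptions, not figures from any actual federal program:

```python
# Stylized FCRA-style scoring of a one-period loan guarantee.
# All rates and repayment figures below are illustrative assumptions.

loan = 100.0
promised_repayment = 106.0  # principal plus 6% interest, due in one year (assumed)
expected_repayment = 103.0  # after expected defaults and recoveries (assumed)

risk_free_rate = 0.02       # Treasury rate, as required under the Federal Credit Reform Act
risky_rate = 0.05           # rate a private lender would demand, adding market risk (assumed)

# Official score: discount expected cash flows at the risk-free rate.
fcra_value = expected_repayment / (1 + risk_free_rate)  # ~101.0 -> books a "profit"

# Fair-value score: discount the same cash flows at a risk-adjusted rate.
fair_value = expected_repayment / (1 + risky_rate)      # ~98.1 -> shows a cost

print(f"FCRA score:       {fcra_value - loan:+.1f} per $100 lent")
print(f"Fair-value score: {fair_value - loan:+.1f} per $100 lent")
```

Identical cash flows, identical default expectations – yet one set of books shows a profit and the other a cost, purely because of the discount rate.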

Jason Delisle of the New America Foundation has written extensively on this topic, arguing that when the government values risky investments using only risk-free discount rates, lawmakers have a perverse incentive to expand rather than limit the government’s loan programs. This is because a private-sector institution extending the same loans (say, on the exact same terms) would be required to factor in market risk. When this difference makes a government program’s lending appear profitable, policymakers are inclined to expand the program to capture more of these fictitious profits (which, conveniently, they can also spend on other programs, even if the returns never materialize).

The confusion about how to view the U.S. government’s role as an investor has led many in Washington to argue that loan programs can subsidize everything from mortgages to student loans – all at no cost to the taxpayer. The principal concern with evaluating government and private investments differently is that the government’s purported profits are often cited as proof of an inherent advantage the government has over the private sector in delivering credit. The truth is that this result generally stems from a less-than-full accounting of the risks taxpayers have been made to bear. (For more, see the work of Deborah Lucas at MIT, who also consulted on the COP report and other CBO studies.)

Jason Delisle and I wrote a piece at Economics21 last month spotlighting a very revealing comment that Shaun Donovan, secretary of the Department of Housing and Urban Development (HUD), made before Congress in defense of the status quo in government accounting. Donovan argued that the Federal Housing Administration could provide 100 percent guarantees on the credit risk of low-downpayment mortgages, charge less than a private company would for bearing that risk and still make a profit for taxpayers. In his view, the FHA “doesn’t have shareholders,” and it doesn’t “have a need for return on equity.”

Here is the bottom line: When the government issues a loan guarantee, it’s the taxpayers who become the equity investors. They are the ones who will be asked to make up any unexpected loss on the loans over time. U.S. Treasury holders certainly won’t be asked to take a haircut.

Just because the government, rather than a private company, extends a loan doesn’t mean that the market risk vanished. Taxpayers would be better off if the government’s accounting rules for its credit programs reflected that there is only one true cost of capital – and it’s the price investors are willing to pay in the market.

PHOTO: Boxing gloves during a training session of heavyweight boxing titleholder Vladimir Klitschko of Ukraine in Duesseldorf, March 17, 2010. REUTERS/Ina Fassbender

COMMENT

Don’t call them investments if you don’t want to be held to the same standards as any other investor. It’s a useful euphemism politically, but also very misleading.

Posted by MBoulNZ

Can Silicon Valley fix the mortgage market?

Christopher Papagianis
Apr 25, 2012 12:12 EDT

Without question, the rise of social networks has been the dominant theme in Silicon Valley over the past few years. Platforms like Facebook and Twitter have inspired countless startups looking to latch on to these networks to deliver new applications and services to consumers. In many ways, the glue that binds these enterprises is an advanced ability to organize and analyze the reams of user data the networks generate. Entirely new business models have emerged to capitalize on this improved understanding of consumer preferences and behavior.

Over the last couple of years, the analytics experts in Silicon Valley have started to turn their attention to other big data problems. A question that is increasingly attracting their attention is: How can the fallout from the subprime mortgage crisis be better managed for all the players involved, including at-risk homeowners, lenders, mortgage servicers and investors?

We’ve heard a lot about the near-universal frustration that at-risk borrowers have had with their mortgage servicers. The common refrain is that if servicers could make smarter and quicker decisions about modifying the terms of individual mortgages, there would be fewer foreclosures on the margin, and lenders and mortgage investors would lose less money in aggregate, since the foreclosure process itself is costly.
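
To make that trade-off concrete, here is a minimal sketch of the expected-recovery comparison a servicer might run; every probability, cost and recovery figure below is a hypothetical assumption:

```python
# Expected-recovery comparison: modify the loan vs. foreclose.
# Every number below is a hypothetical assumption for illustration.

balance = 200_000.0

# Foreclosure: lengthy process, legal and carrying costs, distressed sale price.
foreclosure_recovery = 0.55 * balance  # assumed 45% loss severity on the sale
foreclosure_costs = 30_000.0           # assumed legal and carrying costs
foreclosure_net = foreclosure_recovery - foreclosure_costs  # 80,000

# Modification: reduce the payment, accept some chance the borrower re-defaults.
modified_value = 0.80 * balance  # assumed value of the modified loan if it performs
redefault_prob = 0.35            # assumed probability of re-default
modification_net = ((1 - redefault_prob) * modified_value
                    + redefault_prob * foreclosure_net)     # 132,000

print(f"Expected net from foreclosure:  ${foreclosure_net:,.0f}")
print(f"Expected net from modification: ${modification_net:,.0f}")
```

Under these assumptions the modification wins easily – and the faster and more accurately a servicer can estimate the re-default probability for a particular borrower, the more often it can capture that difference.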

For many, the challenges in this area are about asymmetries of information or structural market frictions, since win-win outcomes aren’t realized as often as they should be in an otherwise efficient marketplace. Glenn Hubbard and Chris Mayer at Columbia University have developed plans for addressing some of the frictions that have blocked borrowers from taking advantage of today’s low interest rates by refinancing their mortgages. But now new companies, some with their roots firmly established in Silicon Valley, are eyeing the mortgage servicing market as fertile ground for deploying their creative and analytical firepower.

A prime example is Palantir Technologies (pronounced Pal-an-TEER). At its core, Palantir develops platforms that help other companies integrate and analyze their data. Initially, Palantir’s focus was on the intelligence and defense community, helping organizations like the CIA and FBI ferret out terrorist activity. Analogous platforms have since been developed to help financial institutions comb through their networks to identify suspicious or fraudulent transactions. Hedge funds, including one of the world’s largest – Bridgewater Associates LP – have also knocked on Palantir’s door looking for ways to leverage its extensible platform to better process and integrate investment-related data and research that often come from multiple sources.

Joe Lonsdale, co-founder of Palantir, and his colleague Rosco Hill recently gave a TEDx New Wall Street presentation on how this Silicon Valley-to-Wall Street workstream is evolving and how it could improve mortgage servicing. One of the underappreciated problems in the mortgage market today is that most servicers are still playing catch-up from when the housing bubble burst and their systems became overloaded with non-performing mortgages. A top U.S. housing regulator – the Federal Housing Finance Agency (FHFA) – has even launched an initiative to restructure the way mortgage servicers are compensated, in an effort to build a more durable servicing industry that is better prepared for boom-and-bust cycles.

The activities and costs associated with servicing a performing mortgage differ dramatically from those of a non-performing one. Before the housing crisis, when home prices were rising and foreclosure levels were low, the servicing industry was primarily thought of as a payments-processing business (i.e., sending out forms to borrowers, collecting payments and passing cash flows on to lenders or investors). The best servicers were the most efficient processors, looking at each turn for new ways to streamline their systems, reduce costs and achieve economies of scale to maximize returns.

Servicing non-performing mortgages, however, is a labor-intensive business. It can be difficult if not impossible to achieve economies of scale, since many mortgage workouts or modification strategies involve direct or personal interactions with borrowers. The underinvestment in servicing technology heading into the housing crisis was perhaps best summarized by FHFA:

Prior to 2007, servicers were mainly focused on building efficiencies in the servicing of performing loans in order to reduce costs and optimize financial returns. Relatively few chose to invest in the technology, systems, infrastructure and staff needed to service large or rapidly growing volumes of non-performing loans. Consequently, many servicers were ill-prepared to efficiently process the high numbers of delinquencies that occurred after the housing market collapsed. Since then, the servicing industry has increased its investment in the processes and technologies needed to meet the challenge of servicing non-performing loans in today’s environment.

While the five big banks that dominate the servicing industry (Wells Fargo, Bank of America, Citigroup, JPMorgan Chase and Ally Financial) have increased their investments in servicing-related technologies and infrastructure over the past few years, smaller servicers are also now looking to gain market share. Part of this emerging story is about the new data platforms that special servicers are utilizing to distinguish themselves from some of their competitors.

One area where new technologies are starting to make a difference is in helping servicers approve short-sale transactions as an alternative to foreclosure. (A short sale is when the lender accepts less than the full amount owed on the debt in a sales transaction but still releases its claim on the underlying real estate.) The promise of new technology platforms on this front is that they can connect different data sets on home prices and other variables, whereas many big-bank servicing platforms still rely on closed systems that don’t easily integrate all the public and proprietary data sources available. Better data integration allows for a more comprehensive search and discovery process, which can help servicers determine a fair price for a home in a declining market.
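
As a toy illustration of the kind of data integration described above – and emphatically not a description of Palantir’s or any vendor’s actual method – a servicer might blend several independent price estimates, weighted by how much it trusts each source:

```python
# Toy illustration: blending independent home-price estimates for a short sale.
# The sources, estimates and trust weights are all invented for illustration.

# (source, price estimate, weight reflecting trust in the source)
estimates = [
    ("county tax assessment", 195_000.0, 1.0),
    ("automated valuation model", 210_000.0, 2.0),
    ("recent comparable sales", 205_000.0, 3.0),
]

total_weight = sum(w for _, _, w in estimates)
blended = sum(price * w for _, price, w in estimates) / total_weight

print(f"Blended fair-value estimate: ${blended:,.0f}")  # $205,000 here
```

The point is not the particular weights but the plumbing: an open platform can pull all three sources into one calculation, where a closed system might see only one of them.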

The ultimate goal is to find the “efficient” spot on the spectrum between fully automated and individually personalized mortgage modifications. The key is using all the data that’s out there to better understand why individual borrowers are at risk of foreclosure, to learn how better data can speed up servicers’ decisions when they evaluate modification options, and to identify common factors that could lead to the development and deployment of more personalized foreclosure-avoidance strategies.

The mortgage market has long been driven by quantitative analytics, but Joe Lonsdale and Rosco Hill framed a key question in their TEDx presentation that suggests a transformation of sorts is playing out at the nexus of Silicon Valley and Wall Street. Describing an exchange between a Palantir team and a large bank that was evaluating new servicing technologies, they recalled a bank executive asking both his own IT-servicing managers and the Palantir data mavens in the room whether the answers to today’s servicing challenges lay more in mechanical or in creative solutions. The answer is both – but it’s the underappreciated role of creativity in building the data platform itself (i.e., turning an analytical tool into a decision-making platform) that gives Silicon Valley the edge in delivering a real breakthrough in mortgage servicing.

PHOTO: Realtor and bank-owned signs displayed near a house for sale in Phoenix, Arizona, January 4, 2011. REUTERS/Joshua Lott

COMMENT

This is not a big data problem. The largest mortgage loan originator and servicer was Countrywide, now Bank of America. The computerized Countrywide Loan Underwriting Expert System (CLUES) processed all the Uniform Residential Loan Applications (URLA, Form 1003) to determine all the variations of loan criteria, summarized with a decision: “RECOMMENDED” or “NOT RECOMMENDED”.

The problem is not that the extensive and detailed database was inadequate, nor the underwriting – aka “Artificial Intelligence” – risks not determined. The problem was that a lot of people decided to ignore these “recommendations” in pursuit of higher returns.

The phony “complex” risk models created to justify high-risk low-doc & no-doc loans (not just subprime) to known unqualified borrowers weren’t questioned as to one basic assumption: “Housing prices will always go up.”

We don’t need any geniuses from Silicon Valley or anywhere else to tell us what went wrong with the casino culture of Countrywide, Fannie Mae, Freddie Mac, IndyMac, AIG, Citigroup, JP Morgan Chase, Wells Fargo and now Bank of America.

I used to work for two of these entities. The fraud was well known, not exactly a secret to thousands of employees, including managers and executives. What’s being done about changing this casino culture? Outside of some curious pieces on “60 Minutes” and detailed discussions on “Moyers & Company”, essentially nothing beyond talk. The Department of Justice continues to sit on its hands and bemoan the lack of funding to pursue these elusive thousands of potential witnesses.

BTW, this was (and is) not primarily a subprime loan crisis, as the media keeps harping, but a serious problem with obsessive gambling involving low-doc and no-doc loans – a large portion of them ARMs – with losses guaranteed 100 percent by the federal government. This hasn’t changed one whit.

The Tenth Percenters win. Their capital is preserved and they continue to receive better than ten percent returns from the casino “banks”.

Posted by ptiffany