Tuesday, December 11, 2012

Remembering Albert Hirschman

Albert Hirschman, among the greatest of social scientists, has died. He was truly one of a kind: always trespassing, relentlessly self-subversive, and never constrained by disciplinary boundaries.

Hirschman's life was as extraordinary as his work. Born in Berlin in 1915, he was educated in French and German, and would later gain fluency in Italian, then Spanish and English. He fled Berlin for Paris in 1933 and joined the French army in 1939. Fearful of being shot as a traitor by advancing German forces, he took on a new identity as a Frenchman, Albert Hermant. In 1941 he migrated to the United States, met and married Sarah Hirschman, joined the US Army, and soon found himself back in Europe as part of the war effort. After the end of hostilities he was involved in the development of the Marshall Plan, and subsequently spent four years in Bogotá, where many of his ideas on economic development took shape. He and Sarah were married for more than seven decades; she died in January of this year.

Not only did Hirschman write several brilliant books in what was his fourth or fifth language, he also entertained himself by inventing palindromes. Many of these were collected in a book, Senile Lines by Dr. Awkward, which he presented to his daughter Katya. Forms of expression mattered to him as much as the ideas themselves. In opposition to Mancur Olson, he believed that collective action came naturally to humans, and he was thrilled to find that one could invert a phrase from the Declaration of Independence to express this inclination as "the happiness of pursuit."

Hirschman's intellectual contributions were many and varied, but the jewel in the crown is his masterpiece Exit, Voice, and Loyalty. In this one slim volume he managed to overturn conventional wisdom on one issue after another and to chart several new directions for research. The book is concerned with the mechanisms that can arrest and reverse declines in the performance of firms, organizations, and states. It was the interplay of two such mechanisms - desertion and articulation, or exit and voice - that Hirschman considered to be of central importance.

Exit, for instance through the departure of customers or employees or citizens in favor of a rival, can alert an organization to its own decline and set in motion corrective measures. But so can voice, or the articulation of discontent. Too rapid a rate of exit can undermine voice and result in organizational collapse instead of recovery. But a complete inability to exit can make voice futile, and poor performance can continue indefinitely.

Poorly functioning organizations prefer that an exit option be available to their most strident critics, so that they are left with less demanding customers or members or citizens. Hence a moderate amount of exit can result in the worst of all worlds, "an oppression of the weak by the incompetent and an exploitation of the poor by the lazy which is the more durable and stifling as it is both unambitious and escapable." Near-monopolies with exit options for the most severely discontented can therefore function more poorly than complete monopolies. It is not surprising that many dysfunctional states welcome the voluntary exile of their fiercest internal critics.

The propensity to exit is itself determined by the extent of loyalty to a firm or state. Loyalty slows down the rate of exit and can allow an organization time to recover from lapses in performance. But blind loyalty, which stifles voice even as it prevents exit, can allow poor performance to persist. It is in the interest of organizations to promote loyalty and raise the "price of exit", but the short term gains from doing so can lead to eventual collapse as both mechanisms for recuperation are weakened.

Among Hirschman's many targets were the Downsian model of political competition and the Median Voter Theorem. Since he considered collective action to be an expression of voice, readily adopted in response to dissatisfaction, there was no such thing as a "captive voter." Those on the fringes of a political party could not be taken for granted simply because they had no exit option: the inability to exit just strengthened their inclination to exercise voice. This they would do with relish, driving parties away from the median voter, as political leaders trade off the fear of exit by moderates against the threat of voice by extremists.

Albert Hirschman lived a long and eventful life and was a joyfully iconoclastic thinker. His books will be read by generations to come. But he will always remain something of an outsider in the profession; his ideas are just too broad and interdisciplinary to find neat expression in models and textbooks. He was an intellectual rebel throughout his life, and it is only fitting that he remain so in perpetuity. 

Friday, December 07, 2012

Risk and Reward in High Frequency Trading

A paper on the profitability of high frequency traders has been attracting a fair amount of media attention lately. Among the authors is Andrei Kirilenko of the CFTC, whose earlier study of the flash crash used similar data and methods to illuminate the ecology of trading strategies in the S&P 500 E-mini futures market. While the earlier work examined transaction level data for four days in May 2010, the present study looks at the entire month of August 2010. Some of the new findings are startling, but they need to be interpreted with greater care than is taken in the paper.

High frequency traders are characterized by large volume, short holding periods, and limited overnight and intraday directional exposure:
For each day there are three categories a potential trader must satisfy to be considered a HFT: (1) Trade more than 10,000 contracts; (2) have an end-of-day inventory position of no more than 2% of the total contracts the firm traded that day; (3) have a maximum variation in inventory scaled by total contracts traded of less than 15%. A firm must meet all three criteria on a given day to be considered engaging in HFT for that day. Furthermore, to be labeled an HFT firm for the purposes of this study, a firm must be labeled as engaging in HFT activity in at least 50% of the days it trades and must trade at least 50% of possible trading days. 
Of more than 30,000 accounts in the data, only 31 fit this description. But these firms dominate the market, accounting for 47% of total trading volume and appearing on one or both sides of almost 75% of traded contracts. And they do this with minimal directional exposure: average intraday inventory amounts to just 2% of trading volume, and the overnight inventory of the median HFT firm is precisely zero.
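Since the screening rule is purely mechanical, it can be written down directly. Here is a minimal sketch in Python of the daily and firm-level filters as I read them; the record layout and function names are my own invention, not the paper's, though the thresholds are the ones quoted above.

```python
# A sketch of the paper's screening rule as quoted above. The record layout
# and names are hypothetical; the thresholds are the ones stated in the paper.

def is_hft_day(contracts_traded, end_of_day_inventory, max_inventory_swing):
    """The three daily criteria: volume, end-of-day inventory, and the
    maximum intraday variation in inventory, both scaled by volume."""
    return (contracts_traded > 10_000
            and abs(end_of_day_inventory) <= 0.02 * contracts_traded
            and max_inventory_swing < 0.15 * contracts_traded)

def is_hft_firm(daily_records, possible_trading_days):
    """A firm is labeled HFT if it satisfies the daily criteria on at least
    half the days it trades, and trades on at least half of possible days."""
    days_traded = len(daily_records)
    if days_traded < 0.5 * possible_trading_days:
        return False
    hft_days = sum(is_hft_day(*day) for day in daily_records)
    return hft_days >= 0.5 * days_traded
```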

This small set of firms is then further subdivided into categories based on the extent to which they are providers of liquidity. For any given trade, the liquidity taker is the firm that initiates the transaction, by submitting an order that is marketable against one that is resting in the order book. The counterparty to the trade (who previously submitted the resting limit order) is the liquidity provider. Based on this criterion, the authors partition the set of high frequency traders into three subcategories: aggressive, mixed, and passive:
To be considered an Aggressive HFT, a firm must... initiate at least 40% of the trades it enters into, and must do so for at least 50% of the trading days in which it is active. To be considered a Passive HFT a firm must initiate fewer than 20% of the trades it enters into, and must do so for at least 50% of the trading days during which it is active. Those HFTs that meet neither... definition are labeled as Mixed HFTs. There are 10 Aggressive, 11 Mixed, and 10 Passive HFTs.
This heterogeneity among high frequency traders conflicts with the common claim that such firms are generally net providers of liquidity. In fact, the authors find that "some HFTs are almost 100% liquidity takers, and these firms trade the most and are the most profitable."

Given the richness of their data, the authors are able to compute profitability, risk-exposure, and measures of risk-adjusted performance for all firms. Gross profits are significant on average but show considerable variability across firms and over time. The average HFT makes over $46,000 a day; aggressive firms make more than twice this amount. The standard deviation of profits is five times the mean, and the authors find that "there are a number of trader-days in which they lose money... several HFTs even lose over a million dollars in a single day."

Despite the volatility in daily profits, the risk-adjusted performance of high frequency traders is found to be spectacular:
HFTs earn above-average gross rates of return for the amount of risk they take. This is true overall and for each type... Overall, the average annualized Sharpe ratio for an HFT is 9.2. Among the subcategories, Aggressive HFTs (8.46) exhibit the lowest risk-return tradeoff, while Passive HFTs do slightly better (8.56) and Mixed HFTs achieve the best performance (10.46)... The distribution is wide, with an inter-quartile range of 2.23 to 13.89 for all HFTs. Nonetheless, even the low end of HFT risk-adjusted performance is seven times higher than the Sharpe ratio of the S&P 500 (0.31).
These are interesting findings, but there is a serious problem with this interpretation of risk-adjusted performance. The authors are observing only a partial portfolio for each firm, and cannot therefore determine the firm's overall risk exposure. It is extremely likely that these firms are trading simultaneously in many markets, in which case their exposure to risk in one market may be amplified or offset by their exposures elsewhere. The Sharpe ratio is meaningful only when applied to a firm's entire portfolio, not to any of its individual components. For instance, it is possible to construct a low risk portfolio with a high Sharpe ratio that is composed of several high risk components, each of which has a low Sharpe ratio.
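The last point is easy to demonstrate with simulated data. The sketch below builds a two-leg position in which each leg is individually volatile and has a modest Sharpe ratio, while the combined book is nearly riskless and has a very high one; all numbers are purely illustrative.

```python
# Illustrative simulation: two high-risk, low-Sharpe legs whose risks offset,
# producing a low-risk, high-Sharpe combined portfolio. All numbers are
# hypothetical and chosen only to make the decomposition point above.
import numpy as np

rng = np.random.default_rng(0)
days = 250
edge = 0.05                                # small daily edge earned by each leg
common_shock = rng.normal(0.0, 1.0, days)  # large risk shared by the two legs

leg_a = edge + common_shock + rng.normal(0.0, 0.05, days)  # e.g., a futures leg
leg_b = edge - common_shock + rng.normal(0.0, 0.05, days)  # e.g., an offsetting leg
book = leg_a + leg_b                                       # the firm's true position

def annualized_sharpe(pnl):
    return np.sqrt(250) * pnl.mean() / pnl.std()

for name, pnl in [("leg A", leg_a), ("leg B", leg_b), ("combined", book)]:
    print(f"{name:9s} Sharpe: {annualized_sharpe(pnl):6.1f}")
# The combined book's Sharpe ratio is far higher than either leg's, even though
# each leg is far riskier: observing a single leg badly misstates the firm's risk.
```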

To take an extreme example, if an aggressive firm is attempting to exploit arbitrage opportunities between the futures price and the spot price of a fund that tracks the index, then the authors would have significantly overestimated its risk exposure by looking only at its position in the futures market. Over short intervals, such a strategy would result in losses in one market, offset and exceeded by gains in another. Within each market the firm would appear to have significant risk exposure, even while its aggregate exposure was minimal. Over longer periods, net gains will be more evenly distributed across markets, so the profitability of the strategy can be revealed by looking at just one market. But doing so would provide a very misleading picture of the firm's risk exposure, since day-to-day variations in profitability within a single market can be substantial.

The problem is compounded by the fact that there are likely to be systematic differences across firms in the degree to which they are trading in other markets. I suspect that the most aggressive firms are in fact trading across multiple markets in a manner that lowers rather than amplifies their exposure in the market under study. Under such circumstances, the claim that aggressive firms "exhibit the lowest risk-return tradeoff" is without firm foundation.

Despite these problems of interpretation, the paper is extremely valuable because it provides a framework for thinking about the aggregate costs and benefits of high frequency trading. Since contracts in this market are in zero net supply, any profits accruing to one set of traders must come at the expense of others:
From whom do these profits come? In addition to HFTs, we divide the remaining universe of traders in the E-mini market into four categories of traders: Fundamental traders (likely institutional), Non-HFT Market Makers, Small traders (likely retail), and Opportunistic traders... HFTs earn most of their profits from Opportunistic traders, but also earn profits from Fundamental traders, Small traders, and Non-HFT Market Makers. Small traders in particular suffer the highest loss to HFTs on a per contract basis.
Within the class of high frequency traders is another hierarchy: mixed firms lose to aggressive ones, and passive firms lose to both of the other types.

The operational costs incurred by such firms include payments for data feeds, computer systems, co-located servers, exchange fees, and highly specialized personnel. Most of these costs do not scale up in proportion to trading volume. Since even the least active firms must be covering these largely fixed costs in order to survive, and the most aggressive firms earn far greater gross profits on a comparable cost base, the net returns of the most aggressive traders must be substantial.

In thinking about the aggregate costs and benefits of all this activity, it's worth bringing to mind Bogle's law:
It is the iron law of the markets, the undefiable rules of arithmetic: Gross return in the market, less the costs of financial intermediation, equals the net return actually delivered to market participants.
The costs to other market participants of high frequency trading correspond roughly to the gross profitability of this small set of firms. What about the benefits? The two most commonly cited are price discovery and liquidity provision. It appears that the net effect on liquidity of the most aggressive traders is negative even under routine market conditions. Furthermore, even normally passive firms can become liquidity takers under stressed conditions when liquidity is most needed but in vanishing supply.

As far as price discovery is concerned, high frequency trading is based on a strategy of information extraction from market data. This can speed up the response to changes in fundamental information, and maintain price consistency across related assets. But the heavy lifting as far as price discovery is concerned is done by those who feed information to the market about the earnings potential of publicly traded companies. This kind of research cannot (yet) be done algorithmically.

A great deal of trading activity in financial markets is privately profitable but wasteful in the aggregate, since it involves a shuffling of net returns with no discernible effect on production or economic growth. Jack Hirshleifer made this point way back in 1971, when the financial sector was a fraction of its current size. James Tobin reiterated these concerns a decade or so later. David Glasner, who was fortunate enough to have studied with Hirshleifer, has recently described our predicament thus:
Our current overblown financial sector is largely built on people hunting, scrounging, doing whatever they possibly can, to obtain any scrap of useful information — useful, that is for anticipating a price movement that can be traded on. But the net value to society from all the resources expended on that feverish, obsessive, compulsive, all-consuming search for information is close to zero (not exactly zero, but close to zero), because the gains from obtaining slightly better information are mainly obtained at some other trader’s expense. There is a net gain to society from faster adjustment of prices to their equilibrium levels, and there is a gain from the increased market liquidity resulting from increased trading generated by the acquisition of new information. But those gains are second-order compared to gains that merely reflect someone else’s losses. That’s why there is clearly overinvestment — perhaps massive overinvestment — in the mad quest for information.
To this I would add the following: too great a proliferation of information extracting strategies is not only wasteful in the aggregate, it can also result in market instability. Any change in incentives that substantially lengthens holding periods and shifts the composition of trading strategies towards those that transmit rather than extract information could therefore be both stabilizing and growth enhancing. 

Wednesday, November 28, 2012

Death of a Prediction Market

A couple of days ago Intrade announced that it was closing its doors to US residents in response to "legal and regulatory pressures." American traders are required to close out their positions by December 23rd, and withdraw all remaining funds by the 31st. Liquidity has dried up and spreads have widened considerably since the announcement. There have even been sharp price movements in some markets with no significant news, reflecting a skewed geographic distribution of beliefs regarding the likelihood of certain events.

The company will survive, maybe even thrive, as it adds new contracts on sporting events to cater to its customers in Europe and elsewhere. But the contracts that made it famous - the US election markets - will dwindle and perhaps even disappear. Even a cursory glance at the Intrade forum reveals the importance of its US customers to these markets. Individuals from all corners of the country, with views spanning the ideological spectrum and detailed knowledge of their own political subcultures, will no longer be able to participate. There will be a rebirth at some point, perhaps launched by a new entrant with regulatory approval, but for the moment there is a vacuum in a once vibrant corner of the political landscape.

The closure was precipitated by a CFTC suit alleging that the company "solicited and permitted" US persons to buy and sell commodity options without being a registered exchange, in violation of US law. But it appears that hostility to prediction markets among regulators runs deeper than that, since an attempt by Nadex to register and offer binary options contracts on political events was previously denied on the grounds that "the contracts involve gaming and are contrary to the public interest."

The CFTC did not specify why exactly such markets are contrary to the public interest, and it's worth asking what the basis for such a position might be.

I can think of two reasons, neither of which is particularly compelling in this context. First, all traders have to post margin equal to their worst-case loss, even though in the aggregate the payouts from all bets net to zero. This means that cash is tied up as collateral to support speculative bets when it could be put to more productive uses, such as the financing of investment. This is a capital diversion effect. Second, even though the exchange claims to keep this margin in segregated accounts, separate from company funds, there is always the possibility that its deposits are not fully insured and could be lost if the Irish banking system were to collapse. These losses would ultimately be incurred by traders, who would then have very limited legal recourse.
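To see the size of the first effect per contract, note that under a worst-case-loss margin rule the two sides of a $10 binary contract together tie up exactly its face value. This is a stylized calculation, ignoring fees:

```latex
% Collateral tied up by one matched $10 binary contract under a
% worst-case-loss margin rule (a stylized calculation, ignoring fees).
\[
\underbrace{p}_{\text{buyer's margin}}
\;+\;
\underbrace{(10 - p)}_{\text{seller's margin}}
\;=\; \$10
\qquad \text{per contract of open interest, whatever the price } p .
\]
```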

These arguments are not without merit. But if one really wanted to restrain the diversion of capital to support speculative positions, Intrade is hardly the place to start. Vastly greater amounts of collateral are tied up in support of speculation using interest rate and currency swaps, credit derivatives, options, and futures contracts. It is true that such contracts can also be used to reduce risk exposures, but so can prediction markets. Furthermore, the volume of derivatives trading has far exceeded levels needed to accommodate hedging demands for at least a decade. Sheila Bair recently described synthetic CDOs and naked CDSs as "a game of fantasy football" with unbounded stakes. In comparison with the scale of betting in licensed exchanges and over-the-counter swaps, Intrade's capital diversion effect is truly negligible.

The second argument, concerning the segregation and safety of funds, is more relevant. Even if the exchange maintains a strict separation of company funds from posted margin despite the absence of regulatory oversight, there's always the possibility that its deposits in the Irish banking system are not fully secure. Sophisticated traders are well aware of this risk, which could be substantially mitigated (though clearly not eliminated entirely) by licensing and regulation.

In judging the wisdom of the CFTC action, it's also worth considering the benefits that prediction markets provide. Attempts at manipulation notwithstanding, it's hard to imagine a major election in the US without the prognostications of pundits and pollsters being measured against the markets. They have become part of the fabric of social interaction and conversation around political events.

But from my perspective, the primary benefit of prediction markets has been pedagogical. I've used them frequently in my financial economics course to illustrate basic concepts such as expected return, risk, skewness, margin, short sales, trading algorithms, and arbitrage. Intrade has been generous with its data, allowing public access to order books, charts and spreadsheets, and this information has found its way over the years into slides, problem sets, and exams. All of this could have been done using other sources and methods, but the canonical prediction market contract - a binary option on a visible and familiar public event - is particularly well suited for these purposes.

The first time I wrote about prediction markets on this blog was back in August 2003. Intrade didn't exist at the time but its precursor, Tradesports, was up and running, and the Iowa Electronic Markets had already been active for over a decade. Over the nine years since that early post, I've used data from prediction markets to discuss arbitrage, overreaction, manipulation, self-fulfilling prophecies, algorithmic trading, and the interpretation of prices and order books. Many of these posts have been about broader issues that also arise in more economically significant markets, but can be seen with great clarity in the Intrade laboratory.

It seems to me that the energies of regulators would be better directed elsewhere, at real and significant threats to financial stability, instead of being targeted at a small scale exchange which has become culturally significant and serves an educational purpose. The CFTC action just reinforces the perception that financial sector enforcement in the United States is a random, arbitrary process and that regulators keep on missing the wood for the trees.

---

Update: NPR's Yuki Noguchi follows up with Justin Wolfers, Thomas Bell, Laurence Lau, and Jason Ruspini here; definitely worth a listen. Brad Plumer's overview of the key issues is also worth a look.

Sunday, November 18, 2012

Curtailing Intellectual Monopoly

I never thought I'd see an RSC policy brief referring to mash-ups and mix-tapes, but I was clearly mistaken.

The document deals in an unusually frank manner with the dismal state of US copyright law. Perhaps too frankly: it was quickly disavowed and taken down on the grounds that publication had occurred "without adequate review." Copies continue to circulate, of course (the link above is to one I posted on Scribd). Although lightly peppered with ideological boilerplate, the brief makes a number of timely and sensible points and is worth reading in full.

Aside from extolling the virtues of "a robust culture of DJ’s and remixing" free from the stranglehold of copyright protection, the authors of the report make the following claims. First, the purpose of copyright law, according to the constitution, is to "promote the progress of science and useful arts" and not to "compensate the creator of the content." Copyright law should therefore be evaluated by the degree to which it facilitates innovation and creative expression. Second, unlike conventional tort law, statutory damages for infringement are "vastly disproportionate from the actual damage to the copyright producer." For instance, Limewire was sued for $75 trillion, "more money than the entire music recording industry has made since Edison’s invention of the phonograph in 1877." Third, the duration of coverage has been expanding, seemingly without limit. In 1790 a 14 year term could be renewed once if the author remained alive; current coverage is for the life of the author plus 70 years. This stifles rather than promotes creative activity.

The economists Michele Boldrin and David Levine have been making these points for years. In their book Against Intellectual Monopoly (reviewed here), they point out that the pace of innovation in industries without patent and copyright protection has historically been extremely rapid. Software could not be patented before 1981, nor financial securities prior to 1998, yet both industries witnessed innovation at a blistering pace. The fashion industry remains largely untouched by intellectual property law, yet new designs keep appearing and enriching their creators. Innovative techniques in professional sports continue to be developed, despite the fact that successful ones are quickly copied and disseminated.

In 19th century publishing, British authors had limited protection in the United States but managed to secure lucrative deals with publishers, allowing the latter to saturate the market at low prices before new entrants could gain a foothold. More recently, commercial publishers have turned a profit selling millions of copies of unprotected government documents. For instance, the 9/11 Commission Report was published by both Norton and Macmillan in 2004, and a third version by Cosimo is now available.

Copyright restrictions for scientific papers are especially illogical, since faculty authors benefit from the widest possible dissemination and citation of their work. Furthermore, in the case of journals owned by commercial publishers, copyright is typically transferred by the author to the publisher. Neither the content creators nor the uncompensated peer reviewers who evaluate manuscripts for publication benefit from protection in such cases. Fortunately, thanks to the emergence of new high-quality open-access journals sponsored by academic societies, things are starting to change.

It's not clear why the policy brief was taken down, or what motivated it in the first place. Henry Farrell, while agreeing with the positions taken in the report, argues that damage to an industry that has historically supported Democrats may be a factor. In contrast, Jordan Bloom and Alex Tabarrok both believe that pressure on Republicans from the entertainment industry led to the brief being withdrawn. They can't all be right as far as I can see. But less interesting than the motivation for the report is its content, and the long overdue debate on patents and copyrights that could finally be stirred in its wake. 

Wednesday, November 07, 2012

Prediction Market Manipulation: A Case Study

The experience of watching election returns come in has become vastly more social and interactive than it was just a decade ago. Television broadcasts still provide the core public information around which expectations are formed, but blogs and twitter feeds are sources of customized private information that can have significant effects on the evolution of beliefs. And prediction markets aggregate this private information and channel it back into the public sphere.

All of this activity has an impact not only on our beliefs and moods, but also on our behavior. In particular, beliefs that one's candidate of choice has lost can affect turnout. It has been argued, for instance, that early projections of victory for Reagan in 1980 depressed Democratic turnout in California, and that Republican turnout in Florida was similarly affected in 2000 when the state was called for Gore while voting in the panhandle was still underway. For this reason, early exit poll data is kept tightly under wraps these days, and states are called for one candidate or another only after polls have closed.

This effect of beliefs on behavior implies that a candidate facing long odds of victory has an incentive to inflate these odds and project confidence in public statements, lest the demoralizing effects of pessimism cause the likelihood of victory to decline even further. Traditionally this would be done by partisans on television sketching out implausible scenarios and interpretations of the incoming data to boost their supporters. But with the increasing visibility of prediction markets, this strategy is much less effective. If a collapse in the price of a contract on Intrade reveals that a candidate is doing much worse than expected, no amount of cheap talk on television can do much to change the narrative.

Given this, the incentive to interfere with what the markets are saying becomes quite powerful. Even though trading volume in prediction markets has risen dramatically over recent years, the amount of money required to have a sustained price impact for a few hours remains quite small, especially in comparison with the vast sums now spent on advertising.

In general, I believe that observers are too quick to allege manipulation when they see unusual price movements in such markets. As I noted in an earlier post, a spike in the price of the Romney contract a few days ago was probably driven by naive traders over-reacting to rumors of a game-changing announcement by Donald Trump, rather than by any systematic attempt at price manipulation. My reasons for thinking so were based on the fact that frenzied purchases of a single contract (while ignoring complementary contracts) are terribly ineffective if the goal is to have a sustained impact on prices. If one really wants to manipulate a market, it has to be done by placing large orders that serve as price ceilings and floors, and by doing so across complementary contracts in a consistent way.

As it happens, this is exactly what someone tried to do yesterday. At around 3:30 pm, I noticed that the order book for both Obama and Romney contracts on Intrade had become unusually asymmetric, with a large block of buy orders for Romney in the 28-30 range, and a corresponding block of sell orders for Obama in the 70-72 range. Here's the Romney order book:

And here's the book for Obama:


Since the exchange requires traders to post 100% margin (to cover their worst-case loss and eliminate counterparty risk), the funds required to place these orders were about $240,000 in total. A non-trivial amount, but probably less than the cost of a thirty-second commercial during primetime.
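Under that margin rule the arithmetic is straightforward: a resting bid ties up the full purchase price, and a resting offer ties up the payout less the sale price. The sketch below shows how order walls like the ones described could tie up roughly this much capital; the quantities are hypothetical guesses, chosen only to land near the reported total.

```python
# Capital tied up by hypothetical order walls under a 100% (worst-case-loss)
# margin rule. Prices are in dollars on a $10 contract; the quantities are
# illustrative guesses, not the actual order book.
FACE = 10.00

def bid_margin(price, qty):
    """A resting buy order ties up the full purchase price."""
    return price * qty

def ask_margin(price, qty):
    """A resting sell order ties up the payout less the sale price."""
    return (FACE - price) * qty

romney_bids = [(3.00, 15_000), (2.90, 15_000), (2.80, 12_000)]  # 28-30 range
obama_asks = [(7.00, 15_000), (7.10, 15_000), (7.20, 12_000)]   # 70-72 range

total = (sum(bid_margin(p, q) for p, q in romney_bids)
         + sum(ask_margin(p, q) for p, q in obama_asks))
print(f"capital tied up: ${total:,.0f}")  # roughly $244,000 with these sizes
```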

Could this not have been just a big bet, placed by someone optimistic about Romney's chances? I don't think so, for two reasons. First, if one wanted to bet on Romney rather than Obama, much better odds were available elsewhere, for instance on Betfair. More importantly, one would not want to leave such large orders standing at a time when new information was emerging rapidly; the risk of having the orders met by someone with superior information would be too great. Yet these orders stood for hours, and effectively placed a floor on the Romney price and a ceiling on the price for Obama.

Meanwhile odds in other markets were shifting rapidly. Nate Silver noticed the widening disparity and was puzzled by it, arguing that differences across markets should "evaporate on Election Day itself, when the voting is over and there is little seeming benefit from affecting the news media coverage." Much as I admire Nate, I think that he was mistaken here. It is precisely on election day that market manipulation makes most sense, since one only needs to affect media coverage for a few hours until all relevant polls have closed. Voting was still ongoing in Colorado, and keeping Romney viable there was the only hope of stitching together a victory. Florida, Virginia and Ohio were all close at the time and none had been called for Obama. A loss in Colorado would have made these three states irrelevant and a Romney victory virtually impossible.

Given this interpretation, I felt that the floor would collapse once the Colorado polls closed at 9pm Eastern Time, and this is precisely what happened:


Once the floor gave way, the price fell to single digits in a matter of minutes and never recovered.

It turned out, of course, that none of this was to matter: Virginia, Ohio, and (probably) Florida have all fallen to Obama. But all were close, and the possibility of a different outcome could not have been ruled out at the time. The odds were low, and a realistic projection of these odds would have made them even lower. Such is the positive feedback loop between beliefs and outcomes in politics. Under the circumstances, the loss of a few hundred thousand dollars to keep alive the prospect of a Romney victory probably seemed like a good investment to someone.

Should one be concerned about such attempts at manipulation? I don't think so. They muddy the waters a bit but are transparent enough to be spotted quickly and reacted to. My initial post was retweeted within minutes by Justin Wolfers to 24,000 followers, and by Chris Hayes to 160,000 shortly thereafter. Attempts at manipulating beliefs are nothing new in presidential politics, it's just the methods that have changed. And as long as one is aware of the possibility of such manipulation, it is relatively easy to spot and counter. The same social media that transmits misinformation also allows for the broadcast of countervailing narratives. In the end the fog clears and reality asserts itself. Or so one hopes. 
--- 

Update: The following chart shows the Obama price breaking through the ceiling just before the polls closed in Colorado:


It's the extraordinary stability of the price before 8:45pm, which was sustained over several hours, that is suggestive of manipulation.

Monday, November 05, 2012

The Rationality of Voting

Every election year, like clockwork, some people feel the need to remind the rest of us that (contrary to the exhortations of politicians and peers) our votes do not, in fact, count. Not only do they not count in New York, California or Texas, they don't count in Colorado, Ohio, or Florida either. While the likelihood that a single vote will be decisive may be incrementally higher in the latter set of states, it is negligible everywhere. Steve Levitt goes so far as to say that "it’s only the not so smart people who vote because they’re actually going to influence the election." Phil Arena is a bit more charitable, arguing that "people who believe that their vote counts are simply mistaken." Kindred Winecoff concurs.

Here's Arena's version of the argument:
If you've ever said something like "My vote doesn't count, because I live in New York", you're the type of person who makes my head hurt.  We may not know for sure how things will turn out in New Hampshire this coming Tuesday, but that doesn't mean that an individual's vote will "count" for much of anything in that state.   The fact that everyone who knows anything about politics knows how things will go in New York (or California, or Texas) doesn't make any meaningful difference to the question of whether individual votes in those states are likely to determine the outcome...  Don't confuse uncertainty over the final outcome with a significant probability of a single vote determining the outcome.  Those two things are not even remotely the same.
And yet we have people waiting in line for hours to cast ballots in Ohio, and making multiple trips to polling stations in Florida, bearing significant burdens to engage in what Winecoff asserts is simply "cheap talk." Would these voters make similar sacrifices to vote in New York or California? And if not, are they somehow deluded or dumb?

I believe that it is Arena and Levitt who are mistaken about the rationality of voting, and not the voters themselves. The premise of their argument is correct, but not the conclusions they draw from it. The likelihood of a single vote being decisive is negligible in all states, and voters by and large are fully cognizant of this fact. And yet it is perfectly rational for some voters to incur significant costs to vote in New York, and to incur even greater costs to do so in Ohio.

To see why, one needs only to recognize that the elation one feels when a preferred candidate wins depends both on the margin of victory and on whether or not one has cast a ballot. A single voter cannot materially affect the former, but can certainly determine the latter. Furthermore, the margin of victory can be forecast with a fair amount of accuracy: there is little doubt that the margin in Ohio, no matter who wins tomorrow, will be smaller than that in New York. Provided that the joy of celebrating a victory is greater when one has cast a ballot, and especially so when the margin of victory is small, it makes perfect sense to incur greater costs to vote in Ohio than in New York.
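One minimal way to write this down is the following; the notation is purely illustrative, and nothing in the argument depends on a particular functional form.

```latex
% A stylized statement of the argument; notation and functional assumptions
% are illustrative only.
Let $B(m, v)$ denote the elation from a victory when the margin is $m$ and
$v \in \{0,1\}$ records whether one voted, and define
\[
\Delta(m) \;=\; B(m, 1) - B(m, 0) \;>\; 0 ,
\qquad \Delta \text{ decreasing in } m .
\]
A voter facing cost $c$ and forecast margin $\hat{m}$ turns out whenever
\[
\Delta(\hat{m}) \;>\; c ,
\]
which delivers a greater willingness to bear costs in closer races without any
appeal to the probability of casting a decisive vote.
```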

Similar arguments apply to the grief that comes with defeat. In this case it is the failure to cast a ballot when the margin is tight that can give rise to great regret. People in this situation are perfectly well aware that their vote alone would not have materially affected the outcome, but they are not much comforted by this thought. The point is that if a relatively small coalition could have jointly generated a different outcome, then one's failure to join such a coalition can be a cause of distress. There is nothing irrational about such preferences, and they clearly lead to greater turnout when and where elections are predicted to be close. This turnout differential is not based on mistaken beliefs about what one alone can accomplish, and is not driven by cognitive limitations either.

This perspective on voting also explains why people often vote strategically, rather than always voting their conscience. Think of Nader supporters contemplating a vote for Gore in 2000. The size of the coalition of such supporters who could have blocked a Bush victory in Florida turned out to be extremely small, and the possibility that Nader could play such a spoiler role in the election was certainly anticipated. Some of those who chose to vote their conscience may well have regretted this choice once the outcome of the election was finally determined, and some who chose to vote for Gore may well have done so to avoid such regret. The fact that no single voter was decisive is entirely irrelevant. Collective responsibility for coalitional choices comes naturally to us, especially when the groups involved are not large. It is not the motives themselves, but the pretense that they do not exist that constitutes the true departure from rationality.

People vote for all kinds of different reasons. Some consider it a civic duty, others enjoy the process, and still others take satisfaction from the exercise of voice. Voting can be a powerful expression of identity; an affirmation, as Noah Millman puts it, of membership in a political tribe. It can be a result of peer pressure or the desire to avoid social sanction. But incurring greater costs to vote in closer elections is also perfectly consistent with a calm, reasoned, and above all intelligent response to the preferences with which we are endowed.

---

Update: Andrew Gelman and Steve Waldman are also worth reading on these issues, although their perspectives differ somewhat from mine. Andrew argues that in swing states the probabilities of being decisive are not effectively negligible, and therefore does not accept the basic premise of Arena's argument. Steve maintains that the argument is "right but wrong-headed" and shows that norms of political participation sustained by sanctions can be stable. Voting is clearly rational in the presence of such norms, as is resistance to the kinds of arguments that Arena makes.

The first action I took as an American citizen was to register to vote; I did this within minutes of receiving my naturalization certificate. I plan to cast a ballot tomorrow in the great and resilient city of New York, even though there isn't a competitive race in sight. Doing so won't affect the outcome, but it will certainly affect the experience of watching the returns come in, no matter who the winner may be. 

Wednesday, October 24, 2012

Algorithms, Arbitrage, and Overreaction on Intrade

There were some startling price movements in the presidential contracts on Intrade yesterday. Here's the price and volume chart for the Romney contract, which pays out $10 if he is elected and nothing otherwise:



At 7:52am the price of this contract stood at 40 (this is a percentage of the contract face value, so represents $4.00). Over the next two hours the price edged up to 42, with about 4500 contracts traded. That's where things stood at 9:58. Over the next three minutes the price rose sharply to 48.5, with a further 1700 contracts traded. This was followed by sharp oscillating movements between the peak and 42, which can be seen as the red blur at around 10am in the chart. By 10:15 the price had fallen to 43 and the oscillations had mostly ceased. An hour later the price was back down to 41, with about 14,000 contracts and $63,000 having changed hands over three hours.

What caused this unusual price behavior? There's been some talk of attempted price manipulation, but I have my doubts because the trader who was buying aggressively over this period was extremely naive. (I am fairly certain that this was a single trader). Throughout the buying frenzy, the Obama contract never fell below 57 and there was a substantial block of bids at or above this price. The trader who was buying Romney at 48 could have made the same bet for 43 by simply selling the Obama contract at 57. In fact, he would have obtained a slightly superior contract, which would pay off if any person other than Obama were to win, including but not limited to Romney.

This fact also explains the oscillations in the Romney price, and the decline of the Obama price to 57 under selling pressure. Any individual who had posted ask prices in the 43-48 range in the Romney market had these orders met by the crazed buyer, and could then sell Obama above 57 for an immediate arbitrage profit. As it happens, there are algorithms active on Intrade that do precisely this. They post ask prices that, together with the highest bid in the complementary contract, add up to slightly more than 100. As soon as these orders trade, they sell the complementary contract at once, booking a riskless profit. These algorithms posted prices in the 42-43 range over the period in question, and the buyer repeatedly traded through them to reach the higher Romney asks. Hence the red blur in the chart. Only when the buyer gave up or ran out of funds did the price settle down.
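Here is a stylized sketch of that algorithmic logic, with prices quoted on the 0-100 scale used above. The exchange interface (sell_at_market) and the parameter names are hypothetical; this is not Intrade's API or any particular firm's code.

```python
# A stylized version of the arbitrage algorithm described above, for a pair of
# complementary binary contracts quoted on a 0-100 scale. The exchange
# interface (sell_at_market) is hypothetical.
EDGE = 1.0  # rest asks so that ask + best complementary bid exceeds 100 by this

def resting_ask(best_bid_on_complement):
    """Price our ask on contract A off the best standing bid on contract B."""
    return 100.0 - best_bid_on_complement + EDGE

def on_fill(fill_price_a, best_bid_on_complement, sell_at_market):
    """Our ask on A just traded: immediately hit the standing bid on B.
    Having sold both contracts, we collect the two prices now and pay out at
    most 100 at settlement, since at most one of the contracts pays off."""
    sell_at_market("B", best_bid_on_complement)
    return fill_price_a + best_bid_on_complement - 100.0  # locked-in edge

# Example: with the best Obama bid at 58, the algorithm rests a Romney ask at
# 43; if a frenzied buyer lifts it, it sells Obama at 58 at once, collecting
# 101 against a maximum combined payout of 100.
```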

If this was not an instance of attempted manipulation, then what was it? I suspect that it was an overzealous response to reports of a major announcement concerning the presidential race, promised by Donald Trump. Further frenzied activity in the presidential and state level markets took place in the evening, as speculation about the nature of the announcement started to spread.

This whole bizarre episode tells us very little about the presidential race, but it does shed some light on how these markets work. Changes in one market spill over instantaneously to linked markets via arbitrage, some of it executed algorithmically. And algorithms that work very effectively when rare can produce disastrous results if widely copied. For instance, if two algorithms were to follow the strategy outlined above, only one of them might be able to complete the second sale, since the first mover would have snapped up the existing bids. This kind of game is currently being played in the world of high frequency trading, but with much higher stakes and considerably more serious economic consequences.

Sunday, October 14, 2012

Of Bulls and Bair

Sheila Bair's new book, Bull by the Horns, is both a crisis narrative and a thoughtful reflection on economic institutions and policy. The crisis narrative, with its revealing first-hand accounts of high-level meetings, high-stakes negotiations, behind-the-scenes jockeying, and clashing personalities will attract the most immediate attention. But it's the economic analysis that will constitute the more enduring contribution.

Among the many highlights are the following: a discussion of the linkages between securitization, credit derivatives and loan modifications, an exploration of the trade-off between regulatory capture and regulatory arbitrage, an intriguing question about the optimal timing of auctions for failing banks, a proposal for ending too big to fail that relies on simplification and asset segregation rather than balance sheet contraction, a full-throated defense of sensible financial regulation, and a passionate critique of bailouts for the powerful and politically connected even when such transactions appear to generate an accounting profit.

Let's start with securitization, derivatives and loan modifications. Under the traditional model of mortgage lending, there are strong incentives for creditors to modify delinquent loans if the costs of doing so are lower than the very substantial deadweight losses that result from foreclosure. But pooling and tranching of mortgage loans creates a conflict of interest within the group of investors. As long as foreclosures are not widespread enough to affect holders of the overcollateralized senior tranches, all losses are inflicted upon those with junior claims. In contrast, loan modifications lower payments to all tranches, and will thus be resisted by holders of senior claims unless they truly see disaster looming. One consequence of this "tranche warfare" is that servicers, fearing lawsuits, will be inclined to favor foreclosure over modification.
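A stylized numerical example makes the conflict plain. The numbers below are invented, and the assumption that modification writedowns are shared across tranches is a simplification of how real deals allocate them.

```python
# Hypothetical pool illustrating the conflict of interest between tranches.
# Numbers are invented; the pro rata treatment of modification writedowns is
# a simplification of how real deals allocate them.
senior_claim = 80.0   # overcollateralized senior tranche
junior_claim = 20.0   # first-loss junior tranche

def waterfall(recovery):
    """Foreclosure proceeds flow to seniors first; juniors absorb first losses."""
    to_senior = min(recovery, senior_claim)
    to_junior = max(recovery - senior_claim, 0.0)
    return to_senior, to_junior

# Foreclosure destroys more total value (say 15% of the pool)...
foreclosure_senior, foreclosure_junior = waterfall(85.0)  # -> 80.0, 5.0

# ...while modification preserves more value (say 90) but lowers payments to
# every tranche rather than concentrating losses on the juniors.
modification_senior = 0.9 * senior_claim  # -> 72.0
modification_junior = 0.9 * junior_claim  # -> 18.0

# Seniors prefer foreclosure (80 > 72) and juniors prefer modification
# (18 > 5), even though modification preserves more value in the aggregate.
```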

But this is not the end of the story. Bair points out that the interests of those using credit derivatives to bet on declines in home values are aligned with those of holders of senior tranches, as long as the latter continue to believe that foreclosures will not become widespread enough to eat into their protected positions. This is interesting because these two groups are taking quite different price views: one is long and the other short credit risk. Bair notes that some of the "early resistance" to FDIC loan modification initiatives came from fund managers who "had purchased CDS protection against losses on mortgage-backed securities they did not own." The irony is that they were joined in this resistance by holders of senior tranches who were relying (overoptimistically, as it turned out) on the protective buffer provided by the holders of junior claims.

Another interesting discussion in the book concerns the trade-off between regulatory arbitrage and regulatory capture. A fragmented regulatory structure with a variety of norms and standards encourages financial institutions to shop for the weakest regulator. In the lead up to the crisis, such regulatory shopping occurred between banks and nonbanks, with mortgage brokers and securities firms operating outside the stronger regulations imposed on insured banks. But Bair also notes that the "three biggest problem institutions among insured banks - Citigroup, Wachovia, and WaMu - had not shopped for charters; they had been with the same regulator for decades. The problem was that their regulators did not have independence from them."

This is the problem of regulatory capture. Bair argues that while a single monolithic regulator would put an end to regulatory arbitrage, it could worsen the problem of regulatory capture: "a diversity of views and the ability of one agency to look over the shoulder of another is a good check against regulators becoming too close to the entities they regulate." It's a point that she has made before, and clearly believes (with considerable justification) that the FDIC has provided such checks and balances in the past. It was able to do so in part through its power under the law and in part through the power to persuade; yet another reminder of the continued relevance of Albert Hirschman's notion of voice.

A very different kind of trade-off concerns the timing of auctions for failing banks. One of the policies that Bair favored at the FDIC was the quick sale of failing banks prior to closure in order to avoid a period of government stewardship. She recognizes, however, that there are some costs to this. Bids from prospective buyers who have not had time to closely examine the asset pool of the failing institution will tend to be lower than the expected value of these assets, given the need to maintain adequate margins of safety in the face of risk. Waiting until a more precise estimate of the value of the bank's assets can be obtained can therefore result in higher bids on average. But there are also costs to waiting: a "deterioration of franchise value" occurs as large depositors and business customers look elsewhere, and this can offset any gains from a more precise valuation of the asset pool. Bair seems to have concluded that sale before closure was always the best course of action, but I suspect that this need not be the case, especially when the asset pool is characterized by high expected value but great uncertainty, and the bulk of deposits are insured. In any case, it's a question deserving of systematic study.

Bair describes herself as a lifelong Republican and McCain voter; she is contemplating a write-in vote for Jon Huntsman this November. Yet she seems quite immune to partisan loyalties and pressures. Her description of Barney Frank is positively affectionate. She has kind words for some Democrats (such as Elizabeth Warren and Mark Warner), but offers blistering criticism of others (Robert Rubin, Tim Geithner and Larry Summers in particular). Among Republicans too, she is discerning: Bob Corker's efforts on financial reform are lauded, but the "deregulatory dogma" overseen by Alan Greenspan comes under forceful attack. She laments the "disdain" for government and its "regulatory function" and describes as a "delusion" the idea that markets are self-regulating. These are not the views of a political partisan.

On policy, Bair is opposed to the use of derivatives for two-sided speculation (when neither party is hedging) and would require an insurable interest for the purchase of protection against default. She describes synthetic CDOs and naked CDSs as "a game of fantasy football" with no limit to the size of wagers that can be placed. She wants a "lifetime ban on regulators working for financial institutions they have regulated." And she argues, as did James Tobin a generation ago, that the attraction of the financial sector for some of the best and brightest of our youth is detrimental to long term economic growth and prosperity.

No review of this book would be complete without mention of the bailouts, which troubled Bair from the outset, and which she now feels were excessive:
To this day, I wonder if we overreacted... Yes, action had to be taken, but the generosity of the response still troubles me... Granted, we were dealing with an emergency and had to act quickly. And the actions did stave off a broader financial crisis. But the unfairness of it and the lack of hard analysis showing the necessity of it trouble me to this day. The mere fact that a bunch of large financial institutions is going to lose money does not a systemic event make... Throughout the crisis and its aftermath, the smaller banks - which didn't benefit at all from government largesse - did a much better job of lending than the big institutions did. 
What bothers her most of all is the claim that the bailouts were justified because they made an accounting profit:
The thing I hate hearing most when people talk about the crisis is that the bailouts "saved the system" or ended up "making money." Participating in bailout measures was the most distasteful thing I have ever had to do, and those ex post facto rationalizations make my skin crawl... The bailouts, while stabilizing the financial system in the short term, have created a long-term drag on our economy. Because we propped up the mismanaged institutions, our financial sector remains bloated... We did not force financial institutions to shed their bad assets and recognize their losses... Economic growth is sluggish, unemployment remains high. The housing market still struggles. I hope that our economy continues to improve. But it will do so despite the bailouts, not because of them.
The ideal policy, according to Bair, would have been to put insolvent institutions into the "bankruptcy-like resolution process" used routinely by the FDIC, but she recognizes that the legal basis for doing so was not available at the time. She therefore signed on to measures that were instinctively repugnant to her, and tried to corral and contain them to the extent possible.

The argument that the bailouts "made money" is specious for two reasons. First, the support was provided on terms well below market value, and the cost to taxpayers should be computed relative to the value of the service provided. If insurance is provided at a fraction of the actuarially fair price, and no claim is made over the period of insurance (so the insurer makes money), this does not mean that there was no subsidy in the first place. Steve Waldman has made this point very effectively in the past. Furthermore, the cost to taxpayers should take into account any loss of revenues from more sluggish growth. If Bair is right to argue that the bailouts were excessively generous, to the point that growth prospects were damaged for an extended period, the loss of tax revenue must be included in any assessment of the cost of the bailout.
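A stylized calculation makes the first point concrete; the numbers are hypothetical, not estimates of any actual program.

```latex
% Hypothetical numbers illustrating a below-market guarantee; not estimates
% of the actual bailout programs.
Guarantee \$100 of debt with default probability $p = 0.2$ and zero recovery:
\[
\text{actuarially fair premium} \;=\; 0.2 \times 100 \;=\; \$20 ,
\qquad
\text{premium charged} \;=\; \$2 .
\]
The ex ante subsidy is $\$18$ per $\$100$ guaranteed, even though with
probability $0.8$ no claim is made and the guarantor reports a $\$2$
``profit'' after the fact.
```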

As noted above, there are many accounts in the book of meetings and decisions, and a number of speculative inferences about the actions and intentions of others. Some of these will be hotly disputed. But focusing on these details is to miss the larger point. The crisis offers us an opportunity to think about the flaws in our economic and political system and how some of these might be fixed. It also suggests interesting directions in which economic theorizing could be advanced. The book helps with both efforts, and it would be a pity if these substantive contributions were drowned out in a debate over conversations and personalities.

Wednesday, August 15, 2012

On Prices, Narratives, and Market Efficiency

The fourth anniversary of the Lehman bankruptcy has been selected as the release date for a collection of essays edited by Diane Coyle with the provocative title: What's the Use of Economics? The timing is impeccable and the question legitimate.

The book collects together some very thoughtful responses by Andrew Haldane, John Kay, Wendy Carlin, Alan Kirman, Andrew Lo, Roger Farmer, and a host of other luminaries (the publishers were kind enough to send me an advance copy). There's enough material there for several posts but I'd like to start with the contribution by John Kay.

This one, as it happens, has been published before; I discussed Mike Woodford's reaction to it in a previous post. But reading it again I realized that it contains a perspective on market efficiency and price discovery that is concise, penetrating and worthy of some elaboration. Kay doesn't just provide a critique of the efficient markets hypothesis; he sketches out an alternative approach based on the idea of prices as the "product of a clash between competing narratives" that can form the basis of an entire research agenda.

He begins with a question famously posed by the Queen of England during a visit to the London School of Economics: Why had economists failed to predict the financial crisis? Robert Lucas pointed out in response that the inability to predict a financial crisis was in fact a prediction of economic theory. This is as pure a distillation of the efficient markets hypothesis as one is likely to find, and Kay uses it to evaluate the hypothesis itself:
Lucas’s assertion that ‘no one could have predicted it’ contains an important, though partial, insight. There can be no objective basis for a prediction of the kind ‘Lehman Bros will go into liquidation on September 15’, because if there were, people would act on that expectation and, most likely, Lehman would go into liquidation straight away. The economic world, far more than the physical world, is influenced by our beliefs about it. 
Such thinking leads, as Lucas explains, directly to the efficient market hypothesis – available knowledge is already incorporated in the price of securities. And there is a substantial amount of truth in this – the growth prospects of Apple and Google, the problems of Greece and the Eurozone, are all reflected in the prices of shares, bonds and currencies. The efficient market hypothesis is an illuminating idea, but it is not “Reality As It Is In Itself”. Information is reflected in prices, but not necessarily accurately, or completely. There are wide differences in understanding and belief, and different perceptions of a future that can be at best dimly perceived. 
In his Economist response, Lucas acknowledges that ‘exceptions and anomalies’ to the efficient market hypothesis have been discovered, ‘but for the purposes of macroeconomic analyses and forecasts they are too small to matter’. But how could anyone know, in advance not just of this crisis but also of any future crisis, that exceptions and anomalies to the efficient market hypothesis are ‘too small to matter’?
The literature on anomalies is not, in fact, concerned with macroeconomic analyses and forecasts. It is rather narrowly focused on predictability in asset prices and the possibility of constructing portfolios that can consistently beat the market on a risk-adjusted basis. And indeed, such anomalies are often found to be quite trivial, especially when one considers the costs of implementing the implied strategies. The inability of actively managed funds to beat the market on average, after accounting for costs and adjusting for risk, is often cited as providing empirical support for market efficiency. But Kay believes that these findings have not been properly interpreted:
What Lucas means when he asserts that deviations are ‘too small to matter’ is that attempts to construct general models of deviations from the efficient market hypothesis – by specifying mechanical trading rules or by writing equations to identify bubbles in asset prices – have not met with much success. But this is to miss the point: the expert billiard player plays a nearly perfect game, but it is the imperfections of play between experts that determine the result. There is a – trivial – sense in which the deviations from efficient markets are too small to matter – and a more important sense in which these deviations are the principal thing that matters. 
The claim that most profit opportunities in business or in securities markets have been taken is justified.  But it is the search for the profit opportunities that have not been taken that drives business forward, the belief that profit opportunities that have not been arbitraged away still exist that explains why there is so much trade in securities. Far from being ‘too small to matter’, these deviations from efficient market assumptions, not necessarily large, are the dynamic of the capitalist economy. 
Such anomalies are idiosyncratic and cannot, by their very nature, be derived as logical deductions from an axiomatic system. The distinguishing characteristic of Henry Ford or Steve Jobs, Warren Buffett or George Soros, is that their behaviour cannot be predicted from any prespecified model. If the behaviour of these individuals could be predicted in this way, they would not have been either innovative or rich. But the consequences are plainly not ‘too small to matter’. 
The preposterous claim that deviations from market efficiency were not only irrelevant to the recent crisis but could never be relevant is the product of an environment in which deduction has driven out induction and ideology has taken over from observation. The belief that models are not just useful tools but also are capable of yielding comprehensive and universal descriptions of the world has blinded its proponents to realities that have been staring them in the face. That blindness was an element in our present crisis, and conditions our still ineffectual responses. 
Fair enough, but how should one proceed? Kay suggests the adoption of more "eclectic analysis... not just deductive logic but also an understanding of processes of belief formation, anthropology, psychology and organisational behaviour, and meticulous observation of what people, businesses, and governments actually do."

I have no quarrel with this prescription, but I'd also like to make a case for more creative and versatile deductive logic. One of the key modeling hypotheses in the economics of information is the so-called Harsanyi doctrine (or common prior assumption), which stipulates that all differences in beliefs ought to be modeled as if they arise from differences in information. This hypothesis implies that individuals can only disagree if such disagreement is not itself common knowledge: they cannot agree to disagree. It is not hard to see that such a hypothesis could not possibly allow for pure speculation on asset price movements, and hence cannot account for the large volume of trade in financial markets. In fact, it implies that order books in many markets would be empty, since a posted price would only be met by someone with superior information.
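The force of this assumption is easy to demonstrate. Under a common prior, the law of iterated expectations pins each trader's conditional valuation to the prior mean on average, so a zero-sum bet cannot offer both sides a positive expected gain before information arrives. The sketch below illustrates only this averaging property (not the full common-knowledge argument); the prior, asset values, and information partitions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states = 12
prior = rng.dirichlet(np.ones(n_states))      # common prior over the states
value = rng.uniform(0.0, 100.0, n_states)     # asset value in each state

# Each trader observes a different coarse partition of the state space.
partition_a = np.array([s % 3 for s in range(n_states)])
partition_b = np.array([s // 4 for s in range(n_states)])

def posterior_mean(partition, state):
    """Expected value conditional on the partition cell containing `state`."""
    cell = partition == partition[state]
    return float(np.dot(prior[cell], value[cell]) / prior[cell].sum())

prior_mean = float(np.dot(prior, value))
avg_a = sum(prior[s] * posterior_mean(partition_a, s) for s in range(n_states))
avg_b = sum(prior[s] * posterior_mean(partition_b, s) for s in range(n_states))

# All three numbers coincide: averaged under the common prior, neither
# trader's conditional valuation departs from the prior mean, so a zero-sum
# bet cannot offer both sides a positive expected gain ex ante.
print(prior_mean, avg_a, avg_b)
```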

The point is that over-reliance on deductive logic is not the only problem as far as financial modeling is concerned; the core assumptions to which deductive logic has been applied are themselves too restrictive. To my mind, the most interesting part of Kay's essay suggests how one might improve on this:
You can learn a great deal about deviations from the efficient market hypothesis, and the role they played in the recent financial crisis, from journalistic descriptions by people like Michael Lewis and Greg Zuckerman, who describe the activities of some individuals who did predict it. The large volume of such material that has appeared suggests many avenues of understanding that might be explored. You could develop models in which some trading agents have incentives aligned with those of the investors who finance them and others do not. You might describe how prices are the product of a clash between competing narratives about the world. You might appreciate the natural human reactions that made it difficult to hold short positions when they returned losses quarter after quarter.
There is definitely ongoing work in economics that explores many of these directions, some of which I have surveyed in previous posts. But the idea of prices as the product of a clash between competing narratives about the world reminded me of a paper by Harrison and Kreps, which was one of the earliest models in finance to shed the common prior assumption.

For anyone interested in developing models of heterogeneous beliefs in which trading occurs naturally over time, the Harrison-Kreps paper is the perfect place to start. They illustrate their model with an example that is easy to follow: a single asset provides title to a stream of dividend payments that may be either high or low, and investors disagree about the likelihood of transitions from high to low states and vice versa. This means that investors who value the asset most in one state differ from those who value it most in the other. Trading occurs as the asset is transferred across investors in the two different belief classes each time a transition to a different state occurs. The authors show that the price in both states is higher than it would be if investors were forced to hold the asset forever: there is a speculative premium that arises from the knowledge that someone else will, in due course and mistakenly in your opinion, value the asset more than you do. The contrast with the efficient markets hypothesis is striking and clear:
The basic tenet of fundamentalism, which goes back at least to J. B. Williams (1938), is that a stock has an intrinsic value related to the dividends it will pay, since a stock is a share in some enterprise and dividends represent the income that the enterprise gains for its owners. In one sense, we think that our analysis is consistent with the fundamentalist spirit, tempered by a subjectivist view of probability. Beginning with the view that stock prices are created by investors, and recognizing that investors may form different opinions even when they have the same substantive information, we contend that there can be no objective intrinsic value for the stock. Instead, we propose that the relevant notion of intrinsic value is obtained through market aggregation of diverse investor assessments. There are fundamentalist overtones in this position, since it is the market aggregation of investor attitudes and beliefs about future dividends with which we start. Under our assumptions, however, the aggregation process eventually yields prices with some curious characteristics. In particular, investors attach a higher value to ownership of the stock than they do to ownership of the dividend stream that it generates, which is not an immediately palatable conclusion from a fundamentalist point of view.
The idea that prices are "obtained through market aggregation of diverse investor assessments" is not too far from Kay's more rhetorically powerful claim that they are "the product of a clash between competing narratives". What Harrison and Kreps do not consider is how diverse investor assessments change over time, since beliefs about transition probabilities are exogenously given in their analysis. But Kay's formulation suggests how progress on this front might be made. Beliefs change as some narratives gather influence relative to others, either through active persuasion (talking one's book for instance) or through differentials in profits accruing to those with different worldviews. While Kay is surely correct that a rich understanding of this process requires more than deductive reasoning, it is also true that deductive reasoning has not yet been pushed to its limits in facilitating our understanding of market dynamics.
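For readers who want to see the speculative premium emerge from the numbers, here is a minimal computational sketch in the spirit of the Harrison-Kreps example. The dividend levels, transition matrices, and discount factor below are chosen purely for illustration and need not match those in the original paper; the point is only that the resale price is at least as high as, and here strictly higher than, what either belief type would pay to hold the asset forever.

```python
import numpy as np

# Two dividend states: low (d=0) and high (d=1); hypothetical parameters.
d = np.array([0.0, 1.0])
beta = 0.75

# Each belief type has its own transition matrix over the two states.
P = {
    "a": np.array([[1/2, 1/2], [2/3, 1/3]]),
    "b": np.array([[2/3, 1/3], [1/4, 3/4]]),
}

def buy_and_hold_price(Pi):
    """Price if an investor with beliefs Pi had to hold the asset forever."""
    # p = beta * Pi @ (d + p)  =>  p = (I - beta*Pi)^{-1} (beta*Pi @ d)
    return np.linalg.solve(np.eye(2) - beta * Pi, beta * Pi @ d)

def speculative_price(tol=1e-12):
    """Price when the asset can always be resold to the most optimistic buyer."""
    p = np.zeros(2)
    while True:
        p_new = beta * np.max([Pi @ (d + p) for Pi in P.values()], axis=0)
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new

hold = {k: buy_and_hold_price(Pi) for k, Pi in P.items()}
spec = speculative_price()
print("buy-and-hold prices:", hold)
print("speculative price:  ", spec)   # exceeds both buy-and-hold valuations
```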

Monday, August 13, 2012

Building a Better Dow

The following post, written jointly with Debraj Ray, is based on our recent note proposing a change in the method for computing the Dow.

---

With a market capitalization approaching $600 billion, Apple is currently the largest publicly traded company in the world. The previous title-holder, Exxon Mobil, now stands far behind at $400 billion. But Apple is not a component of the Dow Jones Industrial Average. Nor is Google, with a higher valuation than all but a handful of firms in the index. Meanwhile, firms with less than a tenth of Apple's market capitalization, including Alcoa and Hewlett-Packard, continue to be included.

The exclusion of firms like Apple and Google would appear to undermine the stated purpose of the index, which is "to provide a clear, straightforward view of the stock market and, by extension, the U.S. economy." But there are good reasons for such seemingly arbitrary omissions. The Dow is a price-weighted index, and the average price of its thirty components is currently around $58. Both Apple and Google have share prices in excess of $600, and their inclusion would cause day-to-day changes in the index to be driven largely by the behavior of these two securities. For instance, their combined weight in the Dow would be about 43% if they were to replace Alcoa and Travelers, which are the two current components with the lowest valuations. Furthermore, the index would become considerably more volatile even if the included stocks were individually no more volatile than those they replace. As John Prestbo, chairman of the index oversight committee, has observed, such heavy dependence of the index on one or two stocks would "hamper its ability to accurately reflect the broader market."

Indeed, price-weighting is a decidedly odd methodology. IBM has a smaller market capitalization than Microsoft, but a substantially higher share price. Under current conditions, a 1% change in the price of IBM has an effect on the index that is almost seven times as great as a 1% change in the price of Microsoft. In fact, IBM's weight in the index is above 11%, although its valuation is less than 6% of the total among Dow components.
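The arithmetic behind these weights is easy to check. The figures below are rounded approximations from around the time of writing, used only for illustration:

```python
# Rough, illustrative figures (approximate as of mid-2012, not exact data).
prices   = {"IBM": 199.0, "MSFT": 30.0}        # dollars per share
mkt_caps = {"IBM": 228e9, "MSFT": 257e9}       # dollars

dow_price_sum = 30 * 58.0     # thirty components with an average price near $58
dow_total_cap = 3.9e12        # rough combined market cap of Dow components

for ticker in prices:
    price_weight = prices[ticker] / dow_price_sum
    value_weight = mkt_caps[ticker] / dow_total_cap
    print(f"{ticker}: price weight {price_weight:.1%}, value weight {value_weight:.1%}")

# A 1% move in a component shifts the index in proportion to its price weight,
# so IBM moves the Dow roughly 199/30, or close to seven times, as much as
# Microsoft, despite having the smaller market capitalization.
```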

This issue does not arise with value-weighted indexes such as the S&P 500. But as Prestbo and others have pointed out, the Dow provides an uninterrupted picture of stock market movements dating back to 1896. An abrupt switch to value weighting would introduce a methodological discontinuity that would "essentially obliterate this history." Attention has therefore been focused on the desirability of a stock split, which would reduce Apple's share price to a level that could be accommodated by the questionable methodology of the Dow.

But an abrupt switch to value weighting and the flawed artifice of a stock split are not the only available alternatives. In a recent paper we propose a modification that largely preserves the historical integrity of the Dow time series, while allowing for the inclusion of securities regardless of their market price. Our modified index also leads to a smooth and gradual transition, as incumbent stocks are replaced, to a fully value-weighted index in the long run.

The proposed index is composed of two subindices, one price-weighted to respect the internal structure of the Dow, and the other value-weighted to apply to new entrants. The index has two parameters, both of which are adjusted whenever a substitution is made. One of these maintains continuity in the value of the index, while the other ensures that the two subindices are weighted in proportion to their respective market capitalizations. Stock splits require a change in parameters (as in the case of the current Dow divisor) but only if the split occurs for a firm in the price-weighted subindex.
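The precise definitions are in the note, but a toy calculation conveys how the two parameters work: at each substitution they are re-solved so that the index value does not jump and the two subindices contribute in proportion to their market capitalizations. The sketch below, with hypothetical numbers, is one way to satisfy those two requirements:

```python
def rebalance(index_value, P, V, cap_P, cap_V):
    """
    Sketch of the parameter adjustment at a substitution (see the note for
    the exact formulas). Choose weights a, b on the two subindices so that:
      (i)  a*P + b*V == index_value      (continuity: no jump in the index)
      (ii) a*P : b*V == cap_P : cap_V    (subindices weighted by market cap)
    """
    share_P = cap_P / (cap_P + cap_V)
    a = share_P * index_value / P
    b = (1.0 - share_P) * index_value / V
    return a, b

# Hypothetical example: the index stands at 13,000; the price-weighted subindex
# of incumbents is at 1,700 with a $3.3 trillion combined market cap; the
# value-weighted subindex of entrants is at 100 with a $0.6 trillion cap.
a, b = rebalance(13_000.0, 1_700.0, 100.0, 3.3e12, 0.6e12)
print(a, b)   # a*1700 + b*100 == 13000, split in the ratio 3.3 : 0.6
```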

Once all incumbent firms are replaced, the result will be a fully value-weighted index. In practice this could take several decades, as some incumbent firms are likely to remain components far into the future. But firms in the price-weighted component of the index that happen to have weights roughly commensurate with their market capitalization can be transferred with no loss of continuity to the value-weighted component. This procedure, which we call bridging, can accelerate the transition to a value-weighted index with minimal short-term disruption. Currently Coca-Cola and Disney are prime candidates for bridging.

Under our proposed index, Apple would enter with a weight of less than 13% if it were to replace Alcoa. This is scarcely more than the weight currently associated with IBM, a substantially smaller company. Adding Google (in place of HP or Travelers) would further lower the weight of Apple since the total market capitalization of Dow components would rise. This is a relatively modest change that, we believe, would simultaneously serve the desirable goals of methodological continuity and market representativeness.

Friday, July 13, 2012

Market Overreaction: A Case Study

At 7:30pm yesterday the Drudge Report breathlessly broadcast the following:
ROMNEY NARROWS VP CHOICES; CONDI EMERGES AS FRONTRUNNER
Thu Jul 12 2012 19:30:01 ET 
**Exclusive** 
Late Thursday evening, Mitt Romney's presidential campaign launched a new fundraising drive, 'Meet The VP' -- just as Romney himself has narrowed the field of candidates to a handful, sources reveal. 
And a surprise name is now near the top of the list: Former Secretary of State Condoleezza Rice! 
The timing of the announcement is now set for 'coming weeks'.
The reaction on Intrade was immediate. The price of a contract that pays $10 if Rice is selected as Romney's running mate (and nothing otherwise) shot up from about 35 cents to $2, with about 2500 contracts changing hands within twenty minutes of the Drudge announcement. By the sleepy standards of the prediction market this constitutes very heavy volume. Nate Silver responded at 7:49 as follows:
The Condi Rice for VP contract at Intrade possibly the most obvious short since Pets.com
Good advice, as it turned out. By 9:45 pm the price had dropped to 90 cents a contract with about 5000 contracts traded in total since the initial announcement. Here's the price and volume chart:

[Figure: Intrade price and volume chart for the Rice VP nominee contract]

One of the most interesting aspects of markets such as Intrade is that they offer sets of contracts on a list of exhaustive and mutually exclusive events. For instance, the Republican VP Nominee market contains not just the contract for Rice, but also for 56 other potential candidates, as well as a residual contract that pays off if none of the named contracts do. The sum of the bids for all these contracts cannot exceed $10, otherwise someone could sell the entire set of contracts and make an arbitrage profit. In practice, no individual is going to take the trouble to spot and exploit such opportunities, but it's a trivial matter to write a computer program that can do so as soon as they arise.
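Such a program has very little to do: add up the best bids across the mutually exclusive contracts and sell one of each whenever the total exceeds the $10 payout. A minimal sketch, with hypothetical quotes:

```python
PAYOUT = 10.00   # each contract pays $10 if its event occurs, nothing otherwise

def arbitrage_profit(best_bids, payout=PAYOUT):
    """
    Riskless profit (ignoring fees) from selling one of each contract in an
    exhaustive, mutually exclusive set at the current best bids. Exactly one
    contract settles at $10, so the position costs `payout` at expiry and
    brings in the sum of the bids today.
    """
    proceeds = sum(best_bids.values())
    return max(proceeds - payout, 0.0)

# Hypothetical quotes (in dollars) just after a spike in one contract:
bids = {"Rice": 2.00, "Portman": 2.60, "Pawlenty": 1.90, "Ryan": 2.40,
        "Rubio": 0.80, "field (all others)": 0.55}
print(arbitrage_profit(bids))   # 0.25 per set of contracts sold
```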

In fact, such algorithms are in widespread use on Intrade, and easy to spot. The sharp rise in the Rice contract caused the arbitrage condition to be momentarily violated and simultaneous sales of the entire set of contracts began to occur. While the price of one contract rose, the prices of the others (Portman, Pawlenty, and Ryan especially) were knocked back as existing bids started to be filled by algorithmic instruction. But as new bidders appeared for these other contracts the Rice contract itself was pushed back in price, resulting in the reversal seen in the above chart. All this in a matter of two or three hours.

Does any of this have relevance for the far more economically significant markets for equity and debt? There's a fair amount of direct evidence that these markets are also characterized by overreaction to news, and such overreaction is consistent with the excess volatility of stock prices relative to dividend flows. But overreactions in stock and bond markets can take months or years to reverse.  Benjamin Graham famously claimed that "the interval required for a substantial undervaluation to correct itself averages approximately 1½ to 2½ years," and DeBondt and Thaler found that "loser" portfolios (composed of stocks that had previously experienced sharp capital losses) continued to outperform "winner" portfolios (composed of those with significant prior capital gains) for up to five years after construction.

One reason why overreaction to news in stock markets takes so long to correct is that there is no arbitrage constraint that forces a decline in other assets when one asset rises sharply in price. In prediction markets, such constraints cause immediate reactions in related contracts as soon as one contract makes a major move. Similar effects arise in derivatives markets more generally: options prices respond instantly to changes in the price of the underlying, futures prices move in lockstep with spot prices, and exchange-traded funds trade at prices that closely track those of their component securities. Most of this activity is generated by algorithms designed to sniff out and snap up opportunities for riskless profit. But the primitive assets in our economy, stocks and bonds, are constrained only by beliefs about their future values, and can therefore wander far and wide for long periods before being dragged back by their cash flow anchors.
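These constraints are mechanical enough to be checked by a few lines of code. To take one standard example (not specific to this episode), European put-call parity requires C - P = S - K*exp(-rT); a gap wider than trading costs is riskless profit, which is why algorithms keep it pinned near zero. The quotes below are hypothetical:

```python
import math

def parity_gap(call, put, spot, strike, rate, T):
    """Deviation from European put-call parity: C - P - (S - K*exp(-r*T))."""
    return call - put - (spot - strike * math.exp(-rate * T))

# Hypothetical quotes; a gap larger than trading costs is riskless profit,
# which is why algorithmic traders arbitrage it away almost instantly.
gap = parity_gap(call=7.10, put=4.60, spot=102.0, strike=100.0, rate=0.01, T=0.5)
print(round(gap, 4))
```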

---

Update (7/13). Mark Thoma and Yves Smith have both reposted this, with interesting preludes. Here's Yves:
I’d like to quibble with the notion that there is such a thing as a correct price for as vague a promise as a stock (by contrast, for derivatives, it is possible to determine a theoretical price in relationship to an actively traded underlying instrument, so even though the underlying may be misvalued, the derivative’s proper value given the current price and other parameters can be ascertained).  
Sethi suggests that stocks have “cash flow anchors”. I have trouble with that notion. A bond is a very specific obligation: to pay interest in specified amounts on specified dates, and to repay principal as of a date certain... By contrast, a stock is a very unsuitable instrument to be traded on an arm’s length, anonymous basis. A stock is a promise to pay dividends if the company makes enough money and the board is in the mood to do so. Yes, you have a vote, but your vote can be diluted at any time. There aren’t firm expectations of future cash flows; it’s all guess work and heuristics.
I chose the term "anchor" with some care, because the rode of an anchor is not always taut. I didn't mean to suggest that there is a single proper value for a stock that can be unambiguously deduced from the available information; heterogeneity in the interpretation of information alone is enough to generate a broad range of valuations. This can allow for drift in various directions as long as the price doesn't become too far detached from earnings projections.

Mark argues that the leak to Drudge was an attempt at distraction:
Rajiv Sethi looks at the reaction to the Romney campaign's attempt to change the subject from Romney's role at Bain to potential picks for vice-president (as far as I can tell, Rice has no chance -- she's "mildly pro-choice" for one -- so this was nothing more than an attempt to divert attention from Bain, an attempt one that seems to have worked, at least to some extent).
This view, which seems to be held left and right, was brilliantly summed up by Nate Silver as follows:
drudge (v.): To leak news to displace an unfavorable headline; to muddy up the news cycle.
I was tempted to reply to Nate's tweet with:
twartist (n.): One who is able by virtue of imagination and skill to create written works of aesthetic value in 140 characters or less.
But it seems that the term is already in use.

Saturday, June 30, 2012

Fighting over Claims

This brief segment from a recent speech by Joe Stiglitz sums up very neatly the nature of our current economic predicament (emphasis added):
We should realize that the resources in our economy... today is the same as it was five years ago. We have the same human capital, the same physical capital, the same natural capital, the same knowledge... the same creativity... we have all these strengths, they haven't disappeared. What has happened is, we're having a fight over claims, claims to resources. We've created more liabilities... but these are just paper. Liabilities are claims on these resources. But the resources are there. And the fight over the claims is interfering with our use of the resources.
I think this is a very useful way to think about the potential effectiveness under current conditions of various policy proposals, including conventional fiscal and monetary stabilization policies.

Part of the reason for our anemic and fitful recovery is that contested claims, especially in the housing market, continue to be settled in a chaotic and extremely wasteful manner. Recovery from subprime foreclosures is typically a small fraction of outstanding principal, and properly calibrated principal write-downs can often benefit both borrowers and lenders. Modifications that would occur routinely under the traditional bilateral model of lending are much harder to implement when lenders are holders of complex structured claims on the revenues generated by mortgage payments. Direct contact between lenders and borrowers is neither legal nor practicable in this case, and the power to make modifications lies instead with servicers. But servicer incentives are not properly aligned with those of the lenders on whose behalf they collect and process payments. The result is foreclosure even when modification would be much less destructive of resources.
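The scope for mutually beneficial write-downs is easy to see with a stylized calculation. The numbers below are hypothetical, apart from the assumption, taken from the Geanakoplos and Koniak excerpt quoted later in this post, that foreclosure recovers only about a quarter of principal; the calculation ignores discounting and takes the modified loan's payments at face value.

```python
# Stylized example; all figures hypothetical except the ~25% foreclosure
# recovery, which is the market expectation cited in the excerpt quoted below.
principal, house_value = 300_000, 200_000

recovery_rate = 0.25
lender_if_foreclose = recovery_rate * principal       # 75,000
borrower_if_foreclose = 0                             # the borrower loses the home

# Write the principal down below the current house value, so the borrower has
# equity and an incentive to keep paying; suppose the modified loan mostly performs.
new_principal = 180_000
prob_perform = 0.85
lender_if_modify = (prob_perform * new_principal
                    + (1 - prob_perform) * recovery_rate * new_principal)
borrower_equity = house_value - new_principal         # 20,000

print(lender_if_foreclose, round(lender_if_modify), borrower_equity)
# The lender expects roughly twice the foreclosure recovery and the borrower
# keeps the home with positive equity: both sides can gain from modification.
```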

Despite some indications that home values are starting to rise again, the steady flow of defaults and foreclosures shows no sign of abating. Any policy that stands a chance of getting us back to pre-recession levels of resource utilization has to result in the quick and orderly settlement of these claims, with or without modification of the original contractual terms. And it's not clear to me that the blunt instruments of conventional stabilization policy can accomplish this.

Consider monetary policy for instance. The clamor for more aggressive action by the Fed has recently become deafening, with a long and distinguished line of advocates (see, for instance, recent posts by Miles Kimball, Joseph Gagnon, Ryan Avent, Scott Sumner, Paul Krugman, and Tim Duy). While the various proposals differ with respect to details, the idea seems to be the following: (i) the Fed has the capacity to increase inflation and nominal GDP should it choose to do so, (ii) this can be accomplished by asset purchases on a large enough scale, and (iii) doing this would increase not only inflation and nominal GDP but also output and employment.

It's the third part of this argument with which I have some difficulty, because I don't see how it would help resolve the fight over claims that is crippling our recovery. Higher inflation can certainly reduce the real value of outstanding debt in an accounting sense, but this doesn't mean that distressed borrowers will be able to meet their obligations at the originally contracted terms. In order for them to do so, it is necessary that their nominal income rises, not just nominal income in the aggregate. And monetary policy via asset purchases would seem to put money disproportionately in the pockets of existing asset holders, who are more likely to be creditors than debtors. Put differently, while the Fed has the capacity to raise nominal income, it does not have much control over the manner in which this increment is distributed across the population. And the distribution matters.

Similar issues arise with inflation. Inflation is just the growth rate of an index number, a weighted average of prices for a broad range of goods and services. The Fed can certainly raise the growth rate of this average, but has virtually no control over its individual components. That is, it cannot increase the inflation rate without simultaneously affecting relative prices. For instance, purchases of assets that drive down long term interest rates will lead to portfolio shifts and an increase in the price of commodities, which are now an actively traded asset class. This in turn will raise input costs for some firms more than others, and these cost increases will affect wages and prices to varying degrees depending on competitive conditions. As Dan Alpert has argued, expansionary monetary policy under these conditions could even "collapse economic activity, as limited per capita wages are shunted to oil and food, rather than to more expansionary forms of consumption."

I don't mean to suggest that more aggressive action by the Fed is unwarranted or would necessarily be counterproductive, just that it needs to be supplemented by policies designed to secure the rapid and efficient settlement of conflicting claims.

One of the most interesting proposals of this kind was floated back in October 2008 by John Geanakoplos and Susan Koniak, and a second article a few months later expanded on the original. It's worth examining the idea in detail. First, deadweight losses arising from foreclosure are substantial:
For subprime and other non-prime loans, which account for more than half of all foreclosures, the best thing to do for the homeowners and for the bondholders is to write down principal far enough so that each homeowner will have equity in his house and thus an incentive to pay and not default again down the line... there is room to make generous principal reductions, without hurting bondholders and without spending a dime of taxpayer money, because the bond markets expect so little out of foreclosures. Typically, a homeowner fights off eviction for 18 months, making no mortgage or tax payments and no repairs. Abandoned homes are often stripped and vandalized. Foreclosure and reselling expenses are so high the subprime bond market trades now as if it expects only 25 percent back on a loan when there is a foreclosure.
Second, securitization precludes direct contact between borrowers and lenders:
In the old days, a mortgage loan involved only two parties, a borrower and a bank. If the borrower ran into difficulty, it was in the bank’s interest to ease the homeowner’s burden and adjust the terms of the loan. When housing prices fell drastically, bankers renegotiated, helping to stabilize the market. 
The world of securitization changed that, especially for subprime mortgages. There is no longer any equivalent of “the bank” that has an incentive to rework failing loans. The loans are pooled together, and the pooled mortgage payments are divided up among many securities according to complicated rules. A party called a “master servicer” manages the pools of loans. The security holders are effectively the lenders, but legally they are prohibited from contacting the homeowners.
Third, the incentives of servicers are not aligned with those of lenders:
Why are the master servicers not doing what an old-fashioned banker would do? Because a servicer has very different incentives. Most anything a master servicer does to rework a loan will create big winners but also some big losers among the security holders to whom the servicer holds equal duties... By allowing foreclosures to proceed without much intervention, they avoid potentially huge lawsuits by injured security holders. 
On top of the legal risks, reworking loans can be costly for master servicers. They need to document what new monthly payment a homeowner can afford and assess fluctuating property values to determine whether foreclosing would yield more or less than reworking. It’s costly just to track down the distressed homeowners, who are understandably inclined to ignore calls from master servicers that they sense may be all too eager to foreclose.
And finally, the proposed solution:
To solve this problem, we propose legislation that moves the reworking function from the paralyzed master servicers and transfers it to community-based, government-appointed trustees. These trustees would be given no information about which securities are derived from which mortgages, or how those securities would be affected by the reworking and foreclosure decisions they make. 
Instead of worrying about which securities might be harmed, the blind trustees would consider, loan by loan, whether a reworking would bring in more money than a foreclosure... The trustees would be hired from the ranks of community bankers, and thus have the expertise the judiciary lacks...  
Our plan does not require that the loans be reassembled from the securities in which they are now divided, nor does it require the buying up of any loans or securities. It does require the transfer of the servicers’ duty to rework loans to government trustees. It requires that restrictions in some servicing contracts, like those on how many loans can be reworked in each pool, be eliminated when the duty to rework is transferred to the trustees... Once the trustees have examined the loans — leaving some unchanged, reworking others and recommending foreclosure on the rest — they would pass those decisions to the government clearing house for transmittal back to the appropriate servicers... 
Our plan would keep many more Americans in their homes, and put government money into local communities where it would make a difference. By clarifying the true value of each loan, it would also help clarify the value of securities associated with those mortgages, enabling investors to trade them again. Most important, our plan would help stabilize housing prices.
As with any proposal dealing with a problem of such magnitude and complexity, there are downsides to this. Anticipation of modification could induce borrowers who are underwater but current with their payments to default strategically in order to secure reductions in principal. Such policy-induced default could be mitigated by ensuring that only truly distressed households qualify. But since current financial distress is in part a reflection of past decisions regarding consumption and saving, some are sure to find the distributional effects of the policy galling. Nevertheless, it seems that something along these lines needs to be attempted if we are to get back to pre-recession levels of resource utilization anytime soon. And the urgency of action does seem to be getting renewed attention.

The bottom line, I think, is this: too much faith in the traditional tools of macroeconomic stabilization under current conditions is misplaced. One can conceive of dramatically different approaches to monetary policy, such as direct transfers to households, but these would surely face insurmountable legal and political obstacles. It is essential, therefore, that macroeconomic stabilization be supplemented by policies that are microeconomically detailed and fine-grained, and that directly confront the problem of balance sheet repair. Otherwise this enormously costly fight over claims will continue to impede the use of our resources for many more years to come.