Capital Ideas – The History of the Development of CAPM


I picked up Peter Bernstein’s Capital Ideas over the weekend. If you deal with systematic investments or risk management, it is a fascinating history of the development of the ideas that underlie quantitative finance today.

I still have about two-thirds of the book to go, but two things have stood out so far. The first is the series of iterations through which the Capital Asset Pricing Model developed. The second is the realisation that making money in capital markets has always been hard, despite protestations to the contrary by today's fund managers.

The three most important characters in the development of the CAPM are well-known. They are Harry Markowitz, James Tobin, and William Sharpe. What I did not know was how they were related and how their ideas built on top of each other to result in the CAPM.

Stage 1 – Efficient Portfolios

The seminal paper that started this revolution came in 1952, published in the Journal of Finance. The paper was simply titled Portfolio Selection. This is where Markowitz came up with the idea of efficient portfolios. An efficient portfolio is one which has the lowest risk for a given level of expected return. At this point, there is no notion of a market portfolio. All he says is that a portfolio’s risk goes down as stocks with low correlation (covariance) are added to the portfolio and goes on to quantify that risk. He says that an investor must first determine the amount of risk she is willing to take and then pick the most efficient portfolio commensurate with that risk.
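Markowitz's core observation can be sketched in a few lines. The numbers below are made up for illustration (they are not from the paper); the point is only that, with weights and volatilities held fixed, portfolio risk falls as the correlation between holdings falls.

```python
import numpy as np

# Two assets with identical volatility, held in equal weights.
w = np.array([0.5, 0.5])
sigma = np.array([0.20, 0.20])  # each asset's standard deviation

def portfolio_vol(weights, sigmas, correlation):
    """Volatility of a two-asset portfolio: sqrt(w' C w),
    where C is the covariance matrix implied by the correlation."""
    cov = np.outer(sigmas, sigmas) * np.array([[1.0, correlation],
                                               [correlation, 1.0]])
    return float(np.sqrt(weights @ cov @ weights))

# Same assets, same weights -- lower correlation means lower portfolio risk.
print(portfolio_vol(w, sigma, 0.9))  # highly correlated: little diversification
print(portfolio_vol(w, sigma, 0.1))  # weakly correlated: risk drops noticeably
```

The covariance matrix here is tiny, but the same `w' C w` calculation is what blew up computationally for the security universes of the 1950s.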

There were real implementation problems with this idea though. As is well understood today, building a reliable covariance matrix for any reasonably large list of securities is notoriously difficult. But there was a more practical problem. Computing power back in 1952 was close to non-existent. Even nine years later, in 1961, William Sharpe reported that a commercially available computer required 33 minutes to solve a portfolio optimization problem for just 100 securities. In inflation-adjusted dollars, a single such run would cost about $300.


Stage 2 – The Separation Theorem

So it stood for six years, until Tobin, a Sterling professor at Yale, wrote his Liquidity Preference as Behavior Toward Risk in February 1958. The paper was published in a journal called The Review of Economic Studies. He noted that an investor would keep a certain percentage of his assets in the form of cash (or Government bonds) depending on the interest rate in the economy, while investing the rest in risky assets. Cash could, in theory at least, be considered a risk-free asset. This paper was actually about how people chose to split their assets between risky assets and cash. But the implications for Markowitz’s model were extraordinary.

Markowitz’s idea was that for any given level of risk, there is a portfolio which gives you the highest return. Now the idea of a super-efficient portfolio emerged. This was the portfolio, among all the efficient portfolios, with the highest return per unit of risk. That is, if you have two efficient portfolios, one which gives you 5 units of return for 2 units of risk, and another which gives you 12 units of return for 4 units of risk, the second portfolio is more efficient. This is because it provides 3 units of return per unit of risk compared to the first, which only provides 2.5.
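The comparison in that example is just a ratio, which a few lines make explicit (using the same two portfolios as above):

```python
# Return per unit of risk -- the quantity that ranks efficient portfolios.
def return_per_risk(ret, risk):
    return ret / risk

a = return_per_risk(5, 2)    # portfolio A: 2.5 units of return per unit of risk
b = return_per_risk(12, 4)   # portfolio B: 3.0 units of return per unit of risk
best = max((a, "A"), (b, "B"))[1]
print(best)  # "B" is the more efficient of the two
```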

What Tobin’s Separation Theorem did was to separate the decision of how much risk an investor wanted to take from the decision about the composition of the risky portfolio. The risky portfolio always had to be the super-efficient portfolio described above. The amount of risk that the investor undertook was then adjusted by allocating part of the assets to the portfolio and the rest to cash. This meant that instead of trying to construct the different portfolios for every level of risk, one now only had to compute one most efficient portfolio.
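The mechanics of the separation are simple enough to sketch. The risky portfolio's return and volatility below are assumed numbers, not from either paper; the point is that risk is dialled up and down by the cash allocation alone, while the risky portfolio itself never changes.

```python
# Tobin's separation: one risky portfolio; risk appetite sets the cash split.
r_risky, vol_risky = 0.10, 0.18   # assumed return/volatility of the risky portfolio
r_free = 0.03                     # assumed risk-free (cash) rate

def blended(alloc_risky):
    """Expected return and volatility when `alloc_risky` goes to the risky
    portfolio and the remainder to cash. Cash is treated as riskless."""
    ret = alloc_risky * r_risky + (1 - alloc_risky) * r_free
    vol = alloc_risky * vol_risky
    return ret, vol

print(blended(1.0))  # fully invested: the risky portfolio itself
print(blended(0.5))  # half in cash: exactly half the risk
```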

Stage 3 – The Diagonal Model and CAPM

By 1959, Markowitz had proposed that the returns on most stocks are correlated. However, instead of pursuing this idea on his own, he enlisted his graduate student, William Sharpe.


A big breakthrough came in 1961 when a 27-year-old Sharpe wrote his A Simplified Model for Portfolio Analysis. Although he submitted it in 1961, it finally appeared in Management Science in January 1963.

The first step is establishing the idea of corner portfolios. Sharpe proves that any efficient portfolio can be constructed as a combination of two corner portfolios. Given a fixed number of securities, there are only a fixed number of corner portfolios to determine.

The second step, and the real progenitor of the CAPM, is the Diagonal Model. Remember the covariance matrix I mentioned before? The diagonal model greatly simplified its construction and calculation. Sharpe essentially said that the covariance between any two securities is determined entirely by their individual covariances with a single driving factor. This factor could be the level of the stock market as a whole, the Gross National Product, some price index, or any other factor thought to be the most important single influence on the returns from securities.
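In the diagonal model each security's return is driven by the single factor plus independent noise, so the whole covariance matrix collapses to a handful of per-security numbers. A minimal sketch with made-up parameters (the factor sensitivities and variances below are illustrative, not from the paper):

```python
import numpy as np

# Diagonal model: r_i = a_i + b_i * F + e_i, with the e_i independent
# across securities, so cov(r_i, r_j) = b_i * b_j * var(F) for i != j.
rng = np.random.default_rng(0)
n = 5                                    # illustrative universe size
b = rng.uniform(0.5, 1.5, n)             # each security's sensitivity to F
resid_var = rng.uniform(0.01, 0.05, n)   # security-specific residual variances
factor_var = 0.04                        # variance of the single factor F

# The full n x n covariance matrix from just 2n + 1 numbers:
cov = factor_var * np.outer(b, b) + np.diag(resid_var)
print(cov.shape)
```

Every off-diagonal entry is `b[i] * b[j] * factor_var`; only the diagonal carries security-specific information, which is where the model's name comes from.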

This simplification reduced the number of parameters to be estimated from almost 2 million to 6000 for an analysis of 2000 securities. However, the problem of computing all the corner portfolios remains, and Sharpe is still worried about the time and cost of calculating them. So he makes the next jump. He shows that in the presence of lending and borrowing money for investment, many so-called efficient portfolios cease to be efficient. That reduces the amount of computation dramatically.
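The figures quoted from the book are easy to verify. The full model needs a mean and variance per security plus every pairwise covariance; the diagonal model needs three numbers per security plus the factor's mean and variance:

```python
# Parameter counts for n = 2000 securities.
n = 2000
full = n * (n + 3) // 2   # n means + n variances + n(n-1)/2 covariances
diagonal = 3 * n + 2      # a_i, b_i, residual variance each, + factor mean & variance
print(full, diagonal)     # roughly 2 million vs. roughly 6000
```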

The final piece of the puzzle was put in place in 1964. Capital Asset Prices – A Theory of Equilibrium Under Conditions of Risk was published in the Journal of Finance in September. Sharpe assumes homogeneity of investor expectations. This means that there is a consensus about the expected returns for all securities in the market. The consequence of this assumption, as he demonstrates, is that the super-efficient portfolio I mentioned above is simply the market portfolio! Combining this with Tobin’s separation theorem (it is in this paper that Sharpe cites Tobin explicitly), he finally establishes that the best portfolio for any given level of risk is a partial allocation to the market portfolio with the rest invested in a risk-free asset (cash or government bonds).
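The pricing relation that falls out of the 1964 paper says a security's expected return depends only on its beta to the market portfolio. A sketch with assumed rates (the risk-free and market figures below are illustrative):

```python
# CAPM: E[r_i] = r_f + beta_i * (E[r_m] - r_f)
r_f = 0.03   # assumed risk-free rate
r_m = 0.08   # assumed expected return of the market portfolio

def capm_expected_return(beta):
    """Expected return of a security with the given market beta."""
    return r_f + beta * (r_m - r_f)

print(capm_expected_return(0.0))  # zero beta earns the risk-free rate
print(capm_expected_return(1.0))  # beta of 1 earns the market return
print(capm_expected_return(1.5))  # higher beta, higher expected return
```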


The model was complete, 12 years after the first seeds of the idea were planted!

As I mentioned at the start, one thing that struck me is that it has always been hard to make money in financial markets. Consider the fact that much of the effort that led to the development of the CAPM was driven by the cost of the computation Markowitz’s model required. In fact, the costs were so prohibitive that few practitioners actually adopted these methods. So while we could run models today and show that the simplistic CAPM worked back in the 60s and 70s, the reality is that running such a portfolio back then would have taken a lot of money and effort. When the CAPM became easy to run, the alpha went away. The frontier keeps moving, and the contemporaneous implementation of cutting-edge methods always remains difficult and expensive. It is a fascinating, and endless, pursuit.

Parijat
On a quest to understand what makes a business great and what makes a great investment.