
SURVEY OF EXPERIMENTAL ECONOMICS

3 Experiments in Macroeconomics and Finance

Part 3 reviews experiments in finance and economics, with an emphasis on evaluating the learning from such experiments.

Christopher Sims stated, “Economists can do very little experimentation to produce crucial data. This is particularly true of macroeconomics” (1996: 107). In the 15 years since this was written, experimental macroeconomics has become a rapidly evolving field. Experimental macroeconomics is a sub-field of experimental economics that uses controlled laboratory methods to understand macroeconomic phenomena and to test the specific assumptions and predictions of macroeconomic models. Macroeconomic theories have traditionally been tested using non-experimental field data like gross domestic product (GDP) and its constituents. Leaving aside the rare naturally occurring experiment, it would seem difficult, if not impossible, to address macroeconomic questions using field experiments.

An important factor making macroeconomic experiments in the laboratory possible is the emphasis on building macro models on microeconomic foundations. These models often study how exogenous variables affect household or firm-level decisions and then scale these decisions up to explain the behaviour of the aggregate economy. Perhaps Sims’ belief that macroeconomic models are impossible to test in a laboratory hinged on the impossibility of approximating behaviour on a macro scale. Yet, in a macro model that is based on micro foundations, the number of agents does not really matter, since macro behaviour is just scaled-up micro behaviour. This has enabled experimental macroeconomists exploring non-strategic macroeconomic models to construct experiments that essentially test the underlying micro models. However, unlike micro models and games that strive for generality, macroeconomic experiments are generally designed with a specific setting in mind, involving expectations dynamics, inter-temporal optimisation and consumption or saving decisions, inflation and unemployment, and the like. The advantage of these experiments is that they enable us to gain insights that cannot be obtained using standard macroeconometric approaches. Field data for testing macroeconomic models are often simply not available. There can also be identification, endogeneity and equilibrium selection issues that cannot be satisfactorily addressed using standard econometric methods.

August 27, 2011 vol xlvi no 35

It is interesting to note the manner in which an infinite horizon is simulated in the laboratory. Researchers have generally employed two designs. One design, used by Marimon and Sunder (1993), was to recruit subjects for a fixed period of time but terminate the session early, without advance notice, following the end of some period of play. Because Marimon and Sunder used a forward-looking dynamic model, they used one-step-ahead forecasts made by a subset of subjects, who were paid for their forecast accuracy, to determine final period allocations. In the alternative design, there was a small probability, β, that any given period would be the last among those played in a sequence. Enough time was allowed for several indefinite sequences to be played in an experimental session (Duffy and Ochs 1999, 2002; Lei and Noussair 2002; Capra et al 2005). In an overlapping generations model involving an infinity of goods and agents, Marimon and Sunder allowed each subject to live several two-period lives over the course of an indefinite sequence of periods. But it is also possible to argue that one does not really need infinitely many agents to achieve a competitive equilibrium. Results from many double auction experiments suggest that a competitive equilibrium can be achieved rather quickly with as few as three to five agents operating on each side of the market.
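The random-termination procedure can be sketched in a few lines of code (a hypothetical illustration; the continuation probability of 0.8 and the number of simulated sequences are assumptions, not parameters from the studies cited):

```python
import random

def indefinite_sequence(continuation_prob, rng):
    """Play periods until a random draw terminates the sequence.

    With continuation probability beta, the sequence length is
    geometrically distributed with mean 1 / (1 - beta), so a
    risk-neutral subject should treat beta like a discount factor.
    """
    periods = 1
    while rng.random() < continuation_prob:  # sequence continues
        periods += 1
    return periods

rng = random.Random(42)
beta = 0.8  # assumed continuation probability
lengths = [indefinite_sequence(beta, rng) for _ in range(10_000)]
print(sum(lengths) / len(lengths))  # sample mean, close to 1/(1-beta) = 5
```

This equivalence between random termination and geometric discounting is what allows a finite laboratory session to stand in for an infinite horizon.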

As of now, the main insights from macroeconomic experiments have been (1) an assessment of the micro assumptions underlying macro models; (2) better insights into the dynamics of expectations that play a crucial role in macroeconomic phenomena; (3) a way to resolve equilibrium selection problems for models with multiple equilibria; (4) a way to validate those macroeconomic predictions for which field data are lacking; and (5) an assessment of the impact of various macroeconomic policy interventions on household or firm-level behaviour (Duffy 2008).

3.1 Optimal Consumption Savings Decisions

An important theme in macroeconomic experiments has been the one-sector infinite horizon optimal growth model pioneered by Ramsey (1928), Cass (1965) and Koopmans (1965). These models imply that individuals solve a dynamic inter-temporal optimisation problem, deriving their consumption and savings plans over an infinite time horizon. Important experimental studies in this area include Hey and Dardanoni (1988), Carbone and Hey (2004), Noussair and Matheny (2000), Lei and Noussair (2002) and Ballinger et al (2003).

Hey and Dardanoni found that experimental consumption was quite different from optimal behaviour, with consumption being dependent on past income realisations. However, the comparative static implications of the theory were well supported: changes in the discount rate or the return on savings had the same impact on consumption in the laboratory as under optimal consumption behaviour. Carbone and Hey found a great deal of heterogeneity in agents’ abilities to confront the life cycle consumption-savings problem, with most applying too short or too variable a planning horizon to conform to optimal behaviour. They concluded that “subjects do not seem to be able to smooth their consumption stream sufficiently with current consumption too closely tracking current income”. Noussair and Matheny incorporated a concave production technology, which served to endogenise savings in their model. They found that theoretical predictions regarding the speed of convergence did not find much support. Subjects occasionally resorted to consumption binges, allocating nearly nothing to the next period’s capital stock, in contrast to the prediction of consumption smoothing. However, this behaviour was somewhat mitigated as subjects gained experience. Ballinger et al explored the impact of social learning. They eliminated time discounting in an experiment with a finite 60-period horizon.
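The optimal benchmark against which such behaviour is judged can be illustrated with a small sketch (a hypothetical parameterisation, not the design of any experiment cited): with logarithmic utility, no discounting and a zero interest rate, the first-order conditions imply perfect smoothing, and any front-loaded “binge” path yields lower lifetime utility.

```python
import math

def smooth_plan(wealth, horizon):
    """With log utility, no discounting and zero interest, the
    first-order conditions u'(c_t) = u'(c_{t+1}) imply equal
    consumption in every period: c_t = wealth / horizon."""
    return [wealth / horizon] * horizon

def lifetime_utility(path):
    """Undiscounted sum of per-period log utilities."""
    return sum(math.log(c) for c in path)

smooth = smooth_plan(60.0, 4)       # [15.0, 15.0, 15.0, 15.0]
binge = [30.0, 10.0, 10.0, 10.0]    # a front-loaded 'consumption binge'
print(lifetime_utility(smooth) > lifetime_utility(binge))  # True
```

The gap between the two utility totals is a simple measure of the cost of the binge behaviour observed in these experiments.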

Economic & Political Weekly


Subjects were matched into three-person families and made decisions in a fixed sequence. The first generation subjects (G1) made decisions in isolation for the first 20 periods; in the next 20 periods (21 to 40), their behaviour was observed by second generation (G2) subjects, and in one treatment the two generations could freely communicate with each other. The same procedure was then implemented with the third generation (G3) watching the G2 subjects for the next 20 rounds, and so on. Subjects tended to consume more than the optimal level in the early periods of their lives, leading to suboptimal consumption in the later stages. Consumption was also found to be highly sensitive to recent (lagged) changes in income. Most significantly, the consumption behaviour of the G3 players was significantly closer to optimal behaviour than that of the G1 agents, suggesting that social learning by observation played a significant role.

3.2 Expectations Formation

Expectations of future endogenous variables play a critical role in self-referential macroeconomic models. Models closed by rational expectations face the problem that field tests are joint tests of the model and of the rational expectations assumption. Marimon and Sunder (1993) were the first to elicit inflation forecasts, which were then used to determine subjects’ inter-temporal consumption or saving decisions and, via market clearing, the actual price level. Recent work has begun to recognise that rational expectations presume too much knowledge on the part of the agents: for example, a common knowledge of rationality and a knowledge of the underlying economic model. Given these strong assumptions, researchers have tried to replace rational expectations with some version of bounded rationality and to ask whether boundedly rational agents operating for some time in a known, stationary environment might eventually learn to possess rational expectations from observation of the relevant time series data. Broadly speaking, there have been two versions of boundedly rational expectations formation. The first approach has been motivated by Keynes’ (1936) comparison of financial markets to a beauty contest in which contestants had to choose the six prettiest faces from 100 photographs. The winner of the contest was not the one who chose the prettiest face, but the one whose choice was closest to the average choice. Keynes observed that individuals might form expectations not just of average opinion, but also of what average opinion expects average opinion to be, and higher degrees of such average opinions. This was tested experimentally by Nagel (1995). In Nagel’s design, a group of 15 to 18 subjects were asked to guess, simultaneously and independently, a real number in the closed interval [0,100].
They were informed that the person whose guess was closest to the average of all guesses times a number p < 1 would be the winner. The solution of the game was simple: the only Nash equilibrium was for all N players to guess 0. (If p were equal to 1, then any number in the interval [0,100] would be a Nash equilibrium.) With p = 1/2, Nagel found that in three sessions the equilibrium prediction of 0 was never chosen. On the other hand, there were large spikes in the neighbourhood of 50, 25 and 12.5. The choice of 50 was not really rational, since it implied an expected mean of 100 with p = 1/2. These players exhibited the lowest level of reasoning. The somewhat more sophisticated players assumed a mean of 50 and chose 25, while even more sophisticated players chose 12.5. With repetitions, subjects did eventually converge to the unique rational expectations equilibrium. This implied that in multi-agent economies where all agents know the model, the common-knowledge-of-rationality assumption might not be valid. On the contrary, as a result of cognitive constraints, agents might adopt rules of thumb that lead to systematic forecast errors. Rational expectations might be viewed as a long-term phenomenon.
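The spikes at 50, 25 and 12.5 correspond to successive levels of iterated best response, which can be sketched as follows (p = 1/2 as in the treatment described; the level-0 anchor of 50, the midpoint of the interval, is the usual modelling assumption):

```python
def level_k_guess(p, k, anchor=50.0):
    """A level-0 player guesses the anchor; a level-k player
    best-responds to level-(k-1) players by guessing p times their
    guess. As k grows, the guess converges to the unique Nash
    equilibrium of 0."""
    guess = anchor
    for _ in range(k):
        guess *= p
    return guess

print([level_k_guess(0.5, k) for k in range(4)])  # [50.0, 25.0, 12.5, 6.25]
```

Convergence to 0 with repetition, as Nagel observed, corresponds to subjects climbing these reasoning levels over time.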

An alternative view of boundedly rational behaviour assumes that agents do not know the underlying model and behave like econometricians, using possibly mis-specified forecasting rules that they update in real time. This approach has been studied experimentally by Bernasconi et al (2006), Hey (1994), Van Huyck et al (1994), Kelley and Friedman (2002), Hommes et al (2007) and Adam (2007). These studies reported mixed results. For instance, Hommes et al found that agent forecasts were not statistically significantly different from rational expectations forecasts, though their variance was significantly greater; even with limited information, there was no unexploited opportunity to improve price predictions. In the case of Adam, the issue was not whether agents could learn to form rational expectations of future inflation but whether they could find the correct specification of the reduced-form model (which in Adam’s treatment also involved lagged output) that they should use to form rational expectations. He found that in most of the experimental sessions, subjects did not condition their forecasts on lagged output.

3.3 Experimental Results on Coordination Problems

Many macroeconomic models are characterised by multiple equilibria. Which of these equilibria do agents coordinate on? The answer is often vital to questions of economic well-being. The structure of exogenous institutions might play a part in determining the coordination equilibrium. This question is rather hard to settle with field data because of the problems of endogeneity and the close association of clusters of institutions. For example, if democracy and free speech are generally found together, it might be difficult to unravel their individual roles in determining which of the equilibria agents will coordinate on. Laboratory experiments can be helpful in such cases.

An important class of coordination problems involves poverty traps (Lei and Noussair 2002, 2007). These studies featured a low-level equilibrium (involving low levels of capital stock, output and welfare) and a high-level equilibrium (involving higher capital stock, output and welfare) in a model with a non-convex production function. In one treatment, five subjects made decisions in a decentralised fashion, while in another a group played the role of a social planner and made collective consumption or savings decisions. In both cases, the model had an infinite horizon. The main experimental finding was that in the decentralised treatment the low-level equilibrium was a powerful attractor, while in the social planner treatment neither of the two equilibria was ever achieved. Capra et al (2005) studied the possibility that various institutional features could help the economy escape the poverty trap. They too had a non-convex production function. Indeed, the baseline treatment was the same as the Lei and Noussair (2007) decentralised treatment: subjects decided independently and simultaneously how to allocate their output between current consumption and savings. In the communication treatment, subjects were free to communicate with each other before the opening of the market for capital. In the voting treatment, two subjects were randomly selected to propose consumption or savings plans for all five players, which were then put to a vote; the winning (simple majority) proposal was implemented. In a hybrid treatment, both communication and voting stages were included. In the baseline treatment, subjects could not escape the poverty trap. The addition of communication or voting helped sometimes, though not always. In the hybrid treatment that allowed communication as well as voting, experimental economies always seemed able to escape the poverty trap. There seemed to be some causality from these institutions to economic performance.

Further models of coordination failure that have been studied involve bank runs (Garratt and Keister 2005; Schotter and Yorulmazer 2003). These studies used the Diamond and Dybvig (1983) coordination game model of bank runs as their theoretical structure. In this three-period inter-temporal model, depositors optimally deposited their unit endowment in a bank (period 0), given that the bank had exclusive access to a long-term investment opportunity and the deposit contract it offered. This deposit contract insured the depositors against uncertain liquidity shocks. In period 1, a fraction of depositors found that their immediate liquidity needs forced an early withdrawal; the remaining depositors withdrew in period 2. The bank used its knowledge of these fractions to optimally devise a deposit contract that stipulated that depositors might withdraw the whole of their unit endowment in period 1, while those who waited until period 2 to withdraw would earn a return R > 1. There existed two equilibria: a separating equilibrium, in which the impatient types withdrew in period 1 and the patient ones waited until period 2, and a pooling equilibrium, in which uncertainty about the behaviour of other patient types caused all patient agents to mimic the impatient types and withdraw in period 1, causing a run.
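The two equilibria can be illustrated numerically (a hypothetical parameterisation in the spirit of Diamond and Dybvig, not the payoffs of the experiments discussed: five depositors, an early claim of 1.1 reflecting the insurance feature of the contract, a liquidation value of 1 per unit invested, and R = 1.5):

```python
def period2_payoff(n_early, n_depositors=5, early_claim=1.1,
                   liquidation_value=1.0, long_return=1.5):
    """Payoff to a patient depositor who waits until period 2, given
    how many others withdraw in period 1. Early claims are met by
    liquidating the long asset; whatever remains invested earns
    long_return and is split among those who waited."""
    invested = n_depositors * 1.0  # unit endowments deposited
    remaining = max(invested - n_early * early_claim / liquidation_value, 0.0)
    waiters = n_depositors - n_early
    return remaining * long_return / waiters if waiters else 0.0

# If no other depositor runs, waiting beats the early claim of 1.1:
# the separating (pay-off dominant) equilibrium is supported.
print(period2_payoff(0) > 1.1)  # True
# If the other four run, the bank is nearly drained and waiting pays
# less than 1.1, so running is a best response: the run equilibrium.
print(period2_payoff(4) < 1.1)  # True
```

The coordination problem is visible in the two print statements: the best response to everyone waiting is to wait, and the best response to everyone running is to run.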

Garratt and Keister (2005) had a baseline treatment in which five subjects had $1 each deposited in a bank and decided whether to withdraw this amount or wait till the next period to obtain $1.50. Following each withdrawal opportunity, subjects learnt the number of players in their group of five who had chosen to withdraw. Across treatments, the number of withdrawal opportunities and the number of withdrawals that the bank could sustain while continuing to deliver $1.50 to all those who waited were varied. In the baseline game, no group ever coordinated on a bank run, while a majority of the groups coordinated on the second-period withdrawal, which was the pay-off dominant equilibrium. In the second treatment, at each withdrawal opportunity there was a small probability that one randomly selected player would be forced to withdraw, and subjects did not get to know whether a withdrawal was forced or not. The probabilities of forced withdrawals were chosen so that there was a pay-off dominant equilibrium in which there was no voluntary withdrawal. With this modification, coordination on the bank run equilibrium became much more likely. Using a somewhat different design, Schotter and Yorulmazer (2003) arrived at a similar conclusion.

The work surveyed so far forms only a small fraction of recent experimental research in macroeconomics; much of the rest is not covered here because of space limitations. For a detailed survey of recent experimental work in macroeconomics, see Duffy (2008). We now turn to experimental findings in financial economics, particularly on experimental asset markets.

3.4 Why More Data?

This question is frequently asked. Of all branches of economics, financial economics probably has the most detailed and up-to-the-minute observational data, from stock exchanges around the world, and the field is characterised by a strong empirical tradition. Why, then, do we need to spend time and money to conduct experiments with financial markets and gather even more data? Data from stock exchanges include bids, asks, transaction prices, volume, and so on. In addition, data from information services include information on actions and events that may influence markets. Yet theories of financial markets (and the economics of uncertainty more generally) are built on investor expectations, and we need data on investor beliefs and expectations to empirically distinguish among competing theories. Neither of these two sources of data does, or can, report on investor expectations.

In experimental markets, the researcher knows the underlying parameters, and either knows or can make reasonable conjectures about investor expectations. Armed with this knowledge, the researcher knows the price and other predictions of alternative theories. Indeed, the experiments are designed so that the alternative theories yield mutually distinct predictions for the market. This approach allows us to conduct powerful tests of theories, which are not possible from the field data alone; we know little about the parameters and expectations that generate the field data from stock exchanges. We shall return to illustrative examples in a later section after addressing five questions.

3.5 What Can We Learn from Such Simple Markets?

Experimental markets are typically conducted in simple laboratory or classroom settings with a small number of student subjects who may have little prior experience of trading and investment. But, in reality, security markets are complex, populated by experienced, sophisticated professionals. Naturally, the second question frequently asked is, what can we possibly learn from such Mickey Mouse experiments about far more complex “real” markets? An experimenter may pay, say, $50 to each participant after a two- or three-hour session, while traders in the security markets we are interested in often have millions, if not billions, of dollars at stake.

All science is aimed at finding simple principles that explain or predict a large part (rarely all) of the phenomenon of interest. Simple models, whether in mathematics or in the laboratory, make many assumptions. These can be divided into core assumptions and assumptions of convenience. Core assumptions are essential features of the environment, while assumptions of convenience are made for mathematical tractability (probability distributions and preference functions in most cases). The power of a theory depends on the robustness of its explanations and predictions as the environments from which we gather the data deviate from its assumptions of convenience (Sunder 2001). An experimenter can deliberately and progressively make the experimental environment deviate from the assumptions of convenience in the theory to measure this robustness. This robustness check is not possible with the field data generated by the environment prevailing in the market.

In economics and finance, as in other sciences, simple experiments are used to discover and verify general basic principles. We learn to count through analogies of images or physical objects. We learn to swim in knee-deep water. We learn and verify the laws of electricity, not by using a computer or radio, but by simple instruments such as a potentiometer or ammeter. The manipulation of simple controls and a monitoring of the results build a fundamental knowledge of science. The noise generated by countless factors in complex environments makes it difficult to detect the fundamental principles that underlie the economics of the environment in which we are interested. Simple maths and lab models help us learn before we immerse ourselves in the complexity of real-world phenomena. If the principle is general, it should be applicable not only to a complex environment but also to the simple one of a laboratory. If it does not pass the test of simple environments, its claim to be applicable to more complex environments is weakened.

3.6 Experimental vs Behavioural Finance

The third question often raised is whether experimental finance is the same as behavioural economics or behavioural finance. The answer is no. In experimental economics the emphasis is on the design of markets and other economic institutions to gather empirical data on the consequences of institutions and rules. We assume that people do what they think is best for them, given what they think they know. They may be inexperienced, but they are not dumb; they learn. In experimental finance, we design experiments to sort out the claims of competing theories. On occasion, we might conjecture a new theory from the data, but then we do not use the data that generated the conjecture to test it. Like engineers, experimentalists design test beds to examine the properties and performance of alternative market institutions. The focus in this literature is on equilibrium, efficiency, prices and allocations. This work complements mathematical modelling and empirical research with field data.

3.7 What Have We Learnt?

The fourth question is, what have we learnt from experiments? In the last couple of decades, asset market experiments have yielded some important findings by exploiting the advantages of laboratory controls. These findings were not and could not have been reached from field data or mathematical modelling alone. However, in combination with field data and modelling, laboratory experiments have helped us make substantial advances in our understanding of security markets. Let us review some key findings.

Security markets can aggregate and disseminate information. In other words, markets can be informationally efficient. However, just because they can does not mean they always are. Information dissemination, when it occurs, is rarely instantaneous or perfect; learning takes time. Efficiency is a matter of degree, not a 0-1 issue. Plott and Sunder (1982) asked if markets can disseminate information from those who know to those who do not. A satisfactory answer to this question cannot be obtained from an analysis of field data because we do not know which investor has what information. Plott and Sunder used a simple experiment to address the question. As Table 2 shows, they designed a simple, single-period, two-state (X or Y) security, with the probability of each state given. The market was populated with four traders each of three types, making a total of 12 traders. Type I received dividends of 400 in State X and 100 in State Y, while Types II and III received dividends of 300 and 150, and 125 and 175, respectively. Each trader was endowed with two securities and 10,000 in “cash” at the beginning of each period. The last column of Table 2 shows the expected dividend from the security for each of the three types of traders when they do not know whether State X or Y is realised. Under this no-information condition, the equilibrium price of the security would be 220, the maximum of the three expected values (220, 210 and 155), and Type I traders should buy all the securities at this price from the other traders.

Table 2: Information Dissemination Equilibria in a Simple Asset Market

Trader Type   Dividend, State X (Prob = .4)   Dividend, State Y (Prob = .6)   Expected Dividend
I             400                             100                             220
II            300                             150                             210
III           125                             175                             155

PI equilibrium: price 400, held by informed Type I traders (State X); price 220, held by uninformed Type I traders (State Y).
RE equilibrium: price 400, held by Type I traders (State X); price 175, held by Type III traders (State Y).
Source: Plott and Sunder (1982).
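The equilibrium predictions in Table 2 follow mechanically from the dividend matrix, as this short sketch shows (the numbers are taken from the table; the code itself is only an illustration):

```python
dividends = {"I": {"X": 400, "Y": 100},
             "II": {"X": 300, "Y": 150},
             "III": {"X": 125, "Y": 175}}
prob = {"X": 0.4, "Y": 0.6}

# Expected dividend for each type with no state information.
expected = {t: round(sum(prob[s] * d[s] for s in prob), 2)
            for t, d in dividends.items()}
print(expected)  # {'I': 220.0, 'II': 210.0, 'III': 155.0}

def re_prediction(state):
    """Rational expectations: everyone behaves as if informed, so the
    price is the highest dividend in the realised state's column and
    the security ends up with the type that values it most."""
    holder = max(dividends, key=lambda t: dividends[t][state])
    return dividends[holder][state], holder

def pi_prediction(state):
    """Prior information: informed traders value the security at the
    realised dividend, uninformed traders at their expected dividend;
    the price is the maximum of these six valuations."""
    return max(max(d[state] for d in dividends.values()),
               max(expected.values()))

print(re_prediction("X"), re_prediction("Y"))  # (400, 'I') (175, 'III')
print(pi_prediction("X"), pi_prediction("Y"))  # 400 220.0
```

The design works because the two hypotheses disagree in State Y: PI predicts a price of 220 with uninformed Type I holders, while RE predicts 175 with Type III holders.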

Suppose the realised state is X and two traders of each type are informed at the beginning of the period that the state is X, while the other two are not. The informed traders know that the value of the dividend from the security (if they decide to hold it) is given in column X, while the uninformed traders (assuming they are risk neutral) would value the securities at the expected values given in the last column of the table. The equilibrium price would be the maximum of these six numbers, 400, and the informed Type I traders would buy the security at that price. If the realised state is Y, by a similar argument, the equilibrium price would be 220, the maximum of the six numbers in the Y column and the expected value column, and the uninformed Type I traders should buy the security at that price. This equilibrium is labelled the prior information (PI) equilibrium because it assumes that the traders rely entirely on the information they receive at the beginning of the period, and do not learn any additional information about the realised state from their participation in the market.

The PI equilibrium is problematic because it assumes that traders will not learn from their failures. Whenever uninformed Type I traders pay a price of 220 to buy a security, they will discover that the state turns out to be Y with a dividend of only 100, leaving them with a loss. If we assume that one cannot fool some of the people all the time, these traders will learn not to buy the securities at that price, making the PI equilibrium unsupportable. Under the rational expectations (RE) or efficient market hypothesis, information about the state would be disseminated from the informed to the uninformed traders through the market process. Under this assumption, in State X, all traders would know the state is X. This would yield an equilibrium price of 400, which is the maximum of the three dividends in the column for State X, and all traders of Type I would buy securities from the others. Similarly, in State Y, the equilibrium price would be 175, which is the maximum of the three dividends in the column for State Y, and all traders of Type III would buy the securities. This market was designed so that the PI and the RE hypotheses yielded mutually distinct predictions of market outcomes in prices and allocations.

Figure 1: Dissemination of Information in Security Markets
[Figure omitted: transacted and average prices (0-450) plotted by period, with the realised state marked on the x axis from 1(X) to 12(X); counts of RE-consistent and PI-consistent end-of-period asset holdings and allocative efficiency are shown above the axis; information conditions move from no information, to private information to six insiders, to private information to all. Source: Plott and Sunder (1982).]

Figure 1 shows the results for one of these markets. In periods 1 and 2, traders were not given any information and the prices were located in the vicinity of the no-information prediction of 220. In period 3, State Y was realised, and the prices were much closer to the RE prediction of 175 than to the PI prediction of 220. Similar results were repeated in the other periods (5, 6, 8 and 10) when State Y was realised. The observed allocative efficiency, as well as prices, were much closer to the predictions of the RE model than those of the PI model. This experiment provided direct empirical evidence that markets could disseminate information from the informed to the uninformed through the process of trading alone, without any exchange of verbal communication. Such markets could achieve high levels of efficiency by transferring securities to the hands of those who value them most.

Evidence on the ability of markets to disseminate information led to a more ambitious experiment. Can markets behave as if the diverse information in the hands of different traders is aggregated so that it is in the hands of all? To address this question, Plott and Sunder (1988) designed a market with three states of the world (X, Y and Z). When the realised state was, say, X, they informed some traders that it was “not Y” and informed the others that it was “not Z”. Do markets aggregate the diverse information in the hands of individual traders and behave as if everyone learns that the realised state is X in such a case? They found that such aggregation and dissemination of diverse information could take place in markets, which could achieve high levels of informational and allocative efficiency. The same happened when investors had homogeneous preferences (which made it easier for traders to infer information from the actions of others). Just because markets can aggregate and disseminate information does not mean that all markets do so under all conditions. Experiments show that market conditions must allow investors the opportunity to learn information from what they can observe. Even in simple experimental markets these conditions are not always satisfied, for various reasons (for instance, too many states, or too few observations and repetitions to facilitate learning). For example, in the information aggregation experiment mentioned above, a complete market for three Arrow-Debreu securities was efficient, but an incomplete market for a single security was not.

Even in the best of circumstances, equilibrium outcomes are not achieved instantaneously. Markets tend towards efficiency but cannot achieve it immediately. It takes time for investors to observe, form conjectures, test them, modify their strategies, and so on. With repetition, investors get better at learning, but when the environment changes continually, including the behaviour of other investors, the learning process may never reach a stationary point.

If markets are efficient in the sense of aggregating and disseminating information across traders, who will pay for costly research? Grossman and Stiglitz (1980) and other authors have pointed out this problem. Experiments have helped us understand what actually goes on and allowed us to better address this conundrum of efficient market theory – a finite rate of learning makes it possible to support costly research, even in markets which tend towards efficient outcomes. Enough people will conduct research so that the average returns to research equal the average cost. Research users have higher gross profits, but their net profits are the same as the profits of the others. As investors learn (in a fixed environment), their value of information decreases because they can ride free on others’ information, and the market price of information drops. If the supply of information can be maintained at a lower price, the price drops to a level sustainable by learning friction. If the supply of information also falls with its price, we get a noisy equilibrium.

After revelations in recent years that the research departments of investment banks had distributed misleading research to clients, regulators have sought to separate research and investment banking functions and, in some cases, have required the investment industry to fund the free distribution of investment research to the public. Experimental research throws some light on the possible consequences of mandating the provision of free research to investors. It would be difficult, if not impossible, to assess the quality of such "free" research distributed to the public. It is not clear whether optimal investment in research can be maintained without private incentives to benefit from its results. Mandated free distribution of research is likely to reduce its quality to the level at which its zero price would be justified.

Economic theory tends to emphasise transaction prices as the main vehicle for the transmission of information in markets. Experimental markets show that other observables (bids, asks, volume, timing, and the like) also transmit information. In deep markets, price can be the outcome of information transmitted through these other variables. In period 8 in Figure 1, for example, the first transaction occurred at the RE price. To arrive at the RE price, the traders needed to have information; this information transmission had already taken place through other variables before the first transaction of the period was executed. Derivative markets help increase the efficiency of primary markets. Forsythe, Palfrey and Plott (1982), in the first asset market experiment, showed that futures markets speeded up convergence to equilibrium in the primary market. Friedman et al (1983) and Kluger and Wyatt (1990) found that option markets increased the informational efficiency of equity markets.

Traditionally, market efficiency has been defined statistically: if you cannot make money from information (past data, public information, or all information), the market is deemed to be efficient. Experiments have revealed that statistical efficiency is a necessary but not a sufficient condition for the informational efficiency of markets. The last four periods of the market depicted in Figure 2 from Plott and Sunder (1988) were efficient by statistical criteria but were not informationally efficient. Just because you cannot make money in the market does not mean that the price is right. Even when investors know that the price is not right, they may have no means of profiting from that knowledge.
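The statistical notion of efficiency invoked here is usually formalised as a martingale (fair game) condition; the following is the standard textbook formulation, not a formula from the original article. With $\Omega_t$ denoting the information set at time $t$ and $r$ the normal rate of return:

```latex
% Prices fully reflect \Omega_t: conditional on that information,
% the expected future price yields only the normal return r, so
% there is no way to earn excess profits by trading on \Omega_t.
E\!\left[\, p_{t+1} \mid \Omega_t \,\right] \;=\; (1+r)\, p_t
% Taking \Omega_t to be past prices gives weak-form efficiency,
% all public information gives semi-strong form, and all
% information gives strong-form efficiency.
```

A market can satisfy a condition of this kind, as in the last four Plott and Sunder periods, while prices still differ from their full-information values.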

3.8 What Is Next?

The above paragraphs give a highly selective summary of what we have learnt from experimental asset markets. What is coming up next? The existence and causes of market bubbles are a perennial

Figure 2: Aggregation of Information in Security Markets

[Figure: average transaction prices of the X, Y and Z securities and market efficiency, by period (state), comparing a single-security market with a complete market for contingent claims.]

Economic & Political Weekly (EPW), August 27, 2011, vol xlvi no 35


subject in financial economics. What might we learn about bubbles from experiments? Smith, Suchanek and Williams (1988) showed that bubbles could arise in simple asset markets with inexperienced subjects, and tended to diminish with experience. Lei, Noussair and Plott (2001) showed that bubbles could arise even when investors could not engage in speculative trades. They suggested that bubbles could arise from errors in decision-making rather than from "bigger fool" beliefs, which require a lack of common knowledge of rationality.

An experiment by Hirota and Sunder (2007) explored the possibility that the fundamental economic model of valuation – discounted cash flow (DCF) – might become difficult to apply in markets populated by short-term traders. When a security matured beyond an investor's horizon, the personal DCF included the expected sale price at that horizon. That sale price depended on other investors' expectations of the DCF beyond the investor's own horizon. Applying DCF therefore involved backward induction from the maturity of the security through the expectations and valuations of future "generations" of investors. Bubbles could arise even with rational investors who made no errors if they could not carry out this backward induction. In 11 experimental sessions, Hirota and Sunder found that bubbles arose consistently when markets were populated by investors with short-term investment horizons, and did not arise with long-term investors. The DCF valuation model makes heroic assumptions about the knowledge necessary to do backward induction. Even if investors are rational and make no mistakes, it is unlikely that they will have the common knowledge necessary for the price to equal the fundamental valuation in a market populated by limited-horizon investors. Not surprisingly, the pricing of new-technology, high-growth and high-risk equities is more susceptible to bubbles. In such circumstances, without common knowledge of higher-order beliefs, testing theories of valuation becomes problematic.
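The backward induction problem highlighted by Hirota and Sunder can be sketched as follows (the notation is ours, not from their paper). An investor with horizon $h$, holding a security that pays dividends $D_t$ and matures at $T > h$, values it as:

```latex
% Personal DCF of a short-horizon investor: the terminal term is the
% expected resale price, not a fundamental payoff.
P_0 \;=\; \sum_{t=1}^{h} \frac{E_0[D_t]}{(1+r)^t}
      \;+\; \frac{E_0[P_h]}{(1+r)^h}
% P_h is itself the next generation's personal DCF, which contains
% E_h[P_{2h}], and so on until maturity T. Anchoring P_0 to the
% fundamental value \sum_{t=1}^{T} E_0[D_t]/(1+r)^t therefore
% requires backward induction through every future generation's
% expectations, i.e., common knowledge of higher-order beliefs.
```

When that chain of expectations cannot be completed, the resale-price term is untethered from fundamentals, and prices can drift away from fundamental value without any errors or irrationality.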

This is only a thumbnail sketch of some experimental results on asset markets. We have not discussed many other important and interesting studies (for a more comprehensive survey, see Sunder's paper in Kagel and Roth 1995). With the experimental camera focused on information processing in asset markets, the theoretical line drawing has been filled in with details, shadows, colour – and warts. This finer-grained portrait of asset markets confirms the rough outline of the extant theory. But it is considerably more complex, and it provides guidance and challenges for further theoretical investigations of the interplay of information in asset markets. It is a unique experience to watch trading in an experimental asset market. You know all the information, parameters and alternative theoretical predictions. Yet, what you see often surprises you, forcing you to rethink and gain new insights into how these markets work. Experimental asset markets are our Lego sets. Playing with them produces new ideas and helps us sort the good ideas from the bad ones. This section should have given the reader a flavour of the experimental approach to macroeconomics and finance. As stated in the introduction, the objective of this survey is not to give an exhaustive review of the literature but to acquaint an Indian audience with the broad approach.

Another area that has been prolific in terms of experimental applications is the area of public economics. Here, experimenters have particularly favoured the provision of public goods. Part 4 of the survey takes an overview of this area.

