This is the final post in an introduction to agent-based models of US equity markets. The first post provided a definition of a model and a brief overview of how economists use models to simplify reality. The second post introduced agent-based models, simulation, and how the two are related. We will conclude with an introduction to the two polar types of agents: zero-intelligence traders and learning agents.
Zero-intelligence versus learning agents
Gode and Sunder (1993) define a zero-intelligence (ZI) trader as one that “has no intelligence, does not seek or maximize profits, and does not observe, remember, or learn.” Farmer et al. (2005) offer a related description: “The model makes the simple assumption that agents place orders to buy or sell at random, subject to constraints imposed by current prices.” They go on to explain that their ZI traders do observe current prices, a deviation from the Gode and Sunder definition. The constraints are essentially dynamic bounds on limit order prices.
The agents in my Tick Pilot paper are ZI as characterized in Farmer et al. (2005): they observe the top of book (price and size) and place orders to buy or sell consistent with their pricing heuristic. The choice to buy or sell is random, subject to those pricing constraints.
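To make the idea concrete, here is a minimal sketch of a ZI order generator in Python. The function name, the tick size, and the choice of bounds are my own illustrative assumptions, not the specification from Farmer et al. (2005) or from the Tick Pilot paper; the point is only that the side is random and the limit price is drawn within bounds set by the current top of book.

```python
import random

def zi_order(best_bid, best_ask, tick=0.01, max_offset=10):
    """Toy zero-intelligence trader: random side, random limit price
    constrained by the observed top of book. Illustrative only."""
    side = random.choice(["buy", "sell"])
    if side == "buy":
        # a buy limit price at or somewhat below the best ask
        price = best_ask - tick * random.randint(0, max_offset)
    else:
        # a sell limit price at or somewhat above the best bid
        price = best_bid + tick * random.randint(0, max_offset)
    return side, round(price, 2)
```

Note that the agent observes prices (the top of book) but nothing else: no profit target, no memory, no learning.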
Learning agents, by contrast, can be characterized as follows:
- Agents interact with other agents and the environment.
- The agent has a goal or set of goals and can perceive the gap between its current state and desired state.
- The agent has a set of heuristics (rules of thumb) that map the current state into decisions. This is called the agent’s mental model.
- The agent’s mental model tracks which rules have helped it achieve its goals. Historically successful rules are used more often than less successful rules. Feedback from the environment causes the agent to learn over time.
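The characteristics above can be sketched in a few lines of Python. The class below is my own illustration (not taken from any of the cited papers): the agent’s mental model is a set of rules with fitness scores, historically successful rules are chosen more often, and feedback from the environment updates the scores.

```python
import random

class LearningAgent:
    """Toy learning agent: rules are functions mapping state -> action;
    each rule carries a fitness score that feedback adjusts over time."""

    def __init__(self, rules):
        self.rules = list(rules)
        self.fitness = [1.0] * len(rules)  # all rules start equally credible
        self.last = None                   # index of the rule used last

    def act(self, state):
        # pick a rule with probability proportional to its past success
        self.last = random.choices(range(len(self.rules)),
                                   weights=self.fitness)[0]
        return self.rules[self.last](state)

    def learn(self, reward):
        # environmental feedback reinforces (or weakens) the rule just used
        self.fitness[self.last] = max(0.01, self.fitness[self.last] + reward)
```

Over many act/learn cycles, rules that earn positive feedback accumulate fitness and are selected more often, which is the sense in which the agent “learns.”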
Holland and Miller (1991) define complex adaptive systems; properties 1–3 below make a system complex, and properties 4–5 make it adaptive:
1. A network of interacting agents.
2. The system exhibits dynamic behavior that emerges from the individual agent activities.
3. Aggregate behavior can be described without detailed knowledge of the individual agents.
4. Agent actions can be assigned a value (payoff, fitness, utility).
5. Each agent behaves so as to increase this value over time.
Beinhocker (2006) summarizes the evolutionary (learning) approach as: differentiate, select, amplify. On a computer, the evolutionary approach is typically implemented with genetic algorithms.
Wikipedia gets the final word: “In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection.”
There’s a lot going on in this definition. The links are helpful, especially to readers with a hard-science background. But I suspect it is all a bit of a mystery to those who never studied evolutionary biology, computer science, physics, etc. How would you explain zero-intelligence traders, learning agents, and genetic algorithms to an accountant, attorney, or MBA?
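One way to ground the vocabulary is a toy example. The sketch below is my own minimal genetic algorithm (not a production GA, and not from any of the cited papers) that evolves bitstrings toward a fitness function, using exactly the three operators from the definition: selection keeps the fitter half, crossover recombines parents, and mutation flips bits at random. Differentiate, select, amplify.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=50,
                      mutation_rate=0.02):
    """Toy GA: evolve a population of bitstrings to maximize `fitness`."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # selection: rank by fitness and keep the better half as parents
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # crossover + mutation: refill the population from the parents
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]  # one-point crossover
            children.append([bit ^ (random.random() < mutation_rate)
                             for bit in child])  # occasional bit flips
        pop = parents + children
    return max(pop, key=fitness)
```

Running `genetic_algorithm(sum)` maximizes the count of ones, so the population evolves toward all-ones bitstrings; no individual agent plans this outcome, it emerges from repeated selection and recombination.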
In an upcoming post, I will provide some specific examples of failed attempts to communicate with a wider audience about simulation and agent-based modeling applied to US equity markets.