#21 - Behind The Cloud: Demystifying AI in Asset Management: Is It Really a Black Box? (2/6)

Human Decision-Making vs. AI: Transparency and Bias in Investing

October 2024

One of the central concerns in AI-led investing is the comparison between AI decision-making and human decision-making. While AI is often viewed as a “black box,” human decision-making, driven by experience and gut feeling, is not always as transparent or free from bias as we might think. In this chapter, we’ll explore how humans and AI process information, what biases come into play, and how each approach offers its own strengths and weaknesses in asset management.

How Humans Make Investment Decisions

Human decision-making is a complex process, involving both rational analysis and emotional intuition. In asset management, a portfolio manager might analyze market trends, financial reports, or economic forecasts to make decisions. However, much of the final decision-making process is influenced by subjective factors—experience, personal bias, and even gut feeling.

While it might seem that humans have more control over their decisions than AI, studies show that many human decisions are shaped by cognitive biases and are not always rational. Factors like overconfidence, herd behavior, or loss aversion can cloud judgment, leading to suboptimal investment choices. Decades of behavioral-finance research document how systematically these biases distort judgment, particularly in uncertain environments like financial markets.

The Role of Bias in Human Decisions

Bias is a well-documented factor in human decision-making. Some of the most common biases that affect investment decisions include:

  • Overconfidence Bias: Investors tend to overestimate their knowledge or ability to predict market movements, leading them to take on excessive risks.
  • Confirmation Bias: People tend to favor information that confirms their pre-existing beliefs, ignoring data that contradicts their viewpoint.
  • Herding: Investors often follow the behavior of others, assuming that the majority must be making the right choice, even when the data says otherwise.
  • Loss Aversion: People are more sensitive to potential losses than they are to equivalent gains, which can lead to overly conservative investment decisions.
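
To make the asymmetry of loss aversion concrete: in prospect theory (Kahneman and Tversky), the subjective value of an outcome x is often modeled as

$$
v(x) =
\begin{cases}
x^{\alpha}, & x \ge 0 \\
-\lambda\,(-x)^{\alpha}, & x < 0
\end{cases}
$$

with typical empirical estimates of α ≈ 0.88 and λ ≈ 2.25, meaning a loss is felt more than twice as strongly as a gain of the same size.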

These biases are part of human nature and difficult to eliminate. Even experienced fund managers can be swayed by emotional factors when making decisions.

The Human Brain and Data Processing

The human brain processes information in a highly sophisticated but often subjective manner. Unlike AI, which can process huge datasets in parallel, humans work with a limited set of data at any given time. Cognitive psychology research indicates that human brains tend to focus on a small number of key variables, often overlooking complex patterns in favor of simpler explanations.

While human decision-making might feel more explainable than AI, it is still not entirely transparent. Much of the processing happens at a subconscious level, and decision-makers often struggle to explain why they made a particular choice beyond vague feelings or past experiences. Cognitive research suggests that a large share of our decisions is driven by intuition and subconscious processing rather than by rational, explainable reasoning.

How AI Makes Investment Decisions

AI, on the other hand, approaches decision-making from a data-driven perspective. AI models are trained to analyze massive datasets, detect patterns, and make predictions based purely on the input data. Unlike humans, AI does not experience emotional biases; its outputs are derived entirely from statistical patterns in the data it was trained on.

However, this doesn’t mean AI is free from bias altogether. The quality of the input data and the way models are trained can introduce biases. For example, if the historical data used to train an AI model is skewed or incomplete, the resulting predictions may also be biased. This makes data quality a critical factor in AI decision-making. Human oversight is essential here, as humans are responsible for selecting the datasets and ensuring that the model is learning from accurate and relevant information.
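
To make this concrete, here is a minimal sketch, in Python, of the kind of pre-training data validation a team might run. The column names ('timestamp', 'returns') and thresholds are hypothetical placeholders, not a description of any specific production pipeline:

```python
import pandas as pd
import numpy as np

def validate_training_data(df: pd.DataFrame) -> list[str]:
    """Run basic quality checks on a market-data set before model training.

    Column names ('timestamp', 'returns') are hypothetical placeholders.
    """
    issues = []

    # Missing values can silently skew what the model learns.
    if df.isna().any().any():
        issues.append(f"missing values in: {list(df.columns[df.isna().any()])}")

    # Duplicate or unordered timestamps hint at a broken data pipeline.
    if df["timestamp"].duplicated().any():
        issues.append("duplicate timestamps found")
    if not df["timestamp"].is_monotonic_increasing:
        issues.append("timestamps are not sorted")

    # Extreme outliers may be data errors rather than real market moves.
    z = np.abs((df["returns"] - df["returns"].mean()) / df["returns"].std())
    if (z > 10).any():
        issues.append(f"{int((z > 10).sum())} returns deviate by >10 std devs")

    return issues
```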

Transparency in AI Decision-Making

AI is often perceived as a “black box” because the process behind its predictions is complex. However, this complexity doesn’t mean the process is inherently opaque. AI models, particularly simpler ones, can often provide explainability by highlighting the key data points that led to a decision. In more complex models, such as neural networks, researchers are developing new tools to increase transparency, such as Explainable AI (XAI) techniques, which help interpret model outputs.
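
As an illustration, one widely used model-agnostic technique is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy degrades. A minimal sketch using scikit-learn on synthetic data (the model and features are stand-ins, not the setup of any particular fund):

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.datasets import make_regression

# Toy stand-in for a trained market model; features and targets are synthetic.
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# predictive accuracy drops -- a simple, model-agnostic explainability signal.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```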

Moreover, once trained, AI models typically operate deterministically: given the same inputs, they produce the same output. (Training itself often involves randomness, which is why practitioners fix random seeds to keep results reproducible.) This predictability offers a level of transparency that is not always possible with human decision-making, which can vary with mood, emotion, or outside factors that are hard to quantify.
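
A small sketch of this property, using a simple linear model as a stand-in: with seeds fixed, the whole pipeline is reproducible, and inference on identical inputs always returns identical outputs.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(seed=42)   # fixed seed => reproducible data
X = rng.normal(size=(100, 3))
y = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.01, size=100)

model = Ridge(alpha=1.0).fit(X, y)

# Inference is deterministic: identical inputs always yield identical outputs.
x_new = np.array([[0.1, 0.2, 0.3]])
assert (model.predict(x_new) == model.predict(x_new)).all()
```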

The Role of Human Oversight in AI

While AI systems can process enormous amounts of data quickly, they still require human oversight to ensure that the inputs and outputs make sense. Humans play a crucial role in:

  • Selecting and curating data: Ensuring that the data fed into AI models is accurate and relevant.
  • Setting the strategy: Defining the overall objectives and parameters within which the AI model operates.
  • Monitoring for bias: Regularly reviewing AI models to detect and address any potential biases that may arise from the data (a short monitoring sketch follows this list).
  • Interpreting results: Using human judgment to make sense of AI-generated insights and apply them within the broader context of market conditions.
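
As one concrete example of what “monitoring for bias” can mean in practice, a team might compare the live distribution of each input feature against its training distribution and flag drift. A minimal sketch using a two-sample Kolmogorov–Smirnov test (the significance threshold is an illustrative assumption):

```python
import numpy as np
from scipy import stats

def detect_feature_drift(train_col: np.ndarray, live_col: np.ndarray,
                         alpha: float = 0.01) -> bool:
    """Flag a feature whose live distribution has drifted from training data.

    Uses a two-sample Kolmogorov-Smirnov test; the threshold is an assumption.
    """
    result = stats.ks_2samp(train_col, live_col)
    return result.pvalue < alpha  # True => shift worth investigating
```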

AI vs. Human: How Decisions Differ

The key difference between human and AI decision-making is consistency. While humans are influenced by a variety of emotional and psychological factors, AI makes decisions purely based on data. This means AI can maintain consistent behavior, particularly in environments where large datasets are available and decisions need to be made quickly.

For example, in high-frequency trading, AI can analyze market data in real time, execute trades within milliseconds, and stick to a predefined strategy without deviating due to emotional impulses. Humans, on the other hand, may hesitate or react based on their mood or external market factors.
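
To make the contrast concrete, here is a deliberately oversimplified sketch of a predefined trading rule. The signal and thresholds are hypothetical; the point is that identical inputs always produce identical actions:

```python
from dataclasses import dataclass

@dataclass
class MomentumRule:
    """A toy predefined trading rule; the thresholds are illustrative only."""
    entry_threshold: float = 0.02   # buy if short-term return exceeds +2%
    exit_threshold: float = -0.01   # sell if it falls below -1%

    def decide(self, short_term_return: float) -> str:
        # The same input always yields the same action -- no hesitation,
        # no mood, no second-guessing.
        if short_term_return > self.entry_threshold:
            return "BUY"
        if short_term_return < self.exit_threshold:
            return "SELL"
        return "HOLD"

rule = MomentumRule()
print(rule.decide(0.03))   # BUY
print(rule.decide(0.03))   # BUY (identical input, identical decision)
```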

Is Human Decision-Making Really More Transparent?

While humans might feel more in control of their decision-making process, much of it happens at a subconscious level. Cognitive research has shown that humans often rely on “gut feeling” or intuition, which can be difficult to explain. In contrast, AI is based entirely on data and mathematical models, which—while complex—are rooted in logic.

That said, AI’s perceived lack of transparency comes down to the difficulty of explaining how advanced algorithms arrive at their outputs, particularly in deep learning models. But as the technology evolves, tools for explaining AI decisions are becoming more accessible, providing asset managers with the insights they need to trust AI-driven strategies.

AI at Omphalos Fund: Removing Human Emotion from Investment Decisions

At Omphalos Fund, the role of humans in the investment process differs from traditional approaches. Humans are not making final investment decisions. Instead, the AI system is fully responsible for analyzing data, generating investment signals, and executing trades—all without the influence of human emotion.

Humans play an essential role in the development of the platform, the creation of the Time Series Forecasting model, and ensuring the quality of the input data. By carefully selecting the right datasets from reliable sources, humans control the quality of the input, which is critical to the AI’s performance.
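
The Omphalos model itself is proprietary, so purely as an illustration of the general shape of a time-series forecasting setup, here is a minimal autoregressive sketch on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic daily-return series standing in for real market data. Pure noise
# carries no signal, so this only demonstrates the mechanics of the setup.
rng = np.random.default_rng(0)
returns = rng.normal(scale=0.01, size=1000)

# Autoregressive framing: predict tomorrow's return from the last 5 returns.
lags = 5
X = np.array([returns[i:i + lags] for i in range(len(returns) - lags)])
y = returns[lags:]

model = LinearRegression().fit(X[:-200], y[:-200])   # train on older data
forecast = model.predict(X[-200:])                    # evaluate on newer data
print(f"out-of-sample correlation: {np.corrcoef(forecast, y[-200:])[0, 1]:.3f}")
```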

Once the AI model is in place, human oversight focuses on monitoring the consistency of the AI system’s performance. If the model performs better or worse than expected, humans intervene to check its behavior, ensuring that it functions as intended and remains within set parameters. This approach allows for a blend of AI-driven decision-making with human oversight focused on platform integrity rather than emotional judgment in investment choices.
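
A minimal sketch of what such a consistency check might look like. The Sharpe-ratio framing and the tolerance are assumptions for illustration; a real system would derive its bounds from backtests and statistical confidence intervals:

```python
import numpy as np

def performance_within_bounds(daily_pnl: np.ndarray,
                              expected_sharpe: float,
                              tolerance: float = 1.0) -> bool:
    """Check whether realized performance stays near its expected range.

    'expected_sharpe' and 'tolerance' are illustrative parameters; a real
    system would derive them from backtests and confidence bands.
    """
    realized = np.mean(daily_pnl) / np.std(daily_pnl) * np.sqrt(252)
    # Flag deviations in either direction: performing *better* than expected
    # can signal a problem (e.g., a data error) just as much as underperforming.
    return abs(realized - expected_sharpe) <= tolerance
```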

Conclusion: A Balanced Approach to Decision-Making

Neither AI nor human decision-making is free from flaws. Humans are prone to biases and subjective judgment, while AI can suffer from data-driven biases if not carefully managed. However, AI offers the advantage of consistency and data-driven rigor, while humans bring valuable experience and context to the decision-making process.

A major advantage of AI systems is their scalability—AI can simultaneously analyze and trade in thousands of markets, something humans cannot achieve, even with large teams of traders. Companies might hire thousands of people to manage Over-the-Counter (OTC) trades, but scaling human decision-making to this level is inefficient and impractical.

At Omphalos, AI takes full control over the investment process while humans ensure the quality and consistency of the system. This combination creates a robust, efficient, and transparent system designed to make unbiased investment decisions at scale.

In the coming weeks, we will explore the differences between AI decision-making and human “gut feeling,” the safeguards in place to prevent AI bias and overfitting, and how AI can become more transparent in the future. The goal is to demystify AI in asset management and show that the “black box” perception is more myth than reality.

In the next chapter, we will explore how AI systems generate investment signals and how these signals are integrated into systematic and explainable trading strategies. Stay tuned for Week 3!

Thank you for following us. We will continue to address relevant topics around AI in Asset Management.

If you missed our earlier editions of “Behind The Cloud”, please check out our BLOG.

© The Omphalos AI Research Team October 2024

If you would like to use our content please contact press@omphalosfund.com