Risk in an AGI World

#79 - Behind The Cloud: Risk in an AGI World (7/9)

March 2026 

This is the 7th chapter of the 10th 'Behind The Cloud' series. 

AGI for investments – How It Will Look and How It Will Change Markets

In this series, we explore how artificial intelligence in investing could evolve beyond today’s narrow, task-specific applications toward systems that resemble Artificial General Intelligence (AGI) in function - not in science-fiction form, but as autonomous, adaptive, and coordinated decision-making systems operating across assets, data domains, and market regimes. What follows is not a prediction in the strict sense. The future rarely unfolds exactly as imagined. Yet some variant of the trajectory described in this series is highly likely to materialize, and every plausible variant would have a profound impact on capital markets - on how investments are managed, how risks are understood, and how market behavior itself evolves. 

Chapter 7

Risk in an AGI World

Risk management has always lagged behind innovation in investing. As strategies grow more complex, risk models often remain rooted in simplified assumptions about distributions, correlations, and stability. The emergence of AGI-like investment systems widens this gap even further. This chapter examines why traditional risk frameworks struggle in an AGI-driven environment. When decision-making becomes autonomous, continuous, and adaptive, risk is no longer a static constraint imposed from the outside. It becomes an integral part of the system’s intelligence - learned, updated, and acted upon in real time.

The message is straightforward: AGI does not eliminate risk. It changes its nature. And it forces risk management to evolve from a control function into a co-evolving system behavior.

The Old World: Risk as a Static Boundary

Classical risk management is built around a familiar architecture. Strategy is developed first. Risk is applied later. Limits are defined externally: maximum VaR, maximum leverage, maximum drawdown, maximum position size. The job of risk is to enforce discipline after decisions are made.

This architecture worked reasonably well in an era where strategies were slower, portfolios changed less frequently, and humans remained the final decision-makers. In that world, risk could act as a gatekeeper.

But AGI-like systems do not make decisions in discrete steps. They make them continuously. The decision and the risk implication occur at the same time. And the environment can change faster than a monthly risk report can describe.

This breaks the logic of static boundaries. In an AGI world, risk cannot be something you “apply.” It must be something the system embodies.

Why Classical Risk Measures Become Misleading

Most classical risk metrics assume that tomorrow is similar enough to yesterday that historical distributions remain informative. Even when risk teams acknowledge fat tails and stress events, they often operationalize risk through simplified surrogates: rolling vol, historical VaR, fixed correlation matrices, or scenario sets calibrated to known crises.

The issue is not that these tools are useless. The issue is that they become increasingly misleading as markets and strategies become more adaptive. When decision-making is autonomous and the market is populated by similar systems, distributions can shift endogenously. Liquidity can vanish because models withdraw it. Correlations can converge because systems de-risk simultaneously. Volatility can cluster because feedback loops amplify initial moves. In such environments, backward-looking risk can become a comfort blanket. It describes the past precisely while missing the way the present is changing.

VaR Fails Harder When Feedback Matters

Value-at-Risk is a useful example. VaR was never designed to capture the severity of tail events. It quantifies “typical” losses under assumed conditions. In a feedback-driven environment, those assumed conditions are the first thing to break.

In an AGI market, VaR fails for three structural reasons:

  1. VaR is distribution-dependent, and the distribution is no longer stable. Systems change behavior, and the market changes with them.
  2. VaR is typically calibrated on periods of normal liquidity. But liquidity is now conditional and can disappear rapidly. A strategy that appears low-risk on paper can become untradeable in practice.
  3. VaR is blind to synchronization. If many systems use similar risk triggers, the probability of simultaneous de-risking rises. VaR measures losses assuming independent shocks, while the market increasingly produces dependent shocks.
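The first failure mode - a distribution that stops being stable - can be sketched in a few lines. The simulation below calibrates a historical 95% VaR on a calm sample and then checks how often that threshold is breached after a volatility regime shift. All parameters (1% calm vol, 3% stressed vol, the small negative drift) are illustrative assumptions, not estimates from any real market.

```python
import numpy as np

rng = np.random.default_rng(0)

# Calm regime: the returns the risk model is calibrated on (~1% daily vol).
calm = rng.normal(0.0, 0.01, 1000)

# 95% historical VaR from the calm sample, reported as a positive loss.
var_95 = -np.percentile(calm, 5)

# Regime shift: volatility triples and a negative drift appears,
# e.g. because many systems de-risk at once.
stressed = rng.normal(-0.002, 0.03, 250)

# How often do stressed losses exceed the calibrated VaR threshold?
breach_rate = np.mean(-stressed > var_95)

print(f"calibrated 95% VaR: {var_95:.3%}")
print(f"breach rate under stress: {breach_rate:.1%} (a valid model would show ~5%)")
```

A 95% VaR should be breached on roughly 5% of days; under the shifted distribution the breach rate is several times higher, which is exactly the sense in which the number becomes misleading rather than merely imprecise.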

The result is a paradox: the better models are at measuring risk, the more likely they are to act similarly - and the more risk becomes systemic rather than idiosyncratic.

It is worth noting that more advanced tail-aware metrics exist, such as Conditional Value-at-Risk (CVaR / Expected Shortfall), which attempt to address one of VaR’s key shortcomings by focusing on losses beyond the threshold rather than stopping at it. However, in many practical implementations these measures still inherit the same fragility if they rely on Gaussian distribution assumptions or stable historical calibration. In other words, the metric may become more sophisticated, while the underlying worldview remains dangerously simplified.

Adaptive Risk: From Limits to Behavior
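The VaR/CVaR distinction can be made concrete on a synthetic fat-tailed sample. The sketch below uses Student-t returns (an assumption chosen purely to produce fat tails) and shows that Expected Shortfall reports a materially larger loss than VaR at the same confidence level - while both are still estimated from the same history, and so inherit the same calibration fragility.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fat-tailed return sample: Student-t with 3 degrees of freedom,
# rescaled to roughly 1% daily volatility.
returns = 0.01 * rng.standard_t(df=3, size=100_000) / np.sqrt(3)

alpha = 0.95
losses = -returns

# VaR: the loss threshold exceeded (1 - alpha) of the time.
var = np.percentile(losses, 100 * alpha)

# CVaR / Expected Shortfall: the average loss *beyond* that threshold.
cvar = losses[losses > var].mean()

print(f"95% VaR:  {var:.3%}")
print(f"95% CVaR: {cvar:.3%}")  # materially larger when tails are fat
```

The gap between the two numbers is the tail risk that VaR, by construction, never looks at; but note that both figures would shift if the underlying distribution shifted, which is the chapter's central caveat.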

In an AGI-oriented framework, the core shift is not to find a better number. It is to change the structure of risk management. Adaptive risk differs from static risk limits in one central way: it treats risk as a dynamic state that is updated continuously, not as a fixed boundary that is checked periodically.

This means risk is managed by behavior:

  • exposure is scaled as uncertainty rises,
  • capital is reallocated when correlations converge,
  • components are quarantined when they drift outside expected behavior,
  • and the portfolio becomes more conservative not when losses occur, but when the conditions that create losses become more likely.

Risk management becomes less like a brake and more like a nervous system.
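The first of these behaviors - scaling exposure as uncertainty rises - can be sketched with a simple volatility-targeting rule. This is a deliberately reduced illustration: the target volatility, window length, and leverage cap are arbitrary assumptions, and a real adaptive system would blend many more signals (liquidity, dispersion, crowding, regime stability) rather than realized volatility alone.

```python
import numpy as np

def adaptive_exposure(returns, target_vol=0.10, window=20,
                      max_leverage=1.0, trading_days=252):
    """Illustrative rule: exposure = target vol / recent realized vol,
    capped at max_leverage, so exposure falls as uncertainty rises."""
    returns = np.asarray(returns)
    exposures = np.full(len(returns), max_leverage)
    for t in range(window, len(returns)):
        realized = returns[t - window:t].std() * np.sqrt(trading_days)
        if realized > 0:
            exposures[t] = min(max_leverage, target_vol / realized)
    return exposures

# A calm regime followed by a turbulent one: exposure is cut
# as realized volatility rises, before losses force the issue.
rng = np.random.default_rng(2)
rets = np.concatenate([rng.normal(0, 0.005, 60),   # calm (~8% ann. vol)
                       rng.normal(0, 0.03, 60)])   # turbulent (~48% ann. vol)
exp_path = adaptive_exposure(rets)
print(f"avg exposure, calm:      {exp_path[20:60].mean():.2f}")
print(f"avg exposure, turbulent: {exp_path[90:].mean():.2f}")
```

The key design point is that the rule reacts to the conditions that create losses (rising volatility), not to the losses themselves - the distinction the bullet list above draws.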

A further implication is informational. Because the system is composed of hundreds or thousands of agents observing different data sources and market microstructures, risk decisions can be informed by multi-dimensional signals (changes in liquidity, dispersion, regime stability, crowding indicators, and other behavioral diagnostics) that are typically invisible to traditional risk frameworks built around a small set of aggregated metrics. In an AGI-like architecture, risk management is not only faster; it is based on a richer and more granular view of reality.

Drawdowns as Signals, Not Endpoints

Traditional frameworks treat drawdowns as outcomes to be tolerated, explained, or minimized. In an AGI world, drawdowns become a signal, not just of market movement, but of model-environment mismatch.

The essential question shifts from “Did we lose money?” to “What does this loss imply about our assumptions?”

A drawdown can indicate many things: regime shifts, liquidity deterioration, crowding, correlation compression, or a decaying edge. The point is not to assign blame. It is to treat the drawdown as feedback and update behavior accordingly.
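One way to operationalize "drawdown as feedback" is to compare the current drawdown against the strategy's own drawdown history and flag when it exceeds what that history would suggest - a crude proxy for model-environment mismatch. The threshold quantile and the synthetic return stream below are illustrative assumptions only.

```python
import numpy as np

def drawdown_signal(returns, threshold_quantile=0.99):
    """Flag when the current drawdown exceeds the given quantile of
    drawdowns observed so far - treating the loss as a signal that
    assumptions may no longer hold, not merely as an outcome."""
    equity = np.cumprod(1 + np.asarray(returns))
    peak = np.maximum.accumulate(equity)
    drawdowns = 1 - equity / peak
    current = drawdowns[-1]
    historical = np.quantile(drawdowns[:-1], threshold_quantile)
    return current, bool(current > historical)

# A long normal regime, then five consecutive 4% down days:
# the resulting drawdown sits outside the strategy's own history.
rng = np.random.default_rng(3)
history = rng.normal(0.0005, 0.01, 500)
shock = np.full(5, -0.04)
current_dd, mismatch = drawdown_signal(np.concatenate([history, shock]))
print(f"current drawdown: {current_dd:.1%}, review triggered: {mismatch}")
```

In this framing the trigger does not assign blame; it prompts the question the section poses - what does this loss imply about our assumptions?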

This is why survival becomes the primary objective. AGI-like systems can iterate quickly, but only if they remain alive long enough to learn. Once capital is impaired beyond recovery, learning stops.

The Endgame: Risk Co-Evolves with Strategy

When strategy decisions are autonomous and continuous, risk cannot remain a separate discipline. It must co-evolve with the system.

This is where the AGI framing becomes concrete. A system that learns must also learn its own vulnerability. It must reassess not only what the market might do, but what its own behavior might cause - including second-order effects created by other AI systems reacting simultaneously.

In this world, robust performance is inseparable from robust risk thinking. The quality of the system is defined not by how it performs when conditions are normal, but by how it behaves when conditions are unfamiliar.

Omphalos Perspective

At Omphalos, experience has shown that robust performance is inseparable from robust risk thinking. In an autonomous system, risk management is not a separate overlay. It is embedded into every layer: agent behavior, portfolio coordination, and allocation rules.

The aim is not to remove risk. It is to manage it as a living constraint: dynamic, context-aware, and conservative when uncertainty rises. This includes maintaining diversification where it matters most - especially in down markets - and ensuring that no single component can create a portfolio-level failure mode.

In an AGI world, the best systems will not be those that predict the most. They will be those that survive the longest, because survival is what makes adaptation possible.

Supporting Research & News

  • BIS research on AI’s impact on financial stability — highlights how advanced AI systems introduce new risk channels. (https://www.sciencedirect.com/science/article/pii/S1572308925001019?)
  • Bank of England AI risk warning — reinforces regulatory concern about autonomous systems misjudging or amplifying risk.


Next week we will publish the 8th chapter of this series: "The Human Role After AGI".

If you missed our former editions of "Behind The Cloud", please check out our BLOG.

Omphalos Fund won the "Funds Europe Awards 2025" in the category "European Thought Leader of the Year".

Omphalos Fund is nominated for "EuroHedge Awards 2025"

 

© The Omphalos AI Research Team - March 2026

If you would like to use our content please contact press@omphalosfund.com