#60 - Behind The Cloud: Fundamentals in Quant Investing (4/12)
Optimization and Overfitting – The Biggest Trap of Quantitative Investing
October 2025
Fundamentals of Quantitative Investments
In this series, the Omphalos AI Research Team wants to discuss the fundamental aspects of quantitative investing in depth. In particular, our series will not be a beautiful story of how to build the holy grail of investing, but rather a long list of pitfalls that can be encountered when building such systems. The list will not be exhaustive, but we will describe the pitfalls we cover in enough depth that their significance is clear to everyone. And importantly, this will not be a purely theoretical discussion. We will provide a practical view on all of these aspects — shaped by real-world lessons and, in many cases, by our own painful and sometimes even traumatic experiences in building and testing systematic investment strategies. These hard-earned lessons are precisely why Omphalos Fund has been designed as a resilient, modular, and diversified platform — built to avoid the traps that have undone so many before.
At Omphalos Fund, we have always been clear about one thing: artificial intelligence is not magic. It is a powerful tool, but its value depends entirely on the system it operates in and the rules that guide it. When applied to asset management, this means that even the most advanced AI can only be effective if it is built on a deep understanding of how markets work — with all their complexities, inefficiencies, and risks.
That is why our latest Behind the Cloud white paper takes a step back from the technology itself. Instead, it examines the foundations of quantitative investing — the real-world mechanics, pitfalls, and paradoxes that shape investment strategies. The aim is not to present a flawless “holy grail” of investing, but to show the challenges and traps that every systematic investor must navigate.
We believe this is essential for anyone working with AI in finance. Without appreciating the underlying business of investing, AI models risk becoming black boxes that look impressive in theory but fail in practice. By shedding light on the subtle but critical issues in quantitative investment design — from overfitting to diversification, from the illusion of normal distributions to the reality of risk of ruin — we provide context for why our platform is built the way it is: modular, transparent, and resilient.
The goal of this white paper is simple:
To help readers understand that using AI in asset management is not only about smarter algorithms — it’s about building systems that are grounded in strong investment fundamentals and designed to survive the real world of markets.
Chapter 4
Optimization and Overfitting – The Biggest Trap of Quantitative Investing
Optimization is at once the quant’s favorite tool and their most dangerous trap. The appeal is obvious: tweak the parameters, maximize the Sharpe ratio, and watch the equity curve rise smoothly. But what looks like refinement is often an illusion – a model learning the peculiarities of the optimization period rather than patterns that endure into the future.
The danger lies in overfitting. A perfectly optimized model is usually one that has “memorized” its training data. It appears flawless on paper but is fragile in practice. This is why so many quant strategies collapse: they were never robust, only finely tuned to a historical accident.
The critical distinction is between the optimization period and the test period.
Results on the optimization period are always good, because that’s where the parameters were tuned to perform. But success there means nothing; it’s like acing an exam after seeing the answers. The true test comes on the test period – unseen data. And even then, if we repeat the process enough times, trying hundreds of parameter combinations, we can eventually get lucky and find a “good” result on the test data purely by chance.
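To make the mechanics concrete, below is a minimal, self-contained Python sketch. It is entirely hypothetical: the moving-average crossover, parameter ranges, and helper names are our own illustration, not a real strategy. It brute-forces a crossover on synthetic noise with zero true edge; the best in-sample Sharpe ratio usually looks respectable, and the same parameters typically fall back toward zero on the test period.

```python
# Hypothetical illustration only: an exhaustive parameter search on synthetic
# noise. Every return stream below has zero true edge, so any "performance"
# found is pure overfitting.
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.01, 2000)         # synthetic daily returns, no signal
train, test = returns[:1500], returns[1500:]  # optimization period vs. test period

def ma_signal(r, fast, slow):
    """Daily P&L of a long/flat moving-average crossover on cumulative prices."""
    price = np.cumsum(r)
    f = np.convolve(price, np.ones(fast) / fast, mode="valid")
    s = np.convolve(price, np.ones(slow) / slow, mode="valid")
    n = len(s)                                # the slow MA is the shorter series
    pos = (f[-n:] > s[-n:]).astype(float)     # 1 = long, 0 = flat
    return pos[:-1] * r[-n + 1:]              # yesterday's position, today's return

def sharpe(r):
    s = r.std()
    return -np.inf if s == 0 else np.sqrt(252) * r.mean() / s

# Search well over a thousand (fast, slow) combinations on the training data.
best = max(((sharpe(ma_signal(train, f, s)), f, s)
            for f in range(2, 30) for s in range(f + 5, 120, 2)),
           key=lambda t: t[0])

print(f"best in-sample Sharpe : {best[0]:.2f}  (fast={best[1]}, slow={best[2]})")
print(f"same params, test set : {sharpe(ma_signal(test, best[1], best[2])):.2f}")
# Typical outcome: a respectable-looking in-sample Sharpe that shrinks toward
# zero (or turns negative) on unseen data.
```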
The Illusion of Perfection
Bailey et al. (2014) demonstrated that the majority of published backtests in finance are likely false discoveries. Why? Because repeated experimentation – testing hundreds or even thousands of parameter combinations – guarantees that some will “look good” simply by chance. Statistically, the more you search, the more likely you are to stumble upon noise masquerading as signal.
Harvey & Liu (2015) pushed this further, showing that the sheer volume of financial research creates a “p-hacking” problem. With so many academics and practitioners running so many tests, it is inevitable that strategies appear statistically significant when in reality they are spurious.
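The effect is easy to reproduce. Here is a hypothetical simulation (arbitrary numbers, our own naming): thousands of return streams with zero true edge, where the best observed Sharpe ratio grows steadily with the number of trials.

```python
# Hypothetical simulation: the best of N skill-free "strategies" as N grows.
import numpy as np

rng = np.random.default_rng(0)
days, trials = 252, 10_000                           # one year of daily returns

noise = rng.normal(0.0, 0.01, size=(trials, days))   # zero true edge everywhere
sharpes = np.sqrt(252) * noise.mean(axis=1) / noise.std(axis=1)

for n in (1, 10, 100, 1_000, 10_000):
    print(f"best Sharpe among {n:>6} random strategies: {sharpes[:n].max():.2f}")
# The maximum climbs steadily (roughly like sqrt(2 ln N)) even though every
# single strategy is noise: search long enough and a "discovery" is guaranteed.
```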
López de Prado (2018) adds a practical warning: a backtest with impressive performance metrics should almost always be treated with suspicion. He argues that robust strategies do not look “too good to be true” – because real edges are modest, noisy, and irregular. The smoother and more perfect a backtest appears, the higher the likelihood it is an artifact of overfitting.
For beginners, overfitting might mean curve-fitting a moving average crossover until the line looks perfect. For professionals, it can be the misuse of machine learning: feeding in vast amounts of data and adjusting hyperparameters until predictive accuracy appears flawless. In both cases, the result is the same: a model that explains the past brilliantly but fails to generalize.
The paradox is unavoidable: the more time and data you devote to improving a model, the more likely it becomes that you are fitting noise rather than learning structure. This is not just a technical risk – it is the defining fragility of quantitative investing.
And even when a model appears sound — with a well-defined optimization period, a clean test set, and statistically convincing p-values — uncertainty never disappears. Because in the real world, we trade the right side of the chart: the part that hasn’t happened yet. The data we act on is always unseen, shaped by future market conditions that no model, no matter how refined, can fully anticipate. Robustness, not perfection, is the only true defense.
Research Spotlight
- Bailey et al. (2014): Showed that most published backtests are likely false discoveries due to multiple testing bias.
- Harvey & Liu (2015): Quantified the risk of “p-hacking” in finance, explaining why so many strategies look significant in sample but vanish out of sample.
- López de Prado (2018): Advocated for robust validation methods such as combinatorial cross-validation (sketched below) to detect and avoid overfitting, stressing that “if a backtest looks too good, it probably is.”
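To illustrate the combinatorial idea, here is a minimal sketch of how such train/test splits can be generated. It is a simplified version of our own; the purging and embargo steps that López de Prado pairs with this method are omitted.

```python
# Simplified sketch of combinatorial cross-validation splits (purging and
# embargo, which the full method requires for overlapping labels, are omitted).
from itertools import combinations
import numpy as np

def combinatorial_splits(n_samples, n_groups=6, n_test_groups=2):
    """Yield (train_idx, test_idx) pairs, one per combination of test groups."""
    groups = np.array_split(np.arange(n_samples), n_groups)
    for combo in combinations(range(n_groups), n_test_groups):
        test_idx = np.concatenate([groups[g] for g in combo])
        train_idx = np.concatenate([groups[g] for g in range(n_groups)
                                    if g not in combo])
        yield train_idx, test_idx

# C(6, 2) = 15 distinct train/test splits instead of a single lucky one.
print(len(list(combinatorial_splits(n_samples=1000))))   # -> 15
```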
 
Omphalos Perspective
At Omphalos Fund, we treat optimization with deep suspicion. Every trading agent is designed to prove robustness, not perfection.
Instead of asking, “How good can we make this backtest look?” we ask:
- Does the strategy survive parameter changes, or does performance collapse when we shift a window by just a few days? (A minimal version of this check is sketched after this list.)
- Is the result consistent across different market regimes, or does it only work in one historical period?
- How many degrees of freedom were used – and could the same results emerge simply from noise?
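As a minimal illustration of the first question, the sketch below perturbs a strategy’s parameters and checks whether performance degrades gracefully. Everything here is a hypothetical placeholder: toy_sharpe stands in for a real backtest, and the thresholds are arbitrary.

```python
# Hypothetical parameter-perturbation check. "toy_sharpe" is a smooth, made-up
# performance surface standing in for a real backtest; swap in your own.
import numpy as np

def toy_sharpe(fast, slow):
    # Illustration only: peaks at (fast=10, slow=50) and decays gently.
    return 1.5 * np.exp(-((fast - 10) ** 2 + (slow - 50) ** 2) / 200.0)

def neighborhood(sharpe_fn, fast, slow, radius=3):
    """Sharpe ratios on a small grid around the chosen (fast, slow) parameters."""
    return np.array([[sharpe_fn(f, s)
                      for s in range(slow - radius, slow + radius + 1)]
                     for f in range(fast - radius, fast + radius + 1)])

def looks_robust(grid, keep=0.8):
    """Heuristic: do most neighbors retain at least `keep` of the peak Sharpe?"""
    peak = grid.max()
    return bool(peak > 0 and (grid >= keep * peak).mean() > 0.5)

grid = neighborhood(toy_sharpe, fast=10, slow=50)
print(f"peak Sharpe {grid.max():.2f}, robust: {looks_robust(grid)}")
# An over-tuned system shows a single spike on this grid; a robust one, a plateau.
```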
 
We deliberately avoid “over-tuned” systems. Our philosophy is that a robust strategy should work even when the parameters are slightly wrong. If an agent’s success depends on precise fine-tuning, it is not robust enough for the real world.
A Case in Point: When Optimization Goes Too Far
In the early 2000s, many hedge funds promoted highly optimized technical trading systems. Their backtests were impeccable: equity curves with minimal drawdowns, high Sharpe ratios, and apparent resilience across multiple years.
But when deployed live, many collapsed within months. Why? Because the “perfect” strategies had simply memorized the past. They were trained to exploit quirks of historical data – quirks that never repeated. The more optimized the system, the more fragile it was in reality.
It doesn’t take 100 parameters to fall into this trap. Sometimes just three to five variables, adjusted enough times, are sufficient to “explain” everything in history and predict nothing about the future.
Closing Thought
Optimization feels like science, but done recklessly it is little more than statistical storytelling. The truth is simple: the more perfect the backtest looks, the more likely it is a mirage.
At Omphalos Fund, we focus not on perfection but on resilience – systems that can survive imperfection, parameter drift, and regime changes. In finance, robustness beats beauty every time.
👉 In the next chapter, we’ll explore an even subtler trap: testing on testing periods. Even when investors avoid obvious overfitting, they can unknowingly contaminate their test data – creating the illusion of validation where none truly exists.
Stay tuned for Behind The Cloud, where we’ll continue to explore the frontiers of AI in finance and investing.
If you missed previous editions of “Behind The Cloud”, please check out our BLOG.
© The Omphalos AI Research Team – October 2025
If you would like to use our content please contact press@omphalosfund.com