AINL#012 Augmented Intelligence in Investment Management Newsletter

Welcome to the 012 Edition of the Newsletter on Augmented Intelligence in Investment Management (AINL). Every two weeks, we deliver five unique insights tailored to empower investment decision-makers. Our insights are carefully curated by a seasoned team of market specialists. Unbiased, actionable and practical. They will help you navigate through the noise.

 


AINL#012 SYNTHESIS


 

What do these recent developments mean for investment decision-makers?

 

1. AI Leverage Creates Asymmetry—Target High-HAILR Opportunities

From Article 1 and Article 3

AI is not equally productive for all users—invest where smart humans are paired with smart machines.
AI enables disproportionately large outputs from minimal human input in high-functioning systems. Investment professionals should:

  • Prioritize sectors and firms with high Human-to-AI Leverage Ratios (HAILR)—those where a small team equipped with AI drives significant economic value (e.g., software, design, finance, biotech).
  • Assess AI maturity not just by adoption but by integration depth—focus on businesses where AI amplifies human decision quality rather than replacing it.
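As a rough, hypothetical sketch of the leverage idea (the paper's exact HAILR formula is not reproduced here; this assumes HAILR can be proxied as economic value produced per unit of human effort, with invented figures):

```python
# Hypothetical HAILR proxy: economic value generated per human hour.
# Assumption (not from the paper): HAILR = economic_value / human_hours.
# All figures below are invented purely for illustration.

def hailr(economic_value: float, human_hours: float) -> float:
    """Return economic value generated per human hour (hypothetical HAILR proxy)."""
    if human_hours <= 0:
        raise ValueError("human_hours must be positive")
    return economic_value / human_hours

# A small AI-equipped team vs. a larger traditional team (invented numbers):
ai_team = hailr(economic_value=5_000_000, human_hours=2_000)            # 2500.0
traditional_team = hailr(economic_value=5_000_000, human_hours=20_000)  # 250.0
print(ai_team / traditional_team)  # the AI-equipped team shows 10x leverage
```

On this stylized comparison, a ten-person AI-equipped team producing the same value as a hundred-person team would show a tenfold HAILR, which is the kind of asymmetry the section argues investors should target.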

 

2. Avoid AI-Driven Cognitive Complacency—Design for Active Use

From Article 3 and Article 4

AI improves your edge only if you stay intellectually engaged.
Over-reliance on AI leads to “metacognitive laziness” and poor decision adaptation among lower performers. Investment professionals should:

  • Use AI as a thinking partner, not a shortcut—build prompts, frameworks, and tools that stimulate reflection and hypothesis testing.
  • Train teams to challenge AI outputs through scenario analysis and domain-specific judgment.
  • Design workflows that combine machine efficiency with human intent, especially in investment research and portfolio construction.

 

3. Prepare for AI Systemic Risk—Mitigate Overconcentration and Opacity

From Article 2 and Article 5

The AI herd effect is real—being contrarian means understanding the models everyone else is using. Widespread use of similar AI models introduces systemic risk: increased market correlation, third-party concentration, and model opacity. Investment professionals should:

  • Diversify model sources and maintain independent analytic capabilities.
  • Build AI governance frameworks to monitor data quality, model assumptions, and alignment with fiduciary principles.
  • Stay alert to information distortion risks, especially through AI-generated content in public financial discourse.

 


TOP 5 ARTICLES


 

ARTICLE ONE

Why the HAILR Ratio Is Key to Understanding AI’s Economic Impact

ARTIFICIAL INTELLIGENCE | Traub et al. | 1_2024 | Publication Report

Important Development

As AI rapidly scales, understanding the Human-to-AI Leverage Ratio (HAILR) becomes critical for investors. This paper introduces HAILR to show how a small amount of human effort could drive massive economic output. Understanding this dynamic could offer an edge in the Age of Abundance.

Why Relevant to You?

Investors who ignore AI-driven productivity shifts risk missing the biggest economic transformation of our time. This paper offers a clear, flexible model to understand how AI could reshape industries, labor markets, and profits. It reveals both the risks — like mass job displacement — and the historic opportunities for growth and wealth creation.

 


 

ARTICLE TWO

Did LLMs Cross The Uncertainty Barrier Through Thin-Slicing?

ARTIFICIAL INTELLIGENCE | Michigan State University, Organization & Environment | van Zanten | 04_2025 | Paper Article

Important Findings

Research on “thin slices” shows humans can reach accurate judgements on relatively little information. It works for AI, too. LLMs given just 10% of a public science presentation can accurately assess whether people will find it interesting. Even seven seconds of a talk yields results.

This study examined whether brief excerpts (thin slices) of scientific presentations can reliably predict the overall quality of the full presentations. The researchers employed large language models (LLMs) to evaluate transcripts of over 100 real-life science talks and their thin slices, with outcomes comparable to those of human participants. The question arose whether this indicates that the machine has crossed the uncertainty barrier. Conclusion: the study design and results do not support such a conclusion.

Why Relevant to You?

Unknown unknowns (black swans) represent the uncertain end of the Knightian complexity spectrum. For now, machines lack the data quantity and quality to reliably support human heuristics in finding order amid unstructured uncertainty. Current attempts seek ways for machines to cross this uncertainty barrier, e.g., via small language modeling. We are not there yet, and may never be. Still, this is a domain to observe carefully.

 


 

ARTICLE THREE

The Uneven Impact of Generative AI on Entrepreneurial Performance

HUMAN & ARTIFICIAL INTELLIGENCE | Harvard Business School and Berkeley Haas | 7_2024 | Paper

Important Findings

We need more work like this study on AI as an advisor to humans, rather than AI simply doing the work. This controlled study in Kenya found that top small-business entrepreneurs saw a stunning 15% boost in profits when given an AI mentor, while low performers struggled with the mentorship and did worse.

The authors attribute this increase in performance inequality to differences in how entrepreneurs selected from and implemented the AI advice they received, rather than differences in the advice itself. High-performing entrepreneurs appeared to work with the AI to discover and implement tailored, specific improvements for their businesses, whereas low performers tended to select and implement more generic advice around lowering prices and increasing advertising, which often proved detrimental.

Why Relevant to You?

We do know that AI advice alone is enough to measurably increase the performance of already high-performing entrepreneurs, a finding likely extendable to high-performance decision-making under uncertainty, as in capital markets. What we need to explore further is how and when such advice can be built into the decision design without triggering the adverse effect of suppressing the cognitive development of its users.

 


 

ARTICLE FOUR

Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance

HUMAN & ARTIFICIAL INTELLIGENCE | British Journal of Educational Technology | Fan Y. et al. | 3_2025 | Paper

Important Findings

This study investigates how augmented intelligence—collaboration between humans and AI like ChatGPT—affects student learning. In a randomized experiment with 117 university students, participants received learning support from either ChatGPT, a human expert, or analytic tools. While motivation levels remained consistent, the groups differed in self-regulated learning processes and performance. ChatGPT users improved in essay scores but showed no significant gains in knowledge transfer.

The study warns of “metacognitive laziness,” where over-reliance on AI limits deep learning. It emphasizes the need to balance AI support with active learner engagement to harness the full benefits of hybrid intelligence in education.

Why Relevant to You?

Professionals using AI tools must guard against passivity and aim for thoughtful engagement. This study highlights the importance of using AI to support—not replace—critical thinking. Educators and trainers should design learning tasks that foster reflection and autonomy, ensuring AI enhances rather than diminishes long-term learning and professional development.


 


 

ARTICLE FIVE

The Financial Stability Implications of Artificial Intelligence

HUMAN & ARTIFICIAL INTELLIGENCE | Financial Stability Board | 11_2024 | Report

Important Findings

After describing factors that have driven the growth of AI in the financial industry over the past few years, the report identifies the following related vulnerabilities that could negatively impact financial stability: a) third-party dependencies and services provider concentration; b) increased market correlation due to the widespread use of common AI models; c) cyber risks; d) model risk including opaque data quality and AI governance; e) the use of GenAI to spread disinformation in financial markets. The report also provides an overview of current industry and supervisory use cases of AI.

Why Relevant to You?

The report provides a concise summary of recent trends in the adoption of AI in the financial industry, as well as the related challenges in using this technology responsibly. It is therefore a valuable source of information that can help decision-makers navigate the complexities of AI adoption in the financial sector, manage the associated risks, and capitalize on new opportunities.