AINL#013 Augmented Intelligence in Investment Management Newsletter

Welcome to the 013 Edition of the Newsletter on Augmented Intelligence in Investment Management (AINL). Every two weeks, we deliver five unique insights tailored to empower investment decision-makers. Our insights are carefully curated by a seasoned team of market specialists. Unbiased, actionable, and practical, they will help you cut through the noise.

 


AINL#013 SYNTHESIS


 

What do these recent developments mean for investment decision-makers?

 

1. Use LLMs as Tactical Enhancers, Not Strategic Delegates

Recent studies (Articles 1 & 5) reaffirm that Large Language Models (LLMs) like ChatGPT can boost learning and task efficiency—particularly in structured, short-term applications. However, their performance degrades as complexity, time horizons, and coordination demands increase.

2. Prioritize Human-Centric AI Design to Safeguard Alpha Generation

The integration of AI into collaborative environments (Articles 3 & 4) reveals an underappreciated operational risk: poorly integrated AI can impair decision-making dynamics, elevate stress, and degrade team performance—especially in high-pressure, high-consequence settings such as portfolio management.

 

3. Prepare for an Evolving Regulatory and Risk Landscape in AI Finance

The Bank of England’s recent policy update (Article 2) flags a critical structural development: systemic risk from AI use is now officially on the regulatory radar. Model failures, herd behavior, and dependency on AI service providers are cited as plausible threats to market stability.

 


TOP 5 ARTICLES


 

ARTICLE ONE

Meta-Analysis on How LLMs Can Improve Learning Outcomes

ARTIFICIAL INTELLIGENCE | Nature | 5_2025 | Publication

Important Development

This study aimed to assess the effectiveness of ChatGPT in improving students’ learning performance, learning perception, and higher-order thinking through a meta-analysis of 51 research studies published between November 2022 and February 2025.

Plenty of caveats, but altogether it reaffirms that ChatGPT helps learning when used appropriately. Effects on learning perception and higher-order thinking are positive, but smaller. More research remains to be done.

Why Relevant to You?

By now we understand that the current generation of LLMs cannot replace human thinking, but can augment it. The remaining challenge is how to avoid the potential negative consequences, as highlighted in several references throughout this newsletter.

This meta-analysis identifies design elements that should be in place for a positive impact, for example: (1) embedding ChatGPT in an educational framework, (2) using it as an intelligent tutor, and (3) adapting it to diverse learning needs.

 


 

ARTICLE TWO

Financial Stability in the Age of AI

ARTIFICIAL INTELLIGENCE | Bank of England | 4_2025 | Paper

Important Findings

Recently, the Bank of England published the Financial Policy Committee’s view on how AI may impact financial stability. It offers many insights that are relevant to investors.

Why Relevant to You?

This paper explains how AI is reshaping the financial sector by improving efficiency, automating processes, and informing decisions like credit and insurance underwriting. It warns that widespread AI use could create systemic risks, such as model failures, market instability from herd behavior, and operational vulnerabilities due to dependence on a few AI service providers.

For investors, this signals that regulatory oversight of AI in finance will likely increase.

 


 

ARTICLE THREE

Trusting AI in Space? Simulated Mission Finds Human Teams Still Outperform

HUMAN & ARTIFICIAL INTELLIGENCE | Qin, Y., Lee, R. T., & Sajda, P. | 1_2025 | Paper

Important Findings

This study examined how human teams perform when collaborating with an AI teammate in a virtual reality sensorimotor task. Participants worked together to control a spacecraft navigating through rings in space and returning to Earth. Surprisingly, human-AI teams performed worse than all-human teams, particularly as task difficulty increased. The AI’s presence disrupted communication, elevated arousal (pupil dilation, blink rates), and led to overcompensating behaviors. Despite improved trust in the AI over time, performance stayed low. The findings underscore the need for AI design that supports natural team coordination and reduces cognitive strain.

Why Relevant to You?

As AI becomes embedded in teamwork, business leaders must understand its effect on performance. Even in structured tasks like collaborative spacecraft control, AI teammates reduced efficiency and increased stress. This highlights the importance of designing AI tools that complement human cognition, foster trust and communication, and minimize disruption, ensuring AI adds value to collaborative work rather than impeding it.

 


 

ARTICLE FOUR

How AI-integrated Applications Affect Financial Engineers

HUMAN & ARTIFICIAL INTELLIGENCE | Gao, K., & Zamanpour, A., BMC Psychology 12, 555 | 12_2024 | Article

Important Findings

The integration of AI is significantly reshaping the role of financial engineers. A study analysed the opportunities and challenges that AI-integrated applications present for their psychological safety and work-life balance. The conclusions are twofold. While participants acknowledged potential benefits such as increased efficiency and productivity, they emphasized concerns about work-related stress. Confidence appears to increase with longer AI exposure.

Overall, the study underscores the importance of considering the human implications of AI adoption and calls for proactive measures to support the well-being of financial professionals.

Why Relevant to You?

The takeaway is likewise twofold. Despite the gains, financial engineers face increased pressure to keep up with evolving technologies, fear of job displacement, and blurred boundaries between work and personal life. In other words, it puts talent at risk – and therefore potentially performance and revenues, as attracting new talent costs far more than retaining existing talent. Organisations that want to retain their top talent should therefore see this as an important stimulus to complement the introduction of AI applications with structured guidance.

That said, the fact that financial engineers see a lot of potential in the transition is a very positive signal for jobs where the use of AI is still in its infancy. If well supported, this could create more rapid traction than in jobs and industries where negative perception represents a significant barrier to entry and progress.

 


 

ARTICLE FIVE

Limitations of LLMs In Critical Task Management

HUMAN & ARTIFICIAL INTELLIGENCE | Andon Labs, arXiv | 2_2025 | Paper

Important Findings

While Large Language Models (LLMs) can exhibit impressive proficiency in isolated, short-term tasks, they often fail to maintain coherent performance over longer time horizons. In this paper, the authors designed an environment to specifically test an LLM-based agent’s ability to manage a straightforward, long-running business scenario: operating a vending machine. In short, can an AI make money?

This simulation suggests yes, with an important caveat. On average, Claude 3.5 Sonnet and o3-mini beat a human, but their performance is highly variable, and they fail at unpredictable times for complex reasons. Some of those failures are critical, as when Sonnet gets confused and attempts to alert the FBI about a non-existent fraud. In addition, the authors find no clear correlation between failures and the point at which the model’s context window becomes full, suggesting that these breakdowns do not stem from memory limits.

Why Relevant to You?

The inherent limitations in reliability and explainability, embedded by design in current GPT-based models, will confine their application to non-critical, short-term tasks. Consider the consequences if failures such as those illustrated above were to occur in high-stakes domains like capital allocation, defence operations, or medical treatment.