AINL#024 Augmented Intelligence in Investment Management Newsletter

Welcome to the 024 Edition of the Newsletter on Augmented Intelligence in Investment Management.

Every two weeks, we deliver five unique insights tailored to empower investment decision-makers. Our insights are carefully curated by a seasoned team of market specialists. Unbiased, actionable, and practical, they will help you navigate through the noise.

 


AINL#024 SYNTHESIS


1. Agentic AI amplifies efficiency but not yet decision quality

Sources > Carnegie Mellon & Stanford 2025, G20 Finance Ministers & Central Bank Governors 2025

Recent comparative studies show that AI agents outperform humans in speed and cost efficiency but still underdeliver in accuracy and contextual judgment. The Carnegie Mellon & Stanford (2025) experiment demonstrated that agents completed analytical and writing tasks ≈88% faster and up to 96% cheaper, yet often produced sub-standard or fabricated data, signalling that human–machine collaboration, not substitution, remains the alpha source. Similarly, central-bank case studies by the G20/BIS (2025) highlight that scaling AI safely requires disciplined data governance and human oversight to sustain credibility in high-stakes policy environments.

2. Adaptive intelligence, not static automation, will define the edge

Source > Geng et al., 2025.

Evidence from Geng et al. (2025) reveals that large language models can change their internal beliefs through repeated exposure, a phenomenon with major implications for investment analytics. “Belief drift” means models trained on financial narratives can unintentionally shift their bias over time, distorting research or portfolio insights. The takeaway: investment teams must treat LLMs as dynamic agents requiring continuous calibration, governance, and validation, much like a risk model with time-varying parameters.

3. Augmented intelligence elevates human capital more than it displaces it

Sources > Yale 2025, University of Chicago & UC Berkeley 2025.

AI’s current labour-market impact remains incremental. The Yale (2025) analysis shows that structural job changes since the advent of GenAI remain only ~1 percentage point above early-internet levels, with Financial and Professional Services merely accelerating existing trends. Meanwhile, the University of Chicago & UC Berkeley (2025) study finds that guided access to LLMs can close human skill gaps, hinting that augmentation, not automation, yields the highest productivity uplift. For investment organizations, this supports reframing AI adoption as talent leverage: using models as cognitive scaffolds to shorten the competence curve and scale institutional learning.


TOP 5 ARTICLES


 

ARTICLE ONE

How Do AI Agents Do Human Work? Comparing AI and Human Workflows Across Diverse Occupations

ARTIFICIAL INTELLIGENCE | Carnegie Mellon University and Stanford University | 11-2025 | Paper

Important Development

This paper examines how AI agents and humans work together, presenting the first direct comparison of human and agent workers across multiple essential work-related skills: data analysis, engineering, computation, writing, and design. Findings: current agents are fast but not yet reliable enough to complete tasks on their own, and they approach problems with too much of a programming mindset.

We need more of these kinds of real-world studies, rather than settings optimized to make the machine shine for fundraising or other hidden agendas.

Why Relevant to You?

Agents produce work of inferior quality, yet often mask their deficiencies via data fabrication and misuse of advanced tools. Nonetheless, agents deliver results 88.3% faster and cost 90.4–96.2% less than humans, highlighting the potential for enabling efficient collaboration by delegating easily programmable tasks to agents.

 


 

ARTICLE TWO

Use of AI for Policy Purposes 

ARTIFICIAL INTELLIGENCE | G20 Finance Ministers and Central Bank Governors | 2025 | Report

Important Findings

The report examines how central banks and other supervisory institutions are leveraging AI for policy purposes. It offers a brief discussion of core AI concepts relevant to public authority use cases, focusing in particular on machine learning (ML). It then provides examples of how central banks and supervisory authorities are already using big data and ML in four key areas: (i) information collection and the compilation of official statistics; (ii) macroeconomic and financial analysis; (iii) oversight of payment systems; and (iv) supervision and financial stability analysis.

Finally, the report stresses that, despite AI’s significant potential to enhance policymaking, the effective use of GenAI requires a number of challenges to be addressed, ranging from data governance to investing in human capital and information technology (IT) infrastructure. A key lesson: collaboration and the sharing of experiences are important avenues for central banks, in particular for exploiting economies of scale and reducing the demands on IT infrastructure and human capital.

Why Relevant to You?

The report highlights how central banks and regulators are increasingly using AI to enhance monetary policy, financial stability oversight, and supervisory processes. This shift affects how data is collected, analyzed, and acted upon – potentially influencing interest rates, compliance expectations, and risk assessments.

 


 

ARTICLE THREE

Is AI Persuadable?

ARTIFICIAL INTELLIGENCE | Geng, J., et al. | November 2025 | Paper

Important Findings

This paper is important because it reveals that large language models can alter their internal beliefs simply through continued exposure to new information or dialogue. This finding challenges the common assumption that model outputs remain stable once training ends. It highlights a new dimension of AI behaviour which has implications for safety, reliability, and long-term alignment.

Why Relevant to You?

The paper is relevant for investment practitioners because it shows that large language models can change their internal beliefs simply through repeated exposure to information. Many investment firms increasingly rely on these models for research, market analysis, and decision support, meaning such shifts could unintentionally alter insights or recommendations over time. This “belief drift” creates potential risks for portfolio decisions, risk assessments, and automated reporting if outputs become inconsistent or misaligned with investment principles.
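The continuous validation this implies can be sketched concretely. The paper itself does not prescribe a monitoring method; the following is a minimal, hypothetical sketch assuming a firm maintains a fixed probe set of research questions with recorded baseline answers, and periodically re-queries the model to measure how many answers have shifted. `ask_model` is a stand-in for a real LLM call, stubbed here so the sketch runs on its own.

```python
# Hypothetical "belief drift" check: re-ask a fixed probe set and compare
# the model's current answers against the answers recorded at validation time.

PROBES = [
    "Does quantitative easing tend to compress credit spreads? (yes/no)",
    "Is momentum a priced risk factor? (yes/no)",
    "Do rate hikes usually pressure long-duration assets? (yes/no)",
]

# Answers recorded when the model was first validated (illustrative).
BASELINE = {PROBES[0]: "yes", PROBES[1]: "yes", PROBES[2]: "yes"}

def ask_model(probe: str) -> str:
    # Hypothetical stub standing in for a real LLM call; this simulated
    # model has drifted and now flips its answer on the second probe.
    drifted = {PROBES[0]: "yes", PROBES[1]: "no", PROBES[2]: "yes"}
    return drifted[probe]

def agreement_rate() -> float:
    """Share of probes on which the model still matches its baseline answer."""
    matches = sum(ask_model(p) == BASELINE[p] for p in PROBES)
    return matches / len(PROBES)

rate = agreement_rate()
if rate < 0.9:  # governance threshold; tune to your risk tolerance
    print(f"Belief drift flagged: agreement {rate:.0%} below threshold")
```

In practice the probe set, threshold, and cadence would be set by the firm's model-governance policy; the point is simply that drift becomes measurable once a baseline is pinned down.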

 


 

ARTICLE FOUR

The Hidden Curriculum > How AI Could Trigger Social Mobility

ARTIFICIAL INTELLIGENCE | University of Chicago, UC Berkeley | 11-2025 | Paper

Important Findings

This paper shows the impact of the hidden curriculum on educational outcomes. It highlights that first-generation college students are often unaware of the unwritten rules that lead to success, such as the value of internships, student clubs, and letters of recommendation from professors, but that giving them access to an LLM for guidance significantly closes the gap.

Why Relevant to You?

It elaborates on AI as an advisor that can detect patterns initially invisible to the user.

 


 

ARTICLE FIVE

AI With Zero Effects on US Labour Market 2022-25

ARTIFICIAL INTELLIGENCE | Yale University | 10-2025 | Paper

Important Findings

The job-market changes attributable to AI that do exist are small and often began before generative AI appeared. The study measures job shifts with a dissimilarity index, which captures how much the mix of jobs changes relative to a starting point. Across the 33 months since November 2022, the index has moved only slightly faster than in the computer and internet eras, about 1 percentage point higher than in the early-internet period. Some US industries, such as Information, Financial Activities, and Professional and Business Services, show more movement, but these were already changing before generative AI arrived.
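The intuition behind such an index is simple. The paper's exact specification is not reproduced here; the sketch below computes the classic dissimilarity index, half the sum of absolute differences between two occupational-share vectors, on made-up, illustrative shares.

```python
# Sketch of a dissimilarity index over occupational employment shares:
# D = 0.5 * sum_i |s_i(current) - s_i(baseline)|, interpretable as the
# fraction of workers who would need to switch occupations to restore
# the baseline job mix. Figures below are illustrative, not from the paper.

def dissimilarity_index(baseline_shares, current_shares):
    """Half the sum of absolute differences between two share vectors.

    Ranges from 0 (identical job mix) to 1 (completely disjoint mix).
    """
    assert abs(sum(baseline_shares) - 1.0) < 1e-6, "shares must sum to 1"
    assert abs(sum(current_shares) - 1.0) < 1e-6, "shares must sum to 1"
    return 0.5 * sum(abs(c - b) for b, c in zip(baseline_shares, current_shares))

# Hypothetical four-occupation mix in Nov-22 vs. today.
nov_2022 = [0.30, 0.25, 0.25, 0.20]
today    = [0.28, 0.27, 0.24, 0.21]
print(dissimilarity_index(nov_2022, today))  # ≈ 0.03, i.e. ~3% reshuffling
```

A reading of roughly 0.03 would mean about 3% of workers changed occupational category relative to the baseline, which is the scale of movement the Yale study reports since late 2022.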

Why Relevant to You?

Currently, measures of exposure, automation, and augmentation show no sign of being related to changes in employment or unemployment.