AINL#011 Augmented Intelligence in Investment Management Newsletter

Welcome to Edition #011 of the Newsletter on Augmented Intelligence in Investment Management (AINL). Every two weeks, we deliver five unique insights tailored to empower investment decision-makers. Our insights, carefully curated by a seasoned team of market specialists, are unbiased, actionable, and practical. They will help you navigate the noise.

 


AINL#011 SYNTHESIS


 

What do these recent developments mean for investment decision-makers?

 

1. Beyond the Hype: Augmented Intelligence, Not Autonomous AI, Is the Real Game-Changer

AI excels at scaling performance, especially for less experienced workers, by capturing and applying accumulated human know-how. As shown in Brynjolfsson et al.’s work and Ren et al.’s findings, the true edge lies in human-AI complementarity: AI handles pattern-based, repetitive tasks while humans contribute contextual judgment and emotional intelligence. But this synergy depends on investor teams being open to algorithmic input and actively building AI literacy. Moreover, findings from BIS show that current AI models still struggle with real-time self-correction and fall short of full autonomy—reinforcing that AI should augment, not replace, human decision-makers in investment settings.

 

2. Ditch the Ratings, Follow the Impact: Rethinking ESG Through the SDG Lens

Van Zanten’s research reveals a troubling disconnect between ESG ratings and real-world impact, suggesting investors may be relying on flawed signals. In contrast, Sustainable Development Goals (SDGs) offer a more impact-centered and future-aligned metric for sustainable investing. This signals a paradigm shift: forward-looking investors must move from ESG risk mitigation to measuring actual societal and environmental contributions. In a capital market where perception often outweighs substance, adopting SDG-aligned frameworks enhances both portfolio resilience and credibility with increasingly impact-aware stakeholders.

 

3. Brains on Autopilot? Why Delegating Thinking to AI Puts Your Own Cognitive Development at Risk

Anthropic’s analysis of student AI use shows a growing trend of outsourcing higher-order thinking, such as analysis and creation, to generative AI. For investment professionals, this is a double-edged sword. While it can boost productivity, it also risks atrophy of the core cognitive skills critical for contrarian thinking, probabilistic reasoning, and variant perception. Investors must ensure that AI tools don’t become a crutch. Instead, they should be embedded in structured decision-making workflows that preserve, and even sharpen, human judgment. In this new environment, developing meta-cognitive awareness and fostering intellectual humility may be just as valuable as mastering a financial model.

 


TOP 5 ARTICLES


 

ARTICLE ONE

Anthropic Education Report: How University Students Use Claude

ARTIFICIAL INTELLIGENCE | Anthropic | 04_2025 | Publication

Important Development

A recent study by Anthropic examined how students use Claude for academic tasks, analyzing approximately one million anonymized conversations from Claude.ai Free and Pro accounts tied to higher-education email addresses. Findings: students primarily use AI for higher-order tasks like creating and analyzing.

Higher-order thinking skills:
• Creating (39.8%)
• Evaluating (5.5%)
• Analyzing (30.2%)

Lower-order thinking skills:
• Applying (10.9%)
• Understanding (10.0%)
• Remembering (1.8%)

Why Relevant to You?

The report’s authors are spot on in their concluding remarks about the key issues this raises, which are just as relevant for investment professionals and the development of their own cognitive abilities:

‘As students delegate higher-order cognitive tasks to AI systems, fundamental questions arise: How do we ensure students still develop foundational cognitive and meta-cognitive skills?
How do we redefine assessment and cheating policies in an AI-enabled world?
What does meaningful learning look like if AI systems can near-instantly generate polished essays, or rapidly solve complex problems that would take a person many hours of work?
As model capabilities grow and AI becomes more integrated into our lives, will everything from homework design to assessment methods fundamentally shift?’

 


 

ARTICLE TWO

Are ESG Ratings Enough to Measure Corporate Sustainability?

SUSTAINABILITY | Organization & Environment | van Zanten | 04_2025 | Paper

Important Findings

Think ESG ratings tell the full story of corporate sustainability? This paper reveals why they often miss the mark—and what better captures real-world impact. By comparing ESG ratings with SDG scores, the paper uncovers a striking disconnect.

Why Relevant to You?

This paper challenges the reliability of ESG ratings, a widely used tool in sustainable investing. By showing that ESG ratings do not align with how investors and regulators assess corporate sustainability, it questions their effectiveness in guiding capital allocation. The study finds that SDG scores better reflect companies’ real-world impacts—both positive and negative—on sustainable development. This insight is crucial for investors seeking to align portfolios with long-term sustainability goals. Investors must go beyond ESG risk avoidance and incorporate impact-focused metrics like SDG scores.
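To illustrate how such a disconnect can be quantified, here is a minimal sketch, assuming entirely hypothetical scores for five companies; it is not the paper’s actual methodology or data. A rank correlation near zero, or negative, would mean the two metrics disagree about which firms count as sustainable.

    # Minimal sketch: measuring the ESG-vs-SDG disconnect with a rank correlation.
    # All scores below are hypothetical; the paper's methodology and data differ.
    from scipy.stats import spearmanr

    # Hypothetical scores for five companies (higher = more sustainable).
    esg_ratings = {"A": 82, "B": 75, "C": 68, "D": 90, "E": 55}
    sdg_scores = {"A": 40, "B": 85, "C": 70, "D": 35, "E": 60}

    companies = sorted(esg_ratings)
    esg = [esg_ratings[c] for c in companies]
    sdg = [sdg_scores[c] for c in companies]

    rho, p_value = spearmanr(esg, sdg)
    print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.2f})")
    # A rho near zero or below means ESG ratings and SDG scores rank the
    # same companies very differently -- the "disconnect".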

 


 

ARTICLE THREE

Generative AI at Work

HUMAN & ARTIFICIAL INTELLIGENCE | Quarterly Journal of Economics | Brynjolfsson, Li & Raymond | 2025 | Paper

Important Findings

Based on a field experiment with customer-service workers at a Fortune 500 firm, the authors find that access to AI assistance during chat conversations with customers significantly improves agents’ performance in terms of average handle time (the average time an agent needs to finish a chat) as well as customer satisfaction. While introducing a generative AI solution to assist in responding to customer inquiries has little or even a negative effect on the performance of higher-skilled or more experienced workers, the improvement is substantial for less experienced agents. The paper also provides a comprehensive literature review that is worth reading in its own right.

Why Relevant to You?

To us, the paper implies that generative AI promises productivity gains in every domain where LLMs can learn behavioral rules from many recurring human interactions in the past and apply those rules to similar situations in the future. In this way, the model behind such a generative AI effectively becomes a store of knowledge that has traditionally been attributed to personal experience and craftsmanship. We already observe this embodiment of knowledge in ML models in medicine (recognition of carcinomas) and software development (code completion), areas where comprehensive personal experience has been necessary to build true expertise. Now, with this paper, we have empirical evidence for the same effect in customer service, where generative AI helps new and less experienced workers get up to speed more quickly.



 

ARTICLE FOUR

Putting AI Agents Through Their Paces on General Tasks

HUMAN & ARTIFICIAL INTELLIGENCE | BIS Working Paper | Perez-Cruz & Shin | 2025 | Paper

Important Findings

As a contribution to the discussion on how to assess artificial general intelligence (AGI), the authors evaluate the ability of two popular LLMs to play games from the NY Times (Wordle, Face Quiz, Flashback). Their experiments are designed to test the models’ ability to recognize their own mistakes and to take those mistakes into account in the next move. Based on rather modest success rates, they argue that, in order to be truly effective in the workforce, AGI-aspiring models must be able to self-assess, self-criticize, and autocorrect.
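To make this experimental design concrete, here is a minimal sketch of such a self-correction loop in Python, with a naive word-list filter standing in for the LLM; the authors’ actual harness, prompts, and scoring differ.

    # Minimal sketch of a self-correction loop in the spirit of the paper's
    # Wordle experiments. A naive word-list baseline stands in for the LLM.

    def wordle_feedback(guess: str, answer: str) -> str:
        # Per-letter feedback: G = right spot, Y = wrong spot, X = absent.
        # (Simplified: repeated letters are not handled exactly as in Wordle.)
        marks = []
        for i, ch in enumerate(guess):
            if ch == answer[i]:
                marks.append("G")
            elif ch in answer:
                marks.append("Y")
            else:
                marks.append("X")
        return "".join(marks)

    WORDS = ["crane", "slate", "plant", "pride", "audio"]  # toy word list

    def next_guess(history):
        # The self-correction step the paper tests: keep only words that are
        # consistent with the feedback on every previous mistake.
        candidates = [w for w in WORDS
                      if all(wordle_feedback(g, w) == fb for g, fb in history)]
        return candidates[0] if candidates else WORDS[0]

    def play_wordle(answer: str, max_turns: int = 6) -> bool:
        history = []
        for _ in range(max_turns):
            guess = next_guess(history)
            if guess == answer:
                return True
            history.append((guess, wordle_feedback(guess, answer)))
        return False

    print(play_wordle("plant"))  # True: feedback on "crane" steers the next guess

A model that ignores its own feedback history keeps repeating inconsistent guesses, which is precisely the failure mode behind the modest success rates reported here.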

Why Relevant to You?

Against the background of the ongoing AI hype, today’s managers are under constant pressure to include AI in their business processes, yet there is general uncertainty about the real abilities of AI products and how to use them truly effectively. Perez-Cruz and Shin offer a perspective on this problem: current copilot systems augment, rather than replace, human skills and workers. As models evolve, AI agents may become increasingly autonomous, but for that they must be able to learn from their mistakes on the fly.

 


 

ARTICLE FIVE

What Makes Human-AI Partnerships Work?

HUMAN & ARTIFICIAL INTELLIGENCE | Yuqing Ren et al. | 05_2024 | Paper

Important Findings

Recent studies on human-AI complementarity reveal growing agreement on the respective strengths and limitations of each. They highlight effective ways to divide tasks between humans and AI and emphasize that successful collaboration depends on specific human skills and a willingness to engage with AI.

Why Relevant to You?

This paper highlights two key reasons why AI can outperform humans in decision-making: our tendency toward overconfidence and our difficulty unlearning outdated beliefs. Compared with humans, AI is less biased and more adaptable to new contexts, such as assessing default risk in peer-to-peer lending. The paper also shows that the effectiveness of augmented intelligence depends on whether humans appreciate or resist algorithmic input. Notably, expertise and past performance shape how willing people are to follow AI advice.