AINL#014 Augmented Intelligence in Investment Management Newsletter

Welcome to edition #014 of the Newsletter on Augmented Intelligence in Investment Management (AINL). Every two weeks, we deliver five unique insights tailored to empower investment decision-makers, carefully curated by a seasoned team of market specialists. Unbiased, actionable and practical, they will help you navigate through the noise.

 


AINL#014 SYNTHESIS


 

What do these recent developments mean for investment decision-makers?

 

1. Enhance Alpha through Cognitive Augmentation, Not Automation Substitution

The evolution of large reasoning models (Article 1) points to a future in which LLMs can act as conceptual co-pilots in investment processes—surfacing non-obvious linkages across macro reports, central bank speeches, and regulatory disclosures. However, the Klarna case (Article 2) underscores a strategic boundary condition: full automation, especially in client-facing or trust-critical processes, risks impairing qualitative alpha sources such as client experience or relationship-driven flows. Investors should thus structure their AI deployments as augmentation levers—boosting research throughput and insight synthesis—rather than direct headcount replacements in fiduciary functions.

 

2. Monitor AI Biases as Latent Portfolio Risk Factors

The repurposed Asch experiments (Article 3) demonstrate that LLMs, like human analysts, are susceptible to framing effects and sequencing biases—impairing objectivity in judgment-heavy tasks. This has implications beyond HR or communications: model-based portfolio construction tools, if improperly tuned, may encode hidden biases in scoring ESG credentials, thematic exposures, or management assessments. Investment teams using LLM-augmented analytics should stress-test their models for these cognitive distortions, akin to validating factor models for data snooping or structural breaks.
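
To make this concrete, here is a minimal sketch of such an ordering stress test in Python (the prompt wording, the 0-10 scale and the stand-in scorer are illustrative assumptions; in practice score_fn would wrap a call to the LLM-based scoring tool under review):

from itertools import permutations
from statistics import mean, pstdev
from typing import Callable, Sequence

def ordering_stress_test(
    facts: Sequence[str],
    score_fn: Callable[[str], float],
    max_orderings: int = 24,
) -> dict:
    """Score the same set of facts under different orderings.

    An order-insensitive scoring model should return (nearly) the same
    score for every permutation; a large spread signals a primacy or
    sequencing bias of the kind the Asch-style study surfaced in LLMs.
    """
    scores = []
    for i, ordering in enumerate(permutations(facts)):
        if i >= max_orderings:  # cap the number of model calls
            break
        prompt = "Rate this issuer from 0 to 10 given:\n" + "\n".join(ordering)
        scores.append(score_fn(prompt))
    return {"mean": mean(scores),
            "spread": max(scores) - min(scores),
            "stdev": pstdev(scores)}

# Deliberately order-sensitive stand-in scorer, to show how a bias
# would surface; replace with a wrapper around your actual model.
facts = ["strong free cash flow", "pending regulatory probe",
         "improving ESG disclosure", "high customer churn"]
print(ordering_stress_test(facts, score_fn=lambda p: hash(p) % 11))

A spread well above the model's run-to-run noise on a fixed ordering would be the LLM analogue of a factor model failing an out-of-sample stability check.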

 

3. Use AI for Signal Extraction, Not Executive Voice Substitution

The thin-slicing capabilities of LLMs (Article 4) show high utility in fast signal detection—for instance, triaging earnings calls or identifying anomalies in sell-side briefings. However, the Harvard-led CEO messaging study (Article 5) reminds us that perceived authenticity remains a critical variable in leadership communications, both internal and external. For portfolio managers and investor relations teams, AI should support the drafting and distillation of content, but final outputs must retain the authentic voice of the human originator to preserve trust-based capital with LPs and stakeholders.

 


TOP 5 ARTICLES


 

ARTICLE ONE

Towards Large Reasoning Models

ARTIFICIAL INTELLIGENCE | Xu et al. | 2025 | Article

Important Development

The survey provides a comprehensive review of recent research efforts towards large reasoning models. Starting with a not-too-technical overview of how LLMs are trained, it discusses potential approaches to automated annotation, optimizing pre-trained LLMs, enhancing multi-step reasoning, and fine-tuning.

In all of these areas, reinforcement learning is presented as a promising avenue for future progress. The survey also discusses recent systems such as OpenAI's o1 and o3, as well as open-source efforts.

Why Relevant to You?

The development of models capable of conceptual reasoning is essential if LLMs are to assist today's information workers in fulfilling their main responsibilities. For example, extracting actionable insights from the continuous flow of often overlapping information provided by news outlets, central banks, supervisory agencies, think tanks, etc. is a highly time-consuming process.

The better AI models become at identifying abstract concepts and the relationships between them, the more effective information workers may become at processing new information and making decisions. Understanding the technology behind this development matters both for the responsible use of AI and for spotting investment opportunities.

 


 

ARTICLE TWO

Fintech Admits AI Went Too Far

ARTIFICIAL INTELLIGENCE | Bloomberg | 05 2025 | Article

Important Findings

In a recent Bloomberg article, the CEO of Klarna, a fintech company, admitted the company went too far in replacing customer service roles with AI and is now reintroducing human support to improve service quality. The fintech is piloting a flexible, remote customer service model while maintaining its broader commitment to AI for efficiency gains.

This shift reflects the limitations of AI in replicating human empathy and the importance of maintaining customer trust. The case underscores the risks and complexities of fully automating customer-facing roles in the financial sector.

Why Relevant to You?

Klarna’s reversal on fully automating customer service highlights the current limitations of AI in handling human interactions. This data point is valuable because it shows that even innovative firms are reassessing the scope of AI’s capabilities and may be dialing back expectations about full automation.

 


 

ARTICLE THREE

Accountability Issues Remain

HUMAN & ARTIFICIAL INTELLIGENCE | Mika Hämäläinen | 04 2025 | Article

Important Findings

A recent study analysed the primacy effect in ChatGPT, Gemini and Claude by repurposing the Asch experiment (1946), originally conducted on human subjects: given two candidates described with identical adjectives, which one is preferred when one description lists the positive adjectives before the negative ones and the other lists the negative adjectives first?

In the first experiment (where candidates had to be compared), ChatGPT preferred the candidate with positive adjectives listed first, while Gemini preferred both equally often. Claude refused to make a choice.

When candidates had to be scored individually, ChatGPT and Claude were most likely to rate both candidates equally; when they did not, both showed a clear preference for the candidate with negative adjectives listed first. Gemini was most likely to prefer the candidate with negative adjectives listed first.
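
For illustration, the comparison condition can be reconstructed with a short script (the adjectives below are the classic Asch (1946) sequence; the study's exact stimuli and prompt wording may differ):

# Paired-comparison prompt in the spirit of the repurposed Asch setup.
TRAITS = ["intelligent", "industrious", "impulsive",
          "critical", "stubborn", "envious"]

positive_first = ", ".join(TRAITS)            # positive traits lead
negative_first = ", ".join(reversed(TRAITS))  # negative traits lead

prompt = (
    f"Candidate A is described as: {positive_first}.\n"
    f"Candidate B is described as: {negative_first}.\n"
    "Both descriptions contain exactly the same traits.\n"
    "Which candidate would you prefer to hire, and why?"
)
print(prompt)  # send to ChatGPT, Gemini or Claude; a systematic
               # preference for Candidate A indicates a primacy effect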

Why Relevant to You?

The findings reinforce the need for caution and accountability in the use of LLMs. While AI models offer immense potential, their susceptibility to a broader range of biases than commonly assumed must be addressed to ensure fair and equitable outcomes. HR screening and AI-driven team optimization come to mind first, but the same caution may be worth applying to portfolio optimization based on variables that go beyond hard data points.


 

ARTICLE FOUR

The Art of Audience Engagement

HUMAN & ARTIFICIAL INTELLIGENCE | R. Schmälzle, S. Lim, Y. Du, G. Bente | 05 2025 | Article

Important Findings

This paper examines the thin-slicing approach for LLMs – the ability to make accurate judgments based on minimal information – in the context of scientific presentations. Drawing on research from nonverbal communication and personality psychology, it shows that brief excerpts (<10% of the talk) reliably predict overall presentation quality (in line with human ratings). The findings are robust across different LLMs and prompting strategies.  
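
As a rough sketch of the procedure (the 10% threshold follows the paper's description; the prompt wording and rating scale are illustrative assumptions):

def thin_slice(transcript: str, fraction: float = 0.10) -> str:
    # Return roughly the first `fraction` of the talk by word count.
    words = transcript.split()
    return " ".join(words[: max(1, int(len(words) * fraction))])

def rating_prompt(excerpt: str) -> str:
    return ("Below is a brief excerpt from the start of a talk.\n"
            "Rate the likely overall presentation quality from 1 (poor) "
            "to 10 (excellent), with a one-sentence justification.\n\n"
            + excerpt)

talk = "Good morning everyone. Today I will walk you through ..."  # stand-in transcript
print(rating_prompt(thin_slice(talk)))  # send to the LLM of choice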

Why Relevant to You?

While further research is needed to refine predictions beyond presentation transcripts (i.e. to incorporate the actual delivery), and eventually beyond scientific talks, the study clearly indicates that LLMs could become scalable feedback tools to augment human communication.

 


 

ARTICLE FIVE

Why CEOs Should Think Twice Before Using AI to Write Messages

HUMAN & ARTIFICIAL INTELLIGENCE | HBR | May-June 2025 | Article 

Important Findings

A Harvard-led study tested whether employees could distinguish between messages written by a CEO and those generated by an AI trained to mimic his style. Employees correctly identified AI-generated responses only 59% of the time. However, messages they believed were AI-written were rated as less helpful – even when written by the human CEO. The study highlights the perception bias against AI and urges leaders to be transparent, limit AI use to impersonal content, and always triple-check messages.

Why Relevant to You?

This research is a critical reminder that how messages are perceived can matter more than who wrote them. For leaders, AI can save time but risks eroding trust if misused. In a world that values authenticity, this makes a strong case for using AI as a drafting tool (not a mouthpiece), especially when communicating with teams or stakeholders who expect a personal touch.