AINL#006 Augmented Intelligence in Investment Management Newsletter

Welcome to the 006 Edition of the Newsletter on Augmented Intelligence in Investment Management (AINL). Every two weeks, we deliver five unique insights tailored to empower investment decision-makers. Our insights are carefully curated by a seasoned team of market specialists. Unbiased, actionable and practical. They will help you navigate through the noise.

 


AINL#006 SYNTHESIS


 

What do these recent developments mean for investment decision-makers?

 

1. AI-Augmented Research, Not AI-Driven Decision-Making

OpenAI’s Deep Research and advanced reasoning techniques enhance investment analysis but do not replace human judgment. While AI can synthesise high-quality sources and generate insights at scale, it lacks causal reasoning and remains prone to biases. Investors should integrate AI as an augmentation tool, using it to enhance due diligence and hypothesis testing rather than delegating investment decisions entirely.

 

2. Regulatory Adaptation is Critical for AI-Driven Investment Strategies

Evolving AI regulations—such as the EU AI Act and emerging U.S. liability standards—directly impact investment firms’ risk models, compliance frameworks, and portfolio management strategies. Establishing AI Ethics Committees and implementing explainable AI (XAI) frameworks can ensure alignment with regulatory scrutiny while maintaining transparency and investor confidence. Firms that proactively adapt will mitigate legal exposure and sustain competitive advantages.

 

3. AI Security & Bias Risk Management in Investment Processes

AI models are vulnerable to security threats, adversarial manipulation, and inherent biases, as evidenced by research on LLM social desirability biases and AI jailbreak risks. Investment managers must prioritise robust AI security measures and maintain strict oversight of data inputs to prevent misinformation from contaminating decision-making. Relying on secure, well-governed AI tools helps safeguard portfolio strategies against unintended biases and external manipulation.

 


TOP 5 ARTICLES


 

ARTICLE ONE

Who Is Responsible When AI Breaks the Law?‌‌

If an AI is a black box, who is liable for its actions? The owner of the platform, the end user or its original creator?

HUMAN & ARTIFICIAL INTELLIGENCE | ETHICAL USE OF AI | Yale School of Management | 12_2024 | Article
­

Important Development

This article examines AI liability challenges in the U.S., emphasizing accountability concerns as AI-driven decisions increasingly affect individuals and businesses. Courts and regulators are addressing these risks through existing laws while considering new transparency standards.

Notable cases include a recruiting algorithm that discriminated against older applicants or a biased housing ad system violating civil rights. The article highlights the need for standardized metrics, transparent AI training data, and clear evaluation criteria for ethical implementation. It also explores the challenge of opaque “black box” AI systems, with courts using tools like algorithmic disgorgement and limited discovery to enhance transparency and accountability.

Why Relevant to You?

For asset managers, evolving AI regulations impact investment models, risk assessments, and automated decision-making. Firms must establish robust governance frameworks, including AI Ethics Committees, to oversee responsible AI use and align with emerging laws. Proactively adapting to these regulations ensures compliance and secures long-term value in an increasingly AI-driven financial sector.

 


 

ARTICLE TWO

LLM Reasoning Ante Portas?

Not Yet, But Workarounds Are Improving

AUGMENTED INTELLIGENCE | CProf. Ethan Mollick | 2_2025 | Article

Important Findings

OpenAI’s o3-based Deep Research can be seen as first narrow agent that can do sophisticated, and likely economically valuable work. We are now exploring domains of AI systems that can conduct research with the depth and nuance of human experts, but at machine speed. OpenAI’s Deep Research demonstrates this convergence and gives us a sense of what the future might be.

LLMs don’t “think” and “reason” like humans do, so researchers developed tricks to improve its reasoning over time – like telling it to “think step by step before answering.” This approach, called chain-of-thought prompting, markedly improved AI performance. Reasoners essentially automate the process, producing “thinking tokens” before actually giving you an answer.

Why Relevant to You?

The quality of citations of Deep Research marks a genuine advance here. These aren’t the usual AI hallucinations or misquoted papers – they’re legitimate, high-quality academic sources. Still, all limitations of the current generation of LLMs remain intact (brute force approach, lack of causal explanation, etc), while the quality of workarounds improve. Not good enough yet from a compliance point of view, see Yale article above, but the cognitive stimulus of Reasoners for humans in their creative and critical thinking just increased in quality and strength. 

 


 

ARTICLE THREE

Could AI Reshape Portfolio Management?

ARTIFICIAL INTELLIGENCE | Frontiers in Artificial Intelligence | 04_2024 | Article

Important Findings

A literature study published by Frontiers in Artificial Intelligence on “Enhancing portfolio management using artificial intelligence” in 2024 offers valuable insights into the impact of AI on portfolio management. While AI improves data-driven decision-making, its “black-box” nature also poses risks. Regulations are in place that aims to minimize this risk, but they also could pose a barrier to AI adoption for investment firms.

Why Relevant to You?

AI can enhance efficiency within portfolio management, but the evolving regulatory landscape (e.g., the EU AI Act, and GDPR requirements) signals increased scrutiny of AI. Full automation of portfolio management therefore remains unlikely. AI should be positioned as an augmentation tool, with portfolio managers retaining final decision-making authority. In addition, investment firms implementing explainable AI (XAI) frameworks can improve regulatory compliance and stakeholder trust.

 


 

ARTICLE FOUR

LLMs With Big Five Biases

SUSTAINABLE INVESTING | Institute for Human-Centered AI, Stanford University | 12_2024 | Paper

Important Findings

Large language models display human-like social desirability biases in Big Five personality surveys. While most recent discussions about the objectiveness of LLMs talk about mentioning the sources, mentioning the reflection points or being inbiased about historical facts, it is interesting to see still/again research that analyses the social desirability bias of LLMs.

While this ‘only’ is analysed in specific use cases, it reconfirms that this bias exists in all tested models, including GPT-4/3.5, Claude 3, Llama 3, and PaLM-2. Bias levels appear to increase in more recent models.

Why Relevant to You?

The findings in this paper reinforce the importance of human guidance and oversight in objective decision taking. The assumption to let the machine make investment decisions on ones behalf to increase the degree of rationality, thus decision quality, remains falsified.

 


 

ARTICLE FIVE

AI Safety & Jailbreak Reduction

ARTIFICIAL INTELLIGENCE | Anthropic | 12_2022 | Paper

Important Findings

Anthropic claims new AI security method to block 95% of jailbreaks. It released a system called “constitutional classifiers” that it says filters the “overwhelming majority” of jailbreak attempts against its top model, Claude 3.5 Sonnet. It does this while minimizing over-refusals (rejection of prompts that are actually benign) and and doesn’t require large compute.

Why Relevant to You?

Since the Investment management industry works with highly sensitive data, AI safety is paramount to keep bad actors away. The paper is not most recent but shows the relevance for investment professionals to ensure their agents to rely on foundation models that are not hacked and/or influenced beyond the limitations those models come anyway. Imagine the biases triggered in case you are fed with intentionally wrong or misleading information. It’s like the Wolf of Wall Street on steroids.