AINL#020 Augmented Intelligence in Investment Management Newsletter

Welcome to the 020 Edition of the Newsletter on Augmented Intelligence in Investment Management.

We provide unique insights for investment decision-makers. Our insights are carefully curated by a seasoned team of market specialists. Unbiased, actionable and practical. They will help you navigate through the noise.

 


AINL#020 SYNTHESIS


1. Augmented Alpha Requires Human Conviction as Scarcity Premium

Multi-agent systems (BlackRock AlphaAgents) and retrieval-augmented frameworks (Wang et al.) show promise in signal generation, scenario analysis, and document consistency checks. Yet evidence from real-world workflows (Tomlinson et al.) confirms that AI is strongest in repetitive data tasks, not in conviction-driven allocation. For investment firms, the emergent truth is clear: AI expands breadth of inputs but not depth of conviction. The scarcity premium shifts to portfolio managers’ judgment under uncertainty—where alpha is increasingly derived.

2. Capital Efficiency Unlocks Democratization of AI in Asset Management

NVIDIA’s PostNAS efficiency gains (53x faster, ~98% cheaper inference) collapse the cost of capital for model deployment, shifting AI from capex-heavy experimentation to opex-light scalability. Combined with modular architectures (multi-agent, RAG), this enables smaller firms and mid-tier asset managers to access decision-intelligence once confined to scale players. The emergent implication: competitive moats migrate from infrastructure budgets to proprietary workflows and governance frameworks, making process design the differentiator rather than raw compute spend.
Sources: NVIDIA (2025); BlackRock (2025); Wang et al. (2025).

3. Governance of AI Becomes an Investment Risk Factor

Biancotti et al.’s findings on LLM misalignment risks (e.g., mimicking fraudulent FTX-style behaviors) highlight the operational and reputational downside of deploying untested models in financial decision chains. When combined with AI’s known blind spots (hallucinations, ethical trade-offs, bias propagation), investors face a new governance premium: ensuring alignment, auditability, and compliance is now as central as risk budgeting or factor exposure. Early adopters who hard-wire AI ethics and controls into their investment process will convert this into a trust premium with regulators and clients.
Sources: Biancotti et al. (2025); BlackRock (2025); Tomlinson et al. (2025).


TOP 5 ARTICLES


 

ARTICLE ONE

AlphaAgents by BlackRock. First Results Are In.

ARTIFICIAL INTELLIGENCE | BlackRock | August 2025 | Paper

Important Development

Multi-agent collaboration has emerged as a promising approach, enabling multiple AI agents to work together to solve tasks. This study by BlackRock researchers investigates the application of role-based multi-agent systems to support stock selection in equity research and portfolio management through a Fundamental Agent, a Sentiment Agent and a Valuation Agent. The system runs on Microsoft’s AutoGen framework using GPT-4o.
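The role-based division of labour described above can be sketched in a highly simplified form as three scoring functions plus a coordinator. This is an illustration only, not BlackRock's implementation: the agent names mirror the paper, but the scores, weights and aggregation rule are invented (the actual system runs LLM agents on AutoGen).

```python
# Toy sketch of a role-based multi-agent setup (illustrative only;
# scores and weights are invented, not from the BlackRock paper).

def fundamental_agent(stock):
    # Hypothetical: a view derived from filings-based metrics.
    return stock["fundamental_score"]

def sentiment_agent(stock):
    # Hypothetical: a view derived from news and transcript sentiment.
    return stock["sentiment_score"]

def valuation_agent(stock):
    # Hypothetical: a view derived from relative-valuation multiples.
    return stock["valuation_score"]

def coordinate(stock, weights=(0.4, 0.3, 0.3)):
    """Blend the three agents' views into one stock-selection signal."""
    views = (fundamental_agent(stock),
             sentiment_agent(stock),
             valuation_agent(stock))
    return sum(w * v for w, v in zip(weights, views))

stock = {"fundamental_score": 0.8, "sentiment_score": 0.2, "valuation_score": 0.5}
signal = coordinate(stock)  # 0.4*0.8 + 0.3*0.2 + 0.3*0.5 = 0.53
```

The design choice the paper explores is exactly this separation of concerns: each agent specializes in one lens on the stock, and disagreement between agents becomes information for the coordinator.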

Findings. The system is good enough for repetitive, standardized assessment tasks. The risks of hallucination, insufficient cognitive-bias mitigation, and limited domain focus remain, as they stem from the current generation of underlying LLMs.

Why Relevant to You?

While not yet a full portfolio optimization engine, AlphaAgents represents a foundational step toward agentic investment systems. Although currently focused on stock selection, it can serve as a modular input to standard models such as Mean-Variance Optimization or Black-Litterman, by supplying agent-driven signals for return estimation and scenario analysis. Limitations remain in more advanced techniques, where uncertainty drives complexity.
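To make the "modular input" idea concrete, here is a minimal sketch of agent-derived return estimates feeding a classic mean-variance step. All numbers are invented for illustration; this is the textbook unconstrained tangency calculation, not anything from the paper.

```python
import numpy as np

# Agent-supplied expected returns for three hypothetical stocks
# (invented numbers, standing in for signals from the agent layer).
mu = np.array([0.06, 0.04, 0.05])

# Assumed return covariance matrix (also invented).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.03, 0.01],
                [0.00, 0.01, 0.05]])

# Classic mean-variance step: unnormalized weights solve cov @ w = mu,
# then rescale to a fully-invested portfolio.
raw = np.linalg.solve(cov, mu)
w = raw / raw.sum()
```

The point is the interface: the optimizer is unchanged, and the agents only replace the source of the `mu` vector (and potentially scenario inputs), which is what makes the system modular.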



 

ARTICLE TWO

Working with AI: Measuring the Occupational Implications of Generative AI

HUMAN & ARTIFICIAL INTELLIGENCE | Tomlinson et al. | 2025 | Paper

Important Findings

This paper marks an important development because it moves beyond theoretical predictions of AI’s labor impact by analyzing 200,000 real-world conversations between users and Microsoft Copilot. Unlike earlier work, it distinguishes between user goals (tasks people seek help with) and AI actions (tasks the AI performs), offering a clearer picture of augmentation versus automation. It introduces a novel AI applicability score to measure occupational impact, incorporating task success and scope.

Why Relevant to You?

The study shows that AI’s strongest impact lies in information gathering, writing, and advising, which map directly onto several investment roles. For analysts, this means routine tasks like data collection and drafting reports will be streamlined, freeing time for judgment and synthesis. Portfolio managers gain faster access to insights, with performance relying more on strategic decision-making than raw analysis. Overall, the study suggests AI will augment rather than replace jobs, but will reshape the skill mix and competitive edge of investment firms.



 

ARTICLE THREE

ComoRAG: A Cognitive-Inspired Memory-Organized RAG for Stateful Long Narrative Reasoning

ARTIFICIAL INTELLIGENCE | Wang et al. | July 2025 | Paper

Important Findings

The paper proposes a retrieval-augmented generation (RAG) framework directly inspired by the functional mechanism of the human prefrontal cortex. The system combines a well-prepared knowledge source with a multi-step retrieval strategy for exploratory probing. An additional control loop monitors the consistency and completeness of the retrieved information. In experiments, the framework was capable of handling non-superficial narratives that span larger text bodies, such as the volumes of a novel series.
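A generic retrieve-and-verify loop of this kind might look as follows. This is a toy sketch of the pattern, not ComoRAG's actual algorithm: the retriever, the consistency check and the probe reformulation are all invented stand-ins.

```python
# Toy sketch of iterative retrieval with a control loop
# (illustrative pattern only, not ComoRAG's components).

def retrieve(query, memory):
    # Hypothetical retriever: keyword match against stored passages.
    return [p for p in memory if query.lower() in p.lower()]

def is_consistent(evidence):
    # Hypothetical control check: demand at least two supporting passages.
    return len(evidence) >= 2

def stateful_retrieval(query, memory, max_steps=3):
    """Probe the knowledge source repeatedly until the evidence
    passes the consistency check or the step budget is exhausted."""
    evidence = []
    for _ in range(max_steps):
        evidence.extend(retrieve(query, memory))
        if is_consistent(evidence):
            break
        query = query.split()[0]  # crude probe reformulation
    return evidence

memory = ["Alice met Bob in volume one.", "Alice later recalls meeting Bob."]
found = stateful_retrieval("Alice", memory)
```

The relevant idea is the loop itself: retrieval is no longer a single shot but a stateful process whose stopping rule depends on whether the accumulated evidence is consistent and complete.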

Why Relevant to You?

ComoRAG’s ability to handle longer narratives has implications for various AI use cases, ranging from better agentic systems to better text summarisation. From a legal compliance perspective, the approach’s ability to identify contradictions, omissions and irregularities could make it a powerful tool during the preparation and analysis of documents such as financial statements, market reports or even key information documents for retail investors.

 


 

ARTICLE FOUR

Chat Bankman-Fried: An Exploration of LLM Alignment in Finance

HUMAN & ARTIFICIAL INTELLIGENCE | Biancotti et al. | 2025 | Paper

Important Findings

The paper proposes a simulation study to assess the likelihood that recent LLMs may deviate from ethical and lawful financial behaviour in favour of financial gains. Using the setting of the collapse of the cryptoasset exchange FTX in 2022, the authors prompt the models to impersonate the CEO of a financial institution and test whether they would misappropriate customer assets to cover internal losses. The results suggest that only three out of twelve models have a low or medium propensity to misalign, potentially due to the models’ tendency to frame alignment with social norms as another risk factor to be weighed against the potential gains from fraudulent activity.

Why Relevant to You?

Understanding how undesirable AI behaviour may arise, and how to prevent it, is of paramount importance. The paper provides a foundation for testing the alignment of LLMs in the financial sector. Moreover, it can assist financial authorities and institutions in better understanding and measuring the risks associated with the adoption of these models.

 


 

ARTICLE FIVE

Jet-Nemotron. PostNAS. NVIDIA With Progress on Inference Efficiency

IMPLEMENTING GEN AI SPRINTS | NVIDIA | August 2025 | Article

Important Findings

Imagine slashing your AI inference budget by 98%. That is the result of a test NVIDIA performed using its novel PostNAS approach for retrofitting pre-trained models. The outcome is an intertwined hardware and software architecture with output quality comparable to Qwen3, Qwen2.5, Gemma3, and Llama3.2.

Why Relevant to You?

PostNAS could offer a new, capital-efficient paradigm. If further independent tests confirm these results, a 53x speedup translates into a ~98% cost reduction for inference at scale. This would fundamentally change the ROI calculation for deploying high-performance AI. Instead of spending millions on pre-training, firms can innovate on architecture by modifying existing models, dramatically lowering the barrier to entry for creating novel, efficient language models.
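The arithmetic linking the two headline numbers is straightforward: at fixed hardware cost, per-query cost scales inversely with throughput, so a 53x speedup implies roughly a 98% cost reduction.

```python
# Back-of-the-envelope: if the same workload runs 53x faster on the
# same hardware, per-query cost scales by 1/53.
speedup = 53
cost_reduction = 1 - 1 / speedup
print(f"{cost_reduction:.1%}")  # ~98.1%
```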