AINL#019 Augmented Intelligence in Investment Management Newsletter

Welcome to the 019 Edition of the Newsletter on Augmented Intelligence in Investment Management (AINL). Every two weeks, we deliver five unique insights tailored to empower investment decision-makers. Our insights are carefully curated by a seasoned team of market specialists. Unbiased, actionable and practical. They will help you navigate through the noise.
AINL#019 SYNTHESIS
What do these recent developments mean for investment decision-makers?
1. Focus on the Differentiated Layers of the AI Value Chain
Linked to Article 1 > The rapid commoditisation of AI infrastructure—evident in the sharp imbalance between cloud infrastructure investment (USD 57 bn in 2024) and LLM API market size (USD 5.6 bn)—signals a shift in where economic rents will accrue. Professional investors should recalibrate exposure toward layers with defensible moats: proprietary models, unique datasets, and specialised agentic applications. The emergence of capable, lower-cost small language models (SLMs) reinforces this transition, potentially eroding margins in generic model hosting while amplifying returns in domain-specific AI integrations.
2. Treat AI as Enabler for Structured Processes, Not a Standalone Decision-Maker
Linked to Articles 2, 3 and 4 > Across compliance (Article 2) and workplace productivity studies (Article 4), evidence shows AI excels when embedded into structured, process-driven workflows—especially when retrieval-augmented generation (RAG) enhances domain relevance. In investment management, this translates into deploying AI to accelerate research synthesis, enhance reporting clarity, and augment legal/compliance functions, while maintaining human oversight for forecasting, allocation, and strategic decision-making. Article 3’s warning about AI blurring authorship and rigour in scientific research also applies to investment research: governance frameworks should ensure that AI-assisted outputs remain auditable, attributable, and aligned with fiduciary standards.
3. Build Workforce Resilience Through Skills Mapping and Targeted AI Integration
Linked to Articles 4 and 5 > Shifts in talent demand are driven by both AI adoption and broader macroeconomic headwinds. For investors in financial institutions, this underscores the need to differentiate between genuine automation risk and cyclical or structural employment trends.
TOP 5 ARTICLES
ARTICLE ONE
Are Small Language Models the Future of Agentic AI and an LLM Business Killer?
ARTIFICIAL INTELLIGENCE | NVIDIA | 6_2025 | Report
Important Development
Plausible reasoning. In a recent paper, Nvidia lays out the position that small language models (SLMs) are sufficiently powerful, inherently more suitable, and necessarily more economical for many invocations in agentic systems, and therefore might be the future of agentic AI.
Why Relevant to You?
This is yet another confirmation of how swiftly AI infrastructure is commoditising, putting intense pressure on the lower layers of the stack to monetise their investments, so far with limited success.
Consider this disparity: While the market for LLM API services, the layer enabling agentic applications, was valued at USD 5.6 billion in 2024, investment into cloud infrastructure to host these models soared to USD 57 billion in the same year. This sharp imbalance signals a growing disconnect between capital deployment and value capture across the AI stack. As infrastructure becomes a commodity, the returns shift toward differentiated layers—those with proprietary models, data, or applications.
ARTICLE TWO
Using Large Language Models for Legal Decision Making
HUMAN & ARTIFICIAL INTELLIGENCE | Luketina, Benkel and Schütz | 2025 | Paper
Important Findings
The paper provides an experimental evaluation of the capability of LLMs to assist in legal decision-making within the framework of Austrian and European Union value-added tax law. The authors use fine-tuning and retrieval-augmented generation (RAG) to enhance LLM performance. The experiments are conducted both for textbook cases and for real-world cases. The results indicate the RAG approach’s effectiveness in enhancing LLMs’ ability to provide accurate justifications in real-world VAT cases.
Why Relevant to You?
Legal decision-making is a core responsibility of public administration (SupTech) and of firm-internal compliance work (RegTech). The process-like nature of this work makes it a natural candidate for AI augmentation in pursuit of higher efficiency. In this regard, the paper adds to the literature finding that RAG is a promising technical approach to this goal. From a management perspective, this suggests that external consultants or internal experts should at least have good reasons for not choosing a RAG approach when designing internal knowledge systems that employ AI.
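To make the pattern concrete, the sketch below shows the two steps at the heart of RAG: retrieve the most relevant passages from a knowledge base, then ground the model's prompt in them. It is a deliberately simplified illustration, not the paper's implementation; the keyword-overlap retriever and the sample VAT snippets are invented stand-ins, where a production system would use vector embeddings and an actual LLM call.

```python
# Minimal RAG sketch: keyword-overlap retrieval plus grounded prompt assembly.
# Documents and query are illustrative placeholders, not real legal guidance.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q = tokenize(query)
    return sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the answer in retrieved context rather than parametric memory."""
    joined = "\n".join(f"- {c}" for c in context)
    return (f"Answer using only the context below.\n"
            f"Context:\n{joined}\n"
            f"Question: {query}")

documents = [
    "VAT on intra-EU supplies of goods is generally zero-rated for the seller.",
    "The standard Austrian VAT rate is 20 percent.",
    "Corporate income tax is unrelated to value-added tax.",
]
query = "What is the standard Austrian VAT rate?"
context = retrieve(query, documents)
prompt = build_prompt(query, context)
print(prompt)
```

The design point the paper tests is exactly this grounding step: by constraining the model to retrieved, domain-specific passages, RAG improves the accuracy and auditability of the justifications the model produces.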
ARTICLE THREE
Academia Needs to Deal with LLM-Assisted Writing
HUMAN & ARTIFICIAL INTELLIGENCE | Science | July 2025 | Paper
Important Findings
Since the rise of LLMs, academic writing has shifted measurably. By one account, the word “delves” appeared 2,700% more often in 2024 than its historical average, and the analysis suggests that 13.5% of 2024 abstracts were processed with LLMs. The authors downloaded all PubMed abstracts through the end of 2024 and analysed the 15.1 million English-language abstracts from 2010 onward, identifying a sharp rise in signal words that indicates LLM use.
Why Relevant to You?
Given that English is the dominant language of science, and the majority of academic writers are non-native speakers, the growing use of large language models (LLMs) is not inherently a negative development. On the contrary—it underscores the need to proactively engage with the role of LLMs in scientific discovery and publication, not only at the level of universities, but also among journal editors and peer reviewers.
There is a fine line between enhancing the clarity of scientific writing and delegating the research and reporting process to machines. In some cases, we may even witness expectations that LLMs conduct parts of the research.
A brave new world is coming, one in which the frameworks we use to judge rigour, authorship, and scientific integrity must be revisited and redefined. As AI becomes more embedded in the research process, we will need to rethink how we distinguish human insight from machine output, and how we evaluate what constitutes good or bad science.
ARTICLE FOUR
AI in the Boardroom: Why Your Co-Pilot Still Needs a Human Pilot
HUMAN & ARTIFICIAL INTELLIGENCE | Microsoft | July 2025 | Paper
Important Findings
A recent study from Microsoft Research offers a new framework for understanding generative AI’s role in the workplace by distinguishing between AI user goals (what people aim to accomplish) and AI actions (what the AI actually performs). Microsoft’s analysis of 200,000 real-world AI interactions shows that AI is augmenting tasks like client communication, report writing, and summarizing financial information, but continues to struggle with data-heavy tasks like forecasting or complex modeling. A separate, earlier study by Anthropic researchers on over 4 million Claude interactions confirmed this divide: AI performs well on writing and structured analysis but falls short on financial forecasting, budgeting, and investment reasoning.
Why Relevant to You?
Both studies agree that AI is most effective as a co-pilot augmenting human expertise rather than replacing it outright, especially in finance roles that blend technical insight with communication. However, both are based on user interactions that predate the most recent advanced reasoning models, whose performance on quantitative reasoning has improved.
ARTICLE FIVE
Is AI Killing Graduate Jobs?
IMPLEMENTING GEN AI SPRINTS | FT | 7_2025 | Article
Important Findings
The graduate job market is weakening, with AI often cited as a threat, but evidence shows multiple drivers. Since ChatGPT’s 2022 launch, UK graduate job postings have fallen sharply, especially in finance, tech, and accounting, yet declines also hit less AI-exposed sectors. Economic uncertainty, post-Covid corrections, offshoring, and sector-specific downturns are significant factors. While some graduate roles face automation risks, others remain stable or growing. Experts caution against overstating AI’s role, noting that “AI hype” may distort employer behaviour.
Why Relevant to You?
Workforce planning should distinguish real AI-driven disruption from broader economic and structural factors. Leaders should avoid overreacting to “AI hype” and instead assess sector-specific risks, talent needs, and macroeconomic pressures. AI literacy and integration strategies can help younger employees enhance productivity. Top management should embed scenario planning and continuous skills mapping into strategic reviews to anticipate shifts in talent demand and align workforce capabilities with evolving business models.
