A solution that leverages an LLM and is tailored to the needs of our analysts and portfolio managers has multiple uses, which we classify as the three C’s: consumption, characterization, and creation.
Consumption: This involves how data and insights are retrieved for analysis. Consumption offers the biggest potential productivity gains in the near to medium term. An investment analyst might leverage an LLM to help learn more about a potential investment. The LLM facilitates this by rapidly analysing and summarizing an aggregate set of information sources.
The analyst will then be able to conduct a back-and-forth conversation with the LLM to refine the request. This would enable an analyst to spend more time focused on evaluating the differentiating factors relating to individual companies that might make good long-term investment prospects—through fundamental analysis, factor analysis, or insights from management interviews.
Characterization: This refers to the ability of AI to analyse unstructured data (such as text or images) to uncover complex but useful patterns that might otherwise be hard to identify. For example, academics in data science have analysed the language used in 10-K reports over many years. They’ve discovered a correlation between subtle shifts in the prevalence of negative or positive words in those reports and subsequent stock returns. In a similar vein, we see huge potential in AI’s ability to review, in seconds, how sentiment on a stock has changed over time and to compare that with multiple data sources.
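The dictionary-based approach behind that academic work can be sketched in a few lines. This is a minimal illustration only: the word lists below are tiny placeholders (real studies typically use finance-specific dictionaries containing thousands of terms), and the comparison of two filings is hypothetical.

```python
import re

# Tiny illustrative word lists; actual research uses far larger,
# finance-specific dictionaries of negative and positive terms.
NEGATIVE = {"loss", "litigation", "impairment", "decline", "adverse"}
POSITIVE = {"growth", "improvement", "strong", "gain", "favorable"}

def net_tone(text: str) -> float:
    """Return (positive - negative) word share of a filing's text."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

# Comparing tone across two filing years for a hypothetical company:
tone_year1 = net_tone("Strong growth and favorable margins drove the gain.")
tone_year2 = net_tone("Litigation costs and an impairment led to a decline.")
print(tone_year1 > tone_year2)  # the later filing skews more negative
```

A year-over-year drop in this score is the kind of subtle language shift the research links to subsequent returns; an LLM-based tool generalizes the idea beyond fixed word lists.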
Creation: This refers to the way an LLM might also be used to draft content, including insights, investment updates, meeting notes, and other written materials. Automating aspects of content creation that were previously manual means that analysts can focus on more value-added analysis and decision-making.
Human decision-making enhanced, not replaced
While AI-powered tools have significant potential to automate tasks and magnify the insights of our portfolio managers and analysts, we are also cognizant of the potential risks and the need for people to monitor and manage them.
One key risk is bias. AI accesses vast amounts of information but cannot determine the reliability of that information. If the data used by an AI-powered tool are biased, the algorithms created using that data will also be biased. Even the way a question is posed to an AI tool, known as a “prompt,” can introduce behavioural bias. For example, a negatively formulated prompt—such as “find holes in my thesis”—increases the risk of a negatively biased response, which may not be supported by the facts.
Another risk concerns transparency. AI models can be complex and opaque, making it difficult to trace the basis of a response. This will clearly be a focus of regulatory scrutiny as capabilities evolve. We are also cognizant of privacy and security risks, as large volumes of data are consumed in training and using AI models.
Such risks warrant caution in the adoption of AI and the application of its outputs while our teams work to unlock its potential. Ultimately, we believe that investment processes augmented by AI will require human oversight and governance for successful active management.