Navigating investor expectations in the era of AI

Understanding investors’ expectations is vital to good dialogue and engagement. This is particularly true in emerging areas such as artificial intelligence (AI), which has become a new focal point for investors.

Companies such as Apple, Alphabet and Amazon are under increasing scrutiny over their AI practices as investors become more aware of both the risks and opportunities associated with AI systems. From perpetuating racial inequalities to disrupting democratic processes and even contributing to high carbon emissions, the pitfalls are significant. While the materiality of these risks varies depending on the nature of AI deployment (production, processes, data collection and usage), navigating ethical issues is paramount: a lack of oversight can lead to reputational damage and hinder value creation.

“AI is one of several structural changes under way, to which we must adapt on behalf of our clients – and future generations.” Sonja Laud, Chief Investment Officer at Legal & General Investment Management (LGIM). 

Responding to these challenges, investors are urging companies to adopt ethical standards and mechanisms surrounding the development and use of AI. This includes setting principles to guide AI usage, establishing robust governance frameworks and ensuring oversight across the AI value chain. For example, in 2021 the Facial Recognition Technology (FRT) initiative led by Candriam rallied over 50 global institutional investors to promote responsible AI practices and hold tech giants accountable. This shift from vague commitments to concrete actions and measures reflects a broader movement towards responsible technology development and underscores the importance of aligning corporate actions with ethical values.

“We believe responsible development and use of AI will be important for well-functioning markets and legitimate products and services – and has the potential to affect the financial return on our investments over time.” Nicolai Tangen, CEO of Norges Bank Investment Management (NBIM), Carine Smith Ihenacho, Chief Governance and Compliance Officer of NBIM, and the Corporate Governance team.

More recently, investors such as LGIM and NBIM have outlined baseline expectations for companies regarding AI governance, risk management and transparency. From naming board members responsible for AI oversight to conducting regular risk assessments and disclosing governance policies, companies are expected to demonstrate their commitment to ethical practices. Failure to meet these expectations may lead to escalated engagement efforts, such as the recent shareholder proposal at Apple to “disclose any ethical guidelines that the company has adopted regarding [its] use of AI technology”. This shareholder proposal follows a lack of commitment to transparency and disclosure around AI, despite engagement, and has even won the support of investor advisory firm Institutional Shareholder Services (ISS). Engagement over AI-related liabilities will likely vary between companies that make AI systems and those that use the technology, with the former facing greater scrutiny.

“Should companies fail to meet our expectations, we will escalate our engagement on behalf of our clients, on what may well prove to be a generation-defining issue.” John Hoeppner, Head of US Stewardship and Sustainable Investments. 

The rapid advancement of AI technology presents both opportunities and challenges for companies and investors alike. While AI holds the promise of innovation and efficiency, its ethical implications cannot be ignored. With investors expected to demand greater transparency and accountability from companies deploying or affected by AI, it is essential that companies take these emerging expectations into account during engagements and reporting.

To navigate the complexities of AI, boards and companies should be asking themselves:

  • How is AI integrated into the company strategy, and what purpose does it serve?
  • Has the company outlined its approach to responsible AI, including risk management and bias mitigation?
  • How has the impact of AI on the workforce been assessed and addressed?
  • How often does the board engage on AI matters, and does it understand the design, training and testing processes of AI systems?
  • Are AI processes documented, communicated and verified for transparency and accountability?
  • How is the company reporting back on its AI strategy and governance in a way that builds trust?

How Luminous helps companies engage with their investors

Luminous has developed a six-stage process to help companies understand investor behaviour, report against investors’ expectations and proactively manage shareholder relationships.

If you would like to hear more about our process, get in touch: stephen.butler@luminous.co.uk