Balancing the Risks and Rewards of Large Language Models like ChatGPT

Sep 6, 2023

This article was written by Christian Scott, COO & CISO at Gotham Security.

Artificial intelligence (AI) is nothing new in the alternative investment space, but the meteoric rise of OpenAI’s ChatGPT and other large language models (LLMs) has sparked a contentious industry debate. With the ability to generate huge volumes of human-like text, images, and video in rapid response to natural language prompts, these tools could revolutionize how investment firms operate.

However, innovation continues to outpace security and regulatory measures, raising concerns about the data privacy, intellectual property (IP), and compliance risks posed by LLMs, particularly when mishandled by employees. Recent leaks of ChatGPT chat logs, which exposed users’ conversation histories and payment data, have left many fund managers looking nervously over their shoulders. Now, developer OpenAI is under investigation by the FTC over whether it violated consumer protection laws. 

With ChatGPT gaining over 100 million active users within just two months of its launch, investment firms must confront the reality that some of their employees will already be using generative AI. But without specific regulatory guidance, how can the sector securely navigate the opportunities and risks?  

What’s the big deal?

ChatGPT has already defied conventional assumptions about AI’s power and potential across industries, from banking to healthcare to law. Recently, a fictional ChatGPT investment fund even outperformed the UK’s ten most popular funds, capturing the imagination – and fear – of alternative investment managers everywhere. 

LLMs offer firms more than just cost and time savings. In investment research, they can excel at uncovering hard-to-find sources and data, empowering funds to make more informed decisions. They also have the potential to strengthen firms’ compliance and risk management functions, for example by helping to identify and mitigate the risks associated with material non-public information (MNPI).

ChatGPT represents just the tip of the AI iceberg. With more than 1,000 new AI tools released every week, LLMs are also starting to integrate third-party plug-ins, extending what users can do beyond the base model’s built-in constraints. The potential benefits for firms will continue to grow – but so will the accompanying risks.

Assessing the risks 

So, should ChatGPT be embraced within investment processes, or should firms steer clear of it entirely? Let’s start by evaluating the risks.

Privacy is a major concern. Thoughtlessly incorporating ChatGPT into investment processes may lead employees to enter sensitive or proprietary information, which can then become part of the model’s training data and, ultimately, be made public or exposed to hackers.

Intellectual property is also at risk, as many users are unaware of the option to disable chat logging for model training. Any IP leaks could seriously harm the investment thesis, the organization itself, and the value it brings to clients. 

What’s more, ‘confidently incorrect’ AI can result in firms presenting bad advice or inaccurate data summaries to stakeholders, leading to flawed decision-making and potential regulatory and reputational damage. 

ChatGPT is also remarkably easy to trick, with hackers continually finding ways to outsmart its safety measures. Just a few cleverly crafted prompts can lead to the creation of sophisticated hidden malware and targeted phishing campaigns.

Finding the right balance

Despite these risks, many organizations will be naturally drawn to the potential benefits of LLMs. After all, agile and resilient investment firms are known for embracing challenges rather than avoiding them.  

The key to striking the right balance between risk and reward is surprisingly simple: firms just need to follow the cybersecurity oversight and risk management processes that should already be in place. This includes implementing robust written policies and procedures around the use of LLMs, and updating existing Acceptable Use Policies (AUPs) to specify when and how employees can use these tools on company devices and for business purposes. Gotham Security, an Abacus Group Company, offers a Sample Company Policy for LLM AI that can serve as a starting point for governing the secure, lawful, and ethical use of LLMs.

Before integrating LLMs into critical processes, especially for investments, firms must conduct thorough due diligence and compliance reviews. This involves establishing a clear process for validating the accuracy of data generated by ChatGPT. Remember, regulators will expect strong documentation justifying decisions based on the tool’s inputs and outputs. Additionally, due diligence should extend to third-party risks, prompting regular assessments of partners and vendors to evaluate potential risks linked to their ChatGPT usage. 
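
By way of illustration only, here is a minimal sketch of what that documentation trail might look like in practice: a hypothetical Python wrapper that records each prompt, response, and human review decision to an append-only log. The function name, record fields, and log file used here are assumptions for demonstration, not part of any specific product or regulatory standard.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "llm_audit_log.jsonl"  # hypothetical append-only log file


def log_llm_interaction(prompt: str, response: str, reviewer: str, approved: bool) -> None:
    """Record one prompt/response pair plus the human review decision.

    The content hash makes later tampering easier to detect, and the UTC
    timestamp supports the kind of audit trail regulators may expect.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "reviewer": reviewer,
        "approved": approved,
        "sha256": hashlib.sha256((prompt + response).encode("utf-8")).hexdigest(),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: an analyst signs off on (or rejects) a model-generated summary.
log_llm_interaction(
    prompt="Summarize Q2 earnings trends in the semiconductor sector.",
    response="(model output here)",
    reviewer="analyst@example.com",
    approved=True,
)
```

Whatever form the log takes, the goal is the same: every decision influenced by the tool should be traceable back to a reviewed prompt and output.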

All procedures should have appropriate controls, including measures preventing staff from cutting and pasting sensitive text into ChatGPT. While it’s only human to seek shortcuts, doing so can expose the firm to data leaks and other vulnerabilities. Firms must also continuously educate their employees on the responsible use of these tools, emphasizing the need to supplement ChatGPT with traditional research and data points in investment processes.
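
To make that control concrete, below is a minimal, hypothetical sketch of a pre-submission check that blocks text matching sensitive-data patterns before it can reach an external LLM. The patterns and example strings are illustrative assumptions; a production control would rely on a dedicated data loss prevention (DLP) tool tuned to the firm’s own identifiers.

```python
import re

# Illustrative patterns only; a real control would use a dedicated DLP tool
# tuned to the firm's own identifiers (client codes, deal names, MNPI markers).
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # US SSN-like numbers
    re.compile(r"\b\d{8,17}\b"),                         # account-number-like digit runs
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),             # email addresses
    re.compile(r"\b(confidential|internal only|mnpi)\b", re.IGNORECASE),
]


def is_safe_to_submit(text: str) -> bool:
    """Return False if the text matches any sensitive-data pattern."""
    return not any(pattern.search(text) for pattern in SENSITIVE_PATTERNS)


user_input = "Summarize this memo: CONFIDENTIAL - projected Q3 fund returns..."
if is_safe_to_submit(user_input):
    print("OK to send to the LLM.")
else:
    print("Blocked: remove sensitive content before submitting.")
```

In practice, a check like this would live in a browser extension or web proxy rather than a standalone script, but the principle is the same: inspect outbound text before it leaves the firm’s environment.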

While ChatGPT and other LLMs present investment firms with unique opportunities to enhance efficiency and stay ahead of the innovation curve, they also create unique challenges. Firms with a well-established, multi-layered approach to cybersecurity and risk management will be best positioned to navigate those challenges and unlock the benefits of the exciting technological advances we’re witnessing today.


Learn more about how your firm can benefit from our comprehensive IT and cybersecurity services.

Contact Us