Are Alternative Investment Firms Becoming Over-Reliant on AI To Solve Their Cyber Challenges?

Jun 17, 2023

This article was written by Paul Ponzeka, CTO at Abacus Group, and originally appeared in Forbes.

Following in the footsteps of other sectors, alternative investment firms are embracing artificial intelligence (AI) and machine learning (ML) to strengthen their security posture. In fact, the global market for AI-based cybersecurity products is estimated to reach $133.8 billion by 2030—and it’s easy to see why emerging digital technologies hold such promise for hedge funds and private equity funds. AI has no downtime, and it can operate 24/7 in our distributed, always-on working environments without interruptions or breaks.

With cyberattacks against firms more complex and frequent than ever, deploying the AI army can relieve the pressure on thinly stretched IT teams to cut through the noise of daily alerts and connect the reconnaissance dots. But automating threat detection and response can also cause complacency, leaving businesses unaware of new and emerging cyber risks. After all, technology can only do and predict so much. Hackers and social engineers are skilled at exploiting human psychology, and lines of code will rarely stop them from taking advantage when people make a mistake.

By treating AI as the solution, investment firms risk over-relying on technologies to solve all their cybersecurity problems. This is a dangerous game that can lead to less-than-desirable outcomes—and often plays right into the hands of malicious actors. Therefore, organizations must strengthen their first line of defense by taking a more holistic approach to threat prevention.

AI Won’t Protect You Against Hackers

AI algorithms fail to understand the context of cyber threats. They cannot take into account human behavior or other external factors that may increase the likelihood of an attack. For example, social engineers thrive during times of uncertainty, including current inflationary pressures, and constantly seek new opportunities to incite strong feelings in their targets and coerce them into disclosing vital information. AI security tools may be able to spot a phishing email before an employee clicks on the link, but they will not see the "bigger picture" of human vulnerabilities and fears (e.g., financial worries) that can help to prevent such an attack from happening again.

Hackers are taking advantage of AI, too. The technology can be used to identify patterns in software that reveal weaknesses, enabling cybercriminals to exploit these security holes. Furthermore, tools like ChatGPT are now being used to create AI-generated phishing emails, which are opened at much higher rates than manually crafted ones. Criminals also leverage AI to design polymorphic malware that constantly changes its identifiable features to evade detection by automated defensive tools.

For alternative investment firms, these security risks are particularly acute because of their reliance on large networks of third parties. AI alone will not help with due diligence and risk management, as businesses often lack visibility and control over the cybersecurity practices of their vendors, making it difficult to mitigate vulnerabilities. A single weakness can easily affect the wider ecosystem of suppliers, vendors, partners and customers.

Now, with the SEC expected to finalize new security requirements for registered funds, there is no better time for organizations to take a proactive stance and strengthen their cyber resilience in the face of rising threats. AI should be only one of the tools supporting robust penetration testing and other vulnerability assessments.

Empowering The First Line Of Defense

For many firms, mounting a more effective cyber defense means investing in better technological tools. There are clear merits to this approach, but it must not happen at the expense of their first line of defense—their people.

Even as technology evolves to help security teams hold their own against sophisticated hackers, employees will remain both the strongest and the weakest link. Nothing acts as a better preventative measure than an empowered, well-informed and compliant workforce—and everyone from the C-suite down must play a proactive role in protecting their organization.

The current limitations of AI highlight the need for multilayered cybersecurity awareness and regular training exercises so that every employee can identify and respond appropriately to potential threats. End-user education must be relevant, ongoing and varied as cyber risks evolve, particularly as new endpoints form—and with them a multitude of vulnerabilities.

Social engineering testing can also reveal crucial vulnerabilities and deficiencies in employee training, cybersecurity policies and procedures, and technical controls such as email spam filters or multifactor authentication. This is especially the case as hybrid and remote working models have created extra entry points for malicious actors to exploit. Social engineering testing is one of the most effective ways to identify the many avenues a hacker could use to capitalize on human error or hold the organization to ransom.

People First, Technology Second

Even in today’s rapidly evolving digital landscape, the most complex technology cannot eliminate human error or malicious intent. By getting the human factor right, alternative investment firms and the wider financial services industry can build a truly cyber-resilient culture.

Is there a place for AI in cybersecurity? Of course—but businesses need to strike the right balance between technology and people to increase the chances of protecting their critical data and systems from every threat. Educating and training employees will help to shape them into their organization’s strongest defense, with technological tools in place to support this more holistic, proactive approach to cybersecurity.


Learn more about how your firm can benefit from our comprehensive IT and cybersecurity services.

Contact Us