September 22, 2023


Managing Privacy And Cybersecurity Risks In An AI Business World


Jackie Shoback is cofounder and managing director of 1414 Ventures, an early-stage venture capital fund focused on digital identity.

Since the launch of ChatGPT last year, there has been no shortage of articles, white papers and podcasts about the proliferation of AI tools and their capabilities.

It is estimated that 35% of companies are using AI today, with another 42% exploring its future use and implementation. What’s exciting about generative AI is its impact on the global economy, its growth potential and the new paradigms it brings for business models. According to a recent McKinsey study of 63 use cases, generative AI could add $2.6 trillion to $4.4 trillion annually in global economic value.

Balancing Risk And Reward

However, while it is easy to think about the applications of this technology across a range of functions—from operations to software engineering—C-suite leaders and corporate directors need to balance the risk-reward tradeoff when embracing these new capabilities. In particular, I see consumer privacy and cybersecurity as two areas where balanced oversight and risk mitigation practices are especially needed.

I believe that maintaining consumer data privacy and governance standards should be key considerations as businesses embrace new AI tools and applications. Since AI models ingest large amounts of data as part of their training process, personally identifiable information (PII) can be an exposure risk if not properly monitored.

As models evolve and become more dynamic, there is also the risk of gathering PII through interactive engagement, or through inference that connects disparate data points. Whether PII is directly incorporated or indirectly inferred, privacy can be compromised by nefarious actors through breaches, hacks, theft and phishing—essentially the issues we see today, but evolved and inflated.
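To make the pre-ingestion exposure concrete, here is a minimal sketch of redacting common PII patterns from text before it enters a training corpus. The pattern set, placeholders and function name are illustrative assumptions on my part—a production system would use a dedicated PII-detection service covering far more identifier types—but the principle is the one described above: monitor and scrub PII before the model ever ingests it.

```python
import re

# Illustrative PII patterns (assumed; real systems cover many more types).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with a typed placeholder before ingestion."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Jane at jane.doe@example.com or 617-555-0142."
print(redact_pii(record))  # Contact Jane at [EMAIL] or [PHONE].
```

Typed placeholders (rather than deletion) preserve the sentence structure the model learns from while removing the identifying values themselves.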

Mitigating Privacy Risks

To mitigate these types of privacy risks, establishing an AI steering committee with key business, data science, analytics, information security and information technology stakeholders is critical. Using these committees, you can establish and/or review your practices, documentation and policies around areas such as model explainability, access to third-party data and data storage and deletion parameters.

Framework Based On Risk Appetite

Creating a framework for assessing how AI tools are being incorporated into your business processes, and where privacy vulnerabilities exist, is as important as the growth and profitability potential AI can deliver. I recommend conducting a baseline risk assessment and asking management to identify and prioritize risks based on your risk appetite and the reward trade-off.

Cybersecurity Practices And Protocols

Cybersecurity practices and protocols are another area to consider in the evaluation of new AI tools and platforms. Since AI models consume large amounts of data, they can attract bad actors who are looking for security vulnerabilities to enable confidential data theft or malware implementation.

Analogous to account takeover (ATO), where a hacker gains access to an online account with malicious intent, hackers seek to gain access to trained AI models to manipulate the system, execute unauthorized transactions or access PII. These are just a few examples. As the complexity of AI solutions increases, more vulnerability points across models, training environments, production environments and data sets will undoubtedly proliferate.


To help companies address risks associated with the design, development, deployment and use of AI systems, the National Institute of Standards and Technology (NIST) recently published its AI Risk Management Framework. I think leveraging this framework to assess cybersecurity and privacy policies and practices could make for a strong and appropriate starting point. In addition, the SEC recently adopted new rules on cybersecurity risk management and incident disclosure requirements, which I think could also be considered vis-à-vis any new AI initiatives.

Key Takeaways

Setting an AI strategy and execution plan requires careful consideration and quantification of both the opportunities for positive business impact and the accompanying risks. To appropriately manage privacy and cybersecurity concerns, there are a few key steps that executive management can take, and that corporate board directors can ask about, to help set up your enterprise for success.

1. Baseline Risk Management Assessment

A great first step is to assess any new AI tools and capabilities from a privacy and cybersecurity standpoint to see where there are gaps, particular vulnerabilities or oversights in management structures. The assessment outputs can give you a good sense of your risk management maturity, risk appetite and the appropriate tempo for AI adoption.
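One lightweight way to make such an assessment concrete is a likelihood-times-impact scoring pass over an inventory of AI-related risks, flagged against a risk-appetite threshold. The risk names below are drawn from this article, but the scales, scores and threshold are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int      # 1 (negligible) to 5 (severe)   -- assumed scale

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Illustrative inventory based on the risks discussed above.
risks = [
    Risk("PII exposure via training data", 4, 5),
    Risk("Model manipulation (ATO-style access)", 2, 5),
    Risk("Third-party data access gaps", 3, 3),
]

RISK_APPETITE = 10  # assumed threshold: scores above this need mitigation first

for r in sorted(risks, key=lambda r: r.score, reverse=True):
    flag = "MITIGATE" if r.score > RISK_APPETITE else "ACCEPT"
    print(f"{r.score:2d}  {flag:8s}  {r.name}")
```

Even a simple ranking like this gives management a shared, defensible basis for sequencing AI adoption against risk appetite.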

2. Build The Forums For Discussion

Creating alignment and awareness in your corporate culture regarding the opportunities and risks that accompany new AI initiatives and capabilities can also help. Appropriate forums, like working groups and steering committees, create linkages between functions and business units so that end-to-end processes and policies can be created successfully.

3. Realistic Resource Requirements

I believe that adequately fleshing out the opportunities, risks and overall requirements can help you assemble a complete picture of the resources needed. Much like digital transformation programs, embracing new AI technologies relies on having the appropriate mix of resources to support its planning, launch, implementation and maintenance.

Although many organizations have existing processes for areas like cybersecurity, risk management and data governance today, overlaying new AI tools and capabilities on existing structures without examining end-to-end processes and where they have weaknesses can result in future problems.

Because of this, I believe it’s critical to map an AI enterprise risk management (ERM) structure that incorporates how these new generative AI models and tools work. Being able to explain the models, map how the data interacts and pinpoint where the vulnerability points lie are practical ways to manage privacy and cybersecurity risk as businesses adopt the newest AI capabilities.

Forbes Business Council is the foremost growth and networking organization for business owners and leaders.

