Embracing AI Governance: Lessons Learned from The Data Boom

Recently I attended a panel discussion focused on the challenges executives face with Data and AI today. Among other discussion points, the panelists predicted the topics that will be front of mind for Chief Data and Artificial Intelligence Officers (CDAIOs) by 2030.

There's a lot to be excited about with AI—new technologies, emerging vendors, innovative models, and advanced techniques for training and applying these models. Personally, I'm particularly excited about the potential of AI Agents and how they can enable AI models to take action on their produced output.

Now I’ll be the buzzkill. Beyond all the fancy technical innovation, do you know what I predict we will definitely be talking about in 2030? AI Governance.

In industry tech discussions there’s a buzz around deploying AI, modernizing data platforms to support AI, and skilling up teams with AI fundamentals, but AI Governance isn’t quite getting the same level of excitement from practitioners. It’s understandable; governance isn’t typically the topic that lights up a room, especially since many companies are just starting their journey with AI.

However, this perspective is missing a crucial piece of foresight. The challenges we've encountered with managing data are only a sneak peek into the complexities we will face with managing AI.

We need to learn from the data challenges we face today, which stem from the explosive growth of data in years past. We are heading down the same road with AI, and proactive governance is key to navigating it successfully.

What is AI Governance?

Before diving in much further, I’ll share the definition of AI Governance from an approachable article on the subject by eWeek.

AI governance is the active management of best practices that includes policies, standardized processes, and data and infrastructure controls that contribute to a more ethical and controlled artificial intelligence ecosystem.

With effective and active AI governance, training data, algorithms, and model infrastructure can be more closely monitored and controlled throughout initial development, training and retraining, and deployment. AI governance contributes to a more efficient AI operation as well as compliance with relevant data privacy and AI ethics regulations.

— Shelby Hiter, eWeek

While AI has caused us to extend our data governance policies, AI governance is not merely an extension of data governance. Separate AI Governance policies and procedures are needed to address the unique complexities of AI. Companies need to employ a proactive and comprehensive approach to AI governance to ensure their use of AI is ethical, transparent, and effective.

Navigating AI's Tomorrow: Parallels and Lessons Learned from our Data Boom

Perhaps you're not entirely convinced about the need for separate AI Governance yet. You might be thinking it's unnecessary given that your current AI investment is minimal. Maybe you’re leveraging a few AI extensions in your software stack, and your data science or AI team is simply using a few out-of-the-box models. You might wonder why there's a need to govern something with such a small footprint.

Let's explore how we came to face the data management difficulties that are forcing us to invest in data governance and quality cleanup efforts today. How did it become so difficult to manage and trust our data? And how can we apply those lessons to avoid similar pitfalls with AI?

  • Parallel 1: Volume and Velocity: Just as we've seen data management challenges arise from an unprecedented increase in the amount and speed of data generation, AI systems will themselves multiply in both size and adoption.

  • Parallel 2: Variety: The diversity of data types has necessitated significant preprocessing and cleaning efforts. Similarly, AI systems come in all shapes and sizes, making it difficult to apply standardized measurement and management approaches to them.

  • Parallel 3: Quality and Accuracy: The adage "garbage in, garbage out" has plagued data for years, and we will inherit the same issue in AI, with the added complexity of hallucinated AI responses that often offer little to no ability to debug.

  • Parallel 4: Evolving Regulations: Just as data privacy laws have added layers of complexity to data management, the regulations around the use of AI are ramping up. As with data collection and use, we must ensure that we are able to prove adherence to evolving AI rules.

  • Parallel 5: Increased Demand: The pressure for real-time, accurate insights drove the data boom. The current inflated expectations of AI systems are driving the AI boom at an even faster pace.

  • Parallel 6: Technological Advancements: Advancements in technology lowered the barrier to collect, transmit, and store data, enabling us to meet the demands of the data boom. Despite some early struggles with GPUs, technology is again advancing quickly to increase efficiency and meet the demands of the AI boom.

The Time for AI Governance Is Now

The evolution of data quality and management issues should serve as a cautionary tale for AI adoption. Just as organizations eventually recognized the necessity of data governance, the need for AI governance is pressing—even before widespread AI implementation becomes a reality. By learning from the past and proactively establishing AI governance frameworks, organizations can navigate the complexities of AI more effectively, ensuring their AI initiatives are not only successful but also responsible and ethical. The time to prioritize AI governance is now, laying the groundwork for a future where AI can achieve its transformative potential safely and sustainably.

How to Implement AI Governance: A Thin Layer Governance Model

By now, I hope that I’ve conveyed why proactive AI Governance is crucial, regardless of your maturity in adoption. So, what’s the next step and where do you begin? The good news is that there are many brilliant minds developing visions, frameworks, and extensive knowledge bases on this topic.

While you can approach this in a number of ways, I suggest simply getting started: create a thin layer of AI Governance across your existing company and development processes by following steps 1-4 below. You can then scale your investment in AI Governance as you scale your investment in AI.

Step 1: Inform Yourself on the Space

The first step is informing yourself on the space. Below are some valuable resources that provide a strong starting point:

  • Blueprint for an AI Bill of Rights - This document outlines a vision for protecting civil rights in the age of artificial intelligence. It serves as a guide to ensure that AI systems are developed and deployed in a manner that respects user privacy and agency.

  • NIST AI Risk Management Framework - This framework offers structured guidance on how to assess, measure, manage, and mitigate risks associated with AI systems effectively. The framework is quite lengthy; if you’re looking to get up to speed more quickly, the other resources below can help.

  • Me-We-It: The Open Standard for Responsible AI by WEDF: This standard promotes discussions about ethical AI development. It provides a framework for considering the implications of AI from individual, team, and technological perspectives across different phases of AI development.

  • EU AI Act: This act sets out the regulations that govern AI development and deployment within the European Union, emphasizing compliance and governance standards.

  • OWASP AI Security and Privacy Guide: This community guide is designed to offer clear and actionable insights for designing, creating, testing, and procuring secure AI systems.

  • MITRE ATLAS: A comprehensive knowledge base of adversary tactics and techniques against AI-enabled systems, helping organizations prepare against potential AI-specific threats.

Step 2: Create the Basic Policy

Create a simple, transparent and clear policy that outlines how AI should be developed, deployed, and maintained within your organization. Generally these policies should cover:

  • Ethical AI use

  • Data privacy and security measures

  • Compliance with relevant laws and regulations

  • Responsibilities and roles in AI projects

  • Guidelines for AI procurement and development

The specifics of what you cover in your policy will depend on your organization. For example, you might have specific directives for your industry or geography, or directives that support your company’s mission and principles. If you’re looking for help getting started, there are a growing number of AI policy templates available on the web, and OWASP has a good AI governance checklist.
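
To make this concrete, here is a minimal sketch of how such a policy could be captured in a machine-readable form so it can be versioned and referenced during reviews. The section names and example statements are illustrative assumptions, not an official template:

```python
# A minimal, illustrative sketch of a machine-readable AI policy skeleton.
# All section names and example statements are hypothetical; adapt them
# to your own industry, geography, and company principles.
import json

AI_POLICY = {
    "ethical_ai_use": [
        "AI output that affects customers must have a human review path",
    ],
    "data_privacy_and_security": [
        "No personal data is sent to third-party models without approval",
    ],
    "compliance": [
        "Each AI system documents the laws and regulations that apply to it",
    ],
    "roles_and_responsibilities": {
        "model_owner": "accountable for model behavior in production",
        "reviewer": "approves models before deployment",
    },
    "procurement_and_development": [
        "Vendor AI features go through the standard AI review before purchase",
    ],
}

if __name__ == "__main__":
    # Print the policy as a quick structural sanity check.
    print(json.dumps(AI_POLICY, indent=2))
```

Keeping the policy in a versioned, structured form like this makes it easier to point to specific sections during the reviews described in Step 3.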

Step 3: Operationalize the Policy

Once policies are established, the next challenge is operationalizing them. This involves setting up processes that ensure the policies are actively followed. I highly suggest that you bake these procedures into your existing business processes and technologies. For example: include AI reviews in your procurement lifecycle and software development lifecycle.

  • Tracking of all AI/ML models: Maintain an inventory of all AI models to ensure oversight (a minimal sketch of such an inventory follows this list).

  • Standardized review/approval for all AI/ML models: Implement a review process that covers the topics of interest (from the Step 1 reading material) and aligns with your company specifics (i.e., industry, region, mission, principles, etc.). If you’re unsure where to start, Me-We-It: The Open Standard for Responsible AI by WEDF offers great questions pertinent to different phases of the development lifecycle.

  • An operational standard to enforce lifecycle reviews: Ensure continuous evaluation and updates to AI systems across the phases of their lifecycle.
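
As a starting point for the inventory and review tracking above, here is a minimal sketch with hypothetical field names and statuses; in practice this could live in a spreadsheet, a wiki, or an ML metadata store:

```python
# A minimal sketch of an AI/ML model inventory with review tracking.
# Field names, statuses, and example entries are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class ModelRecord:
    name: str
    owner: str
    purpose: str
    review_status: str                 # e.g. "pending", "approved", "rejected"
    last_reviewed: Optional[date] = None


def approve(record: ModelRecord, reviewed_on: date) -> None:
    """Mark a model as approved after it passes the standardized review."""
    record.review_status = "approved"
    record.last_reviewed = reviewed_on


# Register every model, including vendor-provided and out-of-the-box ones.
inventory = [
    ModelRecord("support-ticket-classifier", "data-science",
                "route customer tickets", "pending"),
    ModelRecord("vendor-chat-assistant", "it-procurement",
                "internal Q&A", "pending"),
]

if __name__ == "__main__":
    approve(inventory[0], date(2024, 3, 1))
    for m in inventory:
        print(f"{m.name}: {m.review_status} (last reviewed: {m.last_reviewed})")
```

Even a simple list like this gives you something concrete to audit against in Step 4.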

Step 4: Monitor and Audit Your Adherence to Policies

Effective governance also includes regular monitoring and auditing to ensure adherence to established AI governance policies (a small adherence-check sketch follows the list below). This involves:

  • Transparency through tooling: Leverage your existing information security and cloud security tooling to monitor the AI services being used (for example, via network traffic monitoring or cloud security solutions).

  • Continuous improvement practices to update and refine AI governance strategies as new challenges and technologies emerge.
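
As one small example of auditing adherence, here is a sketch that flags inventory entries that were never approved or whose last review has gone stale. The dictionary-based records and the 180-day review window are illustrative assumptions, not a prescribed standard:

```python
# A minimal sketch of an adherence audit over a model inventory:
# flag models that are not approved or whose last review is older than
# a chosen window. The 180-day window is an illustrative assumption.
from datetime import date, timedelta
from typing import Optional

REVIEW_WINDOW = timedelta(days=180)


def audit(inventory: list, today: Optional[date] = None) -> list:
    """Return human-readable findings for models that violate the policy."""
    today = today or date.today()
    findings = []
    for m in inventory:
        if m.get("review_status") != "approved":
            findings.append(f"{m['name']}: not approved")
        elif m.get("last_reviewed") is None or today - m["last_reviewed"] > REVIEW_WINDOW:
            findings.append(f"{m['name']}: review is overdue")
    return findings


if __name__ == "__main__":
    example_inventory = [
        {"name": "support-ticket-classifier", "review_status": "approved",
         "last_reviewed": date(2023, 1, 15)},
        {"name": "vendor-chat-assistant", "review_status": "pending"},
    ]
    for finding in audit(example_inventory, today=date(2024, 6, 1)):
        print(finding)
```

A check like this can run on a schedule as part of your continuous improvement loop, surfacing drift between the policy and what is actually deployed.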

Thanks for Reading!

Implementing AI governance may seem daunting, but it is essential for ensuring that AI technologies are used responsibly and ethically. By starting with a simple foundational approach and gradually expanding your governance efforts, you can effectively manage the risks associated with AI while harnessing its benefits.

Remember, the lessons learned from past challenges in data management should inform our approach to AI governance. Please feel free to comment below if you have any questions or additions to this material. Together, we can navigate the complexities of AI governance and lead our organizations towards a more secure and ethical use of AI technologies.
