I was in San Francisco over the weekend, watching my Florida Gators men’s basketball team advance to the Final Four, and had two interesting experiences –
I ordered breakfast through room service Saturday, and a short time later the doorbell rang. There was Henry with my order – Henry the robot. When an order comes in, the kitchen staff prepares it and places it in the bin on top of the robot, which navigates to the elevator, sensing and steering around people, pets, and other obstacles. It rides to your floor, rolls to your door, and triggers the doorbell. You enter the code sent to your phone on Henry’s screen, the lid pops open, and you retrieve your food and drink. Simple, fast, and efficient.
The second interesting sight was the Waymo taxis driving all around town. Like Uber and Lyft, you download the app and request a ride, and a vehicle shows up a few minutes later. Except there is no driver – the vehicles are fully autonomous. I tried to book one for my trip to the airport to experience it firsthand but couldn’t – they do not go beyond a specific footprint downtown. Yet.
I realize these two things are not big news and are becoming common in cities around the world. I bring them up as further proof of the breakneck pace at which AI and other innovative technologies are being deployed, and of the need for cybersecurity leaders to up their game in this space. These advances are not sustainable without security and governance guardrails, and cyber leaders own the first and should have a key role in the second.
Engaging with these new initiatives requires a proactive and strategic approach. If you feel that your program and/or company are not there yet, here are some ideas for first steps:
Engage with the teams driving adoption – Connect with the leaders and teams driving AI adoption (e.g., data science, engineering, product development, innovation teams) to understand their goals, timelines, and potential security and privacy considerations. Internal Audit can be an important partner here.
Understand the AI landscape within your organization – Work with business and IT partners to conduct an inventory of all current and planned AI projects across the organization, including each project’s purpose, the data used, the algorithms employed, and the intended outcomes (a simple, illustrative inventory record is sketched after this list). Evaluate the organization’s overall understanding and maturity level, including awareness of AI’s risks and benefits.
Form an AI governance committee – Bring together key stakeholders from security, legal, compliance, data science, IT, and relevant business units to define policies, set standards, and oversee guidelines for development and deployment that cover:
Data governance, including sourcing, quality, labeling, privacy, and use.
Guidelines on model development, testing, validation, and vulnerability management.
Explainability and transparency requirements for understanding how AI models make decisions.
Bias detection and mitigation processes.
Measures to protect against attacks targeting AI models, such as data poisoning and evasion attacks.
Processes for assessing the security and compliance of AI solutions provided by external vendors.
Your organization’s framework for addressing ethical implications.
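To make the inventory step above more concrete, here is a minimal, purely illustrative sketch of what a single record in an AI system inventory might capture. The field names, example values, and risk tiers are assumptions for illustration only – not a prescribed standard or any specific tool.

```python
# Illustrative sketch only: one possible shape for an AI-system inventory record.
# All field names and example values are assumptions, not a required format.
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    name: str                      # e.g., "customer support chatbot"
    owner: str                     # accountable business or technical owner
    purpose: str                   # intended outcome of the system
    data_sources: list[str]        # datasets or feeds the model consumes
    model_type: str                # e.g., "LLM", "computer vision", "forecasting"
    vendor: str | None = None      # populated for third-party AI solutions
    contains_pii: bool = False     # drives privacy and data-governance review
    risk_tier: str = "unrated"     # assigned later by the governance committee
    review_notes: list[str] = field(default_factory=list)


# Hypothetical entry added during the inventory exercise
inventory = [
    AISystemRecord(
        name="Customer support chatbot",
        owner="Digital Products",
        purpose="Deflect routine support tickets",
        data_sources=["support_ticket_history", "product_faq"],
        model_type="LLM (third-party API)",
        vendor="Example vendor",
        contains_pii=True,
        risk_tier="high",
    )
]

for system in inventory:
    print(f"{system.name}: owner={system.owner}, risk={system.risk_tier}, PII={system.contains_pii}")
```

Even a simple register like this gives the governance committee a shared starting point for prioritizing reviews and assigning risk tiers.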
With that baseline and business support, you have the foundation for moving to your next steps, including –
Educating all employees about the opportunities AI brings, as well as the security and ethical risks associated with it.
Integrating security into the AI development lifecycle.
Implementing security testing and continuous monitoring.
Tracking the origin and movement of data to ensure integrity and facilitate auditing (a simple provenance-logging sketch follows this list).
Developing incident response plans and procedures for addressing security incidents related to AI systems.
Defining metrics to track the effectiveness of AI security controls and governance processes.
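As one hedged example of the data-tracking item above, the sketch below records a fingerprint and an audit event each time a dataset is touched, so integrity can be verified later and audits have a trail to follow. The file names, event labels, and log format are hypothetical illustrations, not a specific product or required approach.

```python
# Illustrative sketch only: a simple provenance log for datasets used in AI work.
# File names and event labels below are hypothetical examples.
import hashlib
import json
from datetime import datetime, timezone


def fingerprint(path: str) -> str:
    """Return a SHA-256 hash of the file so later tampering or drift is detectable."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def log_data_event(log_path: str, dataset_path: str, event: str, detail: str) -> None:
    """Append a provenance event (what happened, when, to which data) to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset": dataset_path,
        "event": event,            # e.g., "ingested", "transformed", "used_for_training"
        "detail": detail,
        "sha256": fingerprint(dataset_path),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")


# Hypothetical usage during a model training run:
# log_data_event("provenance.jsonl", "training_data.csv", "used_for_training", "churn model v3")
```

The same log entries can also feed the metrics item above, for example by counting how many training datasets have a verified fingerprint.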
Do not get caught by surprise when Henry shows up at your door! Take proactive steps to effectively engage in your organization’s AI initiatives, establish robust governance and security, and help your organization move ahead with confidence using transformative modern technologies.
Hold Fast!
Stay True!
Great essay, Paul! While AI provides useful and positive opportunities for businesses and organizations to succeed, it also poses a significant cybersecurity threat that must be addressed proactively rather than reactively.