The stakes of something going wrong with AI are incredibly high. Only 29% of organizations feel fully equipped to detect and prevent unauthorized tampering with AI[1]. With AI, emerging risks target different stages of the AI lifecycle, while responsibility lies with different owners including developers, end users and vendors.
As AI becomes ubiquitous, enterprises will use and develop hundreds if not thousands of AI applications. Developers need AI security and safety guardrails that work for every application. In parallel, deployers and end users are rushing to adopt AI to improve productivity, potentially exposing their organization to data leakage or the poisoning of proprietary data. This adds to the growing risks related to organizations moving beyond public data to train models on their proprietary data.
So, how can we ensure the security of AI systems? How to protect AI from unauthorized access and misuse? Or prevent data from leaking? Ensuring the security and ethical use of AI systems has become a critical priority. The European Union has taken significant steps in this direction with the introduction of the EU AI Act.
This blog explores how the AI Act addresses security for AI systems and models, the importance of AI literacy among employees, and Cisco’s approach for safeguarding AI through a holistic AI Defense vision.
The EU AI Act: A Framework for Secure AI
The EU AI Act represents a landmark effort by the EU to create a structured approach to AI governance. One of its components is its emphasis on cybersecurity requirements for high-risk AI systems. This includes mandating strong security protocols to prevent unauthorized access and misuse, ensuring that AI systems operate safely and predictably.
The Act promotes human oversight, recognizing that while AI can drive efficiencies, human judgment remains indispensable in preventing and mitigating risks. It also recognizes the important role of all employees in ensuring security, requiring both providers and deployers to take measures to ensure a sufficient level of AI literacy of their staff.
Identifying and clarifying roles and responsibilities in securing AI systems is complex. The AI Act primary focus is on the developers of AI systems and certain general purpose AI model providers, although it rightly acknowledges the shared responsibility between developers and deployers, underscoring the complex nature of the AI value chain.
Cisco’s Vision for Securing AI
In response to the growing need for AI security, Cisco has envisioned a comprehensive approach to protecting the development, deployment and use of AI applications. This vision builds on 5 key aspects of AI security, from securing access to AI applications, to detecting risks such as data leakage and sophisticated adversarial threats, all the way to training employees.
“When embracing AI, organizations should not have to choose between speed and safety. In a dynamic landscape where competition is fierce, effectively securing technology throughout their lifecycle and without tradeoffs is how Cisco reimages security for the age of AI.”
Automated Vulnerability Assessment: By using AI-driven techniques, organizations can automatically and continuously assess AI models and applications for vulnerabilities. This helps identify hundreds of potential safety and security risks, empowering security teams to proactively address them.
Runtime Security: Implementing protections during the operation of AI systems helps defend against evolving threats like denial of service, and sensitive data leakage, and ensures these systems run safely.
User Protections and Data Loss Prevention: Organizations need tools that prevent data loss and monitor unsafe behaviors. Companies need to ensure AI applications are used in compliance with internal policies and regulatory requirements.
Managing Shadow AI: It’s crucial to monitor and control unauthorized AI applications, known as shadow AI. Identifying third-party apps used by employees helps companies enforce policies to restrict access to unauthorized tools, protecting confidential information and ensuring compliance.
Citizens and employees training: Alongside the right technological solutions, AI literacy among employees is crucial for the safe and effective use of AI. Increasing AI literacy helps build a workforce capable of responsibly managing AI tools, understanding their limitations, and recognizing potential risks. This, in turn, helps organizations comply with regulatory requirements and fosters a culture of AI security and ethical awareness.
“The EU AI Act underscores the importance of equipping employees with more than just technical knowledge. It’s about implementing a holistic approach to AI literacy that also covers security and ethical considerations. This helps ensure that users are better prepared to safely handle AI and to harness the potential of this revolutionary technology.”
This vision is embedded in Cisco’s new technology solution “AI Defense”. In the multifaceted quest to secure AI technologies, regulations like the EU AI Act, alongside training for citizens and employees, and innovations like Cisco’s AI Defense all play an important role.
As AI continues to transform each and every industry, these efforts are essential to ensuring that AI is used safely, ethically, and responsibly, ultimately safeguarding both organizations and users in the digital age.
[1] Cisco’s 2024 AI Readiness Index
Share: