Me, Myself & AI: Compliance, Control and Trust in an Algorithmic Age
Cybersecurity Stage
—
45 minutes
Artificial Intelligence
Machine Learning
Security
Real case studies
AI and machine learning promise transformative benefits, from efficiency and innovation to enhanced cyber defense. Yet this same technology is a double-edged sword.
The rapid adoption of AI introduces new compliance challenges around data privacy, bias, transparency, third-party risk, and ethical use—often at a pace faster than organizations can govern. As automation enables cybercriminals to launch attacks at unprecedented scale, it simultaneously becomes one of our most powerful defensive tools. The question is no longer whether to use AI, but how to do so responsibly.

This talk explores how emerging global regulations such as the EU AI Act, U.S. legislative efforts, NIS2 and DORA are reshaping accountability for organizations that create, deploy, or rely on AI systems—particularly through third parties. Taking a risk-based approach, we examine what "high-risk AI" really means, why regulatory clarity is essential to avoid stifling innovation, and how standards such as ISO/IEC 42001 (AI Governance), ISO/IEC 23894 (AI Risk Management), and ISO/IEC 27090 (AI Security Controls) can provide practical guardrails. With data sovereignty becoming a battleground, geopolitical tensions fragmenting technology ecosystems, and over half of security incidents now linked to third parties, compliance is converging into a single concept: Digital Trust.

Attendees will leave with a clearer understanding of how to integrate AI into compliance frameworks without losing control—knowing where their data is, who can access it, and which laws apply—while balancing innovation, security, and societal values. Because in an AI-driven world, trust isn't a by-product of technology; it's a strategic choice.