Cyber Leaders Convene at RSA for Roundtable on AI Security and Responsible Adoption

Amid rising concerns over the risks posed by artificial intelligence (AI), a closed-door roundtable hosted by the UK’s Laboratory for AI Security Research (LASR) brought together top innovators, enterprise leaders, and academics to address the pressing challenge of securing AI while enabling its widespread adoption.

Held in partnership with the University of Oxford’s Global Cybersecurity Capacity Building Centre, Queen’s University Belfast’s Centre for Secure Information Technologies (CSIT), and Plexal, the event focused on how businesses can responsibly harness AI’s transformative potential without compromising on security. Hosted by UK House, the roundtable was part of this year's RSA Conference in San Francisco. 

As AI becomes increasingly integrated into core business operations, organisations face a growing imperative to balance innovation with resilience. The roundtable spotlighted emerging threats such as data poisoning, prompt injection, and model inversion—vulnerabilities that can compromise the integrity and reliability of AI systems. Participants shared insights on how to design secure AI systems from development through deployment.

The discussion also centred on strategies for mitigating risks and fostering trust in AI technologies. With the barriers to entry lowering and AI capabilities rapidly expanding, leaders acknowledged the need for collaborative approaches that align security with business growth. Opportunities for joint initiatives between enterprises, start-ups, and academic institutions were explored, highlighting the value of cross-sector collaboration in safeguarding the future of AI.

Building on research from the University of Oxford and the World Economic Forum, the event aimed to move beyond theoretical frameworks and toward actionable security solutions that can be adopted across industries. The dialogue underscored the critical role of proactive risk management in enabling AI’s safe and sustainable integration into modern society.

The roundtable marks a significant step in the UK's efforts to lead in AI security and innovation, setting the stage for future collaboration and policy guidance.

Explore the Research Behind the Discussions 

This event was informed by a collaborative paper between the World Economic Forum Centre for Cybersecurity and the GCSCC that was released in January. The Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards report highlights the steps that need to be taken to ensure that cybersecurity is fully embedded within the AI adoption life cycle.

Amid a business landscape that is increasingly focused on responsible innovation, the report offers a clear executive perspective on managing AI-related cyber risks, answering the central question: how can organisations reap the benefits of AI adoption while mitigating the associated cybersecurity risks?


Laboratory for AI Security Research (LASR)

The National AI Cybersecurity Readiness Metric is part of the work the GCSCC is undertaking with LASR. 

Find out more about LASR, including information on partners, events, and opportunities to engage, on the official website.
