AI Data Security in the Spotlight as International Guidance and Industry Changes Emerge

Written By

Jonathon Ellis

Partner
Australia

I am an experienced litigation and investigations lawyer based in Sydney, leading Bird & Bird's Australian disputes and investigations practice and co-leading our global Defence and Security practice.

Jonathan Tay

Senior Associate
Australia

I am a senior associate in the Dispute Resolution team in Sydney. I provide succinct, solutions-oriented advice to help our clients solve complex problems, mitigate future risks and simplify their decision-making.

Mia Herrman

Associate
Australia

I am an associate in our Tech Transactions team in Sydney, specialising in technology, cybersecurity and privacy advisory work.

On 23 May 2025, cybersecurity and intelligence agencies from Australia, New Zealand, the United States, and the United Kingdom jointly released a landmark Cybersecurity Information Sheet (CSI), titled AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems. Developed by agencies including the NSA’s Artificial Intelligence Security Center (AISC), the FBI, CISA, the Australian Cyber Security Centre (ACSC), and their counterparts in New Zealand and the UK, the guidance reflects a growing global consensus around securing the integrity of artificial intelligence (AI) systems through robust data protection.

The CSI provides organisations, particularly those handling sensitive, proprietary, or mission-critical data, with a comprehensive framework for managing AI data risks. It highlights the strategic importance of implementing strong data governance, adopting technical and operational safeguards, and embedding security practices across all stages of the AI lifecycle.

Guidance Highlights: Safeguarding AI Data End-to-End

The joint guidance is underpinned by an understanding that data is a cornerstone of AI performance and integrity, and a major vector for threat actors. The publication lays out a range of recommended controls, including:

  • Data encryption for data at rest, in transit, and during processing, using standards such as AES-256
  • Digital signatures, including quantum-resistant methods, to verify data authenticity and prevent tampering
  • Trusted computing environments employing Zero Trust architecture and secure enclaves
  • Secure provenance tracking and immutable, cryptographically signed ledgers to ensure data lineage and transparency
  • Access controls and data classification aligned with NIST SP 800-53 to limit exposure based on sensitivity
  • Privacy-preserving techniques, including data masking, differential privacy, and federated learning
  • Secure deletion protocols aligned with NIST SP 800-88 before decommissioning AI systems or infrastructure
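To make the provenance and tampering controls above concrete, the following is a minimal sketch of verifying dataset integrity before training. It is illustrative only: the key handling is hypothetical (a real deployment would keep keys in a KMS or HSM and would typically use an asymmetric digital signature rather than the HMAC used here for brevity), and the manifest format is an assumption, not something prescribed by the CSI.

```python
import hashlib
import hmac

# Hypothetical signing key for illustration only; in practice this
# would be held in a KMS/HSM, never hard-coded in source.
SIGNING_KEY = b"example-signing-key"

def manifest_digest(records: list[bytes]) -> str:
    """SHA-256 digest over the ordered dataset records."""
    h = hashlib.sha256()
    for record in records:
        h.update(record)
    return h.hexdigest()

def sign_manifest(records: list[bytes]) -> dict:
    """Bind the dataset to its digest with an HMAC 'signature'."""
    digest = manifest_digest(records)
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "signature": signature}

def verify_manifest(records: list[bytes], manifest: dict) -> bool:
    """Recompute the digest and check the signature before training."""
    digest = manifest_digest(records)
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == manifest["digest"] and hmac.compare_digest(
        expected, manifest["signature"]
    )

data = [b"row-1", b"row-2"]
manifest = sign_manifest(data)
assert verify_manifest(data, manifest)                       # untouched data verifies
assert not verify_manifest([b"row-1", b"tampered"], manifest)  # tampering is caught
```

A training pipeline would run the verification step on every ingest, rejecting any dataset whose manifest fails to validate.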

Emerging Risks Across the AI Lifecycle

The guidance breaks down risks across six stages of the AI lifecycle, from “Plan & Design” to “Operate & Monitor.” In particular, it focuses on three critical data-centric risks:

  1. Data Supply Chain Vulnerabilities
    These arise when AI systems ingest third-party or web-scale data without rigorous verification, opening the door to corrupted inputs that can distort model behaviour. Mitigation measures include dataset verification, digital content credentials, and formal certification from data providers. The CSI also highlights the risk of “split-view poisoning” in curated datasets and “frontrunning poisoning” in crowd-sourced collections such as Wikipedia.
  2. Maliciously Modified or Poisoned Data
    This includes deliberate attempts to corrupt training datasets, adversarial machine learning attacks, and model inversion techniques. To counter these, the CSI recommends anomaly detection, data sanitisation, secure training pipelines, and collaborative learning methods. It also stresses the need for metadata validation and regular auditing to prevent errors that can lead to unintended or biased AI outputs.
  3. Data Drift
    Data drift refers to performance degradation over time due to shifts in the data environment, such as changes in user behaviour, market conditions, or input distributions. This can be gradual or abrupt. Mitigation strategies include regular input/output monitoring, retraining with updated datasets, statistical drift detection, and ongoing data quality assessment.
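As a rough illustration of the statistical drift detection mentioned above, the sketch below flags incoming batches whose mean shifts sharply away from the training baseline. The z-score approach and the threshold of 3.0 are illustrative assumptions; production systems typically apply per-feature tests such as Kolmogorov–Smirnov or the population stability index.

```python
import statistics

def drift_score(baseline: list[float], batch: list[float]) -> float:
    """Standardised shift of the batch mean relative to the training baseline."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(batch) - mu) / sigma

def has_drifted(baseline: list[float], batch: list[float],
                threshold: float = 3.0) -> bool:
    """Flag a batch whose mean lies more than `threshold` deviations away."""
    return drift_score(baseline, batch) > threshold

# Baseline feature values seen at training time (illustrative numbers).
baseline = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1]
assert not has_drifted(baseline, [10.0, 10.3, 9.9])   # in-distribution batch
assert has_drifted(baseline, [14.0, 15.2, 14.8])      # abrupt shift is flagged
```

A flagged batch would then trigger the retraining and data-quality review steps the CSI recommends, rather than silently feeding a degraded model.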

What This Means for Organisations Using or Deploying AI

The joint government guidance and Meta’s internal governance changes signal a broader convergence between regulatory expectations and industry practices. For organisations that develop, deploy, or rely on AI technologies, several strategic takeaways emerge:

  • Treat data governance as a strategic imperative. Implement clear protocols for data sourcing, validation, and protection. Embed accountability into AI development and operations from the outset.
  • Incorporate AI-specific data risks into broader cybersecurity and risk frameworks. This includes recognising and preparing for novel threats such as data poisoning, statistical bias, and model inversion attacks.
  • Don’t automate at the expense of human oversight. While automation can offer efficiency gains, decisions with ethical, legal, or social impact, particularly in regulated industries, require structured human-in-the-loop processes, audit trails, and escalation paths.
  • Embed legal, privacy, and compliance input throughout the AI lifecycle. This is vital for ensuring that AI systems align with laws relating to data protection, discrimination, safety, and consumer rights. Sectors such as healthcare, finance, education, and digital platforms are especially vulnerable to reputational and legal risk.

Looking Ahead

The CSI represents a significant step toward building a coherent global approach to AI data security. By aligning cybersecurity best practices with the realities of AI development, the guidance provides a valuable resource for entities seeking to deploy trustworthy, resilient AI systems.

Organisations that adopt a proactive, cross-disciplinary approach, integrating legal, technical, and governance perspectives, will be better placed to manage emerging AI risks and demonstrate responsible innovation.

For tailored legal and compliance advice on managing AI data security risks or implementing governance frameworks, contact our expert team.
