Australian Government to establish AI Safety Institute

On 25 November 2025, the Australian Government announced it will establish an Australian Artificial Intelligence Safety Institute (AISI) to respond to AI-related risks and harms, with operations expected to commence in early 2026.

The AISI will sit within Government as a technical and coordination hub for AI safety, working alongside existing regulators and the National AI Centre (NAIC). Australia will also join the International Network of AI Safety Institutes, aligning with approaches in the UK, US, Canada, Japan, South Korea and others.

While no legislation or new binding obligations have yet been introduced, the announcement is a clear signal that more structured, technical oversight of advanced and “frontier” AI is coming.

What the AISI is expected to do

According to the Government’s announcement, the AISI will provide trusted, expert capability to:

  • monitor, test and share information on emerging AI technologies, risks and harms;

  • help government keep pace with rapid AI developments and dynamically address emerging risks;

  • enhance understanding of technical developments in advanced AI and their potential impacts;

  • act as a central hub to share insights and support coordinated government action across regulators;

  • guide businesses, government and the public on AI opportunity, risk and safety, including via established channels such as the National AI Centre;

  • support Australia’s commitments under international AI safety agreements and participate in global safety-testing efforts through the International Network of AI Safety Institutes.

The Government has emphasised that the AISI will complement existing legal and regulatory frameworks that already protect Australians’ rights and safety – rather than replace them.

Why this matters

  • Shift from principles to technical assurance
    Regulators will increasingly have in-house capacity to interrogate and test models, rather than relying solely on high-level principles or industry self-assessment.

  • Convergence with global frontier-AI oversight
    Participation in the international network means Australian expectations for testing, transparency and incident response are likely to track those emerging in other leading AI jurisdictions.

  • Higher expectations for high-risk and frontier use cases
    Systems with serious or systemic risk potential (for example, security-relevant capabilities, critical infrastructure, influence operations or large-scale decision-making) are likely to face heightened scrutiny and more prescriptive expectations.

  • Closer coordination across regulators
    The AISI is expressly tasked with working “directly with regulators” and acting as a central hub – signalling more consistent, coordinated regulatory responses to AI issues across privacy, consumer, competition, online safety, financial services and sectoral regimes.

What this could mean for your organisation

No immediate new obligations arise solely from the announcement, but organisations developing, procuring or deploying AI – particularly advanced or high-impact systems – should anticipate:

  1. More rigorous safety and governance expectations

    • Formal AI risk and impact assessments for high-risk systems.

    • Clearer expectations around testing, evaluation, red-teaming and documentation of model behaviour and mitigations.

  2. Greater transparency obligations in practice

    • Stronger pressure (and potentially future requirements) to provide regulators and the AISI with meaningful access to safety information and, in some cases, to models or evaluation environments.

  3. Vendor and model-provider scrutiny

    • Government guidance routed through the NAIC and the AISI is likely to influence what “good practice” looks like in vendor contracts (for example, safety disclosures, incident-response cooperation, change management and audit rights).

  4. Global alignment – but also localisation

    • Multinationals will need to ensure global AI governance frameworks are robust enough to satisfy both Australian expectations and those in the EU/UK/US, while still accounting for local regulatory and risk contexts.

Practical steps to consider now

Organisations may wish to:

  • map and classify AI use cases, with particular focus on high-risk and frontier-adjacent systems (including uses of third-party foundation models or APIs);

  • refresh AI governance frameworks – policies, risk assessments, sign-off processes and incident response – so they are capable of meeting more formal technical-assurance expectations;

  • review AI-related contracts with vendors and model providers to ensure appropriate safety, transparency and cooperation provisions are in place;

  • prepare for increased regulatory engagement by strengthening documentation (model cards, risk assessments, DPIAs, safety test results, governance records) that could be requested or benchmarked against emerging AISI guidance.

How we can help

We are closely tracking:

  • the establishment and remit of the AISI;

  • the Government’s broader AI Safety and National AI Capability agenda;

  • international developments on frontier AI regulation and safety testing.

We can assist with:

  • AI risk and impact assessments for high-risk and frontier-adjacent use cases;

  • designing and implementing AI governance and assurance frameworks (including board-level oversight);

  • reviewing and negotiating AI-related contracts with vendors and model providers;

  • benchmarking your current AI practices against emerging global AI safety standards and likely AISI expectations.

For further information, or to discuss what the AISI may mean for your organisation, please contact our expert team.
