John Galligan, General Manager, Corporate External and Legal Affairs at Microsoft ANZ, outlines the measures the company is taking to combat the threat of AI-generated content, such as deepfakes and disinformation, in the lead-up to the Australian federal election on Saturday 3 May 2025.
The upcoming Australian federal election will take place against the backdrop of a rapidly evolving information ecosystem, shaped by generative AI and increased cyber threats.
Following a historic 2024 in which more than 2 billion people across 60-plus nations voted in elections, Australia's unique democratic system stands out. Our compulsory voting and robust regulations against foreign interference foster high trust in the electoral process.
The Australian Electoral Commission (AEC) is globally lauded for its independence, integrity, and innovation.
The Electoral Integrity Assurance Taskforce, chaired by the AEC, exemplifies cross-agency collaboration. The taskforce, which includes the Australian Signals Directorate and the Office of National Intelligence, ensures comprehensive electoral protection.
Despite these strengths, new challenges are emerging that are impacting electoral integrity and trust.
Bad actors targeting democracy
AI-generated content such as deepfakes – convincing videos and audio made to look and sound like real people – threatens to spread disinformation, sow mistrust and undermine the democracy we value so highly.
This is why Microsoft has taken multiple steps to protect electoral processes as part of the company’s strategy to empower election stakeholders to defend democracy around the globe.
As a long-standing partner to the Australian government, Microsoft is committed to strengthening our nation’s resilience against threat actors who deceptively use AI and other technology to disrupt democracy and social cohesion.
Microsoft has a unique vantage point on this front. We combine efforts and insight from a dedicated team that is tasked with exploring the impact of technology on elections, with a threat intelligence capability comprising over 10,000 experts, analysts, and threat hunters.
The scale of threats we face – with our teams analysing 78 trillion signals daily – means that no single government or private sector organisation can defend alone. Collaboration, transparency and exchange of insights remain crucial.
The partnership between Microsoft and the Australian government has produced several examples of the impact of cross-sector and cross-border collaboration in combating bad actors.
For example, early last year the Microsoft Threat Intelligence Center provided key evidence to the Australian Government that helped identify Aleksandr Ermakov, whose involvement in the 2022 Medibank hack exposed the private health information of 9.7 million Australians.
These efforts sit under the Microsoft-Australian Signals Directorate Cyber Shield (MACS), a world-leading partnership on intelligence sharing and collaboration to defend against cyber threats.
Highly convincing and hard to detect deepfakes
At the beginning of last year there was widespread fear about how AI might be used to create deceptive content during election periods, undermining democracy and spreading disinformation.
And while what we saw was not as pervasive as was initially feared, we shouldn’t assume it will be the same for Australia’s upcoming election and beyond, nor dismiss the threat.
During the 2024 elections, we found that voice deepfakes were the most convincing and hardest-to-detect form of AI-generated content. Often the audio wasn't fully AI-generated but partially edited, which made it even more realistic and harder to detect. Even a few seconds of fake audio in an otherwise authentic video can dramatically alter its context.
In Australia, the ABC recently produced an AI generated voice recording of Senator Jacqui Lambie (with her permission) to demonstrate to their audience how convincing deepfakes can be.
Examples like this can bring greater transparency to the threat of deepfakes and are key to helping citizens develop a healthy level of scepticism towards the content they consume online.
There’s a role here for the technology industry, the government and media organisations to collectively help build media literacy among citizens.
Empowering and protecting the information ecosystem
Crucial to upholding electoral integrity is empowering the people and organisations that work to support democracy.
This is why in January 2025, Microsoft met with more than 150 people from Australia’s political parties and their candidates, newsrooms, academia and government.
Through a variety of teams, Microsoft aims to empower these individuals and organisations, who we know are disproportionately targeted by threat actors, with the knowledge and tools to identify and prevent the spread of deepfakes and misinformation.
We also offer eligible customers our AccountGuard service free of charge, adding an extra layer of cybersecurity protection, as we have done since 2018.
Political candidates concerned about a deepfake of themselves can report it through our dedicated webpage, where we can investigate and take action.
From content creation to distribution and detection
Microsoft recognises that the creation of content is where a deepfake starts. As part of our Responsible AI approach, we have guardrails in place to ensure our AI systems are developed responsibly and in ways that warrant people’s trust.
This means guardrails to keep Bing Image Creator safe and prevent harmful use. It also means if citizens ask Bing election-related questions, they will be offered authoritative sources such as the Australian Electoral Commission.
On the detection side, Microsoft’s AI for Good Lab is focused on continuously refining image detection models, which remain crucial in helping determine whether images and videos have been tampered with.
A promising step in verifying authenticity is Content Credentials, a digital watermarking standard that creates a permanent record of an image or video's origin and history. The standard is now applied to all images created using Microsoft's consumer-facing AI tools, such as Bing Image Creator.
Collectively, these efforts aim to strengthen the digital information ecosystem to instil trust in the creation and distribution of AI-generated content.
Unifying the global technology sector
In February 2025 at the Munich Security Conference, Microsoft met with several of the 20 signatory companies who came together last year to announce the Tech Accord to Combat Deceptive Use of AI in 2024 Elections.
We are revisiting what we've learned and how to evolve our commitments to meet the needs of the current democratic environment and threat landscape.
The goal of the 2024 Tech Accord was straightforward but critical – to combat video, audio, and images that are fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders.
It has been a non-partisan initiative designed to protect free expression, ensuring that voters retain the right to choose who governs them, free of this new type of AI-based manipulation, and equipped with the skills to navigate an increasingly complex digital environment.
The timing is opportune as we head into Australia's federal election. While there is still work to do, evolving this global initiative with learnings from 2024 will help the industry take practical steps and build broader momentum.