Deep Dive · 30 January 2026 · 8 min read

The Algorithm is Reading Your Bid: What's Actually Happening Inside Federal Government AI

There's a lot of noise about artificial intelligence in government. Half of it is hype, the other half is fear, and somewhere in the middle is the truth that actually matters to anyone trying to win federal contracts.

So let's cut through it. Here's what's genuinely happening right now, January 2026, with AI inside the Australian Public Service. Not speculation. Not "coming soon." What's actually running.

The DTA is Already Using AI to Assess Your Marketplace Applications

This is the one that should get your attention.

The Digital Transformation Agency runs the Digital Marketplace 2 panel. If you've applied to DM2, you know the drill: you submit case studies of previous work, and human assessors rate them from one to five. Two assessors review each application independently. If they agree within a margin of one point, you're through to the delegate. If they disagree, a third person gets involved.
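
That tolerance rule is worth internalising, and it's small enough to write down. A sketch of the process as described above, with function names of our own choosing:

```python
def needs_third_assessor(score_a: int, score_b: int) -> bool:
    """Scores run from 1 to 5. Within one point of each other: the
    application goes through to the delegate. Further apart than
    that: a third assessor is brought in."""
    return abs(score_a - score_b) > 1

needs_third_assessor(4, 3)  # False: within the margin, proceeds
needs_third_assessor(5, 3)  # True: escalated to a third assessor
```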

Here's what changed. The DTA has built a proof of concept using a large language model to do that assessment alongside a human. Not instead of a human. Alongside.

They tested it on 268 previous applications and measured three things.

First, agreement rate. When two human assessors review the same case study, they agree 81 percent of the time. When an AI and a human review the same case study, they agree 84 percent of the time. The AI is actually more consistent with human judgment than humans are with each other.

Second, margin of error. On average, when two humans rate something out of five, they disagree by 0.92 points. When an AI and a human rate the same thing, they disagree by 0.76 points. Tighter.

Third, correlation. When one assessor gives a high score, is the other likely to do the same? The AI tracked human patterns closely.
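
For the curious, here's roughly what those three measures look like in code. This is our reconstruction, not the DTA's published method. In particular, whether "agreement" means an exact score match or a match within the one-point margin isn't stated, so the sketch uses the one-point margin from the process above, and Pearson correlation is our assumption for the third measure.

```python
import numpy as np

def compare_raters(a: np.ndarray, b: np.ndarray) -> dict:
    """Compare two sets of 1-5 scores over the same case studies.
    a and b are equal-length arrays: a[i] and b[i] rate application i."""
    diffs = np.abs(a - b)
    return {
        "agreement_rate": float(np.mean(diffs <= 1)),   # within one point
        "mean_abs_diff": float(np.mean(diffs)),         # e.g. 0.92 vs 0.76
        "correlation": float(np.corrcoef(a, b)[0, 1]),  # do highs track highs?
    }
```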

The DTA is now testing this on a larger dataset of 6,448 applications. If it passes governance and assurance checks, it could be live for the next marketplace round.

What does this mean for you? When you write your case studies, you're potentially writing for two audiences: a human who skims and makes judgment calls, and an AI that processes every word you've written against the evaluation criteria. If your case study is vague, or relies on the reader inferring your capability, you're gambling that the human will fill in the gaps. The AI won't.

The ATO is Rebuilding Its Entire Contact Centre Around AI

The Australian Taxation Office handles around 20 million interactions through its contact centre every year. At peak times, that's 160,000 calls a day. The system running all of this has been in place since 1999.

It's being replaced.

The program is called RISE, which stands for Reimagined Interactions Streamlined and Effective. The deadline is December 2028, and the tender documents make clear that AI is central to what they're looking for. They want configurable machine learning and large language models across all capabilities. They've specifically asked for multi-turn prompting, self-correction, and boundary setting.
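
Those three terms map onto a familiar pattern in LLM engineering. The sketch below is ours, not the ATO's specification: keep conversation history (multi-turn), check each answer against hard limits (boundary setting), and re-prompt with the failure reason when a check fails (self-correction). Both helper functions are deliberately toy stand-ins.

```python
def call_llm(history: list[dict]) -> str:
    """Placeholder: swap in a real model endpoint."""
    return "You should invest your refund in shares."  # canned demo reply

def check_boundaries(reply: str) -> str | None:
    """Toy check. Return a violation description, or None if in bounds."""
    return "gives financial advice" if "invest" in reply.lower() else None

def answer(history: list[dict], question: str, max_retries: int = 2) -> str:
    history.append({"role": "user", "content": question})  # multi-turn memory
    for _ in range(max_retries + 1):
        reply = call_llm(history)
        violation = check_boundaries(reply)                 # boundary setting
        if violation is None:
            history.append({"role": "assistant", "content": reply})
            return reply
        # Self-correction: feed the failure back and ask for a revision
        history.append({"role": "system",
                        "content": f"Your last reply {violation}. Revise it."})
    return "I'm not able to help with that. Please contact a human operator."
```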

One detail worth noting: the ATO has imposed strict conditions on how vendors can use its data. None of it can be used to train or improve AI models outside the agency. If you're bidding on this, your data handling and sovereignty story needs to be watertight.

But the ATO isn't waiting for RISE to use AI. According to an ANAO audit released last year, they're already running 43 AI models across the organisation. These models help identify compliance risks, prompt taxpayers to check entries in myTax, and process documents to identify entities of interest.

The audit also found something interesting: the ATO doesn't have a complete inventory of all its AI uses. They told the auditors there were no other AI applications beyond those listed, then disclosed a month later that they'd been using commercial software with AI to extract intelligence from high-volume data since 2016. The auditors recommended they get a proper register in place by March 2025.

If the ATO is having trouble tracking its own AI usage, imagine how much is happening across the rest of government that nobody's talking about publicly.

GovAI: The Platform Behind the Transformation

In July 2025, the Department of Finance launched GovAI. It's a whole-of-government platform that gives public servants a secure environment to learn about AI, experiment with tools, and test use cases.

More than 2,600 APS staff are now using it. Over 20 vendors have expressed interest in partnerships. And by April 2026, they're launching GovAI Chat, a sovereign generative AI tool that can handle PROTECTED-level information. That's the classification level that covers most sensitive government work.

The point of GovAI is to move the public service from scattered experimentation to coordinated adoption. Every agency was doing its own thing. Now there's infrastructure, training, and a use case library showing what's working.

The Finance department has been running a series of lunch-and-learn sessions where agencies share how they're using AI. The Reserve Bank is using it to find themes across 25 years of meeting records. The Department of Agriculture has an AI assistant that helps staff interpret regulatory guidance. The ABS is converting plain text into coded data. The Department of Employment and Workplace Relations built something called Parlihelper that summarises Senate Estimates and audit findings.

This isn't future state planning. This is what's running right now inside agencies that will be assessing your next tender.

The Policy Framework You're Being Measured Against

In December 2025, the DTA released version 2.0 of the Policy for the Responsible Use of AI in Government. This is the document that sets the rules for how agencies can use AI, and by extension, what they expect from vendors who supply AI-enabled services.

The policy introduces a few things you need to understand.

First, the AI Impact Assessment. Before an agency can deploy an AI use case, they need to complete a formal assessment of risks, biases, and potential harms. If a vendor proactively supplies documentation that maps to this assessment framework, they're making the procurement easier for the agency. That's a competitive advantage.

Second, the transparency and explainability requirement. Public servants need to be able to explain and justify any decision where AI was involved. This effectively rules out black-box solutions. If your product uses AI and you can't explain how it reaches its conclusions, you're going to have a hard time.

Third, the AI Technical Standard. This is a checklist of eight mandatory statements that any AI system needs to satisfy. Things like defining an operational model, enabling auditing, managing bias, applying version control, and watermarking AI-generated content. Technical assessors will grade your solution design against these statements.
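
One practical response: track your solution design's evidence against each statement before you submit. A minimal sketch, covering only the five statements named above (the standard has eight in total, and the wording here is paraphrased rather than official):

```python
# Paraphrased; the Technical Standard's official wording differs,
# and three of its eight statements aren't listed in this article.
STATEMENTS = [
    "Define an operational model",
    "Enable auditing",
    "Manage bias",
    "Apply version control",
    "Watermark AI-generated content",
]

def missing_evidence(evidence: dict[str, str]) -> list[str]:
    """Return the statements with no documented evidence in the bid."""
    return [s for s in STATEMENTS if not evidence.get(s, "").strip()]

print(missing_evidence({"Enable auditing": "Section 4.2: decision logs"}))
# The four statements still lacking evidence
```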

There are also new model AI clauses being incorporated into contracts. These are standard terms that cover how AI can be used in service delivery, what transparency is required, and how accountability works when something goes wrong.

What This Means for Your Next Bid

If you're still writing bids the same way you did two years ago, you're falling behind.

The government is actively using AI to process applications, assess compliance, and support evaluation. They're building infrastructure to scale this across every agency. They've published the standards they expect vendors to meet. And they're training thousands of public servants to work alongside these systems.

Your bid is increasingly likely to be read by an AI before a human ever sees it. Or alongside a human who's using AI to check your claims, compare your pricing, and flag inconsistencies.

This isn't a threat. It's an opportunity if you understand how these systems work.

At BidScout, we've spent the past year reverse engineering how government AI systems evaluate bids. We've analysed thousands of tender documents, mapped the evaluation criteria that matter, and built tools that help you optimise your submissions for both human and algorithmic readers.

The government has published its marking rubric. Most bidders haven't read it. We have.

The algorithm is reading. Make sure you're ready.
