The ruling of the Council of State, Section III, of October 20, 2025, no. 8092 offers a clear and balanced interpretation of the role of artificial intelligence (AI) in the preparation of technical bids by economic operators.
The dispute arose from the successful bidder's declared use of ‘ChatGPT-4/OpenAI’ in preparing its bid, with reference to various sub-criteria set out in the tender specifications.
The third-ranked company argued that, because of the use of AI tools, the bid prepared by the successful bidder was unreliable: the technologies used would have identified performance levels that the company could not actually guarantee, and the commission would consequently have awarded disproportionate scores.
In its ruling, the Council of State rejected the company's appeal, holding that the score assigned to the bid was not based solely on AI-derived calculations but on a multi-factor, consistent, and informed assessment, which is therefore not open to challenge unless manifestly illogical.
In particular, the commission evaluated multiple dimensions of the bid, and on at least one key criterion (the organizational model) it even awarded a higher score to bidders who had not planned to use AI. In this context, the Council specified that those who contest technical feasibility must do so with specific and verifiable evidence; mere assertions or ‘principled’ objections are not enough to undermine a consistent technical evaluation.
In any case, the message to operators is clear: AI can legitimately enter the competition, but as a means of achieving measurable outcomes (time, quality, reliability, traceability), not as something that in itself guarantees the quality of the bid.
The message to contracting authorities is no less clear: innovation must be evaluated for what it produces and how it integrates into processes, governance, and safeguards (privacy, security, human oversight), not for its brand or its hype. In this sense, the decision puts a stop to “tech-washing”: proclaiming technology without proof of effectiveness does not hold water; symmetrically, however, it is not the judge's job to replace the commission when the investigation has been serious and the balance between factors is reasonable.
One could argue that the opposite risk, that of underestimating the efficiency potential of AI, remains. This is true: in some highly organizational services (such as hospital cleaning), algorithms supporting scheduling, quality control, or reporting can have a real impact on costs and standards. The ruling does not close the door on this prospect; rather, it asks that it rest on evidence: use cases, acceptance tests, performance indicators, and integration plans with defined roles and responsibilities. This protects both useful innovation and a level playing field.
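Purely by way of illustration, the kind of acceptance test and performance indicator the ruling points to could be as simple as checking measured service values against the values declared in the bid. The sketch below is a hypothetical example in Python; the KPI names, figures, and tolerances are invented for this illustration and come neither from the ruling nor from the tender at issue.

```python
# Hypothetical acceptance test: check measured service results against
# the values declared in a bid, within a tolerance agreed in the
# specifications. All names and figures are invented for illustration.

from dataclasses import dataclass


@dataclass
class Kpi:
    name: str
    declared: float        # value promised in the bid
    measured: float        # value observed during a functional test
    tolerance: float       # deviation allowed by the specifications
    higher_is_better: bool = True


def passes(kpi: Kpi) -> bool:
    """A KPI passes if the measured value stays within the agreed
    tolerance of the declared value, in the relevant direction."""
    if kpi.higher_is_better:
        return kpi.measured >= kpi.declared - kpi.tolerance
    return kpi.measured <= kpi.declared + kpi.tolerance


# Example run with invented figures.
results = [
    Kpi("control_accuracy", declared=0.95, measured=0.93, tolerance=0.02),
    Kpi("report_turnaround_hours", declared=24, measured=30, tolerance=4,
        higher_is_better=False),
]

for kpi in results:
    status = "PASS" if passes(kpi) else "FAIL"
    print(f"{kpi.name}: declared={kpi.declared}, measured={kpi.measured} -> {status}")
```

On these invented figures the accuracy KPI would pass and the turnaround KPI would fail: exactly the kind of verifiable outcome, rather than a technological slogan, that the ruling asks evaluations to rest on.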
To transform this approach into administrative practice, here are two measures:
(i) for specifications: ask for results and evidence, not technological slogans. Define service KPIs (accuracy of controls, turnaround times, quality of reports), provide for functional testing and periodic audits, and clarify the role of human oversight and the safeguards for data and privacy;
(ii) for bids: document what the AI does (benchmarks, test environments, error rates), how it integrates into the process (escalation, logging, manual fallback), and what guarantees it offers (security, compliance, operational continuity); a minimal sketch of this kind of integration follows below.
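As a purely illustrative sketch of point (ii), the process integration the ruling rewards (logging of AI outputs, escalation on low confidence, and a manual fallback) might look like the following. The function names, confidence threshold, and log messages are assumptions made up for this example, not elements of the case.

```python
# Illustrative sketch of AI-in-the-loop integration: every AI output is
# logged, low-confidence outputs are escalated to a human operator, and a
# manual fallback always exists. All names and thresholds are invented.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bid_process")

CONFIDENCE_THRESHOLD = 0.8  # assumed value fixed in the specifications


def ai_schedule(task: str) -> tuple[str, float]:
    """Placeholder for an AI scheduling component; returns a proposed
    plan together with a confidence score."""
    return f"auto-plan for {task}", 0.72  # invented output


def manual_schedule(task: str) -> str:
    """Manual fallback: a human operator produces the plan."""
    return f"manual plan for {task}"


def plan_task(task: str) -> str:
    plan, confidence = ai_schedule(task)
    log.info("AI proposal for %r (confidence=%.2f): %s", task, confidence, plan)
    if confidence < CONFIDENCE_THRESHOLD:
        # Escalation: below the agreed threshold, a human takes over.
        log.warning("Escalating %r to manual fallback", task)
        return manual_schedule(task)
    return plan


print(plan_task("ward 3 cleaning rota"))
```

The point of the sketch is traceability: each decision leaves a log entry, and human control is not an afterthought but a defined step in the process.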
Ultimately, AI is fully eligible for use in public tenders, but under the usual conditions: proven usefulness, measurability, and consistency with the public interest.