
Navigating Victoria’s AI restrictions: implications for state-funded organisations
On 24 June 2025, the Victorian Government rolled out tough new guidelines on how Generative Artificial Intelligence (AI) can be used in the public sector. The aim? To make sure these powerful tools are deployed safely and responsibly. For any organisation that receives state funding—whether you’re in community housing, health, education, or another public-facing service—these rules aren’t just policy updates. They’re a signal that organisations can’t afford complacency and need to adapt.
Understanding the scope of the restrictions
The new guidelines specifically address the use of Generative AI tools—systems capable of producing new content, such as text, images, or audio, in response to user prompts. Examples include ChatGPT, Google Gemini, and Microsoft Copilot. The Victorian Government’s policy emphasises that these tools should not be used for decision-making processes, assessments, or administrative actions that could impact individuals or communities (vic.gov.au).
Organisations are now required to:
- Conduct thorough risk assessments before integrating Generative AI tools.
- Ensure human oversight in all AI-assisted processes.
- Implement robust data privacy and security measures.
- Develop clear policies and training programs for staff on the use of AI technologies (ovic.vic.gov.au).
Defining artificial intelligence in this context
Artificial Intelligence, as defined by the Victorian Government, encompasses machine-based systems that perform tasks typically requiring human intelligence, such as reasoning, learning, and decision-making. Generative AI, a subset of this, refers to systems that can generate new content based on patterns learned from data (vic.gov.au).
It’s important to distinguish Generative AI from other technologies. For instance, next-generation telephony platforms with call recording and transcription tools—which Donnabrook helps state-funded industries leverage—may not fall under the same restrictions if they do not generate content or make decisions. However, if such tools incorporate AI elements that analyse or interpret data beyond simple transcription, they may be subject to the new guidelines. Check with any vendor that provides you AI-enhanced capabilities to confirm their tools align with the restrictions and guidelines of the state(s) you are governed by.
Enforcement and oversight
The framework to monitor and enforce these rules includes:
- Organisational Responsibility: Agencies must monitor staff use, enforce compliance policies, and provide education to personnel.
- Technical Controls: Tools must be agency-approved, with privacy-conscious settings and usage tracking.
- Incident Reporting: Breaches must be reported to the Office of the Victorian Information Commissioner (OVIC) under the Information Security Incident Notification Scheme.
- Compliance Consequences: Breaches may invoke actions under legislation such as the Privacy and Data Protection Act 2014 and Public Administration Act 2004.
Oversight sits with the Department of Government Services and the Whole of Victorian Government AI Interdepartmental Committee.
Implications for state-funded sectors
Organisations in sectors receiving state funding must now carefully evaluate their use of AI technologies. For example:
- Community Housing: AI tools used for assessing tenant applications or managing housing allocations must now involve human oversight and comply with the new risk assessment protocols.
- Health Services: AI applications in patient care or administrative processes must ensure data privacy and avoid autonomous decision-making without human intervention.
- Education: The use of AI in grading or student assessments must be transparent, with clear guidelines to prevent bias and ensure fairness.
Failure to adhere to these guidelines could result in non-compliance with state policies, potentially affecting funding and operational legitimacy.
Strategic considerations moving forward
Organisations should take proactive steps to align with the new guidelines:
- Audit Existing AI Tools: Review current technologies to identify any that fall under the Generative AI category and assess their compliance with the new policies.
- Develop Comprehensive Policies: Establish clear internal policies governing the use of AI, ensuring they align with state guidelines and ethical standards.
- Invest in Training: Educate staff on the responsible use of AI technologies, emphasising the importance of human oversight and data privacy.
- Engage with Stakeholders: Maintain open communication with funding bodies and regulatory agencies to stay informed about policy updates and expectations.
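As a starting point for the audit step above, a team might keep a simple register of the AI tools in use and flag those that warrant a formal risk assessment. The sketch below is purely illustrative: the field names, example tools, and flagging criteria are assumptions for demonstration, not part of the Victorian Government guidance.

```python
# Illustrative sketch only: a minimal register for auditing AI tools in use.
# Fields, tool names, and flagging logic are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    vendor: str
    generative: bool           # Does it produce new content (text, images, audio)?
    affects_individuals: bool  # Is it used in decisions or assessments about people?

def flag_for_review(tools: list[AITool]) -> list[str]:
    """Return names of tools that likely need a formal risk assessment."""
    return [t.name for t in tools if t.generative or t.affects_individuals]

register = [
    AITool("Call transcription service", "ExampleVendor",
           generative=False, affects_individuals=False),
    AITool("Drafting assistant", "ExampleVendor",
           generative=True, affects_individuals=False),
    AITool("Tenant application scorer", "ExampleVendor",
           generative=False, affects_individuals=True),
]

# Simple transcription is not flagged; the generative and
# decision-affecting tools are.
print(flag_for_review(register))
```

Even a lightweight register like this makes it easier to show funding bodies which tools have been reviewed and why—though any real assessment should follow the criteria in the official guidance rather than this sketch.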
As the landscape of AI technology continues to evolve, staying informed and adaptable is crucial for organisations operating within the Victorian Public Sector. By understanding and implementing the new guidelines, organisations can ensure compliance while harnessing the benefits of AI in a responsible and ethical manner.