Media release: A balanced approach to AI regulation key to innovation and trust, says Clayton Utz expert
Australia must strike the right balance between fostering innovation and building community trust in its approach to AI regulation, according to Clayton Utz Partner and Head of AI, Simon Newcomb.
Speaking at the CEDA AI Leadership Summit during a panel discussion on AI for strategic innovation, Mr Newcomb emphasised that trust is central to the ongoing debate around how AI should be regulated.
"Government continues to take a considered approach to its regulatory response, while Australian businesses are already integrating compliance into their AI projects, governance and procurement processes," Mr Newcomb said.
"We’re seeing innovation happening across the country – but the pace of AI development and adoption is outstripping regulatory frameworks. That’s creating both real and perceived gaps."
Mr Newcomb said a key policy debate centres on whether Australia needs AI-specific legislation to address risks unique to the technology.
"The argument for new regulation is based on the potential severity of AI-related harms, which warrant compulsory risk management in high risk scenarios,” he explained.
"Regulation is also key to trust as it helps to give confidence that AI is safe. On the other hand, there’s concern that introducing AI-specific laws that are too restrictive could stifle innovation or duplicate existing laws. "
"We'll need to wait and see what direction the government ultimately takes, potentially when it releases its national AI plan later this year."
He noted that AI-related regulatory reviews and developments are underway through broader legal frameworks, including consumer, privacy and copyright law.
Mr Newcomb also addressed the legal complexities of AI agents, cautioning against treating them as human-like entities.
"We tend to anthropomorphise AI agents, but legally they are not people. They are not legal agents, and they don't owe any duties such as acting within a defined scope of authority, acting in the best interests of a principal, or disclosing key information," he said.
"This becomes particularly challenging when businesses are dealing with customers who use third-party AI agents. If an AI agent makes a mistake, it raises serious questions about where liability sits."
Mr Newcomb concluded that navigating these complexities will be critical if Australia is to realise the benefits of AI while managing the risks responsibly.