M&A-I? Artificial Intelligence in the transaction process: risks, directors' duties, and legal oversight for in-house counsel

David Wilkie, Braeden Southee, Victoria Zhou and Freddie Lomax
04 Mar 2026
5 minutes

Artificial intelligence (AI) is no longer a peripheral technology. It is now an effective tool being used to absorb large volumes of information quickly, surface patterns and anomalies, and focus attention where it matters most. That is why the conversation about AI in M&A is shifting from whether it will be used to how it should be used responsibly, and by whom.

In practice, the appeal is straightforward. Corporate transactions remain constrained by vast document volumes and the sheer human effort required to convert a data room into decision‑useful diligence and negotiation positions. Tools built on machine learning and generative AI can unlock material efficiencies across the deal cycle and meaningfully compress parts of that workflow.

However, courts, regulators, clients, and boards still expect that material decisions are made on an informed basis following appropriate scrutiny and review. That scrutiny is particularly important given the risks that come with the use of AI technologies.

This article examines the increasing adoption of AI across the transaction lifecycle, where early adopters are using its capabilities to turn resource- and time-intensive tasks into efficient processes that provide a competitive edge. AI adoption, however, brings inherent risks. We explore those risks, the responsibilities of directors in establishing robust governance frameworks, how boards can effectively navigate these challenges, and the role of in-house counsel in ensuring compliance with legal frameworks and professional obligations.

Where AI is showing up across the deal cycle

A useful way to think about AI in deal execution is not as a single tool but as a set of capabilities that can be deployed at different stages of transactions. AI is increasingly being used for specific tasks across the deal cycle, including:

  • early-stage sourcing: AI can support target identification by scanning markets and analysing historical data, performance signals, and broader market trends. This enables buyers to spot emerging targets more quickly and helps deal teams assess strategic alignment and fit against acquisition goals. This is particularly powerful when deployed to monitor markets on an ongoing basis to identify opportunities as conditions evolve, reducing reliance on periodic manual sweeps.

  • due diligence: As this is one of the most document-heavy phases of any M&A transaction, AI can assist to triage, classify, and summarise large document sets to accelerate issue spotting and free up legal and commercial advisers for higher-value analysis (a simple illustration of what such a triage step might look like appears after this list).

  • negotiations: AI can track changes between document versions and highlight shifts in position or provisions that may alter the agreed risk allocation. Importantly, AI’s role here is facilitative rather than determinative: it sharpens oversight and reduces friction in the process, while substantive judgment and negotiation strategy remain firmly with legal and commercial advisers.

  • drafting: Certain tools can generate first drafts from precedents and standard clauses tailored to due diligence findings. AI can also support document standardisation across multiple transactions, which may be particularly valuable for acquirers pursuing repeat or programmatic M&A strategies.

  • post‑signing and post‑completion: AI is increasingly used to track obligations, monitor key contracts and renewals, and support integration workstreams as AI use moves from deal‑specific to business‑as‑usual.
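By way of illustration only, the sketch below shows in simplified Python what a human-in-the-loop triage step of the kind described under "due diligence" might look like. The keyword rules, priority labels, and the needs_human_review flag are assumptions made for the purposes of the example, not a description of any particular tool or vendor product. The design point is that triage orders the human review; it never replaces it.

    # Hypothetical sketch of an AI-assisted due diligence triage step.
    # Illustrative only: the keyword rules, priority labels, and the
    # needs_human_review flag are assumptions, not any vendor's API.
    from dataclasses import dataclass

    @dataclass
    class TriageResult:
        document: str
        priority: str             # "high", "medium", or "low" review priority
        needs_human_review: bool  # always True: triage orders review, never sign-off

    # Hypothetical red-flag terms a deal team might configure for a share sale.
    HIGH_RISK_TERMS = ("change of control", "exclusivity", "termination for convenience")

    def triage(document_name: str, text: str) -> TriageResult:
        """Assign a review priority; every document still goes to a human."""
        lowered = text.lower()
        if any(term in lowered for term in HIGH_RISK_TERMS):
            priority = "high"
        elif "assignment" in lowered or "consent" in lowered:
            priority = "medium"
        else:
            priority = "low"
        return TriageResult(document_name, priority, needs_human_review=True)

    if __name__ == "__main__":
        sample = "This agreement terminates on a change of control of the Supplier."
        print(triage("supply_agreement_014.pdf", sample))
        # -> TriageResult(document='supply_agreement_014.pdf', priority='high',
        #    needs_human_review=True)

However simple or sophisticated the underlying model, the same structure applies: the tool proposes an ordering of work, and a defined human checkpoint decides what is relied upon.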

The risk profile of AI

The use of AI across the transaction lifecycle brings clear efficiency benefits, but it also introduces a distinct set of risks that require careful management:

  • reliability: The headline risk is that AI outputs can appear authoritative but may contain inaccuracies. If treated as conclusions rather than prompts for verification, errors can quickly propagate through diligence, negotiations, and decision-making. The risk is not just that AI can hallucinate, but that its speed and confidence can obscure the need for careful human verification if workflows are not deliberately designed to preserve it.

  • scale: Large document populations amplify the risk of error propagation. Effective deployment requires clarity about what AI outputs are intended to do, whether that is triage, prioritisation, or summarisation, and, just as importantly, what they are not intended to replace. Without clearly defined parameters, there is a danger that judgment-based analysis is unintentionally compressed alongside mechanical tasks.

  • data handling and confidentiality: AI tools often process sensitive, confidential, and privileged information. How that data is handled depends heavily on tool configuration and governance settings. Introducing AI into due diligence therefore requires careful platform selection, access controls, and protocol design to safeguard confidentiality and privilege.

  • governance: There is also a governance dimension that becomes more pronounced as AI use moves from isolated transaction support into more routine deployment across deals. Where AI outputs are relied on by deal teams or boards, questions arise as to how that reliance is supervised and documented. The presence of AI does not dilute responsibility; it must be integrated into existing risk frameworks with clear accountability, transparency, and human oversight. A simple illustration of how such reliance might be recorded appears below.
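By way of illustration only, the following simplified Python sketch shows one way reliance on an AI output might be recorded. The field names and the JSON log format are assumptions made for the example rather than a prescribed standard; the point is that the record ties each relied-upon output to the tool that produced it, the task it performed, and an accountable human reviewer.

    # Hypothetical sketch of documenting reliance on an AI output.
    # Field names and the JSON format are illustrative assumptions only.
    import json
    from datetime import datetime, timezone

    def log_ai_reliance(tool: str, task: str, output_summary: str,
                        reviewer: str, approved: bool) -> str:
        """Record who reviewed an AI output before it was relied on, and when."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,                      # which approved tool produced the output
            "task": task,                      # e.g. "contract summarisation"
            "output_summary": output_summary,
            "reviewed_by": reviewer,           # the accountable human, not the model
            "approved_for_reliance": approved,
        }
        return json.dumps(entry)

    if __name__ == "__main__":
        print(log_ai_reliance(
            tool="internal-diligence-assistant",
            task="summarise change-of-control clauses",
            output_summary="12 contracts flagged; 2 require consent on completion",
            reviewer="J. Smith (Senior Associate)",
            approved=True,
        ))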

Directors’ duties and human oversight

Under Australian law, amongst other duties, directors must exercise their powers and discharge their duties with due care and diligence. AI‑assisted due diligence sits comfortably within that framework: material decisions should still be made on an informed basis, and boards should insist on clarity about use cases and limits, verification checks, and escalation of red flags to human advisers.

Those duties do not require directors to become data scientists. However, a director's duty to act in good faith and for a proper purpose under the Corporations Act 2001 (Cth) requires directors to ensure the company has an appropriate AI governance framework, one which considers the implications of AI so that the necessary governance, risk management, and compliance frameworks are effectively in operation. The Australian Government has introduced an evolving Voluntary AI Safety Standard which outlines guardrails for safe and responsible AI governance. If those standards are not observed, a company may face increased legal and regulatory exposure.

Legal supervision of AI outputs by in-house counsel

Where in-house counsel use AI across the transaction cycle, solicitors must ensure that the work produced with AI complies with their professional obligations under the Legal Profession Uniform Law and the Australian Solicitors' Conduct Rules. This requires supervising counsel to exercise the same level of care and caution as they would when overseeing the work of any junior lawyer or paralegal. That includes the responsibility to provide competent advice, maintain confidentiality throughout the transaction, avoid personal bias, and, in the event of litigation, verify the validity and accuracy of material submitted so as not to mislead or deceive the court. This principle is reinforced by the Supreme Court of NSW in its Practice Note on the use of generative AI, and applies to all proceedings.

A solicitor charged with supervising legal practice by others, and the provision of legal services generally, must also be aware of the risks of generative AI. Effective supervision in this context requires a careful assessment of the accuracy and completeness of AI-generated outputs.

In a recent case concerning the failure to check and verify the output of such tools, a law firm was ordered to personally pay the other parties' costs of the proceeding on an indemnity basis. Murray on behalf of the Wamba Wemba Native Title Claim Group v State of Victoria [2025] FCA 731 is among a growing number of cases highlighting the consequences of failing to maintain systems that ensure a solicitor's work is properly supervised and reviewed.

The decision stresses the importance of effective supervision of AI and the need for systems to ensure that it is used ethically, within the legal framework, and with independent forensic judgment. Solicitors should rely on their training, experience, and research to review, verify, and correct AI outputs, ensuring their accuracy and reliability.

To assist legal practitioners to navigate the ethical, regulatory, and practical implications of using AI in legal practice, the Law Council of Australia has published a range of resources to promote compliance. Suggested safeguards and systems for supervising work generated with the assistance of AI include understanding AI products and their risks, implementing and reviewing clear policies for their use, training staff on approved tools, and ensuring compliance through robust monitoring systems that prevent unauthorised use.

Key takeaways

Used effectively, AI can improve deal execution by shifting effort away from mechanical review and towards higher-value tasks. That is why leading deal teams, law firms, and in-house legal teams are already applying it across the various stages of the deal cycle. 

However, use of the technology inherently comes with a suite of new risks. Speed alone is not a virtue; the best-executed deals will balance efficiency with rigorous human oversight. Wherever AI is integrated into diligence and approvals, consider:

  • making deployment explicit: define the purpose of each AI use case, preserve human verification where it matters, and regularly monitor outputs.

  • protecting the perimeter: select secure platforms, enforce access controls and protocols, and ensure compliance with confidentiality and governance obligations.

  • human oversight: ensure AI outputs are reviewed by qualified professionals or supervising in-house counsel at every stage where they inform material decisions, and maintain oversight so that judgment calls are made by people, not delegated to AI models.

  • equipping the board: ensure directors receive clear explanations of how AI supported diligence, the checks performed, and the residual risks, so that they can appropriately interrogate AI outputs.

  • ongoing assessment: regularly update policies to ensure compliance with best practices, court requirements, and relevant law society guidelines.

  • ongoing training: train staff and in-house counsel on an ongoing basis in using approved AI tools and verifying their outputs.

  • enforcing compliance: establish systems to enforce compliance with AI policies and prevent unauthorised use of unapproved tools, including on personal devices.

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.