KPMG Australia said it fined a partner A$10,000 ($7,000) for using AI tools to cheat on an internal training course about AI, a violation of its internal policy.  

The incident has shaken an industry that has built its reputation on trust, at a moment when AI offers professionals shortcuts to results that are often inaccurate. 

According to the Financial Times, more than two dozen staff at the firm were caught cheating on internal exams in the current financial year by using AI tools, underscoring a growing problem in the corporate world as AI becomes more integrated into the workplace.

KPMG says the partner, whose name has been withheld, was made to retake the assessment. 

The incident lands at a time when KPMG is generating an increasing share of its roughly $40 billion (A$57 billion) in global revenue from AI-related advisory work, putting added scrutiny on how the firm governs its own use of the technology. It also follows a separate case involving Deloitte Australia, which was forced to refund the government last year after errors linked to AI-generated content were found in an official report. 

Andrew Yates, chief executive of KPMG Australia, told the Financial Times, “Like most organisations, we have been grappling with the role and use of AI as it relates to internal training and testing,” adding that “it’s a very hard thing to get on top of given how quickly society has embraced it.” 

Regardless, measures have been put in place to curb these practices, Yates said. “Given the everyday use of these tools, some people breach our policy. We take it seriously when they do. We are also looking at ways to strengthen our approach in the current self-reporting regime.” 

Similar steps are being taken across the profession, with the Association of Chartered Certified Accountants scrapping remote tests late last year, saying its defences could not keep pace with the “sophistication” of cheating methods.  

During an Australian Senate inquiry into the industry, Yates said that the issue will continue to rattle the C-suite for the next few years. 

"It’s significant that leaders ranked AI-related issues as their top concern for the first time – not just for 2026, but for the next 3-5 years," he said.

The Australian Securities and Investments Commission (ASIC), the corporate regulator, confirmed the incident but said it was left to the partner to “self-report to professional trade bodies.” 

Report Finds Nearly 40% of AI Output Is Lost to Rework and Misalignment
The finding suggests that even as more workers adopt AI, its output still requires significant review and editing rather than being taken at face value.