AI Ethics in 2026
1 CPD Hour
DESCRIPTION
Navigating the key ethical and governance concerns of AI in a rapidly shifting regulatory landscape.
Artificial Intelligence has moved from novelty to necessity, and in 2026 it is embedded in nearly every aspect of professional services, client advisory work and day-to-day business operations. With this rapid adoption has come a new wave of ethical dilemmas, regulatory scrutiny and governance obligations that firms can no longer afford to ignore. From the EU AI Act coming into full effect, to Australia’s Voluntary AI Safety Standard maturing toward mandatory guardrails, to Privacy Act reforms tightening accountability for automated decision-making, the landscape has shifted dramatically.
This webinar provides a practical, plain-English overview of the key ethical and governance concerns surrounding AI in 2026. We will examine real-world case studies of AI failures, bias incidents and regulatory actions, and translate complex frameworks into actionable guidance for firms of all sizes. Attendees will leave with a clear understanding of where their obligations lie, how to identify ethical risks in their own AI use, and what a defensible AI governance framework looks like.
WHY ATTEND
- Understand the latest global and Australasian AI regulations including the EU AI Act, Australia’s AI guardrails and New Zealand’s responsible AI guidance
- Recognise the most common ethical pitfalls — bias, hallucination, data leakage, automation bias and shadow AI
- Learn how to assess AI tools and vendors against ethical and professional standards
- Hear real-world case studies from 2024–2026 of AI gone wrong, and the lessons for professional firms
- Take away a practical AI governance checklist you can apply to your own practice
This is an introductory, practical webinar suitable for professionals across firms of all sizes. No prior technical knowledge of AI is required, although attendees with some existing familiarity will also gain new insights into governance frameworks and emerging risks.
LEARNING OUTCOMES
By the end of this webinar, attendees will be able to:
- Understand the core ethical principles underpinning responsible AI use in 2026, including fairness, transparency, accountability and human oversight
- Identify the key regulatory frameworks affecting their practice, including the EU AI Act, Australia’s voluntary and mandatory AI guardrails, and New Zealand’s responsible AI guidance
- Recognise common ethical risks in AI systems such as bias, hallucination, data leakage, automation bias and inappropriate reliance on generative tools
- Evaluate AI vendors and tools against professional, ethical and privacy obligations relevant to their industry
- Apply a practical AI governance framework covering policy, training, monitoring and incident response within their own firm
- Assess the implications of AI decision-making on client relationships, professional duty of care and legal liability
- Design a clear, defensible approach to disclosing AI use to clients and stakeholders
SUITED TO
This webinar is suited to Partners, Directors, Practice Managers, Risk and Compliance Officers, Accountants, Auditors, Tax Professionals, Legal Practitioners and Advisors across firms of all sizes who are using, considering, or overseeing the use of AI in their practice. It is particularly relevant for those responsible for professional standards, governance, privacy and risk management. No prior technical AI knowledge is required, though some familiarity with generative AI tools such as ChatGPT or Microsoft Copilot will be helpful. The content is pitched at foundation to intermediate level.
PRESENTER
Nick Beaugeard, Managing Director, Released Group
Nick is a seasoned entrepreneur and technologist with a career spanning several decades. He began his journey in software development at the age of nine on an Apple ][ and has since founded multiple companies, leading them from inception to successful exits. Nick has been at the forefront of technological advancements, contributing to areas such as systems management infrastructure, WebAssembly, blockchain, and large-scale AI developments. His leadership has earned him accolades including Australian Software Developer of the Year (twice), six global Microsoft Innovation awards, and recognition as a finalist in the 2010 global Red Herring awards. In 2019, Forrester named him an IT Industry Super Connector, highlighting his extensive global IT industry relationships.
Beyond his professional achievements, Nick is deeply committed to his community. Residing in Dee Why since 2005, he has served on the Curl Curl Football Club committee for about a decade and as treasurer of the Manly Warringah Referees Association. In 2021, he was elected as a councillor for Curl Curl Ward, focusing on supporting local businesses, enhancing sports facilities, and improving council services. Nick is also a recognised speaker on AI and author of the bestselling "ChatGPT for Executives." He currently leads the HPE and NEXTDC AI Lab in Artarmon, NSW, and continues to drive innovation through his consulting firm, Released Group, and venture arm, Released Ventures.
You might be interested in exploring additional titles on similar and related topics presented by this organisation.