Generative Artificial Intelligence: Risk Management Implications for Accounting Firms

February 02, 2026

by CAMICO

Generative Artificial Intelligence: FAQ on CAMICO's Advisory Hotline

This FAQ document is not intended to be used or relied upon as a substitute for a firm’s compliance with applicable professional standards, nor is it intended to be a substitute for seeking legal advice. CAMICO presents this FAQ guide for reference purposes only to highlight common policyholder inquiries regarding the risk management implications of using Generative Artificial Intelligence. CAMICO policyholders are welcome to contact CAMICO with specific questions, comments, or concerns at 1.800.652.1772 or by email at lp@camico.com. Additional risk management resources are available on CAMICO’s Members-Only Site (https://www.camico.com).

Section I: General Information

Q1. How is generative artificial intelligence impacting CPA firms?

A1. Artificial intelligence (“AI”) is transforming CPA firms, along with many other businesses, as firms seek to leverage generative AI to reduce repetitive tasks and similar pain points, accelerate innovation, and increase productivity. AI utilization for most CPA firms generally falls into the following two primary categories:

  1. AI as a “professional tool.” For example, when AI assists professionals with tax and accounting research, audit automation, document drafting, or other analysis subject to human oversight.

  2. Direct AI-to-client interaction. For example, autonomous AI tools used to communicate directly with clients — such as chatbots, automated tax and accounting guidance, or AI-driven customer service.

Because the use of any AI technology is organization-specific, CPA firms need a solid understanding of their unique needs and objectives, as well as of how AI works, before they can identify what, if any, AI opportunities are appropriate for their firm.

For risk management purposes, it is important to distinguish between (1) AI used to assist professionals under their supervision, judgment, and review, and (2) AI systems that autonomously generate advice, interact directly with clients, and/or make operational decisions without human oversight. The former generally presents lower regulatory and liability exposure. The latter requires heightened controls, including monitoring mechanisms, to ensure accuracy, privacy, ethical use, and compliance with applicable laws and regulations.

Q2. With the plethora of AI tools on the market, what guidance does CAMICO have for firms beginning their AI due diligence?

A2. For those just starting out on this journey, it may be difficult to understand how AI tools work and how they have been trained. Consider engaging an IT professional to help you navigate the due diligence process. Any tool you deploy within your firm needs, at a minimum, to secure and protect confidential data. Therefore, it is imperative that a firm’s due diligence process include obtaining an understanding of the specific AI tools being considered. Some basic but important questions to consider include:

  • How does the AI tool manage privacy and security (e.g., a third-party model that trains on your data, a third-party model that does not, a firm-controlled environment, or a model run locally on firm devices)?

  • How is data stored and processed?

  • Do contractual terms and conditions provide for the AI tool’s compliance with applicable laws and regulations?

Q3. Is implementing an AI governance structure important for CPA firms?

A3. Yes. Establishing an AI governance framework promotes security, compliance, and ethical AI use while helping firms maintain operational integrity and client trust. Without clear governance, firms risk exposing sensitive confidential data, making flawed decisions, or escalating their risk of noncompliance with evolving regulations. The key is not just adopting AI, but adopting it responsibly. From CAMICO’s perspective, “responsible use” of AI for CPA firms includes implementing and maintaining protocols and procedures designed to ensure transparency, privacy, accountability, compliance, and ethical use. For example, firms should adopt clear principles and practices for responsible AI use by establishing written guidelines clarifying that these technologies must not be used to create content that is inappropriate, discriminatory, or otherwise harmful to others (clients, employees, etc.) or to the firm.

Section II: Risk Management Considerations

Q4. What are some of the risk management considerations for our firm as we evaluate potential AI utilization?

A4. Generative AI is not infallible. Whether you use AI to assist with research, automate calculations, craft emails, or explain the tax code, be alert for inaccuracies. AI-generated information can be outdated, misleading, or even fabricated (referred to as “hallucinations”). Therefore, all AI-generated outputs must be reviewed to ensure accuracy and reliability. A proper review also helps mitigate the risk that inappropriate, discriminatory, or otherwise harmful content is shared and relied upon by your firm and others.

Another source of risk is inadvertently compromising the confidentiality of data. Before using a generative AI provider, we recommend performing due diligence to ensure the provider’s system complies with professional standards and regulations. For tax-related engagements, firms must also ensure compliance with IRC §7216 and the associated regulations, which restrict the use or disclosure of taxpayer information to third parties (including certain AI platforms). In some cases, written taxpayer consent may be required before transmitting data to external systems.

When conducting due diligence, firms should research the AI provider’s reputation and confirm whether the provider has a history of training its models on unauthorized data. Firms should also review the provider’s terms of use to understand how data and output are handled. Some vendors may reserve rights to access, store, reuse, or further train models on firm-generated content unless explicitly restricted. Contract terms should clearly state that the firm maintains ownership of its work product and client-related materials, that the provider will not use the firm’s data to train or improve models unless expressly authorized, and that confidentiality and data-protection obligations apply.

Q5. What should our firm do to mitigate potential risks as we move forward with AI adoption?

A5. As you explore the opportunities afforded by generative AI, it is imperative to understand its overall risks and countervailing safeguards to develop an appropriate comprehensive AI governance framework (see Question 3 in Section I above) for your firm. In addition, successful integration of generative AI tools requires a well-crafted implementation plan, including specific firm education and training to ensure responsible use. Firms should document that employees receive training regarding responsible use, confidentiality safeguards, verification of accuracy, and escalation procedures for any questionable AI-generated output.

Important aspects of maintaining confidentiality are ensuring data privacy and mitigating security risks. Firms should encrypt data as appropriate, implement access controls, and adhere to applicable data protection regulations. It may be necessary to consult with qualified legal counsel and update the firm’s Privacy Policy to ensure transparency about the categories of sensitive information collected; the sources of that information; the purposes for its collection; and how the firm stores, secures, and shares such information.
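
Beyond encryption and access controls, one illustrative safeguard is to scrub obvious client identifiers from any text before it is submitted to an external generative AI tool. The brief Python sketch below is a minimal illustration only; the patterns shown (Social Security numbers, EINs, email addresses) are assumptions for demonstration, and a firm’s actual safeguards should be designed with its IT specialists and legal counsel.

    import re

    # Minimal, illustrative redaction pass: replace common U.S. taxpayer
    # identifiers with labeled placeholders before text leaves the firm.
    # These example patterns will not catch every identifier.
    REDACTION_PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # e.g., 123-45-6789
        "EIN": re.compile(r"\b\d{2}-\d{7}\b"),                  # e.g., 12-3456789
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+(\.[\w-]+)+\b"),
    }

    def redact(text: str) -> str:
        """Replace matched identifiers with labeled placeholders."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label}]", text)
        return text

    print(redact("Client SSN 123-45-6789, EIN 12-3456789, contact jane@example.com."))
    # Prints: Client SSN [REDACTED SSN], EIN [REDACTED EIN], contact [REDACTED EMAIL].

Redaction of this kind complements, rather than replaces, encryption, access controls, and any taxpayer consent required under IRC §7216.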

CAMICO also believes that a clear and concise generative AI policy to document your firm’s authorized usage is paramount in establishing responsible use of AI. Please see CAMICO’s generative AI policy template available on CAMICO’s Members-Only Site. As this will be unique to your firm, we recommend working with your firm’s legal counsel and IT specialists, as appropriate, as you develop, tailor, and implement your generative AI strategy and related usage policy.

Q6. Should I inform clients and update my existing engagement letter if my firm is using AI?

A6. Not all AI applications require client disclosure under legal and regulatory guidance as currently promulgated. At this time, the key distinction lies in whether AI is directly interacting with clients or merely a tool assisting professionals with their work. As AI tools become more sophisticated, firms will need to consider when and how to disclose their use of AI to clients.

As laws evolve, AI disclosure may become more important as regulatory bodies increase oversight. Current laws, including the European Union’s (“EU”) General Data Protection Regulation (GDPR), require businesses to inform individuals when AI tools make automated decisions that significantly impact them. The Federal Trade Commission (FTC) has issued warnings about AI transparency, emphasizing that companies must avoid misleading consumers and, therefore, should disclose AI use. Some states, including California, have passed AI transparency laws that specifically require businesses to disclose when customers are engaging with AI rather than humans.

Even if not required, transparency is considered best practice, as it provides an opportunity to reassure your clients of your responsible AI use, and that you are taking reasonable measures to safeguard their data and privacy. Refer to CAMICO’s illustrative engagement letter language below, which should be tailored as appropriate to address your firm’s specific utilization of AI:

Our firm may use generative artificial intelligence (“AI”) tools to improve efficiencies in areas such as tax and accounting research, document drafting, or other analysis to assist us with rendering services to you under the terms of this agreement. We have policies and procedures in place to ensure that any AI-generated content is subject to our firm’s strict quality control guidelines which include protocols for applying professional expertise, judgment, and oversight in the review and interpretation of any AI-generated content. In addition, we maintain reasonable safeguards to ensure responsible use of AI which includes strict adherence to the requirements set forth for confidentiality, privacy, security, and ethical use of AI in accordance with applicable laws and our professional standards.

Q7. If we use an AI tool to automate processes in Human Resources, what should we be considering?

A7. Employers must tread carefully when considering implementing AI in areas such as recruiting, performance management, or compensation. While AI tools promise efficiency and cost savings, they also create new risks, including the potential for discrimination claims. When evaluating potential AI tools, the key is to balance efficiency with compliance to ensure that such AI technology doesn’t undermine fairness or expose the firm to avoidable lawsuits.

For example, one of the biggest legal risks associated with using AI in the recruitment process is “disparate impact,” which arises when a policy or practice that appears neutral on its face results in disadvantaging a protected group. The Equal Employment Opportunity Commission (EEOC) has already issued guidance warning that automated decision-making tools fall under the same anti-discrimination laws as traditional practices. Employers should be aware that claims may be brought not only by rejected applicants, but also by government agencies seeking to enforce civil rights laws. This type of risk underscores the need for firms to do their due diligence on AI tools and conduct bias audits before relying on AI algorithms when making employment decisions.
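
To make the disparate impact concept concrete, one widely used first-pass screen is the “four-fifths” (80%) rule from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if the selection rate for a protected group is less than 80% of the rate for the group with the highest rate, the tool’s outcomes generally warrant closer review. The Python sketch below illustrates the arithmetic only, using hypothetical screening data; it is not a substitute for a formal bias audit designed with legal counsel and qualified specialists.

    # Illustrative four-fifths (80%) rule screen for disparate impact.
    # Group names and counts are hypothetical and for demonstration only.

    # (applicants considered, applicants advanced) per demographic group,
    # as reported by a hypothetical AI resume-screening tool.
    outcomes = {
        "Group A": (200, 120),   # selection rate 0.60
        "Group B": (150, 60),    # selection rate 0.40
    }

    # Selection rate = advanced / considered for each group.
    rates = {group: advanced / considered
             for group, (considered, advanced) in outcomes.items()}

    # Compare each group's rate against the highest observed rate.
    highest = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / highest
        status = "review for disparate impact" if ratio < 0.80 else "passes 80% screen"
        print(f"{group}: selection rate {rate:.2f}, ratio to highest {ratio:.2f} -> {status}")

    # Prints:
    # Group A: selection rate 0.60, ratio to highest 1.00 -> passes 80% screen
    # Group B: selection rate 0.40, ratio to highest 0.67 -> review for disparate impact

A ratio below 0.80 does not by itself establish discrimination, but it signals that the tool’s outcomes deserve scrutiny before the firm continues to rely on them.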

Many states are starting to regulate how employers may use AI in HR practices, and many of these new laws reflect a trend toward transparency and informed consent. It is therefore imperative that firms stay current with federal and state AI-related employment laws. Other proactive, risk-mitigating controls for AI use in HR workflows include:

  • Conducting regular bias audits of AI tools

  • Requiring human review of AI-generated outputs

  • Maintaining transparency with employees and applicants regarding AI use

Q8. What other risk management best practices should I consider?

A8. CAMICO encourages the following additional best practices:

  • Get educated, as AI is here to stay. Learn more about the generative AI tools that are available and take appropriate due diligence steps to assess which, if any, of these tools may be appropriate to utilize in your firm.

  • Develop an implementation strategy. Successful integration of generative AI, or any new technological solution, requires a well-crafted implementation plan, including firm-specific education and training regarding responsible use.

  • Engage experts (IT professionals, legal counsel, etc.) as needed. Consider consulting with an attorney if you have questions regarding compliance with laws and regulations applicable to your firm. IT professionals may also be needed to appropriately address security measures and safeguards for the transmission of confidential client information.

  • Educate employees. Document your firm’s authorized usage (e.g., open use, limited use, or prohibited use) of generative AI and specific AI tools and communicate these terms and conditions to your staff. CAMICO offers a sample Generative Artificial Intelligence Chatbot Usage Policy template for this purpose on CAMICO’s Members-Only Site.

  • Stay informed. CPA firms need to stay informed about evolving AI regulations, as new laws continue to emerge at both state and federal levels in the U.S., in addition to stricter transparency requirements under the European Union (EU) AI Act. By proactively adapting to regulatory changes, businesses can more responsibly mitigate legal risks while leveraging AI’s benefits.

Section III: Additional Resources

Q9. Where can I find additional information regarding AI? 

A9. Refer to the following resources:

Policyholders with questions should contact the Loss Prevention department by email at lp@camico.com or call 1.800.652.1772 and ask to speak with a Loss Prevention Specialist. Additional risk management resources are available on CAMICO’s Members-Only Site.

Published with permission of CAMICO.