Best Practices for Community Association Managers and Board Members Using GenAI

Share this article:

AI has become so prevalent that even the state’s legislature codified “Generative Artificial Intelligence System” (“GenAI” for short) last year. Section 22757.1 of the California Business and Professions Code defines a GenAI system as “an artificial intelligence that can generate derived synthetic content, including text, images, video, and audio, that emulates the structure and characteristics of the system’s training data.” More than a dozen widely used software applications utilize GenAI, each available to consumers for free or at an affordable rate. Although GenAI is a convenient time-saver, particularly for non-sensitive and non-substantive tasks, managers and board members of community associations should exercise the utmost discretion when using it to ensure that fiduciary duties are maintained, especially the duties of confidentiality and care.

First is the duty of confidentiality. In part, this duty requires managers and board members to protect personally identifiable information (“PII” for short) from being compromised. Software applications that utilize GenAI can retain data and metadata, meaning whatever is input into a GenAI tool may be stored and transmitted elsewhere. For example, uploading an owner’s vehicle registration document containing a resident’s name, address, birthdate, and driver’s license number into ChatGPT would be enough for a third party to identify that resident. If that information were extracted from OpenAI’s database through an inadvertent or intentional data breach, or if OpenAI were to change its terms of service and sell uploaded information to third-party services, either scenario could constitute a breach of confidentiality on the part of the party that uploaded the data. This could, in turn, lead to violations of regulations and laws, such as the California Consumer Privacy Act. That is why it remains a best practice to never share PII with vendors that offer GenAI. Taking the time to educate team members on what constitutes PII and how to avoid sharing it helps an association demonstrate its compliance with federal and state privacy laws.

In practice, use only vendors that post clear privacy policies. Avoid copying and pasting an email or document into a GenAI prompt, and if you must, redact all PII, including the metadata, before uploading. Alternatively, use generic prompts when asking GenAI for help, such as “draft a concise letter to the Association’s Board President about X” or “provide three different ways to tell Resident A that they violated a rule about Y.” Literally type “Resident A” in the prompt; then use Find & Replace to substitute the owner’s actual name for “Resident A” before you send the letter. You can also tailor follow-ups to initial prompts, such as “revise the second paragraph in simpler terms,” to engage with GenAI rather than merely copying its initial output.
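For managers comfortable with light scripting, the redact-then-personalize workflow above can be sketched in a few lines of Python. This is a minimal illustration only; the redaction patterns and helper names (`redact_pii`, `personalize`) are assumptions made for this example, not features of any particular GenAI vendor or tool.

```python
import re

# Illustrative patterns for common PII. These are not exhaustive --
# names, account numbers, and addresses in other formats would need
# additional rules, and nothing here inspects document metadata.
PII_PATTERNS = {
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",
    r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b": "[PHONE]",
}

def redact_pii(text: str) -> str:
    """Replace recognizable PII with generic placeholders before prompting."""
    for pattern, placeholder in PII_PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

def personalize(draft: str, placeholder: str, actual_name: str) -> str:
    """Swap the owner's real name back in after GenAI returns its draft."""
    return draft.replace(placeholder, actual_name)
```

Simple pattern matching will not catch every form of PII, so a script like this supplements a careful manual review; it does not replace one.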

Second is the duty of care, which requires, in part, that fiduciaries use reasonable effort to make informed, competent decisions and act in good faith. In this context, reasonable effort means that GenAI outputs are verified for accuracy before any subsequent decision-making steps, such as posting notices to members, tenants, and guests. A manager or board member can demonstrate competency with GenAI by taking the time to understand its capabilities, limitations, and appropriate uses. For instance, substantive tasks such as amending a governing document require complex decision-making that is better suited to the association’s legal counsel than to GenAI. Reviewing the substance of any final product created with GenAI is crucial before it is distributed to board members, management employees, or anyone else.

Acting in good faith requires diligence and transparency, which is especially important when work product is assisted by GenAI. Diligence can be demonstrated by disclosing how GenAI was utilized and the steps taken afterward. Regardless of how trustworthy GenAI may seem, it is wise to treat its output as a first draft and to document how a human verified it for accuracy. To demonstrate transparency, managers and board members should implement internal policies that track all daily operations assisted by GenAI, regardless of how small or mundane a task may seem. Peers can hold each other accountable whenever they spot the use of GenAI without disclosure; doing so ensures compliance, especially if they are ever asked to disclose such information in a document request under Civil Code Section 5200.

In summary, all fiduciaries should utilize GenAI with privacy and care in mind. Otherwise, absent-minded use of GenAI may expose the manager, the board, or the entire community association to fines, liability, or disciplinary action. Please feel free to reach out to your association’s legal counsel for any questions regarding best practices on using GenAI. Consulting with a certified information privacy professional would also be beneficial. A little caution goes a long way in safeguarding the data and integrity of our community associations.

List of widely used software applications that utilize GenAI, in alphabetical order, each followed by its developer, the year it first became available, and a noteworthy tidbit:

ChatGPT, OpenAI, 2022, referred to as a chatbot, like Alexa, and used for customer service on websites;
Claude, Anthropic, 2023, reportedly named after Claude E. Shannon; employs a constitutional approach to AI;
CoCounsel, Thomson Reuters, 2023, caused sanctions for submitting briefs with hallucinated citations;
Copilot, Microsoft, 2023, integrated with various Office software applications;
DeepSeek, High-Flyer, 2023, known for its low training costs;
Flux AI, Black Forest Labs, 2024, became a leader in AI image generation;
Gemini, Google, 2023, its image generation was paused after producing historically inaccurate and offensive images;
GPT-4, OpenAI, 2023, passed the Uniform Bar Exam, reportedly scoring in the 90th percentile among test takers;
Hippocratic AI, founded by Munjal Shah, 2023, recognized as a leader in generative AI for healthcare;
LLaMA, Meta, 2023, designed to be a base model for running a “local ChatGPT” on a PC;
Midjourney, Midjourney, Inc., 2022, utilized Discord App to create award-winning AI-generated images;
Operator, OpenAI, 2025, designed as an AI agent that can automate online tasks like a person would;
Perplexity, Perplexity AI, 2022, designed as a conversational, LLM-powered answer engine;
Protégé, LexisNexis, 2024, personalized AI-legal assistant with generative and agentic AI capabilities;
Speechify, Speechify Inc., 2022, originated as a text-to-speech platform and is now an AI voice generator;
Veo, Google DeepMind, 2024, released as a multimodal video generation model.