Practical Data: End User Responsible AI Policy Template
Generative AI services are available for everyone to use. Complementing our previous work helping companies establish practical Data Governance and Data Model & Analytics Policies, we have created a sample "End User Responsible AI Policy" template. Implementing an "End User Responsible AI" policy is one effective way to promote safe and responsible AI usage within your organization. Policy documents are never a silver bullet and, in themselves, don't protect you. However, writing down and communicating how you expect everyone to behave when using these services goes a long way in establishing safe and responsible usage, while also signaling that you encourage the use of these tools. Cut and paste this to get started! For more information on how M&A Operating System can help your organization maximize its potential with data and AI, visit https://www.maoperatingsystem.com/contact.
Introduction
As an organization, we encourage all employees to leverage generative and other AI services to maximize their personal and company potential. However, we recognize that this new technology introduces additional risks, including accuracy, bias, data privacy, security, and concerns related to intellectual property rights. This document outlines a principle-based set of policies to guide employees in the responsible use of End-User AI services within the organization, aiming to reduce the risks associated with leveraging these new capabilities.
This document focuses solely on the policy statements that drive responsible end-user AI usage, i.e., the “what”. This policy establishes a definition of “End User AI Services” and a new “AI Governance Committee,” which is tasked with developing procedures to successfully implement and oversee this policy, i.e., the “how”.
End User AI Usage Principles
This End User Responsible AI Policy is founded on three core principles that all end users leveraging AI services must adhere to:
Core Principle #1 – Responsible AI is Everyone's responsibility
As above, we encourage our employees to utilize generative and other AI services to maximize both personal and company potential. The responsible and appropriate use of these tools lies first and foremost with the individuals who use them. As end users, we are all accountable.
We expect everyone who leverages these services to take individual responsibility for the decision to leverage an AI, the choice of tool, the information entered, and accountability for the accuracy and completeness of the generated output, including any ethical, compliance, or legal requirements. If you are unsure about the appropriateness of AI for your specific situation, please stop and consult with your manager or contact a member of the legal and compliance department for guidance.
For clarity, the term “end users” in this policy refers to all employees, contractors, outsourced staff, or any other individual who utilizes AI services on behalf of the organization.
Core Principle #2 – Treat All End User AI Use as an “Untrusted” Third Party for Data Privacy Purposes
When entering any information into AI services (including prompt instructions, documents, images, etc.), you should always treat the service as an “untrusted” third party.
As such, you should never include any information that could compromise client confidentiality or our organization's confidentiality, or that would violate regulatory data protection rules such as GDPR, HIPAA, or CCPA, or any other internal data privacy or information security policy.
All inputs must be carefully sanitized, with personally identifiable or sensitive data excluded or anonymized before use.
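As one illustration of this sanitization step, the sketch below strips a few obvious PII patterns from a prompt before it leaves the organization. The patterns, placeholder tokens, and function name are hypothetical examples only; production deployments should rely on dedicated PII-detection tooling rather than simple regexes.

```python
import re

# Illustrative patterns only; real PII detection requires dedicated tooling.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def sanitize_prompt(text: str) -> str:
    """Replace obvious PII with placeholder tokens before sending to an AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(sanitize_prompt("Contact Jane at jane.doe@example.com or 555-123-4567."))
```

Even with such a screen in place, the end user remains responsible for reviewing what is actually submitted.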
Core Principle #3 – End User AI Services are Intended to Augment Human Activities and Require Human Oversight
End-user AI services, by definition, are personal tools designed to augment and enhance individual productivity and depth of understanding, rather than replace or eliminate human activity. They must not be used as the sole source of information or for final decision-making in any circumstance, with all output being validated by a qualified human.
Specifically:
- AI services may produce biased or inappropriate outputs. End users must assess content for fairness, inclusivity, and ethical soundness, rejecting any output that fails to meet these standards.
- AI-generated outputs may contain errors or misleading content. End users must carefully review, validate, and apply personal professional judgment before incorporating any AI-generated material into work products or decisions.
- Use of AI-generated outputs may be subject to Laws, Regulations, or other Professional Standards. End-users must ensure that their use of external AI services complies with all applicable laws, regulations, industry standards, and professional codes of conduct.
The ultimate responsibility for the output content, accuracy, and appropriateness of any AI service lies solely with the end-user who leverages the service.
Examples of appropriate use of End User AI services include:
- For drafting internal, non-client-specific materials, such as research summaries, preliminary frameworks, or general brainstorming ideas.
- For generating industry-agnostic content that does not rely on or include confidential or identifiable information.
- For assisting in technical prototyping or internal code snippets, provided the outputs are thoroughly reviewed.
Policy Ownership
This policy document is owned by the current Chief AI Officer or equivalent senior executive and approved by the AI Governance Committee. This policy must be reviewed and reapproved at least annually.
Alignment with Other Policies
This policy acknowledges that it is part of a broader, enterprise-wide risk and governance framework, and other policies may refer to the management and use of data and AI. In general, those specific policies and requirements will always take precedence over this broader policy. Any discrepancies or conflicts should be escalated through the respective oversight structures for both policies.
Related policies that touch data include:
- Data Governance Policy – The Data Governance policy outlines the procedures for managing data within an environment. All data leveraged as an input to a critical Data Analytics Model and results as an output should be subject to the guidelines and standards identified through the Data Governance Policy.
- Data Analytics and Model Policy – The Data Analytics and Model Policy outlines the procedures for managing analytics and models that utilize data to generate critical outcomes for financial, Operational, Risk, and Regulatory reporting and decision-making.
- Records Retention Policy – The Records Retention Policy outlines the retention periods for various types of data within the firm, ensuring compliance with business, operational, and legal requirements.
- Data Privacy Policy – The Data Privacy Policy classifies data based on its sensitivity, including Personal Identifiable Information, and describes how it should be handled.
- Cross-Border Data Transfer Policy – The Cross-Border Data Transfer Policy describes policies and processes for managing data across multiple geographic jurisdictions.
- Technology Architecture Policy – The Technology Architecture Policy establishes the standards for designing, implementing, and operating various technology solutions, including the selection of data storage technology.
- Information Security Policy – The Information Security Policy outlines the procedures for managing security risks within the firm, including the secure handling of data.
The list above should be reviewed and validated against existing company policies, with the quick summaries of each policy updated to ensure consistency. Ensure the list includes all policies that reference data content, including definitions, data types, data usage, storage, and architecture. It is essential to acknowledge that, although this is a broad AI usage policy, data is at the core of many related policies.
Policy Scope
This policy defines “End User AI Services” as:
“Any internal or external website, SaaS platform, or service where individual end users enter instructions and other information and receive value-added output that leverages artificial intelligence or machine learning, such as text, code, images, video, or any other form of content.”
Examples of this include, but are not limited to:
- Text and code generation: ChatGPT, Claude, Gemini, Microsoft Copilot, GitHub Copilot
- Image generation: DALL·E
- Audio/video or multimedia generation: Sora, Runway, Descript
- Any publicly or privately available AI chatbots or SaaS tools offering generative content.
This policy does not cover two related but separate usages of artificial intelligence services:
- AI services for final decision-making as an automated process - See Data Analytics Model Policy
- AI services embedded within purpose-built, enterprise-grade applications (e.g., CRM, ERP, IDE applications) – See Information Security Policy and Technology Architecture Policy
It is intended that these two related policies, along with this End User Responsible AI policy, govern all use of AI services.
The final decision regarding policy applicability lies with the AI Governance Committee.
Policy Changes
We expect this policy to evolve and expand over time as the availability of AI services and end users' usage patterns change. All changes to this policy and consequent approvals must be logged in the table below. Significant updates to the policy should be logged as a major version change. Minor incremental changes can be logged through a minor release. The final decision on whether to release a minor or major update resides with the Chief AI Officer, as the owner of this document.
| Version | Date | Approver | Change Summary |
| --- | --- | --- | --- |
| 0.1 | mm-dd-yy | Andrew Bush | Initial Draft for Circulation |
| 1.0 | mm-dd-yy | Andrew Bush | First Approved Release |
End User Responsible AI Policy
Policy Statement #1 – Formation of “AI Governance Committee”
Upon the establishment of this policy, a standing AI Governance Committee will be formed, responsible for all aspects of this policy, as well as for establishing the processes, procedures, and standards required to implement it.
The AI Governance Committee should be chaired by a senior executive with the title of Chief AI Officer or an equivalent, and comprise senior representatives from all business areas, corporate functions, Operations, and Technology. Current membership of this committee should be broadly published and visible across the firm.
The AI Governance Committee should have the authority to influence financial planning and budget planning within departments to meet the policy requirements outlined in this document.
The AI Governance Committee should meet at least quarterly, although it is recommended that it meets more frequently as needed, especially during the early phases of adoption.
The AI Governance Committee is responsible for all aspects of AI governance oversight, including but not limited to:
- Establishing an AI Governance Committee Charter that outlines the committee's purpose, scope, membership requirements, objectives, and operational procedures, including meeting frequency and quorum requirements, and ensuring it is reviewed and (re)approved at least annually.
- Maintaining and publishing this End User Responsible AI Policy over time and recertifying it at least annually.
- Maintaining and publishing an Inventory of Approved End User AI Services (see Policy Statement #2).
- Establishing an AI Quality Program to assess the ongoing safety, use, and performance of approved End User AI Services (see Policy Statement #3).
- Recommending and overseeing solutions to ensure all AI prompts and outputs are logged with appropriate oversight (see Policy Statement #4).
- Defining the AI Guardrails required to minimize inappropriate AI usage, and recommending and overseeing solutions to ensure they are applied (see Policy Statement #5).
- Establishing a set of standard AI Roles and Responsibilities for the governance of end-user AI services, and publishing the current list of role owners (see Policy Statement #6).
- Establishing an AI Incident Management and Reporting Program (AI-IMR) to manage the timely identification and resolution of issues relating to the use of AI services (see Policy Statement #7).
- Developing and maintaining an End User Responsible AI Training Program to ensure everyone knows how to leverage AI services at the organization safely (see Policy Statement #8).
- Measuring and monitoring End User Responsible AI Policy adoption across the organization.
- Publishing an Annual Report on the firm's End User Responsible AI usage.
The AI Governance Committee may delegate the execution of these responsibilities to the Chief AI Office on its behalf, while continuing to provide oversight.
Policy Statement #2 – Maintain an Inventory of Approved End-User AI Services
To apply good governance and standards, this policy establishes the need to maintain an inventory of approved (and rejected) end-user AI services for use across the organization.
The inventory should contain information on the vendor, model, platform type, recommended usage, ownership, and contacts, as well as any other relevant supporting documentation.
The inventory should capture situations where multiple instances, versions, and implementations of the same model or service exist in the live environment and track their lifecycle independently.
The AI Governance Committee is responsible for establishing processes and procedures for an accurate inventory of end-user AI services.
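As a sketch of what one inventory record might capture, the snippet below models a single deployed instance or version per record, so lifecycles can be tracked independently as Policy Statement #2 requires. All field names and example values are hypothetical, not prescribed by this policy.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ServiceStatus(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"
    UNDER_REVIEW = "under_review"
    RETIRED = "retired"

@dataclass
class AIServiceRecord:
    """One entry per instance/version so each lifecycle is tracked independently."""
    vendor: str
    model: str
    version: str
    platform_type: str        # e.g. "SaaS chat", "IDE plugin"
    recommended_usage: str
    business_owner: str
    support_contact: str
    status: ServiceStatus
    last_reviewed: date

# Hypothetical example entry.
inventory = [
    AIServiceRecord("OpenAI", "ChatGPT", "enterprise-tenant-1", "SaaS chat",
                    "Drafting non-client internal materials", "J. Smith",
                    "ai-support@example.com", ServiceStatus.APPROVED,
                    date(2024, 1, 15)),
]

# End users would consult only the approved subset.
approved = [r for r in inventory if r.status is ServiceStatus.APPROVED]
```

In practice the inventory would live in a shared system of record rather than code, but the fields above mirror the attributes the policy asks the committee to track.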
Policy Statement #3 – Quality Control Process for Evaluating Ongoing End-User AI Services
In addition to maintaining an inventory of approved end-user AI services, this policy establishes the need to define a process for evaluating end-user AI services based on various factors.
Example factors could include, but are not limited to:
- Security – Does the platform conform to the latest Information Security Policies?
- Data Privacy – Does the platform comply with the latest Data Privacy policy, including the requirement to never use inputted data for ongoing model training?
- Appropriate Usage – Is there a clear business case that states the appropriate uses of the tool?
- Bias, Fairness, Pre-Existing IP Rights, and Other Ethical Standards – Are there any known issues relating to these topics specific to this model that should be explicitly identified as concerning usage?
- Best in Class – Does this platform offer material improvements over other services already identified and in use at the organization?
- Cost – Does this platform offer similar or better performance than other services already identified and in use at the organization, at a comparable cost?
The existence of an AI services quality review process does not negate the personal responsibilities outlined in this document for end-users leveraging AI services.
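One hypothetical way to record the outcome of a review against the example factors above is a simple rubric; the field names and approval logic here are illustrative only, and each organization would set its own gates.

```python
from dataclasses import dataclass

@dataclass
class QualityReview:
    """Illustrative review record; factor names follow the policy's example list."""
    service: str
    security_compliant: bool       # conforms to Information Security Policies
    privacy_compliant: bool        # includes "no training on inputs" guarantee
    clear_business_case: bool
    known_ethical_issues: list[str]  # bias, fairness, pre-existing IP concerns
    best_in_class: bool
    cost_acceptable: bool

    def approve(self) -> bool:
        """Security and privacy act as hard gates in this sketch;
        the remaining factors inform, but do not block, the decision."""
        return (self.security_compliant and self.privacy_compliant
                and self.clear_business_case and self.cost_acceptable)

review = QualityReview("ExampleChat", True, True, True, [], False, True)
```

Any ethical issues recorded would be surfaced to end users as usage caveats rather than silently dropped.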
Policy Statement #4 – All End User AI Prompts and Outputs Must be Logged
To enable the organization to understand usage patterns and facilitate governance and oversight functions, all prompts and any other information entered into a model, including attachments, as well as all resulting outputs, should be logged and made available to specific legal and compliance functions as needed.
It is the responsibility of end users to ensure they are leveraging an approved end-user AI service from a device with logging enabled. It is not sufficient to leverage an approved model from a personal account on a personal mobile device that does not facilitate prompt and output logging.
Examples of appropriate use of these logs include:
- Internal audits
- Responses to regulatory requests
- Suspected policy non-compliance
- Employee performance-related situations
- Ongoing AI usage and corporate strategy development
The AI Governance Committee is responsible for establishing the correct records retention rules for AI logging in line with the firm's records management policy.
Policy Statement #5 – Ensure appropriate AI Responsibility Guardrails are in place
To protect individuals and organizations from issues related to bias, personal harm, legal exposure, and pre-existing intellectual property rights, appropriate guardrails must be in place to screen input prompts and output content for inappropriate material. Inbound guardrail checks must be applied before any information is passed to an AI service, and outbound guardrail checks must be completed before output is presented to the end user.
Examples of appropriate guardrails for input and output information include:
- Prompt Attack and Prompt Hacking Protection
- PII and other sensitive personal data
- Client confidential information
- Hate and Fairness
- Sexual content
- Violence and harmful content
- Pre-existing (public) intellectual property rights
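The inbound/outbound flow described above could be sketched as follows. The keyword screen, function names, and blocked-term list are purely illustrative stand-ins for the trained classifiers and policy engines a production guardrail layer would use.

```python
from typing import Callable

# Each guardrail inspects text and returns violation descriptions (empty = pass).
Guardrail = Callable[[str], list[str]]

def contains_blocked_terms(text: str) -> list[str]:
    # Illustrative keyword screen; real guardrails would use trained classifiers.
    blocked = {"confidential", "internal only"}
    return [f"blocked term: {t}" for t in blocked if t in text.lower()]

INBOUND: list[Guardrail] = [contains_blocked_terms]
OUTBOUND: list[Guardrail] = [contains_blocked_terms]

def run_guardrails(text: str, checks: list[Guardrail]) -> list[str]:
    return [v for check in checks for v in check(text)]

def guarded_call(prompt: str, send_to_model: Callable[[str], str]) -> str:
    """Apply inbound checks before the model sees the prompt,
    and outbound checks before the user sees the output."""
    if violations := run_guardrails(prompt, INBOUND):
        raise ValueError(f"Prompt blocked: {violations}")
    output = send_to_model(prompt)
    if violations := run_guardrails(output, OUTBOUND):
        raise ValueError(f"Output blocked: {violations}")
    return output
```

The key design point the sketch illustrates is that both directions are screened independently, so a clean prompt cannot smuggle through a problematic output.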
The AI Governance Committee is responsible for establishing and maintaining the list of required guardrails. This list should be reviewed at least semiannually against the latest industry best practices.
Policy Statement #6 - Common Roles and Responsibilities for Management of End-User AI Services
To facilitate the usage and oversight of AI services, this policy defines a standard list of AI roles applicable to all end-user services. Collectively, these roles are defined to reinforce the core principles laid out in this policy.
| Role | Description | Sample Responsibilities |
| --- | --- | --- |
| Chief AI Officer / AI Sponsor | The AI Sponsor is a senior executive role, broadly responsible for all aspects of End User AI usage at the organization, often holding the title of Chief AI Officer or an equivalent position. | |
| AI End User | An AI End User is any person within the organization who chooses to leverage an AI model through the input of prompt instructions or other information to generate an output. | |
| AI Expert | An AI Expert is an individual within the firm with substantial knowledge and understanding of the model type, its target usage, any associated risks, and any associated implementation details. | |
| AI Quality Reviewer | An AI Quality Reviewer is an individual responsible for the (ideally independent) quality review assessing models at initial implementation and during ongoing use at the firm. | |
| AI Compliance Reviewer | An AI Compliance Reviewer is an individual within the firm with broad compliance oversight responsibilities, tasked with ensuring that all AI prompts and outputs comply with the organization’s policies, regulatory requirements, and ethical standards. | |
Other roles continue to apply as defined by other governance policies, including, but not limited to, Technology Application Business Owner, Technology Application Technology Owner, Information Security Reviewers, etc.
The AI Governance Committee is responsible for establishing processes and procedures to identify individuals for roles and ensure that all roles are consistently filled for all live services in the AI Inventory.
Policy Statement #7 – Establishment of “AI Incident Management Reporting” (AI-IMR) program
To ensure the timely resolution of structural issues arising from employee use of end-user AI services, this policy establishes a process for escalating, tracking, and resolving problems.
The AI Governance Committee is responsible for the scope and definition of AI issues, examples of which could include:
- Inadvertent disclosure of client confidential or organizationally sensitive information
- Over-reliance on unvalidated AI output without appropriate review and validation
- Deliberate generation of biased or inappropriate content
- Use of unauthorized AI services (or the use of authorized services on unauthorized devices)
- Unintended Intellectual property or plagiarism concerns in the resulting output
- Discovery of material errors or factual misrepresentations in AI output.
The AI Governance Committee is responsible for defining the exact AI-IMR process, including the workflow tool and reporting mechanisms. This encompasses methods for identifying and escalating issues for consideration, prioritizing issues, assigning ownership, and validating post-resolution outcomes.
The AI Governance Committee is responsible for publishing the current process and status of current IMRs for broad company-wide consumption.
Policy Statement #8 – Establish an “End User Responsible AI” Training Program
To encourage the broad and responsible use of AI services across the entire organization, this policy establishes the need for an organization-wide training program that educates every individual within the organization on their role in ensuring the safe and responsible use of AI services. This can be combined with a broader AI training program.
Example topics for inclusion in an “End User Responsible AI Training Program” should include:
- What is “End User AI?”
- Principles of End User Responsible AI
- Everyone’s role in ensuring Responsible AI
- Risks and implications of inappropriate AI usage
- This policy and how the AI Governance Committee works to ensure responsible AI at the organization
- How to raise issues and concerns with AI usage
- Other available training materials on AI usage
The AI Governance Committee is responsible for overseeing the overall content of the training course, ensuring that it remains up-to-date and relevant in line with this policy, and that all individuals within the organization participate in the training regularly.