Generative AI services are available for everyone to use. Complementing our previous work helping companies establish practical Data Governance and Data Model & Analytics Policies, we have created a sample "End User Responsible AI Policy" template. Implementing an "End User Responsible AI" policy is one effective way to promote safe and responsible AI usage within your organization. Policy documents are never a silver bullet and, in themselves, don't protect you. However, writing down and communicating how you expect everyone to behave when using these services goes a long way toward establishing safe and responsible usage, while also signaling that you encourage the use of these tools. Cut and paste this template to get started! For more information on how M&A Operating System can help your organization maximize its potential with data and AI, visit https://www.maoperatingsystem.com/contact.
As an organization, we encourage all employees to leverage generative and other AI services to maximize their personal and company potential. However, we recognize that this new technology introduces additional risks, including risks related to accuracy, bias, data privacy, security, and intellectual property rights. This document outlines a principle-based set of policies to guide employees in the responsible use of End-User AI services within the organization, aiming to reduce the risks associated with leveraging these new capabilities.
This document focuses solely on the policy statements that drive responsible end-user AI usage, i.e., the “what”. This policy establishes a definition of “End User AI Services” and a new “AI Governance Committee,” which is tasked with developing procedures to successfully implement and oversee this policy, i.e., the “how”.
This End User Responsible AI Policy is founded on three core principles that all end users leveraging AI services must adhere to:
As stated above, we encourage our employees to utilize generative and other AI services to maximize both personal and company potential. The responsible and appropriate use of these tools lies first and foremost with the individuals who use them. As end users, we are all accountable.
We expect everyone who leverages these services to take individual responsibility for the decision to leverage an AI, the choice of tool, the information entered, and accountability for the accuracy and completeness of the generated output, including any ethical, compliance, or legal requirements. If you are unsure about the appropriateness of AI for your specific situation, please stop and consult with your manager or contact a member of the legal and compliance department for guidance.
For clarity, the term 'end users' in this policy refers to all employees, contractors, outsourced staff, and any other individuals who utilize AI services on behalf of the organization.
When entering any information into AI services, including prompt instructions, documents, images, etc., you should, as an end user, always treat the services as an “untrusted” third party.
As such, as an end user, you should never include any information that could compromise client confidentiality or our organization's confidentiality, or that would violate regulatory data protection rules such as GDPR, HIPAA, or CCPA, or any other internal data privacy or information security policy.
All inputs must be carefully sanitized, with personally identifiable or sensitive data excluded or anonymized before use.
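As an illustration only, the sanitization step described above can be sketched as a simple redaction pass run before a prompt leaves the organization. The patterns below are placeholders, not an exhaustive or approved list; a real deployment would use the PII-detection tooling your organization has sanctioned.

```python
import re

# Hypothetical patterns -- extend to cover your organization's own
# definitions of sensitive data (client names, account numbers, etc.).
# SSN is checked before PHONE so the more specific pattern wins.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def sanitize(prompt: str) -> str:
    """Replace personally identifiable data with placeholder tokens
    before the prompt is sent to an untrusted AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt
```

Automated redaction of this kind supplements, but does not replace, the end user's own judgment about what is appropriate to enter.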
End-user AI services, by definition, are personal tools designed to augment and enhance individual productivity and depth of understanding, rather than replace or eliminate human activity. They must not be used as the sole source of information or for final decision-making in any circumstance; all output must be validated by a qualified human.
Specifically:
The ultimate responsibility for the output content, accuracy, and appropriateness of any AI service lies solely with the end-user who leverages the service.
Examples of appropriate use of End User AI services include:
This policy document is owned by the current Chief Data AI Officer or equivalent senior executive and approved by the AI Governance Steering Committee. This policy must be reviewed and reapproved at least annually.
This policy acknowledges that it is part of a broader, enterprise-wide risk and governance framework, and other policies may refer to the management and use of data and AI. In general, those specific policies and requirements will always take precedence over this broad AI policy. Any discrepancies or conflicts should be escalated through the respective oversight structures for both policies.
Related policies that touch on data and AI include:
The list above should be reviewed and validated against existing company policies, with the quick summaries of each policy updated to ensure consistency. Ensure the list above includes all policies that reference data content, including definitions, data types, data usage, storage, and architecture. It is essential to acknowledge that, although this is a broad general AI usage policy, data is often at the core of many other policies.
This policy defines “End User AI Services” as:
“Any internal or external website, SaaS platform, or service where individual end users enter instructions and other information and receive value-added output that leverages artificial intelligence or machine learning, such as text, code, images, video, or any other form of content.”
Examples of this include, but are not limited to:
This policy does not cover two related but separate usages of artificial intelligence services:
It is intended that these two related policies, along with this End User Responsible AI policy, govern all use of AI services.
The final decision regarding policy applicability lies with the AI Governance Committee.
We expect this policy to evolve and expand over time as the availability of AI services and end-users' usage change. All changes to this policy and consequent approvals must be logged in the table below. Significant updates to the policy should be logged as a major version change. Minor incremental changes can be logged through a minor release. The final decision on whether to release a minor or major update resides with the Chief AI Officer, as the owner of this document.
Version | Date | CDO Approver | Change Summary
--- | --- | --- | ---
0.1 | mm-ddd-yy | Andrew Bush | Initial Draft for Circulation
1.0 | mm-ddd-yy | Andrew Bush | First Approved Release
Following the adoption of this policy document, an ongoing AI Governance Committee will be established, responsible for all aspects of this policy and for defining the processes, procedures, and standards required to implement it.
The AI Governance Committee should be chaired by a senior executive with the title of Chief AI Officer or an equivalent, and comprise senior representatives from all business areas, corporate functions, Operations, and Technology. Current membership of this committee should be broadly published and visible across the firm.
The AI Governance Committee should have the authority to influence financial and budget planning within departments to meet the policy requirements outlined in this document.
The AI Governance Committee should meet at least quarterly, although it is recommended that it meets more frequently as needed, especially during the early phases of adoption.
The AI Governance Committee is responsible for all aspects of AI governance oversight, including but not limited to:
The AI Governance Committee may decide to delegate the execution of these responsibilities to the Chief AI Office on its behalf, while retaining oversight.
To apply good governance and standards, this policy establishes the need to maintain an inventory of approved (and rejected) end-user AI services for use across the organization.
The inventory should contain information on the vendor, model, platform type, recommended usage, ownership, and contacts, as well as any other relevant supporting documentation.
The inventory should capture situations where multiple instances, versions, and implementations of the same model or service exist in the live environment and track their lifecycle independently.
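The inventory fields listed above can be sketched as a simple record type. This is a minimal illustration, not a prescribed schema; the field names and status values are assumptions, and the AI Governance Committee's procedures would define the actual structure.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIServiceRecord:
    """One entry in the end-user AI service inventory.

    An `instance_id` distinguishes multiple instances, versions, and
    implementations of the same model so each can be tracked
    independently through its lifecycle."""
    vendor: str
    model: str                    # e.g. model family and version
    instance_id: str              # unique per live instance/implementation
    platform_type: str            # e.g. "SaaS", "internal", "browser plug-in"
    recommended_usage: str
    owner: str                    # accountable business owner
    contact: str
    status: str = "under review"  # e.g. "approved", "rejected", "retired"
    reviewed_on: Optional[date] = None
    notes: str = ""               # links to supporting documentation
```

A record like this makes it straightforward to answer the basic oversight questions: which services are approved, who owns them, and when they were last reviewed.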
The AI Governance Committee is responsible for establishing processes and procedures for an accurate inventory of end-user AI services.
In addition to maintaining an inventory of approved end-user AI services, this policy establishes the need to define a process for evaluating end-user AI services based on various factors.
Example factors could include, but are not limited to:
The existence of an AI services quality review process does not negate the personal responsibilities outlined in this document for end-users leveraging AI services.
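One possible shape for such an evaluation process is a weighted scoring rubric. The factors, weights, and threshold below are entirely hypothetical placeholders; the actual factors are for the AI Governance Committee to define.

```python
# Hypothetical rubric -- factors, weights, and the approval threshold
# are illustrative only.
WEIGHTS = {
    "accuracy": 0.30,
    "data_privacy": 0.30,
    "security_posture": 0.20,
    "vendor_terms": 0.20,
}

def evaluate(scores: dict, approval_threshold: float = 0.7):
    """Combine per-factor scores (each 0.0-1.0) into a weighted total
    and a recommendation for the AI service inventory."""
    total = sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)
    recommendation = "approve" if total >= approval_threshold else "reject"
    return total, recommendation
```

Recording the per-factor scores alongside the final recommendation keeps the review auditable when a service is later re-evaluated.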
To enable the organization to understand usage patterns and facilitate governance and oversight functions, all prompts and any other information entered into a model, including attachments, as well as all resulting outputs, should be logged and made available to specific legal and compliance functions as needed.
It is the responsibility of end users to ensure they are leveraging an approved end-user AI service from a device with logging enabled. It is not sufficient to leverage an approved model from a personal account on a personal mobile device that does not facilitate prompt and output logging.
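The logging requirement above can be sketched as a thin wrapper around whatever client function invokes the approved AI service. This is a simplified illustration; `call_model`, the logger name, and the record fields are assumptions, and a production implementation would write to the firm's approved audit store.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger -- in practice this would feed the system
# that legal and compliance functions query.
audit_log = logging.getLogger("ai.audit")

def logged_completion(call_model, user_id: str, service: str, prompt: str) -> str:
    """Invoke an approved AI service and record the full prompt and
    output as a structured audit event before returning the result."""
    output = call_model(prompt)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "service": service,
        "prompt": prompt,
        "output": output,
    }))
    return output
```

Routing every call through a wrapper like this is what makes the "approved service, logging enabled" requirement enforceable, in contrast to ad hoc use from a personal account.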
Examples of appropriate use of these logs include:
The AI Governance Committee is responsible for establishing the correct records retention rules for AI logging in line with the firm's records management policy.
To protect individuals and organizations from issues related to bias, personal harm, legal issues, and pre-existing intellectual property rights, appropriate guardrails must be in place to assess input prompts and output content for inappropriate content. Inbound guardrails must be applied before any information is passed to an AI service, and outbound guardrails checks must be completed before output is presented to the end user.
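The inbound/outbound flow described above can be sketched as a pipeline that refuses to proceed when a check fails. The check functions here are crude placeholders; real guardrails would use the organization's approved content-safety and PII-detection tooling.

```python
def inbound_checks(prompt: str) -> list:
    """Assess a prompt BEFORE it is passed to an AI service."""
    issues = []
    if "@" in prompt:  # crude stand-in for real PII detection
        issues.append("possible email address in prompt")
    return issues

def outbound_checks(output: str) -> list:
    """Assess generated output BEFORE it is shown to the end user."""
    issues = []
    if not output.strip():
        issues.append("empty or unusable output")
    return issues

def guarded_call(call_model, prompt: str) -> str:
    """Run inbound guardrails, invoke the model, then run outbound
    guardrails; block the interaction if either stage finds issues."""
    problems = inbound_checks(prompt)
    if problems:
        raise ValueError(f"prompt blocked by inbound guardrails: {problems}")
    output = call_model(prompt)
    problems = outbound_checks(output)
    if problems:
        raise ValueError(f"output blocked by outbound guardrails: {problems}")
    return output
```

The key design point is ordering: no information reaches the service until the inbound stage passes, and no output reaches the user until the outbound stage passes.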
Examples of appropriate guardrails for input and output information include:
The AI Governance Committee is responsible for establishing and maintaining the list of required guardrails. This list should be reviewed at least semiannually against the latest industry best practices.
To facilitate the usage and oversight of AI services, this policy defines a standard list of AI roles applicable to all end-user services. Collectively, these roles are defined to reinforce the core principles laid out in this policy.
Role | Description | Sample Responsibilities
--- | --- | ---
Chief AI Officer / AI Sponsor | The AI Sponsor is a senior executive role, broadly responsible for all aspects of End User AI usage at the organization, often holding the title of Chief AI Officer or an equivalent position. |
AI End User | An AI End User is any person within the organization who chooses to leverage an AI model through the input of prompt instructions or other information to generate an output. |
AI Expert | An AI Expert is an individual within the firm with substantial knowledge and understanding of the model type, its target usage, any associated risks, and any associated implementation details. |
AI Quality Reviewer | An AI Quality Reviewer is an individual, ideally independent, responsible for the quality assessment of models at initial implementation and during their ongoing use at the firm. |
AI Compliance Reviewer | An AI Compliance Reviewer is an individual within the firm with broad compliance oversight responsibilities, tasked with ensuring that all AI prompts and outputs comply with the organization’s policies, regulatory requirements, and ethical standards. |
Other roles continue to apply as defined by other governance policies, including, but not limited to, Technology Application Business Owner, Technology Application Technology Owner, Information Security Reviewers, etc.
The AI Governance Committee is responsible for establishing processes and procedures to identify individuals for roles and ensure that all roles are consistently filled for all live services in the AI Inventory.
To ensure the timely resolution of structural issues arising from employee use of end-user AI services, this policy establishes a process for escalating, tracking, and resolving problems.
The AI Governance Committee is responsible for the scope and definition of AI issues, examples of which could include:
The AI Governance Committee is responsible for defining the exact AI-IMR process, including the workflow tool and reporting mechanisms. This encompasses methods for identifying and escalating issues for consideration, prioritizing issues, assigning ownership, and validating post-resolution outcomes.
The AI Governance Committee is responsible for publishing the current process and the status of open IMRs for broad company-wide consumption.
To encourage the broad and responsible use of AI services across the entire organization, this policy establishes the need for an organization-wide training program that educates every individual within the organization on their role in ensuring the safe and responsible use of AI services. This can be combined with a broader AI training program.
Example topics for inclusion in an “End User Responsible AI” training program should include:
The AI Governance Committee is responsible for overseeing the overall content of the training course, ensuring that it remains up-to-date and relevant in line with this policy, and that all individuals within the organization participate in the training regularly.