01/Objectives
- Know what the policy covers. Who is in scope, and what counts as AI use.
- Use the three-tier risk model correctly. Classify any AI use you encounter.
- Protect client and RCG confidential information. Never input it without authorization.
- Use only approved tools and accounts for all RCG business work.
- Know how to escalate questions or concerns. Pause and ask when unsure.
RCG · AI Use Policy · 01 / 19
02/Stakes
- Client trust. Protecting the confidentiality and confidence clients place in RCG.
- Security & compliance. Meeting security and regulatory obligations.
- Accuracy & integrity. Upholding professional accuracy and integrity.
- Accountability. Misuse can lead to disciplinary action.
03/Scope
- All RCG personnel — employees, contractors, and consultants.
- All AI tools used for RCG work.
- Embedded AI features in common software.
- Company and personal devices used for RCG work.
04/Client Trust
- Be transparent with clients when required or expected.
- Do not misrepresent AI-assisted work as purely human-created when disclosure is required.
- Client obligations override convenience.
05/Principles
- Fairness and non-discrimination.
- Accuracy and integrity — verify outputs.
- Human oversight and accountability.
- Transparency and honesty.
- Respect for privacy and intellectual property.
06/Hard Limits
- Misleading, deceptive, or fraudulent content.
- Impersonation of people, organizations, or clients.
- Circumventing security controls or restrictions.
- Automating decisions without appropriate human review.
- Concealing AI usage when disclosure is required.
Prohibited regardless of tool, feasibility, or business justification.
07/Principle · 01
- AI assists; it does not replace judgment.
- You are responsible for outputs and decisions.
- Review and validate AI output before using it.
“The AI said so” is never an acceptable justification.
08/Principle · 02
- Never input without authorization. Client, proprietary, and confidential information requires both authorization and safeguards before it goes into any AI tool.
- When in doubt, do not use AI. Treat data as confidential and default to the safest handling.
09/Principle · 03
- Pick the lowest-risk AI use that meets the need.
- Higher-risk uses require more controls and approvals.
10/Framework
- Low risk. Routine internal tasks with no confidential or client data.
- Medium risk. Internal systems, development, and data analysis with extra controls.
- High risk. Client data, client-facing deliverables, and AI-enabled delivery — strictest controls and approvals.
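The tier framework above is a decision procedure: check high-risk triggers first, then medium, and default to low. A minimal sketch, assuming hypothetical yes/no flags (the function and parameter names are illustrative, not part of the policy):

```python
def classify_ai_use(client_data: bool, client_facing: bool,
                    internal_systems: bool, confidential: bool) -> str:
    """Illustrative tier classification in the policy's order of severity.

    High-risk triggers (client data, client-facing work) are checked
    first; medium-risk conditions (internal systems/development/analysis,
    confidential RCG information) next; everything else defaults to low.
    """
    if client_data or client_facing:
        return "high"    # strictest controls and approvals
    if internal_systems or confidential:
        return "medium"  # extra controls required
    return "low"         # routine internal, non-confidential use

# Drafting a non-confidential internal agenda -> "low"
# Internal analysis of proprietary data      -> "medium"
# Summarizing a client deliverable           -> "high"
```

The order of the checks matters: a use that touches client data is high risk even if it also looks like routine internal analysis.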
11/Tier · Low Risk
- Drafting internal emails and agendas (non-confidential).
- Summarizing publicly available information.
- Brainstorming ideas for internal discussions.
- Formatting, editing, or proofreading non-confidential internal documents.
- Learning using hypothetical or public information.
12/Tier · Low Risk · Exclusions
- Any client data, client names, or project details under confidentiality.
- RCG proprietary information, financials, strategy, or personnel information.
- External-facing communications or deliverables without review and escalation.
- Any personal AI accounts or unapproved tools.
Most policy violations start as “just one detail.” Don’t share it.
13/Tier · Medium Risk
- Applies to software development, system configuration, internal data analysis, and automation.
- No credentials, keys, or tokens in prompts.
- AI-generated code must be reviewed and tested before production use.
- Where required, confirm tools do not retain or train on proprietary code.
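The "no credentials, keys, or tokens in prompts" rule lends itself to a simple pre-send check. The patterns below are illustrative examples of common secret formats, not an exhaustive or official list; a real deployment would use a maintained secret scanner:

```python
import re

# Illustrative patterns only (assumed for this sketch, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),
]

def prompt_is_safe(prompt: str) -> bool:
    """Return False if the prompt appears to contain a credential."""
    return not any(p.search(prompt) for p in SECRET_PATTERNS)

# prompt_is_safe("Refactor this loop to be faster")  -> True
# prompt_is_safe("api_key = sk_live_abc123")         -> False
```

A check like this catches accidents, not determined misuse; it supplements the review-and-test requirement above rather than replacing it.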
14/Tier · High Risk · Triggers
- Client data involved.
- Client-facing deliverables.
- AI-enabled service delivery.
- Work product that directly impacts client outcomes.
If it touches clients or client data, assume high risk until confirmed otherwise.
15/Tier · High Risk · Requirements
- Client authorization when required or expected.
- Agreements and technical safeguards in place before using client data.
- Human review and validation of AI-assisted deliverables is mandatory.
- Tool approval via Appendix A before client-facing use.
If any condition is missing, stop and escalate.
16/Client Data
- Client data must reside only in approved repositories and systems.
- Storage outside approved repositories is not authorized.
- Local storage is prohibited unless explicit written approval is granted.
- Temporary local access must be authorized, encrypted, time-limited, and securely deleted.
17/Tooling & Governance
- Use only tools on the approved list, unless you have explicit written authorization for a specific exception.
- If a tool isn’t on the list, submit a request — don’t “just try it.”
- ITMS, Partners, and Information Security maintain the list. Reviewed quarterly.
The list grows — but new tools enter through the process, not around it.
18/Personal Accounts
- No personal AI accounts or subscriptions for any RCG business use. Not the free tier, not “just this once.”
- Includes consumer-grade AI features that don’t have enterprise controls in place.
- Limited exception: personal learning only. Public info, your own time and device, not used in any RCG work product.
Keep a hard boundary between personal learning and RCG work.
19/Operations & Help
- Installation requires IT approval. Installing AI apps or browser extensions on RCG devices needs IT sign-off — same as any other software.
- Usage may be audited. RCG may monitor and audit AI tool usage. No expectation of privacy for RCG-managed tools or accounts.
- Pause and escalate. If unsure or concerned, escalate to your supervisor, the AI Governance Committee, or IT Security.