ARQVERA — AI TRANSPARENCY NOTICE
Last updated: 3 December 2025
1. INTRODUCTION
Arqvera uses artificial intelligence (AI) to enhance the quality, speed and consistency of its advisory, assurance, governance and transformation consulting services.
This notice explains how, when, and why we use AI, and the controls we apply to keep it safe, private and responsible.
Arqvera remains committed to transparency, human oversight, and the protection of our clients' information.
2. HOW ARQVERA USES AI
Arqvera uses AI in limited, controlled and responsible ways to:
- support the structuring and summarisation of non-confidential information
- accelerate analysis and draft insights (with human review)
- assist in producing early prototypes of deliverables
- enhance diagnostic models within Trust Arq™, Value Compass™, Capability Mirror™, Change Studio™ and AI.ccelerate™
- analyse anonymised datasets to identify patterns or opportunities
- streamline internal research and knowledge retrieval
AI does not make autonomous decisions, assign risk ratings, produce final deliverables, or replace consultant judgement.
3. WHAT DATA AI TOOLS MAY PROCESS
Arqvera follows a data minimisation principle.
AI tools may process:
- anonymised text
- generic content or frameworks
- non-confidential material
- internal synthetic datasets for testing
- limited structured inputs with client approval
- descriptive information already in the public domain
AI tools do not process:
- personal data, unless explicitly authorised and minimised
- special category data
- sensitive or regulated information
- confidential client data uploaded without consent
- data that could identify a person or organisation unless contractually agreed
All processing is subject to UK GDPR, our Data Processing Agreement (DPA), and client contractual controls.
4. HUMAN OVERSIGHT
AI at Arqvera is always subject to full human oversight:
- A qualified Arqvera consultant reviews, validates and interprets all AI-assisted outputs.
- No model output is used without human quality, accuracy and contextual checks.
- AI cannot approve governance decisions, sign off deliverables, or determine risk or readiness scores.
AI augments human intelligence — it does not replace it.
5. ACCURACY, LIMITATIONS & HALLUCINATIONS
AI systems may generate:
- incomplete, outdated or incorrect information
- fabricated details (hallucinations)
- inconsistent output
Arqvera manages this risk by:
- validating all AI outputs before client use
- cross-checking against Arqvera frameworks
- rejecting or correcting flawed outputs
- documenting key model limitations
Where AI assists in creating deliverables, this will be disclosed to the client if material.
6. DATA PRIVACY, SECURITY & UK GDPR COMPLIANCE
Arqvera ensures that all AI use complies with:
- UK GDPR
- Data Protection Act 2018
- ICO AI and Data Protection guidance
- NCSC AI security principles
Controls include:
- strict minimisation of client data
- use of enterprise-grade AI platforms with DPAs
- MFA, encryption and access controls
- no training of external models on Arqvera or client data
- transparent data retention and deletion processes
- documented Data Protection Impact Assessments (DPIAs) for high-risk use cases
7. APPROVED AI TOOLS
Arqvera uses only vetted and approved enterprise-grade AI vendors.
These may include:
- Google Gemini (Workspace)
- OpenAI / Azure OpenAI
- Anthropic Claude
- Microsoft Azure Cognitive Services
These tools are governed by:
- Arqvera's Subprocessor Register
- Arqvera's AI & Responsible Use Policy
No other tools may process client or confidential data.
8. NO TRAINING OF EXTERNAL MODELS
Arqvera does not allow the following to be used to train or fine-tune external AI models:
- client data
- client deliverables
- Arqvera proprietary frameworks
- diagnostic logic or scoring structures
Enterprise settings enforce no-training modes wherever technically supported.
9. WHEN WE MAY USE AI ON CLIENT DATA
AI may be used with identifiable or confidential client data only when:
- It is explicitly authorised in writing under a Master Services Agreement (MSA) or Statement of Work (SOW);
- The data is minimised to the smallest amount necessary;
- The data is processed only using an approved enterprise-grade platform;
- A DPIA is completed for high-risk categories; and
- Human review remains the primary decision-making mechanism.
Clients may request alternative processing routes (non-AI) at any time.
10. YOUR RIGHTS & OBJECTIONS
Clients may:
- request that AI is not used in their engagement
- request an explanation of how AI contributed to any deliverable
- request a non-AI alternative where proportionate
- request deletion of AI-generated work or logs where applicable
- request a list of subprocessors used
Arqvera will comply with such requests unless prevented by law or technical constraints.
11. MONITORING & GOVERNANCE
Arqvera maintains an ongoing governance programme:
- regular reviews of AI tools
- annual AI policy audits
- incident detection and escalation
- bias, fairness and safety reviews
- continuous staff training
AI use is overseen by:
- the AI Governance Lead,
- the Data Protection Lead, and
- the Information Security Lead.
12. CONTACT
For questions about Arqvera's AI practices, please contact:
Arqvera Ltd, United Kingdom