Security and Privacy in AI and Web3: Best Practices for Business
AI tools and Web3 integrations introduce data security and privacy risks that many businesses have not yet assessed. This guide covers the specific risks of AI tool usage in business contexts, the data protection considerations under UK GDPR and the security hygiene required when adopting emerging technologies.
Nathan Hill-Haimes
Technical Director
The Business Reality of AI Tool Adoption
By 2025, AI tools — large language models, AI writing assistants, image generators, code assistants and analytics platforms — are in routine use across UK businesses of all sizes. Many of these tools have been adopted by individual employees without formal IT or security review, creating shadow AI risks analogous to the shadow IT problems of the SaaS era.
The security and privacy risks of AI tool usage are not theoretical. They include the inadvertent submission of confidential data to third-party AI training pipelines, the generation of inaccurate outputs that are acted upon without verification, the use of AI-generated content in regulated communications without human review, and the exposure of personal data to AI vendors who may process it in jurisdictions with different data protection standards.
Data Privacy Risks with AI Tools
Training Data Exposure
When employees submit prompts to cloud-based AI tools — including general-purpose LLMs — the input data may be used to improve the model unless the business has explicitly opted out or is using an enterprise plan with contractual data usage restrictions.
Microsoft Copilot for Microsoft 365 (enterprise and business plans) contractually commits that customer data is not used to train Microsoft's foundation models. Consumer-tier AI tools often have no such commitment. Businesses should audit which AI tools employees are using and confirm data handling terms for each.
Personal Data in Prompts
Under UK GDPR, if an employee submits personal data (customer names, email addresses, medical information, financial details) in an AI prompt, that submission constitutes processing of personal data. The AI vendor becomes a data processor, requiring a Data Processing Agreement (DPA) to be in place. Many AI tool vendor agreements include DPA provisions for enterprise plans but not for free or basic tiers.
Practical controls include: a prompt policy that prohibits submission of personal data to non-approved AI tools, training on recognising what constitutes personal data in a business context, and technical controls (DLP policies) that detect and block submission of sensitive data patterns to unauthorised destinations.
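To make the DLP idea concrete, the sketch below shows a minimal, hypothetical pre-submission check that scans a prompt for sensitive-data patterns before it leaves the business. The pattern set, function names and blocking logic are illustrative assumptions, not any vendor's API — production DLP products use far broader rule sets, context awareness and tuning.

```python
import re

# Illustrative patterns only -- real DLP rules are broader and tuned per business.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK National Insurance number": re.compile(
        r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE
    ),
    "UK phone number": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


def submit_if_clean(prompt: str) -> bool:
    """Block submission (return False) if the prompt matches any pattern."""
    findings = check_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    # ...forward the prompt to the approved AI tool here...
    return True
```

Pattern matching of this kind produces false positives and misses context (a name with no identifier, for example), which is why it complements rather than replaces the prompt policy and training described above.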
AI-Generated Content and Accuracy
AI language models generate plausible text that can contain factual errors, fabricated sources and incorrect figures. For businesses in regulated sectors — financial advice, legal services, healthcare — publishing or acting on AI-generated content without human review creates professional liability risks. This is a governance issue as much as a security one: establish a review process for AI-generated content before external publication or use in client-facing communications.
Web3 Security Considerations for Businesses
Web3 technologies — blockchain, decentralised applications, tokenisation and smart contracts — are seeing selective adoption by UK businesses in sectors including financial services, supply chain, identity verification and digital media. Security risks specific to Web3 environments include:
Private Key Management
Web3 authentication relies on cryptographic private keys rather than username/password pairs. If a private key is lost or stolen, access to associated assets (cryptocurrency, tokens, NFTs, access credentials) is irrecoverable. Businesses using Web3 tools must have a key management strategy — hardware security modules (HSMs), multi-signature arrangements or enterprise key management services — rather than relying on individual employees managing keys in software wallets.
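To illustrate why splitting control of a key across parties helps, the sketch below shows a toy XOR-based n-of-n secret split: the key is divided into shares that must all be combined to reconstruct it, so no single employee holds usable key material. This is a teaching illustration only — production key custody should use HSMs, multi-signature wallets or audited secret-sharing libraries, never hand-rolled code like this.

```python
import secrets
from functools import reduce


def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))


def split_key(key: bytes, n: int) -> list[bytes]:
    """n-of-n split: n-1 random shares, plus the key XORed with all of them.

    Any subset smaller than n reveals nothing about the key; all n shares
    XORed together recover it exactly.
    """
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))
    return shares


def recombine(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    return reduce(xor_bytes, shares)
```

The limitation of n-of-n splitting is that losing any single share loses the key; threshold schemes such as Shamir secret sharing (k-of-n) address this, which is one reason businesses should rely on established implementations rather than the simplest construction.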
Smart Contract Vulnerabilities
Smart contracts on public blockchains are immutable once deployed, and code vulnerabilities in them have caused significant financial losses. Before deploying any smart contract for business use, a formal security audit by a qualified smart contract auditor is essential; this is not a cost that should be skipped for expediency.
Phishing and Social Engineering in Web3 Contexts
Web3 environments see high rates of phishing targeting private keys, seed phrases and approval transactions. Employees authorising Web3 transactions should receive specific training on recognising fraudulent connection requests and approval prompts — these attacks are visually distinct from conventional phishing but equally effective against untrained users.
Governance Framework for Emerging Technology Adoption
Rather than reacting to each new technology category, businesses benefit from a standing governance process for evaluating new tools:
- Technology assessment: Before approving a new AI or Web3 tool for business use, establish what data it will process, where it will be processed and under what legal terms.
- Data protection impact assessment (DPIA): Required under UK GDPR for technologies that involve systematic processing of personal data or high privacy risk.
- Vendor due diligence: Confirm the vendor's data processing terms, security certifications (ISO 27001, SOC 2) and breach notification obligations.
- Acceptable use policy: Document what the tool may and may not be used for, and communicate this to employees before deployment.
- Ongoing review: AI tools in particular change rapidly — a tool assessed as safe in 2024 may have different data handling terms in 2025 after a commercial restructuring.
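One lightweight way to operationalise the checklist above is a structured assessment record that must be completed before any tool is approved. The fields and names below are illustrative assumptions, not a prescribed standard — adapt them to your own governance process.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ToolAssessment:
    """Minimal record of an emerging-technology tool review."""
    tool_name: str
    data_categories: list[str]       # what data will it process?
    processing_locations: list[str]  # where will it be processed?
    dpa_in_place: bool               # Data Processing Agreement signed?
    dpia_required: bool              # high-risk personal data processing?
    dpia_completed: bool = False
    certifications: list[str] = field(default_factory=list)  # e.g. ISO 27001
    review_date: date = field(default_factory=date.today)

    def approved(self) -> bool:
        """Approvable only if the contractual and DPIA gates are both met."""
        return self.dpa_in_place and (not self.dpia_required or self.dpia_completed)
```

A record like this also supports the ongoing-review step: re-running the assessment after a vendor's terms change produces a dated audit trail of why a tool was (or was no longer) approved.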
Are Your AI Tools Creating Data Protection Risks?
AMVIA can audit which AI tools your employees are using, assess the data handling risks and recommend controls to manage them within UK GDPR requirements.
Frequently Asked Questions
Can we use ChatGPT with business or customer data?
It depends on what data you submit. OpenAI's free and Plus plans use conversation data to train models by default unless you opt out in settings. The enterprise plan (ChatGPT Enterprise) includes contractual commitments that conversations are not used for training and provides a Data Processing Agreement for UK GDPR compliance. For any business use involving customer data, personal information or confidential information, only enterprise-grade AI tools with appropriate DPAs should be used.
Does Microsoft Copilot use our Microsoft 365 data to train its AI models?
No. Microsoft contractually commits that data from commercial Microsoft 365 customers (including Copilot interactions) is not used to train Microsoft's foundation AI models. Microsoft also provides a Data Processing Addendum for Microsoft 365 and Copilot covering UK GDPR requirements. This is one of the reasons Microsoft Copilot is generally considered an appropriate AI tool for business use compared to consumer alternatives.
What is a DPIA and when is one required?
A DPIA is a structured assessment of the privacy risks associated with a processing activity, required under UK GDPR for activities likely to result in high risk to individuals. AI tools that systematically process personal data, profile individuals or use innovative technology typically require a DPIA before deployment. The ICO provides a DPIA template and guidance on when one is mandatory.
How is AI use regulated in the UK?
The UK government's AI regulation approach (as of 2025) is principles-based, applying existing regulatory frameworks (UK GDPR, FCA, Ofcom, CQC) to AI use within their respective sectors rather than a single comprehensive AI Act. The EU AI Act applies to UK businesses serving EU customers with certain AI applications. The ICO has published sector-specific AI guidance, particularly for biometric data, automated decision-making and generative AI.
Related Reading
Data Protection & Privacy | UK GDPR Guide for Businesses
UK GDPR fundamentals for businesses, including obligations when using AI tools that process personal data.
2025 Cybersecurity Compliance Guide | UK & EU Regulatory Landscape
Navigate UK and EU cybersecurity regulations in 2025, including AI-related regulatory developments.
UK Cybersecurity Guide for SMEs | Practical Steps
Practical cybersecurity steps for UK SMEs including shadow IT governance and vendor security assessment.