Ethical AI at Work: Using the Technology Without Losing Your Judgement

More than 80 per cent of UK employees now use generative AI at work, yet fewer than half say their employer has a clear vision for how it should be used. The gap between adoption and oversight is where the real risk lives — and where good judgement becomes indispensable.

There is a peculiar tension at the heart of the modern workplace. Artificial intelligence tools have been adopted at extraordinary speed — faster, by most measures, than any workplace technology in living memory. And yet, for all the enthusiasm, the frameworks for using them well have lagged conspicuously behind.

The EY 2025 Work Reimagined Survey, which included 800 UK employees and 180 UK employers, found that 83 per cent of UK employees now use generative AI at work. But the same survey revealed a troubling disconnect: fewer than half of those employees said their organisation had a clear vision for how AI should be deployed, and only 11 per cent reported receiving adequate AI training. Meanwhile, 43 per cent said they worried that an over-reliance on AI could erode their skills and expertise.

Those numbers describe something more than a technology adoption challenge. They describe a judgement gap — a widening space between what the tools can do and what the people using them understand about when, how, and whether to trust the outputs. This article is about that gap: what it looks like in practice, why it matters, and what responsible professionals and organisations can do to close it.

The State of Play: AI in British Workplaces in 2025

To appreciate why ethical AI use has moved from a niche governance concern to an urgent operational priority, it helps to understand the sheer scale of what has happened in the past two years.

The UK Government’s Department for Science, Innovation and Technology published an assessment of AI capabilities and their impact on the UK labour market in early 2026. The findings were instructive. Business AI adoption had more than doubled since late 2023, though the report noted that only around one in five firms currently use or plan to use AI. Within those firms that have adopted AI, fewer than a third of employees are actively using it — suggesting that adoption, even where it exists, remains patchy and unevenly distributed.

The picture shifts significantly when you look at larger organisations. A 2025 survey by Moneypenny of 750 UK business decision-makers found that 39 per cent of UK businesses were already using AI in some form, with another 31 per cent seriously considering it. Among firms with 50 to 99 employees, only 3 per cent said they had no plans to adopt AI at all. In IT and telecoms, adoption stood at 93 per cent. In finance, 83 per cent. In HR, 76 per cent.

The tools people are reaching for are, by now, familiar: ChatGPT, Microsoft Copilot, Google Gemini, Claude, Grammarly, and a growing ecosystem of sector-specific platforms. They are being used for drafting emails and reports, summarising documents, generating marketing copy, analysing data, writing code, and automating repetitive administrative tasks. The EY survey found that most use remains limited to basic, process-driven activities — search and summarisation — with fewer than 5 per cent of employees leveraging AI in advanced ways that fundamentally transform their working practices.

What is perhaps most revealing is the phenomenon that has come to be known as “shadow AI.” The EY data showed that 32 per cent of UK employees were using AI tools that had not been formally sanctioned by their employer. People are not waiting for permission, policy, or training. They are experimenting — sometimes responsibly, sometimes not — because the tools are accessible, the productivity gains are immediate, and the institutional guidance is often missing.

What Ethical AI Actually Means — Beyond the Buzzword

The phrase “ethical AI” has become ubiquitous in corporate communications, policy documents, and marketing materials. It appears on conference agendas and in annual reports. But in many organisations, it remains frustratingly abstract — a set of principles that sounds right but offers little guidance when someone needs to decide, at ten o’clock on a Tuesday morning, whether to use an AI-generated summary in a client-facing report.

At its core, ethical AI is not a product feature or a compliance certificate. It is a practice — a set of habits and questions that people apply every time they interact with an AI system in a professional context. The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, has developed what it calls the PBG Framework, designed to help project teams ensure that the AI technologies they build, procure, or use are ethical, safe, and responsible. The framework runs across the full AI project lifecycle, from design through deployment to monitoring.

The UK Government’s AI Playbook, updated in February 2025, distils the challenge into nine principles. Among the most relevant for everyday workplace use: you should know what AI is and what its limitations are; you should use AI lawfully, ethically, and responsibly; you should have meaningful human control at the right stages; and you should use the right tool for the job. These principles were written primarily for the public sector, but they apply with equal force to any organisation or individual using AI tools in a professional setting.

The International Labour Organization, in a 2025 review of global AI ethics guidelines, identified five overarching principles that appear across the vast majority of frameworks worldwide: beneficence (AI should promote human wellbeing), non-maleficence (it should not cause harm), autonomy (it should preserve human decision-making power), justice (its benefits should be fairly distributed), and explicability (its workings should be transparent and accountable). The ILO noted, however, that while ethical convergence around these principles is encouraging, the connection between AI ethics guidelines and the practical realities of the world of work remains thin. In other words, the principles are there. The implementation is not.

For the working professional, ethical AI use comes down to a deceptively simple set of questions. Do I understand what this tool is actually doing? Can I verify what it has produced? Am I comfortable being accountable for the output? Would I be willing to explain my reliance on it to a colleague, a client, or a regulator? If the answer to any of these is no, that is a signal — not necessarily to stop, but to slow down and think more carefully.

The Hallucination Problem: When AI Gets It Confidently Wrong

Of all the risks associated with using AI at work, hallucination may be the most insidious — precisely because it is invisible to the uninformed user.

AI hallucinations occur when a large language model generates information that is false, misleading, or entirely fabricated, but presents it with the same confident, authoritative tone as accurate information. The model is not lying in any meaningful sense. It is doing what it was designed to do: predicting the most statistically probable next word in a sequence based on patterns learned from training data. The trouble is that statistical probability and factual accuracy are not the same thing.
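The distinction between probability and truth can be made concrete with a toy example. The sketch below is a deliberately simplified illustration, not how production models are built: it trains a tiny next-word frequency model on a handful of sentences and shows that it will confidently complete a prompt with whatever continuation was most common in its training text, whether or not that continuation is true.

```python
from collections import Counter, defaultdict

# A deliberately biased toy corpus: the wrong completion appears more often.
corpus = [
    "the capital of australia is sydney",    # a common misconception
    "the capital of australia is sydney",
    "the capital of australia is canberra",  # the correct answer, but rarer here
]

# Count which word follows each word across the corpus (a simple bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

def most_probable_next(word: str) -> str:
    """Return the statistically most likely next word, which need not be the true one."""
    return follows[word].most_common(1)[0][0]

print(most_probable_next("is"))  # prints 'sydney': fluent, probable, and wrong
```

The point scales up: a large language model is doing something vastly more sophisticated, but the objective is the same, and so is the failure mode.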

The consequences can be serious. In the now-notorious Mata v. Avianca case in the United States, a lawyer relied on ChatGPT to conduct legal research and submitted a filing to a federal court that contained multiple fabricated case citations — cases that did not exist, with quotations that had been invented wholesale. The judge sanctioned the lawyer, and the case became a cautionary tale that echoed around the legal profession worldwide.

That was 2023. By 2025, the problem had not disappeared — it had simply become more nuanced. Research by Stanford and Yale academics found that even domain-specific AI tools designed for legal research still produced hallucinations in 17 to 34 per cent of cases, particularly when citing sources or responding to incorrect user premises. A 2024 Deloitte survey found that 38 per cent of business executives reported making incorrect decisions based on hallucinated AI outputs.

In practice, hallucination tends to show up in four recognisable patterns.

Fabricated sources (high risk). AI cites studies, cases, or statistics that do not exist — presented with full confidence, complete with plausible-sounding authors and journal names. Prevalence: a 17–34 per cent hallucination rate in domain-specific legal AI tools.

Subtle inaccuracies (medium risk). Outputs that are mostly correct but contain small factual errors — a wrong date, an incorrect figure, a misattributed quotation — easy to miss on a casual read. Prevalence: an average 3–9 per cent hallucination rate across leading models.

Plausible nonsense (structural risk). Outputs that sound authoritative and well-structured but are substantively meaningless — the AI generates fluent language without genuine understanding. Impact: 38 per cent of executives report decisions based on hallucinated outputs.

Confirmation of false premises (trust risk). When users ask leading questions, AI models frequently agree with incorrect assumptions rather than correcting them, reinforcing errors. Impact: 83 per cent of executives confuse AI confidence with accuracy.

The practical lesson is straightforward but bears repeating: AI-generated content must be verified. This is not optional, and it is not a sign of distrust in the technology. It is the minimum standard of professional diligence. Every claim, every statistic, every citation, every factual assertion that comes out of a generative AI tool should be treated as a draft — a starting point that requires human review before it is presented as the work of a competent professional.
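What a verification step can look like in practice is sketched below. It is an illustrative outline only, with assumed patterns and categories rather than a vetted tool, but it shows the basic idea: mechanically pull out the claims in an AI draft that most need checking, such as statistics, years, and case citations, and turn them into a checklist a human reviewer must work through before anything leaves the building.

```python
import re

def extract_claims_to_verify(ai_draft: str) -> list[str]:
    """Build a human verification checklist from an AI-generated draft.

    The patterns below are illustrative assumptions: statistics, years, and
    'X v. Y' style case citations are common places for hallucinated detail.
    """
    checks = []
    for line in ai_draft.splitlines():
        if re.search(r"\d+(\.\d+)?\s*(per cent|%)", line, re.IGNORECASE):
            checks.append(f"Verify statistic against the primary source: {line.strip()}")
        if re.search(r"\b(19|20)\d{2}\b", line):
            checks.append(f"Verify date or year: {line.strip()}")
        if re.search(r"\b[A-Z][a-z]+ v\.? [A-Z][a-z]+\b", line):
            checks.append(f"Verify case citation in a legal database: {line.strip()}")
    return checks

draft = (
    "The claimant relied on Mata v. Avianca (2023).\n"
    "Roughly 38 per cent of executives reported decisions based on flawed outputs."
)
for item in extract_claims_to_verify(draft):
    print("[ ]", item)
```

A script like this does not verify anything itself; it simply makes the reviewer's job visible and harder to skip.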

Bias and Blind Spots: The Invisible Distortions

If hallucination is AI’s most visible failure mode, bias is arguably its most consequential — because it is systemic, difficult to detect, and can scale with devastating efficiency.

AI models learn from data created by humans, and that data carries the biases of the societies that produced it. This is not a theoretical concern. The Gender Shades project, led by researcher Joy Buolamwini at MIT, demonstrated that commercial facial recognition systems had significant accuracy disparities across different genders and skin tones. The systems worked well for lighter-skinned men and considerably less well for darker-skinned women — a finding that prompted major technology companies to revise their models and, in some cases, withdraw products from the market entirely.

In the workplace, bias manifests in subtler but no less damaging ways. AI tools used in recruitment have been found to favour certain patterns in CVs — penalising gaps in employment (which disproportionately affect women and carers), favouring candidates from certain universities, or scoring language patterns associated with particular cultural backgrounds more highly than others. AI systems used in performance evaluation, credit scoring, or customer segmentation can perpetuate and amplify existing inequalities, not because they are deliberately discriminatory, but because the data they learned from reflects a world that is.

The UK’s Centre for Data Ethics and Innovation has been clear that addressing bias requires more than technical fixes. It demands a culture of awareness — teams that are diverse enough to notice blind spots, processes that include regular auditing, and leaders who understand that the efficiency gains of AI must never come at the cost of fairness. The Equality and Human Rights Commission has indicated that AI-driven decisions that result in discriminatory outcomes can engage existing equalities legislation, regardless of whether the discrimination was intentional.

For any professional using AI to draft content, analyse data, or inform decisions, this means asking: whose perspective is embedded in this output? Whose experience might be missing? And would the conclusions change if the training data had been different?

The Data Sovereignty Question: Where Does Your Data Go?

When you type a prompt into a generative AI tool, you are not simply asking a question into the void. You are transmitting data — sometimes highly sensitive data — to a server, often located thousands of miles away, operated by a company whose data practices may be governed by legal frameworks quite different from your own.

This is the data sovereignty question, and it has moved from the margins of the AI debate to its centre. European policymakers have repeatedly warned that the continent’s dependence on a handful of American technology companies for cloud infrastructure and AI services creates structural vulnerabilities. The EU’s Artificial Intelligence Act, which came into force in stages during 2024 and 2025, established a risk-based regulatory framework intended to ensure that AI systems are safe, transparent, and respectful of fundamental rights. The UK, having left the European Union, has chosen a different path — a sector-led, principles-based approach that gives regulators such as the ICO, the FCA, and Ofcom the flexibility to apply AI governance within their existing mandates.

But regardless of the regulatory model, the underlying concern is the same. When a UK solicitor uses a US-hosted AI tool to draft a confidential brief, where is that data stored? Who has access to it? Could it be used to train future models? Could it be compelled under foreign surveillance laws? These are not hypothetical questions. They are the practical realities of using cloud-based AI services in a professional context, and they demand answers that many organisations have not yet provided.

The General Data Protection Regulation and the UK’s own data protection legislation impose obligations around the processing of personal data, including requirements for transparency, lawful basis, and, in some cases, data protection impact assessments for AI deployments. The ICO has been explicit that AI does not exempt organisations from their data protection obligations. If anything, the opacity of AI systems makes those obligations more demanding, not less.
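One practical way to reduce exposure is to strip obvious personal data from prompts before they are sent to any externally hosted service. The sketch below is a minimal illustration of that idea, assuming simple pattern-based redaction of email addresses and UK-style phone numbers; a real deployment would need a far more thorough approach, and redaction does not remove the need for a data protection impact assessment where one is required.

```python
import re

# Illustrative patterns only: real personal data takes many more forms than this.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
UK_PHONE = re.compile(r"\b(?:\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3}\b")

def redact_prompt(prompt: str) -> str:
    """Replace obvious personal identifiers before a prompt leaves the organisation."""
    prompt = EMAIL.sub("[REDACTED EMAIL]", prompt)
    prompt = UK_PHONE.sub("[REDACTED PHONE]", prompt)
    return prompt

print(redact_prompt("Draft a reply to jane.smith@example.co.uk, tel 01632 960 961."))
# -> Draft a reply to [REDACTED EMAIL], tel [REDACTED PHONE].
```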

Case Study: Infomaniak and the European Alternative

The data sovereignty challenge has given rise to a growing ecosystem of providers attempting to offer AI tools that keep data within European jurisdictions, under European legal frameworks, and with governance models that treat privacy by design as a starting point rather than an afterthought.

One of the most instructive examples is Infomaniak, a Swiss cloud provider that in December 2025 launched Euria, an AI assistant built on a fundamentally different set of premises from the dominant US platforms. Euria is hosted entirely in Infomaniak’s data centres in Switzerland, compliant with both the GDPR and the Swiss Federal Act on Data Protection. It does not use customer data to train its models. And for particularly sensitive use cases — clinical notes, legal drafts, confidential administrative documents — it offers an “ephemeral mode” in which no data is stored, no logs are retained, and nothing can be recovered, even by Infomaniak itself.

Marc Oehler, Infomaniak’s CEO, has described the philosophy succinctly: “Euria was designed to make privacy a reality, not a marketing promise. The data never leaves our data centres in Switzerland and serves only to provide the service requested by the user.” The company explicitly warns users that no AI is infallible and encourages them to verify outputs before relying on them in high-stakes contexts — a refreshingly honest stance in a market that often oversells AI’s capabilities.

What makes Infomaniak particularly interesting as a case study is that it demonstrates how ethical choices can extend beyond data handling into environmental responsibility. The company operates a data centre in Geneva designed to recover 100 per cent of the electricity it consumes. The waste heat generated by running AI workloads is fed back into Geneva’s district heating network. At full capacity, the data centre provides enough energy to heat 6,000 homes in winter. The company uses only certified renewable energy and has committed to self-generating 50 per cent of its electricity by 2030.

This matters because the environmental cost of AI is real and growing. Training and running large language models requires enormous computational power, which translates into significant energy consumption and carbon emissions. A responsible approach to AI use must account not only for what the tools produce, but for the resources they consume — and the Infomaniak model shows that it is possible to deliver high-performing AI services without externalising environmental costs.

Infomaniak is not the only player in the European sovereign AI space, and its model may not suit every organisation’s needs. But it offers a concrete illustration of a principle that matters: the choice of AI provider is itself an ethical decision, with implications for data privacy, legal compliance, environmental impact, and digital sovereignty. Choosing a tool is not just a question of capability. It is a question of values.

Key Takeaway

The choice of AI provider is itself an ethical decision. Where your data is processed, whether it is used for training, and the environmental footprint of the infrastructure all carry consequences. European alternatives like Infomaniak’s Euria demonstrate that privacy, sovereignty, and sustainability can coexist with genuine AI capability.

The UK Regulatory Landscape: Principles Without Prescriptions

Understanding the regulatory environment is essential for any organisation or professional seeking to use AI responsibly. The UK has deliberately chosen a different approach from the European Union’s prescriptive AI Act, and the differences have real implications for how businesses manage their AI use.

The UK Government’s framework rests on five core principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Rather than establishing a single AI regulator or a comprehensive AI-specific law, the UK has empowered existing sectoral regulators — the ICO for data protection, the FCA for financial services, Ofcom for communications, the CMA for competition — to apply these principles within their respective domains.

This approach has its advantages. It avoids the rigidity of a one-size-fits-all regime and allows regulators with deep sector knowledge to tailor their guidance to the specific risks and opportunities of their industries. But it also places a greater burden on organisations themselves to interpret the principles, develop their own governance frameworks, and ensure compliance in the absence of detailed, prescriptive rules.

The UK Government’s AI Playbook, updated in February 2025, provides useful guidance for public sector organisations, but its principles are broadly applicable. It emphasises that AI use must be lawful and responsible, that organisations should seek legal advice early, and that meaningful human control must be maintained at the right stages. It also acknowledges openly that AI models are trained on data that may contain biases and harmful materials, and that organisations should consider all potential sources of bias throughout the development lifecycle.

For private sector organisations, the practical implication is clear: you cannot outsource your ethical responsibilities to a regulator or a technology vendor. You need an internal AI policy. You need clear lines of accountability. You need training for the people who use AI tools every day. And you need regular reviews to ensure that your practices keep pace with a technology that evolves faster than most institutional processes can accommodate.

The Data Use and Access Act 2025 has introduced additional data duties that affect AI deployments, and the ICO has signalled that AI-related regulatory scrutiny will intensify. Organisations that build ethical foundations now will be far better positioned than those scrambling to catch up when enforcement sharpens — which, based on the trajectory of both UK and EU regulatory activity, it inevitably will.

Keeping the Human in the Loop: Why Judgement Cannot Be Automated

The most important principle in the ethical use of AI at work is also the simplest: the human being must remain the decision-maker.

This is not an argument against efficiency. AI tools can draft, summarise, translate, analyse, and generate at a speed and scale that no human can match. That is precisely their value. But the act of deciding — of weighing context, understanding nuance, applying professional judgement, and accepting responsibility for the outcome — cannot be delegated to a machine. Not because the technology is not impressive, but because the technology does not understand what it is doing. It processes patterns. It does not grasp meaning.

The EY survey’s finding that 43 per cent of UK employees worry about AI eroding their skills deserves serious attention. The concern is well-founded. Cognitive science has long established that skills atrophy when they are not practised. If professionals routinely outsource analysis, writing, and critical thinking to AI tools without engaging their own faculties, those faculties will, over time, diminish. The result is not a more productive workforce but a more dependent one — capable of operating the tools but less capable of functioning without them, and less able to catch errors when the tools get things wrong.

A 2025 Harvard Business School study on generative AI in teamwork found that the introduction of AI tools into collaborative settings reshaped team dynamics in ways that were not always positive. When teams relied heavily on AI-generated content, the quality of human deliberation declined. People spent less time questioning assumptions, interrogating evidence, and debating alternatives — activities that are essential to good decision-making but that feel redundant when a machine has already produced a polished output.

The antidote is not to reject AI, but to use it with discipline. Treat AI outputs as a first draft, not a final answer. Read what it produces with the same critical eye you would apply to a junior colleague’s work. Verify facts. Question logic. Look for what is missing, not just what is present. And never, under any circumstances, present AI-generated work as your own without having thoroughly reviewed, edited, and taken full ownership of its content.

The Human Imperative

AI is a tool, not an authority. The professional who uses AI well is not the one who generates the most outputs, but the one who exercises the most judgement about which outputs to trust, which to revise, and which to discard entirely. Skills are preserved by using them, not by outsourcing them.

A Practical Framework for Your Organisation

Theory is necessary, but it is not sufficient. What follows is a practical framework — informed by the UK Government’s AI Playbook, the Alan Turing Institute’s ethics guidance, and the real-world experience of organisations navigating this landscape — for using AI at work without losing your judgement.

Establish a clear, written AI policy

If your organisation does not have a written policy on AI use, that is the first gap to close. The policy does not need to be a hundred pages long. It needs to answer the questions that matter: which AI tools are approved for use, what types of data may and may not be entered into them, who is responsible for verifying AI-generated outputs, and what level of human review is required before AI-assisted work is shared externally. It should also address the question of disclosure — under what circumstances the use of AI should be acknowledged to clients, customers, or collaborators.
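A written policy is also easier to apply if its core rules are captured in a form that can be checked quickly. The sketch below is a hypothetical illustration in which the tool names, data categories, and rules are placeholders rather than recommendations; it simply shows how the question "is this allowed?" can be given a fast, consistent answer.

```python
# A hypothetical, minimal AI-use policy captured as data. Every name below is
# a placeholder; a real policy would reflect your own tools and risk appetite.
AI_POLICY = {
    "approved_tools": {"Microsoft Copilot", "Internal assistant"},
    "prohibited_data": {"client personal data", "unpublished financials", "health records"},
    "external_release_requires_named_reviewer": True,
    "disclose_ai_use_to_clients": True,
}

def is_use_permitted(tool: str, data_category: str) -> bool:
    """Check a proposed use against the written policy."""
    return (
        tool in AI_POLICY["approved_tools"]
        and data_category not in AI_POLICY["prohibited_data"]
    )

print(is_use_permitted("Microsoft Copilot", "marketing copy"))        # True
print(is_use_permitted("Microsoft Copilot", "client personal data"))  # False
```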

Invest in training — not just in how to use tools, but in how to think about them

The EY finding that only 11 per cent of UK employees have received adequate AI training is alarming. Training should cover not only the mechanics of using specific tools but the principles of critical evaluation: what hallucinations look like, how bias manifests, when AI-generated content should and should not be trusted, and how to verify outputs effectively. The UK Government’s Central Digital and Data Office offers over 70 AI-related courses through its learning platform. Private sector organisations would do well to invest at a similar level.

Assign accountability — clearly and individually

The UK’s Algorithmic Transparency Recording Standard makes senior responsible owners mandatory for AI systems in the public sector. Private sector organisations should adopt the same principle. Every AI system or tool in use should have a named individual responsible for its governance. Collective responsibility, in practice, means no responsibility at all.
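A simple register makes the principle concrete. The entries below are made up for illustration: every tool in use has one named senior responsible owner, and anything without an owner is flagged immediately.

```python
# Hypothetical register entries: tool name mapped to a named responsible owner.
AI_TOOL_REGISTER = {
    "Microsoft Copilot": "A. Patel, Head of Operations",
    "Internal assistant": "J. Okafor, Chief Data Officer",
    "CV screening pilot": None,  # no owner assigned yet
}

unowned = [tool for tool, owner in AI_TOOL_REGISTER.items() if not owner]
if unowned:
    print("Tools with no named responsible owner:", ", ".join(unowned))
```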

Choose your tools with care

Not all AI tools are created equal, and the choice of provider carries ethical weight. Consider where data is processed and stored, whether your inputs are used for model training, what data protection framework applies, and what the provider’s environmental credentials look like. For organisations handling sensitive data — legal, medical, financial, governmental — a sovereign AI provider with strong privacy guarantees may be more appropriate than a general-purpose consumer tool, even if the latter is more familiar or more fashionable.

Build verification into your workflow

Verification should not be an afterthought. It should be a formal step in any process that involves AI-generated content. For factual claims, check primary sources. For analysis, review the methodology and logic. For drafting, read the output critically and ensure it reflects your professional standards. Establish a rule that no AI-generated content leaves the organisation without a named human reviewer having signed off on it.
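The sign-off rule can be enforced rather than merely stated. The following sketch, with hypothetical field names, refuses to mark AI-assisted content as releasable until a named human reviewer has recorded their approval.

```python
from dataclasses import dataclass

@dataclass
class AIAssistedDocument:
    title: str
    content: str
    reviewed_by: str | None = None  # the named human reviewer, once sign-off is given

def release(doc: AIAssistedDocument) -> None:
    """Refuse to release AI-assisted content that lacks a named human reviewer."""
    if not doc.reviewed_by:
        raise ValueError(f"'{doc.title}' cannot be released: no named reviewer has signed off.")
    print(f"Released '{doc.title}', reviewed by {doc.reviewed_by}.")

report = AIAssistedDocument(title="Q3 client briefing", content="AI-drafted text, edited.")
report.reviewed_by = "M. Hughes"  # the accountable reviewer records their approval
release(report)
```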

Review and iterate

AI capabilities and risks evolve rapidly. Your policy and practices need to evolve with them. Schedule regular reviews — quarterly at minimum — to assess whether your AI governance is keeping pace with the tools your people are using and the regulatory landscape you operate within. The ICO has flagged AI guidance for review following the implementation of new data legislation, and your internal practices should follow a similar rhythm.

Conclusion: The Technology Is Not the Decision-Maker — You Are

Artificial intelligence in the workplace is not going away. The tools will become more powerful, more integrated, and more ubiquitous. The question is not whether to use them, but how — and the answer to that question rests not with the technology but with the people who use it.

The data is unambiguous. AI adoption in the UK is accelerating, but training, governance, and critical engagement lag far behind. Shadow AI is widespread. Hallucinations remain a material risk. Bias is systemic and often invisible. Data sovereignty questions are urgent and largely unresolved. And the regulatory environment, while evolving, places the primary burden of responsibility squarely on organisations and individuals.

None of this should be cause for paralysis. AI tools, used well, are extraordinarily valuable. They can free up time, augment human capabilities, and enable work that would otherwise be impractical. But “used well” is the operative phrase. It means used with awareness, with discipline, with humility about the technology’s limitations, and with an unwavering commitment to human oversight and professional accountability.

The companies and providers who will earn lasting trust are those who treat ethics not as a constraint on innovation but as a design principle — as Infomaniak has with Euria, as the Alan Turing Institute has with its PBG Framework, as the best organisations are doing by building governance structures that keep pace with adoption. The professionals who will thrive are those who use AI to amplify their judgement, not to replace it.

You are not a more effective professional because an AI tool can produce text quickly. You are a more effective professional because you know when that text is right, when it is wrong, and when the distinction matters. That judgement is yours. It is the one thing the technology cannot replicate. Do not surrender it.

Sources and References

  1. EY. UK employers miss 40% of AI productivity gains. EY 2025 Work Reimagined Survey. Published December 2025. ey.com
  2. UK Government, Department for Science, Innovation and Technology. Assessment of AI Capabilities and the Impact on the UK Labour Market. Published January 2026. gov.uk
  3. Moneypenny. The State of AI Adoption in UK Businesses: 2025 Trends & Insights. Survey of 750 UK decision-makers, April–May 2025. moneypenny.com
  4. EY. UK’s AI Use Grows in Daily Life, but Lags in Workplaces. EY AI Sentiment Index 2025. Published April 2025. ey.com
  5. The Alan Turing Institute. AI Ethics and Governance in Practice Programme. PBG Framework. Leslie, D., Rincón, C., Briggs, M., et al. (2023–2024). turing.ac.uk
  6. UK Government. Artificial Intelligence Playbook for the UK Government. Updated February 2025. gov.uk
  7. International Labour Organization. Governing AI in the World of Work: A Review of Global Ethics Guidelines. Samaan et al. Published November 2025. ilo.org
  8. UK Parliamentary Office of Science and Technology (POST). Artificial Intelligence: Ethics, Governance and Regulation. Published February 2025. post.parliament.uk
  9. Buolamwini, J. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. MIT Media Lab. Proceedings of Machine Learning Research. 2018.
  10. Weiser, B. Here’s What Happens When Your Lawyer Uses ChatGPT. The New York Times. May 27, 2023.
  11. Dahl, M., Magesh, V., et al. Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models. Stanford Human-Centered Artificial Intelligence and Yale Law School. Journal of Legal Analysis. 2024.
  12. Deloitte. State of AI in the Enterprise. Global survey of executive AI adoption. 2024.
  13. Infomaniak. Infomaniak launches Euria, a free and sovereign AI that respects privacy and heats homes. Press release. Published December 9, 2025. news.infomaniak.com
  14. Dell’Acqua, F. et al. The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise. Harvard Business School Working Paper 25-043. March 2025.
  15. Office for National Statistics (ONS). Research into How Artificial Intelligence (AI) Is Affecting Employment. FOI response. 2025. ons.gov.uk
  16. UK Government, Centre for Data Ethics and Innovation (CDEI). AI Assurance Framework and Guidance. gov.uk

References verified February 2026.
