The Hidden Dangers of Using AI in the Workplace

Friday 11th April 2025 · Dave Sharp · Technology

When AI Crosses the Legal Line: Real-World Cases

Amazon’s Discriminatory Hiring Tool
In 2018, it was revealed that Amazon had scrapped an internal AI recruiting tool after discovering it was biased against female candidates. The AI had been trained on CVs submitted over a 10-year period, most of which came from men, leading the algorithm to favour male applicants and downgrade CVs that included the word “women’s.” This is a classic case of algorithmic discrimination, potentially violating employment discrimination laws.

Clearview AI and Unauthorised Facial Recognition
Clearview AI scraped billions of images from social media platforms without consent to train its facial recognition system, which was then sold to law enforcement and private companies. The company has since faced multiple lawsuits, including a settlement with the ACLU in 2022 that restricted its use of the data. This is a violation of data privacy laws.

Uber’s Algorithmic Firing in the UK
In 2021, Uber faced legal action when its AI system allegedly dismissed UK drivers based on automated performance assessments. The court ruled in favour of the drivers, highlighting that automated decision-making without human intervention violated GDPR protections. This is a violation of GDPR Article 22, which gives individuals the right not to be subject to a decision based solely on automated processing.

Why This Matters for Your Business
Many organisations jump into AI adoption without fully understanding the regulatory landscape or the ethical implications. As AI decisions increasingly affect hiring, promotions, scheduling, and even termination, companies are entering murky legal territory.

If AI tools make decisions that result in bias, discrimination, or privacy violations, your organisation, not the software vendor, is likely to be held responsible.

Proactive Steps: How to Safeguard Your Business

  • Conduct Regular AI Audits: Hire third-party experts to evaluate your AI tools for bias, transparency, and compliance. Audits should cover data inputs, model behaviour, and decision outputs.

  • Ensure Human Oversight: Maintain human involvement in all high-stakes decisions, especially those related to employment. AI can inform decisions, but it should not be the final authority.

  • Implement Explainability Standards: Choose AI systems that offer transparency in how decisions are made. This not only builds trust but also prepares you for legal scrutiny.

  • Train Your Teams: Legal, HR, and IT teams should be educated on the legal risks and ethical responsibilities related to AI. Awareness is the first line of defence.

  • Review Vendor Contracts Carefully: Ensure contracts with AI vendors include clauses on liability, compliance, and ethical use. Do not assume the vendor is compliant just because their product is “AI-powered.”

  • Stay Ahead of Regulations: Laws around AI are evolving rapidly. Monitor new legislation like the EU AI Act, California Privacy Rights Act (CPRA), and other jurisdiction-specific rules.

AI in the workplace can be a powerful ally, but it must be handled with caution, transparency, and legal foresight. The cost of a misstep can range from regulatory fines and lawsuits to reputational damage that is much harder to quantify.

If your company is integrating AI into decision-making processes, now is the time to establish a governance framework that keeps you ahead of legal challenges.

Your bottom line may depend on it.

Need help developing a responsible AI policy for your organisation? Let’s connect.
