
AI regulation: Where are we now?

By Emma Erskine-Fox, Managing Associate, Data, Privacy and Cybersecurity at TLT.

 

AI is at the forefront of many organisations’ digitalisation plans. But, as businesses race to stay ahead of the AI curve, are ethical guidelines and legal risks being overlooked?

Competition breeds innovation, and with Microsoft, Google, Baidu, and Meta all investing billions into AI, legislation will struggle to keep up. Different countries and unions are creating their own sets of guidelines, with the EU and the UK government already approaching AI regulation in two very different ways.

The UK’s ‘pro-innovation’ approach is very different to the EU’s: whilst the EU has taken a robust and prescriptive approach to legislating for AI, introducing the extensive AI Act, the UK government has proposed a more flexible, regulator-led framework of principles. Any UK business operating in the EU, or looking to enter the EU market, will need to stay abreast of both frameworks, posing the further challenge of managing several different sets of rules.

The UK’s approach is designed to encourage innovation, but striking the balance between fostering innovation in AI and protecting fundamental human rights will be a major challenge for the UK.

 

Whilst regulations try to keep up with advancements in AI, many businesses remain uncertain of the rules and the associated risks.

In our recent TLT Retail Agility report, which focuses on the impact of AI on businesses, we discovered that 80% of the UK’s 100 leading retailers are uncertain of the long-term legal impact of AI.

There is also a variety of potential legal risks that businesses need to consider before implementing AI in their processes. These include data protection and privacy issues as well as consumer law challenges, such as chatbot errors, harmful search algorithms and the targeting of vulnerable consumers.

 

Regulators such as the Information Commissioner’s Office, the Competition and Markets Authority and the Financial Conduct Authority are all focussing increasingly on AI, but regulation is still in its infancy, and it remains to be seen how effective existing frameworks will be in managing these risks.

With so much change afoot, regional transparency and collaboration are vital to allow organisations at the cutting edge of these developments to share knowledge and experience, helping to ensure that AI is implemented responsibly.

The South West is at the forefront of innovation in this space, and there is no better forum for these crucial collaboration efforts than the AI Ethics, Risks and Safety Conference.

 

I’m thrilled to be speaking at the AI Ethics, Risks and Safety Conference on 15th May, adding my voice to the debate and helping to ensure that the latest regulations and risks are communicated in a straightforward way, so that businesses can implement AI effectively and safely.

AI Ethics, Risks and Safety Conference

Wednesday 15th May, Watershed, Bristol

Tickets and more information at

https://www.eventbrite.co.uk/e/ai-ethics-risks-and-safety-conference-tickets-788913660997
