
We will be back in May 2025!


Thank you for being part of the region's first AI Ethics, Risks and Safety Conference.


To access the presentations, please click on the links below.

AI regulation: Where are we now?

Tools for trustworthy AI

AI Standards Hub

AI Data and Vulnerability in Financial Services

Conference 2024

Emma Erskine-Fox

Managing Associate, Data, Privacy and Cybersecurity, TLT



Colin Gavaghan

Professor of Digital Futures

School of Law

University of Bristol


Nuala Polo

Senior Policy Advisor

Responsible Technology Adoption Unit

Department for Science, Innovation and Technology.


Lisa Talia Moretti

Digital Sociologist

AND Digital


Dr. Kevin Macnish

Head of Ethics and

Sustainability Consulting

Sopra Steria


Tom Scott
Experience Director
Financial Services

CX Partners


Christopher Thomas

Research Associate

Public Policy Programme

The Alan Turing Institute


Sundeep Bhandari

Chief Digital Innovation Officer.

Head of Digital Innovation.

National Physical Laboratory


Dr. Marie Oldfield

CEO of Oldfield Consultancy

Executive Board Member, Institute of Science and Technology


Ben Byford

AI Ethics Consultant and Podcast Host

The Machine Ethics Podcast


Karin Rudolph

Founder, Ethical Technology Network

Collective Intelligence

Conference Programme

  • Welcome and introduction to the Ethical Technology Network.

Karin Rudolph

Founder, Collective Intelligence

  • AI regulation: Where are we now?
Emma Erskine-Fox.  Managing Associate in the Data, Privacy and Cybersecurity team at TLT.

The last few years have seen the emergence of the world’s first AI-specific laws and regulations. Whilst many principles overlap with existing legal regimes, the new rules pose novel challenges and opportunities for businesses operating in the AI ecosystem.

In this presentation, Emma Erskine-Fox will look at how AI regulation is taking shape, both in the UK and worldwide, and what organisations should be doing to ensure they don’t fall foul of the requirements.

She will also consider how businesses can manage different, and potentially conflicting, regimes if they aim to operate at a multinational level.

  • Implementing Technology Ethics from theory to practice: a case study

Lisa Talia Moretti. Digital Sociologist. AND Digital

Over the last few years, companies and governments, membership organisations and academics have been in a flurry of writing ethical guidelines and principles. They’ve been doing this for artificial intelligence, machine learning, data (and data science) as well as technology more broadly. While the ethical principles and frameworks have been a helpful start, it’s time to put them into action if we are serious about making an impact.

This talk is ideal for those who are looking for practical advice on how to implement AI ethics frameworks or for those who find themselves frustrated by the current theoretical tech ethics conversations and want to hear a pragmatic perspective.

Using a case study, Lisa Talia Moretti will share lessons and insights on how to implement an AI ethics framework and spotlight the uniquely human characteristics and qualities that are needed to do this in a world saturated with tech.

  • Tools for trustworthy AI: Implementing the UK’s proposed regulatory principles in practice

Nuala Polo. Senior Policy Advisor. Responsible Technology Adoption Unit. Department for Science, Innovation and Technology.

In this talk, Nuala Polo will provide an overview of the UK’s approach to AI governance, with a focus on how industry and regulators can operationalise the UK’s proposed AI regulatory principles in practice. This talk will offer a deep dive into tools for trustworthy AI, with a focus on assurance mechanisms and SDO-developed standards, followed by an update on the UK government’s work programme to develop practical guidance and innovative solutions to help organisations use these tools to ensure the responsible design, development, and deployment of AI systems.

  • What challenges do AI Practitioners face and how can we solve them?

Dr. Marie Oldfield. CEO of Oldfield Consultancy, Executive Board Member, Institute of Science and Technology. Senior Lecturer at London School of Economics.

Working with AI and models that affect society means AI practitioners must not only showcase their skills but also prove that they follow robust methodology in practice, whether they are philosophy advisors or computer scientists.

In this talk, Marie Oldfield will introduce a new global AI Professional Accreditation, established by the Institute of Science and Technology.

  • AI regulation or AI innovation – is that really the choice?


Colin Gavaghan. Professor of Digital Futures, Bristol Digital Futures Institute, University of Bristol Law School.


Recent debates around AI have presented a polarized choice. On one side are those calling for tighter regulation in the face of potential harms.

On the other side are those (including the UK Government) arguing that we should be slow to add new regulations, and should instead “unleash innovation” and reap the incredible rewards of this wondrous technology.

But is it really the case that regulation needs to be the enemy of innovation? In this talk, Colin Gavaghan will explore that question, arguing that the “regulation or innovation” choice around AI is misleading and overly simplistic.

  • Are You Being Served? AI, Data and Vulnerability in Financial Services


Dr Kevin Macnish. Head of Ethics and Sustainability Consulting. Sopra Steria

Tom Scott. Experience Director, Financial Services. CX Partners.

In the wake of the FCA's Treating Customers Fairly and the Consumer Duty, financial services are being pressed to identify, help and record outcomes for vulnerable people. More than this, the FCA is keen to see where data (and AI) can be ethically used to achieve this end.

In this talk, Tom Scott and Kevin Macnish will discuss how to embed ethics into a process that sees vulnerable customers served while also being treated with dignity and respect.

  • Setting the Standard for AI Ethics.

Christopher Thomas. Research Associate. The Alan Turing Institute.
Dr. Ivan Serwano. 
AI Consulting Manager. British Standards Institution

Sundeep Bhandari. Chief Digital Innovation Officer. Head of Digital Innovation. National Physical Laboratory

The AI Standards Hub is a UK-government-funded programme that brings together the National Measurement Institute (the National Physical Laboratory), AI Research Centre (The Alan Turing Institute) and the National Standards Body (BSI). This initiative and its free online resources and events are dedicated to sharing best practices, providing training, and facilitating knowledge sharing, discussion, and community building – acting as a one-stop shop to understand global AI standards activity and status, including how to contribute to the development of AI standards.

This talk will introduce the initiative and provide a deep dive on the relationship between current and upcoming standards and AI ethics.

  • Explaining the machines: How does AI make decisions? 

 Panel discussion

  • Helena Quinn. Principal Policy Adviser, AI and Data Science. Information Commissioner's Office (ICO)
  • Dr Andrew Corbett. Head of Research and Development at DigiLab.
  • Moderator: Ben Byford. Game Designer, AI Ethics Consultant and Podcast Host. The Machine Ethics Podcast
Join a lively discussion with two experts in the field of Explainable AI.

Helena Quinn, the Principal Policy Adviser for AI & Data Science at the ICO, has worked at the intersection of policy and AI in academia, industry and the public sector for over eight years.

Helena was the principal author of the ICO’s guidance, ‘Explaining decisions made with AI’, and has produced a paper on the harms that algorithms can bring about for competition and consumers in her previous role at the UK Competition and Markets Authority (CMA).

Andrew Corbett is an Independent Scientific Advisor at the Alan Turing Institute, sitting on the Bridge AI panel to advise SMEs on the use of AI to increase productivity in key sectors. 

He is also the Head of Research and Development at Digilab, a fast-growing machine learning company delivering explainable machine learning to safety-critical industries.  

He holds a research position at the University of Exeter where his research focuses on computer vision and deep learning models which can self-explain their black-box decision-making.

**The Conference Programme is subject to change**


Dr Andrew Corbett

Head of Research and Development

DigiLab



Helena Quinn

Principal Policy Adviser, AI & Data Science

Information Commissioner's Office 

Partners and supporters
ADLIB
Institute of Science and Technology (IST)
TSW
FinTech West
Engineering Ethics Toolkit

The AI Ethics, Risks and Safety Conference is part of a series of events, seminars and workshops exploring topics related to AI and Emerging Technologies.

To get all the updates, subscribe to the newsletter or contact us.

