Wednesday 15th May 2024
Welcome to the region's first AI Ethics, Risk and Safety Conference
Come and join a full-day conference at the Watershed in Bristol.
The AI Ethics, Risk and Safety Conference will bring together businesses and organisations to discuss AI, share best practices, and learn about upcoming regulations, standards, case studies, and the resources available to businesses.
The conference will offer practical advice and insights from experts in their fields, and it will cover four main themes:
1. Legal and Compliance.
2. AI Standards.
3. Training and upskilling.
4. AI Ethics frameworks and case studies.
Meet the Speakers
Emma Erskine-Fox. Managing Associate, Data, Privacy and Cybersecurity. TLT
Colin Gavaghan. Professor of Digital Futures. School of Law, University of Bristol
Nuala Polo. Senior Policy Advisor. Responsible Technology Adoption Unit, Department for Science, Innovation and Technology
Lisa Talia Moretti. Digital Sociologist. AND Digital
Dr. Kevin Macnish. Head of Ethics and Sustainability Consulting. Sopra Steria
Stu Charlton. Experience Strategy Director, Financial Services. CX Partners
Dr. Matilda Rhode. Sector Lead, AI and Cyber Security. British Standards Institution
Dr. Florian Ostmann. Head of AI Governance and Regulatory Innovation. The Alan Turing Institute
Sundeep Bhandari. Chief Digital Innovation Officer and Head of Digital Innovation. National Physical Laboratory
Dr. Marie Oldfield. CEO of Oldfield Consultancy. Executive Board Member, Institute of Science and Technology. Senior Lecturer at the London School of Economics
Ben Byford. AI Ethics Consultant and Podcast Host, The Machine Ethics Podcast. Founder, Ethical Technology Network
Welcome and introduction to the Ethical Technology Network.
Founder, Collective Intelligence
AI regulation: Where are we now?
Emma Erskine-Fox. Managing Associate in the Data, Privacy and Cybersecurity team at TLT.
The last few years have seen the emergence of the world’s first AI-specific laws and regulations. Whilst many principles overlap with existing legal regimes, the new rules pose novel challenges and opportunities for businesses operating in the AI ecosystem.
In this presentation, Emma Erskine-Fox will look at how AI regulation is taking shape, both in the UK and worldwide, and what organisations should be doing to ensure they don't fall foul of the requirements.
She will also consider how businesses can manage different, and potentially conflicting, regimes if they aim to operate at a multinational level.
Implementing Technology Ethics: Going from Theory to Practice. A Case Study
Lisa Talia Moretti. Digital Sociologist. AND Digital
Over the last few years, companies, governments, membership organisations and academics have produced a flurry of ethical guidelines and principles. They have been doing this for artificial intelligence, machine learning, data (and data science), as well as technology more broadly. While these ethical principles and frameworks have been a helpful start, it is time to put them into action if we are serious about making an impact.
This talk is ideal for those who are looking for practical advice on how to implement AI ethics frameworks or for those who find themselves frustrated by the current theoretical tech ethics conversations and want to hear a pragmatic perspective.
Using a case study, Lisa Talia Moretti will share lessons and insights on how to implement an AI ethics framework and spotlight the uniquely human characteristics and qualities that are needed to do this in a world saturated with tech.
Tools for trustworthy AI: Implementing the UK’s proposed regulatory principles in practice
Nuala Polo. Senior Policy Advisor. Responsible Technology Adoption Unit. Department for Science, Innovation and Technology.
In this talk, Nuala Polo will provide an overview of the UK's approach to AI governance, with a focus on how industry and regulators can operationalise the UK's proposed AI regulatory principles in practice. This talk will offer a deep dive into tools for trustworthy AI, with a focus on assurance mechanisms and SDO-developed standards, followed by an update on the UK government's work programme to develop practical guidance and innovative solutions to help organisations use these tools to ensure the responsible design, development, and deployment of AI systems.
What challenges do AI Practitioners face and how can we solve them?
Dr. Marie Oldfield. CEO of Oldfield Consultancy, Executive Board Member of the Institute of Science and Technology, and Senior Lecturer at the London School of Economics.
Working with AI and models that affect society means practitioners must not only demonstrate their skills but also prove that their methodology is robust in practice, whether they are philosophy advisors or computer scientists.
In this talk, Marie Oldfield will introduce a new global AI Professional Accreditation, established by the Institute of Science and Technology.
AI regulation or AI innovation – is that really the choice?
Colin Gavaghan. Professor of Digital Futures. Bristol Digital Futures Institute and University of Bristol Law School.
Recent debates around AI have presented a polarised choice. On one side are those calling for tighter regulation in the face of potential harms.
On the other side are those (including the UK Government) arguing that we should be slow to add new regulations, and should instead "unleash innovation" and reap the incredible rewards of this wondrous technology.
But is it really the case that regulation needs to be the enemy of innovation? In this talk, Colin Gavaghan will explore that question, arguing that the “regulation or innovation” choice around AI is misleading and overly simplistic.
Are You Being Served? AI, Data and Vulnerability in Financial Services
Dr Kevin Macnish. Head of Ethics and Sustainability Consulting. Sopra Steria
Stu Charlton. Experience Strategy Director, Financial Services. CX Partners.
In the wake of the FCA's Treating Customers Fairly initiative and the Consumer Duty, financial services firms are being pressed to identify, help and record outcomes for vulnerable people. More than this, the FCA is keen to see where data (and AI) can be ethically used to achieve this end.
In this talk, Stu Charlton and Kevin Macnish will discuss how to embed ethics into a process that sees the vulnerable served while also being treated with dignity and respect.
Setting the Standard for AI Ethics.
Florian Ostmann. Head of AI Governance and Regulatory Innovation. Alan Turing Institute.
Dr. Matilda Rhode. Sector Lead, AI and Cyber Security. British Standards Institution.
Sundeep Bhandari. Chief Digital Innovation Officer. Head of Digital Innovation. National Physical Laboratory
The AI Standards Hub is a UK-government-funded programme that brings together the national measurement institute (the National Physical Laboratory), the national institute for AI (The Alan Turing Institute) and the National Standards Body (BSI). This initiative and its free online resources and events are dedicated to sharing best practices, providing training, and facilitating knowledge sharing, discussion, and community building, acting as a one-stop shop for understanding global AI standards activity and status, including how to contribute to the development of AI standards.
This talk will introduce the initiative and provide a deep dive on the relationship between current and upcoming standards and AI ethics.
Explainable AI Panel Discussion.
Moderator: Ben Byford. Game Designer, AI Ethics Consultant and Podcast Host. The Machine Ethics Podcast
**The Conference Programme is subject to change**
Partners and supporters
If you are interested in supporting the AI Ethics, Risk and Safety Conference please get in touch with us at firstname.lastname@example.org.