As AI tools become more common in our lives and workplaces, an important question arises: can ethics keep up with the rapid advancements in AI?
Everyday use of AI tools can raise ethical questions around privacy, misinformation and fairness. With the impact that AI is already having on society, we’re sharing our insights on ethics in AI, from implementing responsible AI on client projects to incorporating core ethical AI principles into our Academy training.
At Digital Futures, our engineers tackle ethics in AI every day in real-world scenarios in their roles with our clients.
Data analyst Alistair Boyer works directly with AI ethics at a global financial institution. “I work in a team developing Generative AI to boost efficiency across the bank’s internal processes,” Alistair explains. “However, in the highly regulated financial industry, ethics is an important consideration for us and unrestricted AI is considered too much of a risk to provide directly to our colleagues.”
Alistair’s team is developing a control framework and monitoring tools to manage that risk. He explains, “In my role, I engage with risk stakeholders from around the globe to help them understand our product in the context of rapidly evolving AI regulation.”
Alistair’s colleague, data analyst Rowan Jarvis, also works with Generative AI. His focus is on driving AI adoption and training. “I’m part of a team that’s spearheading the development and implementation of Generative AI capabilities to streamline internal operations, rather than customer-facing applications,” Rowan explains.
For both Alistair and Rowan, the training they received in the Digital Futures Academy is directly contributing to the work they are doing.
Firstly, their skills in Python, web scraping and data visualisation give their teams the capability to measure the performance of the GenAI and carefully manage the risk associated with it. Secondly, their ethics training makes an important contribution to teams where ethics in AI is a high priority. Their work clearly shows how financial institutions can innovate with AI, even in a highly regulated environment. The direct result is a boost to the bank’s ranking in the Evident AI Index – a framework that evaluates financial institutions’ AI governance, risk management and ethical implementation practices.
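The bank’s internal tooling isn’t described in detail, but a minimal sketch of how Python and data visualisation might be used to track GenAI output quality could look like the following. The metric, column names and review data here are illustrative assumptions, not the team’s actual implementation.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative only: assumes a log of GenAI responses that humans have
# reviewed, with a simple pass/fail quality flag recorded per response.
reviews = pd.DataFrame({
    "week": ["W01", "W01", "W02", "W02", "W02"],
    "passed_review": [True, False, True, True, True],
})

# Weekly human-approval rate: one simple signal a risk team might monitor.
weekly_rate = reviews.groupby("week")["passed_review"].mean()

weekly_rate.plot(kind="bar", ylabel="Human approval rate",
                 title="GenAI output quality by week")
plt.tight_layout()
plt.show()
```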
Digital Futures software engineer Kenichi Beveridge, placed at a UK retail bank, also works closely with AI. He recently collaborated on a proof of concept (PoC) with IBM, integrating their WatsonX AI technology with a FileNet environment.
The project showed that the GenAI could reliably categorise complaints according to industry guidelines, cutting a process that typically took 40 minutes down to just a couple of minutes. The GenAI could also search across multiple documents to return accurate customer details, and it was able to redact sensitive information.
The power of the technology and innovation was clear to see. However, the ethics of AI remained central to the project, with an emphasis on keeping the “human-in-the-loop.”
“While WatsonX could automate a lot of the process, it still required human affirmation, partly to help train the AI but also to keep the agency with the user,” Kenichi explains. “IBM’s approach to GenAI is to ‘add value’ to the human decision-making process rather than replace human input altogether.”
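The integration itself isn’t detailed here, but the human-in-the-loop pattern Kenichi describes can be sketched in a few lines of Python. Everything below is an illustrative assumption: suggest_category and redact stand in for calls to a GenAI service, and the reviewer’s confirmation gates whether the AI’s suggestion is ever applied.

```python
# Illustrative human-in-the-loop sketch; not IBM's actual WatsonX or FileNet API.

def suggest_category(complaint_text: str) -> str:
    """Placeholder for a GenAI call that proposes a complaint category."""
    return "Billing dispute"  # assumed example output

def redact(complaint_text: str) -> str:
    """Placeholder for a GenAI-assisted redaction step."""
    return complaint_text.replace("John Smith", "[REDACTED]")  # assumed example

def process_complaint(complaint_text: str) -> dict:
    suggestion = suggest_category(complaint_text)
    redacted = redact(complaint_text)

    # The AI only suggests; a human must affirm before anything is recorded.
    answer = input(f"Proposed category '{suggestion}'. Accept? [y/n] ")
    approved = answer.strip().lower() == "y"

    return {
        "category": suggestion if approved else "NEEDS_MANUAL_REVIEW",
        "redacted_text": redacted,
        "human_approved": approved,  # the affirmation can also feed back into training
    }

if __name__ == "__main__":
    print(process_complaint("Complaint from John Smith about a duplicate charge."))
```

The key design choice is that the model only proposes; recording the human’s approval keeps agency with the user and gives the team a signal for improving the AI over time.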
Ethics in AI is a core element of the Digital Futures Academy. It is vital that people have the right skills and knowledge so that ethics can keep pace with innovation in AI. Our training focuses on the principles of accountability, transparency and fairness. These principles are designed to reduce controversy, minimise bias and, crucially, ensure that humans understand how AI makes decisions.
“Ethics isn’t something that you can tack onto a process later; it needs to be considered from the start,” says Alex, Data Science Instructor. He adds, “Our training aims to equip Digital Futures engineers with an ethical mindset, which is incredibly powerful for the teams they join at our clients.”
Ultimately, for ethics in AI to keep up with innovation, we need a shift in behaviour, not just technology. Alex explains, “Our training promotes this shift by incorporating ethics throughout the 12-week programme. From simple models to complex ones, thinking about ethics in AI starts to become second nature for our engineers.”
Whether AI ethics can keep pace with innovation depends on whether you choose to make ethics a priority. At Digital Futures, we’re committed to equipping our engineers with the right skills to ensure that ethics in AI can keep up.