Technology is becoming increasingly informed by artificial intelligence. In this blog, Service Innovation’s Emerging Tech lead, Dr Hazel Bradshaw, discusses AI, ethics and responsibility.
Technology is becoming increasingly informed by artificial intelligence (AI). It’s enabling greater automation of everything from mundane things, like the sensors in soap dispensers, to complex tasks that machines can perform faster and more accurately than humans.
Think: driverless cars, Siri and Alexa, and the many websites that already have a ‘chatbot.’ Advanced chatbots use a form of AI known as Natural Language Processing, which means they try to figure out what you’re saying, what you want to know, and then reply.
What is AI?
There’s no single agreed definition for AI. New Zealand’s AI Forum defines AI as “advanced digital technologies that enable machines to reproduce or surpass abilities that would require intelligence if humans were to perform them.”
In a vast array of areas, AI will in future automate mundane tasks so that real people can do important human work, says Dr Hazel Bradshaw, Emerging Tech lead in DIA’s Service Innovation team.
The potential of AI
Hazel gives the example of when a wallet is reported lost: AI can potentially be used to process and file that initial information for further police investigation, cross-referencing to other cases and completing paperwork for insurance purposes.
“For each of the thousands of wallets and mobile phones lost, cars reported missing and the like, that’s thousands more hours each year that police might instead spend working with people, solving crime and being at the frontline,” says Hazel.
The future of AI in delivering government services
AI will feature prominently in the delivery of government services in future, she says. “AI won’t sit in a silo — it will enter our lives in the same way computers, mobile phones and the wider digital world have already become integral to our daily activities. And that means we will need a greater diversity of people involved in designing and programming digital technology.
“This needs to be society-led, not tech-led. We need more people to engage with AI. We need sociologists, environmentalists, teachers and people with other world views involved in design.”
She says the Christchurch terror attack confronted people with a sudden realisation of just how powerful the online world is, and the huge — in this case negative — influence it is starting to have over what happens in the real world.
A call for ethics and responsibility
“Digital technology skews how we view the world, so introducing this new intelligence into the world has seen a growing drive for responsibility and ethics.
“AI doesn’t have ethics, it just reflects what it’s taught. And as we program tech to start thinking in increasingly sophisticated ways, AI will still only ever reflect the data it has been fed.
“The growing concern is that if we feed AI the online equivalent of ‘sweets and junk food’ — the current free-for-all data that exists in the online world — we risk building innate biases into the technology. Where AI is used, for example, to process employment applications, it’s likely you’ll end up reinforcing stereotypes.”
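The point that AI “will still only ever reflect the data it has been fed” can be shown with a deliberately naive sketch. Here a screening rule is “learned” from a tiny, invented set of past hiring decisions that happen to favour applicants from one school; the rule then simply replays that historical skew. The data and the scoring rule are both made up for illustration.

```python
# Sketch: a naive screening score derived from (invented) historical
# hiring records. Because the past decisions were skewed, the learned
# score reproduces the skew - the model reflects its data, nothing more.

past_hires = [
    {"school": "A", "hired": True},
    {"school": "A", "hired": True},
    {"school": "A", "hired": True},
    {"school": "B", "hired": False},
    {"school": "B", "hired": True},
]

def learned_score(school: str) -> float:
    """Fraction of past applicants from this school who were hired."""
    relevant = [r for r in past_hires if r["school"] == school]
    return sum(r["hired"] for r in relevant) / len(relevant)

print(learned_score("A"))  # 1.0 - the historical bias, replayed
print(learned_score("B"))  # 0.5
```

Nothing in the code is malicious; the bias comes entirely from the records it was given, which is exactly the concern with training AI on unvetted real-world data.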
Our Emerging Tech team is working on a 20-year roadmap identifying which new technologies are on the horizon, including AI, so government agencies can understand their potential value as well as the risks and ramifications.
Service Innovation’s Emerging Tech lead, Dr Hazel Bradshaw.
Hazel is on the Law, Ethics and Society Working Group of New Zealand’s AI Forum.
The Service Innovation team works with other agencies across a range of projects, usually focused on improving services around a Life Event. Its work is about creating opportunities to work in different ways; exploring the ‘unobvious.’ The team works collaboratively and openly.
If you'd like to stay across the work from the Service Innovation Lab, please join our mailing list.