- What is Artificial Intelligence (AI) and Machine Learning (ML)?
- What are the impacts of AI/ML on New Zealand's government, economy and education system?
- How can they be used for better service delivery?
- What are the requirements and preconditions for AI/ML use for service delivery?
AIs are either applied or generalised. Applied AI is most common and is usually created to carry out specific tasks based on data and rules, e.g. manoeuvring an autonomous vehicle. Generalised AIs are less common: systems or devices which in theory can handle any task. Development in this area has led to the field of ML, which is based on the idea that, given enough data, an AI can work out how to do a wide range of things by itself.
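To make the rules-versus-learning distinction concrete, here is a minimal sketch: instead of a programmer hand-coding a decision rule, the program works the rule out from labelled examples. The task and data are invented purely for illustration.

```python
# Instead of hand-coding "temperature above X means overheating",
# the program learns the threshold X from labelled examples.

def learn_threshold(examples):
    """Learn a decision threshold from labelled (value, label) pairs.

    Tries every midpoint between adjacent sorted values and keeps the
    one that classifies the most training examples correctly.
    """
    values = sorted(v for v, _ in examples)
    candidates = [(a + b) / 2 for a, b in zip(values, values[1:])]

    def accuracy(t):
        return sum((v > t) == label for v, label in examples)

    return max(candidates, key=accuracy)

# Labelled examples: engine temperature readings, True = "overheating".
training_data = [(60, False), (65, False), (70, False),
                 (90, True), (95, True), (100, True)]

threshold = learn_threshold(training_data)
prediction = 92 > threshold   # classify a new, unseen reading
```

The same idea scales up: real ML systems learn far more complex rules from far more data, but the principle of deriving behaviour from examples rather than hand-written logic is the same.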
AI is a great way to reduce the complexity of systems – like government service delivery – by spanning a wide range of data, systems and processes while keeping services maintainable and consistent and providing a friendly service experience. But does New Zealand have the skills, strategy and coordinated response to the opportunities and challenges of AI?
AI shaping a future New Zealand
Note: Ben’s presentation is via Skype so the video shows the screen he was displaying on.
> Transcript of Emerging Tech: AI shaping a future New Zealand
--the classic quip is, we're really good at doing AI, but we just can't do AV.
But we seem to have got it. OK. Look, so I'm just going to give a brief talk about the recent report that we put out at the beginning of May, "Artificial Intelligence-- Shaping a Future New Zealand." And so this is a report where we looked across the whole of the landscape in New Zealand of who's doing what with AI and then looking at around the world from an economic and a social society point of view and, really, just seeing how New Zealand stacks up and really to identify what are the opportunities and some of the challenges and get a better understanding of AI altogether.
That's me. So I head up the AI Forum. We're an independent organisation, not-for-profit. And we're part of the New Zealand Tech Alliance, alongside a number of communities including NZTech itself, FinTech, the IoT communities, HealthTech and EdTech as well. So as I said, we're a broad-based organisation.
We bring together government. So we had great support during our first year from MBIE's Digital Economy team, and also from global technology companies, local corporates here in New Zealand, economic development agencies and all the universities, right through to some of the exciting AI start-ups that are coming through.
So there's a big logo map there. And so I really do encourage everybody to find out a bit more about the Forum and do get in touch to get involved. So the report, it's only 108 pages, so easy bedtime reading. I won't really talk too much in depth about the whole range. I'm just going to pick out a few highlights. But there's a picture on the screen there of the highlights page. And from an economic viewpoint, we basically got a team of economists to do some modelling and look across, what are the potential upsides across various different industry sectors from applying AI and automation to the economy?
And the key number there is there's potentially up to $54 billion GDP boost by 2035. I'll go into a little bit more of how that breaks down just later on. We did look at the impact on jobs. And there's a constant narrative with points of view at both ends of the spectrum on whether AI will involve lots of job displacement or whether, actually, jobs will continue to evolve over time.
And actually, AI will free us up to do the more complex, higher-value tasks and leave the manual repetitive tasks to machines. So the analysis that we looked at there is quite subtle, but, basically, what we find is that there's no particular evidence that this is a technological revolution, a new technological paradigm unlike the paradigms that we've seen before. So we go back and talk about when cars replaced horses: around 1900 there were lots of blacksmiths. And 20 years later, not so many blacksmiths, but lots of car mechanics.
And then similarly, I think there's a stat in the report where, in the 1960s, there were 21,000 typists and stenographers in New Zealand. And there are next to none now. But that doesn't mean there are 21,000 job losses. It means that people have just been able to, over time, retrain and to apply their skills to new roles and new tasks enabled by new technologies.
So again, I'll talk a little bit more about that. I think the key finding from our report, when we looked around the world, is that a number of our peers in the OECD have been developing and investing in significant nationally-coordinated AI investment strategies. But to date, New Zealand hasn't. And so our real call to action-- I'm going to talk to this in a moment-- is that we really need to start working on a coordinated national strategy for AI to maximise New Zealand's opportunities from the technology.
And then also, the report is called "Shaping the Future of New Zealand." And we raised the point that these technologies are going to have a major impact on our future lives. And we're just starting to see that. And they're going to permeate through all aspects of our work, all aspects of our daily lives. And if we don't start investing now, to actually get control of some of the levers to shape our future, then, increasingly, these technologies may be shaped for us. So that was our call to action.
Given that I'm talking to a government audience as well, there was a piece of research done by the Oxford Institute, I think it was, at the back end of last year, which looked across all 35 OECD countries for government AI readiness. And New Zealand did actually stack up reasonably well there in ninth position.
However, from the work that we did, talking to people across government, we did find that AI capability and use by New Zealand government is pretty disconnected. I think the people in the room, and the work that LabPlus is doing is probably at the forefront of public sector AI capability in New Zealand. And so there's definitely a call to action, again, to work more towards a horizontal capability across government here, rather than operating in silos.
Probably the last main point is also just to acknowledge the shortage of talent and the worldwide competition for AI experts. So, really, how are we, as a small country in the South Pacific, going to attract, retain and also educate through our education system enough workers, not just to develop the technology, but also to work alongside the technology in the future? So lots of questions there.
As part of going out around New Zealand to look at the AI activity that's happening at the moment, we sort of went out and discovered this Cambrian explosion, if you like, of the new firms experimenting and organisations working with them and thinking about AI. And so there's a bit of a logo map there that we've put out. I think we've done three iterations of this now. There's another one, coming with about another 20 logos soon.
And so you can see, there's a wide variety of organisations right through in the private sector, in the research and education sector-- so lots of universities are starting to think deeper-- well, have been for a long time. And this is actually one of the things with AI is many of the theories and the technologies that we're using today are based on academic work that was done back in the '80s and '90s, and a lot of it, actually, in Canada, interestingly.
But it's really only now, with the growth in readily available cloud computing, the ability to do massively parallel computations using GPUs, Graphics Processing Units, and also the increasing amount of data and the ability to store and move around large quantities of it. These have all been the technological drivers, if you like, of the capabilities we're starting to see come through now.
And then, also, you can see, in the public sector section there-- so again, a small number of organisations working with and thinking about AI. And we've obviously come across more since the report has been out. So I would encourage you, if your organisation is working with AI, then please do let us know. We can include you on the map. This has been a really good exercise to get a benchmark. There's about 140 logos on this map. And it's actually quite a good exercise just to connect people together and for people to understand who is doing what and drill a bit deeper.
As I said, we did look around the world, and found other countries investing in national AI strategies. In particular, obviously, the US has a lead with its Silicon Valley ecosystem. Just before the Trump administration came in, the Obama administration put out a paper calling for a national AI strategy. But that was promptly pulled from the White House website. And I think you can find it on the archived site now. That was a really good piece of work.
Interestingly, in the absence of that, China has moved ahead significantly, in terms of its investment. And they put out a national strategy last year with the stated aim of creating the world's leading AI industry by 2030. And they're talking about a $150 billion AI industry by 2030 as well. Parts of their strategy involve teaching AI in schools. And China has a very different attitude to data and data privacy than we see here in the West.
And so some of the challenges that we're seeing in the West here, with regards to data privacy, have not, basically, constrained some of the technology companies and businesses that are operating in China. The main example, as I'm sure everyone has seen, is the stories of what you can do with mass surveillance and facial recognition in China right now.
Another anecdote, again coming out of China, is that Microsoft is hiring 5,000 machine-learning engineers and researchers into their research centre in China. So look at the scale of New Zealand relative to the amount of investment that's going on in China and in other parts of the world; the European Union came out with a $20 billion investment pool about a month ago.
And so we're really seeing other regions around the world starting to recognise the importance of these technologies for the future. And part of our report is really just to acknowledge this and ask ourselves the question, do we see this as important for New Zealand's future as well? And if so, what do we do?
So there is research happening in New Zealand. And we talked to a number of universities there. I was actually talking to Professor Meng Zhang at Victoria yesterday. And he's got a team of, I think, 12 staff and six post-docs and 30 PhD students currently in the AI department at Victoria. So there's a significant amount of AI research happening up and down the country.
We did notice a challenge getting that research out of the lab and commercialised. And it's a classic challenge that we have in New Zealand of commercialising university IP. But other ecosystems around the world do seem to do it better than us. And so we should look at ways to maximise the capabilities that are coming out of the universities here and turn that into benefits, not just in the commercial sector, but potentially applied across government as well.
One example that's come out of the University of Auckland is the company Soul Machines. You may have seen their talking heads, what they call digital humans, really aimed at customer service use cases, automating and personalising customer service processes. We looked out across what's happening in AI, around the world and in New Zealand right now. And I think the key takeaway is that these are extremely horizontal technologies. So they are applicable across pretty much any domain, certainly any domain where you have data.
And so the key business drivers for them will be, if you're in the commercial sector, to make more revenue, and, across all organisations, to be more efficient and save cost. And the third real driver, I think, is to improve customer experience. So you see the talking head down on the bottom left.
We're obviously seeing chatbots as well. But it's really just the ability to drive personalised experiences. And I know that Nadia, Pia and the team have been looking at the opportunity to use an AI to interface with government as a platform, an API if you like, and, basically, to provide a personalised experience, learning from your own personal data profile.
Just some other examples: we came across lots of environmental use cases. Top left there, a company called Orbica in Christchurch is doing some pretty amazing stuff with geospatial mapping, taking satellite images and drone images, and then running machine learning algorithms across them to identify water bodies. And Kurt, who was actually on the panel yesterday, was saying that mapping out braided rivers like that normally takes days, if not weeks, of analyst work.
And they can do it in about 40 seconds now from a photo. Then they can potentially stream this live from a satellite soon. And so pretty much any water body, any building, counting trees, identifying wilding pines -- lots of new use cases have opened up just by opening up this topography data, which generally just sits in an archive to provide nice-looking web maps a lot of the time.
Talking about MoleMaps there-- so in the health industry, in the health sector, huge amounts of visual data-- so pictures. And so one of the things that AI technology is really good at is finding patterns in images. And so there's a company MoleMap who are analysing photos of moles and, basically, improving skin cancer detection.
Bottom right is a great story of a New Zealand logistics firm that's replaced their spreadsheets with machine-learning algorithms. And they're basically able to predict the demand for and also optimise, basically, where the shipping containers are placed and onto which ships. So again, a key use case is to take data, a lot of data, to learn from it, to see the patterns in it, and then to use those patterns to make predictions about the future and then also to use those patterns to inform your decisions going forward.
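The pattern described here -- learn from historical data, then use the learned pattern to predict demand -- can be sketched very simply. The talk doesn't describe the firm's actual model, so this is only a toy illustration of the idea, with invented figures.

```python
# A toy version of "learn patterns from data, then predict": forecast
# next period's container demand as the average of the last few weeks.
# Real systems use far richer models, but the prediction loop is the same.

def moving_average_forecast(history, window=3):
    """Predict the next value as the mean of the last `window` values."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Invented weekly demand figures for shipping container placements.
weekly_container_demand = [120, 130, 125, 140, 150, 145]

forecast = moving_average_forecast(weekly_container_demand)
```

Even this crude forecast beats a static spreadsheet assumption, because it updates automatically as new data arrives; more sophisticated models simply extract richer patterns from the same history.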
So I said I'd talk about the economy. I'm not sure how I'm tracking for time. I'll try and speed up a bit. But just, in terms of the economic impact, we did do the analysis across sectors. What this breaks down to is that, largely, the economic benefits come from labour efficiencies -- so this is where roles are able to be automated.
And these will often be more white-collar roles going forward. So you can see down on the bottom, mining, and agriculture, forestry, and fishing. These are industries which are actually largely automated. That's not the only factor. The other factor is what's called the absorptive capacity, which is the ability for that particular sector to innovate, to take advantage of new technological changes.
So the financial insurance services at the top there has huge potential to deploy AI and to make massive efficiencies and, similarly, manufacturing and construction. So please do-- and sorry I didn't say it at the beginning. But please do take the time to download the report. It's available for free on our website aiforum.org.nz. I'll say it at the end as well. And you can see the details of this analysis and a bit more of how these numbers are made up.
But pretty much, the biggest opportunities are in those industries right now. Moving on -- so look, we didn't just go across the business impact and the economic impact.
So look, there were major concerns, obviously, about some of the ethical issues raised by automated systems and algorithms. And I think there are reasonably well-trodden discussions about AI bias now. And certainly, the media has really woken up to some of these issues. And obviously, the Facebook and Cambridge Analytica events recently have really raised the dialogue, if you like, the ability to converse about the questions of data privacy and so on.
So this is one of the things we were seeing when we went into the report: people's understanding of what AI actually is is largely informed by science fiction. So people triangulate off Blade Runner and Terminator and Ex Machina, all of which feature humanoid robots.
So one of the things we deliberately tried not to do in the report is have pictures of robots in there, trying instead to illustrate AI using other techniques and make it more tangible. And to be honest, I've really noticed an increase -- maybe some comments from the room at the end -- in the level of dialogue about some of these issues in the mainstream, which is really nice.
We talk about safety and accountability as well. There's a picture of, basically, an autonomous killing machine, bottom right. So these technologies are not just used to make lives better. They can also be deployed for malicious use. And really, there are examples now of autonomous weapons. There is a moratorium being worked up in and around the United Nations, and a Kiwi, who works for Human Rights Watch, is actually heading up some of the work that's happening there.
So go and look at the Campaign to Stop Killer Robots for more information. But certainly, New Zealand hasn't got a position on lethal autonomous weapons currently. And I think it's an opportunity for us to take a lead there as well, because I don't think there would be many New Zealanders who would want to see the proliferation of, basically, weapons which do not have human control over them.
And this applies to defence as well as offence. And so for our cybersecurity strategy in New Zealand, we really need to start understanding potential AI-driven attacks. And then, how do we use AI to defend against them?
The AI Forum's part of the International Partnership on AI. So I won't talk too much about that. But that's an international body made up of lots of the large tech firms and other universities and think tanks around the world who are working on those six themes to ensure that AI does benefit people and society and not just shareholders.
So these are the report's recommendations. And sorry, I'm expecting that you can't see them; I'll just move that down. So as I said, our key call, really, is theme one there: to coordinate our AI research and development, encourage AI uptake and maximise the opportunities. And so we expect that, when the new chief technology officer is appointed, this will form part of the work of New Zealand's wider digital strategy.
Theme two is creating awareness for discussions, continuing the work that's being done by the AI Forum and others to move away from talking about Terminator and superintelligence. It is a conversation that's worth having: will machines take over one day? There's a quip from Andrew Ng, who's a professor at Stanford and a real AI guru globally, responding to Elon Musk talking up the existential threat from AI. He says it's like trying to land a person on Mars while worrying right now about overpopulation on Mars.
And so yes, these issues definitely need consideration. But by virtue of celebrity endorsement, they've dominated the discussion. And actually, economic competitiveness and those ethical and social issues that we talked about are probably more important and more tangibly in front of us right now.
I will just speed things up. So theme four is increasing data accessibility. Data is the fuel that feeds AI. And in New Zealand, we really could do better, I think, at opening up our public data sets, but also in health as well. So I'm at a health conference today. And a real challenge is taking data out of silos across the health sector to actually deliver better health outcomes.
Theme five is to grow the talent pool. So we, too, would like to see AI taught in New Zealand schools, certainly data science principles. There's a new digital curriculum that's just being rolled out currently. And we'd like to see at least some of the principles and a basic understanding of how these techniques are used. And part of that is, we need to, basically, upskill teachers.
And the other thing is, how do you get started with AI? There are plenty of online courses that you can take for free and plenty of others which are very reasonably priced. A really good place to start is the Stanford AI course by Andrew Ng. It's actually something we're going to put up on our website soon, once we get through Tech Week: where can I go to teach myself and get skilled up? And then a lot of the vendors -- Google, AWS, Microsoft -- have got courses online to get yourself skilled up and just start using the stuff.
And then finally, the recommendations there are around adapting to the effects and the ethical concerns, really establishing ethics and society working groups, making sure that New Zealand's voice is heard and that we are basically ready for some of these changes as they come.
Now, I'll just move on to the next-- so talking about an AI strategy for New Zealand, what does that look like? One of the diagrams from the report is this value chain diagram, which really calls out how data basically fuels the AI applications that drive the social and economic outcomes that we want.
And that's supported by trust, regulation, research, skills, and also investment capital. And so we really do see the AI strategy as a series of investments in each of these buckets and really working out what it is that we're wanting to achieve to grow the economy and to make government more efficient and to make sure that the society is more fair as well.
So our work programme starts here. So we are working with all of our members and talking to a number of people throughout government as well. So if we're not already talking, please do get in touch-- contact details on the next slide-- we'd be really interested to understand how your organisation is approaching AI and what the AI Forum could do to help.
And also, I really encourage the work that Pia and Nadia and the team are doing to build a horizontal capability across government. These are not skills that can really be deployed in isolation. The Government Chief Data Steward at Stats has a really big role to play here as well in ensuring that data is released, that we have the right data there, and that it's joined up with the right governance around it. And yes, so please do download a digital copy of--
Digital humans and what to feed them
> Transcript of Emerging Tech: Digital humans and what to feed them
Hey, everyone. Thanks for inviting me to speak today, albeit not in real time. So my name is Victor Yuen. I am the head of product at FaceMe. I lead product strategy and product development for our AI digital assistants. So you might have seen some of our digital assistants around. One that's had quite a lot of time in the public eye is the Ministry for Primary Industries' AI digital assistant called Vai, based at Auckland International Airport, helping people with biosecurity questions: which line do I choose, where do I get my bags, where are the toilets.
So she is a lifelike digital human, a digital assistant. You can walk up and literally have a conversation with her, and she can help answer your questions. So, first of all, thanks for allowing me to speak, even though I'm obviously not there. At the time of you watching this video, my wife will have just given birth to our second child, so I'm at home with family. What I wanted to talk about, at a very high level -- it's obviously not a very long talk -- is what the opportunities for AI are, what some of the considerations are that we need to look at, and how it could be used.
So the area that is very much of interest for us, we quite often look at how can we take a very complicated experience, a complicated set of processes or systems and make it much simpler. And government is a really great candidate for that because you've got multiple departments, lots of different touch points, lots of different people, processes, policies, and they're changing all the time. And you do also need to keep very consistent between them.
So an artificial intelligence is a great way to span a wide range of data and systems and processes, keep that service absolutely consistent, have it be very maintainable and provide a very warm and friendly experience as well at the same time. Because what we found is that people still want to have that human connection or at least have that friendly, knowledgeable service. So to achieve that sort of thing, data is a big part of it as I'm sure most people know. Data is a big part of how you get really good machine learning or artificial intelligence performance.
So the quality of the data, the accuracy of the data, and also the breadth and the amount of data that you have available to you. If you were to make recommendations, or make assessments in terms of providing benefits, or advise on particular items, or even answer questions, additional data allows us to build much better AI experiences. So when it comes to data, there are some really important considerations. First of all, how do you get really good quality data? How do you make it accurate?
And that really comes down to capture at the beginning. So UI development, UX -- how do you capture the data? How do you ensure it's audited and checked? With free-form fields, for example, you're going to get much more variable data than if you restrict the field to only accept numbers for a phone number. Email validation -- even simple things like that are very basic examples.
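The phone and email checks mentioned here can be sketched in a few lines. This is a minimal, deliberately simplified illustration of capture-time validation; the patterns are not production-grade and the example values are invented.

```python
import re

# Constrain free-form input at capture time so the stored data is
# consistent enough to learn from later.

PHONE_RE = re.compile(r"^\d{7,11}$")                  # digits only
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # rough shape check

def clean_phone(raw):
    """Strip common separators, then accept only if digits remain."""
    digits = re.sub(r"[\s\-()+]", "", raw)
    return digits if PHONE_RE.match(digits) else None

def valid_email(raw):
    return bool(EMAIL_RE.match(raw.strip()))

phone = clean_phone("(04) 123-4567")       # normalised to digits
ok = valid_email("kiri@example.govt.nz")   # passes the shape check
bad = valid_email("not-an-email")          # rejected at capture time
```

The point is that rejecting or normalising bad input at the form is far cheaper than trying to reconcile thousands of inconsistent free-text values when you later collate the data for a model.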
But then think about the architecture. To take a more advanced example, think about contact centres or email transcripts and things like that. How does that data get stored? What systems are storing it? And how can you actually create neural network or artificial intelligence representations of that data? How do we capture audio at a level of quality that allows us to transcribe the conversation and be able to use it?
How do we make sure that the conversation is separated into the customer service agent and the customer? If all these considerations across the many different systems can be addressed, they make the process a lot easier later on down the track when you actually collate all that information.
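One way to store a conversation so that agent and customer are separated, as described above, is to keep each utterance as its own labelled turn rather than one undifferentiated text blob. The schema below is an assumption for illustration, not FaceMe's actual format, and the dialogue is invented.

```python
from dataclasses import dataclass

# Hypothetical transcript schema: each utterance is a labelled turn,
# so downstream ML steps can tell who said what.

@dataclass
class Turn:
    speaker: str   # "agent" or "customer"
    text: str

transcript = [
    Turn("customer", "Can I file my GST return late?"),
    Turn("agent", "Yes, but a late filing penalty may apply."),
    Turn("customer", "How do I request an extension?"),
]

# Because speakers are separated, later steps can, for example, pull
# out just the customer questions as training data for an assistant.
customer_questions = [t.text for t in transcript if t.speaker == "customer"]
```

Storing transcripts this way from the start avoids the much harder problem of separating speakers after the fact from a single merged text stream.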
Then you've got things like the communication of what you're going to do with the data. Obviously, you can't just collect all the data and not expect a backlash from a perspective of privacy. Facebook would have been the main example of that recently.
It's very much front of mind. And it's important in the user experience to be able to communicate, what are we doing with the data? What is the intention here? What data is being collected?
And then, to be able to give people the opportunity to pull that data. Therefore, the way we manage the data models and data structures, and how you get access to them, matters: if you architect the system right from the beginning, you can really reduce the overhead of maintaining that data, and also give users of the system the ability to maintain that data themselves. This is really where we want to get to.
When it comes to systems, I know Pia has been doing a lot of work around this. It's the APIs and the integration.
It's very difficult to go into an old system to pull unstructured or poorly structured data out and then use it for artificial intelligence. Whereas if you have an API or at least a standard API service, you're able to call from the system, to pull the data into a place where you can use it.
Having APIs into all the different data systems of all the different departments and ministries would be very helpful, especially when you start to build artificial intelligence services that span across multiple departments. I think that's actually quite important.
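A sketch of why standard APIs matter for services spanning departments: each department's API returns records in its own shape, and a thin adapter maps them onto one shared schema before any model sees them. The department names, payloads and field names below are entirely invented for illustration.

```python
import json

# Hypothetical adapter layer: normalise department-specific JSON
# records into one common schema that an AI service can consume.

def normalise(department, payload):
    """Map a department-specific JSON record to a shared schema."""
    record = json.loads(payload)
    if department == "transport":
        return {"person_id": record["licenceHolderId"],
                "status": record["state"]}
    if department == "revenue":
        return {"person_id": record["taxpayer_ref"],
                "status": record["filing_status"]}
    raise ValueError(f"no adapter for {department}")

a = normalise("transport", '{"licenceHolderId": "P123", "state": "active"}')
b = normalise("revenue", '{"taxpayer_ref": "P123", "filing_status": "filed"}')
```

With adapters like this behind a standard API, a cross-department assistant only ever deals with one schema, and adding a new department means writing one more adapter rather than reworking the whole service.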
And then, as you can kind of start to get a sense, there's a lot of considerations to go through around the architecture of these systems and the communication around how data is managed and how you communicate to the public. And so having the right policies in place to be able to audit that data, to understand what data you have.
Some of you may have heard about how machine learning algorithms, whilst they are inherently objective as a technology, become biased by the data that is provided to them. And the data may be completely accurate, but the data itself may be biased towards, for example, the granting of benefits. The assessment of benefits may be completely skewed based on the data that you put in there.
Whether a person in a particular situation is eligible for a benefit, the data examples that have been given historically may not actually be the way that you want to proceed moving forward. Therefore, using historical data may actually bias the decision that you want to make.
Having the data, and being able to assess or independently review whether it is actually aligned with how you want the policies and the vision of these services to be executed, matters. Having overarching policies or guidelines around how the data for these systems is gathered, collated and utilised is going to be really important.
If we can just take a step back for a minute: I dove into the weeds of data and bias, and there's a lot more to discuss in those areas. To start off, it's really just having discussions around what service is going to be created, what sort of data is going to be required, and what might be the standard practices that should be shared across all of government. So not doing things in silos.
I think Pia and her team have been doing a lot of good work around blogging about the sort of things that they do. Opening up that discussion is really important.
When it comes to actually the practical application of it, there's a couple of use cases that we often work on with our artificially intelligent digital assistant. Customer service roles is an obvious one.
For example, if you've got services you want to provide 24/7, it's not always feasible to provide a high volume of service or even anybody available to answer, which is often the case with government services at the moment. Being able to talk to IRD at 10:00 at night about my GST or income tax if I'm a small business owner, that's actually really valuable. I want to do that work then, if I've got questions.
Using a digital assistant makes a lot of sense at that point because you're able to enable a service that wasn't previously going to be available. And you're not displacing a job when you're doing something like that. That's obviously a very big consideration to think about.
And the other way is as an assistant to customer service roles or advisors within government, given the sheer complexity and breadth of all the different systems, all that data, and all the changing parameters. As an advisor, I might be speaking to someone, but having a digital assistant that I can actually call on to ask specific questions about policies, or to ask for particular bits of information, or to complete a form or a process for me -- that could make me far more efficient.
And it could create a much better experience, because that customer or user is able to speak with me about something, and I can actually address their needs in that one conversation rather than having to hand off and get back to it, that sort of thing.
I think that probably gives a bit of an overview. It was only meant to be a bit of a short talk.
There are other things that will need to be considered around all this. But ultimately it is thinking a bit strategically about the problems that need to be solved.
How do we start dealing with the data? How do we start dealing with the privacy issues? Starting a conversation around it and then really starting in small areas, and designing and building out, and experimenting with small bits at a time.
It's the best way to go forward rather than look at hefty projects. But I think AI has a major part to play in transforming how we deliver government services to the population.
All right. Thanks for your time.
That's me for today. I can't answer any questions, but one of my colleagues hopefully has attended this call and is able to do that for us. Cheers.
The (hu)man in the machine
Donal Krouse, Senior Research Scientist at Callaghan Innovation, talks about the status and outlook of machine learning.
> Transcript of Emerging Tech: the status and outlook of machine learning
[I'd like to thank] the Service Innovation Lab here, and Nadia in particular, for giving me the opportunity to talk about one of my favourite topics, which is machine learning. I'm a data scientist from Callaghan Innovation. I'm based down at Gracefield. And I'm also in a big group that does IoT in terms of sensor infrastructure. They actually make sensors and give us the data we need to do our jobs. So that's really cool.
Today, I'm not going to talk about what we actually do in terms of AI machines, the software we use, and that kind of thing. But-- how many coders are out there? Thank goodness, because I'm actually going to talk about a computer programme, an algorithm, and that kind of stuff.
I might have to wind that back a bit, but I tried not to be technical in this talk. The only technical bit will be right at the end, where I actually address specifically what I think you're going to need to do this across government stuff, in terms of privacy, and that kind of thing. And as a teaser or tickler or something, it's called Fully Homomorphic Encryption. OK, so it's not quite blockchain or anything like that, so I'm leaving that right to the end.
So other people bear with me. I'm swinging around, relaxing a bit here. So status and outlook-- my view of the status of machine learning and the outlook. Callaghan Innovation has produced a white paper, so I'll refer you to that. And then I'll give you my spiel on homomorphic encryption.
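The homomorphic encryption teaser can be illustrated with a toy sketch. This is a hypothetical example, not anything from the talk, and it is not fully homomorphic encryption or secure: textbook RSA happens to be multiplicatively homomorphic, so a party holding only ciphertexts can compute on data it cannot read.

```python
# Toy illustration of the homomorphic idea (NOT real FHE, and not secure):
# with textbook RSA, multiplying two ciphertexts yields a valid ciphertext
# of the product of the plaintexts.

def rsa_keys():
    # Tiny fixed primes, for demonstration only.
    p, q = 61, 53
    n = p * q                      # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                         # public exponent
    d = pow(e, -1, phi)            # private exponent (modular inverse)
    return (e, n), (d, n)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

pub, priv = rsa_keys()
c1, c2 = encrypt(6, pub), encrypt(7, pub)
# Multiply the ciphertexts without ever decrypting them...
c_product = (c1 * c2) % pub[1]
# ...and the decryption is the product of the plaintexts.
print(decrypt(c_product, priv))    # 42
```

Fully homomorphic schemes extend this so that *any* computation, not just multiplication, can be carried out on encrypted data.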
So a fun place to begin with machine learning: we have to fly back in time to 1770. This is a reconstruction of-- let me remember his name-- Wolfgang von Kempelen's chess-playing machine. This machine, known as the Mechanical Turk, would play people at chess and win, which was surprising. And this guy toured with it-- he also had a speaking machine.
He was Hungarian, an inventor and an author. And I'd love to have been around at that time. But ultimately, it was revealed as a hoax. There was actually a person-- not a ghost in the machine, but a person in the machine, a small person in the machine.
And I think that's great. I think that's a very deep thing for us to understand. Humans are easily fooled by things, otherwise magic wouldn't exist. That wonderful talking horse, the counting horse, was revealed as a fake not so long ago. And I'm really disappointed.
But to bring this back to where we are now, this is profound. Because Amazon has a Mechanical Turk marketplace, where once the machines have taken over, they still need humans underneath to do all the good stuff. And so you and I, once we've been displaced from our high-value jobs, if we have them, will be able to go into this marketplace and tag photos for people-- no, no, do that move instead-- or handle the hard translations they can't quite do. Do you speak Armenian, or Māori, for goodness' sake?
So this is the kind of situation we're in. So I think that's a nice place to start. And let's get into it. Oops, where am I? Back here.
So people ask, what is machine learning? There are so many definitions but I'm going to go back to the source. Arthur Samuel is the one who's credited with actually saying what machine learning is. And he talked about it in the context of artificial intelligence.
This is a long time ago, back in '59. And again, like the chess-playing Turk, people were writing computer programmes. There's a listing of a computer programme.
I'm not going to say what a computer programme is, and what an algorithm is, because that would be just a bit too offensive. But look-- this is a game of checkers or draughts. And basically, this is the kind of game that's sufficiently complicated for you to develop quite smart strategies about how you do it.
And so what Arthur did was he really developed the first self-learning playing machine, so it really had the first ability to actually learn things. And so this is really the archetype, I guess, of the kind of machine learning that is starting to be smart. But ultimately, of course, any machine is dumb.
Oh, well, I guess it's all over. Well, you can't blame me for running over time, then.
The thing about this is, ultimately machines are dumb, and people are smart. So never, ever forget that, people. What's in here is smarter.
So basically what they do is that machines use brute force to play games. All they do is they develop or look ahead all the moves that might possibly happen. Tic-Tac-Toe is an easy one, because you only have to look ahead two moves and you're guaranteed a draw. You'll never be beaten.
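The brute-force look-ahead just described can be sketched as a minimax search over the Tic-Tac-Toe game tree (an illustrative example, not code from the talk):

```python
# Minimax over the full Tic-Tac-Toe game tree: each player looks ahead
# through every possible continuation. With perfect play by both sides,
# the game is always a draw (score 0).

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable score for 'X': +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0                       # board full: draw
    scores = []
    for i in moves:
        board[i] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = ' '                 # undo the move
    return max(scores) if player == 'X' else min(scores)

# From an empty board, perfect play ends in a draw.
print(minimax([' '] * 9, 'X'))         # 0
```

The same idea scales badly: for Go, the tree is far too large to enumerate, which is why AlphaGo needed learned evaluation rather than raw search.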
Now, the more complicated the game, of course, the more complicated this tree gets. And it gets enormously big-- bigger than the number of atoms in the universe, or something crazy like that. Tic-Tac-Toe is the simplest game; Go, the Chinese game, is the most difficult one. In 2016, AlphaGo-- from DeepMind, the Google-owned AI company-- beat the human champion of the time.
That was 2016. It used a lot of data from players, from human players, and that was great. It used the rules of the game, but had lots of data on how people play the game.
Then in 2017, it was beaten by another version of this reinforcement learning machine, which they called AlphaGo Zero. And the reason they called it Zero was because it doesn't need data-- zero data.
And that's a big thing. What did it use? It used rules. It used some reinforcement learning concepts to work out what's a good branch to go down, what's a good one like that. And what it did was it just played itself.
Playing itself, it got better. Humans can't-- we can't play ourselves, because we're too egotistical. The machine can. And it learned in days to even beat the AlphaGo that was beating all the champions. So please bear that in mind when we talk about this.
OK, so how do machines learn? Basically, a machine learns by changing the way it processes data based on past inputs. That's basically how a machine learns.
And so what does it mean to learn? It simply means-- well, it actually has to have a few rules, a minimum set of rules. It has to have a goal.
And basically, it needs a humongous amount of input history. Now, I'm not saying the input history comes from data, because the input history might come from self-playing. So we distinguish supervised and unsupervised learning simply by whether we spoon-feed it.
Supervised learning in a machine is spoon-feeding inputs, creating a regular history to try and control the bias that we hear about. All machines get biased, depending on what you feed them. If you don't feed them enough data across the full range, they're going to be biased, right?
Unsupervised learning is just let it consume any data that comes its way. Reinforcement learning is a version, a type of supervised learning, in which the input you're giving it is a reward. So you can give it a punishment, or you can reward it. It's a little bit anthropomorphic talking like that.
But I really recommend that if you want to look at the power, I think reinforcement learning is a new thing in terms of giving these power. It's the key to AlphaGo Zero. And I really recommend that you-- a lot of you have probably already seen this, about how you can get machines to learn how to walk. And it's extremely funny. And I'm not going to do it now, because it'll eat up all my time here. But I thoroughly recommend that.
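The reward-and-punishment loop of reinforcement learning can be sketched with a toy example (invented for illustration, not from the talk): tabular Q-learning on a five-cell corridor, where the agent is rewarded only at the rightmost cell and learns from that signal alone to always move right.

```python
# Toy reinforcement learning: tabular Q-learning on a five-cell corridor.
# The only feedback is a reward of 1.0 for reaching the goal cell; from
# that alone the agent learns the policy "always move right".
import random

random.seed(0)
n_states, actions = 5, [+1, -1]        # move right / move left
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Explore occasionally, otherwise act greedily on current Q values.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0   # reward only at the goal
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every state.
policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)]
print(policy)                          # [1, 1, 1, 1]
```

AlphaGo Zero uses the same underlying idea, with a deep neural network in place of the lookup table and self-play providing the "input history".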
So machines can mimic humans. We've heard about digital humans. They're not all wandering around, but some of them are. Here's me talking to my psychotherapist. That's a weird word.
And when I was preparing this talk, I just wanted to play around with one of these little chatty things, that I was saying to it. "I'm a bit nervous," and it responds, "Why do you say you're a bit nervous?" Well, I say, "Public speaking makes me nervous."
And then it gives me something that I can't quite figure out. Sorry. So this is basically an engine that was developed-- the doctor here is based on ELIZA, the MIT engine that was developed back in the '60s. But essentially, it's the basis of all these. People have called them chatterbots. I don't like that; I prefer chatbots. So this is the basis of chatbots.
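The ELIZA-style behaviour described here can be sketched in a few lines (a hypothetical reconstruction, not the original program): the bot has no understanding at all, it just reflects the user's own words back via pattern-matching rules.

```python
# A minimal ELIZA-style chatbot: regular-expression rules that turn the
# user's statement back into a question, with a fallback when nothing
# matches. No understanding is involved, only pattern matching.
import re

RULES = [
    (r"i'?m (.*)", "Why do you say you're {0}?"),
    (r"(.*) makes me (.*)", "What is it about {0} that makes you {1}?"),
    (r".*", "Please tell me more."),   # fallback rule
]

def reply(text):
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text.lower().strip('.!?'))
        if m:
            return template.format(*m.groups())

print(reply("I'm a bit nervous"))
# Why do you say you're a bit nervous?
print(reply("Public speaking makes me nervous"))
# What is it about public speaking that makes you nervous?
```

Modern chatbots add statistical language models on top, but this reflect-the-input trick is still recognisably the ancestor.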
And one of the things that people try to do is they try to say, is this a human? Does this conversation pass for a human? We play Spot the Human. It's Turing's test, from 1950.
And looking at this, please tell me, who is the human and who's the machine? I wouldn't give it a very good score. I don't know if you can read it. For those who can read it, what do they reckon about the responses from this thing? Do they look OK? Do they--
AUDIENCE: Apart from the bottom one.
Yeah, it's pretty good until then. Then it lost me. I was lost. So that's that. But now, we move on to some scary stuff. That's annoying, but we can deal with it. This I find scary.
On May the 9th, Google-- they have Assistants-- these people have got so much money and data, they lead the world. Sorry, people. They are leading the world.
And they come up with this thing-- Google Duplex, which is part of the Google Assist thing. On May the 9th, they published this. Go watch the video. I'd love to play it now, but I can't.
But basically, the human is over there. They look awfully machine-ised, don't they? This is actually-- it is a chatbot. Let's get real. It's just a very sophisticated chatbot. It makes appointments for you.
Google did it, because they know we spend a lot of time making appointments. I'm going to get the code. I'm going to change it, make it go to my meetings. How's that? Yes.
So the CEO explained how he did it, or how they did it. They brought together-- they spent a lot of time and money on this. They brought together all these amazing things.
They didn't just say natural language processing, which is all we claim-- they said natural language understanding. And then they also said something like "deep learning". That's an artificial neural network tool that is really universal-- it can do anything that any other machine learning approach can do in that space-- plus text-to-speech. So those are the things that they brought together there.
So after seeing that, there's a bit of skepticism in the system. People are still figuring out whether it's for real or not, I'll say that much.
But for those of us who believe it, have faith because there are things machines can't do yet and are unlikely to do for quite a while. So you can challenge them and say, match this thing up with whatever's in there. They can't do that yet. They probably could if it was just cats, but OK. I struggle sometimes, but basically, that's the message there.
I challenged them with our whakatauākī. It was pathetic. "Nō reira"-- "Therefore"-- and some of "E ai ki te whakatauākī" they got right. But they didn't get the meat of it: "Rukuhia te wāhi ngaro, hei maunga tātai whetū". They translated that as "move the space disappear, as astrology". No, no, no. So don't worry. We can all go onto Amazon and be paid as the person in the Turk, because they're going to need help with this. This is not good enough.
So I'm coming to a close here. How's my time?
AUDIENCE: It's a bit over.
It's over. Oh, go read that. [LAUGHTER] Callaghan Innovation does that. That's their outlook for all this stuff. And yes, it's there-- it's newly minted. And I've run out of time completely, but there you go. That's what we need to do.
I'm done. Thank you very much.
Chatbots helping school kids
Matthew Bartlett, Co-founder of Citizen AI (a subsidiary of Community Law Wellington), talks about developing a chatbot to help kids and their parents with school problems, and how Citizen AI is using that experience to develop a suite of other legal chatbots.
> Transcript of Emerging Tech: Chatbots that help kids
As quickly as I can. That's good. As Nadia mentioned, I'm with Citizen AI. I'm one of the co-founders, and Citizen AI is a wholly owned subsidiary of Community Law Wellington. And I just want to talk about one project that we've done and a couple that we're doing that may spark some ideas in the service delivery space. There's quite a lot of overlap with some of the stuff we've already talked about, which is helpful for the speed of it.
So the project is called Wagbot, and it's a chatbot that helps school students and their parents with problems at New Zealand schools. It's accessible on Facebook Messenger, which means that if you're one of the 70% of New Zealanders who have a Facebook account, you can get to it live 24/7, either by searching Facebook for Wagbot or by visiting m.me/wagbot. The kinds of things it can answer questions about-- this one. You probably can't read it at the back. Can schools charge fees, or do I have to pay the school donation, or whatever?
So you ask it that, and you get an accurate, authoritative, hopefully very short answer back from it. Where you ask it questions that go outside of its domain-- for instance, I've got no friends at school-- it'll try to make an appropriate referral. So in this case, it'll say, sorry about that. I'm not very good at helping with that sort of thing, but you can talk to Youthline. And you can click the button and give Youthline's 0800 number a call right from your phone, or speak to their Facebook Messenger service.
So it's kind of baby AI, really. It's just using some off-the-shelf Google tech to match up all the different ways that people might ask questions with its knowledge base, so it's not very advanced. We were just interested to see what this technology can do-- a kind of low-hanging fruit. Let's see if we can do something useful with it. That's the motivation there.
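Wagbot uses off-the-shelf intent-matching tech; a hypothetical, much cruder version of the same idea is sketched below (the knowledge-base entries and threshold are invented for illustration): match a question to the closest knowledge-base entry by word overlap, and fall back to a human referral when nothing is close enough.

```python
# Crude intent matching: pick the knowledge-base entry whose question
# shares the most words with the user's question (Jaccard similarity),
# falling back to a referral when nothing matches well enough.
import re

KNOWLEDGE_BASE = {
    "can schools charge fees": "State schools can ask for donations, "
                               "but they can't make you pay fees.",
    "can teachers search my bag": "Only in limited circumstances...",
}

def words(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def answer(question, threshold=0.3):
    def score(entry):
        q, e = words(question), words(entry)
        return len(q & e) / len(q | e)     # Jaccard similarity
    best = max(KNOWLEDGE_BASE, key=score)
    if score(best) < threshold:
        return "Sorry, I don't know. Why don't you talk to a human?"
    return KNOWLEDGE_BASE[best]

print(answer("do schools charge fees?"))   # the fees/donations entry
```

Production systems use trained language models rather than word overlap, but the structure, intents plus a human fallback, is the same.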
Now, if it-- if it doesn't know what to answer, it's backed up by humans. So if you ask it something-- what school gets the most NCEA scholarships? I think it's Wellington College. I'm not sure about that.
But it'll say, sorry, I don't know. Why don't you talk to a human? And then you can ring the Student Rights line.
And that's-- the Student Rights line is-- can I click it? Yeah. Bring up the phone.
The Student Rights line is staffed by Victoria University law students, and it's been running for about 25-odd years, maybe 30. And that comes to the question of what feeds it. Strangely enough, Community Law Wellington has a kind of in-house publishing house that produces these, to my mind, very impressive resources. There's the Community Law Manual-- 900 pages of every legal question you might want answered, updated once a year. The next one's coming out in about a month's time.
Problems at School-- similar, but for school issues particularly. So 25-odd years of answering questions on this phone line has gone into producing this book, and that's the kind of secret sauce. Do you want to pass them around? They might be of vague interest.
So all those phone records have gone into the book, and then the book has formed the core of the knowledge base for the chatbot. And of course, the content of the books is also available for free online. We launched Wagbot with that content from the Problems at School book, but as real people started to use it, you start seeing the gaps where the thing doesn't know what to say. So we hired a writer-researcher, and they would look at the law, look at relevant government websites, whatever, and write new content.
And actually, this is a really key benefit we found of the chatbot model: unlike with a website, where you can see which are the most popular pages but still have to do quite a lot of guesswork about what people actually want to do, with the chatbot format you can see exactly what they want. And it leads to interesting stuff-- like reinforcing some of my gender prejudices. When real students started talking with Wagbot, a lot of the time they wanted to talk about sex. The girls are mostly saying, I have a crush on a boy, what do I do about it? The boys are mostly saying, can I have sex with my teacher, can I have sex in the bathroom, that sort of thing, and many other things that I can't possibly mention in this context.
So we wondered a little bit how we were going to respond to this, and the first thing we did was give legally correct answers to that, as far as we can. But we also got in touch with Youthline, and they have quite a useful sex and consent quiz. We talked to them and asked if we could use it, and so we did a chatbot version of that. It presents various tricky scenarios and quizzes the kids-- or parents, if the parents are using it-- and just helps give them a little bit more information about consent and what's appropriate and that sort of stuff. So it's a fun example of being able to react to where people really are.
So Wagbot, alas, is presently on hold while we negotiate with the Ministry of Education to see if they might like to fund its ongoing development. But happily, Community Law Wellington saw the potential of this technology to advance its mission of access to justice, and the very nice people at the Michael and Suzanne Borrin Foundation are funding us to do a series of chatbots over the next three years. The first one is tenancies, then prison-related law, and then employment-related law.
So we're presently knee-deep in Rentbot: researching every possible question we can think of about tenancy law from all sorts of different points of view, and having lots and lots of meetings with different agencies and people to understand things like-- for instance, from interactions this week with people in the sexual violence space-- if my partner has been violent towards me and we're both on the tenancy agreement, how do I get out of that without having to talk with them again? That sort of stuff. The software side of it is just about there, and it's just the writing and getting all bases covered before we can release it to the world.
So as I said, Citizen AI is a subsidiary of Community Law Wellington. It's a charitable company, and my personal motivation for kicking this thing off-- I'm one of the co-founders-- is the sense that's come out from today that AI is the next big thing, or the current big thing, and I'd really like to see it being used for some interesting stuff. One of my favourite words is this one: citizen. I see it as in opposition to consumer. For me, it denotes ideas of equality before the law, membership in a web of rights and responsibilities, and a sense of the individual's active participation in the business of society. So I want to point these technologies at that sort of thing.
We're really interested in collaborations, and if you have ideas for mission-aligned projects, we'd really love to talk about them. We've got our own ideas, and we're very interested to hear yours. For example, it'd be nice if you could take a photo of a hire purchase contract and have a quick thing that spits back at you: this is how much it's going to cost you, here's an unusual clause, this is going to get you, watch out, that sort of thing. It might be nice to have a digital assistant that helps you write a complaint letter to a utility company, guides you through the process, spits out a PDF and tells you where to email it, that sort of thing.
We're getting a little bit deeper, and we have one project in the works that is analysing decisions from tribunals, seeing if we can pick out patterns-- what are the facts that are likely to win, is it worth taking this case and going through the effort, that sort of stuff. But we try and do things in an open-sourcey way when we can. We have one project with a government agency at the moment, in the health and addiction space, to see if this chatbotty methodology might be useful there, which I'm not allowed to talk about yet, unfortunately.
But where we can, we do the open source thing. You can follow our progress there or chat with me later. Thank you very much.
Need for a connected government response
> Transcript of Emerging Tech: is government doing enough to prepare for AI?
My name's Rosemary McGrath. I'm the chief architect at Statistics New Zealand. I'm going to take a little bit of a "governmenty" edge to this. I hope it'll be still interesting for those of you who aren't in the public service. It's really around is government doing enough to support where we need to go.
And I think we've heard a lot this morning about where we might need to go. So let's have a think. I just wanted to point you to some key references, all of which get longer. You heard how long the AI Forum report is.
The House of Lords report is 186 pages. And the AI for Good Global Summit has a series of webcasts that will last you hours. So, yeah.
I guess the thing about all of these is that they contain the basis for a lot of thinking that we need to do and a lot of conversations we need to have. And actually, it is quite fun, depending on the kind of person you are. But I find it fun to go through and think about what some of these things mean. So what you're going to hear as I talk is a few of the things I've thought about as I've gone through these things.
Let's-- OK, so the question was, is government doing enough? Enough what?
So enough enabling? Enough funding? Enough regulating?
Enough observing? Enough partnering? Enough guiding? It's probably doing all of that, in every agency-- and there's one of the problems.
So it was really interesting when Minister Curran gave her talk at the AI Forum's AI-Day. She talked a lot. She gave lots and lots and lots of examples of what government is doing.
But her big concern was, it's all a bit disconnected. And I think it would be fair to say that is true. And it really is interesting where people are looking, what people are looking at.
But what we struggle with is getting that visible. We talk about open and transparent government. It's quite hard when we're not that open and transparent with ourselves.
So I think it's worth keeping this in mind that we're perceived as being disconnected. People see that now, and AI is one area where we need to start looking connected, that we're actually trying to do something together. So I thought, well, how shall I talk about all these things and what I've been wondering about after reading some of those articles and lots of other stuff.
And I thought I'll start with the value chain. It's nice and published. Lots of people can have access to it.
So I just wanted to read one quote that I quite like that came out of that AI for Good Global Summit. It was from Professor Kentaro Toyama from the University of Michigan. And it basically says: technology, no matter how well designed, is only a magnifier of human intent and capacity. Therefore, we determine where this goes-- only us.
So what I wanted to do was quickly kind of work through this value chain, and I'll go really, really, fast. So trust and social licence-- what I really like, particularly in New Zealand, for people to think about is also cultural licence. So there's quite a lot around how and where people use data. There's actually quite a lot that we need to understand about where different communities-- different iwi, even-- feel about where data is used, how data is used about them.
And we need to talk to people about how we want to use data. That is how you build trust and social licence and cultural licence. I thought the other really important thing is that there's quite a lot of research available on how New Zealand citizens feel.
And mostly, they feel OK as long as it's for public good. What they don't feel so great about is if you think it's for their good. So public good is quite a broad concept that we need to really get our heads around.
I mean, just last week, Pia, Nadia and I hosted a wee event here in Wellington around futures. And one of the key themes that came out-- and it really surprised me-- was that we need to keep our eyes on the fact that we actually retain humanity in all of this. So it's really important. You know, go and get your design thinking books and put people at the centre of what we're doing.
It's easy to think about AI in terms of what it will save us. You know, get more efficient, more effective. For whom?
This side of this thing, which I think is a little bit narrow in its focus-- it could have environment, could have Te Ao Māori, could have lots of other things over that side. This is the most important thing. But who's deciding what those outcomes are, and who are they for?
And I think that this trust and social licence will never happen if you don't engage the people it's supposed to be with. I don't think government is doing enough in this regard. OK, the regulatory frameworks-- just really quickly.
I think most people are, because there is really nothing else, moving towards the privacy regulations. And you'll also see things like GDPR having an influence in this country. So there is work.
But I was at a forum the other week and somebody said, is regulation and legislation what's needed in the world today for this particular type of work? Can we keep up? Look how long legislation takes.
Look what it takes to change legislation. Look what it takes to employ regulation. Is it the right thing?
Do we need sanctions? Horrible word-- don't know. How much notice did Facebook take of our Privacy Commissioner? How concerned were they about not abiding by New Zealand law?
And I'm going to skip pretty much right to the end now, in the interests of time. I'm happy to chat about any of this any other time if you want. So I think the thing about accessible data is really interesting, and should be quite a concern for government, for two reasons.
One is that government does hold a lot of data, but a lot of people think it's deficit data: data collected because people had to come and liaise with government, often in not the best of circumstances. So we talk a lot about opening it up, but are we doing enough to go out and find the alternatives to that deficit data?
Are we really? I don't think so-- I don't know that we even know where it is. I think the other core thing is that government could be seen as a bit of a monopoly in this space. So I think it's really important that, on all those principles around open data, we're seen to be really on the front foot, because we hold a lot.
There is quite a lot that people have to go through at the moment if we decide not to make data open. Researchers have to be prepared to state the outcomes that they're looking to achieve right at the beginning. It's not that free and easy.
I guess what I really wanted people to walk away with is that I didn't give you any answers. All I gave you was more questions. But I do not think that government is doing enough.
I do not think that we're joined up. I do not think that we talk enough. Thank you.
Legislation as code
> Transcript of Emerging Tech: Legislation as code
First of all, the reason we're dealing with this and exploring this as a Service Innovation Lab is, of course, that if you want to build integrated services, you need to actually integrate the rules. We very quickly figured out that there were only three ways to do this. The first one is to re-hardcode those rules into yet another system. That's not sustainable. It's workable, but it's not sustainable.
The second one is you take all the business rules engines out there that agencies already have and try to create a translation layer. Nightmarish, of immense proportions, but doable. So the third option that we're exploring-- and it is an exploration, and it's still hypothetical-- is if legislation was available as code in the first instance. And we're not talking about all legislation. We're not talking about the justice system. We're not talking about case law. We're talking about prescriptive rules that inform services-- eligibility criteria, calculations, those kinds of things, potentially even obligations around certain areas of compliance, such as AML/CFT.
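A hypothetical sketch of what one of those prescriptive rules could look like as code: an eligibility test published once, as executable code, instead of being re-interpreted and re-hardcoded by every agency. The rule itself (the ages and thresholds) is invented for illustration and is not a real statutory test.

```python
# "Legislation as code": one shared, testable encoding of an eligibility
# rule that any agency's service can call and get the same answer from.
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    annual_income: float
    is_resident: bool

def eligible_for_benefit(a: Applicant) -> bool:
    """A single authoritative interpretation of the eligibility criteria
    (illustrative thresholds, not a real statutory test)."""
    return a.is_resident and a.age >= 18 and a.annual_income < 30_000

print(eligible_for_benefit(Applicant(age=25, annual_income=20_000, is_resident=True)))   # True
print(eligible_for_benefit(Applicant(age=17, annual_income=20_000, is_resident=True)))   # False
```

Because every consumer calls the same function, there is no room for each agency's machine to interpret the human-readable text subtly differently.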
There are certain areas where this seems to work, and the full Better Rules report-- and the little discovery we did there-- is worth looking at for the nuance about what we think it does or doesn't apply to. But as a service delivery entity, we basically identified that if the rules from legislation were available as code, then you can take those rules, supplement them with your additional agency policy rules, supplement them with any other observed practice rules, and then you actually get rules that can be applied and integrated for services that span agencies. There are so many rules that go across many acts, across many jurisdictions. And when you start thinking about international trade, international IP, regulations-- the huge number of rules-- it is more critical than ever, in our mind, not just for doing service delivery in an integrated way but also for understanding the complexity of the world and its impact on you, that those rules are pragmatically available in a consistent format that isn't up to your interpretation as an organisation. So from our perspective, that's why we're exploring legislation as code, which entirely flips the paradigm. Everyone starts from: I'll just make it really human readable, and then my machine can interpret it. But then my machine, your machine, your machine, your machine are all going to interpret it subtly differently.
And none of that is good for the end user. None of it's good for citizens, none of it's good for democracy. So how we came to thinking about AI is two reasons.
We started looking at the future of service delivery and realised very quickly that better websites and apps is not where service delivery is going. Even in the short term, there's the idea of a personal AI-- you know, we're already starting to see those uses with Siri, Google and Amazon Echo. But a personal AI that is actually open source and tethered to me and my interests-- not provided by Google or government, but something that can interface with government, with organisations, with companies, with family, with all the players that actually play a role in my well-being and my family's well-being-- is a whole different paradigm for service delivery. It means that I can have a personalised service that could be voice-only. Could be high resolution. Could be a hologram. Whatever my personal preference for accessibility or interaction is, my personal AI can give me that form of interaction. And then suddenly the whole world becomes accessible to everyone under the terms that they consider important to them. So that gets very interesting.
So that's the journey we've come from, and that's why we're looking at this. I'll quickly cover two other concepts. We've seen AI combined with virtual reality to get demographically representative public consultation and public development of legislation in Taiwan in real time -- a thing called Holopolis. We highly recommend it, and we'll put the link out on Twitter. Holopolis is a fascinating use of AI for democratic engagement and live legislative drafting, which was quite amazing to see.
The concept of government as a platform is a huge part of this. If government, as a huge influencer, is able to make its rules, its data, its code and its data models more transparent, then you can actually integrate them with other things. On the data deficit we were just talking about, we've recently come across the work of Mike Taitoko and his colleagues, who are working with four iwi so that they have sovereignty over their own data. They can then use their own data to help inform and model policy, and feed that back into government policy development. That starts to explore a different model of collaboration and co-design into the future.
The final thing I'll quickly say is that I've come from Australia, in case you hadn't picked up the accent. There is a recent cautionary tale there called robo-debt. If you haven't seen it, Google it -- it will horrify you: the use of basic automation and machine learning to tell a whole bunch of people who had received social benefits that they owed the government money. It turned out to be 80% inaccurate, so it just created a huge amount of stress and pain for many of the most vulnerable people in Australia. To me, that's the cautionary tale.
Internationally, we know most governments are watching that case with great interest and trying to make sure they do everything they can to avoid the same mistake. Here in New Zealand, let's not do that.
It's not just about good service delivery or transparency or accountability. It's about human rights. It's about not just social licence but the social contract. And finally, I think that if we want to do this properly, then, going back to Rosemary's point, it's not just about how we do things.
It's about how we collectively do things, and how we co-design the future we want rather than just adopting and iterating on the present we have. Thank you all very much for your time. This will be available later, and we look forward to continuing the conversation.
If you'd like to stay across the work from the Service Innovation Lab, please join our mailing list.