It's not up and running, it's not a final design, it's not a fait accompli. But it's part of the problem that we've identified around measurement when it comes to service design and delivery in an all-of-government context. So I'm going to talk to you about a couple of things: the problem that we're seeing, the hypothesis that we have to try to address that problem, the approach that we're considering taking, and a bit of a proof of concept around that problem. So the first part is the problem. What we tend to see right now is that an individual agency will have, usually spread across a series of teams, some visibility into the various channels of service delivery. Some of these will be web, some will be application-based transactional services, some will be help desk, telephony, even social media. There are many, many channels.
One team might have access to one of these. In some agencies there are some excellent omnichannel-- to use the current buzzword-- views of the service analytics. But in most agencies, what we're seeing is a huge amount of investment and time going into data analytics, where they're looking at how they can use their administrative data in better ways.
But there's not actually a lot of attention, apart from a few small pockets, on service analytics-- on actually pulling in the real-time data. And that's where service analytics gets really interesting. Because data analytics tends to run through administrative systems. There are some agencies-- and just a quick call out to what MSD are doing, the operationalisation of their admin analytics, which is amazing.
There are a few that are doing that sort of real-time use of their administrative data, but for a lot of people the use of administrative data is not real time. It's updated every month, every three months, in some cases every year. So what you get is wonderful historical analysis and, to some degree, some predictive work.
But in the applied use of it, you also get a lot of issues around normative approaches to data when forming policy going forward, which is a bit worrying. What service analytics gives you is still necessarily retrospective-- and we all need to be aware, when we're thinking about measurement, that the fact that you've captured data means it's in the past, which means it's not necessarily where things are going-- but it's real time.
And where it can actually inform service design and delivery is in a couple of key ways. First of all, it shows you behaviour right now. It gives you opportunities-- similar to what Kay was talking about-- to, if not intervene, then to direct people according to the behaviour that they have. I'll come to an example of that in a second.
You have opportunities when, say, we suddenly see a spike of people from a particular area. And we don't actually want to know who people are, just to be clear. We are not interested in knowing who the person is. We want to understand the behavioural trends, the user journeys, and what that means for those broader trends across the system.
But if you can see an upsurge of people from, let's say, a particular area, broadly making requests across the entire government domain around, for instance, drug dependency-- being able to feed that trend through to the front-line service delivery people, to say, look, you might see an increase in this, here's some additional service information around drug dependency for your area-- that would be really quite powerful.
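As a rough illustration of the kind of trend detection this would rely on-- a sketch only, with invented window and threshold parameters, not a description of any system that exists-- you could flag a topic for a region when its daily request count jumps well above its recent average:

```python
from collections import deque

def spike_detector(window=28, threshold=2.0):
    """Build a checker that flags a day's request count as a spike when it
    exceeds `threshold` times the rolling mean of the previous `window` days.
    Works on de-identified aggregate counts only-- no individual data."""
    history = deque(maxlen=window)

    def check(count):
        # No spike can be declared until we have at least one day of history.
        spike = bool(history) and count > threshold * (sum(history) / len(history))
        history.append(count)
        return spike

    return check
```

One detector per (region, topic) pair would be enough to generate the "you might see an increase in this" alerts for the front line described above.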
It's about taking all of the intelligence that comes out of these systems-- there's a lot of untapped potential in the data that sits behind those service analytics systems-- and actually using it, both for interventions, which don't have to be hard interventions, and to inform the front line, or inform service delivery more broadly.
And, of course, to continuously improve our services on an ongoing basis: in some cases using that data to retire services, in some cases using it to identify gaps in the service and create new services. The other part is getting that all-of-government view of a user's journey, pain points, behaviours, and where they're going.
Because if you have a whole bunch of websites-- and they will be across multiple domains, multiple agencies, and multiple sectors-- and the person going through that journey goes back out to Google for a search, comes back in, goes back out, comes back in, and ends up at a contact page somewhere, they're probably having a pretty bad day.
However, we can probably tell that 80% of people who go to that page, or who look at that type of content, are also interested in this other type of content or that other page. Why wouldn't we use that to start to automate and prompt people-- Amazon-style "did you also mean" recommendation options? And again, we don't need to know who people are. I think we have this habit in government of saying, as soon as I know who you are, I'll figure out what you need and then I'll tell you.
Now if we look at this trend of moving towards user-centred service design-- putting the users at the centre of the design-- I think we've taken a little too literally the idea that we're going to figure out what the user needs and just do everything to the user that we think they need. Because the problem there is that it depends on my agency, my mandate, my view of the world as to what I'm giving them from what they need. It will always be a subset of what they need that my agency has to provide to them.
So how do I actually look at all the needs of the user and then redirect them accordingly? Imagine if we could make available, across the whole of government, to all of our front lines-- whether it's a help desk for immigration, a help desk for DIA citizenship, or a help desk for MSD, someone just calling up any of our front-line help desks-- the same access to information about services, the same information about the business rules, the same sort of logic flows to be able to direct people. So that rather than being 14 hops, it might turn into one hop.
Anyway, I'm going slightly sideways. So the problem we have is that, often enough, we don't have the ability to understand across even one department-- across multiple channels, across multiple services-- let alone across all of government. And what that means is that there are some perverse incentives starting to emerge.
So I have a quite understandable and natural incentive to reduce my cost of service delivery and to improve the experience that my users have. But if that creates a cost imposed on another agency, I have no motivation to worry about that, to think about that, to take responsibility for that. There's an incentive to say, well, if someone's not coming to me for my service, then I'm sorry, it's not with us.
And of course, a lot of our front line try to make up for that by saying, well, I think you can go to them, or I think you can go to them. We end up using our front line as a bit of a manual switching service. And it's not all that efficient or all that effective, which is why we end up with a whole bunch of service integration actors in the non-profit and for-profit sectors trying to fill that gap, because we don't necessarily do it all ourselves.
So there's also, I guess, just finally, the opportunity or the problem around what success looks like. And again, service analytics kind of helps with this. So it's not just about the analytics and the user journey. It's also about the systems and services themselves.
We did a bit of a cheeky thing in Australia in the Digital Transformation Office, where we set up a public-facing view of the top 100 websites by volume. We could actually rank them by volume and by cost, just for fun. And then we just started doing ping tests. A ping test is just a hello, are you alive, and the system coming back and saying, yes, I am.
And you can run a ping test as often as you like. It just tells you whether the thing is up, and how long it takes to respond. That's all it gives you. But tracking that regularly-- I think we were doing it every minute, or every 30 seconds-- we got some pretty accurate uptime and latency figures for some of those major services.
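To give a feel for how little machinery this takes-- a minimal sketch, not the DTO's actual tooling, with a hypothetical URL and an arbitrary probe interval-- a single probe and an uptime summary might look like:

```python
import statistics
import time
import urllib.request

def probe(url, timeout=10):
    """One 'hello, are you alive?' check: returns (is_up, latency_seconds)."""
    start = time.monotonic()
    try:
        req = urllib.request.Request(url, method="HEAD")
        urllib.request.urlopen(req, timeout=timeout)
        return True, time.monotonic() - start
    except Exception:
        return False, None

def summarise(probes):
    """Turn a list of (is_up, latency) results into uptime and latency figures."""
    up_latencies = [lat for ok, lat in probes if ok]
    return {
        "uptime_pct": 100.0 * len(up_latencies) / len(probes) if probes else 0.0,
        "median_latency_s": statistics.median(up_latencies) if up_latencies else None,
    }

# Usage sketch: probe every 30 seconds, then summarise.
# results = []
# for _ in range(100):
#     results.append(probe("https://example.govt.nz"))  # hypothetical URL
#     time.sleep(30)
# print(summarise(results))
```

Run against a list of the top 100 sites, that summary is all you need to produce the uptime and latency rankings described here.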
We actually found, in some cases, some government services-- and they weren't human services in all those cases, it might be an API, it might be a website-- that would actually switch off at 5:30 PM on a Friday and come back up at 9 o'clock on a Monday. That was interesting.
We found that the most expensive services or websites didn't necessarily correlate with being the most available, the most reliable, the most effective. There were interesting things you can get from service analytics, not just about your users, but about the systems and platforms themselves. And the whole point of this is not to embarrass anyone. It's to actually identify and prioritise funding, user needs, and service improvements over time, and improve the whole system.
OK, so our hypothesis is: if we could actually pull intelligence out of all of government around service analytics, what benefits would we get from that? And we're looking at setting up a little bit of a proof of concept around exactly this topic. Again, we don't want to identify people through the data. We don't want to do that at all.
What we do want is to be able to see the patterns and the journeys, broadly speaking, in a de-identified way. So your basic inputs for this end up being web analytics, as the first one. Now, a number of agencies use different web analytics tools, whether it's AWStats, or Piwik, or indeed Google Analytics. All of these can be drawn together-- they all do their analytics slightly differently, so you have to take that into account-- but you can get some intelligence.
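One way of taking that into account is to map each tool's export into one common record before it goes anywhere near the analysis layer. The field names below are invented for illustration-- they are not the tools' real export schemas:

```python
def normalise(source, record):
    """Map a page-view record from a given analytics tool into a common shape.
    The per-tool field names here are hypothetical placeholders."""
    if source == "google_analytics":
        return {"url": record["pagePath"], "views": record["pageviews"], "ts": record["date"]}
    if source == "awstats":
        return {"url": record["page"], "views": record["hits"], "ts": record["day"]}
    if source == "piwik":
        return {"url": record["label"], "views": record["nb_visits"], "ts": record["date"]}
    raise ValueError(f"unknown analytics source: {source}")

# e.g. normalise("awstats", {"page": "/passports", "hits": 42, "day": "2017-05-01"})
```

Once everything is in that common shape, the downstream analysis doesn't need to care which agency used which tool.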
In Australia, we brought together the agencies that were already using Google Analytics. We set up a Google Analytics Premium account, so that if they chose, through their own systems, to use Google Analytics and wanted access to the Premium account, they could get it from us. And that is now in the process of being set up in TSSD.
So if you're in government and you've already chosen to use Google Analytics for some of your services, then please chat to me afterwards about getting access to that all-of-government financial arrangement that's being put in place. But at the same time, we highly recommend a whole bunch of the non-Google analytics tools as well.
But you can actually pull those analytics into something. The second source, generally speaking, is your transactional systems-- where you are applying for something, or being paid something, or paying something, or updating something. You don't tend to get good analytics from the web for those; you have to get it out of logs. And then you end up having systems around help desks, of course, telephony, social media.
So, just starting with the web: rather than shoving it all into an analytics system-- where you only get the ability to reuse data within the scope of that particular tool-- our plan is to pull stuff into what is commonly, modernly known as a data lake. That basically means it's in a structured format that's not beholden to a particular vendor or a particular product, which means we can then create an analysis layer on top.
So we've got the data in a structured format that we can switch and swap with whatever we want in the analysis layer. And then we can create dashboards, we can create analysis, we can create user journeys. We can create a personalisation engine, which is my personal favourite thing we want to do as part of this proof of concept, where we will be able to say: pass me a URL, and I'll tell you the five most closely correlated URLs. Just as a simple thing to serve up to our users-- are you also interested in-- to help them bypass all that bouncing around and, hopefully, not end up at a contact page.
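A minimal sketch of that correlation idea, assuming de-identified sessions are already available as lists of visited URLs (that session format is an assumption for illustration, not the actual data lake schema):

```python
from collections import Counter

def related_urls(sessions, url, k=5):
    """Return the k URLs most often co-visited, within the same session,
    with the given URL. Sessions are de-identified lists of page URLs--
    no user identities involved, only behavioural co-occurrence."""
    co_visits = Counter()
    for session in sessions:
        pages = set(session)  # count each page once per session
        if url in pages:
            co_visits.update(pages - {url})
    return [u for u, _ in co_visits.most_common(k)]
```

So "pass me a URL" becomes `related_urls(sessions, "/some-page")`, and the top results can be served straight back as "are you also interested in" suggestions.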
And then what we'll be able to do, hopefully, is tell trends around services. And again, this is all a little bit hypothetical-- though not entirely, because we've done it once before in another country. But the idea is, once we set up the proof of concept, to see what the value of it is and what the agency interest would be. Agencies would have access to their own data and to the full analytics layer, and we would work with them to pull in what is needed, de-identified before we get it.
Because here's where it gets cool, and this is the last thing I'll say. You start to see trends. And then, when a new service is introduced to or removed from the system, what impact did that have across the whole system? That's one of the key things we want to be able to see, and in many cases that's going to be a great validation for excellent services being set up.
It will also create a motivation to create services that genuinely help people, because we'll be able to see whether they do or don't. You can imagine a future where we start to combine the insights-- not the data, but the insights-- between this and the IDI, and potentially other things. But for the moment, this is a proof of concept that we're hoping to set up over the next three or four months. If anyone's interested in coming to play, please let me know. Cheers.