The Artesian Podcast

AI Ethics & Responsible Tech - BoabAI, Dr Catriona Wallace (ArteHouse Innovation Series with Ali & Tim)

Artesian Season 1 Episode 5

This is Episode 2 of the ArteHouse Innovation Series with Ali & Tim.

In this episode, our Portfolio Manager, Ali Clunies-Ross, and Artesian Partner, Tim Heasley, interview Dr Catriona Wallace.

Listen as Catriona breaks down artificial intelligence – what it is, best use cases and lessons for corporate innovators. She also talks about the existential risks posed by AI and the ethics of building responsible, inclusive technology companies.

About Dr Catriona Wallace
Catriona Wallace is the Executive Chair of Boab AI - Artesian's AI Accelerator.  Catriona has been recognised by the Australian Financial Review as the Most Influential Woman in Business & Entrepreneurship.

Based between Australia and the US, she is also the Founder & CEO of Ethical AI Advisory and the Founder of ASX-listed artificial intelligence company Flamingo AI, which exited to BDMN Investments in 2020. Flamingo AI was only the second woman-led (CEO & Chair) business ever to list on the Australian Stock Exchange.

Inducted into the Royal Institution in 2019, in recognition of her excellence in scientific achievement and commitment to science, Catriona is one of the world’s most cited experts and speakers on Artificial Intelligence, Responsible Technology, Ethics & Human Rights and Women in Leadership. She is also a philanthropist, human rights activist and mother of five.

She has received significant awards in technology & innovation, including Advance Australia’s highest award for Australians working abroad. In large part, this success comes directly from her deep experience in data and technology, as well as the human side of transformation. In this interview, we talked about everything you need to know about Artificial Intelligence.

With a dedicated passion for encouraging more women to pursue careers in technology, Catriona is leading by example. Listen as she shares her experience and expertise, and learn more about women in technology.

About Boab AI
Boab AI is a scaleup investment program based in Melbourne, investing in Victorian and international AI companies ready to expand globally.
 
Their mission is to connect an ecosystem of entrepreneurs, government organisations, universities / research institutes, corporations / industry groups and investors working to grow AI opportunities in Victoria and around the world.

AI ventures interested in applying to the Boab AI scaleup program can apply here: https://www.boab.ai/


[00:00:44] Tim: Hello everyone. Today we're lucky enough to have Dr Catriona Wallace join us on the ArteHouse podcast. Catriona has done many things over her storied career, but I know Catriona because she is the executive [00:01:00] chair of Boab AI, Artesian's AI accelerator. And I'm lucky enough to be on the board with her. We work closely on a number of matters concerning Boab.

And we'll talk more about that as we get into it. Welcome, Catriona.

[00:01:14] Cat: Hey, Tim. Great to be here with you.

[00:01:16] Tim: Excellent. 

[00:01:17] Ali: So it's great to meet you, Catriona. We have not yet met physically in person, and this is the first time actually on video as opposed to on the phone. But we've been chit-chatting about a couple of AI companies. So I guess we wanted to dig in today about your journey with AI, which is a bit unusual, I guess, for women in Australia.

So yeah, we're excited to dig in and hear about how you got where you are today, and some of the incredibly impressive achievements that you've had. So I guess, look, we want to start right at the start. How did you initially find out about AI? What sparked your interest? When did you start to get involved in this technology?

[00:01:57] Cat: Let me go back a little bit in [00:02:00] time, Ali, and say that I actually only ever wanted to be a farmer. Then I became a nightclub owner, then a small business owner, and then went into academia. So I did a PhD at the Australian Graduate School of Management, and my area of expertise is in organizational behavior.

But in particular, my thesis was on the role technology plays in substituting for human leaders. I finished this way back in 2007, and there was very little talk about AI then. Even my professors thought it was a bit of a strange thing to study, but I could definitely see that the future would mean humans and technology coming together in a very close relationship.

And so I chose that area for myself to develop some specialty in. And I went on and built a market research company called ACA Research, and a human-centered design firm called Fifth Quadrant. And then I decided it was time, which is actually quite a [00:03:00] funny story. I'd been building professional services companies.

I went, oh, professional services companies, there's so many humans involved. Like, I really need to do something I can do at scale, where it takes care of itself. So I'll build a technology company. And that's when I launched Flamingo AI in 2014. But ironically, I found building a software company has a much greater human resource need than anything else I've ever built. So that was quite interesting, but it was in 2014 when I founded Flamingo AI, which was one of Australia's first AI startups.

[00:03:32] Ali: Okay. Fantastic.

[00:03:34] Tim: That's great to know, but I actually haven't got past nightclub owner, so I think we need to go straight back.

[00:03:42] Ali: Is this you as a child, or are these things you wanted to do as an adult?

[00:03:46] Cat: No, no, I've jammed about three people's lifetimes into my 56 years. So as you know, we owned a nightclub called Cherry Jam, which was down at Double Bay. Rene [00:04:00] Rivkin used to own it as The Embassy, so most people can now identify where it was. And we owned that for about four to five years. We had our day jobs, and then there were four of us who ran the nightclub at night.

So it was an extraordinary experience. We could have about 500 people in the venue. But because I was actually a police officer before I became a nightclub owner, if the police... oh heck, I forgot to mention that.

[00:04:25] Ali: Pharma to night club owner to small business owner. I got, these were all things you wanted to do as a

[00:04:30] Cat: No, no. Actually, I forgot to say I was a police officer. I was a cop from 19 to 23 in the New South Wales Police Force. Did you not know that, Tim? Did you not know that? Yeah, so I was a cop. And so what I was going to say is...

[00:04:46] Ali: that is not on your LinkedIn.

[00:04:48] Cat: It was super useful owning the nightclub after being a police officer, because if they were going to do a raid on the club, they'd politely ring ahead and tell me.

And so we could organize things to be legit for [00:05:00] the moment when the cops busted into the place. So yeah, Tim, I'm sorry I've kept that one a bit secret like that.

[00:05:12] Tim: Oh, wow. This is just getting a hell of a lot more interesting. 

[00:05:16] Ali: I guess I can see how the journey from small business owner went into AI, from trying to automate some of the activities that that small business had. And I guess you were quite entrepreneurial with the nightclub, but what got you into business in general at the start? You had a farm, then you were a police officer, then the nightclub, then the small business. Is that the right order?

[00:05:41] Cat: It all sort of mixes together, but yeah, pretty much that. Look, I come from a family of successful entrepreneurs and successful academics. So I was brought up with a very entrepreneurial father and a very supportive, incredibly intelligent mother. There were four of us kids, older brother, younger brother, younger [00:06:00] sister, and we were all brought up to really do two things.

One, to know that we'd probably build our own businesses. But then two, to know that as middle-class white Australians, we absolutely had an obligation to be of service to the community. So additionally, I've also founded a number of philanthropic funds with Sydney Community Foundation and Sydney Women's Fund. So I come from a family that is about being self-determined and autonomous, building businesses, but then using the profit from that to fund social causes.

[00:06:29] Ali: Amazing. and so I guess when going back a little bit, I don't want to go too far forward, but, um, you know, going back a little bit, . You identified this problem with automation. Why did you think AI was the right solution? I mean, with software these days, there are other ways that you can automate beyond AI.

And this probably is going to go into the second question, which is: what actually is AI? I think a lot of people talk about artificial intelligence and machine learning, but they don't exactly know what that is compared to [00:07:00] sort of traditional software automation. So I guess my two questions are: why did you think AI was the solution, and then, what is AI?

[00:07:08] Cat: Right. Well, the small business I had was actually a management consulting firm doing advisory work for large corporates. And so I worked with boards and executive teams of Australia's largest companies, helping them transform from product cultures into customer cultures.

And so I got deep into the customer experience and customer service fields, and that's when I saw how really broken it was, particularly here in Australia: very manual, with customers having very poor experiences. And at that stage, I started to think, look, if I was going to build software, why don't we build something that actually automates the customer's experience, so they don't have to deal with humans, essentially? And so that was my thesis. And when we looked at doing that, we looked at two components. One was an automated journey system for the customer who perhaps was [00:08:00] buying a product. And then a conversational component, which ended up being like a virtual assistant, so that the customer could actually go through each journey step but then converse with a robot, in this case, if they needed any help.

So we were one of the first companies globally to build a virtual assistant that could guide customers through their full financial services product experience. So, examples of this: it's fair to say that I built the business in Australia only for about six months before I took it to the US, and we ran the business out of New York. In the US we had clients like Liberty Mutual, who used our product to guide customers through their auto insurance purchasing experience; MetLife, for customers going through life insurance applications; and Nationwide, for customers going through applications for retirement products. And then back in Australia, companies like CUA used it to guide [00:09:00] customers through healthcare purchasing, and AMP through superannuation applications.

So we were convinced that bringing this type of software to market, which of course needed to be AI because it uses natural language processing plus some other automation, would make a useful product for customers. Actually, our great passion and interest was making the customer experience better than it was. So our first product to market was a virtual assistant for customers.

[00:09:30] Ali: Yeah. Interesting. And so, was it focused on customers going through that process on the phone, or was it focused on e-commerce, or could it be used across all different channels?

[00:09:40] Cat: Right. So it was only those trying to self-service off the web. So as an example, for Liberty Mutual, a customer who wanted to buy auto insurance would go onto the Liberty Mutual website. Typically they might then have an application form that's online. Instead of doing that, they would [00:10:00] meet the virtual assistant, and the virtual assistant would hand-hold them through the entire process. So no, we didn't do audio, and we didn't do it through any other channel other than the online channel.

[00:10:12] Ali: And so you've gone from small business owner to a global tech company. You mentioned that you went back to uni and learnt about AI, but is that how you learnt the skills? Were there many people who had them? Was it hard to hire? What are we now, seven years ago? It was a very different landscape to what it is today. What were the challenges in pioneering an AI business at that time?

[00:10:36] Cat: Yeah, huge challenges, Ali. So I'll try to answer your question. We were very lucky to attract two really outstanding technologists: Joe Walla, who's now gone on to be the CTO for Finder, and Dr Jack Elliott, who's now working with Westpac. Both of these people were outstanding. Jack was our chief data scientist and Joe was our chief technology [00:11:00] officer.

And back then, so again, I say back in the day,

[00:11:04] Ali: Back in the day, only seven years ago, but it feels like three tech cycles.

[00:11:08] Cat: Right, that completely is the case.

Back then, you couldn't get AI off the shelf. You couldn't just download something from Google or Amazon, or open source something, couple together a product, and then sell it as a product.

So we had to actually build our own AI from scratch, because there was no other choice at the time. We built a new type of AI called semi-supervised machine learning, and we have that patented in the US and in Australia as a unique and novel invention.

And so the challenge was just building this stuff and making it work, and we were solving a complex problem. In hindsight, it was probably too complex a problem to solve: a customer buying a full financial services product. So what we ended up doing was pivoting a few years ago [00:12:00] out of the customer-facing robots into employee-facing ones.

And there's a couple of reasons for that. One is we were selling to Fortune 200 enterprises, and they were quite difficult to deal with, in terms of very long procurement cycles and requiring a very high level of security accreditation. So we would have spent a couple of million dollars and probably two years actually building all the security requirements so that we could sell into these large financial services companies, which we did. We got SOC 2 Type 2 certification, which was very good for a young company, but it was time and money. So I think things are a bit faster now. And these enterprises also weren't used to dealing with startups, because it was a new thing.

And quite often we fell into the trap of innovation departments wanting to engage with us because they wanted to trial AI. And because they were large corporates, they'd go, yeah, [00:13:00] look, we'll take your money and we'll do a trial. But then once the trial was finished, regardless of how successful it was, there was very little bridging from innovation into the enterprise and into full production.

So we learned the hard way that when organizations come to us and say, we want to trial AI and just see how it works and learn about it,

that you don't do that. 

[00:13:23] Ali: Yeah. Don't talk to them.

[00:13:25] Cat: You don't

talk to them. You don't talk to those. so your building machine was a challenge. Procurement's a challenge.

Security is a challenge. And then there's really getting organizations to understand AI enough to know when to use it. And I think here is the formula, Ali: the organization needs to identify a problem that they have that has significant commercial value attached to it, but where all other technologies and solutions that they've tried have failed to overcome the problem or achieve the goal. And then you would look to AI. So some companies are [00:14:00] already using software that just worked perfectly well, and just using AI didn't necessarily add any greater value to their already pretty useful technology. And so that was another deep and hard learning for us.

[00:14:14] Ali: I mean, that's one of the things that we see, even in the startup world with AI, going from that, looking at it from the actual founders perspective, we have a lot of founders who come to us and are like, oh, and it's got AI. And sometimes where the answer is why you don't need AI for that particular use case.

So where do you think AI is best used? What do you think the best applications for that technology are?

[00:14:39] Cat: Yeah. Well, let me answer that by first answering a question you asked before: what is AI?

AI is simply technology that mimics human intelligence, but that was the traditional definition of AI, originally coined back in 1956 at Dartmouth College, so this technology has been around for some 65 years. What [00:15:00] we believe now is a better definition of AI is much more related to machine learning.

So machine learning is a type of AI, and machine learning is the capacity for the software to learn on its own account and to get smarter with every task that it performs. So there are some companies that claim AI when simply they've automated functions, but that's not necessarily what we regard as true AI, which is actually machine learning.

So we have lots of debates in the AI community about what the best language is, and we still really haven't landed on that. Sometimes we use 'algorithmic decision-making', but some would say that's too big a mouthful. But if we think very simply about what an AI process is, it's five components: the data (we should talk about that, it's super important), the algorithms, the analytics, the decision-making and the automation. So data, algorithms, analytics, decision-making, automation [00:16:00]: those are the components of AI.
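As an editorial aside for readers, the five components Catriona lists can be made concrete in a few lines of code. Everything below is an invented toy example (the data, the rule, the thresholds), not a description of Flamingo AI or any real system:

```python
# Toy illustration of the five components of an AI process:
# data -> algorithms -> analytics -> decision-making -> automation.

# 1. Data: past insurance applications as (age, made_a_claim) pairs (invented).
data = [(25, 1), (40, 0), (33, 0), (58, 1), (47, 1)]

# 2. Algorithm: fit a trivial "model" - the average age of customers who claimed.
claim_ages = [age for age, claimed in data if claimed]
threshold = sum(claim_ages) / len(claim_ages)

# 3. Analytics: score a new customer by their distance from that average.
def score(age):
    return abs(age - threshold)

# 4. Decision-making: flag customers whose score falls within 10 years.
def decide(age):
    return "review" if score(age) <= 10 else "auto-approve"

# 5. Automation: act on the decision with no human in the loop.
for age in (30, 50):
    print(age, decide(age))
```

Real systems replace step 2 with a learned model, but the data, algorithm, analytics, decision, automation shape stays the same.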

[00:16:02] Tim: And Cat, machine learning: is that the highest aspiration within the subject of AI?

Is that machines thinking, or is thinking another level again?

[00:16:14] Cat: So machine learning is machines thinking. And a great example of this I just saw recently: a demo that's been presented online by OpenAI, whose product is called Codex. They have now developed the AI and machine learning to be so clever that they have an AI that codes itself,

and it actually has four different coding languages that it can code in, and they have another AI attached to the front of it.

So they use natural language processing to give the AI an instruction, such as: go and build a website around this topic. And the AI will then instruct its other components, which use AI or machine learning, to actually go [00:17:00] and build a website of its own accord. Or another example they give is: build a game, and it'll go and build a game of its own accord with just a few instructions from the human.

So the core thing about machine learning is that it doesn't need to be explicitly coded or reprogrammed to do extra things. It will learn to do this on its own.
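That distinction, behavior that improves from examples rather than from reprogramming, can be sketched with a classic perceptron. The task here (learning logical OR) and all values are purely illustrative:

```python
# A minimal perceptron: the program is fixed, but its behavior changes
# as it sees labeled examples - the essence of machine learning.
def train(examples, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred            # 0 when correct, +/-1 when wrong
            w[0] += err * x1              # nudge weights toward the answer
            w[1] += err * x2
            b += err
    return w, b

# Learn logical OR purely from examples - the rule is never hard-coded.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(examples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Note that `train` is never edited; feeding it different examples changes what `predict` does, which is the property Catriona is describing.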

[00:17:21] Tim: And sorry, just to take this one level further, by analogy. If we look at, say, a dog: I think the current thinking, and people may disagree with me on this, is that a dog can't think for itself, but it can learn. It can imitate, and it can repeat behaviors, and it's effectively responding to stimulus.

It knows that if it does a certain thing, it's going to get something good. If it does another thing, it might get told off. I don't think that of itself constitutes thinking or reasoning. Are you saying that we're now at a stage where machines are beyond [00:18:00] that simple response?

[00:18:03] Cat: Yes, they can figure things out for themselves. So look, a great example of this is when we start to talk about the existential risk that AI poses. Across all of the existential risks that humanity faces, and that would include nuclear war, climate change, an asteroid hitting the earth, pandemics, and bioengineered disease, the sixth one is the coming of artificial intelligence. The existential risk philosophers and academics put those first five at about a one in a thousand to a one in a hundred thousand chance, including climate change, of posing a true existential risk to humanity this century. So what that means is: over the next 80 years, will these things either wipe out humans altogether, kill everyone, severely reduce the potential of humanity,[00:19:00] or wipe out enough people that there's only 10% of the population left and everything has to start again? So of these risks, those first five are at most about a one in a thousand chance. AI poses a one in ten chance of severely limiting humanity's potential by the end of this century. Right. So let's just plant that as the big audacious 'holy shit, didn't know that was a thing', and then come back in.

Okay, so how would that happen? I'll give you a simple example. A human codes an AI to play a game. So let's say it's coding the AI to play the game Fortnite, and the human says: AI, your goal is to win. Simply that: your goal is to win. So the AI interprets that literally and starts to play, and it will go out and it will learn millions and millions and millions of moves and strategies through the game.

And within probably a number of hours it will become a master [00:20:00] at that game. And then the humans are playing it and they go, oh, why would we play Fortnite? That Fortnite robot wins every time. And so then the robot goes, okay, well, I was good at that game, so I should now learn another game, because I already win at everything.

So the machine goes and it wins at another game, and then learns another game and learns another game. And then people say, well, that's not fair, it's ruined the gaming industry. We don't want to play games anymore, because these machines win at everything. So why don't you turn them off?

But what has happened is that the machine, because it has this one lesson or instruction, win the game, will now do everything in its power to make sure that it wins the game. So humans might go, let's pull the power out, let's change its code, let's shut it down. But the AI by this stage has recognized that the humans will want to stop it.

And because it's been programmed to win, it won't allow that to happen, because it's [00:21:00] just after its goal. And it will then go and do things such as find alternate power sources, or recreate itself in some other place, so that if one part of it is shut down, it'll emerge somewhere else.

So these are things that these machines possibly can't do now, but will most certainly be able to do. And this is where we move into this new language around AI. So we've got ethical AI, responsible AI, and now we're moving into this concept of aligned AI: is the AI, from the very, very first instruction, aligned with human-centered values and the good of humanity? Because you can see that the innocent programmer who originally put that instruction in didn't know that this gaming robot would soon take over the world.

That was not his or her intention, but that's potentially what could happen. So that's one thing: the machine is smart enough to do that. And then there's two: putting this powerful machine in the hands of bad actors.

[00:21:55] Ali: And I guess, I mean, there are bad actors versus people just doing [00:22:00] something and accidentally creating a system that takes over the world. They're obviously vastly different elements, but they're both risks with this technology. When we talk about aligned AI, I guess the first one, around accidentally doing something, is the harder one. Like, how can you determine what makes an aligned AI? If you've got this scenario where you're just naively creating something that you think makes a system better, or that you think is for the good of humanity, how can you actually determine what the impact could be in the future? How can you foresee all of the different decision-making trees that that particular piece of software can create?

Like, I think it's almost beyond humanity; you need to create an AI to determine if that AI is aligned. But are there strategies that are being used? How's that sort of tackled today?

[00:22:54] Cat: Well, there are strategies, but you could see from that simple [00:23:00] example how this can run, right?

And AI is the fastest growing tech sector in the world now, with $327 billion worth of value in 2021. So there is work being done around responsible AI and ethical AI frameworks, and I've been heavily involved in developing those frameworks for Australia.

And now I have a consulting practice called Ethical AI Advisory, and I work very closely with the Gradient Institute, who train engineers how to code ethically. So there are frameworks and guidelines around. But probably what's additionally scary, on top of the scariness of uncontrolled AI, is that while the government is doing its best to catch up with regard to regulation and legislation, it lags a long way behind, I'd say five years behind. So we still don't really have legislation or regulations that determine, particularly for algorithmic decision-making, how this is done. And we could see this last week: there [00:24:00] was an example where the Wall Street Journal did a test of TikTok. They set up a hundred accounts and, posing these bots as underage children, used posts around sex and drugs to see what TikTok would serve up for them. And it is just a horror story. All of this is algorithmic determination that takes these kids down these rabbit holes of sex and pornography and bondage and the like. It was just deeply disturbing to read that this is what goes on.

So who's regulating TikTok? Who's regulating Google, Amazon, Facebook? And we've seen the challenges with Facebook and Cambridge Analytica, where algorithms were used to manipulate populations towards which way they were going to vote. So all of this is already happening, and to a degree the train has left the station as to the risks of these [00:25:00] technologies.

And there's a whole bunch of us around the world now trying to put together responsible AI frameworks and ethical AI frameworks, get them out, and get engineers and businesses trained in this field. So we're about to release a research report, Australia's inaugural Responsible AI Index, which will really set the benchmark over the next two years.

And that was sponsored by IAG and Telstra, who are both deeply concerned about this topic as well. And in that study, we see that less than one in ten Australian-based organizations have any maturity at all around responsible AI.

[00:25:34] Ali: That's. Scary. That's it. That's really scary. I guess. I don't know team. I have a lot of questions.

[00:25:41] Cat: And we can also talk about the amazing things that AI will bring to we've kind of done the dark side, but very happy to chat about the extraordinary things that AI will 

[00:25:51] Ali: Yeah, I have two more questions on the dark side and then we can move into the exciting side of things. But I guess one, we obviously seeing the private sector [00:26:00] and groups like your organization and individuals like yourselves pushing this. Do you think when we've got global companies playing in AI and in this space from a regulatory perspective, are, governments, the best

 to regulate this, or is it actually better to have a sort of global panel or global industry group? Because it is a global issue. And just because it might be regulated in one country might mean that, the other takes another five years and five, then you know, you've got major issues with the fast paced nature of this sector.

So who do you think is best to actually regulate this?

[00:26:38] Cat: Yeah, it definitely does need to be government. And I've kind of reticent to say that, but it's actually the truth because we've just not seen good leadership from the tech giants. Yeah. 

And if we go back to the movie The Social Dilemma, they have a beautiful line in there where they say the world is essentially controlled by five large tech companies, and in those five tech [00:27:00] companies are 50 predominantly white middle-class men who capture the attention of 3 billion people a day and influence 3 billion people a day.

So the tech giants, or their business owners and their profit models, don't orient them towards an ethical or a responsible approach, unfortunately. So that really does leave us with government. I am delighted that Australia has just launched the Tech Council of Australia, and I believe they will play a role in helping navigate through this field.

[00:27:34] Ali: And I guess you just, you know, you touched on my final question, but the tech world is dominated by a certain type of person. And diversity has been one of the big issues with AI. I think. You know. If we look at teaching AI about the history or we give them access to the internet, they'll see historically women aren't CEOs of companies or we've got history of slavery.

There are all these sorts of [00:28:00] atrocities. What is the ethical AI community's role in ensuring that AI is used to promote diversity and inclusion, rather than learning from historical data that doesn't necessarily reflect that sort of inclusive community?

[00:28:17] Cat: Yeah, this is a big problem that we're facing at the moment. And that's essentially the fact that algorithms are typically trained on historical datasets, and within those historical datasets there are often community groups, minority groups in particular, and women that have been left out. And the great famous example of this is when Apple and Goldman Sachs came to market with the Apple Card and used an AI algorithm to determine the credit limits people were provided. Steve Wozniak, the co-founder of Apple, put in his financial details; Mrs Wozniak put in her financial details. And Steve got ten [00:29:00] times the amount of credit that Mrs Wozniak got, and that kind of blew up all over the internet. And that was simply that Goldman Sachs and Apple used a traditional historical dataset that downgraded women's ability to be given credit or to repay credit. It's terrible. So this is rampant already, and if you look at any searches on the internet, a lot of them, if not most of them, have bias built into them. And the reason for that is these are coded by humans, and then they start to learn themselves.

So for example, I'm an adjunct professor, and if you search for images of a professor, you will just get images after images of white, middle-aged, usually quite attractive men in tweed suits. And who's coded that? Well, in this industry, nine in ten engineers are male and one in ten are female. So it's likely, consciously or not consciously, that young men have been given datasets to tag and images to tag and have gone: [00:30:00] okay, what does a professor look like? A male, young, and wears a tweed suit. Do they look like me, long red hair, normally dressed in street or beach gear? No, they don't look like me. So the train has well left the station, and when people wake up to this it becomes frightening, because then how does that apply to being given bank credit, or being given a prison sentence, or getting health care, or when the police are patrolling neighborhoods? How's that all going to play out? In each of those fields I just talked about, there are terrible examples of where that's all gone wrong, because of the bias in the datasets.

So there are eight core principles of ethical AI. I'm very happy to share those with you, but number three is that AI must be fair and it must not discriminate.
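As a concrete aside on that fairness principle: one simple check engineers use is demographic parity, comparing outcome rates across groups. The groups, data, and tolerance below are all invented for illustration; real fairness audits are considerably more involved:

```python
# Toy demographic-parity check: compare a model's approval rate per group.
# Applications are (group, approved) pairs - invented data for illustration.
applications = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    decisions = [ok for g, ok in applications if g == group]
    return sum(decisions) / len(decisions)

# Flag the model if the gap between groups exceeds a chosen tolerance.
gap = abs(approval_rate("A") - approval_rate("B"))
biased = gap > 0.2   # the tolerance is a policy choice, not a law of nature
```

With these invented numbers, group A is approved 75% of the time and group B only 25%, the kind of gap the Apple Card example made visible.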

[00:30:46] Tim: Cat, that's fascinating, but is it the case that we're only as good as the weakest link? So that if there's a particular state on the planet that is not willing to adhere to [00:31:00] these sorts of structures and standards, then all the good that we may do here in Australia is really pointless.

[00:31:11] Cat: Yeah, there's a big question about that. And what we say in the AI community is that AI will be the primary force that dissolves international boundaries and borders. And we know that the largest AI country in the world is China, followed by India, followed by Brazil, and then the US, and then European countries and Australia.

We are very much laggards in this field. We have one tenth of the investment per capita going into AI compared to, say, the US. And federally, I've been lucky enough to be invited to sit on the federal government's team who are allocating the $124 million which was recently announced in the last budget

that will be going towards AI. But Tim, you and I know — we're in venture capital, Ali, right? [00:32:00] We could potentially give that $124 million to one company, let alone —

this is the budget for the entire country. The US does something like $2 billion worth of investment into AI in a given year.

So, Tim — and I'm always reticent to say it, because sometimes I think it just sounds racist — but there are concerns around China's use of AI, because they have a different approach. In countries like Australia we do have societal debate about things such as facial recognition, and we see that in the US as well.

There are some states in the US that have banned the use of facial recognition. We know that China has scale and it has state backing, but it doesn't necessarily have the same level of societal debate that we do. And so we see things such as the credit scoring system, where Chinese individuals are given a credit score.

They are monitored very heavily with facial recognition, and that all [00:33:00] seems very scary to us. But I think China will also be one of the countries that comes out with some of the most incredible AI for good as well — in healthcare, in financial services. We expect that China will revolutionise the whole social media sector at some stage soon as well.

So I don't want to be that person who is always worried or scared about China.

I think we should be worried and scared about Silicon Valley as well. We just need to be alert and as educated and aware as possible.

[00:33:35] Tim: I lived for two years in Shanghai, and facial recognition cameras were just everywhere, and in quite intrusive places. You'd be walking along the street and they're quite low down — they're actually designed to photograph your face — and then there are others that do number plates and various things like that.

But — I don't know whether you'd know the answer to this — I was told that the original facial recognition technology that's been so widely deployed in [00:34:00] China was originally designed in Australia.

[00:34:02] Cat: Correct. That is my understanding as well.

[00:34:09] Tim: It was a sort of delicious irony, in all of that.

[00:34:09] Cat: Absolutely. And this is the nature of there being no borders

anymore, because this technology — whether it's from Israel, which is another great hotbed of amazing AI technologies, Silicon Valley obviously, China obviously, Ireland as well — there are lots of places. And then this technology is often freely available wherever you are.

[00:34:32] Tim: It was fascinating living there, and it's obviously not everyone's cup of tea, this constant surveillance. I guess I pretty quickly became used to it, but it obviously has a lot of potentially negative aspects to it — particularly from the point of view of those of us from the West, where individual freedoms and individual rights are foremost.

Whereas there, the greater good is really what dictates what happens. But I had a friend there who [00:35:00] had his wallet stolen in a particular bar or something. And thanks to the facial recognition technology — he went to the local police station — they were able to track it down within three hours. And they had film of the individual travelling huge distances, just from all the various cameras.

So, look, perhaps grasping at straws, but there was a positive out of that: the wallet was returned and the offender was arrested — possibly never to be seen again. But yeah, there we go.

[00:35:29] Cat: There's a great Chinese story where, at a sports game, one of the most sought-after Chinese criminals was picked up as the facial recognition scanned — I think it was something like a hundred thousand people — and it picked him out of the crowd and they arrested him. The most wanted man in China. So, is that a good news story?

Well, possibly it is.

[00:35:53] Tim: Depends on your perspective, doesn't it?

[00:35:55] Cat: Right.

[00:35:55] Tim: Very much. And I guess that just neatly highlights [00:36:00] those sorts of ethical issues that we're here debating. I wanted to ask — it's slightly off topic — have you read any works by Philip K. Dick?

[00:36:07] Cat: No. 

[00:36:08] Tim: He wrote, through the sixties, seventies and eighties, a number of science fiction classics on which a number of movies were built — Total Recall, Minority Report, and Do Androids Dream of Electric Sheep?, which became

Blade Runner. Fascinating reading. And there's one that I particularly like called Vulcan's Hammer where — a little like the analogy you gave earlier — there's a sentient supercomputer running the world, but it's becoming out of date and a new replacement for it is being designed, and the story is of these two computers effectively fighting each other for control. Anyway, brilliant. I guess a little like 2001: A Space Odyssey. Anyway, I'm probably way off topic.

[00:36:57] Ali: I wanted to talk a little bit about the positives of [00:37:00] AI. There have been some incredible developments — like DeepMind's AlphaFold, and some of the things happening in drug development and vaccine development — as well as what we're seeing across potentially fully automated driving,

quantum computing, and you talked a little bit about social media as well. But I guess: where do you see the greatest AI applications, and how can they benefit humanity? Where do you see those lying?

[00:37:26] Cat: Well, right now, the great things that AI and machine learning do are a couple of things. One, they make operations more efficient, so they can automate things, and that's great. Two, they have better analytics. And three, they can help humans make better decisions. So those things are remarkable. An example of that, in the software that we built: one way we would describe it was as a self-organising library. We would put customer data into the [00:38:00] machine, and it would be able to analyse things at depths and complexities well beyond what any human could do, and well beyond any normal analytical software.

The way we would describe it was as a 300-dimensional graph, and it would then find patterns and organise itself into new insights and new patterns. And that's what made it so extraordinary. Efficiency, analytics and better decision-making — that's what AI does for us now, which is great.

So the things that we hope will perhaps be better in the future will be around disasters — natural disasters. Being able to predict them, to be able to mobilise collective action when we're in the midst of a disaster — those sorts of things will be great. And we're seeing great AI and machine learning coming out around predicting flooding, predicting fires, predicting earthquakes.

Those sorts of things will be super useful. So on the environmental side, super useful. The other big [00:39:00] area for AI, and where we're seeing the vast majority of investment, is in healthcare — anything related to health tech, med tech and AI. And this has only increased since the pandemic: we've seen five years of tech advancement in this field within the last twelve months.

And this will be for things such as disease mapping and disease diagnosis. I sit on the board of the Garvan Institute, which is Australia's leading medical research institute — 700 scientists. And we talk there about — you hear people going, "Oh, we really noticed that this vaccine didn't take four or five years to develop like other vaccines," and the scientists and the data scientists go,

"Yeah, of course it didn't take that long."

[00:39:43] Ali: Yeah.

[00:39:44] Cat: "We've got super brilliant technology that maps all this stuff out much faster than ever before." So in disease diagnosis, in robotics for performing operations on humans — I think there is a [00:40:00] huge amount of value that will come in healthcare. And then we get into social-purpose machine learning, and there's a lot of beautiful work being done now for the disabled community — for children with autism, who are working with robots that are teaching them how to do certain things. Beautiful work has been done there.

There's a wonderful application called Seeing AI. This is where, for people who are visually impaired, the AI actually narrates to them the environment that they're walking through. So if they're walking through a beautiful forest, it might say: hey, you're walking down this track, there's a couple of rocks on the side, there's a beautiful pine tree next to you,

and a bird — that's a parrot — just flew over the top; that's what you can hear. So there are some beautiful things for people who have disabilities. And on social consciousness, we will see some great things — I know there are things in the pipeline coming around supporting people experiencing domestic violence, or in war zones, those sorts of things.

We'll also see great work done there. The main use cases at the moment from a business perspective are around personalisation, and we say that we're moving into the era of personalisation of everything. This is where machine learning learns to know you better than you know yourself. And Yuval Noah Harari, the author of Sapiens and Homo Deus, says we're now in the era where, for the first time ever, we're competing against organisations that know us better than we know ourselves.

So organisations will personalise and target offers and sales offers to us. Most of the time now they get it pretty right, and we've all had that experience. But again, when we take that to the level of mass personalisation, we can also see that it doesn't always get it right — and what harm can happen when it's not right.

So at the moment, customer service, sales automation, back-office [00:42:00] automation and analytics would be the main areas where we're seeing AI being used in business.

[00:42:06] Tim: We are in, I think, what's generally considered to be the fourth wave of industrial revolution — and it's probably way past industrial — but each wave has promised, or threatened, to put people out of work and leave mass unemployment. To date that hasn't happened. Is AI going to deliver that eventually?

And it may be a good thing if it does, because people may not need to work and would therefore have time for more leisure activities. But do you see that happening?

[00:42:42] Cat: So we're not sure, really. If we look at the statistics, Gartner did a model which found that for every one job displaced by AI, 1.3 jobs will be created. And so, for example, in Australia, by the year 2030 — so nine years' [00:43:00] time — we need 160,000 more data scientists in order to keep up with how AI is tracking.

So, like, where are these people going to come from? That's one of the things.

But we do know that massive amounts of work will be automated. And this is where it gets a bit tricky, because it will be predominantly administration-type jobs. So we will see the most automation there. In fact, it's predicted that within the next three to five years, 40% of jobs in finance, insurance, hospitality, tourism, travel and telecommunications — these frontline jobs — will be automated by machines. And within the next two years, 30% of all customer interaction will be done by machines. Now, the real challenge — the thing that really disturbs me about this — is that 90% of those jobs that will be removed will be the jobs of women and minority groups, and they're entry-level jobs.

So they're [00:44:00] also the jobs that young people — who already have high unemployment in many countries — used to take. So that's the worrying part. So, Ali, back to your point: there's bias and discrimination in the data, in who's doing the coding, and then in who is actually going to be affected by these machines.

And so what we're faced with in AI is actually the scaling of all society's ills with regard to discrimination and unfairness, coded into the machines and deployed at scale. And that's the frightening part of it. So we do think, yes, there will be many people put out of jobs, and there will be many new jobs created.

And we also often talk about, finally, the coming of the universal basic income: when those people are dislocated from their jobs and they are not able to be retrained into something else, then potentially they will have a universal basic income. And Elon Musk is probably one of the people who [00:45:00] talks most openly about the need for that.

[00:45:02] Tim: Fascinating. And at what point, if ever, are we going to see robots and machines paying income tax?

[00:45:10] Cat: Right, so that's an interesting thing. We used to sell our product as, essentially, virtual workers: you don't have to pay them, they don't get sick, they don't go to sleep, they don't complain. And then I sat once at an executive round table with a gentleman from PwC who, it turned out, was working on robot taxes. So don't worry,

it's all coming. For sure, that'll be coming — slowly but surely.

[00:45:37] Tim: So, Cat, can you tell us a little about your role with Boab AI and where that business is going?

[00:45:43] Cat: Yes, I can. So I'm Executive Chair of Boab AI, which is being funded by the Victorian government and Artesian, and we are Australia's only dedicated artificial intelligence fund. And so we look for scale-up [00:46:00] AI companies — those that have a product in market, have some customers and have some revenue.

And then we wrap a services program around those companies to get them to scale much faster. And we've had some extraordinary companies come through, such as Strongroom, Remi AI, Daitum and PI Exchange. And these companies, I think, have greatly benefited from the Boab program that we've been working on with them.

And we're now looking, over the next few years, to fund 32 young Australian companies altogether. And separate to that, we are raising a global AI fund in the future as well, looking to extend that level of investment to potentially international investments. So I think we really are, at the moment, the go-to

fund with regard to AI companies in Australia. And the great thing is that we're seeing so many companies that we're learning what the Australian AI landscape really is. So we are deeply in there, and we're [00:47:00] deeply committed to starting to accelerate Australia's AI potential and capability.

[00:47:05] Tim: So, Cat, for the average listener out there who wants to understand AI and these fascinating issues that arise from it — what can they do? Where can they go? How do they learn?

[00:47:17] Cat: Yeah. So the first thing is, people don't need to go and do a data science course or an engineering course at all. In fact, I don't have any of those qualifications myself. I just recommend learning about it because, as I mentioned before, AI is the fastest growing tech sector in the world. It will be invasive.

So at the moment, on average, a middle-aged person like you and me will interact with AI 28 times a day. Younger people, like Ali and even younger, will interact with AI up to a hundred times a day. So I think it's almost an obligation on your average person to just find out more about it.

And there's just a truck ton of [00:48:00] podcasts and books and YouTube clips — whatever part of AI you're interested in, just Google it and you'll find something great. A couple of good books: Human + Machine, which is written by a couple of Accenture consultants. Another great book is Invisible Women.

Another one is Weapons of Math Destruction. Another one is The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive. Those are great books to get started. But the other thing I'd love everyone to think about — I mentioned before that we don't really see great leadership from the tech giants with regard to ethical or responsible AI,

and the government is doing its best to catch up but is pretty far behind. The obligation is now on each of us as individuals, regardless of our role or station in life, to start being aware of this and to start thinking about how we do this so that it's AI for good and not AI for bad. And a way to do that is to start to [00:49:00] learn about

ethical AI principles. They're really simple and easy to find online — they can certainly be found on our website at Ethical AI Advisory. And start to just be aware of what's happening around you. For example, on social media, if things are being pushed to you that you don't think are right, then just don't engage with them.

It's time for us as average citizens to push back. And then within our businesses, if we're seeing or feeling that things are not being done ethically — and this might go into what has been called whistleblowing — then it's time for us all to stand up and make a stand. And so I just implore everyone: become an ethical leader.

This will be the leadership of the future. And as we go forward, we'll also see that businesses will hire people based on their understanding of ethics, and organisations will have, in their procurement processes, a requirement that you can demonstrate you understand ethical principles and have implemented ethics in your business.

I say the time is now [00:50:00] for all of us to take leadership, because we're not seeing it anywhere else. That's the one, man.

[00:50:06] Ali: Well, thank you so much, Cat. Honestly, it was fascinating having you on our show, and maybe we'll get you back in season two — I'm sure there'll be a truck ton of developments in the space between now and then. So thank you so much. We really appreciate you coming on.

[00:50:21] Tim: Thanks, Cat.

[00:50:22] Cat: Yeah. Such a pleasure. Thank you.