Satya Nadella, Global CEO of Microsoft visits the 3A Institute


On 21 November 2019, Global CEO of Microsoft, Satya Nadella, along with a delegation from the Australian leadership at Microsoft, visited the 3A Institute at the Australian National University.

This story appeared in the Microsoft Newsroom; an edited version appears below.

Just as the computing revolution generated a new field of endeavour — computer science — the fourth industrial revolution will also require new skills and understanding to ensure that intelligent technologies are implemented wisely and inclusively.

The 3A Institute at the Australian National University’s College of Engineering and Computer Science was founded in September 2017 by its director, Distinguished Professor Genevieve Bell, with precisely that in mind.

Its intent is to focus on nurturing a new applied science devoted to the responsible use of AI by both organisations and individuals. The Institute is investigating how to design for an autonomous world, how much agency should be given to digital systems, and what assurance is required to preserve people’s safety and values.

The Institute is also teaching a Master of Applied Cybernetics course, equipping graduates with the skills and insights to play an informed and important role in this fast-emerging field.

According to Bell, the intent is “to establish a new branch of engineering to help take AI safely to scale.”

Microsoft has partnered with the Institute for a second year, and the ANU Vice-Chancellor, Professor Brian Schmidt, recently announced that the University will match Microsoft’s contribution to support the work being undertaken.

The importance of the work that Bell and her team at the Institute are engaged in was driven home during a visit in November from Microsoft CEO, Satya Nadella.

Speaking with students and staff, Nadella said: “One of the things I deeply think about is that to some degree, maybe in the past too, we have abdicated accountability and responsibility to this new technology versus us humans creating our own future.” It was essential, said Nadella, to ensure proper control and oversight of all technology deployments.

He applauded the diverse make-up of the Master of Applied Cybernetics student cohort, noting: “To me that is the best defence … to have diverse teams.”


Physical and digital combine

Genevieve Bell herself has a diverse and storied background: a cultural anthropologist, technologist and futurist, she returned to Australia after 18 years in Silicon Valley to take up the position at ANU. She is also a director of the Commonwealth Bank of Australia, a member of the Prime Minister’s National Science and Technology Council, and a Fellow of the Australian Academy of Technology and Engineering.

She describes her intent for the Institute as both “simple and rather ambitious.”

“It’s incredibly clear that the technologies that constitute AI are moving from being a thing that sits inside phones and computing objects and into the built and physical world around us, moving into objects as varied as traffic lights, robots, autonomous vehicles, elevators and our bodies.”

Bell says it is essential to start thinking of a class of objects that are AI-enabled, rather than a series of discrete standalone items. “Start to imagine drones and autonomous vehicles and smart electrical grids as manifestations of the same technical system — i.e., a cyber-physical system.

“For me it became clear that if you had all of those new systems in the world you need a different way of thinking about them, of building them, and regulating them.”

After promoting the first year of the Master’s course on social media the Institute attracted 173 applicants from all over the world with backgrounds spanning the alphabet from astronomy to zoology. Eventually an initial cohort of 16 was selected, which included Olivia Reeves and Hrishikesh Desai.

Immediately before starting the Master’s, Reeves was on maternity leave from her public sector role in medical device regulation with the Therapeutic Goods Administration. Her interest in the course was sparked by a growing awareness that emerging technologies were difficult to regulate using traditional frameworks and that a new approach was needed.

She says that she has learned a great deal during the year, and also credits the course with extending her appreciation for diversity and the value in being able to seek insights from a group of people with vastly different expertise and experience.

Reeves’ major project during the course brought her systems engineering skills together with those of a theoretical chemist, a computer science teacher and a psychologist. Together the group created Cognitive Cubes, a game-based educational platform that “helps people to learn and learns about people.”

Blocks, similar to a child’s building blocks, were created with embedded sensors and actuators. The blocks are used to solve pre-defined logic problems; the system responds based on how people are solving the problem posed, setting the next problem based on a deeper understanding of what the user needs to learn.

It is an example, she said, of how AI can augment and assist. “You need to look at the system holistically. The key thing is this technology is a way to improve things that you are already doing — if things were fundamentally flawed then it’s not going to help.”

Nadella was clearly intrigued by the educational potential of the Cognitive Cubes.

“It’s deep personalisation meets the classroom. I’ve always felt that we can have any curriculum deeply personalised to meet the distribution of cognitive capability in real time.

“That would be democratising classrooms — where the teacher says in real time, ‘if I did this, this student is going to be able to crack it’.”

AI-augmented learning could potentially have immense personal and societal impact, according to Nadella. “What if you had a system that said every kid can solve the Rubik’s Cube — and all it needs is a commodity camera and a kid with a Rubik’s Cube, and at your own pace we will get you there. Imagine how a middle school kid in a public school with a cell phone gets that sense of accomplishment.”

It is that human focus which is critical for successful AI at scale, according to Hrishikesh Desai, who is working on his capstone project. He said one of the key insights from the course is that “AI operates in a social context… there is so much to think about people impacted by AI.”

His capstone focuses on AI from a leadership perspective. “Leadership is one of the crucial things we need to think about because it sets the values and beliefs in any organisation, and those then trickle down to the employees who develop AI systems.” Desai said informed leadership was essential to ensure AI at scale was both safe and ethical.

New discipline emerges

The Institute has recently selected a second cohort of 16 people for its 2020 Master’s course. Again, it is highly diverse, with a range of people of different ages, genders and backgrounds.

According to Professor Bell, it is important to have a rich cohort that develops into a genuine community of people who can learn from each other. So too, she says, is a hands-on approach.

During the course, besides conventional reading, lectures, assignments and lab projects, students are encouraged to think about things differently, says Bell. She knew she was getting traction when one group of students, who had been tasked to run a project based around the Cayla talking doll, had it x-rayed to see what was inside.

“Suddenly you had this ethereal object that let you see components in a different way — they were able to lean into the notion of different ways of asking questions.”

Bell also lined up an array of diverse lecturers: from ANU Vice-Chancellor and Nobel laureate Brian Schmidt, who talked about his prize-winning work, to traditional weavers from the Ngarrindjeri nation in South Australia. “Probably for most of my students it was their first encounter with an Aboriginal technology like that.

“Getting people to think: where does the technology come from, what was the impulse that caused it to be created, what was the work that was being done — and how is knowledge about technology transferred.”

The insights gained and skills learned during the Master’s program are what Bell hopes will help steer new sorts of conversations about the interaction between the digital and physical worlds.

Partnering with companies including Macquarie Bank, KPMG and Microsoft also encourages conversations about what the future of work might look like and the skills and talents that employers will need in the fourth industrial revolution.

Says Bell: “I’m really clear we are not creating computer scientists here, and I’m not suggesting we would replace computer science — same with engineering.” Instead, says Bell, it is about seeding a new discipline that empowers people to have conversations about, and work with, cyber-physical systems powered by AI.

“My suspicion is the cohort this year are going to end up working alongside computer scientists and more traditional engineers, but also alongside product teams,” to some degree acting as translators between the real world and AI systems, she says.

Bell also wants to ensure the learnings from the 3A Institute are accessible, and provide a scaffold for organisations deploying AI.

“How do we help really different sorts of organisations think about what the world is going to be like when AI goes to scale? What does it mean to be an AI-ready organisation?

“This is not necessarily how do you learn to code, but how do you think about what it takes to integrate these new technical systems — everything from what is the nature of your data, where is it kept, what is the nature of your organisational structure, how do you think about where decision making is based?

“It turns out being ready to inhabit this world of cyber-physical systems means asking a whole set of questions that most organisations don’t yet know how to ask. That’s possibly because the AI side has been hived off inside the R&D or CIO’s office. The reality is that it has implications for everything from legal counsel through to HR to facility services.

“We were incredibly lucky to have Microsoft willing to partner with us — we are not a tech shop, we’re not going to teach you to be an AI expert in the classic sense — but they were willing to explore with us what it might be like to come at the topic from a slightly different angle. The delight of that is sitting in rooms with deeply accomplished grown-ups, asking them to approach the problem a different way.”

First published: Microsoft News: 3A Institute charts path to responsible ethical AI at scale with support from Microsoft
