I was intrigued by an invitation to a Society 4.0 Symposium held by the Swinburne Social Innovation Research Institute (SIRI). Society 4.0 is the social and community perspective on Industry 4.0 – the fourth industrial revolution, which is already well underway.

As SIRI says: “In the midst of the growing power of emerging and converging technologies such as genetic manipulation, robotics, and AI (artificial intelligence), our notion of work, society and what it means to be human are changing...In particular, how does trust function in this digital age – trust in individuals, technologies, and social institutions?”

Many dimensions of this conversation matter to philanthropy. First of all, we must understand the opportunities and challenges posed by these new technologies so that we can support the use of technology for the public good and better understand the ethical decisions made along the way.

I have written previously about the important work of Lucy Bernholz at Stanford University's Center on Philanthropy and Civil Society (PACS) and the need to ensure that relevant data can be accessed and owned for the public good. This remains critical. There are many other questions to tackle.

The following is an overview of a few key insights I gained at the Society 4.0 Symposium about justice and about the world of work.

It was important that experts from industry, journalism, as well as leading academics from Monash, Melbourne, ANU and La Trobe Universities joined Swinburne in this discussion. For the program and a list of presenters see swinburne.edu.au/socialinnovation.

Congratulations to Professor Jane Farmer, Director of the Swinburne Social Innovation Research Institute, and Professor Lawrie Zion (facilitator of Panel 3) for their leadership of this important discussion.

AI is already affecting the world of work. Sometimes it can handle dull tasks and free people to tackle more creative or value-adding work.

AI can apply algorithms to recommend a decision, but some decisions – especially in areas like the law and justice system (sentencing, immigration applications and the like) – require humans to apply the complex context that gives those decisions nuance.

AI can provide pattern recognition, but it does not yet apply common sense and logic in the way that humans can. Mathematical models are not perfect: they hold inherent biases, which can be racial, cultural, gendered or of other kinds.

So how do humans create value in a world of AI? Humans can look forward and humans can dream. Humans can focus on the unforeseen problems or opportunities that confront organisations or nations.

The application of new technologies in one sector can provide learning for other sectors. Applications in the health sector can sometimes be translated to banking, retail or the social sectors. Overall, there is a blurring of discrete disciplines. Scientists need philosophers, and vice versa. We need to find new ways of using transferable skills to solve business problems.

An area of interest to Lord Mayor’s Charitable Foundation is the Future of Work.

The following comment shifted my understanding: "Work and learning are now two intertwined activities. In the future, this will increase. Learning is part of work. The extent and the way learning is being integrated into work is new," said David Yip of DXC Technology.

The closing session posed some tough questions:

  • Are our governance structures up to the task of understanding the kinds of choices that have to be made as we adopt and adapt new technology?

  • How do we govern technological change when the tech advisors are learning just ahead of the client?

  • How do we decide which decisions we make as humans and which we outsource to AI? We can’t simply outsource all the decisions about the application of justice and who gets a job.

  • How do we ramp up ethics education for all students and all business leaders?

These discussions need to move into the mainstream.

Dr. Catherine Brown
Chief Executive Officer