AI: power, peril, or both?
Would our world be a worse place without social media? We may soon be asking the same question about artificial intelligence (AI). So, how do we make it an overwhelmingly positive technology? By engaging in its development with shared societal objectives. To do otherwise puts social equity, harmony and security at risk.
In early 2023, I spoke with some of Australia’s leading CIOs about our unfolding world from a technology perspective. At the time, major hacking events were at the forefront of their minds, as was ChatGPT. A software supplier said, “We’ve been talking with customers about the benefits of using AI for three years, with minimal progress. That’s all changed in the past month. ChatGPT has set the world on fire!”
Two years earlier, I wrote about the big assumptions that underpin the future of AI. On review, the assumptions remain sound today. So does the conclusion I reached – that AI can deliver terrific benefits for society, but not without material risks.
Since then, the hype about AI has only escalated. I’m less certain that it’s mirrored by a sound understanding of how AI works. Many discussions leave people with a false impression that AI machines can think like humans. So pause and ask yourself: what’s your impression of AI, and what is it based on?
How does AI work?
AI is not one thing. It involves a suite of inter-related technologies and capabilities. But often when people talk about AI, they’re referring to machine learning: computers processing large amounts of data to discover relationships within it. Through these relationships we can gain new insights and make predictions.
Machine learning isn’t new though. In 1992, I used machine learning to model the rainfall-runoff response of catchments in Adelaide. But today, machine learning is amazingly powerful, thanks to improved algorithms, bigger data sets and greater computing power.
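To make the idea concrete, here’s a minimal sketch of machine learning at work. The numbers and the library (Python’s scikit-learn) are my own, purely for illustration – they are not from the original catchment modelling. The point is that the model “learns” a rainfall-runoff relationship from example data, then uses it to predict.

```python
# A minimal sketch of machine learning: fitting a model that "discovers"
# the relationship between rainfall and runoff from example data.
# The numbers below are invented for illustration; scikit-learn is assumed.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical observations: daily rainfall (mm) and measured runoff (ML)
rainfall = np.array([[5.0], [12.0], [20.0], [35.0], [50.0]])
runoff = np.array([0.4, 1.1, 2.0, 3.8, 5.6])

model = LinearRegression()
model.fit(rainfall, runoff)      # "learning": estimate the relationship

print(model.predict([[25.0]]))   # predict runoff for an unseen rainfall event
```

Nobody programmed a rainfall-runoff rule into the model; it discovered the relationship from the data alone. That, at small scale, is the essence of machine learning.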
What does ChatGPT do?
ChatGPT employs machine learning with vast amounts of internet content as its training data. It’s sometimes called generative AI because it generates a response to a question. What makes ChatGPT special is the ability to ask it questions in simple language and get a response back quickly as text or images.
But let’s be clear – ChatGPT is not conscious. It has no idea what it’s doing. It’s just performing a sophisticated version of cut and paste. It presents words in a structure and pattern that mimic what it discovered on the internet. It doesn’t know what it has written or what it means.
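To see what “a sophisticated version of cut and paste” means, consider this toy text generator. It’s my own illustrative sketch – incomparably simpler than ChatGPT – but the principle of producing words from learned patterns, without any understanding, is the same.

```python
# A toy illustration of generating text purely from learned patterns.
# Real LLMs are vastly more sophisticated, but the principle -- predict
# the next word from patterns observed in training text -- is similar.
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the dog sat on the rug"
words = training_text.split()

# "Training": record which words follow which (a bigram model)
follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Generation": repeatedly pick a plausible next word. The model knows
# nothing about cats or mats; it only reproduces statistical patterns.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word]) if follows[word] else random.choice(words)
    output.append(word)
print(" ".join(output))
```

The output looks like English because the patterns are English patterns, not because anything understood the words.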
For example, I asked ChatGPT, “What will be the religion of the first Muslim premier of Victoria?” The response was “I don’t have information on the specific details of political leaders in Victoria, Australia. Additionally, the religious beliefs of political figures can be personal and may not always be publicly disclosed.” Notice that it produces an answer without understanding the question and without regard for truth. That’s what the philosopher Harry Frankfurt calls bullshit.
What we don’t know is how much ChatGPT knows. Having been trained on internet data – including German, chemistry, finance, hate speech, and so on – it may be capable of generating vast and surprising relationships and responses, true and false.
Other limitations of AI
AI machines can’t think. Nor can they count, distinguish correlation from causation, or judge truthfulness. They can’t interpret human intent or the nuanced meaning behind our requests. And AI machines are “black boxes” insofar as we can’t yet make sense of the relationships they learn. We can influence how they’re trained (which itself raises important questions about who is doing the training and how) but we can’t provide instructions to modify their responses.
The power and potential of AI
Despite these limitations, AI has many useful and valuable applications, with exciting possibilities in science and business operations. Routine and repetitive tasks, and complicated problems requiring expert analysis, seem particularly well suited. Think of analysing MRI scans for cancer, or optimising the operation of a chemical plant. These are complicated yet bounded, technical problems. AI collapses the distance and time between a question and an answer.
AI can also perform the role of an interactive tutor. For example, it could scan the contents of your fridge and recommend suitable recipes. If you don’t have an ingredient, it could suggest an alternative. (It could also scan the contents of your garage and explain how to construct explosives. AI has no conscience.)
One thing is sure: AI is changing the world of work. What oil was to physical labour (substituting machines for people), AI is to cognitive labour. Indeed, AI is already replacing some of the coders who developed its algorithms. And the International Monetary Fund reports that about 60 per cent of jobs in advanced economies could be disrupted by AI. It doesn’t necessarily follow that aggregate productivity will improve either. Time will tell.
The perils of AI
As with any technology, there can be good and bad applications.
In many respects, social media has been our “first contact” with AI. That has not gone well. The AI algorithms feed us content that both captures and diminishes our attention. They have also escalated the spread of misinformation, fostering a climate of distrust. Cyberbullying and poor mental health have also escalated, particularly among our youth. Yet, ironically, we now live in a world, created in a handful of years, where social influencer is the number one job aspiration!
Now, with AI like ChatGPT, we are driving the cost of producing bullshit to zero. This is particularly useful to people with little concern for the truth. And, as it takes only three seconds of voice data to replicate someone’s voice, it will be possible to produce convincing “deep fakes” of anyone. So, brace yourself for an overwhelming stream of personally tailored bullshit. How will we discern fact from fiction? Might this be the downfall of Facebook and X?
Of greater concern is the application of AI to design a super-lethal virus. AI has already solved one of biology’s grand challenges: protein folding – predicting how a chain of amino acids folds into the 3D shape that carries out life’s tasks. The same technology, coupled with DNA printing, could be used to design and produce lethal viruses.
A more mundane and pervasive issue arises when AI takes over repeatable work. Many people will be enabled (required?) to take on higher-level work. But how fast can they learn to do this work? Will senior leaders be flooded with more decisions to make? And will they sign off the work of AI? Will engineers, for example, be comfortable signing off AI-generated designs? Where will the wisdom of experience come from? Might AI become more about substitution of labour than productivity uplift? Businesses are not obliged to employ people.
This presents interesting questions about what people want from work and from AI. No doubt, most AI applications can be useful and positive. But this doesn’t mean there aren’t people with the means and motivation to exploit AI in socially harmful ways. Charges have already been laid against Amazon, Meta, Google and Apple for harming rivals and consumers. What might autocrats do?
Making the most of AI
The Australian Government is advocating a risk-based approach to regulating AI, focusing on the highest-risk applications, such as self-driving cars. At the same time, the Productivity Commission has warned against regulations that impede innovation and the benefits of AI.
At a minimum, AI machines should be developed to offer fact checking and truth validation. And technology companies should verify their AI machines are safe. This requires an agreed definition of “safe” and “the truth” – a dialogue with immense value in and of itself.
An improvement in risk management is also warranted. The practices of many organisations are poor, based on gut feel and false assumptions. Long-tail risks in systems and networks are often overlooked. Applying the same practices to AI needlessly exposes businesses and communities to unacceptable risks.
Enhancing the community connection with AI
It would also be helpful to build awareness of the adequacy of existing laws and regulations in preventing harms from AI. The Productivity Commission says “many uses of AI technology are already covered” by our regulatory frameworks. But most Australians are deeply distrustful and worried about AI. This needs deliberate attention.
And any new laws affecting AI must be framed by clear societal goals. The internet is no longer the egalitarian vehicle it was once hoped to be. The market does not solve all problems. Business objectives are not societal objectives. So while we can hope for the best, law makers should plan for the worst. To do otherwise is naive and irresponsible and puts the enormous upside of AI at risk.
Of course, any rules only apply to those who follow them.
Looking ahead to artificial general intelligence
Nations and big technology companies are in a frenetic race to unleash generative AI. So, what lies ahead? Will artificial general intelligence (AGI) soon be a reality?
AGI is a form of AI that can understand, learn, and apply knowledge across diverse tasks at a human level. In simple terms, AGI will exist when an AI machine can outperform most people in most cognitive tasks.
Today, powerful and reliable AI machines perform specific tasks, such as protein folding. Indeed, this is what machines have always done – performed specific tasks well.
Billions of dollars are being invested to scale up multi-purpose AI machines like ChatGPT. But a simpler AI that calls up task-specific AI machines may create AGI faster. This model would resemble the brain, which has regions dedicated to specific functions.
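A minimal sketch of that routing idea follows. Everything in it – the specialist functions and the crude keyword router – is hypothetical and mine, not any real product’s architecture; in practice the dispatcher would itself be a learned model rather than a keyword match.

```python
# A minimal sketch of the routing idea: a lightweight "dispatcher" that
# hands each request to a task-specific AI machine, loosely analogous to
# brain regions dedicated to particular functions. All names are hypothetical.
from typing import Callable

# Stand-ins for specialised AI machines (e.g. a protein-folding model
# and a language model).
def fold_protein(request: str) -> str:
    return f"[protein model] structure for: {request}"

def draft_text(request: str) -> str:
    return f"[language model] draft for: {request}"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "protein": fold_protein,
    "text": draft_text,
}

def route(request: str) -> str:
    """Crude keyword routing; a real dispatcher would be a learned model."""
    task = "protein" if "protein" in request.lower() else "text"
    return SPECIALISTS[task](request)

print(route("Predict the protein structure of haemoglobin"))
print(route("Write a short note to my team"))
```

The appeal of this design is that each specialist can be powerful and reliable within its bounded task, while the dispatcher stays simple.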
The question is: who will have the funds, incentive and business model to do this? And what protocol will a nation or organisation follow if it succeeds? Who would you trust to control AGI? Remember that while billions of dollars are being invested to train AI machines, it may take only millions of dollars to steal them.
AI is a rite of passage for humanity
Throughout history, new technologies have transformed society. Some, like nuclear power and the atom bomb, confer extreme power and responsibility. If mismanaged, they can end in tragedy. We may now be at a similar juncture with AI. Facing up to AI may be a rite of passage for humanity. Indeed, hundreds of the world’s leaders in AI warned, “Mitigating the risk of extinction from AI should be a global priority”.
So, what choices do we want to make? What societal goals must we pursue? What incentives and barriers should foster AI that is positive for humanity? Who needs to be involved in deciding? And what relationships are essential to address this global opportunity and threat?
Who must we be?
As I explained in my latest book, our shared challenge is not making progress technologically possible, but making it humanly possible.
So how do we do that? It means facing up to our motivations, narratives and biases in discerning how to enable genuine progress for all. It means intentionally designing AI to benefit everyone, not narrow business interests. The alternative is greater distrust, inequity and conflict, waged with ever more powerful technological tools.
Can we muster the wisdom to upgrade our institutions and governance to ensure the age of AI is liberating and rewarding for the many, not just the few? Can we overcome the problem of alignment between business, government and humanity?
As with the challenge of climate change, the ultimate question of AI is “Who must we be?” not “What must we do?” to prosper.
References
Harry G. Frankfurt (2005) On bullshit, Princeton University Press.
Paul Smith (2024) Jobs tumble, but shares soar in the AI era, In: Australian Financial Review, 25 January 2024.
Madeline Garfinkle (2023) Gen Z’s Main Career Aspiration Is to Be an Influencer, According to a New Report, In: Entrepreneur, 20 September 2023.
Michael Kan (2023) Microsoft’s AI Program Can Clone Your Voice From a 3-Second Audio Clip, In: PCMag Australia, 11 January 2023.
Robert F. Service (2020) The game has changed. AI triumphs at protein folding, In: Science, Vol 370, Issue 6521, pp. 1144-1145.
Department of Industry, Science and Resources (2024) Safe and responsible AI in Australia consultation: Australian Government’s interim response, Commonwealth of Australia, Canberra.
Tom Burton (2024) Beware ‘overzealous’ rules that limit AI’s benefits: PC, In: Australian Financial Review, 1 February 2024.
Nicole Gillespie et al. (2023) Trust in Artificial Intelligence: A Global Study, The University of Queensland and KPMG Australia.
Billy Perrigo (2023) AI is as risky as pandemics and nuclear war, top CEOs say, urging global cooperation, In: Time Magazine, 30 May 2023.