A lot has been said about AI since non-techies discovered ChatGPT and realised it could save them time on tasks many appear not to enjoy, like writing, thinking and analysing! That's probably a topic for another piece, but amid the raised voices, what is the real problem on which we should focus?
To answer that question, we need to understand what AI is already good for and what it could be used for. We also need to appreciate that AI is now used across every sector of our nation and the world. Since the 1950s, scientists have been on a journey to create machines that could think like people, though computer processing power was often the barrier. Over the last 20 years, and particularly the last ten, there have been big breakthroughs in image and language recognition.
Today AI facilitates your online purchasing, your banking and even your holiday planning. Virtual assistants are everywhere: Siri and Alexa are no longer trapped in our mobile devices; they are also making home life easier. Not all of it is good, though: we've all wondered how social media feeds seem to respond, almost magically, to a conversation we just had with a friend. And who can't wait for driverless cars? Thank you, AI!
So, if AI is ubiquitous, why did 350 AI experts, including the chief executive of OpenAI, which developed ChatGPT, recently warn that within two years AI systems will be powerful enough to wipe out humanity? I won't dwell on the motivation of these experts, but it does seem a little odd to warn the world about something they created and have had control over. That said, some of their key concerns are frightening, including the formulation of powerful bioweapons and the potential for large-scale cyber-attacks.
Scary, yes, but the big risk with AI is not the pace at which it is transforming and influencing the world. The problem is that our governance frameworks, including legislation, are outdated and slow to respond. There has been much talk about the need to legislate, but given what is required to develop, implement and enforce legislation, doing so would be akin to playing whack-a-mole. Or, more likely, a never-ending journey that leads nowhere.
We also need to be much more deliberate about 'choosing' to use AI for specific purposes and opting out of others. For example, using AI to achieve process and analysis efficiencies, where you know the answer or can evaluate performance, is a great use of the technology. Using it to develop policies that affect citizens, particularly vulnerable citizens, is not. We also need to think about the potential losers here: process efficiencies are great, but job losses are a likely consequence, and it is a big challenge to transform process workers into knowledge workers.
Our governance frameworks need to be revisited and updated for the AI world. The arrangements that already govern particle physics research and nuclear science and technology are examples that could provide guidance.
AI is here to stay, and we need to ensure we leverage it for the benefit of humanity.