Addressing the risks of AI in the water sector
With climate change, population growth and ageing infrastructure posing significant challenges to the water sector, the adoption of Artificial Intelligence (AI) is becoming an urgent necessity to help mitigate these issues.
One session on day two of Ozwater’24 presented key insights from a panel of leading experts on the new challenges posed by AI and how they might be mitigated.
Hosted by SpiralData CEO and Founder Kale Neeham, the panel dissected the multifaceted implications of AI in water utilities and established a road map for its responsible integration.
“The water sector should be excited by the opportunities AI presents to help solve some of its longstanding issues, including asset fatigue and non-revenue water, as well as the potential productivity gains to be had doing more work at lower cost,” he said.
“AI is game-changing, and those organisations that embrace it with long-term thinking will benefit. But we also need to be equally careful.”
Intelligent Water Networks Program Director Jason Cotton said the water community is very good at managing risk, and this same mindset will need to be adopted as we move forward with AI solutions.
“Managing risk is one of our core capabilities in water. We manage that risk extremely well, particularly risks to public health,” he said.
“We now need to apply that lens and capability across this new tool we have – AI. By taking a more balanced view of risk and applying our ability to manage risk, we can push forward quite vigorously with innovation in this space.”
Risky business
So what are the risks of adopting AI in the water sector? The University of Adelaide’s Dr Melissa Humphries said one of the worst things that can happen with the use of AI is the “convergence of something less than mediocrity”.
“A lot of our generative AI models are trained on data sets that represent a fraction of reality and don't always include the best of everything,” she said.
“AI models are trained to give us the averages. So if we continue to use AI for everything without applying a critical lens, we eventually start to converge to something that is actually average, or lower.
“This means we don't get innovation, we don't get forward thinking and we don't get to grow and develop – we stagnate.”
SA Water Research and Innovation Specialist Nicole Arbon said one of the biggest risks with AI is the potential to lose sight of achieving actual outcomes.
“It’s easy to get distracted by the shiny new thing and apply it everywhere. But we are at risk of losing sight of the things that we actually need to be providing to our customers, to our workforce,” she said.
“Just like any tool, we need to be looking at it critically and asking if it’s achieving the desired outcome. Is AI giving us the right results and what do those results mean? The worst case scenario is that we play ‘set and forget’ and never actually question what the results are.
“We are at risk of losing that expertise and criticality. We don’t want to make things so easy that everyone forgets how to do it, or forgets how to be critical of the process. AI is an enabler, not a replacement.”
SpiralData Chief Data Scientist Ram Balachandran said some organisations are at risk of missing out on the benefits altogether, out of fear of engaging in the AI journey.
“Some of us would like to shut ourselves out, to pretend that AI will simply go away. It’s not going away,” he said.
“It is an amazing tool that can help us with a significant amount of challenges, including climate change, urbanisation, and the management of our finite resources. And we need as many tools as possible to help us with these issues.
“In my opinion, the worst that can happen is for AI not to be utilised responsibly to its full potential.”
Lastly, HWL Ebsworth Lawyers Partner Daniel Kiley said it’s crucial for those organisations utilising AI to maintain an appropriate level of responsibility.
“I’m concerned about scenarios where the tool is used as an excuse for things going wrong. We are responsible for what happens with the tools that we put in place,” he said.
“We need to make sure we maintain skin in the game – if we don’t get things right using this tool, it’s not the tool’s fault. We must maintain an appropriate level of responsibility for the results of these new applications.”