Tech trends are tempting. A report written for CSIRO, which studied future uptake of AI across local small to medium enterprises and big businesses, found that 60 per cent of Australian organisations are already accelerating and expanding their AI capabilities.
From writing detailed emails and providing knowledge workers with relevant research and calculations, to helping plan, build, operate, and maintain national infrastructure more productively and safely, the technology is building pace.
And the Australian federal government is counting on it. To develop our national AI capability and support business, the government is investing $124 million, which includes establishing a next generation AI graduates program to train home-grown, job-ready specialists. Minister for Industry and Science Ed Husic wants to back the commercialisation and business adoption of AI so Australians can reap its rewards.
But the aforementioned CSIRO report also highlights the complex implementation needs of AI. While the government's graduate program will ready talent for the technology itself, industry needs to start with the basics at the data layer.
It won't be as simple as "powering on" AI - AI is only as good as the data at its disposal, and companies need to get their houses in order to ensure their AI investments don't fail before they even start.
Overlooking the back-end systems and processes that feed AI will render the tool relatively useless. Business chiefs need to establish a data framework or face the consequences when users find the AI's output incomplete or untrustworthy.
Consider this - if a data broker has old data, and incorrectly combines data from devices or accounts that don't belong to the same person, the AI relying on this information will make inaccurate inferences.
In another example, a gamification program might use AI to diagnose autoimmune diseases, but if it relies on patient data that can be easily tampered with, it may spur incorrect diagnoses or treatments.
This is all the consequence of unsecured, disconnected, and ill-informed data. To use a construction analogy, it's like putting up the doors of a house before building the foundation and walls.
AI needs access to a robust data ecosystem where information is clean, correct, secure, and in formats that can be synchronised and used effectively. This type of digital framework gives AI and its users the ongoing assurance of high-quality, complete data.
The old saying "garbage in, garbage out" is especially true with AI. If the available data is bad, the output from AI is likely to be the same. Even advanced generative AI falls victim to this scenario; ChatGPT is known to produce false statements, leading to numerous consequences including a defamation case.
Another consideration in AI accuracy is the volume of data available. A sample dating back only six months will generate less accurate AI output than a data set covering three years. That poses a risk if you're using AI for predictive analytics to size up issues such as customer churn or profitability by product.
Put it this way: precision is particularly pertinent amid the latest warnings about AI's dangers. Top AI experts are receiving high-profile media coverage around the world, citing scenarios ranging from "deepfake" images and videos that undermine public trust to an AI-ruled dystopia that leads to human extinction.
Back-end legwork will be essential for AI to earn public trust - and, more importantly, to deserve to keep it.
Concerns about data sensitivity aren't new. Decision makers have, time and time again, been reluctant to drive value from data - with or without AI in the picture - out of apprehension about whether customers can trust companies to use their data responsibly.
The rewards that AI and a data-driven culture accrue will outweigh the concerns. But those concerns cannot be dismissed - AI implementation must be paired with a data framework that enables organisations to overcome sensitivity challenges and govern the data AI lives and breathes. Ultimately, we need to make sure the benefits AI stands to deliver are powered by cohesive, trusted data - not a shallow foundation that can drive organisations head first into ethical trouble.
- David Irecki is director of solutions consulting for Asia Pacific and Japan at Boomi, based in Sydney.