The pitfalls of relying on artificial intelligence for consequential decision-making, such as health decisions, have long been known. As IBM declared in 1979, "A computer can never be held accountable, therefore a computer must never make a management decision".

This sentiment was echoed in guidance published last August by the Australian Medical Association, warning clinicians that "AI must never replace clinical judgment and that final decisions must always rest with medical practitioners". A patient-facing, unregulated AI tool that could easily stand in for a GP, at least from a patient's perspective, is a Pandora's Box for healthcare as we know it.
This week, AI juggernaut OpenAI announced its official foray into healthcare with ChatGPT Health. People eager to make sense of their health, whether out of curiosity or desperation, will be encouraged to plug in their personal data from other health apps, enter their test results, look up symptoms, and ask questions they would normally ask a doctor.
OpenAI says the feature is designed to help people understand their test results, track 'trends', prepare for doctor visits and support health-related questions, but not to diagnose or replace professional medical care.
ChatGPT Health, however, is a potential nightmare for patients, doctors and hospitals everywhere. Those seeking medical help won't get what they need, and the data privacy risks are monumental and far-reaching.
If a medical professional makes a mistake during treatment or care, professional bodies like AHPRA (the Australian Health Practitioner Regulation Agency) review and investigate, so they can make recommendations to prevent future errors or, when required, sanction professionals who commit malpractice. But who is responsible when ChatGPT hallucinates and gives bad advice? Will OpenAI executives be held to the same standards as our doctors and nurses?
ChatGPT Health is set to become 'Dr Google' or 'WebMD' on steroids. Trust in the medical profession, which is already subject to pressures from online misinformation, will only erode when people are tempted to bypass credentialed medical expertise in favour of a text generator.
In this age of social media-induced self-diagnosis, there is little doubt that patients will "symptom-shop" using AI tools until they get the answers they're looking for. AI is already known for its sycophancy, telling people what they want to hear and affirming thoughts and behaviours that are harmful. This seeds the conditions for people to receive inappropriate or inadequate medical treatment, with potentially life-threatening consequences.
AI's deep flaws are at odds with giving accurate, evidence-led medical advice. Combined with the tech industry's logic of "move fast and break things", a chatbot that dispenses medical advice is a formula for many misdiagnoses.
What makes AI so powerful for the creative industries is its ability to use statistics and vast computational power to generate new ideas and scenarios. But given how often AI tools make up information (known as 'hallucinations'), allowing AI to pretend to diagnose someone, or to prescribe drugs (as is already the case in the US), is a national healthcare crisis waiting to happen. We are already seeing how dangerous AI hallucinations can be in healthcare, with Google recently removing some of its AI health summaries after an investigation found they were presenting false information that could mislead seriously ill patients.
Unlike the old days of WebMD, where many diagnoses suggested cancer, my prediction is that ChatGPT will take the opposite route and give patients mostly reassuring answers, as a way of reducing liability.
Users of these services must also be hyper-aware of who else could access their sensitive health information, including cybercriminals and state actors. OpenAI has already confirmed that a third-party breach leaked names and physical locations, making its user data a prime target for both.
But users also need to be aware that their data is a valuable asset for any technology company. When genetic testing company 23andMe filed for bankruptcy last year, the DNA data of millions of users was sold to a pharmaceutical company. If OpenAI is sold, which is not unimaginable, given that it is not expected to turn a profit until at least 2030 (according to an HSBC estimate), that health data could easily flow to health profiteers.
In the hunt for "alternative revenue sources" (a tech industry euphemism for hyper-targeted ads and sales brokering), private healthcare providers and insurers will rush in for access to OpenAI's data. A trove of sensitive health information, including biometrics from wearables, test results, MRI scans, and medical records, may very well end up on the auction block.
The power of generative AI should not be understated, and Australia should continue to support ethical AI projects that can add billions to the economy. But if markets fail to manage risk, governments and regulators must step in to protect our health data.
Responsibility for healthcare must rest with those who provide it. If AI companies intend to deploy robo-doctors, they must be held fully accountable for harm, without hiding behind "hallucinations". We would never allow a doctor who admits to hallucinating diagnoses to treat patients, and we should not accept it from AI.
We trust governments to safeguard sensitive data in the public interest, but private companies answer to investors, making strong regulation of health AI essential. This is the only way to ensure health data is protected and never treated as a commodity.
- Samuel Spencer is an author, adjunct professor at the University of Canberra and CEO of Aristotle Metadata.
