You can’t swing a jump rope without the concept of AI popping up. AI = Artificial Intelligence. Think HAL, the computer from 2001: A Space Odyssey. He ends up learning so much that he turns on his human crew. So much for human existence!
Most of the information I see is about HOW TO USE AI: where to access it, the kinds of information we might find by using it, how to apply it to our work, and so on. I’ve even gotten to the point where I can make a fairly well-educated guess about whether AI was used to write an article I read. There’s just something that makes it easily identifiable, and (call me old-fashioned) I feel somehow ROBBED of real human knowledge and experience.
But there is (at least) ONE aspect of AI few are talking about, one that could create major problems for us as professionals, or, even more importantly, for our patient-clients. That is…
Garbage In > Garbage Out
“GIGO” is a concept that’s been around since the first computer was programmed. It means that if wrong or bad information is fed into a computer (or anything else, for that matter!), then it will produce flawed results. In computer programming it means that if a program is given bad data or faulty code, it returns bad output, or no output at all. In education it means that if wrong or bad information is taught, then wrong or bad information is learned, and mal-, dis-, or misinformation is then shared. Just look at American politics.
And THAT is the underlying AI conundrum for advocates, care managers, and our clients.
AI generates content based on all the input it has “learned” from across the internet. In fact, “teaching” these content-generation machines has become its own career path. It’s a huge undertaking, and it changes the results AI produces every day.
But exactly what is being fed to AI?
Are these systems being fed real, nutritious, solid, scientific information? Or junk, “garbage” information? And how can we know?
Consider:
- A group of medical researchers publishes findings about a treatment for a dread disease in a well-respected medical journal. Because that journal is so well-respected, its content is fed to an AI generator whose output is widely read by medical professionals. The problem is that the published research later turns out to be bogus and harmful in its conclusions. But AI has already incorporated the information into its databank. (Think it can’t happen? Think about the fraudulent research published by Andrew Wakefield tying vaccines to autism. Scary dangerous…)
- –or– A pharmaceutical company creates a number of articles, videos, podcasts, and other media about a specific “blockbuster” drug that is new to the market and, even if it passed muster in clinical trials, as yet unproven in the general public. The content is intentionally produced as if it’s objective, including comparisons to other drugs in the same class, but colored through the lens of higher sales. All that sales-supporting content gets incorporated into AI learning and, shortly thereafter, is found by medical professionals, advocates, patients, and others, who trust it because it was produced by AI, supposedly from various credible resources (which were really all developed from the same source: the drug company). (Think it can’t happen? Vioxx killed thousands of people. And addicted patients are still dying after taking OxyContin, which was “sold” to doctors as being safe. What’s next? The new weight-loss drugs like Wegovy or Mounjaro?)
As a result…
- Your client is having trouble getting an accurate diagnosis. They do searches online using AI and determine that their real diagnosis should be different from what their doctors have told them. But that search was affected by bogus, “garbage” input, as described above. And now you have a conundrum to sort out.
- –or– Your client’s treatment isn’t working well, so they go into self-prescribing mode using AI. They find the content those researchers or pharmaceutical companies (above) published, which AI has already embraced, and insist to their doctors that those are the paths they want to follow, not realizing how harmful they can be. It becomes a headache for the doctor AND the patient AND possibly YOU as their advocate.
How can we combat the AI-generated misinformation conundrum?
Ha! That’s actually its own conundrum. In too many ways we can’t. But we can provide our clients with cautions and suggestions.
When you do your own online research, whether or not you use AI, be sure to find credible resources on your own to back up what you find. Second and third information opinions, in effect.
When your client discusses their personal research findings, whether or not they used AI, ask them where the information came from and whether or not they have confirmed it. If they have, ask them how. If they haven’t, explain why confirmation matters, perhaps using the examples above.
Using an AI Authenticity Disclaimer
If you create content for your clients or for marketing purposes, you’ll want to begin using an AI Authenticity Disclaimer. Why? Because it shows that you are taking steps to make sure people can trust the information you’re providing to them. An Authenticity Disclaimer is a simple statement that a human being is behind the information being shared. It’s not unlike any other disclaimer (“If this is an emergency, dial 9-1-1!”).
See my sample/example below.
Does the existence of such a disclaimer mean all the information is gospel and correct? No. But it does show that being honest, being authentic, is a value you demonstrate, even through your writing. At the very least, it can support a feeling of trust in dealing with you.
And as we know… in advocacy and care management, TRUST is everything.
Content Authenticity Declaration
100% of this post was written by me, a human being. When there is AI (Artificial Intelligence) generated content, it will always be disclosed.