Want to talk about a story that anecdotally highlights the true long-term potential of LLMs and generative AI. It cuts to the chase on why this AI stuff is a fundamentally different way to use computers for the first time in decades, and how it can help us see possible solutions across mountains of complex data and disparate disciplines, despite all its current drawbacks and possible dangers. The story is titled ‘A boy saw 17 doctors over 3 years for chronic pain. ChatGPT found the diagnosis’:
“Alex experienced pain that stopped him from playing with other children but doctors had no answers to why. His frustrated mom asked ChatGPT for help.”
Alex was 4 when he started experiencing pain:
“What followed was a three-year search for the cause of Alex's increasing pain and eventually other symptoms.”
The whole piece in TODAY is worth reading for the detailed, painful quest across 17 doctors and specialists over three years, by parents frantic for answers and remedies. We can all empathize in today’s world of complex medicine spread across so many disciplines and specialties.
And we can empathize with being numbed and ground down just by the process of finding, scheduling, and seeing all those specialists; conveying all the details of the ailment across every new valley of possible solutions; and doing it all in the few minutes the specialist and their assistants have to digest what’s being conveyed and suss out solutions, without running afoul of the mounds of regulations and red tape designed to protect patient privacy, which too often act more like chasms without bridges.
Here’s the mother’s journey summarized:
“In total, they visited 17 different doctors over three years. But Alex still had no diagnosis that explained all his symptoms. An exhausted and frustrated Courtney signed up for ChatGPT and began entering his medical information, hoping to find a diagnosis.”
“I went line by line of everything that was in his (MRI notes) and plugged it into ChatGPT,” she says. “I put the note in there about ... how he wouldn’t sit crisscross applesauce. To me, that was a huge trigger (that) a structural thing could be wrong.”
“She eventually found tethered cord syndrome and joined a Facebook group for families of children with it. Their stories sounded like Alex's. She scheduled an appointment with a new neurosurgeon and told her she suspected Alex had tethered cord syndrome. The doctor looked at his MRI images and knew exactly what was wrong with Alex.”
Here’s the happy ending:
“After receiving the diagnosis, Alex underwent surgery to fix his tethered cord syndrome a few weeks ago and is still recovering.”
And the punchline:
“There’s nobody that connects the dots for you,” she says. “You have to be your kid’s advocate.”
And of course have new technologies that can connect those dots. Remember that AI technologies like ChatGPT, and so many others to come, have their pluses and minuses. But the key reason they can help versus traditional software-driven searches is that they work along probabilistic lines:
“ChatGPT is a type of artificial intelligence program that responds based on input that a person enters into it, but it can't have a conversation or provide answers in the way that many people might expect.”
“That's because ChatGPT works by "predicting the next word" in a sentence or series of words based on existing text data on the internet, Andrew Beam, Ph.D., assistant professor of epidemiology at Harvard who studies machine learning models and medicine, tells TODAY.com. “Anytime you ask a question of ChatGPT, it’s recalling from memory things it has read before and trying to predict the piece of text.”
“When using ChatGPT to make a diagnosis, a person might tell the program, "I have fever, chills and body aches,” and it fills in “influenza” as a possible diagnosis, Beam explains.”
“It’s going to do its best to give you a piece of text that looks like a … passage that it’s read,” he adds.”
"I do think ChatGPT can be a good partner in that diagnostic odyssey. It has read literally the entire internet. It may not have the same blind spots as the human physician has."
“But it’s not likely to replace a clinician’s expertise anytime soon, he says.”
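To make that “predicting the next word” idea concrete, here’s a minimal sketch of a toy next-word predictor built from word-pair counts. It’s nothing like the scale or sophistication of a real LLM (the tiny corpus and the predict_next helper are purely illustrative), but it shows the same basic move: pick the most probable continuation given what the model has seen before.

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus (purely illustrative; a real model trains on vastly more text).
corpus = (
    "i have fever chills and body aches so it may be influenza . "
    "i have fever and chills and aches so it may be influenza . "
    "i have back pain and leg pain so it may be tethered cord ."
)

# Count how often each word follows each other word (a simple bigram model).
follows = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# Given the symptom sentences above, the most probable word after "be" in this corpus:
print(predict_next("be"))  # -> "influenza"
```

A real LLM replaces those raw counts with a neural network trained on enormous amounts of text, which is what lets it “recall” patterns far beyond what any single clinician could have read.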
We’re in a world where there’s an exponential explosion of new knowledge and data in every field, not just medicine. Think curves like the ones above for every specialized field of human knowledge, accelerating in most cases going forward.
Just in medicine, a report a few years ago found:
“Medical knowledge has been expanding exponentially. Whereas the doubling time was an estimated 50 years back in 1950, it accelerated to 7 years in 1980, 3.5 years in 2010, and a projected 73 days by 2020, according to a 2011 study in Transactions of the American Clinical and Climatological Association.”
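To put those doubling times in perspective, here’s a quick back-of-the-envelope calculation using only the numbers quoted above (a rough sketch; the growth_factor helper is just for illustration):

```python
# Rough growth implied by the doubling times quoted above (back-of-the-envelope only).
def growth_factor(doubling_time_days: float, period_days: float) -> float:
    """How many times a body of knowledge multiplies over `period_days`."""
    return 2 ** (period_days / doubling_time_days)

# At the projected 2020 pace (73-day doubling), knowledge multiplies ~32x in one year:
print(round(growth_factor(73, 365), 1))        # -> 32.0
# At the 1950 pace (50-year doubling), the same year adds only about 1.4%:
print(round(growth_factor(50 * 365, 365), 3))  # -> 1.014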
No doctor or specialist can keep up. They’re typically behind by the time they’ve graduated.
This is where LLM AI can potentially help: connecting the dots across mountains of complexity and real-world specialists. I’ve gone through some AI deep dives on how this stuff works, what we understand about it and what we don’t, and how it cuts through data with reinforcement learning loops, doing math with software at scale.
But to cut to the chase, stories like Alex’s above tell us we’re on the right track to finding a better way to get answers in an ever more complex world, and to getting more happy endings. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)