In Elon Musk’s world, AI is the new MD. X’s owner is encouraging users to upload their medical test results — such as CT scans and bone scans — to the platform so that Grok, X’s artificial intelligence chatbot, can learn how to interpret them effectively.
He has previously said that this information will be used to train Grok to interpret such results accurately.
Earlier this month, Elon Musk reposted a video on X talking about uploading medical data to Grok, saying: “Try it!”
“You can upload your X-ray or MRI images to Grok and it will give you a medical diagnosis,” Musk said in the video, which was uploaded in June. “I’ve seen cases where it’s actually better than what the doctors say.”
In 2024, Musk said medical images uploaded to Grok would be used to train the bot.
“This is still early stage, but it is already quite accurate and will become extremely good,” Musk wrote on X. “Let us know where Grok gets it right or needs work.”
Musk also claimed in a reply that Grok saved a man in Norway by diagnosing a problem his doctors had overlooked. The X owner has been willing to upload his own medical information to the bot, too.
“I just did an MRI and submitted it to Grok,” Musk said on an episode of the Moonshots with Peter Diamandis podcast that aired Tuesday. “No doctor and Grok found anything.”
Musk did not reveal on the podcast why he received the MRI. xAI, which owns X, said in a statement: “Legacy media lies.”
Grok is facing competition in the health space. This week OpenAI launched ChatGPT Health, a dedicated experience within the chatbot that lets users securely connect medical records and wellness apps such as MyFitnessPal and Apple Health. The company says it does not train its models on personal medical information.
AI chatbots have become a ubiquitous source of medical information. OpenAI reported this week that 40 million people seek health information from its models, and that 55% of them used the bot to check or better understand symptoms.
So far, Grok’s ability to detect medical abnormalities has been mixed. Some users claimed the AI successfully analyzed blood test results and identified breast cancer. But it also grossly misinterpreted other information, according to physicians who responded to some of Musk’s posts about Grok’s ability to read medical data. In one instance, Grok mistook a “textbook case” of tuberculosis for a herniated disc or spinal stenosis. In another, the bot mistook a mammogram of a benign breast cyst for an image of a testicle.
A May 2025 study found that all AI models have limitations in processing medical images and predicting medical outcomes, but that Grok was the most accurate, ahead of Google’s Gemini and ChatGPT-4o, at determining the presence of pathologies in 35,711 slices of brain MRI scans.
“We know they have the technical ability,” Dr. Laura Heacock, associate professor in the Department of Radiology at NYU Langone Health, wrote on X. “Whether they want to spend the time, data and [graphics processing units] to include medical imaging is up to them. For now, non-generative AI methods continue to outperform in medical imaging.”
Musk’s lofty goal of training his AI to make medical diagnoses is also risky, experts said. As AI is increasingly used to make complex science more accessible and to build assistive technologies, training Grok on data collected from a social media platform raises concerns about both its accuracy and users’ privacy.
In an interview with Fast Company, Ryan Tarzi, CEO of health technology firm Avandra Imaging, described asking users to input data directly, rather than sourcing it from secure databases of de-identified patient data, as Musk’s way of trying to fast-track Grok’s development. What’s more, the information comes from a limited sample: only those willing to upload their images and test results. That means the AI is not gathering data from sources representative of a wider, more diverse medical landscape.
Medical information shared on social media is not protected by the Health Insurance Portability and Accountability Act (HIPAA), the federal law that prevents patients’ private information from being shared without their consent. That means users have less control over where their information goes after they choose to share it.
“There are numerous risks to this approach, including the accidental sharing of patient identities,” Tarzi said. “Personal health information is ‘burned’ into many images, such as CT scans, and will inevitably be released in this plan.”
According to Matthew McCoy, assistant professor of medical ethics and health policy at the University of Pennsylvania, the privacy threats that Grok may present are not fully known because X may have privacy protections that the public is not aware of. He said users share medical information at their own risk.
“As an individual user, do I feel comfortable contributing health data?” he told The New York Times. “Absolutely not.”
A version of this story was originally published on Fortune.com on November 20, 2024.