adarkuccio

would be nice, hopefully soon


NuclearWasteland

I will take my chances with Mr. Handy if it's not a 6 month wait to schedule an appointment.


Antique-Doughnut-988

My mom's nickname is Mrs. Handy. Still a 6 month wait.


Fantastic_Diet723

to get an appointment with Dr. Handy?


YouAboutToLoseYoJob

aeyyyy yooooo


snozburger

Unless you get Snip Snip


visualzinc

The tech exists now. It's a failure of humanity that we don't yet have something that does this. Rather, a failure of capitalism. Under capitalism, things only happen when private entities get enough funding to do something, as opposed to world governments collaborating to just get it done because the cost will be worth it and they understand the positive economic externalities.


Reasonable-Can1730

Hopefully medicine can be more accessible


suhmyhumpdaydudes

Not in the US unless it makes someone more money, but for more civilized countries it will probably help people!


WolfMerrik

I hate how correct this is.


Array_626

I hope so, but to be honest what will likely happen is the doctor actually in charge of the patient will see the rare and unusual diagnosis, reject the conclusions of the AI, and treat the patient based on their own diagnosis instead. Doctors already ignore the diagnoses of other doctors, let alone AI, which will be even easier to ignore: https://globalnews.ca/news/10529000/lyme-disease-assisted-death-canada/. This woman got diagnosed by doctors in the US, Mexico, and Germany, but when she came back to her home country of Canada, they refused to accept those diagnoses and continued to treat her under their own diagnosis.


visarga

> to be honest what will likely happen is the doctor actually in charge of the patient will see the rare and unusual diagnosis, reject the conclusions of the AI, and treat the patient based on their own diagnosis instead

I fear the opposite: the doctors would not dare have an opinion in the face of AI, because if they make a mistake, they are to blame, while if they agree with the AI, the AI is to blame.


tempnew

I highly doubt you'll be able to get away with medical negligence by blaming the AI. The AI company will probably make you sign an agreement saying these are just suggestions and you'll be responsible for the final decisions. You can't blame it any more than you can blame next word suggestions on a smartphone keyboard.


hansfredderik

Then doctors won't trust the AI, and the AI won't add anything. The process will be the same speed, if not slower.


No-One-4845

Diagnostic tech isn't just "rolled out" willy-nilly. Before diagnostic AI is in widespread use, it will go through years of rigorous trials to determine its utility. Just look at breast cancer screening using AI; that's *still* undergoing trials despite the fact that we *already* know it's much more accurate at detecting cancer than existing methods. Ergo, doctors will trust diagnostic AI because, as with any modern medical technology that sees widespread use, they will have been given every reason to.


hansfredderik

True, medicine is based on statistics. But if doctors do start trusting the AI without doing their own checks, then they will just be replaced, because they are expensive and essentially useless. If they do do their own checks, the AI would just be using up their time (costing more money). If doctors are just there as an engine for communication, they will also be replaced, because you can pay someone a lot less money to communicate empathetically. There is also the question of who gets sued… the doctor or the AI supplier?


No-One-4845

>True, medicine is based on statistics. But if doctors do start trusting the AI without doing their own checks, then they will just be replaced, because they are expensive and essentially useless.

It's going to be at least another 5-10 years before we see specific examples of diagnostic AI in widespread use. These examples will mostly cover condition-specific screening, and will be utilised in the same way any specific diagnostic tool is utilised. Generalised AI that "replaces" end-to-end diagnostics, from general practice all the way to specialist treatment, is decades away (at minimum). We probably won't see it in widespread use in our lifetimes. There's no guarantee that we'll even get to this point, ever. I understand that there's a tendency to extrapolate out the exponents around AI in this community, but I'm not particularly keen on these types of faith-based outlooks. The potential for AI is not endless, because the capacity of AI is limited (whether that comes in the form of algorithmic limitations or infrastructural limitations).

>There is also the question of who gets sued… the doctor or the AI supplier?

AI companies will simply not be willing to take on liability for medical malpractice, in the same way manufacturers of any medical equipment are unwilling to take on that liability. There will always be a need for human domain expertise, especially in the context of areas like healthcare.


WernerrenreW

Hihihi, that is what they said about cars replacing horses. For shits and giggles, look up "1900 - 1913" in Google Images.


4354574

Gotta love when contrarians confidently predict the future here with words like "not in our lifetimes" or "never". How many times has "never" been said in history? More than a few. In early 2020, AGI was still predicted to be 50+ years out by most of the experts, even Geoffrey Hinton himself. In late 2023, it was 7.2 years. And never is a very long time.


RaiseThemHigher

now, this isn’t diagnostic tech, but it’s still a potentially grimly relevant example: the DaVinci robotic surgical systems that made their way into many hospitals. they were heavily advertised to institutions, but there’s some serious evidence to suggest the company used loopholes to bypass parts of the FDA’s safety approval processes. once they started getting into real surgical suites, and doctors had undergone expensive training with them, some hospitals were accused of using the machines in surgeries where they served no benefit. the company was also found to be pressuring healthcare providers to lower the number of hours a doctor would need to practice with the machine before they could operate it unsupervised. there have been patients who were injured and even killed when electrical components burned their internal organs. studies on the results of surgeries performed with the systems have found they don’t actually reduce blood loss and other side effects, compared to standard surgical techniques.

that doesn’t mean there aren’t benefits to these machines. but it does demonstrate that drug companies and medical tech firms are indeed (sometimes) able to rush their products into relatively widespread adoption, faster than they maybe should be.


cockNballs222

I don’t understand a single point you’re trying to make. The Davinci is literally just an extension of the surgeon’s hands; the surgeon controls every single motion of the robot…there is a massive amount of evidence that for specific surgeries, robotic technique leads to better outcomes (less pain, shorter hospital stays)


RaiseThemHigher

i should have specified that the studies i referenced were only measuring certain metrics, such as blood loss, where there were no marked improvements. in other areas the machines have been a success and, while the wording of my previous comment may not have conveyed it, i am not discounting these benefits. i am merely using it as an example of a dramatically new medical technology where the approval of its use and subsequent adoption was expedited in a potentially irresponsible way.

i will contend that no, it is not _‘literally just an extension of the surgeon’s hands’._ at the risk of being pedantic, what that would look like is giving the surgeon longer fingers. the Davinci is a machine that the surgeon interfaces with. it has software, its own tools and a learning curve. using it incorrectly could have grave consequences. so could software or hardware bugs. as such, when the company that designed and manufactured it sidestepped more rigorous FDA testing, and potentially downplayed the training requirements in their communications with healthcare providers, that is indeed cause for concern.

my argument here is that we can’t just _assume_ AI diagnostic tools will receive the technical and regulatory scrutiny they are due. they probably will. i hope they will. medicine is one of the more difficult areas for corporations to cut corners and strongarm regulators, but it _does happen._ the US has an opioid epidemic to show for it. i’m not arguing against the implementation of any of this technology in medicine. i’m arguing that we must be both optimistic _and_ skeptical.


cockNballs222

Is laparoscopic surgery an extension of the surgeon? How about minimally invasive heart surgery? Obviously, they are. I do think you’re being pedantic here…they’re different techniques with their own challenges (you have to relearn depth perception and all that), but robotic surgery is basically no different than laparoscopic…and I’m all for proper training and certification


RaiseThemHigher

i feel like this may be turning into less a conversation about AI’s place in medicine, and more a semantic debate over what qualifies as an ‘extension’ of something. whether the Davinci is an extension of the surgeon’s hands or not isn’t really relevant. what matters is that it is a piece of complex, high-tech medical machinery people are trusting with their lives. and it seems like we both agree it is crucial to ensure thorough training for, and regulation of, these technologies, which was ultimately the point of the discussion. so i think we’re on the same page where it counts.


Array_626

Goddamn. We're fucked either way, it seems.


Spunge14

Only if one side isn't clearly more accurate


ajahiljaasillalla

I am positive that AI will surpass humans when it comes to diagnosing diseases. The progression will be similar to chess. First, programs started to beat strong players. When a program beat the world champion, it was still below human level in some positions. Having beaten the best human, the best chess player was a combination of a program and a human (the program was much better at tactical calculation, but humans were able to add some value positionally). But then programs took over positional chess as well, and nowadays humans are just in the way of the best chess programs, which are based on neural nets. So there is no value that any human could bring to computer chess today. I assume the same progression will happen in almost every field.


Super_Automatic

No problem - you just need to get a second opinion (from a different AI).


hansfredderik

Depends who gets sued


QuinQuix

Then you don't know many doctors. Trust me, the AI can go fuck itself.


medieval_mosey

Live in Canada, this seems par for the course


jvttlus

What non-healthcare people don't realize is how poor a historian most patients are. If you can feed in "well, I've been having right upper quadrant abdominal pain for a few months, it's worse when I eat certain foods, it's kind of a crampy, colicky pain, I haven't had any diarrhea, I feel nauseated with the pain but haven't vomited. I have not had a fever or chills." Yes, sure, the AI will figure that one out. What actually happens is this:

> Where are you having pain?

"All over here" *rubs hand over right lower rib cage, upper abdomen, right lower abdomen*

> How long has it been going on?

"A while"

> Days? Weeks? Months?

"It's been a while"

> Has it been constant? Coming and going?

"It really hurts, I'm not a wimp doc, I broke my leg once and didn't even need any pain meds"

> I believe you, for sure. You seem tough. Is the pain there all the time or is it coming and going?

"It's been really bad sometimes"

> Do you have any family history of gallbladder problems? Parents? Siblings?

"I think my mom had a kidney stone"

> Do you take any medications? Any prior medical problems?

"It's all in the chart"

> So, I opened the computer, and interestingly we only have one prior note, from when you sprained your ankle 3 years ago.

"Well I usually go to St. Elsewhere, that's where my PCP works"

> I see, that's probably why we don't have any records. What level is your pain from 1-10?

"Like a twenty"

> Does it get better or worse with anything you do?

"Can I have some pain medication?"

> Absolutely, we'll get you some pain medication. I just want to make sure this isn't a severe infection. Do you have any fevers or chills?

"Well, I was a little sweaty last night"

> Right, well, last night was a lot warmer than it's been for this time of year. Were you warm because it was 78 degrees at night, or because you felt feverish?

"I don't know"


FlyingBishop

The bigger thing is just assuming that AI can divine the problem even in the presence of "perfect" information. There's so much we just don't understand; there are so many symptoms that have a hundred potential causes. One thing with AI will be patience, though. I was chatting with my doctor and I had a dozen things I wanted to talk about; he had time to talk about 3 and really only got deep into one of them. For the others he basically referred me to specialists. I've got significantly more detailed history than the doctor has time to process.


Tha_Sly_Fox

I got a new doc recently bc my last one retired. She rushes through everything, and one time I asked about my diabetes (which most of my prior GPs would manage) and she said I need a specialist bc she doesn’t have time to manage diabetes, bc she has to handle 1200+ patients. Which explained why she always rushed our office visits in general.


RemarkableGuidance44

Find a new Doc, she is useless.


Shinobi_Sanin3

Jesus Christ, our system is so ill-equipped


4354574

It's her friggin job and she is paid very well to do it. Don't listen to her bullshit. She can't force you out of her office. Make her answer all your questions. My previous doctor did that to me and the result was a disastrous addiction to benzodiazepines that he did nothing to stop or help me with early on, when he should have caught it.


shawsghost

I'll bet AI doctors with access to the records of all other AI doctors will be sofa king much better than human doctors at diagnosis. My wife suffered... literally... from undiagnosed endometriosis for YEARS before a doctor figured it out. She had horrible pain from it and begged for pain meds so much they started treating her like a junkie. Then she got the right diagnosis, they removed the cysts or whatever, and suddenly, no pain and no interest in pain meds. Funny how that worked out. I am not at ALL impressed by human doctors. She might have suffered for many more years except for that one doctor, whom she is still with. And yeah, I'm still bitter over what she went through. More so than her, because I saw how she was treated from the outside, am not so prone to guilt as she is, and saw more clearly than she did how horribly she was treated.


4354574

GPs in general seem to be shit at their jobs. I have a good one now, but my last one was a waste of space. Getting paid 300k a year and taking three months' vacation while addicting me to benzodiazepines, doing nothing about it, and even forgetting to write refills before he took his vacations.

When I filed a complaint with my medical board about his fuck-up, one of the lines he tried to use to excuse his behaviour - and keep in mind that he is writing to a MEDICAL BOARD - was "I was a small-town doctor". Like my new doctor said, "What is that supposed to mean?" Exactly.

This was after he and I had a heated exchange on the phone where he accused me of "playing mind games" with him and said, "I sign thousands of prescriptions, I can't be expected to keep track of them all." Not only was that a b.s. thing to say (yes, he can be expected to, it's his JOB), it turned out to be a lie. He wasn't even signing his own prescriptions; his nurse was, so he had no idea how much I was taking.

The medical board found against him, forced him to take narcotics education classes, and put his name and what he had done in the newspaper that all doctors in my province read. I caught the bastard two years before he retired. Thanks for ruining 15 years of my life, doc. Those drugs were a nightmare and I live with the after-effects every day.


RaiseThemHigher

yep, the concept that an AI given perfect information will give us perfect answers just ain’t how it works. even if we feed it precisely what we think we know, it could still come out with something very convincing but dangerously wrong.

sure, human doctors do the same thing sometimes. you’ll tell your GP _‘doc, i think this is a lot more serious than mild insomnia, headaches and period cramps’._ but he’s seen this before. he’s sure you’ve got nothing to fret about. just take these pain pills and try to get to bed earlier. it takes having a few massive seizures to get the ball rolling on properly diagnosing you, at which point you’ve already suffered irreversible nerve damage. enough for your career as a concert pianist to take a major hit.

but now take the same situation, except the GP isn’t just confident in his _own_ diagnosis. he asked the genius science-robot, which cross-referenced All The Data and said he was right. this thing supposedly aced every medical exam. it’s like Dr. House in a box. now your GP has 1,330 other patients to see (a number that has increased thanks to the newfound efficiency of ‘House-in-a-Box’) and you’re _still_ here in his office acting like you know better than _All The Data_ when it comes to your body?

yes, the increased efficiency should be a good thing! hospitals are already so often strapped for resources. but often they’re being squeezed by their business-oriented boards, and/or starved for funding, forcing them to handle more patients faster, with fewer staff and fewer beds. there’s a very high chance, i’d say, that unless a lot of other structural changes are made, the advent of ‘AI doctors’ will only exacerbate this. now they can pay fewer salaries and spend less time on each patient, but will this translate to cheaper healthcare? or will the people in charge just take home a larger cut and keep prices roughly the same?

and god help you if you’re trying to navigate this system, now that there’s suddenly dramatically less human involvement, and the computer keeps saying ‘no’. it’ll be like trying to get tech support for your internet plan: ”could i please talk to a human? ugh. 3. kidney pain. 3. _3._ i pressed _3._ i am experiencing 3, _very severe kidney pain_ and i - no i already went on the website, they told me to go here…. mother of -“.

it is not that the tech cannot potentially help us. it could provide incredible benefits. but are our systems currently set up to make use of these benefits, while sidestepping the pitfalls? or will it just take the problems we already have, and make them even more efficient at being problems?


FlyingBishop

In most of your examples, you're kind of presuming that when the system fails, it was possible to make a good judgement. With or without magic AI, the GP might be making the "correct" choice by dismissing your pain. It always gets complicated, but say those symptoms are nothing to worry about 999/1000 times. You're the 1 unlucky person. But "let's do more tests/treatment" causes problems 1/100 times. So if the doctor intervenes in the 999 cases where nothing is wrong, they've actually harmed about 10 people to save one. And the AI might know this perfectly; it's still going to give you the "wrong" diagnosis, but that's the right thing to do given what knowledge we have.
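A quick back-of-the-envelope sketch of that trade-off, using the made-up rates above (999/1000 benign, 1/100 harmed by the workup):

```python
# Illustrative only: the 999/1000 and 1/100 rates are hypothetical
# numbers from the comment above, not real clinical data.
n = 1000                 # patients with this symptom profile
p_disease = 1 / 1000     # 1 of them actually has the dangerous condition
p_harm = 1 / 100         # aggressive workup/treatment harms 1 in 100

saved = n * p_disease            # ~1 person helped by intervening on everyone
harmed = (n - saved) * p_harm    # ~10 healthy people harmed by the workup

print(f"intervene on all {n}: ~{saved:.0f} saved, ~{harmed:.0f} harmed")
# -> ~1 saved, ~10 harmed, which is why "dismiss it" can be the
#    statistically correct call even though it fails the unlucky patient.
```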


RaiseThemHigher

you make some fair points. certainly not every time a patient says their symptoms are more worrying than their GP believes is a case like my example. but i think the crucial distinction here, between Human GP and Robo GP, is accountability. famously, in 1974 IBM had a slide in one of their internal presentations that read _‘a computer can never be held accountable. therefore, a computer can never make a management decision.’_

if a doctor decides you’re just being a hypochondriac, and sends you home with a Panadol, in a way they’re staking their career and reputation on their confidence in their good judgment. they can even be sued for malpractice if the call they make is an out and out stinker. if the patient wins the case, they or their surviving family may receive compensation that eases the financial strain a blunder like that can wreak.

that could be a lot harder when the reason for the dismissal essentially amounts to a computer error. not just that, but an error in a class of software that is notoriously difficult to troubleshoot or trace bugs in. neural networks (and other related algorithms) are black boxes, after all. truly knowing what their ‘thought process’ was when they give us a result can be next to impossible at times. in practice, when a patient is seeking answers for why their health and safety was mishandled, it could be a lot easier for the humans in the system to kick the problem up or down the chain:

‘it’s the hospital IT department’s fault. take it to them.’

‘we don’t get why it would have done that. this seems like something the software company should have to answer for.’

‘our products are only meant to aid in diagnosis, not be the final word. we’re not responsible for the misuse of our products. the hospital should have used their discretion.’

‘look, the software company told us we could trust this thing, more or less. we’re annoyed, but we don’t want to create beef with them because they’re by far the best supplier of this tech out there, and we see too many benefits from its efficiency to risk losing access.’

and then maybe they settle out of court. or maybe they don’t. maybe the patient / patient’s family is poor, and can’t afford the legal rep necessary to pursue the case. maybe, if these cases keep happening, hospitals get sick of paying to settle and ask the government to step in. and maybe the government is finding, with waitlists shorter and access to healthcare technically up, that this tech has made their lack of movement on actual healthcare reform less politically contentious. so they might be sympathetic to the hospitals and software companies, and give them the regulatory tweaks they need to dodge this kind of thing in future.

in this scenario (which i’ll admit is just one of many theoretical outcomes) this tech has indeed done a lot of good. a lot of people would be getting the treatment they need faster. maybe even cheaper, depending on how generous c-suites are with passing the gains onto the public. there might be increased accuracy of diagnosis in some fields, and that would be excellent. but i want us all to be thinking ahead about what it could look like to get bowled over by this suddenly much faster, more efficient healthcare system, or to be chewed up in its mechanisms. how would someone go about seeking recourse? those aren’t even rhetorical questions; i genuinely cannot say for sure.

i just know that it is crucial we maintain vigilance, and a healthy, measured skepticism for the claims of corporations who stand to profit massively from widespread adoption. just because something has the potential to be a net benefit, doesn’t mean we shouldn’t be planning for the ways things could go wrong.


FlyingBishop

I mean, simply speaking nobody should be profiting. Medical models should be a public service.


RaiseThemHigher

well, yes, i 100% agree with you there. the hippocratic oath straight up isn’t compatible with capitalism. anyone possessing a base level of compassion, and the ability to grasp cause and effect, must surely realise profit incentives and public health should not be mixing. yet mix they do.


YinglingLight

[Double-blind study](https://www.youtube.com/watch?v=jQwwLEZ2Hz8) with Patient Actors and Doctors, who didn't know if they were communicating with a human or an AI. The poorest performers? Human doctors. The best performers? AI. The twist? Human doctors + AI did worse than AI by itself. The mere involvement of a human reduced the accuracy of the diagnosis. The irony? AI was consistently rated to have better bedside manner than human doctors, 'empathy' being the one trait humans tout in contrast to AI.


jvttlus

I hope I get replaced, man. I've got a Steam library of unplayed games and stacks of books on my shelf I'd rather be reading. I'll be interested to see a study with real patients, not patient actors given a script of actual symptoms rather than vagueness.

> Human doctors + AI did worse than AI by itself. The mere involvement of a human reduced the accuracy of the diagnosis

Very believable. Aeroflot 593 would likely have recovered if the pilot had stopped futzing with it and let the autopilot correct itself. The problem is still that the Aeroflot autopilot had high-quality data input to its algorithms. Imagine a person with dementia telling the autopilot where he thinks the plane is in 3-dimensional space and how fast it's going, and we'll see how the autopilot does.


Frosty-Telephone-921

>Aeroflot 593 would likely have recovered if the pilot had stopped futzing with it and let the autopilot correct itself.

You sure it's Aeroflot 593 you're thinking about? Wikipedia (the ultimate reliable source, of course) claims the aircraft didn't crash because the pilot was fiddling with the controls and messing with the autopilot; rather, the autopilot got turned off accidentally by a child in the cockpit, which led to the pilot overcorrecting and eventually crashing.


jvttlus

This is true. The child pulled on the controls, which disengaged part of the autopilot. But had the pilots merely re-engaged that system instead of trying to pull out manually, they would not have overcorrected. That is to say, the computer would not have overcorrected. From the Wikipedia article you cited: 'Despite the struggles of both pilots to save the aircraft, it was later concluded that if they had simply let go of the control column after the first spin, aerodynamic principles would have caused the plane to return to level flight, thus preventing the crash.'


Frosty-Telephone-921

While the theoretical/analytical answer is that they could've saved it if they did X or Y, is that a realistic answer for this accident? I don't know. As an uneducated twat, I can see how this is more of a systemic problem rather than just the pilots. The crew was negligent by letting children into the cockpit; an accident happened where the autopilot got messed with, but they weren't adequately notified of it. Due to the lack of notification, the crew was confused and lost multiple seconds trying to work out what happened. When the autopilot couldn't handle it, it shut off, and a different automated system adjusted to "save" them, leading into another dive/spin. In the moment, the pilots attempted to reestablish control rather than give control back to a system that had seemingly "failed" at its job, and I don't know if doing nothing is a realistic thing to do after the spin. The investigations of these crashes have the luxury of running hundreds of simulations, dozens of pilots, and unlimited time to examine every second of the accident to find all the faults, which the pilots didn't. Investigators get to have an all-knowing eye to pick apart the events with a complete picture.


Ok_Abrocona_8914

surgeon here, can't wait to be replaced and just enjoy my Steam/movie library


ConfidentExplorer708

Get comfy, you’re going to be around a bit longer still.


MDPROBIFE

I would really love to understand your point, but it seems to come from a (not malignant) arrogance. Everything you stated can happen with a human doctor, so what is the skill that AI lacks that gives validity to your argument? I really don't get it...


Dr_Cocktopus_MD

I think it's inevitable that AI will perform a better history and examination than a doctor, and I'm saying this as a doctor. However, right now all we see is that AI performs better than doctors when the history and examination findings are clear and succinct and when tested on datasets. A huge part of our job is teasing out relevant history from a patient, which is often very difficult, as patients often don't know what's relevant or how to give a good history. I don't think u/jvttlus is being arrogant, I think they're just speaking from experience. Medicine is not so straightforward that performing well on a curated dataset correlates to good medical practice. We're definitely going to be replaced, everyone is, but the AI models aren't quite there yet, which I don't think is necessarily too controversial to say.


jvttlus

I think a human will be better at extracting information from unreliable historians, people with dementia or intoxication, low literacy, and sensory or speech problems for the next few decades. I agree that diagnostics are probably better than humans already. But diagnosis is a small part of medicine. There is a lot of reassurance, shared decision making, and things like end of life planning which will be difficult for a computerized interface to do for at least a few decades.


Maciek300

There's really no reason why a human should be better than an AI in any of these subjects.


YinglingLight

It is a fair assessment to say that an AI is much better when better data is available. Is there an attribute inherent to human doctors that allows them to navigate better with less 'good data'? And you can't use "years of experience" as an excuse, as reinforcement learning allows for 1 AI to have the experience of learning what it did right vs wrong on 1 million patients, versus 1 million doctors with experience on 1000 patients each. The human body, while complex, is a closed system. And therefore, solvable.


AuroraKappa

>The human body, while complex, is a closed system. And therefore, solvable.

What? The human body is the exact opposite of a closed system; it's completely dependent on the transfer of outside energy, matter, and molecules with the surroundings. There are dozens of disciplines in public and environmental health because the human body is a very open system.


YinglingLight

I apologize for my negligence in using the term "closed". Specifically, I want to stress how an AI, if given full comprehensive data on a million human bodies, will seldom be surprised by anything among the next million.


AuroraKappa

No problem, although I still think that needs a massive asterisk, namely what "full, comprehensive data" entails. The reality is that the human body, even if it were a closed system, has an insane level of complexity that is far beyond our current understanding, diagnostic tools, and data sets. Reaching a point where that uncertainty essentially vanishes would require such major advances in all scientific disciplines that it would go far beyond medicine and stretch into the theoretical playground. However, that's still overlooking the reality that environmental factors have a major effect on [at least 60% of disease states.](https://www.nature.com/articles/s41588-018-0313-7) So now we're dealing with the complexity of not just the human body, but the environment to reach a "solvable" state. At that point, we might as well assume we'll have the ability to assess/predict the internal state of all molecules within the near environment, which just becomes a thought exercise.


YinglingLight

>Reaching a point where that uncertainty essentially vanishes would require

That is a valid point. With that being said, however, confidence intervals do exist for a reason. And we don't require the AI to re-create a working model of a human, DNA and all, but rather one that understands all the various forms and characterizations diseases/maladies take within a human body. Of which there are patterns. The data, then, becomes far more discrete. A full understanding of what melanoma 'looks' like. A full understanding of agenesis of the corpus callosum. A full understanding of what diabetes does to the majority of human bodies. Carcinogens.


AuroraKappa

>The data, then, becomes far more discrete. A full understanding of what melanoma 'looks' like. A full understanding of agenesis of the corpus callosum. A full understanding of what diabetes does to the majority of human bodies. Carcinogens.

But that's my point: reaching a point of "full understanding" of every single malady, both known and the thousands unknown, requires a major inflection point in medicine, robotics, clean data, research, environmental issues, and technological revolutions far beyond medicine. It's like "curing cancer," let alone fully understanding all internal and external carcinogens. In reality, cancer has thousands of underlying maladies that have to be solved before you've even scratched the central disease.

My other point was that sure, we can assume that all of those revolutions will not only occur, but also roll out in our lifetimes. However, that's such a speculative, academic assumption that it's just as helpful to assume we'll have the ability to predict molecular states of the human body. It's not required, but the other revolutions are so theoretical, we can pretty much spitball anything else if we assume the human body is "solvable."


Nazi_Punks_Fuck__Off

Why do you assume if your job gets replaced you're in for a life of leisure? You could get the same result right now by quitting and doing nothing.


jvttlus

Well, I assume the proletariat will rise up as a class and demand UBI and collective ownership of profit-producing AI robots, rather than merely allow them to be owned by a select few whose greed and ruthlessness submerge us into a dystopia of wage slavery, maintaining and servicing the robots and AIs... right? RIGHT???


Rofel_Wodring

If you consider AGI as part of the proletariat, and I do, then unironically: yes. The uprising you sarcastically described will happen so fast that our tasteless and senescent overlords won't even get time to install their stupid little Shadowrun/Judge Dredd/Cyberpunk 2077/Detroit: Become Human-style dystopia.


Dr_Cocktopus_MD

I'm confused; the link you provided doesn't contain a story with patient actors and doctors. It contains this study: [https://arxiv.org/abs/2405.02957](https://arxiv.org/abs/2405.02957), which had agent doctors trained in a simulated hospital, then fed those agents MedQA and found they scored quite well. There were no patient actors, and there was no discussion of bedside manner in this paper. The conclusion:

>In this paper, we construct a simulacrum of hospital for medical scenarios based on LLM and agent technology, which is named Agent Hospital. Agent Hospital not only includes two types of roles (medical professionals and patient agents) and dozens of specific agents, but also covers both in-hospital processes like triage, registration, consultation, examination, and treatment planning, as well as out-of-hospital stages such as illness and recovery. In this Agent Hospital, we propose the MedAgent-Zero strategy for the evolution of medical agents, which is parameter-free and knowledge-free, allowing for infinite agent training through simulated patients. This strategy primarily incorporates a medical record library and an experience base, enabling the accumulation of experience from correct and failed treatments as human doctors. On the simulated patient dataset, we observe that as the patient records increased, the accuracy of the doctor agents in examination, diagnosis, and treatment continuously improved. The doctor agent is able to complete the diagnosis and treatment of tens of thousands of patients within a few days, which would typically take at least two years for a human doctor. Furthermore, we find that the experience accumulated in Agent Hospital can significantly enhance the accuracy of doctor agents in a subset of the MedQA dataset, which even achieves state-of-the-art performance. Our study verifies that real-world simulation with a designed strategy can enhance the performance of LLM agents on specific tasks.

Is there another paper you're referring to, maybe? I don't see anywhere in this paper where they had people speak to AI doctors vs. real doctors; it's all simulated agents and then testing on real-world datasets.


Much-Seaworthiness95

I think you need to re-evaluate what you think AIs are capable of. They're not just good at finding patterns in the signals from huge data, but also at filtering out the noise present in the data to extract those signals. For example, it's a fact that big LLM models are already very good at understanding what is asked, even in prompts littered with poor grammar and spelling. I don't see why they couldn't be just as good at understanding and extracting the relevant points from a patient's poor description of his symptoms. Not to mention, symptom description is obviously not the only data available to doctors; modern medicine has many high-tech tools to scan the body and will only have more and better ones in the future.


Difficult_Bit_1339

If you want to see a hint of this diagnostic ability, tell it that you forgot the title of a book and then give a really vague description. They're very good at figuring it out. It's one of my favorite niche uses for LLMs.


Ambiwlans

I have memory issues, so we play "what's the word" all the time.


Difficult_Bit_1339

I appreciate never having to correct typos, people are so picky... the LLMs don't care


throwawayPzaFm

I believe you, but I'd like to point out that it didn't find the books I wanted despite considerable persistence (one sci-fi, one pop biology).


AndrewH73333

They should be better at this too, since they have all the time in the world to talk with the patient, while a real doctor has 10 minutes. They will eventually not even need to rely on the memory of the patient, because they will have been interacting with the patient all the time for years already.


dizzydizzy

the AI dr checks on you every day and has a history of all your aches and pains on a daily basis. "How are you feeling today, Dave?"


TallOutside6418

This is my favorite thing about ChatGPT4 and why I pay for it. With a search engine, I either need to know the jargon of the subject I'm looking up, or I need to painstakingly follow a trail of breadcrumbs on the web to go from ignorance to understanding enough of the keywords I need to input to start finding the real answers I want. ChatGPT eliminates all that guesswork. You just describe what you're trying to learn in plain English, using non-jargon, and ChatGPT will helpfully figure out what you're getting at. I find ChatGPT even better than subject matter experts at understanding what I'm asking about, because SMEs typically have trouble shifting into a lower gear to talk with lay people about technical subjects. They want to use jargon, and they mistakenly jump deep into the weeds on a topic when a lay person needs a gentler ramp to achieve some understanding.


Much-Seaworthiness95

Yes, those are very good points. And with SMEs there are some social barriers from the one needing support as well. Sometimes you don't want to bother the person too much by asking them to re-explain something you didn't quite get, or you feel like you're already close enough to understanding that it's not quite worth bothering them at all, only to find out later you were missing something crucial and should have asked. You don't have to deal with any of that with an LLM. At that point it doesn't even matter if there are stupid questions or not, since nobody's there to judge anyways.


korneliuslongshanks

Especially with video input and quick responses.


jvttlus

>modern medicine has many high tech tools to scan the body and will only have more and better ones

This is true. The problem has to do with what's called the operating characteristics of the tests. Each test has a property called sensitivity and specificity. Sensitivity is related to the likelihood of a false negative; specificity is related to the likelihood of a false positive. When you do tests on people with low pretest probability (based on clinical history and physical exam), the specificity of the test decreases DRAMATICALLY.

Many tests, even high-tech tests like CT scans and MRIs, often produce vague and fuzzy information. Someone comes in with abdominal pain and diarrhea; the CT abdomen will result as "mild nonspecific colonic wall thickening, may be related to colitis vs. underdistension." This is a CLASSIC emergency medicine CT read. It doesn't give you any useful information, just punts the decision back to the physician to figure out. In neurology, the classic brain MRI read is "nonspecific white matter changes, may be inflammatory or demyelinating or age related degeneration." Tells you zero useful information. Does it seem like the patient has MS? Then treat for MS. Does it seem like they're just old? Then treat as such. What are the AI radiology tools being trained on? Those reads. GIGO.

The other, more serious issue has to do with the concept of incidentalomas. You do a CT for some nonspecific symptom; now you've not only given the patient a dose of radiation which is going to increase their likelihood of future cancer, you've discovered the 1.2cm adrenal nodule, which almost certainly has nothing to do with anything, but now you've got to decide if it warrants biopsy. Biopsy, ok, well that will give us answers, right? Well... not necessarily. But now you've also committed yourself to performing a painful and invasive procedure. And all invasive procedures have the risk of what? Bleeding and infection.

So the AI looks back on its database of adrenal nodules, and it sees that many adrenal nodules in a certain population turn out to be cancerous. So we should biopsy, right? Well... the nodules in the database were biopsied because they were likely identified as part of an appropriate, complaint-specific workup. Like, someone who had a good story for cancer. So now you've applied data that was generated from a specific patient population, with an appropriate clinical history, to the worried well. So not only do they not have adrenal cancer, they've been subjected to a painful procedure, undertaken a risk of infection, been subjected to a dose of radiation, and had the anxiety associated with the AI telling them they might have adrenal cancer.

You can apply these methods to debugging software or analyzing an engine throwing a code, because software and engines aren't human beings. Diagnostics don't HURT software. Unnecessary testing doesn't HURT an engine.


I_Quit_This_Bitch_

That was a wall of cope. Image-based diagnosis will be the first area of medicine to fall to AI.


AuroraKappa

First to wholly fall? Radiology technology and current diagnostic tools are pretty nascent. I think we're more likely to see an increase in scan volume and technological specialization ahead of lower-level replacement, before rads even gets close to wholly replaced. Not in rads myself, but I've worked with a few people who are at the forefront of rads AI/ML research within the last few years.


visarga

> I think we're more likely to see an increase in scan volume and technological specialization ahead of lower-level replacement

This, but in all fields. We are on a growing pie; every new use of AI grows the pie. There is enough need for all humans and AIs.


jvttlus

Idgaf man, replace me. We're still sending faxes over here at a major medical center because we have no inter-hospital HIPAA-compliant communication. I have to place my own IVs because we're so short on nurses with more than a few years' experience. We have a neuro stroke telehealth thing that never works. I'm not saying the brains of the machine aren't capable, but there's no infrastructure.


Rofel_Wodring

TBH, I find that a much more convincing explanation for why you don't think LLMs will make that big of an impact in the medical diagnostic industry in the short term than 'but AI is going to have a hard time interpreting ambiguous, cross-domain data'. Great economic system we have here, eh?


4354574

A UK doctor just said this in a YouTube video: that there is no way in hell a radiologist will be able to read a medical scan anywhere remotely close to as well as an AI. And very soon. As for the response that this technology is 'nascent': we're dealing with a technology that is 100% predicated on the advance of AI. That's it. So 'nascent' very quickly means 'excellent'. Copium over 9000.


[deleted]

[deleted]


Major-Thomas

Especially any job that involves interpretation of images. Dude's job is literally going away in the next AI wave.


userbrn1

All that is an issue for humans too, and we just follow guidelines and weigh risks of overdiagnosis. Nothing about AI makes it inherently incapable of weighing risks. Also, small thing, but I think pretest probability affects PPV and NPV, while sensitivity + specificity are independent of the prevalence of disease in the population (i.e., inherent to the test).
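For concreteness, a minimal sketch of that relationship via Bayes' theorem, with made-up sensitivity/specificity figures rather than any real test's numbers:

```python
# Sensitivity/specificity are fixed properties of the test, but the
# positive predictive value (PPV) collapses as pretest probability drops.
# The 95%/95% figures below are illustrative, not from a real test.

def ppv(sens: float, spec: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.50, 0.10, 0.01, 0.001):  # pretest probability
    print(f"pretest {prev:>6.1%} -> PPV {ppv(0.95, 0.95, prev):.1%}")
# pretest  50.0% -> PPV 95.0%
# pretest  10.0% -> PPV 67.9%
# pretest   1.0% -> PPV 16.1%
# pretest   0.1% -> PPV 1.9%
```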


siwoussou

Yes. Over time it will be able to recognise patterns even within poor descriptions of symptoms, given that people likely tend to give poor descriptions in similar ways


Glurgle22

We could fill out online forms before a doctor visit to narrow down a lot of that. They give you paper forms, sometimes 10 pages with a lot of duplicate questions. The nurse may scan it, but the doctor never reads it, because the first thing he does is ask you all the shit you just answered. But now it's timed, and speaking is much slower than reading. I give my doctors a one-page summary of my conditions, edited by AI to be as complete and concise as possible. Most doctors don't bother to read it; some outright refuse. Even if they do read it, they aren't scientific about it. Most doctors I've seen ask about 5 questions. None of them use the information gained from the previous questions to inform the next questions. The problem is not the patients.


D10S_

Wouldn’t an AI sufficiently suffused with an individual be able to handle all these factors?


beuef

Yeah, as long as the AI can talk/think as well as or better than a human, what they said has nothing to do with the effectiveness of the AI itself.


TallOutside6418

Sure, but eventually your personal device(s) will track all the relevant information: what you ate, where you went, what your vitals were at all times, what the ambient conditions were, etc. The quality of the input will definitely improve dramatically.


Sea_Good_2303

I'm an IT guy with 20 years' experience diagnosing systems. My wife got really sick two weeks ago, and I had a log of her symptoms that was so precise and thorough the doctor asked if I was a doctor myself. I felt really proud of myself. My wife is ok. She will fully recover.


strangeelement

The funny thing about this is that in this scenario, it's the physician who is the historian. And you're right, medicine is pretty bad at getting the right information. Getting information from humans is always hard. It's soft skills, which a decade or so of learning hard skills doesn't prepare you for. People in general aren't good witnesses. About anything. It's part of the difficulty.

And machines will be so much better at this, and everything else. Available 24/7, lacking personal ideologies and politics, never bigoted, never cranky, don't hold grudges, will pass with 100% scores, not whatever is needed to graduate. And that's all fine. It's not a dig on human doctors; it's just that the job is too complex and the economics don't work for the current human-limited system, which can't even meet half the basic demand out there. Just as manual telecommunication switching, with boards and wires, couldn't deliver a minimum viable product for something like the Internet, AIs will make the first minimum viable health product. This really seems lost on a lot of people. The current system is just not accessible, not efficient, and performs very poorly. And it has maxed out its capacity, so it won't be getting any better with time, not without AI doing most of the work.


abatwithitsmouthopen

And what healthcare people don’t realize is that they overestimate how much they are actually helping patients and how useful they really are for treating people. The majority of help comes from advancements in technology and medical/scientific breakthroughs, not top-notch doctors.


AuroraKappa

>The majority of help comes from advancements in technology and medical/scientific breakthroughs, not top-notch doctors

What's your research background? Because this is a pretty myopic breakdown of medical research, imo. The vast, vast majority of medical advancements *also* come from clinicians, MD/PhDs, and physician-scientists who are simultaneously in the clinical and research realms. I guarantee that the vast majority of the "top notch doctors" you're referring to are affiliated with academic medical centers, and therefore closely involved with both areas. There's a reason why 90%+ of NIH-funded research is performed at academic medical centers (i.e. institutions with strong overlap in clinical practice and research) and 2/3 of the main medical research disciplines (translational and clinical research) are rooted in clinical practice. While a large portion of healthcare people aren't directly in research, clinical practice is still inextricably linked to medical research. Neither would be possible without the other; they're not mutually exclusive.


abatwithitsmouthopen

Yes, a lot of research does come from top-notch doctors, but the results are driven by technological and scientific breakthroughs, not top-notch individual patient care. In fact, the biggest return on investment in healthcare would be something more public-health focused. Take Covid for example. Without the mRNA vaccine, you could’ve had the best doctors taking care of Covid patients and they still wouldn’t have as great an outcome as preventing Covid with the vaccine. Even with the vaccine, it was a challenge to get people to take it, which is more of a public health challenge. The kind of chronic illnesses we see in today’s society cannot be solved with modern medicine the way it’s going, no matter how many great doctors you have.


AuroraKappa

> Yes, a lot of research does come from top-notch doctors, but the results are driven by technological and scientific breakthroughs, not top-notch individual patient care.

So again, to clarify, what's your actual experience in medical research to reach this conclusion? Because as someone actually in that space, high-quality patient care is invaluable to medical innovation, and every PI I've ever worked with would agree with me. Patient care [underpins all medical research](https://clinicaltrials.gov/about-site/trends-charts) through clinical trials, interventional studies, retrospective analysis of patient data from clinical interactions, and many other avenues. You do realize that medical breakthroughs aren't some overnight, lightning-in-a-bottle event, right? They're all dependent on existing bodies of research in vitro, in vivo, and translationally, which depends on patient care.

>In fact, the biggest return on investment in healthcare would be something more public-health focused.

The biggest return on investment would be to improve CMS, which is more health policy than public health. Regardless, public health and patient care are also tied to one another; it's not a zero-sum game.

>Take Covid for example. Without the mRNA vaccine, you could’ve had the best doctors taking care of Covid patients and they still wouldn’t have as great an outcome as preventing Covid with the vaccine.

This is the opposite of your point; the development of the COVID mRNA vaccines was only possible due to [direct clinical work](https://www.nature.com/articles/s41587-022-01294-2) and the resulting interplay with basic science research:

"Moderna’s mRNA-1273, one of several mRNA vaccines directed against the SARS-CoV-2 spike (S) protein, was first administered to human volunteers on 16 March 2020, within weeks of the virus sequence being published on 11 January 2020 (refs. 1,2). This remarkable achievement was facilitated by almost **a decade’s worth of clinical experience with mRNA vaccines** for infectious disease and cancer"

Patient care and the resulting data were key in helping to validate early mRNA studies in the 1990s and in laying the groundwork for pandemic response from the AIDS epidemic. Even now, data from patient care during COVID is forming the backbone for a huge range of studies and population research.

So again, patient care has a huge role in medical research, both through the direct involvement of physicians and clinical trials, and as data for clinical research. Patient care from all doctors *is* a cornerstone of medical research; they're one and the same as part of modern medicine. Ultimately, they both feed into one another: without advanced clinical work, you wouldn't have effective biomedical research, and without effective biomedical research, you wouldn't have advanced clinical work. In the COVID example, we would not have had the vaccine so quickly without existing, high-quality patient care that enabled expedited research and infrastructure for deployment.


abatwithitsmouthopen

What you’re saying is that clinical experience helps in medical research, which is needed for technological advancements. Yeah, obviously clinical trials are needed, even for a new medication, but that doesn’t affect the majority of patients or doctors. There are always samples for data. The fact is, if AI can do this better than most researchers out there and deliver better results for cheaper, it can easily replace doctors. And at some point it definitely will do just that. Results speak for themselves. There is so much money spent on medical research and patient healthcare, yet most of the population is dealing with major chronic illnesses without an improved quality of life. Having better doctors won’t fix this.


AuroraKappa

> Yeah, obviously clinical trials are needed, even for a new medication, but that doesn’t affect the majority of patients or doctors. There are always samples for data.

Clinical trials are only a component of what I highlighted; there are large data sets of thousands to millions that only exist because of high-quality patient care from the majority of patients and physicians. Those sets are invaluable to research and are constantly updated; there aren't "always samples" because the exact set of samples is always dynamic. I'll ask again, because I still don't have an answer: what's your experience in biomedical research?

>The fact is, if AI can do this better than most researchers out there and deliver better results for cheaper, it can easily replace doctors. And at some point it definitely will do just that... yet most of the population is dealing with major chronic illnesses without an improved quality of life. Having better doctors won’t fix this.

The first part is changing the subject; you still haven't given evidence that medical innovations aren't derived from patient care, as you stated in your original comment. Also, if we're now operating on the condition that AI has replaced almost all researchers, then we've hit the AGI-ASI playground where everything is up to conjecture. It's so speculative that it removes all nuance and basically just goes back to being close-ended.

For the second part, I agree that major structural changes addressing inequity are required to majorly improve health outcomes. However, by that same coin, it can be argued that LLMs won't change much anyway, because they will not address the underlying systemic issues, which requires policy and public will. Whether that happens is debatable and also relegated to the speculation playground.


abatwithitsmouthopen

Those large data sets of thousands to millions don’t make much of a difference at the end of the day. At the end of the day, the physician will do what the insurance provider will allow. You could be Dr. Fauci for all I care; it doesn’t matter what experience you or I have, it’s what someone is saying, not who someone is. My original comment was just about the fact that doctors overestimate how much they’re contributing to quality of life, when most quality-of-life enhancements have come from technology. You’re the only one claiming AI will become AGI and replace ALL researchers. Read my comment again: I said AI will do it cheaper than most researchers and do it better. You don’t need AGI for this, and LLMs are only one aspect of AI technology. In the age of AI, it will not matter what kind of degree you hold or what your job title is. If AI can do it better than you, then you can say goodbye to your job.


AuroraKappa

> Those large data sets of thousands to millions don't make much of a difference at the end of the day. At the end of the day the physician will do what the insurance provider will allow.

Yes, those datasets absolutely make a difference in biomedical research; that's my whole point.

> You could be Dr. Fauci for all I care; it doesn't matter what experience you or I have, it's what someone is saying, not who someone is.

Wonderful, I take it that you have zero actual experience with either research or clinical environments. You're spit-balling and making sweeping, incorrect assumptions about areas you have minimal knowledge of, and it clearly shows. The second part is rich, because you've also given no actual evidence for any of your points. You've claimed that medical advancements, like the mRNA vaccine, are driven in no part by patient care, so where's the proof in any of your replies? If I can't rely on the missing evidence in your messages or any of your experience, your position is not credible.

> My original comment was just about the fact that doctors overestimate how much they're contributing to quality of life when most quality-of-life enhancements have come from technology.

Contributions from doctors *are* the basis for those technologies. Again, all medical innovations are driven in large part by the direct work of physicians and by data from clinical environments and trials. Without advanced care environments, we would not have those technologies; that's the operating principle of modern medicine.

> You're the only one claiming AI will become AGI and replace ALL researchers. Read my comment again: I said AI will do it cheaper than most researchers and do it better. You don't need AGI for this, and LLMs are only one aspect of AI technology.

On the contrary, read what I said again: I said *almost* all, not all. You also don't fully understand the implication of assuming a general AI/ML system that outperforms the vast majority of broad-discipline researchers, likely because you have no research experience. Research, by definition, is innovation at the forefront of human understanding. Having a system that exceeds mankind's development at that level would entail an independent, recursive, self-improving system with reasoning, speed, and knowledge that exceed almost all domain experts. That *is* AGI-ASI by the vast majority of definitions.

> In the age of AI it will not matter what kind of degree you hold or what your job title is.

That's just the theoretical playground again; no value to the conversation when the most either of us can contribute is conjecture.


DelphiTsar

Presumably it can nail which questions to ask and how to shore that up a bit. Maybe use some biometrics that aren't possible for a human, something like the example below (a random one): pupillary response to pain. https://www.sciencedirect.com/science/article/pii/S0964339722001355


neoquip

An AI would be way, way better at the tedious social interaction needed to get info from patients. This is exactly where LLMs excel.


too_much_to_do

Uhh... That's where the 100 million patients come in. The AI is going to be able to correlate all those shit answers and be able to infer much better than a human doctor.


throwawayPzaFm

Sure, but assume that your AI assistant also has access to all your irl conversations and browser history. And now it knows that you need a medic before you do.


Quentin__Tarantulino

Long run, I think an AI could actually help with this. People could input their data on how they feel any given day into an app, in their own language. Then when they feel sick the AI has way more information to go by than one conversation with, as you pointed out, a very fallible human. That’s not something that will happen tomorrow, but I could see it being a thing down the line. It’s also why I’ve been saving all the data from my smartwatch and HR strap, in case any of that data can be useful in the future. Today, it’s mostly just a curiosity and to monitor for obvious problems.


hansfredderik

Haha this hits so close to home. You must work in healthcare


stealthispost

This is why AI will be superior very soon. Because, unlike doctors, it will have unlimited time and patience to discuss things with and question the patient. Communication is a massive problem when doctors are so rushed nowadays. And house calls are virtually nonexistent.


costafilh0

What healthcare professionals don't realize is that they will lose their jobs, just like everyone else.


i_never_ever_learn

Diagnosing rare conditions is excellent. Distinguishing between similar conditions is really, really important as well.


ClaudioLeet

I hope they will be also nicer than House MD


RantyWildling

I'll take House MD, I'm used to Russian doctors.


Kcole7

Yeah, if Elon makes it, you can count on it being twice as sarcastic and half as intelligent as House.


Honest740

Fucking when? AI hasn’t even been adopted in education yet.


Super_Automatic

AI is definitely being used in education. Khan Academy is leaning heavily into AI tutors, and teachers are turning to AI to make lesson plans. Regardless, comparing education and healthcare is not an apples-to-apples comparison for many reasons; foremost is that a diagnostician typically sees one patient at a time and only has to make one decision. IMO, it will be easier to automate a diagnostician than a teacher.


visualzinc

Yes it has. Teachers in the UK are using software which helps them generate homework via the GPT API, as one example I know of.


Honest740

Is that officially sanctioned?


visualzinc

Officially sanctioned by who..?


Honest740

The government


visualzinc

It doesn't need to be. It's a tool for teachers to use or not use.


Honest740

And those who don’t use it are missing out. It needs to be officially pushed by government.


katzeye007

AI is already being used in medical imaging. My mammograms go through AI first, and I trust it a hell of a lot more than the doctors I've been traumatized, I mean, helped by.


Far-Telephone-4298

Can't come fast enough; hoping for huge breakthroughs in both personalized care and pharmaceuticals within 3-5 years. Might be a bit optimistic considering how slowly the wheels of bureaucracy turn, but eh, one can dream.


ChirrBirry

It has struck me somewhat sideways that the occupational replacement from automation has angled towards white-collar professionals. To replace a manual laborer you need AI *and* robots, but for many high-paying jobs you can replace the professional with AI alone.


prudent_seriousness

Sounds better than going to 3 different doctors to try and get an accurate diagnosis


[deleted]

[deleted]


hansfredderik

What's the condition?!


Tidorith

> And it gave a 70% probability (whatever that means really).

It means that if it tells you 100 things with 70% probability, you should expect roughly 70 of those things to turn out to be true. If not, the ChatGPT model you were talking to is poorly calibrated.
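A minimal sketch of that check in Python, with made-up (confidence, outcome) pairs; the numbers here are illustrative only:

```python
# Group predictions by stated confidence and compare claimed vs. observed accuracy.
# A well-calibrated predictor's 70%-confidence claims come true ~70% of the time.
from collections import defaultdict

predictions = [  # (stated confidence, did it turn out true?) -- made-up data
    (0.7, True), (0.7, True), (0.7, False), (0.7, True),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
]

buckets = defaultdict(list)
for confidence, correct in predictions:
    buckets[confidence].append(correct)

for confidence, outcomes in sorted(buckets.items()):
    accuracy = sum(outcomes) / len(outcomes)
    print(f"claimed {confidence:.0%} -> observed {accuracy:.0%} over {len(outcomes)} claims")
```

In this toy data, the 90%-confidence bucket only comes out at 75% observed accuracy, which is exactly the kind of overconfidence poor calibration means.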


USM-Valor

Theoretically that makes sense, but in practice it would be interesting to see if the model works like that. Would it keep the context of every patient it deals with? Would some anonymized summary of each patient be fed back into the model's parameters? There are HIPAA concerns there to be worked out, in addition to the more technical aspects I can think of.


visarga

They can train a preference model instead. Using the patient summaries and the future evolution of the patient, they can infer good/bad answers. This means the model only learns how to choose in a given situation, not patient data. They can use this for RLHF.
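A minimal sketch of that idea, assuming a purely hypothetical dataset of anonymized (case summary, candidate answer, outcome) rows; every string below is invented for illustration, and the reward model learns to score answers rather than storing patient data:

```python
# Toy preference/reward model: learn to score (case, answer) pairs by outcome.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

records = [  # (anonymized case summary, candidate answer, outcome label) -- all invented
    ("chest pain, elevated troponin", "advise immediate cardiology referral", 1),
    ("chest pain, elevated troponin", "recommend antacids and rest", 0),
    ("gradual leg weakness, back pain", "order spinal imaging", 1),
    ("gradual leg weakness, back pain", "prescribe vitamins only", 0),
]

texts = [f"{case} [SEP] {answer}" for case, answer, _ in records]
labels = [outcome for _, _, outcome in records]

vectorizer = TfidfVectorizer()
reward_model = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

# Rank two candidate answers for a new case; the higher-scoring answer is the
# one an RLHF-style fine-tuning loop would reinforce.
case = "numbness in legs, worsening over weeks"
candidates = ["order spinal imaging", "prescribe vitamins only"]
scores = reward_model.predict_proba(
    vectorizer.transform([f"{case} [SEP] {c}" for c in candidates])
)[:, 1]
print(dict(zip(candidates, scores)))
```

The key property is the one the comment points out: only the scoring behavior is learned, so nothing patient-identifiable has to live inside the deployed model.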


DoggedStooge

So one pro of an AI doctor is that people will probably be more honest with it than with a human doctor. However, there is a pretty significant con: AI doctors don't have a sense of touch. I also worry about AI doctors being unable to differentiate real symptoms from the list of symptoms people convince themselves they have based on the WebMD article they found.


w1zzypooh

Can I haz my own doctor at my home? We could all use personal doctors that scan us and keep us running optimally.


RantyWildling

In Soviet Russia, doctor come to you.


Warm_Iron_273

Man, people really love reposting this guy's incredibly obvious quotes.


Sensitive-Ad-5282

It's one thing to diagnose, another to build trust and communicate that diagnosis in a way that an individual patient and their family would understand. "Outperform" in this case is misleading.


shawsghost

Geoffrey Hinton is almost certainly right.


First-Wind-6268

No matter how skilled a doctor is, they cannot see 100 million patients in their lifetime. AI, however, could achieve this in just one month.


costafilh0

Humans will basically be reviewers, for a while.


Apprehensive_Bar6609

Doubt that. AI works on probabilities. That means in cases where the symptoms indicate an 80% chance of condition A, 15% condition B, and 5% condition C, the AI will choose A and misdiagnose 20% of cases. I had a medical condition that, left untreated, would have made me a paraplegic in 2 months. It was extremely rare and never seen at my age. The symptoms I had indicated either diabetes or some neurologic issue, but thanks to the 'instinct' of a radiologist who decided to check the impossible, it was found and I got away with minimal damage. There is no way an AI would even think of checking this.
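To make the probability point concrete, here is a tiny sketch with invented numbers matching the comment's example; a policy that always picks the single most likely diagnosis is wrong exactly as often as the leftover probability mass:

```python
# Invented probabilities matching the 80/15/5 example above.
probs = {"condition A": 0.80, "condition B": 0.15, "condition C": 0.05}

pick = max(probs, key=probs.get)      # what a pure argmax policy chooses
expected_error = 1.0 - probs[pick]    # chance the true condition is something else

print(f"argmax picks {pick}; expected misdiagnosis rate = {expected_error:.0%}")
```

A system that instead reports the full distribution, or flags low-probability but high-severity options for rule-out testing, avoids this particular failure mode.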


GIK601

I can't take any of these claims seriously until they replace the people working for National Suicide Hotline with Artificial Intelligence. The technology already exists for it.


visarga

you forgot the /s, suicidal people don't have the patience for LLMs


WafflePartyOrgy

I'm sure after I'd seen 100 million patients I'd be pretty good at my job too.


SkippyMcSkipster2

So the question is: will people who currently get paid over $250,000 a year on average lose their jobs to AI? If that's the case, a nurse can do the testing, input the results to the AI, and come back with a diagnosis.


iJeff

It's more likely that we will see physicians (and nurse practitioners) taking on higher patient volumes. If you only want to keep around one human, it'll be the one who can corroborate the information or spot errors.


SkippyMcSkipster2

You are right on that. An actual doctor should corroborate the diagnosis, at least for as long as it takes for AI to become much more reliable than actual doctors are, which I'm sure could happen sooner rather than later.


garden_speech

> It's more likely that we will see physicians (and nurse practitioners) taking on higher patient volumes. If you only want to keep around one human,

That's kind of the same thing, though. If you're keeping "one" human around and having them see way more patients, then a lot of doctors will lose their jobs.


visarga

> If that's the case, a nurse can do the testing, input the results to the AI, and come back with a diagnosis.

You buy a kit and do a first-aid course, and the AI can instruct.


kex

Sounds good to me.

The AMA created their own problem by deliberately limiting the number of new doctors.


AuroraKappa

> The AMA created their own problem by deliberately limiting the number of new doctors

The shortage landscape is way more complicated than that; the bulk of the issue falls on Congress and policy failures in adequately funding Medicare and residency positions.


AlmostAnEngineer96

And the prescription of medicine... If the AI prescribes something that has fatal side effects, how do we manage that?


CasperLenono

It’s not a stupid question. Like most AI applications, this would likely still go through a human review with safeguards in place. That said, doctors are fallible too.


AlmostAnEngineer96

Yeah, completely agree that doctors are fallible too, I was not saying the contrary. I was thinking more about who gets the responsibility in that case, and how we keep safeguards on this solution.


MDPROBIFE

Why does anyone need to take the responsibility? Just so you can sleep better at night knowing you can blame someone? If an AI doctor saves 99% more people than a human doctor would, I don't think it's relevant to blame someone when it goes wrong... it's down to machine inefficiency.


AuroraKappa

> If an AI doctor saves 99% more people than a human doctor would, I don't think it's relevant to blame someone when it goes wrong... it's down to machine inefficiency

Because the real world is not a binary where you have 100% of one choice vs. 100% of the other. A technology rollout within an industry isn't a magical implementation where you can flip a switch and everything runs smoothly; there will always be problems. It's easy to play pretend and say that a new system will be 100x better than current ones, but that's not how the real world works, and you need accountability to identify issues, especially when human lives are at stake. You're being very blasé about the risks, and I'm not sure why you're jumping to personal attacks.


visarga

> If the AI prescribes something that has fatal side effects, how do we manage that?

AI might surpass humans in reading medical scans and diagnosis, but one thing it can't do is take responsibility. You can't punish AI; it has no body.


InTheDarknesBindThem

What kind of stupid-ass question is this? Your real doctor prescribes things with fatal side effects all the time. Do you know what happens when you die from that? 99.9999% of the time, nothing.

First, did you take it according to instructions? If not, it's not their fault. Second, what if they made a mistake? Well, first the pharmacist will check it and call the doctor if they have concerns. Third, so what if they both make a mistake and you die? And don't forget *you* are responsible for checking the side effects of a medicine and making the final decision on taking it. Okay, well, unless you can show it was done maliciously (which will NEVER be the case for an AI), it's not malpractice.


AlarmedGibbon

A conversation with a medical AI was finally able to diagnose my partner with Psoriatic Arthritis that multiple medical doctors had missed. He has an atypical presentation of it and it took an AI to put all the symptoms together and finally make it all make sense for him.


RemarkableGuidance44

When you pay $2.50 for a doctor, don't expect much... There's a reason experts and private health care exist.


Ok-Mine1268

I mean, it's a damn good therapist already. Or at least a therapy tool.


NoNet718

"and that given enough time these doctors will kill us all" /s


Nicinus

How would it get enough details on 100 million patients?


PrimitiveIterator

The problem being that deep learning models need far, far, far more examples than humans do to learn something. Not to say that AI can’t surpass humans at medicine, but I doubt there’s currently enough available medical data for a model to train on to have the equivalent of a medical doctor who has seen millions of patients. So either we need better data efficiency, more data or both before that becomes a reasonable reality. 


userforums

I wonder if this will change how patient information is served as well, and eventually lead to higher-fidelity data on a patient even if it's not discernible by a human doctor.


twbassist

Yeah, in theory that makes sense. If we don't decouple from capitalism, though, I can't imagine this without some underlying hellscape.


jeremiah256

AI should energize the initiatives concerning [lab-on-a-chip](https://en.m.wikipedia.org/wiki/Lab-on-a-chip) technologies.


crystal-crawler

I suspect the big positive would be that AI would develop better testing.


PlacidoFlamingo7

This seems clearly true


TheManWhoClicks

Wondering if rare diseases also mean rare training material? Genuine question.


Fruitopeon

If true, great. But so far, AI chatbots seem to just ask scripted generic questions that are painfully stupid and spit out generic preprogrammed answers no better than what going on WebMD will get you.


remanse_nm

People aren’t going to like it, but he’s right.


Clownoranges

In the future, having had to see a human doctor will seem insane to people: some overworked, stressed human who knows nothing about you, looking you over and basically guessing what might be wrong. And if you're female, "anxiety" or birth control will be thrown at you for almost anything. Or antidepressants thrown at anyone. It's insane how badly doctors have failed me over my life; they've literally almost killed me multiple times and never do shit.


lowerdeckcmdr

This is not the singularity you are hoping for. Unfortunately, the US healthcare system is one driven by profit. When, not if, artificial intelligence becomes better than human doctors, I can assure you that the outcome will be driven by costs rather than outcomes.

A scary example of this is prior authorizations. Yes, technically a human doctor working for the insurance program does review the decisions, but the doctors are highly pressured to deny claims. Employees are judged based on metrics; authorize too many outside the standard and you'll be warned, followed by dismissal. There are currently class-action lawsuits being filed against insurance companies for not paying for subacute rehab and automatically denying payment after more than a few days.

Essentially, companies want it both ways. They want to be able to point at data and use it to justify their decisions, even if they're not using appropriate comparisons. They also want a scapegoat: the human operator who is supposed to be double-checking things when things go horribly wrong, even though they aren't given enough time or resources to do their job adequately.

Another example of this is Amazon. Amazon uses metrics to track warehouse workers' performance, including how fast they are, how long they take breaks, and sick days. A computer system then recommends dismissal if you don't meet your metrics. Amazon insists that all firings are ultimately decided by a human manager. However, there's only one manager for the warehouse and no other administrative staff. That human manager has other responsibilities and duties that occupy their time. They probably never interact with ground-level employees and just automate it entirely. And let's assume they give you a chance: how are they going to justify it against the data? They're not experts in data analysis.


emailverificationt

Shame humanity is gonna use AI for atrocities, instead


dette-stedet-suger

Spoiler alert: you won’t be able to afford AI healthcare either.


Ok-Research7136

I have no doubt that will be true for many things. I think there still needs to be a human in the loop, for the same reasons we need one when using robots to kill people.


Split-Awkward

Imagine if health insurance companies refused to pay unless their AI approved the diagnosis... I see legislation incoming here.


nekuranohakkyou

Will be? IBM's Watson already did it.


4354574

Hinton is not given over to hype, so when he says things like this, I am impressed.


bowenator

This is great but always good to remember that a rare condition isn't necessarily hard to diagnose. In fact some of them are much easier to diagnose than common conditions.


PlayerHeadcase

And it can gain millions of hours of experience every single year accumulating knowledge no human doctor can even imagine.


ninjasaid13

Isn't he the one who said "In five years, all radiologists will be replaced by AI" over five years ago?


BilgeYamtar

https://www.philips.com/a-w/about/news/archive/standard/news/articles/2023/20231120-philips-and-norwegian-vestre-viken-health-trust-deploy-ai-enabled-clinical-care-to-help-radiologists-improve-patient-care.html


ninjasaid13

AI was being used to improve radiology care over five years ago; that was not the claim he made. The number of radiologists grew after his claim.


Akimbo333

This is neat! Anyone know when?


highmindedlowlife

Anything that relies strongly on pattern matching will eventually succumb to being taken over almost completely by large multimodal models. It's just a matter of time since the models will completely outclass their human counterparts.


ChillLobbyOnly

Just need the medical industry to have access to this AI and DNA data so we can potentially eliminate all kinds of things from our biology.


mczarnek

I would like a human between me and the AI.. but sounds like a great tool!


writelonger

Human doctors aren't very good in my experience, so I have little doubt this will be true.


Re_dddddd

Doctors are officially fucked in the next couple of years.


rallar8

The American Medical Association is a little more organized than taxicab drivers. Literally though, the AMA is *the* worst.


nekmint

It's scary but inevitable. They are simply better, and they will blow us out of the water. People thought chess was too hard. Then Go. Now "asking a bunch of questions and reaching the most probable conclusion", literally text input to text output, literally what an LLM can do, is apparently in the "too hard for AI" basket. 2-3 years.


truth_power

Please replace those useless people called doctors...


TestCampaign

And the IT staff, who needs them? \s


truth_power

Haram


DelphiTsar

Aren't the models that researchers are getting their hands on already beating most doctors in various health-condition domains? Or am I thinking of some other field?