Professors are turning to this old-school method to stop AI use on exams: A growing number of educators are finding that oral exams allow them to test their students’ learning without the benefit of AI platforms such as ChatGPT.
Snippet: Across the country, a small but growing number of educators are experimenting with oral exams to circumvent the temptations presented by powerful artificial intelligence platforms such as ChatGPT. Such tools can be used to cheat on take-home exams or essays and to complete all manner of assignments, part of a broader phenomenon known as “cognitive off-loading.” EDITED TO ADD: In some countries, such as Norway and Denmark, oral exams never went away. In other places, they were preserved in specific contexts: for instance, in doctoral qualifying exams in the United States. Dobson said he never imagined that oral exams would be “dusted off and gain a second life.” New interest in the age-old technique began emerging during the pandemic amid worries over potential cheating in online environments. Now the advent of AI models — and even AI-powered glasses — has prompted a fresh wave of attention. Oral assessments are “definitely experiencing a renaissance,” said Tricia Bertram Gallant, director of the Academic Integrity Office at the University of California at San Diego. Such tests are not always the answer, she added, but offer the added benefit of practicing a skill valuable for most careers.
I work in healthcare…AI is garbage.
I am a hospital-based physician, and despite all the hype, artificial intelligence remains an unpopular subject among my colleagues. Not because we see it as a competitor, but because—at least in its current state—it has proven largely useless in our field. I say "at least for now" because I do believe AI has a role to play in medicine, though more as an adjunct to clinical practice than as a replacement for the diagnostician. Unfortunately, many of the executives promoting these technologies exaggerate their value in order to drive sales.

I feel compelled to write this because I am constantly bombarded with headlines proclaiming that AI will soon replace physicians. These stories are often written by well-meaning journalists with limited understanding of how medicine actually works, or by computer scientists and CEOs who have never cared for a patient.

The central flaw, in my opinion, is that AI lacks nuance. Clinical medicine is a tapestry of subtle signals and shifting contexts. A physician's diagnostic reasoning may pivot in an instant—whether due to a dramatic lab abnormality or something as delicate as a patient's tone of voice. AI may be able to process large datasets and recognize patterns, but it simply cannot capture the endless constellation of human variables that guide real-world decision making.

Yes, you will find studies claiming AI can match or surpass physicians in diagnostic accuracy. But most of these experiments are conducted by computer scientists using oversimplified vignettes or outdated case material—scenarios that bear little resemblance to the complexity of a live patient encounter.

Take EKGs, for example. Many patients admitted to the hospital require one. EKG machines already use computer algorithms to generate a preliminary interpretation, and these are notoriously inaccurate. That is why both the admitting physician and often a cardiologist must review the tracings themselves.
Even a minor movement by the patient during the test can create artifacts that resemble a heart attack or dangerous arrhythmia. I have tested anonymized tracings with AI models like ChatGPT, and the results are no better: the interpretations were frequently wrong, and when challenged, the model would retreat with vague admissions of error.

The same is true for imaging. AI may be trained on billions of images with associated diagnoses, but place that same technology in front of a morbidly obese patient or someone with odd posture and the output is suddenly unreliable. On chest X-rays, poor tissue penetration can create images that mimic pneumonia or fluid overload, leading AI astray. Radiologists, of course, know to account for this.

In surgery, I've seen glowing references to "robotic surgery." In reality, most surgical robots are nothing more than precision instruments controlled entirely by the surgeon, who remains in the operating room—one of the benefits being that they do not have to scrub in. The robots are tools, not autonomous operators.

Someday, AI may become a powerful diagnostic tool in medicine. But its greatest promise, at least for now, lies not in diagnosis or treatment but in administration: things like scheduling and billing. As it stands today, its impact on the actual practice of medicine has been minimal.

EDIT: Thank you so much for all your responses. I'd like to address all of them individually, but time is not on my side 🤣. The headline was intentional rage bait to invite you to partake in the conversation. My message is that AI in clinical practice has not lived up to the expectations of the sales pitch. I acknowledge that it is not computer scientists, but rather executives and middle management, who are responsible for this. They exaggerate the current merits of AI to increase sales. I'm very happy that people who have a foot in each door—medicine and computer science—chimed in and gave very insightful feedback.
I am also thankful to the physicians who mentioned the pivotal role AI plays in minimizing our administrative burden. As I mentioned in my original post, this is where the technology has been most impactful. Most MDs responding appear to confirm my sentiments regarding the minimal diagnostic value of AI. My reference to ChatGPT with respect to my own clinical practice was about comparing its efficacy to the error-prone EKG-interpreting AI technology that we use in our hospital.

Physician medical errors seem to be a point of contention. I'm so sorry to anyone whose family member has been affected by this. It's a daunting task to navigate the process of correcting medical errors, especially if you are not familiar with the diagnoses, procedures, or administrative nature of the medical decision-making process. It's worth mentioning that one of the studies referenced points to a medical error mortality rate of less than 1%—specifically the Johns Hopkins study (which is more of a literature review). Unfortunately, morbidity does not seem to be mentioned, so I can't account for that, but it's fair to say that a mortality rate of 0.71% of all admissions is a pretty reassuring figure. Compare that with the error rates of AI and I think one would be more impressed with the human decision-making process.

Lastly, I'm sorry the word tapestry was so provocative. Unfortunately it took away from the conversation, but I'm glad at least that people could have some fun at my expense 😂.
Wild, AI is predicting breast cancer 5 YEARS before it develops!
Game changer. This means that doctors can catch it early and save so many lives. The system looks for super subtle signs in the mammograms that could indicate the potential for breast cancer, so it's an early warning system like no other! https://www.goatainews.com/post/breakthrough-ai-predicts-breast-cancer-years-before-development
Paper by physicians at Harvard and Stanford: "In all experiments, the LLM displayed superhuman diagnostic and reasoning abilities."
Paper: https://arxiv.org/pdf/2412.10849
Microsoft Says Its New AI System Diagnosed Patients 4 Times More Accurately Than Human Doctors
https://www.wired.com/story/microsoft-medical-superintelligence-diagnosis/
Another study showing GPT-4 outperforming human doctors at showing empathy
https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2821167
AI is learning to troll
https://i.redd.it/q1w0up1lgmdb1.png
I built a free translation chat app that does AI translations in-app.
https://i.redd.it/551ibg5qljqa1.png
New powerful deep learning algorithm can detect Alzheimer’s six years before doctors
https://newatlas.com/ai-algorithm-pet-scan-alzheimers-diagnosis/57138/?fbclid=IwAR36hOMrS4LaNsF4JJH2KP1qWYqPmtwTv4JqfuCEkjWrMOArwrWVUgPCxx8
Doctors who used AI assistance in procedures became 20% worse at spotting abnormalities on their own, study finds, raising concern about overreliance
https://fortune.com/2025/08/26/ai-overreliance-doctor-procedure-study/
ChatGPT Answers Patients’ Questions Better Than Doctors: Study
https://gizmodo.com/chatgpt-ai-doctor-patients-reddit-questions-answer-1850384628?
BBC News covered an AI translator for Bats, soon it may apply to most animal species
I have not seen this BBC News video covered on this subreddit, but it piqued my curiosity, so I wanted to share. I have known about projects attempting to decode animal communication, such as Project CETI, which focuses on applying advanced machine learning to listen to and translate the communication of sperm whales. But the translator shown in the video blew my mind: it is already able to grasp the topics that bats communicate about, such as food, distinguishing between genders and, surprisingly, unique "signature calls"—names the bats have.

The study in question, led by Yossi Yovel of Tel Aviv University, monitored nearly two dozen Egyptian fruit bats for two and a half months and recorded their vocalisations. The researchers then adapted a voice-recognition program to analyse 15,000 samples of the sounds, and the algorithm correlated specific sounds with specific social interactions captured via video—such as when two bats fought over food. Using this framework, the researchers were able to classify the majority of the bats' sounds.

I wonder how many years it'll take to decode the speech patterns of most household animals. Do you think this is a good idea? Would you like to understand your dog or cat better? Let's discuss!

GPT-4 summary of the video:
- AI is being leveraged to understand and decode animal communication, with a specific focus on bat vocalisations, at a research facility close to the busiest highway in Israel.
- The unique open colony at Tel Aviv University allows scientists to monitor the bats round the clock and record their vocalisations with high-quality acoustics, providing a continuous stream of data.
- To teach AI to differentiate between various bat sounds, scientists spend days analysing hours of audio-visual recordings, a task that involves significant technical challenges and large databases for annotations.
- The result is a 'translator' that can process sequences of bat vocalisations, displaying the time signal of the vocalisations and subsequently decoding the context of the interaction—for instance, whether the bats are communicating about food.
- Although the idea of a 'Doolittle machine' that allows humans to communicate with animals may seem far-fetched, the advances made through AI are steering us closer to this possibility.

Interesting article on the topic: Scientific American
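The pipeline the study describes—label recorded calls with the social context seen on video, then train a classifier to recognize that context from sound alone—can be sketched in miniature. Everything below is synthetic and illustrative: the context names, the frequencies, and the nearest-centroid classifier are my own assumptions for the sketch, not details from the Yovel study.

```python
import numpy as np

SR = 10_000  # sample rate (Hz) for the toy clips

def make_clip(freq_hz, rng, n=2048):
    """Synthesize a noisy tone standing in for a recorded bat vocalization."""
    t = np.arange(n) / SR
    return np.sin(2 * np.pi * freq_hz * t) + 0.3 * rng.standard_normal(n)

def features(clip):
    """Coarse spectral-band energies: a stand-in for real acoustic features."""
    spectrum = np.abs(np.fft.rfft(clip))
    bands = np.array_split(spectrum, 8)   # 8 frequency bands
    energy = np.array([b.sum() for b in bands])
    return energy / energy.sum()          # normalize to energy fractions

rng = np.random.default_rng(0)

# Hypothetical interaction contexts, distinguished here only by the
# dominant frequency of the toy tone (real calls are far messier).
contexts = {"food": 800.0, "perch": 2600.0}

# "Training": average feature vector per labeled context.
train = {c: np.mean([features(make_clip(f, rng)) for _ in range(50)], axis=0)
         for c, f in contexts.items()}

def classify(clip):
    """Assign a new clip to the nearest context centroid."""
    x = features(clip)
    return min(train, key=lambda c: np.linalg.norm(x - train[c]))
```

A held-out clip synthesized around 800 Hz lands on the "food" centroid; one around 2600 Hz lands on "perch". The real system replaces the toy tones with recorded calls and the centroid rule with a full voice-recognition model, but the label-then-correlate structure is the same.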
Thanks doc
https://i.redd.it/ytm9a8ue0f9d1.png
We Found the Hidden Cost of Data Centers. It's in Your Electric Bill
This is relevant to this sub because, as the video stresses, facilitating AI is the main reason for the described increase in data center development. The impact AI development has on human lives is a necessary part of the conversation about AI. I have no doubt that the Data Center Coalition will claim that separating data centers out as a special class of payer, or other significant measures to reduce the impact on area residents, would stifle AI development.

For the discussion, I am particularly interested to know how many of those optimistic and enthusiastic about AI think these measures should be taken. Should the data center companies cover the increased costs instead of the residents taking the hit? Should there be increased legislation to reduce the negative impact on the people living where data centers are set up? Or should the locals just clench their teeth and appreciate the potential future benefits?
New AI detects skin cancer better than human doctors
https://www.engadget.com/2018/05/29/ai-outperforms-human-doctors-in-spotting-skin-cancer/
Startup to replace doctors
I'm a doctor currently working at a startup that is very likely going to replace doctors in the coming decade. It won't be a full replacement, but it's pretty clear that an AI will be able to understand/chart/diagnose/provide treatment with much better patient outcomes than a human.

Right now Nuance (Microsoft's AI charting scribe) is being implemented in some hospitals, and most people who have used it are in awe. Having a system that understands natural language, is able to categorize information in a chart, and can then provide differential diagnoses and treatment based on what's available given the patient's insurance is pretty insane. And this is version 1. Other startups are also taking action and investing in this fairly low-hanging-fruit problem. The systems are relatively simple, and it'll probably affect the industry in ways that most people won't even comprehend. You have excellent voice recognition systems, and you have LLMs that understand context and can be trained on medical data (diagnoses are just statistics with some demographics or context inference).

My guess is most legacy doctors are thinking this is years/decades away because of regulation, and because how could an AI take over your job? I think there will be a period of increased productivity, but eventually, as studies funded by AI companies show that patient outcomes have actually improved, the public/market will naturally devalue docs. Robotics will probably be the next frontier, but it'll take some time.

That's why I'm recommending anyone doing med to 1) understand that the future will not be anything like the past, and 2) consider procedure-rich specialties.

*** edit: Quite a few people have been asking about the startup. It took a while because I was under an NDA. Anyway, I've just been given the go—the startup is drgupta.ai. Prolly unorthodox, but if you want to invest, DM; still early.
AI software that helps doctors diagnose like specialists is approved by FDA
https://www.theverge.com/2018/4/11/17224984/artificial-intelligence-idxdr-fda-eye-disease-diabetic-rethinopathy
FDA approves AI-powered diagnostic that doesn't need a doctor's help
https://www.technologyreview.com/the-download/610853/fda-approves-first-ai-powered-diagnostic-that-doesnt-need-a-doctors-help/?utm_campaign=add_this&utm_source=facebook&utm_medium=post
This AI Just Beat Human Doctors On A Clinical Exam
https://www.forbes.com/sites/parmyolson/2018/06/28/ai-doctors-exam-babylon-health/#72426ed12c0d
My AI is so bright, I gotta wear shades.
I've built a pair of AI-enabled glasses that allow you to interact with objects in the real world just by gesturing at them. For example, if you wave at the lamp you're looking at, it will turn on. Or, if you wave at your smart speaker, it will play music. It is extensible and can be adapted to control any number of objects, with full details on my GitHub page. The entire BOM is under $150, making it very accessible.

I can also envision many additional applications of the technology, such as assistive applications for those with a disability, or fast charting/order entry for medical practitioners, to name a few.

See it in action: https://youtu.be/7UYi-exvHr0
Full details on GitHub: https://github.com/nickbild/shaides

Hope you like it!
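For anyone curious how the glue between "the model saw you wave at X" and "X does something" can be structured, here is a minimal dispatch-layer sketch. The device names, action strings, and toggle semantics are my own illustration, not code from the shaides repo (which also handles the camera and model inference that this sketch assumes have already happened).

```python
# Toy dispatch layer: once a vision model decides which object the
# wearer gestured at, toggle that device and report the action taken.
# Device names and verbs here are illustrative placeholders.

device_state = {"lamp": False, "speaker": False}

def describe(name, on):
    """Human-readable action string for a device's new state."""
    verbs = {"lamp": ("on", "off"), "speaker": ("play", "pause")}
    return f"{name} -> {verbs[name][0] if on else verbs[name][1]}"

def on_gesture(detected_object):
    """Handle a classifier result: toggle the matching device, if known."""
    if detected_object not in device_state:
        return "unrecognized object"
    device_state[detected_object] = not device_state[detected_object]
    return describe(detected_object, device_state[detected_object])
```

Waving at the lamp twice would then yield `lamp -> on` followed by `lamp -> off`; keeping the state table separate from the recognition step makes it easy to add new controllable objects without retraining anything.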