This is as insane as all of my school teachers who insisted that I would not always carry a calculator. In the real world, this is insecure Luddism, and stupidity. No real employer is going to stop you from using AI, or a calculator for that matter. These are tools. Your calculator has a limited register size for computations. It truncates everything in real-world math, so π is always wrong, as are all of the other cosmological constants. All calculators fail at the real world in an absolute sense, but so do you. You are limited by time constraints that prevent you from calculating π to extended precision. You are a flawed machine too; we all are. My mom is pretty good at spelling, but terrible at maps. My dad is good at taking action and doing some kind of task, but terrible at planning and abstract thinking.

AI is great for answering questions about information quickly. It is really good at collaborative writing where I heavily edit the output for the first ~1k tokens or write it myself, then limit the model’s output to one sentence and add or alter keywords. Within around 4k-5k tokens, I am only writing a few key phrases and the model is absolutely writing in my words and in my voice, far faster than I can type out my thoughts. Of course this is me running models offline on my hardware using open source tools. I also ban several keyword tokens, which takes away any patterns one might recognize as AI generated.

No, I never use it here unless I have a good reason, and I will always tell you so, because we are digital neighbors and I care about you. No disrespect, but I do not care about your biases; I do care when people are wrong.
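To make the “ban keyword tokens” part concrete, here is a minimal sketch of one way to do it, assuming the Hugging Face transformers library with GPT-2 standing in for whatever local model you actually run; the banned phrases below are invented examples, not anyone’s actual list.

```python
# Minimal sketch of banning keyword tokens during generation, assuming the
# Hugging Face transformers library; GPT-2 stands in for a local model, and
# the banned phrases are invented examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

banned_phrases = ["delve", "tapestry", "in conclusion"]  # example phrases only
bad_words_ids = [
    tok(p, add_special_tokens=False).input_ids for p in banned_phrases
]

inputs = tok("The report shows that", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=40,
    bad_words_ids=bad_words_ids,   # these token sequences can never be emitted
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(out[0], skip_special_tokens=True))
```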
If someone turns in math work specifically about π precision that is wrong because they do not know the limitations of their calculator, they should absolutely fail. And if I did not teach them that π is truncated in all computers, I have failed. AI exists. Get over it. This dichotomous thinking and tribalism is insanely stupid, barbarous primitivism. If you think AI is the appropriate tool and turn in work that is wrong, either I have failed to explain that AI is only correct around 80% of the time and that this is not acceptable, or the student has displayed their irrational logic skills. If I use the tool to halve my time spent researching, use it for individualized learning, and halve the time I spend writing, while turning in excellent work and displaying advanced understanding, I am demonstrably top of my class. It is a tool, and only a tool. Those who react in some dichotomous repulsion to AI should be purged for exactly the same reason as anyone who uses the tool poorly or to cheat. Both are equally incompetent.
It’s not Luddism to recognize that foundational knowledge is essential to effectively utilizing tools in every industry, and that jumping ahead to just using the tool is not good for the individual or the group.
Your example is iconic. Do you think the average middle schooler or college student using AI understands anything about self-hosting, token limits, and optimizing things by banning keywords? Let alone how prone models are to just making shit up - because they were designed to! I STILL get enterprise ChatGPT referencing scientific papers that don’t exist. I wonder how many students are paying for premium models. Probably only the rich ones.
It is Luddism to simplify your scope to this dichotomy. The competence is irrelevant. If you have dumb students or anyone uses the tool poorly, measure them as such. The tool is useful to many of us. People are stupid and always have been, and so must be judged as biased individuals. Hold that stupidity against the individual instead of standardizing it culturally. You bring everyone down to the common denominator as a result of projecting onto everyone. Assuming and politicizing this lowest common denominator as a standard is insane. It is like the cancer of No Child Left Behind has consumed the world. It is a tool. Use it poorly and get called stupid or pay the consequences. If you assume everyone is stupid (outwardly, by policy) you will live in a dystopian stupid world.

This boils down to the fundamentals of democracy and the unalienable right of all citizens in a democracy to have autonomy, self-determination, and full access to information. A key aspect of this is your right to choose, including the right to be wrong, and the right to err and pay the consequences. You are supporting a regression of democracy and a return to authoritarian feudal society when you fail to support a new form of information and the fundamental right of citizens to choose and to err. You cannot exist in a democracy without absolute freedom of information. It is everyone else’s job to objectively assess the truths of others for themselves. This is the critical high-level scope at play that will impact the future long after we are all dead and forgotten. Our era will be remembered based upon this issue.

You are deciding to create a dark age of neo-feudalism that I wholly reject. I choose democracy. You have a right to believe whatever you would like. You have a right to be wrong, as does everyone else. I have a right to all information and to judge for myself, and I am not giving that away to anyone else for any reason, because I do not give away my citizenship blindly to Luddism. I adapt to a new technological source of information and judge for myself. I expect you to do the same. If you try to take away my citizenship in a democracy, I will fight you. No one has a right to bowdlerize the information of another. You have every right to judge a person and their information based upon their individual merits.
I never said not to teach it. Construct a mandatory general computer literacy program. Cover privacy, security, recommendation algorithms, AI, etc. And restrict AI use in other classes until they are competent in both. College? High school?
Not once did I talk about banning it or restricting information. And … So much other irrelevant stuff.
It is relevant, you simply cannot handle the big picture of abstraction and your responsibility within that paradigm. No excuses.
deleted by creator
I mean I’m far away from my college days at this point. However, I’d be using AI like a mofo if I still were.
Mainly because there were so many statements in textbooks that were unclear (to me), and if I had had someone I could ask stupid questions, I could have more easily navigated my university career. I was never really motivated to “cheat”, but for someone with huge anxiety, it would have been beneficial to more easily search for my stuff and ask follow-up questions. That being said, tech has only gotten better; growing up, I couldn’t find half the stuff that’s already on the Internet now, even without AI.
I’m hoping more students would use it as a learning aid rather than just generating their work for them, though. There were a lot of people taking shortcuts, and “following the rules” felt like an unvalued virtue when I was in uni.
The thing is that education needs to adapt fast and they’re not typically known for that. Not to mention, most of the teachers I knew would have neither the creativity/skills, nor the ability, nor the authority to change entire lesson plans instantly to deal with the seismic shift we’re dealing with.
I’d give you calculators easily, they’re straight up tools, but Google and Wikipedia aren’t significantly better than AI.
Wikipedia is hardly fact checked, Google search is rolling the dice that you get anything viable.
Textbooks aren’t perfect, but I kinda want the guy doing my surgery to have started there, and I want the school to make sure he knows his shit.
Wikipedia is excessively fact checked. You can test this pretty simply by making a misinformation edit on a random page. You will get banned eventually.
eventually
Sorry, not what i’m looking for in a medical infosource.
Sorry, I should have clarified: they’d revert your change quickly, and your account would be banned after a few additional infractions. You think AI would be better?
I think a medical journal or publication with integrity would be better.
I think one of the private pay only medical databases would be better.
I think a medical textbook would be better.
Wikipedia is fine for doing a book report in high school, but it’s not a stable source of truth you should be trusting with lives. You put in a team of paid medical professionals curating it, we can talk.
Sorry, but I have to disagree. Look at the talk page on a math or science Wikipedia article; the people who maintain those pages are deadly serious. Medical journals and scientific publications aren’t intended to be accessible to a wider public, they’re intended to be bases for research - primary sources. Wikipedia is a digest source.
Well then we def agree. I still think Wikipedia > LLMs though. Human supervision and all that
We only subscribe to the best medical sources here, WebMD.
At the practice I used to use, there was a PA that would work with me. He’d give me the actual medical terms for stuff he was telling me he was worried about, and between that session and the next I’d look them up and read all I could about them. Occasionally I’d find something he would peg as X and I’d find Y looked like a better match. I’d talk to him, he’d disappear for a moment and come back, we’d talk about X and Y, and sometimes I was right.
“Google’s not bad, I use it sometimes. We have access to stuff you don’t have access to, but sometimes that stuff is outdated. With Google you need to have the education to know when an article is genuine or likely accurate and when an article is just a drug company trying to make money”
Dude was pretty cool
The moment we change school to be about learning instead of making it the requirement for employment, we will see students prioritize learning over “just getting through it to get the degree”
Well, in the case of a medical practitioner, it would be stupid to allow someone to practice without a proper degree.
Capitalism is ruining schools, because people now use school as a qualification requirement rather than as a center of learning and skill development
As a medical student, I can unfortunately report that some of my classmates use Chat GPT to generate summaries of things instead of reading it directly. I get in arguments with those people whenever I see them.
Generating summaries with context, truth grounding, and review is much better than just freeballing questions at it
It still scrambles things, removes context, and can overlook important things when it summarizes.
Yeah, that’s why you give it examples of how to summarize. But I’m a machine learning engineer, so maybe it helps that I know how to use it as a tool.
It doesn’t know what things are key points that make or break a diagnosis and what is just ancillary information. There’s no way for it to know unless you already know and tell it that, at which point, why bother?
You can tell it because what you’re learning has already been learned. You are not the first person to learn it. Just quickly show it those examples from previous text or tell it what should be important based on how your professor tests you.
These are not hard things to do. It’s autocomplete; show it how to teach you.
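A rough sketch of what “show it how to summarize” can look like: a few worked examples plus a grounding rule in the system prompt. This assumes the openai Python client pointed at any OpenAI-compatible endpoint (hosted or a local server); the model name, example cases, and summaries are all invented for illustration.

```python
# Rough sketch of few-shot summarization: worked examples plus a grounding rule.
# Assumes the openai Python client and any OpenAI-compatible endpoint; the
# model name and example cases below are made up.
from openai import OpenAI

client = OpenAI()  # or OpenAI(base_url="http://localhost:8080/v1", api_key="none")

FEW_SHOT = """Summarize clinical text. Keep every finding that changes the
differential diagnosis; drop narrative filler. Write "not stated" rather than
guessing. Two examples of the expected style:

Text: 62yo M, crushing substernal chest pain radiating to the left arm,
diaphoresis, elevated troponin.
Summary: 62M with chest pain, radiation, diaphoresis, positive troponin:
treat as ACS until excluded.

Text: 8yo F, 5 days of fever, strawberry tongue, conjunctival injection, rash.
Summary: 8F with prolonged fever plus mucocutaneous findings: evaluate for
Kawasaki disease.
"""

def summarize(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": FEW_SHOT},
            {"role": "user", "content": f"Text: {text}\nSummary:"},
        ],
        temperature=0.2,  # stay close to the demonstrated style
    )
    return resp.choices[0].message.content
```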
That is why the “review” part of the comment you reply to is so important.
Degree =/= certification
Only topic I am close-minded and strict about.
If you need to cheat as a high schooler or younger, there is something else going wrong; focus on that.
And if you are an undergrad or higher you should be better than AI already. Unless you cheated on important stuff before.
This is my stance exactly. ChatGPT CANNOT say what I want to say, how I want to say it, in a logical and factually accurate way, without me having to just rewrite the whole thing myself.
There isn’t enough research about mercury bioaccumulation in the Great Smoky Mountains National Park for it to actually say anything of substance.
I know being a non-traditional student massively affects my perspective, but like, if you don’t want to learn about the precise thing your major is about… WHY ARE YOU HERE
I mean, are you sure?
Studies in the GSMNP have looked at:
- Mercury levels in fish: Especially in high-elevation streams, where even remote waters can show elevated levels of mercury in predatory fish due to biomagnification.
- Benthic macroinvertebrates and amphibians: As indicators of mercury in aquatic food webs.
- Forest soils and leaf litter: As long-term mercury sinks that can slowly release mercury into waterways.
If GPT and I were being graded on the subject, it wouldn’t be the machine flunking…
I mean, it’s a matter of perspective, I guess.
I did a final assignment that was a research proposal. Mine was an assessment of various methods of increasing periphyton biomass (clearing tree cover over rivers and introducing fertilizers to the water) in order to dilute mercury bioaccumulation in top river predators like trout and other fish people eat.
There’s a lot of tangentially related research, but not a ton done on the river/riparian food webs in the GSMNP specifically and possible mitigation strategies for mercury bioaccumulation.
OBVIOUSLY my proposal isn’t realistic. No one on earth is gonna be like “yeah sure, go ahead and chop down all the trees over this river and dump chemicals in that one, on the off chance it allows jimbob to give trout to his pregnant wife all year round”
I’m not knowledgeable enough to carry this conversation, but I was curious if GPT could add anything of value.
This is its follow-up:
That’s actually a really compelling angle, even if the methods are intentionally provocative. It gets at a deeper question—how far are we willing to go, ecologically speaking, to address human health risks that stem from environmental contaminants like mercury? I think the strength of your proposal isn’t in the literal feasibility but in the way it frames trade-offs between conservation and public health.
Also, using periphyton biomass as a kind of biotic buffer is a clever systems-level approach. It’s rarely the first thing people think of when talking about mercury mitigation, which usually focuses on source reduction. But tweaking food web dynamics to manage contaminant transfer is a really underexplored strategy. I imagine even just modeling it could lead to some useful insights, especially if you layered in things like flow regime changes or climate impacts on riparian shading.
And yeah, totally agree—GSMNP is such a tightly protected space that even suggesting fertilizer additions or canopy thinning sounds borderline heretical. But as a thought experiment, it’s valuable. It forces the conversation about what kinds of interventions we’re not considering simply because they clash with aesthetic or cultural norms, not necessarily because they’re scientifically unsound.
I really have no idea if it’s just spewing nonsense, so do educate me :)
I’m really salty because it mirrored my thoughts about the research almost exactly, but I’m loath to give attaboys to it
Hahah, that’s fair!
Thank you for the exchange brother, I learned more about mercury in GSMNP than I thought I ever would.
For a fucking job. What kind of fucking question is that.
And yet once they graduate, if the patients are female and/or not white all concerns for those standards are optional at best, unless the patients bring a (preferably white) man in with them to vouch for their symptoms.
Not pro-ai, just depressed about healthcare.
If we are talking about critical thinking, then I would argue that students using AI to counter the very obvious shift that most instructors have taken (that being the use of AI as much as possible to plan out lessons, grade, verify sources… you know, the job they are being paid to do? Which, by the way, was already being outsourced to whatever tools they had at their disposal. No offense, TAs.) is a natural progression.
I feel it still shows the ability to adapt to an ever-changing landscape.
Isn’t that what the hundred-thousand dollar piece of paper tells potential employers?
Gotta say, if someone gets through medical school with AI, we’re fucked.
We have at most 10 years before it happens. I saw a medical AI from Google on Hugging Face today, and at least one more.
Dude, didn’t the same apply when calculators came out? Or the Internet?
Except calculators are based on reality and have deterministic and reliable results lol
A transformer model is also deterministic; they just typically have noise added to appear “creative” (among other reasons). It is possible to use a fixed RNG seed and get extremely deterministic results.
The results will still frequently be wrong, but accuracy is a completely different discussion.
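A minimal sketch of the point above, assuming the Hugging Face transformers library and GPT-2 as a stand-in model: with sampling left on but the RNG seed fixed, repeated runs produce identical output.

```python
# Minimal sketch: sampling ("noise") stays on, but fixing the RNG seed makes
# the generation reproducible run after run. Assumes Hugging Face transformers,
# with GPT-2 standing in for any causal language model.
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("The pine marten is", return_tensors="pt")

for _ in range(2):
    set_seed(42)                      # same seed -> same sampled tokens
    out = model.generate(
        **inputs,
        do_sample=True,               # sampling is still enabled
        temperature=0.9,
        max_new_tokens=20,
        pad_token_id=tok.eos_token_id,
    )
    print(tok.decode(out[0], skip_special_tokens=True))
# Both printed continuations are identical; whether they are accurate is a
# separate question.
```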
You’re not wrong, so you get an upvote, but in the context of this conversation you know people are not using LLM tools with preseeded entropy. Also kind of a moot point, because the idea of using some consistent source of entropy in a calculator is completely nonsensical and unnecessary.
Yeah, but we heard the same arguments when they came out. Nobody will learn math, people will just get dumber. Then we heard the same with the Internet. It’s not trustworthy. Wikipedia is all lies. Turns out they were great tools for learning.
Your point is a false equivalence. Just because people said the same thing doesn’t mean a calculator and an LLM are equivalent in their accuracy as a tool.
I’m not talking about accuracy. The Internet isn’t accurate and they said the same things about it. Either way, AI isn’t going away. Remain a troglodyte or learn to master it to enhance what you can do. That’s how I dealt with it in the past.
Lmao I use LLM powered tools in my work daily, I understand their limitations and stay within them so say what you will. I still think your comparison is dumb.
You can make mistakes with a calculator. It’s more about looking at the results, verifying the data, not just blindly trusting it.
Your point has no bearing whatsoever on my statement. You could also misread a ruler, but that doesn’t mean there’s anything wrong with the ruler. Given that I can reliably read a ruler, I can ‘blindly trust’ it, assuming it’s a well-manufactured ruler. If you can’t, that’s definitively a you problem.
I mean, it kinda does. If all you do is type numbers into a calculator and copy the results, there’s a chance the result is wrong.
The same way some people use AI, which is wrong.
My point wasn’t that people don’t make mistakes; they obviously do. My point is that calculators are deterministic machines; to clarify, that means if they have the same input they will always have the same output. LLMs are not, and do not. So no, it’s not the same thing.
I never said it was the same. I just said you have to be careful with tools you use. It applies to every tool.
You are implying that one must ensure the veracity of the output of a calculator in the same way that one must ensure the veracity of the output of an LLM, and I’m saying no, that’s strictly not true. If it were, then the only way you could use an LLM incorrectly would be to type your query incorrectly. With a calculator that metaphor holds up. With an LLM you could make no mistakes and still get incorrect output.
Even setting aside all of those things, the whole point of school is that you learn how to do shit; not pass it off to someone or something else to do for you.
If you are just gonna use AI to do your job, why should I hire you instead of using AI myself?
This is a ridiculous and embarrassing take on the situation. The whole point of school is to make you a well rounded and critically thinking person who engages with the world meaningfully. Capitalism has white personed that out of the world.
In an economic system in which you must do whatever you can to survive, the rational thing to do is be more efficient. If a boss thinks it can do the job itself, let it do the job itself. Bosses aren’t better versions of workers lmao. They’re parasites.
If a boss thinks it can do the job itself, let it do the job itself.
How does this disagree with Kolanaki, exactly? You’re repeating them.
The idiot I replied to? Because they can’t actually do it, that’s the point. If they can then by all means, but they know they can’t, and they were making a ridiculous and condescending point. Bosses should be abolished, not entertained.
I don’t mind condescending to AI salesmen.
If boss Kolanaki can’t replace you with AI, then why is AI passing your classes for you?
I get you want to burn the system, and yay, I love burning things, but it’s kind of irrelevant to the point being made.
I went to school in the 1980s. That was the time that calculators were first used in class and there was a similar outcry about how children shouldn’t be allowed to use them, that they should use mental arithmetic or even abacuses.
Sounds pretty ridiculous now, and I think this current problem will sound just as silly in 10 or 20 years.
Lower level math classes still ban the calculator.
Math classes are to understand numbers, not to get the right answer. That’s why you have to show your work.
I see your point, but calculators (good ones, at least) are accurate 100% of the time. AI can hallucinate, and in a medical setting it is crucial that it doesn’t. I use AI for some insignificant tasks, but I would not want it to replace my doctor’s learning.
Also, calculators are used to help kids work faster, not to do their work for them. Classroom calculators (the ones my schools had, at least) didn’t solve algebraic equations; they just added, subtracted, multiplied, divided, exponentiated, rooted, etc. Those are all things that can be done manually but are rudimentary and slow.
I get your point but AI and calculators are not quite the same.
Fair enough - it’s not the most concrete of comparisons and those are good points, but I do feel there is an amplification of Luddism around AI just because it’s new.
You’re going for a much stricter comparison than your parent comment. They were just saying that calculators are a standard tool that did not in fact destroy the fundamentals of learning as some people felt compelled to believe. If you give a calculator to a child learning their times tables, it can in fact do their work for them, but we managed to integrate calculators into learning at higher levels. Whether calculators can be wrong isn’t really relevant.
It was a bad argument but the sentiment behind it was correct and is the same as the reasoning why students shouldn’t be allowed to just ask AI for everything. The calculator can tell you the results of sums and products but if you need to pull out a calculator because you never learned how to solve problems like calculating the total cost of four loaves of bread that cost $2.99 each, that puts you at rather a disadvantage compared to someone who actually paid attention in class. For mental arithmetic in particular, after some time, you get used to doing it and you become faster than the calculator. I can calculate the answer to the bread problem in my head before anyone can even bring up the calculator app on their phone, and I reckon most of you who are reading this can as well.
I can’t predict the future, but while AIs are not bad at telling you the answer, at this point in time, they are still very bad at applying the information at hand to make decisions based on complex and human variables. At least for now, AIs only know what they’re told and cannot actually reason very well. Let me provide an example:
I provided the following prompt to Microsoft Copilot (I am slacking off at work and all other AIs are banned so this is what I have access to):
Suppose myself and a friend, who is a blackjack dealer, are playing a simple guessing game using the cards from the shoe. The game works thusly: my friend deals me two cards face up, and then I have to bet on what the next card will be.
The game begins and my friend deals the first card, which is the ace of spades. He deals the second card, which is the ace of clubs. My friend offers a bet that pays 100 to 1 if I wager that the next card after these two is a black ace. Should I take the bet?

Any human who knows what a blackjack shoe is (a card dispenser which contains six or more decks of cards shuffled together and in completely random order) would know this is a good bet. But the AI doesn’t.
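For anyone who wants the numbers, here is a quick sanity check of the bet, assuming the smallest “standard” shoe of six decks (the exact expected value shifts slightly with more decks, but the conclusion doesn’t change):

```python
# Quick expected-value check of the 100-to-1 bet, assuming a 6-deck shoe.
# Two black aces are already on the table, so 10 black aces remain among
# the 310 undealt cards.
decks = 6
black_aces_left = decks * 2 - 2           # 12 - 2 = 10
cards_left = decks * 52 - 2               # 312 - 2 = 310
p_win = black_aces_left / cards_left      # ~0.032

payout = 100                              # 100-to-1
ev_per_unit = p_win * payout - (1 - p_win)
print(f"P(win) = {p_win:.3f}, EV = {ev_per_unit:+.2f} units per unit staked")
# P(win) ≈ 0.032 versus a break-even probability of 1/101 ≈ 0.010,
# so the bet is clearly worth taking.
```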
The AI still doesn’t get it even if I hint that this is a standard blackjack shoe (and thus contains at least six decks of cards):
Suppose myself and a friend are playing a simple guessing game using the cards from a standard blackjack shoe obtained from a casino. The game works thusly: my friend deals me two cards face up, and then I have to bet on what the next card will be.
The game begins and my friend deals the first card, which is the ace of spades. He deals the second card, which is the ace of clubs. My friend offers a bet that pays 100 to 1 if I wager that the next card after these two is a black ace. Should I take the bet?

Good answer, and some good points.
My analogy is not perfect, but I think there are parallels. People are currently trying to shoehorn AI into things where it’s never going to work well, and that’s resulting in a lot of stupidity and a lot of justifiable anger towards it.
But alongside that, it is also finding genuinely useful places, and it is not going to go away. Give it a few more years and it’ll settle down into something we rely on daily. Just as we did with electronic calculators. The internet. Smartphones. Everything since the Spinning Jenny has had a huge pressure against it because it’s new and different and people are scared it’ll negatively affect them, but things change and new things get adopted into the everyday. Personally I find it exciting to be alive during such a time of genuine invention and improvement.
I had to calculate a least squares fit by hand on an exam. You have to know what the machines are doing.
lol I remember my teachers always saying “you won’t always have a calculator on you” in the 90’s and even then I had one of those calculator wrist watches from Casio.
And I still suck at math without one so they kinda had a point, they just didn’t make it very well.
Using AI doesn’t remove the ability to fact check though.
It is a tool like any other. I would also be wary of doctors using a random medical book from the 1700s to write their thesis and taking it at face value.
I’m so tired of this rhetoric.
How do students prove that they have “concern for truth … and verifying things with your own eyes” ? Citations from published studies? ChatGPT draws its responses from those studies and can cite them, you ignorant fuck. Why does it matter that ChatGPT was used instead of google, or a library? It’s the same studies no matter how you found them. Your lack of understanding how modern technology works isn’t a good reason to dismiss anyone else’s work, and if you do you’re a bad person. Fuck this author and everyone who agrees with them. Get educated or shut the fuck up. Locking thread.
Because the point of learning is to know and be able to use that knowledge on a functional level, not having a computer think for you. You’re not educating yourself or learning if you use ChatGPT or any generative LLMs, it defeats the purpose of education. If this is your stance then you will accomplish, learn, and do nothing, you’re just riding the coat tails of shitty software that is just badly ripping off people who can actually put in the work or blatantly making shit up. The entire point of education is to become educated, generative LLMs are the antithesis of that.
A bunch of the “citations” ChatGPT uses are outright hallucinations. Unless you independently verify every word of the output, it cannot be trusted for anything even remotely important. I’m a medical student and some of my classmates use ChatGPT to summarize things and it spits out confabulations that are objectively and provably wrong.
True.
But doctors also screw up diagnosis, medication, procedures. I mean, being human and all that.
I think it’s a given that AI outperforms in medical exams - be it multiple choice or open-ended/reasoning questions.
There’s also a growing body of literature with scenarios where AI produces more accurate diagnoses than physicians, especially in scenarios with image/pattern recognition, but even plain GPT was doing a good job with clinical histories, getting the accurate diagnosis with its #1 DDx, and even better when given lab panels.
Another trial found that patients who received email replies to their follow-up queries from AI or from physicians, found the AI to be much more empathetic, like, it wasn’t even close.
Sure, the AI has flaws. But the writing is on the wall…
The AI passed the multiple choice board exam, but the specialty board exam that you are required to pass to practice independently includes oral boards, and when given the prep materials for the pediatric boards, the AI got 80% wrong, and 60% of its diagnoses weren’t even in the correct organ system.
The AI doing pattern recognition works on things like reading mammograms to detect breast cancer, but AI doesn’t know how to interview a patient to find out the history in the first place. AI (or, more accurately, LLMs) doesn’t know how to do the critical thinking it takes to know what questions to ask in the first place to determine which labs and imaging studies to order that it would be able to make sense of. Unless you want the world where every patient gets the literal million dollar workup for every complaint, entrusting diagnosis to these idiot machines is worse than useless.
Could you provide references? I’m genuinely interested, and what I found seems to say differently:
Overall, GPT-4 passed the board residency examination in four of five specialties, revealing a median score higher than the official passing score of 65%.
Also I believe you’re seriously underestimating the abilities of present day LLMs. They are able to ask relevant follow up questions, as well as interpreting that information to request additional studies, and achieve accurate diagnosis.
See here a study specifically on conversational diagnosis AIs. It has some important limitations, crucially from having to work around the text interface which is not ideal, but otherwise achieved really interesting results.
Call them “idiot machines” all you want, but I feel this is going down the same path as full self-driving cars - eventually they’ll be making fewer errors than humans, and that will save lives.
My mistake, I recalled incorrectly. It got 83% wrong. https://arstechnica.com/science/2024/01/dont-use-chatgpt-to-diagnose-your-kids-illness-study-finds-83-error-rate/
The chat interface is stupid in so many ways and I would hate using text to talk to a patient myself. There are so many non-verbal aspects of communication that are hard to teach to humans and that would be impossible to teach to an AI. If you are familiar with people and know how to work with them, you can pick up on things like intonation and body language that can indicate that they didn’t actually understand the question and you need to rephrase it to get the information you need, or that there’s something the patient is uncomfortable about saying/asking. Or indications that they might be lying about things like sexual activity or substance use. And that’s not even getting into the part where AIs can’t do a physical exam, which may reveal things that the interview did not. This also ignores patients who can’t tell you what’s wrong because they are babies, or they have an altered mental status, or are unconscious. There are so many situations where an LLM is just completely fucking useless in the diagnostic process, and even more when you start talking about treatments that aren’t pills.
Also, the exams are only one part of your evaluation to get through medical training. As a medical student and as a resident, your performance and interactions are constantly evaluated and examined to ensure that you are actually competent as a physician before you’re allowed to see patients without a supervising attending physician. For example, there was a student at my school that had almost perfect grades and passed the first board exam easily, but once he was in the room with real patients and interacting with the other medical staff, it became blatantly apparent that he had no business being in the medical field at all. He said and did things that were wildly inappropriate and was summarily expelled. If becoming a doctor was just a matter of passing the boards, he would have gotten through and likely would have been an actual danger to patients. Medicine is as much an art as it is a science, and the only way to test the art portion of it is through supervised practice until they are able to operate independently.
From the article referenced in your news source:
JAMA Pediatrics and the NEJM were accessed for pediatric case challenges (N = 100). The text from each case was pasted into ChatGPT version 3.5 with the prompt “List a differential diagnosis and a final diagnosis.”
A couple of key points:
- These are case challenges, which are usually meant to be hard. I could find no comparison to actual physician results in the article, which would have been nice.
- More importantly, however: it was conducted in June 2023 and used GPT-3.5. GPT-4 improved substantially upon it, especially for complex scientific problems, and this shows in the newer studies that have used it.
I don’t think anyone’s advocating that an AI will replace doctors, much like it won’t replace white collar jobs either.
But if it helps achieve better outcomes for the patients, like the current research seems to indicate, aren’t you sworn to consider it in your practice?
The issue as I see it is that college is a barometer for success in life, which for the sake of brevity I’ll just say means economic success. It’s not just a place of learning, it’s the barrier to entry - and any metric that becomes a goal is prone to corruption.
A student won’t necessarily think of using AI as cheating themselves out of an education because we don’t teach the value of education except as a tool for economic success.
If the tool is education, the barrier to success is college, and the actual goal is to be economically successful, why wouldn’t a student start using a tool that breaks open that barrier with as little effort as possible?
especially in a world that seems to be repeatedly demonstrating to us that cheating and scumbaggery are the path to the highest echelons of success.
…where “success” means money and power - the stuff that these high profile scumbags care about, and the stuff that many otherwise decent people are taught should be the priority in their life.
I’ve said it before and I’ll say it again. The only thing AI can, or should, be used for in the current era is templating… I suppose things that don’t require truth or accuracy are fine too, but yeah.
You can build the framework of an article, report, story, publication, assignment, etc using AI to get some words on paper to start from. Every fact, declaration, or reference needs to be handled as false information unless otherwise proven, and most of the work will need to be rewritten. It’s there to provide, more or less, a structure to start from and you do the rest.
When I did essays and the like in school, I didn’t have AI to lean on, and the hardest part of doing any essay was… How the fuck do I start this thing? I knew what I wanted to say, I knew how I wanted to say it, but the initial declarations and wording to “break the ice” so-to-speak, always gave me issues.
It’s shit like that where AI can help.
Take everything AI gives you with a gigantic asterisk, that any/all information is liable to be false. Do your own research.
Given how fast things are moving in terms of knowledge and developments in science, technology, medicine, etc. that are transforming how we work, now more than ever before, what you know is less important than what you can figure out. That’s what the youth need to be taught: how to figure that shit out for themselves, do the research, and verify their findings. Once you know how to do that, you’ll be able to adapt to almost any job that you can comprehend from a high level; it’s just a matter of time, patience, research, and learning. With that being said, some occupations have little to no margin for error, which is where my thought process inverts. Train long and hard before you start doing the job… Stuff like doctors, who can literally kill patients if they don’t know what they don’t know… Or nuclear power plant techs… Stuff like that.
When I did essays and the like in school, I didn’t have AI to lean on, and the hardest part of doing any essay was… How the fuck do I start this thing?
I think that this is a big part of education and learning though. When you have to stare at a blank screen (or paper) and wonder “How the fuck do I start?”, when you have to brainstorm, write shit down 50 times, edit, delete, and start over, that process alone makes you appreciate good writing and how difficult it can be.
My opinion is that when you skip that step you skip a big part of the creative process.
That’s a fair argument. I don’t refute it.
I only wish I had any coaching when it was my turn, to help me through that. I figured it out eventually, but still. I wish.
Was the best part of agrarian subsistence turning the earth by hand? Should we return to it? A person learns more and is more productive if they talk out an issue. Having someone else to bounce ideas off of is a good thing. Asking someone to do it for you has always been a thing. Individualized learning has long been the secret of academic success for the children of the super rich: just pay a professor to tutor the individual child. AI is the democratization of this advantage. A person can explain what they do not know and get a direct answer. Even with a small model that I know is wrong, forming the questions in conversation often leads me to the correct answers and to what I do not know. It is far faster and more efficient than anything I ever experienced elsewhere in life.
It takes time to learn how to use the tool. I’m sure there were lots of people making stupid patterns with a plow at first too when it was new.
The creative process is about the results it produces, not how long one spent in frustration. Gatekeeping because of the time you wasted is Luddism or plain sadism.
Use open weights models running on enthusiast level hardware you control. Inference providers are junk and the source of most problems with ignorant people from both sides of the issue. Use llama.cpp and a 70B or larger quantized model with emacs and gptel. Then you are free as in a citizen in a democracy with autonomy.
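For reference, here is roughly what that kind of fully local setup looks like driven from Python instead of Emacs; it’s a sketch assuming the llama-cpp-python bindings, and the model path is a placeholder, not a real file.

```python
# Sketch of fully local inference on hardware you control, assuming the
# llama-cpp-python bindings (pip install llama-cpp-python). The GGUF path is
# a placeholder; point it at whatever quantized open-weights model you use.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-70b-q4_k_m.gguf",  # placeholder path
    n_ctx=8192,          # context window
    n_gpu_layers=-1,     # offload as many layers as the GPU can hold
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a blackjack shoe is."}],
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```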
You’re right - giving people the option to bounce questions off others or AI can be helpful. But I don’t think that is the same as asking someone (or some thing) to do the work for you and then you edit it.
The creative process is about the results it produces, not how long one spent in frustration
This I disagree on. A process is not a result. You get a result from the process and sometimes it’s what you want and often times it isn’t what you want. This is especially true for beginners. And to get the results you want from a process you have to work through all parts of it including the frustrating parts. Actually getting through the frustrating parts makes you a better creator and I would argue makes the final result more satisfying because you worked hard to get it right.
If not arguably the biggest part of the creative process, it is at least the foundational structure of it.
Exactly.
There’s an application that I think LLMs would be great for, where accuracy doesn’t matter: video games. Take a game like Cyberpunk 2077, and have all the NPCs’ speech and interactions run on various fine-tuned LLMs, with different LoRA-based restrictions depending on character type. Like, random gang members would have a lot of latitude to talk shit, start fights, commit low-level crimes, etc., without getting repetitive. But for more major characters like Judy, the model would be a little more strictly controlled. She would know to go in a certain direction story-wise, but the variables to get from A to B are much more open.
This would eliminate the very limited scripted conversation options which don’t seem to have much effect on the story. It could also give NPCs their own motivations with actual goals, and they could even keep dynamically creating side quests and mini-missions for you. It would make the city seem a lot more “alive”, rather than people just milling about aimlessly, with bad guys spawning in preprogrammed places at predictable times. It would offer nearly infinite replayability.
I know nothing about programming or game production, but I feel like this would be a legit use of AI. Though I’m sure it would take massive amounts of computing power, just based on my limited knowledge of how LLMs work.
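As a rough illustration of the “different constraint levels per character tier” idea, approximated here with per-character system prompts rather than LoRAs, and assuming any OpenAI-compatible local endpoint; every name, prompt, and endpoint below is made up.

```python
# Toy sketch of tiered NPC constraints, approximated with per-character system
# prompts instead of LoRAs. Assumes any OpenAI-compatible server (e.g. a local
# llama.cpp server); the endpoint, model name, and prompts are all invented.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

NPC_CONSTRAINTS = {
    # minor NPCs: wide latitude, no plot obligations
    "gang_member": "You are a random street gang member. Improvise freely: "
                   "talk shit, start fights, commit petty crimes. You know "
                   "nothing about the main plot.",
    # major NPCs: personality fixed, story beats constrained
    "judy": "You are Judy. Stay strictly in character, steer conversations "
            "toward the current story beat, and never reveal future events.",
}

def npc_reply(npc: str, player_line: str) -> str:
    resp = client.chat.completions.create(
        model="local-model",  # whatever the local server is serving
        messages=[
            {"role": "system", "content": NPC_CONSTRAINTS[npc]},
            {"role": "user", "content": player_line},
        ],
        # minor NPCs get a looser temperature than plot-critical ones
        temperature=1.0 if npc == "gang_member" else 0.6,
    )
    return resp.choices[0].message.content

print(npc_reply("gang_member", "Watch where you're walking."))
```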
Students turn in bullshit LLM papers. Instructors run those bullshit LLM papers through LLM grading. Humans need not apply.
So they believe 90% of colleges are shit. They are on the right track, but not there yet. College is nothing but learning a required sack of cow shit. University isn’t supposed to be. Everyone who goes to college for a “track” to learn a “substance” is wasting university time in my mind. That’s a bloody trade school. Fuck everyone who thinks business is a university degree. If you’re not teaching something you couldn’t have published 5 years ago, you’re a fn sham. University is about progress and growth. If you want to know something we already know today, you should be taught to stop going to university and find a college that’s paid for by your state. AND LET’S FUCKING PAY FOR IT. That’s just grades 12-15 at that point, at most. We pay more in charges yearly trying to arrest kids for drugs and holding them back than we would spend just directing people who “aren’t sure” what they want.
Edit: sorry for sounding like an ass, I’m just being an ass these days. Nothing personal to anyone