From the Community | How we use LLMs matters
Forever a Luddite, I’ve only just started using ChatGPT. I’m not like some (far more productive) grad students who can recite whether Claude or o4-mini or whatever else is best for coding; I pretty much just stick to ChatGPT and use it as a scientific buddy.
I have what I always wanted as a kid – a friend! And more importantly, I have a “friend” who can wax poetic about anything I’m interested in.
ChatGPT provides reasonable collections of readings – summaries on everything from protein hydrodynamics to evolutionary biology to art history. I can only imagine that this is as drastic a shift as the incipience of the Internet or recombinant DNA technologies. Information, and even basic reasoning, are democratized.
My apprehension towards using these tools is simple: I don’t want to stop thinking.
I find it hard enough to sit down and write – the attention economy has affected me as much as any 13-year-old iPad kid. At the same time, I find that using LLMs can be pedagogical – I ask the model to teach me about autoimmune thrombocytopenia, statistical jackknifing or the difference between dark and light rum. But I also see people using them to replace their judgment: “Please write code to analyze RNA-seq data.” “What does the gene expression data tell me here?” “Write a story I can send to a journal with these findings.”
With dramatic ease of use, we will inevitably see a massive outsourcing of cognitive activity. Even if the models can’t themselves reason, they give us the means to avoid reasoning ourselves. By asking the model to write for us, to interpret data for us, to try to think for us, we lose the struggle of learning. Education is an uphill battle against one’s own stupidity. Thus, by surrendering the skirmish, will we lose our ability as humans to think? More simply: are the kids done for?
A more optimistic view, offered by friends, is that LLMs will open avenues for “higher cognitive activities.” But what higher cognitive activity exists than reasoning? What could be more important than spelunking through rational, difficult chains of thought until a “eureka” moment?
I’m fine with a model reformatting an Excel document for me, but I’m much more resistant to it giving me hypotheses to explain my data, or discussing how to assess the tractability of a scientific problem. For me, that process is why I’m alive.
Of course, in science and medicine, our ultimate duty is to truth and to the amelioration of suffering in the world. Advanced intelligence models will undoubtedly help us approximate truth and better treat patients. But if we let advanced models usurp human thinking, I think we lose another base value of humanity: the value of active cognition.
We’ve started seeing professors and companies looking to outsource large chunks of human work – medicine, biology, artwork. Maybe the long-term future of non-procedural medicine *is* practitioners without residency training using statistical models to guide treatment. This, arguably, could make outcomes better and return physicians to being healers, in every sense of the word. I’d like that.
But in other fields (art, literature, basic research), why on Earth would I want a world where human cognitive output is wholly irrelevant? Even if it is “better” than us, what am I going to do with my extra time not spent reasoning and creating? I’d argue that a hedonistic life scrolling on TikTok is likely to be less fulfilling than the lives we have today. Especially if that content is generated by things trained on, but not experiencing, the human condition.
By omitting the joyous pain of writing an article or the frustration of planning a difficult experiment, we are liable to completely lose what makes us intelligent. Yin prompts the existence of yang, love exists because of hate, and knowledge and wisdom because of the feeling of idiocy. If we rarely hit our heads against the ceiling of understanding, we lose our intellectual height.
Clearly, LLMs have changed how we work – and will continue to do so in ways beyond my ability to predict. They will hopefully disrupt many terrible things about our society, such as educational barriers and lack of access to healthcare. But to become Soma-driven, hapless auditors of LLMs is to outsource our meaning to machines. In my opinion, the struggle of creative production is the joy of life. Thus, how we use LLMs matters.
Here’s hoping that we leverage the machines to learn and experience, lest we lose our divinely inspired sense of reasoning. However flawed, our reasoning is what makes life worth living.
Humza Khan is a third-year MD-PhD student at the Stanford University School of Medicine and a first-year PhD student in the Program in Immunology.
This article was not produced or edited by an LLM, evidenced by its poor diction and grammatical choices. Ironically, the author happens to simply love using an em-dash.
The post From the Community | How we use LLMs matters appeared first on The Stanford Daily.