1980: TVs will fry your brain
1990: Videogames will fry your brain
2000: Computers will fry your brain
2010: Smartphones will fry your brain
2020: AI will fry your brain
Any takes for the 2030s?
Climate change.
Literally.
2030: Cyborg w/AI will fry your brain. Literally though.
Neural implants? Only this time they’re really going to fry your brain.
And before that, books and comics. But LLMs are different: they pretend to be your friend but actually just encourage whatever you come up with. You can easily fry people's brains by being their sycophant, and now everyone can subscribe to one.
I mean, based fully on our current dystopian reality, I feel you just made a really good point about tech growing to the point where it fully detaches you from reality, and indeed fries your brain by convincing you that fantasies are real.
MAGA is a great example of people with brains so fried they think a pedophile ex-conman with 34 felonies who killed millions of Americans through a poor pandemic response is somehow helping them by destroying USAID, DEI, healthcare, and Social Security.
Their brains are gonzo, all through the constant applied exploitation of all the tech you just mentioned combined.
AI will absolutely make it worse.
Well looking around at where we are today, maybe TVs did fry our brains.
2030: Critical thought will fry your brain
i think reading the title of this post hurt my brain. like what are we doing here? making medical claims using sensationalist and meaningless language… seems unhelpful
I fucking hate this AI shit, but I'll admit I end up using Gemini (knowing it's wrong sometimes). It's like how I'd use Google, but for more complex asks instead of simple search queries. I couldn't imagine using it beyond that, other than a follow-up or two.
It's just a chatbot that has access to info. Who goes onto their cable company's website and befriends the chatbot?
I have found Google search to be getting progressively worse, whereas I can type out a question to Gemini that will return better results than Google search. It's annoying that Google search has gotten so bad, and DuckDuckGo will return you something interesting but not relevant. So Gemini is my Google search nowadays.
It very well may be intentional: to drive people away from traditional search and into Gemini.
Oh, do you mean Claudia!? She’s awesome!
Found the Richard Dawkins :P
I've used GPT a couple times when I was searching the web and forums for well over an hour and found nothing relevant enough to work. The issue got solved in 5-10 minutes.
They enshittified the search so now using the chatbot is more useful. The search just returns slop and even fake slop forums.
Pretty much. Can't find useful info without having to put in a LOT of extra work that I wouldn't have a decade ago.
Fuck though, I love being able to ask it for part numbers and info. Much less hassle to ask it than to use the shitty corpo parts catalogues' search features, especially when there are weird naming schemes and a lack of descriptions. Clicking through 50 parts trying to find the right one sucks.
It's more that SEO is so well known at this point that you can whip up whatever AI-generated garbage you want to rank high on search engines in seconds. For now the AIs are just better at "wading" through the trash, since the data they're trained on is somewhat curated. Once all they can train on is slop, you better hope you still have some encyclopedias and textbooks lying around.
I mean, I have been using DDG for years now. I just could not find the right answer for my specific issue on my specific Linux distro, and AI was sadly just faster.
Should we trust a researcher whose brain got fried? Did they remember to do the old double-blind setup before the frying of the brains occurred?
AI is like a dog looking at itself in a mirror.
Some dogs are smart and understand that this is a tool, and that it is there to help you see things better… Some dogs are fucking morons and think their reflection is another dog, and they wanna fuck and fight…
There are a ton of good use cases for AI, and none of them include coquettish sexbots or drawings of me as a Simpson or a Ghibli sketch.
How do you know the dogs which want to fuck and fight aren’t the smarter ones?
What if the other dogs don't recognize the reflection as anything meaningful at all, not as a tool, not as a reflection? In that case, at least the "dumb" dogs figured out that something's up.
Edit: it's anthropomorphizing to assume that a nonchalant reaction = understanding well enough not to care. There are many reasons any particular dog may not fight a mirror. In particular, they may just rely less on vision to determine whether something is alive or not. That would not indicate understanding, though; it would indicate the dog's understandably passive approach to things that don't seem to have any significance. Closer to a lack of awareness than an actual understanding of any kind.
I really do see the issue with AI. I see people around me outsource thinking to it too much. Like, literally: as if they are happy that a machine can make their life choices for them. This is extremely worrying. It's about how people use it.
Thinking is hard, and people would prefer to feel instead. When you just have to vibe with your AI that thinks for you, people will absolutely use it and disempower themselves under the illusion of empowerment. They will infantilize themselves and end up being treated like the children they want to be.
I always thought recommendation algorithms would do it, but the progress stopped at some point. We had apps recommending videos, music, feeds, news and so on for a long time, but it never evolved into recommended careers or recommended places to live; not in the sense where some algorithm that tracks you all the time tells you what your next important life choice should be. I don't know anyone who's using AI like that yet, but I can see it happening in the future.
I think the key point is that you're not outsourcing critical thinking to LLMs, but are instead using them as a tool to do grunt work that you could've done yourself, just faster than you could pump it out. This means constantly being critical of everything they do: asking questions, asking for links to credible sources, asking for info to help evaluate the pros and cons of multiple approaches, with you making the decisions and learning along the way. Overall, any work an LLM produces that will have your name on it should be work you entirely understand and agree with. For coding, I find agent markdown files to be especially helpful to make sure the LLM follows my desired practices without me constantly making it refactor.
Largely, my assumption at this point is that LLMs may not always be around, so I definitely don’t want to be left holding the bag with a bunch of slop I can’t manage on my own. I think I’ll feel better when I can run open weight models on my own hardware that are fully competitive with cloud models. With models like Qwen 3.6 27B, it seems we are getting closer to that.
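To illustrate what I mean by an agent markdown file: it's just a plain instructions file the coding agent reads before working. A minimal sketch (the filename and every rule below are only examples of my own, not any tool's required format):

```markdown
# Agent instructions

## Practices to follow
- Prefer small, single-purpose functions; no file over ~300 lines.
- Every public function gets a docstring and type hints.
- Never add a new dependency without asking first.

## Review expectations
- Link to the docs for any API you use that isn't stdlib.
- Explain trade-offs when more than one approach is reasonable.
```

The point is that the rules live in one place I wrote and reviewed, instead of being re-negotiated in every chat.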
Those are important studies, but nothing shocking. The conclusion to draw from them is the same one we've drawn for every technology that has improved our lives to some degree: without it, we tend to be either incompetent (because losing access to it isn't worth planning for) or demotivated (because why would we deprive ourselves of technology that makes our work so much less exhausting?).
It doesn’t necessarily remove our capacity to think (and the article falsely generalises to critical thinking), it shifts what kind of thinking we do.
If AI is as good or better than I am at writing code, then I’ll switch my brain to only do the orchestrating and architecture rather than the writing code part. And yes, if you remove AI, then the switch will cause me to perform less than I used to before AI, but not permanently, only until I get used to it again.
If an AI is better than a doctor at finding cancer indicators, then the doctor will focus their mind on finding solutions only rather than splitting it on both the detection and solution.
This is not new, not bad, and I'll even go to the extent of saying it's a great use of AI: humans evolved for specialization. The less varied the tasks are, the better we are at the subset we specialize in. That's what has driven our rapid technological and societal advances in the past millennia.
But, AI has many issues and many detrimental applications as well, so don’t see this comment as a full endorsement of AI.
I don't want it. All it does is negate years of learned experience and the ability to organically formulate ideas.

A study already came out showing that high school graduates can't even read or write; they're functionally illiterate.
Can't you see that this is the same kind of propaganda your grandparents were spreading about computers, just applied to AI now?
Besides, why are colleges passing illiterate students? That’s the actual problem.
There’s a tiny difference between then and now called scientific evidence. These are actual scientific studies saying that using AI results in lower cognitive abilities.
Can confirm. Reddit is filled with abject brain-dead dumbasses. Since most content is AI-generated, it makes sense.
Not sure about the method: to me it shows people are more willing to give up when the computer appears to be broken.
I think the control group needs to experience a similar computer service failure, but maybe just swap out the AI for a basic calculator tool, or a PDF with formulas, or a cheat sheet or something 😅
Studies show that using a bulldozer for plowing a field decreases the farmer's muscle density after just one day of use.
Christ. What a load of shit.
I’m reminded of “Johnny Mnemonic,” the 1995 movie. From IMDB:
In 2021, society is driven by a virtual Internet, which has created a degenerative effect called “nerve attenuation syndrome” or NAS. Megacorporations control much of the world, intensifying the class hostility already created by NAS.
It seems to me that AI is the real “virtual internet” here.
Although rather unrelated to the topic at hand, I also liked this bit: [Henry] Rollins, who is uninterested in science fiction, joined the cast because he liked the film's focus on an upcoming disadvantaged underclass.

Well, when I communicate with the AI for more than two to five minutes, I almost always end up feeling like the guy in the picture (for anyone who didn't recognize him, it's a character from the movie Idiocracy).
Dwayne Elizondo Mountain Dew Camacho!