…of course you’re going to lose it. This post on Musk-X triggered a train of thought in me:
Just had a fascinating lunch with a 22-year-old Stanford grad. Smart kid. Perfect resume. Something felt off though. He kept pausing mid-sentence, searching for words. Not complex words – basic ones. Like his brain was buffering. Finally asked if he was okay. His response floored me.
“Sometimes I forget words now. I’m so used to having ChatGPT complete my thoughts that when it’s not there, my brain feels… slower.”
He’d been using AI for everything. Writing, thinking, communication. It had become his external brain. And now his internal one was getting weaker.
This concerns me, because it’s been an ongoing topic of conversation between the Son&Heir (a devout apostle of A.I.) and me (a very skeptical onlooker of said thing).
I have several problems with A.I., simply because I’m unsure of the value of its underlying assumption — its foundation, if you will — which holds that the accumulated knowledge on the Internet is solid: that even if there were some inaccuracies, they would be overcome by a preponderance of correct theses. If that’s the case, then all well and good. But I am extremely leery of those “correct” theses: who decides what is truth, or nonsense, or (worst of all) highly plausible nonsense which only a dedicated expert (in the truest sense of the word) would have the knowledge, time and inclination to correct? The concept of A.I. seems to be a rather uncritical endorsement of “the wisdom of crowds” (i.e. received wisdom).
Well, pardon me if I don’t agree with that.
But returning to the argument at hand, Greg Isenberg uses the example of the calculator and its dolorous effect on mental arithmetic:
Remember how teachers said we needed to learn math because “you won’t always have a calculator”? They were wrong about that. But maybe they were right about something deeper. We’re running the first large-scale experiment on human cognition. What happens when an entire generation outsources their thinking?
And here I agree, wholeheartedly. It’s bad enough to think that at some point, certain (and perhaps important) underpinnings of A.I. may turn out to be fallacious (whether unintended or malicious — another point to be considered) and large swathes of the A.I. inverted pyramids’ points may have been built, so to speak, on sand.
Ask yourself this: had A.I. existed before the realities of astrophysics were understood, we would have believed, uncritically and unshakably, that the Earth was at the center of the universe. Well, we did. And we were absolutely and utterly wrong. After astrophysics came onto the scene, think how long it would have taken for all that A.I. to be overturned and corrected — as it actually took in the post-medieval era. Most people at that time couldn’t be bothered to think about astrophysics and just went on with their lives, untroubled.
What’s worse, though, is that at some point in the future the human intellect, having become flabby and lazy through its dependence on A.I., may not have the basic capacity to correct itself, to go back to first principles because, quite frankly, those principles would have been lost and our capacity to recreate them likewise.
Like I said, I’m sure of only two things in this discussion: the first is the title of this post, and the second is my distrust of hearsay (my definition of A.I.).
I would be delighted to be disabused of my overall position, but I have to say it’s going to be a difficult job because I’m highly skeptical of this new wonder of science, especially as it makes our lives so much easier and more convenient:
He’d been using AI for everything. Writing, thinking, communication. It had become his external brain.
It’s like losing the muscle capacity to walk, and worse still the instinctive knowledge of how to walk, simply because one has come to depend completely on an external machine to carry out that function of locomotion.
P.S. And I don’t even want to talk about this bullshit.
Weren’t Plato and/or Socrates against writing, on the grounds that it would destroy the exercise of memory, since one could just look something up?
I guess about the only AI I use is spellcheck; I stay away from all else.
Sturgeon’s Law applies here, as it does everywhere else: 90% of Everything is Crap.
People are lazy and selfish. Anything that lets them (us) indulge those traits will be embraced by the vast majority who don’t want to consider the consequences of such indulgence.
Many people who rely on technology today have become dumb as a pile of rocks. Many age groups too, not just the young. After the SCAMdemic it became worse. There are people who can’t even have any kind of decent in-person conversation. And even phone conversations. People space out. Can’t focus. You tell someone something and two seconds later they say you never told them that. It’s frustrating.
Watch the movie Idiocracy if you have not yet watched it. I think that movie is what our real world is becoming.
That young man sounds very like my post-severe-concussion brain, only much worse, and he did it to himself, by choice.
At least there’s hope that I’ll eventually get back to baseline. It sounds like it will take him a lot longer, and a lot more work, if ever to get back to what the 20th century considered normal functioning.
The calculator example is the interesting one – when younger I could do mental math almost as fast, and sometimes faster, than another person could punch the numbers into a calculator. I did math all the time in my job and my mind stayed flexible. Now, 30+ years later, I rarely need to do mental math. I had something come up a few days ago and was horribly frustrated with myself being unable to do a reasonably simple math problem in my mind. We were driving long distance and I finally had to wait until a pit stop to type it into my phone. My “math” muscle had atrophied.
At least I had the experience of knowing and using mental math, writing original essays, etc. Younger people who’ve never had to do that and rely fully on AI will never truly develop the mental capacity. Language is “logic” in a very real sense and the ability to put thoughts on paper that are understandable to others is a first principle logic exercise. It helps develop the mind.
Just remember, it’s all downhill from here. Our civilization has peaked.