…of course you’re going to lose it. This post on Musk-X triggered a train of thought in me:
Just had a fascinating lunch with a 22-year-old Stanford grad. Smart kid. Perfect resume. Something felt off though. He kept pausing mid-sentence, searching for words. Not complex words – basic ones. Like his brain was buffering. Finally asked if he was okay. His response floored me.
“Sometimes I forget words now. I’m so used to having ChatGPT complete my thoughts that when it’s not there, my brain feels… slower.”
He’d been using AI for everything. Writing, thinking, communication. It had become his external brain. And now his internal one was getting weaker.
This concerns me, because it’s been an ongoing topic of conversation between the Son&Heir (a devout apostle of A.I.) and me (a very skeptical onlooker of said thing).
I have several problems with A.I., simply because I’m unsure of the value of its underlying assumption (its foundation, if you will): namely, that the accumulated knowledge on the Internet is solid, and that even if there were some inaccuracies, they would be outweighed by a preponderance of correct theses. If that’s the case, then all well and good. But I am extremely leery of those “correct” theses: who decides what is truth, what is nonsense, and what is (worst of all) highly plausible nonsense that only a dedicated expert (in the truest sense of the word) would have the knowledge, time and inclination to correct? The concept of A.I. seems to rest on a rather uncritical endorsement of “the wisdom of crowds” (i.e. received wisdom).
Well, pardon me if I don’t agree with that.
But returning to the argument at hand, Greg Isenberg uses the example of the calculator and its dolorous effect on mental arithmetic:
Remember how teachers said we needed to learn math because “you won’t always have a calculator”? They were wrong about that. But maybe they were right about something deeper. We’re running the first large-scale experiment on human cognition. What happens when an entire generation outsources their thinking?
And here I agree wholeheartedly. It’s bad enough to think that at some point certain (and perhaps important) underpinnings of A.I. may turn out to be fallacious (whether through error or through malice, another point to be considered), and that the points on which A.I.’s inverted pyramids balance may have been built, so to speak, on sand.
Ask yourself this: had A.I. existed before the realities of astronomy were understood, we would have believed, uncritically and unshakably, that the Earth was at the center of the universe. Well, we did. And we were absolutely and utterly wrong. Once astronomy came onto the scene, think how long it would have taken for all that A.I. output to be overturned and corrected; the actual correction took centuries in the post-medieval era. Most people at the time couldn’t be bothered to think about astronomy and just went on with their lives, untroubled.
What’s worse, though, is that at some point in the future the human intellect, having become flabby and lazy through its dependence on A.I., may no longer have the basic capacity to correct itself, to go back to first principles, because, quite frankly, those principles will have been lost, and our capacity to recreate them along with them.
Like I said, I’m sure of only two things in this discussion: the first is the title of this post, and the second is my distrust of hearsay (my definition of A.I.).
I would be delighted to be disabused of my overall position, but I have to say it’s going to be a difficult job, because I’m highly skeptical of this new wonder of science, especially as it makes our lives so much easier and more convenient:
He’d been using AI for everything. Writing, thinking, communication. It had become his external brain.
It’s like losing the muscle capacity to walk, and worse still the instinctive knowledge of how to walk, simply because one has come to depend completely on an external machine to carry out that function of locomotion.
P.S. And I don’t even want to talk about this bullshit.