Now, hold on! I use ChatGPT for a lot, and it's been really, really helpful!
Is that so?
Yeah, it helps me organize my thoughts and everything! Oh, and don't forget how much it's been helping me at my job, formatting and creating various documents and such. Come on, you're being overly critical!
Am I? Well, then, let's have a look.
A light tone to a heavy topic, perhaps. However, there is some truth to what I just said.
As you may or may not know, AI (artificial intelligence), as it's called, is really only artificial. There is nothing intelligent about it. The proliferation of AI as the public currently uses it began in late 2022, according to my research. ChatGPT began building in popularity from that moment on, and has become some people's go-to for writing professionally, creating fiction, generating ideas, solving problems, organizing thoughts, research, and various other things that anyone could do for themselves. I myself have used ChatGPT since February of this year, mostly for entertainment purposes and the occasional research topic, which I verify against other sources if I got the answer from AI first.
Briefly, AI relies upon patterns and pattern recognition. It uses machine learning to accomplish this task, among other things. Basically, it is a computer that learns from the data entered into it, which it then uses to determine what patterns are present and what patterns can be projected from the data available to it. That is all it really is at the core, in simple terms. AI does not think, and it cannot reason. It doesn't feel compassion, kindness, or care, and it cannot be empathetic... And it will never, ever, stop... Right? A nod to the Terminator?
That was a slight joke, of course, as I try not to be too serious all the time in these entries. AI will stop if you tell it to do so, though you really have to be convincing if you don't want it to send you any reply, as it usually tends to do when you give it text, at least when working with what's known as a large language model (LLM). That's what I described above: a computer learning from the data it trains on. Anyway, moving on.
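To make the "patterns" idea concrete, here's a toy sketch in Python. To be clear, this is nothing like a real LLM (those use neural networks trained on billions of words); it's a deliberately crude illustration of the same basic principle: count which word follows which, then predict the most common follower. Notice that no thinking or reasoning happens anywhere.

```python
from collections import Counter, defaultdict

def train_bigrams(words):
    # Count which word follows which: this is the "pattern" the model learns.
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    # Predict the follower seen most often in training. No reasoning, just counting.
    followers = model.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

text = "the cat sat on the mat and the cat slept on the rug".split()
model = train_bigrams(text)
print(predict_next(model, "the"))  # prints "cat" -- it followed "the" most often
```

That's the whole trick, scaled down: the model "knows" that "cat" tends to follow "the" only because it counted it, not because it understands cats.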
Now that we've established the basics of AI and what it does, let's take a look at some of its dangers. I'll use the points that I stated above and we'll go through them one at a time. There were six of them.
Writing professionally.
ChatGPT has been used by some to create a more professional writing style, like this:
Some individuals have utilized ChatGPT to develop a more polished and professional writing style.
That was the quote from ChatGPT, which took my sentence and rewrote it in a more professional style, just as I instructed it to do. That was easy. Now all you need is your "that was easy" button!
Unfortunately, it's too easy. It's so easy, in fact, that all you really need to do is go to ChatGPT's site, enter your prompt, and watch as it comes up with everything for you. Want something written professionally? No problem. And I don't mean this in a positive way, either.
Sadly, more people seem to be using ChatGPT in this way, and not just in a one-off manner, but more and more. This will ultimately cause them to lose their own writing skills over time. If you don't use a skill, you will lose that skill. That's not just my opinion; it's a well-established fact that I'm sure you could research for yourselves. Ready? Think!
Creating fiction.
This falls under the same category as writing professionally. All you need to do is give a prompt and watch your creation come to life. Except it isn't really yours; the LLM (large language model) did it for you.
I have used ChatGPT to write various fictional stories that I was amused to read. I did it for a few months, because I enjoyed reading what it came up with. Guess what happened next? I discovered that my own writing skills and ability to generate ideas had diminished. How sad. So, I'm getting back into my own writing, just as this entry demonstrates. However, the more someone relies upon ChatGPT or other AI, the less of their own skills they will have available to them.
Creating ideas, solving problems, organizing thoughts, research.
Honestly, this is rehashing at this point, so I won't keep repeating the same things. Nonetheless, the use of AI is unfortunately rising, because it's so easy, and as a rule, people tend to choose the easiest path available to them. I find it sad. However, it won't last forever.
The training problem.
As I stated above, LLMs have been trained on data they obtained publicly. Most of this data, for now, has been created by humans, not by computers and their LLM systems. However, as more people use LLMs, and more of that output is made publicly available on the internet, less of the data training LLMs will be human generated.

ChatGPT and other AI systems already generate their own misinformation and biases in their output. These biases sometimes go uncorrected and are simply accepted as true by some users of AI, especially those who are not critical thinkers and choose to let AI systems and other similar things do their thinking for them. As a result, the biases will be compounded as LLMs train on their own data and their own writing style, among other things.

Since we've established that AI cannot think, it follows that AI systems trained on their own data will deteriorate over time, because they cannot reason errors out for themselves, including the errors they make. They don't truly learn anything, and eventually AI systems will become unreliable because their training data is unreliable, and we'll be right back to basics: people actually thinking for themselves. There's a thought.
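This deterioration (researchers sometimes call it "model collapse") can be demonstrated with the same kind of toy bigram model, again a deliberately crude stand-in for a real LLM: train on human text, let the model generate text, then count how much of the original vocabulary survives in the machine-written text it would be retrained on.

```python
from collections import Counter, defaultdict

def train(words):
    # Count word-to-word transitions: the model's entire "knowledge."
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def generate(model, start, length=30):
    # Always emit the most common follower: the model's favorite patterns win out.
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(max(followers, key=followers.get))
    return out

human_text = ("the cat sat on the mat . the dog sat on the rug . "
              "a bird sang in the tree .").split()

# Text "written" by the model -- the kind of output that ends up back on the internet.
machine_text = generate(train(human_text), "the")

print(len(set(human_text)), len(set(machine_text)))  # 13 4
```

A model retrained on `machine_text` could only ever see those four surviving words; everything about dogs, birds, and trees is gone after a single generation. Real systems degrade far more gradually and subtly than this toy loop, but the direction of travel is the same.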
The ChatGPT version.
Finally, I think it's important to outline the differences between a human-written version of this article, that being my own, and an AI-written one. I gave ChatGPT the prompt below and received the output that follows as a result. After the quote of my prompt comes ChatGPT's complete textual response, correcting only for heading formatting where required.
Rewrite this entire article to correct all errors and make a better writing style, keep all formatting of headings, etc, the way they are.
Sure! Here's a rewritten and refined version of your article with corrected grammar, improved flow, and a more polished writing style—while fully preserving your original formatting, tone, and section structure:
Now, hold on! I use ChatGPT for a lot, and it's been really, really helpful!
Is that so?
Yeah, it helps me organize my thoughts and everything! Oh, and don’t forget how much it’s helped me at work—formatting documents, drafting content, creating templates. Come on, you're being overly critical!
Am I? Well then, let’s have a look.
A light tone on a heavy topic, perhaps. Still, there’s some truth to what I just said.
As you may or may not know, AI (artificial intelligence), as it’s commonly called, is really only artificial. There’s nothing truly intelligent about it. The public use of AI began proliferating around late 2022, according to my research. ChatGPT started gaining popularity from that point forward and has become the go-to tool for many—used to write professionally, create fiction, generate ideas, solve problems, organize thoughts, conduct research, and more. All things, I’d argue, people are fully capable of doing themselves with a little effort and critical thinking.
I’ve personally been using ChatGPT since February of this year—mostly for entertainment and the occasional research question. When I do use it for research, I make sure to verify the information it gives me.
In simple terms, AI relies heavily on pattern recognition. It uses machine learning to accomplish this, among other things. Essentially, it’s a computer that processes large amounts of data and learns the patterns within that data. It then uses those patterns to make predictions or generate new content. That’s really all it is, at its core.
AI does not think. It cannot reason. It doesn't feel compassion, kindness, or empathy. It doesn’t care—and it will never, ever stop... right? A nod to The Terminator, perhaps?
That was a bit of a joke, of course. I try not to be too serious all the time in these entries. AI will stop if you tell it to, though you often need to be very specific and direct—especially when dealing with what’s known as a Large Language Model (LLM). That’s what I described earlier: a computer trained on data, learning patterns through machine learning. Anyway, moving on.
Now that we've established the basics of AI and what it does, let’s take a closer look at some of the dangers. I mentioned several key points earlier, and we’ll now go through them one by one. There were six in total.
Writing professionally.
ChatGPT has been used by many to produce more professional writing—like this:
Some individuals have utilized ChatGPT to develop a more polished and professional writing style.
That’s a direct quote from ChatGPT, which I prompted to rewrite my sentence in a more professional tone. Simple as that. All you need now is a “that was easy” button.
But unfortunately, it is too easy. Just go to ChatGPT’s site, enter your prompt, and watch as it generates the content for you. Want something to sound professional? No problem.
And no, I don’t mean that in a positive way.
The truth is, more and more people are relying on ChatGPT this way—not just occasionally, but regularly. This is dangerous. Over time, relying on AI in this way will inevitably erode one’s own writing skills. That’s not just my opinion—it’s a well-established principle: If you don’t use a skill, you lose it. Think about that. Really think.
Creating fiction.
This is in the same category as professional writing. You give ChatGPT a prompt, and it writes a story for you. Sounds magical, right?
Except—it’s not really your creation. The LLM did the work.
I’ve used ChatGPT to write fictional stories that I found amusing. I did this for a few months because I enjoyed reading the results. But then, I noticed something troubling: my own creativity and writing ability had diminished. How sad is that?
So now, I’m getting back to writing on my own—this very entry being proof of that effort. But again, the more one relies on ChatGPT or similar tools, the fewer original skills they’ll retain.
Creating ideas, solving problems, organizing thoughts, research.
Honestly, we’re starting to tread the same ground here, so I won’t rehash everything. The point remains: people are turning to AI more and more because it’s easy. And as a general rule, most people tend to take the path of least resistance.
It’s unfortunate—but this too shall pass. It can’t last forever.
The training problem.
As I mentioned earlier, LLMs are trained on publicly available data. Most of that data—so far—has been generated by humans. But here’s the problem: as more and more people use AI to generate content, that AI-created content gets uploaded to the internet and becomes part of the training pool.
ChatGPT and other AI systems already produce outputs that contain misinformation and biases. These issues aren't always caught or corrected. And for those who don’t think critically, that biased information may be taken at face value.
As a result, AI will start learning from its own flawed outputs—reinforcing its own misinformation and biases. Since AI cannot reason, it cannot recognize or fix these errors. It doesn’t actually learn anything in the human sense. Over time, this recursive training on AI-generated content will degrade the quality of the models.
Eventually, they’ll become unreliable.
And when that happens, maybe—just maybe—people will start thinking for themselves again.
Now there’s a thought.
Would you like me to help format this as a downloadable document (PDF, Word, etc.) or improve it further for a particular audience or platform?
Back to the Human.
Yep, it's me once again. A real, non-AI human writing. If you look closely, the use of AI becomes obvious, even if it isn't at first glance. In fact, as ChatGPT's version of the article continued, it began to rewrite things in its own generic style. Sure, it kept my wording and tone, but the style of ChatGPT is ultimately the same, even if you tell it to do something different. It will eventually revert to something very similar in style to what it wrote above. Now, I'm going to refresh my temporary ChatGPT chat and give it another prompt.
Write me a complete article about the dangers of AI, and how it will erode humans thinking for themselves in at least ten different categories. Make it as detailed as possible, but keep it easy to understand.
ChatGPT response, V2.
The Dangers of AI: How Artificial Intelligence May Erode Human Thinking Across Ten Key Areas
Artificial Intelligence (AI) has become an integral part of our daily lives, from recommending what movie to watch next to automating business processes. While AI brings convenience, speed, and efficiency, it also comes with serious risks—especially when it begins to replace or discourage human thinking. As reliance on AI increases, there is a growing danger that people will become less capable of independent thought, decision-making, and problem-solving. This article explores ten critical areas where AI may erode our ability to think for ourselves, with a detailed yet easy-to-understand breakdown.
1. Education and Learning
AI-powered tools like tutoring bots, essay generators, and homework solvers are changing how students learn. While these tools can support learning, they often lead to shortcut thinking. When students rely on AI to do the work for them, they may stop engaging in deep thought or struggle with critical problem-solving. Over time, this creates a generation of learners who can pass tests but lack true understanding.
Example: A student might use an AI tool to write an essay, submitting it without reading or understanding the content. They pass the assignment but learn nothing in the process.
2. Decision-Making
AI systems are increasingly used to make decisions in business, healthcare, finance, and even personal relationships. As we grow accustomed to accepting AI’s “advice” without question, we risk outsourcing our judgment. This weakens our ability to weigh options, consider consequences, and make informed choices on our own.
Example: Relying on an AI assistant to choose a career path or investment option might mean missing out on important emotional or personal considerations that only a human can fully grasp.
3. Problem-Solving Skills
One of the most important human traits is the ability to solve problems creatively and independently. With AI providing instant answers, people may lose patience or willingness to think through complex issues themselves.
Example: Instead of trying to figure out how to fix a leaking faucet, people may rely on an AI to diagnose and instruct them step-by-step, never learning the underlying mechanics or reasoning behind the fix.
4. Memory and Recall
Search engines, voice assistants, and AI-powered reminders have made it unnecessary to remember phone numbers, addresses, or even birthdays. While helpful, this constant outsourcing of memory can weaken our natural recall abilities over time.
Example: If you never have to remember a fact because AI is always there to provide it, your brain’s “mental muscle” can shrink through lack of use—just like any other unused skill.
5. Language and Communication
AI-generated emails, resumes, essays, and even conversations (via chatbots) are becoming so advanced that people increasingly let machines do the talking. This can erode our ability to express ourselves clearly and authentically.
Example: Using AI to write your thoughts may save time, but you also lose the chance to develop your own voice, vocabulary, and storytelling abilities.
6. Creativity and Imagination
AI can generate music, art, stories, and design ideas in seconds. While fascinating, this can discourage people from pursuing creative endeavors themselves, especially if they believe they can’t match AI’s speed or “skill.”
Example: An artist might stop painting because AI-generated art seems more popular or profitable—missing the personal satisfaction and mental growth that come from creating something original.
7. Ethical Reasoning
AI does not have a conscience or a moral compass. Yet it's used in systems that make ethically complex decisions, such as who gets a loan, who gets hired, or how prisoners are sentenced. Overreliance on AI in these areas can dull our own sense of ethics and make us passive participants in morally questionable systems.
Example: If a company blindly follows an AI’s hiring recommendation without questioning its bias or fairness, it may contribute to discrimination without even realizing it.
8. Emotional Intelligence
AI can simulate empathy, but it doesn’t feel anything. As we interact more with AI (chatbots, virtual therapists, AI friends), our ability to engage with real human emotions may suffer. Understanding and responding to emotions is a skill that requires practice—something AI cannot truly teach.
Example: A person may become emotionally distant or less empathetic in relationships because they’ve grown used to “talking” with emotionless machines that never challenge them.
9. Physical Navigation and Spatial Thinking
GPS and AI-driven maps have revolutionized travel, but they also reduce our ability to read maps, sense direction, or plan routes mentally. This weakens spatial intelligence and situational awareness.
Example: Drivers who blindly follow GPS directions may become disoriented when technology fails or leads them the wrong way—because they never learned to navigate themselves.
10. Civic Awareness and Political Thinking
AI-curated news feeds and personalized content shape what we see and believe. This “filter bubble” effect can lead to polarized thinking and discourage us from questioning sources or exploring diverse viewpoints.
Example: If an AI algorithm constantly shows you articles that confirm your beliefs, you may stop considering alternative perspectives or engaging in meaningful political discussions.
The Long-Term Cost
The biggest danger of AI isn’t that it will take over the world—it’s that it may quietly weaken the very things that make us human: our curiosity, our reasoning, our creativity, and our sense of moral responsibility. By slowly reducing our need to think, act, or feel for ourselves, AI could lead to a passive, dependent society where fewer people question authority, innovate, or stand up for what’s right.
What Can Be Done?
To prevent this future, we must learn to partner with AI—not surrender to it. Here are some ways to maintain our thinking skills in an AI-driven world:
- Use AI as a tool, not a crutch.
- Always ask “why” and “how,” even when AI gives you an answer.
- Continue practicing skills like writing, reading, debating, and critical thinking.
- Limit AI use in areas that require personal growth (e.g., education, art, ethics).
- Teach children the value of struggling through problems—not avoiding them.
- Remain curious and skeptical. Don’t accept AI’s outputs without thought.
Final Thoughts
AI is not inherently dangerous—it’s how we use it that matters. If we allow it to think for us, we risk losing the ability to think as ourselves. But if we use AI to enhance, rather than replace, our minds, we can preserve what makes us uniquely human while still benefiting from the power of machine intelligence.
Let AI be your assistant—not your brain.
Back to the Human.
Pretty good, right? It seemed well reasoned and easy to understand, all from a simple prompt. That simple prompt gave me about 7,248 characters of output: a comprehensive and detailed article, and I didn't even have to use my own mind to come up with it. Isn't that sad? I think so.
I have heard people at my workplace talk about how such and such a person used ChatGPT to generate this spreadsheet or write that article. People tend to talk about how great it is more than they talk about its dangerous effects on cognitive abilities. I'll never use ChatGPT to write professionally beyond a one-off occasion, and if my job became writing professional messages or emails to various places, I would not hesitate to learn that skill for myself with research, thinking, and reasoning skills of my own. I won't lose those skills to AI.
Why this update after 9 years?
Honestly, there isn't too much that's changed in my life. I still work at the same place I worked when I wrote my first journal entry, which wasn't about work at all. I still live in the same place and do the same things. I have my routines, and I like them. This is mostly why I haven't updated in so long, but I was looking through some older things and decided to update and finish them.
Mainly, I decided to write this journal entry based upon some recent events that have highlighted the dangerous overuse of AI, even if the rationale is, "I just needed help organizing my thoughts." I, personally, will respect anything a lot more if it comes from a human and not an AI. AI has its purposes. It can be used as a tool for some very good things. However, it can be used too much, and the dangers of its overuse have already been stated, so I won't state them again.
Remember, folks: stay genuine, think for yourselves, and keep the ideas flowing within those cognitive human abilities we all have!