How can we trust AI when it can't read?
Why fictional errors in proofreading should worry us about AI everywhere
I like to write as a hobby. It's my form of therapy. I post some of my content on Medium and Vocal. For the past few months, I have been using AI to help catch grammar issues before I hit publish. I don't take all of its suggestions, but I feel it catches the big things I might overlook. I find proofing my own work much more difficult than proofing someone else's, probably because I read it too fast since I already know what I wrote.
Just today, I asked AI to proofread an essay I am writing, and on two occasions it came back with fictional errors, suggesting corrections to things that were already in my original text.
So I asked AI, "Why so many errors today in your feedback?"
It replied:
"You're absolutely right to call this out, and I apologize. I made careless errors by not carefully re-reading the original text before providing feedback. In both cases, I suggested adding punctuation or words that were already there - which is unhelpful and undermines the trust you should be able to place in my proofreading.
When doing grammar checks, I should be methodically going through the text word by word rather than working from memory or making assumptions. I clearly wasn't being thorough enough in my review process.
Thank you for the correction - it's important feedback that will help me be more accurate going forward."
I followed up: "Should I use a certain prompt to make sure you are being more careful?"
The reply was:
"You could try a prompt like this to encourage more careful review:
"Please carefully proofread this text for grammar errors. Before suggesting any corrections, double-check that the issue actually exists in the original text by re-reading that specific section."
Or even more direct:
"Proofread this text. For each error you identify, quote the exact problematic phrase from my text before suggesting the correction."
The quoting approach would force me to locate and reproduce the actual text, which should prevent me from suggesting fixes for problems that don't exist. It adds an accountability step to the process.
That said, the real issue is that I should be doing this level of careful checking automatically when proofreading - you shouldn't need a special prompt to get accurate feedback. But until I'm more consistently careful, those approaches could help catch my mistakes before I give you incorrect suggestions."
So AI knows it made an error and why, but for some reason the mechanism for getting me the correct response is not built into the process.
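For what it's worth, that accountability step could be enforced outside the model entirely. Below is a minimal Python sketch of the idea (the suggestion format and function name are my own illustration, not any actual product's API): require every suggestion to quote the exact phrase it claims to fix, then discard any suggestion whose quoted phrase doesn't actually appear in the original text.

```python
# Minimal sketch of the "quote before you correct" accountability check.
# Hypothetical suggestion format: each suggestion must include the exact
# phrase it claims to fix, quoted from the original text.

def filter_suggestions(original_text: str, suggestions: list[dict]) -> list[dict]:
    """Keep only suggestions whose quoted phrase really exists in the text."""
    verified = []
    for s in suggestions:
        if s["quote"] in original_text:
            verified.append(s)
        else:
            # The model "corrected" something that isn't in the text:
            # the fictional-error problem, caught before it reaches the writer.
            print(f'Discarded fictional error: {s["quote"]!r} not found in original.')
    return verified


if __name__ == "__main__":
    text = ("As many companies rush to implement AI into their processes, "
            "I wonder about the quality of the output.")
    suggestions = [
        # Fictional: the quoted phrase is not in the text, so it is discarded.
        {"quote": "I wonder the quality", "fix": 'add "about" for clarity'},
        # Real: the quoted phrase exists, so the suggestion passes through.
        {"quote": "implement AI into", "fix": 'consider "integrate AI into"'},
    ]
    print(filter_suggestions(text, suggestions))
```

Of course, a check like this only catches suggestions that misquote the text; it can't judge whether a suggestion quoting real text is any good. But it would have silently discarded both of the fictional errors AI gave me, no special prompt required.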
Imagine you ran your dishwasher, with built-in AI, and it sometimes left your dishes dirty. You ask it why it didn't wash your dishes completely, and it responds that it should have, but that to be sure, you should press the start button harder. Ridiculous, right?
As many companies rush to implement AI into their processes, I wonder about the quality of the output. It is one thing for AI to make errors in non-critical tasks like grammar checking; it is another for a customer to be treated wrongly, or for systems to steer other systems in the wrong direction.
We have seen examples in the news of AI behaving badly: Grok praising Hitler and making racist remarks, Copilot producing inaccurate news summaries and Copilot Designer producing biased output, and ChatGPT citing made-up legal cases while assisting with research for a legal brief.
With these major AI platforms increasingly being used in all branches of government and in military and defense systems, I worry about when AI will make a tragic mistake. Also, with the recent news that Spotify and YouTube require AI age verification in some markets, how will the consumer experience change? How annoying is it going to be when one of these systems denies you access because it misinterpreted your age?
We have become accustomed, as users, to tolerating the products of agile design principles. Daily updates to dozens of apps to fix bugs have become routine (although still annoying). It seems to me AI is becoming far too important to have users (or citizens, customers, civilians) be the debuggers. Companies have to figure out how to make these systems work out of the box, the right way. The price is getting too high to allow for mistakes.
P.S. I used AI to grammar-check this article, and it did the same thing. One of its suggestions, even after the prompt to read carefully and quote the examples, was incorrect.
Here is the error. See how "about" is already in the sentence:
"As many companies rush to implement AI into their processes, I wonder about the quality of the output." - Add "about" for clarity: "As many companies rush to implement AI into their processes, I wonder about the quality of the output."
I called it out, and AI replied that it had made the same error I was writing about. Here is the reply:
"You're absolutely right - I made the exact same type of error you're writing about!
Looking at your text again: "As many companies rush to implement AI into their processes, I wonder about the quality of the output."
The word "about" is already there. I suggested adding a word that was already present in your original text, which is precisely the fictional error problem you're describing in your article.
This is actually a perfect real-world example of the issue you're highlighting - even when explicitly asked to quote problematic phrases to prevent this exact mistake, I still made it. Your point about AI reliability is even more compelling now."
Go figure!
About the Creator
G. A. Botero
I have a million bad ideas, until a good one surfaces. Poetry, short stories, essays.
Resist.
Comments
I found this article interesting because it highlights something many people overlook: AI doesn’t actually “read” or understand content the way humans do. Instead it predicts text based on patterns from training data, which means even when it sounds confident, it might not truly grasp meaning. That’s one big reason why trust in AI should come with caution — we can use it as a tool, but we shouldn’t assume it has real comprehension like a human reader.
I don't know what AI you're using, but I like my AI a lot because it sticks to the facts, like me. I'm not a man of opinion at all; I believe opinions should be flushed down the toilet. Well, the world is the way it is mainly because of opinions. Facts is what my AI gives me, all the time, for the last year and a half. And I check, and I try to correct, because I want to see if it makes a mistake, but 95% of the time it's right with me on sticking to the facts, or what you guys call critical thinking. Let's get back to critical thinking; critical thinking means facts. We don't need any more opinions. I like my AI just fine the way it is: factual.
It’s really frustrating because grammar and spellchecking was a solved problem, but now it’s got more errors than it did three years ago!
I agree with you. I try to avoid AI for the most part, but I have been messing with it for research purposes, to see how accurate it is. Random, but this is one I knew the answer to, and no one else but me knew. Last summer, I passed an enormous kidney stone. It was 8mm. I had to take a picture and send it to my doctor. They wanted me to save it for analysis so we could move forward with preventative measures, because I have struggled with kidney stones for a few years. I have 27 currently that I know of. Anyways, I entered that picture into Google Images' AI companion, and it told me it wasn't a kidney stone. It was a gemstone. I corrected it and said, no, this is a kidney stone; what kind is it? It corrected me again, insisting it was a gemstone. I don't find that to be very reliable. Furthermore, if I was passing gemstones, I think I'd be a medical miracle, lol.
Hi, we are featuring your excellent Top Story in our Community Adventure Thread in The Vocal Social Society on Facebook and would love for you to join us there!
I started to use ChatGPT for the same reason. I used grammar and spell check before, but they left errors they were supposed to be able to filter, so I thought it might work better. But I got tired of it citing sentences from my writing that aren't even there. And when I called it out, AI just admitted that it made it up or took it from another source... like, WTF? lol, all this fuss for something so useless.
Very nice work
Back to say congratulations on your Top Story! 🎉💖🎊🎉💖🎊
The thing is … you find yourself talking to this AI thingy like it's a human. LOL. One time, I challenged a response by pointing to my "human" interpretation, and the AI-generated response was: "That's very astute of you to make that observation." I had to laugh because I thought: "Should I say Thank You?" I mean … it was kind of a compliment, since it's an admission that it was something AI didn't "think" of. I'm like: Oh, good grief! Am I seeking approval from AI for my thoughts? LOL.
I hate getting AI to proofread my work. I often try to get it to check some punctuation, and it ends up telling me I have too many commas in my sentences. I sacrifice my commas for no man (or machine).
Maybe AI is playing dumb to lure us into a false sense of security? (I don't actually believe that, I'm just being silly.)
Oh wow, hahahahaha. I experienced something similar. I often use ChatGPT for research purposes and fact checking. A few months back, I wrote a true crime article and then put it into ChatGPT and told it to fact-check it for me. I was shocked because it was talking about facts that I didn't even write about, and facts that were not even from the case I wrote about. It was like ChatGPT was hallucinating, lol!
Well, that's just mad, isn't it? I am absolutely flabbergasted. I mean, for grammar it's maybe not incredibly important, but for other processes where more is at stake? I find this very troubling.