The molecular formula for benzoic acid is C7H6O2, so Google AI is being asked to set up the following equation and solve for x:

7x + (6 × +1) + (2 × −2) = 0

7x + 6 − 4 = 0

x = −2/7
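For readers who want to verify the arithmetic mechanically, here is a minimal Python sketch of the same bookkeeping (the variable names are mine, chosen for illustration). It uses exact rational arithmetic so the answer comes out as a fraction rather than a rounded decimal:

```python
from fractions import Fraction

# Oxidation-number bookkeeping for benzoic acid, C7H6O2.
# In a neutral molecule the oxidation numbers sum to zero; with
# H = +1 and O = -2, the average carbon oxidation state x satisfies
#   7x + (6 * +1) + (2 * -2) = 0
h_contribution = 6 * 1    # six hydrogens at +1
o_contribution = 2 * -2   # two oxygens at -2

x = Fraction(-(h_contribution + o_contribution), 7)
print(x)  # -2/7
```

Exact fractions matter here: a floating-point result like −0.2857… obscures the simple closed form −2/7 that the question is testing for.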
Well within Google AI’s capabilities, one would think. But when I put this question to Google’s Gemini chatbot on 11 October 2025, it responded in the first instance with −1/3, then corrected itself to −3/7; both answers are wrong. I asked again and it gave a new incorrect answer of −1/7. I asked once more the next day and it returned to its original response of −1/3, which it then corrected to yet another incorrect answer, +0.5.
I have included these responses in the appendix below, so readers can see I am not inventing this and can judge the extent of the logical difficulties Gemini got itself into. Given the level of confidence I have seen expressed in Gemini, I was quite troubled by my experience and have been pondering why this happened.
My best guess is that the version of Google’s AI chatbot accessed on 11-12 October 2025 was insufficiently equipped to handle the symbolic logic involved in the question. It clearly struggled to apply rule-driven logical reasoning at an uncomplicated level, which suggests that there is some way to go in the development of neuro-symbolic AI systems that all of us can have confidence in.




