Believing a large language model (like ChatGPT) is believing a prediction.
Predictions can be useful, but we shouldn't confuse them with facts or reality. And it's odd to rely on a prediction when the answer is available to look up. Yet that is how LLMs are often used these days. That's part of what I find disturbing about them (in addition to the hype, the environmental costs, the toxic and harmful output, the unrecompensed and uncredited use of creative works, etc.).
A predicted answer to a question is not the same as an answer.
A predicted summary of some text is not the same as a summary of the text.
A predicted program to perform a task is not the same as a program to perform that task.
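To make that last point concrete, here is a small hypothetical sketch (the function names and the bug are invented for illustration): a "predicted" routine can look entirely plausible and still embody a guess, and only examining and testing it turns the prediction into a program you can actually rely on.

```python
# Hypothetical illustration: a plausible-looking "predicted" implementation
# of averaging, next to one whose behaviour was actually decided and checked.

def predicted_average(xs):
    # Looks reasonable, but silently returns 0 for an empty list,
    # hiding an error the caller probably wants to see.
    if not xs:
        return 0
    return sum(xs) / len(xs)

def checked_average(xs):
    # The "program to perform the task": the edge case is handled
    # deliberately, not guessed at.
    if not xs:
        raise ValueError("average of an empty sequence is undefined")
    return sum(xs) / len(xs)

if __name__ == "__main__":
    data = [2, 4, 6]
    assert predicted_average(data) == checked_average(data) == 4.0
    # The prediction only becomes trustworthy once cases like this are examined:
    print(predicted_average([]))  # 0 -- plausible, but is it what you wanted?
```

On the happy path the two functions agree, which is exactly why a predicted program can pass a casual glance; the difference only shows up when you probe it.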
LLMs are hypothesis generators (bullshit generators, if you prefer). If they help you get to a valid, correct answer, that's great (subject to the negatives noted above). But naïvely believing them is silly.