That will not solve the problem, because when GPT doesn't have the answer, it will make one up by copying the structure of correct answers but without any substance.
For instance, let's say your LLM has never been told how many legs a snake has; it does know, however, that a snake is a reptile and that most reptiles have four legs. It will then confidently tell you "a snake has four legs", because that mirrors sentences like "a lizard has four legs" and "a crocodile has four legs" from its training set.
I don't think this is necessarily the case anymore. The Bing implementation of ChatGPT has a toggle for how cautious it should be about getting things wrong. I was working on a very niche issue today and asked it what a certain pin was designated for on a control board I am working on. I believe the pin is actually undocumented, and I wanted to see what ChatGPT would say. It actually said it didn't know and gave some tips on how I might figure it out. I suppose it's possible that it synthesized that answer from some previous Q&A somewhere, but I couldn't find any mention of it online except in the documentation.