No. This is simply wrong, and it misunderstands not just how transformers work but also the conceptual mapping that results from their training.
All these arguments about whether LLMs think are missing the point. They do not “think” as humans think, due to their intrinsically transactional nature. But calling them “fancy statistical autocomplete algorithms” is also wrong.
LLMs contain within their weights a massively high-dimensional concept map. In this coordinate space, the distances and directions between vectors encode relationships between abstract concepts. This is a natural result of consuming language, which by its very nature is a symbolic concept map.
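To make that concrete, here is a minimal toy sketch of what "distance between concepts" means. The 4-dimensional vectors below are made up for illustration; real models learn embeddings like these from data, in thousands of dimensions.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical toy embeddings (hand-set here; learned in real models).
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
    "apple": np.array([0.0, 0.1, 0.1, 0.9]),
}

# Related concepts sit close together in the space...
print(cosine(embeddings["king"], embeddings["queen"]))  # ~0.66 (related)
print(cosine(embeddings["king"], embeddings["apple"]))  # ~0.08 (unrelated)

# ...and relations become directions: king - man + woman lands near queen.
v = embeddings["king"] - embeddings["man"] + embeddings["woman"]
print(cosine(v, embeddings["queen"]))  # ~0.99, the closest match in the set
```

The point isn't the toy numbers. It's that once concepts are points in a shared space, "related" becomes a measurable geometric fact, and the relations themselves become directions you can do arithmetic on.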
The uncomfortable question becomes: Is the human brain similarly using a massively high-dimensional concept map? Can a significant part of human thought be described as a fancy autocomplete algorithm? Can a significant amount of human reasoning be mapped as a nested series of transactions?