Another issue that standalone LLMs face is the need for human guidance. Fundamentally, LLMs are next-word predictors, and their internal structure is not inherently suited to higher-order thought processes, such as reasoning through complex tasks. This weakness doesn't mean they can't or don't reason; in fact, several studies show that they can. It does mean, however, that they face certain impediments. For example, an LLM can generate a logical list of steps on its own, but it has no built-in mechanism for observing the results of those steps or reflecting on the plan it produced.
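To make that impediment concrete, here is a minimal sketch of how the missing observe-and-reflect loop is usually supplied from outside the model: one call drafts a plan, a second call critiques it, and the critique is fed back for revision. The `call_llm` helper, the `plan_and_reflect` function, and the prompts are hypothetical placeholders, not a prescribed implementation.

```python
# Sketch of external scaffolding for reflection. `call_llm` is a hypothetical
# stand-in for whatever chat-completion client you actually use.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM provider and return its reply."""
    raise NotImplementedError("Wire this up to your model of choice.")


def plan_and_reflect(task: str, max_revisions: int = 2) -> str:
    # Step 1: ask the model to draft a step-by-step plan.
    plan = call_llm(f"List the steps needed to accomplish this task:\n{task}")

    for _ in range(max_revisions):
        # Step 2: the reflection the model lacks internally is supplied by a
        # separate call that critiques the draft plan.
        critique = call_llm(
            "Review the following plan for gaps, ordering problems, or "
            f"unstated assumptions. Reply 'OK' if it looks sound.\n\n{plan}"
        )
        if critique.strip().upper().startswith("OK"):
            break

        # Step 3: feed the critique back so the model can revise its own plan.
        plan = call_llm(
            f"Task: {task}\n\nDraft plan:\n{plan}\n\n"
            f"Critique:\n{critique}\n\nRewrite the plan to address the critique."
        )

    return plan
```

The key point is that the loop itself lives in ordinary application code; the model only ever responds to the prompts it is handed.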