What LLMs Are Good At
You might have missed the recent report that 95% of business AI pilots failed to achieve their objectives. You might also have missed the MIT Technology Review finding that, when it comes to LLMs, bigger isn't always better.
This is the result of a fundamental misunderstanding of what LLMs are and what they do. They do not reason or think, regardless of what they claim to be doing. They are language disassemblers and constructors, and at this they excel. They generate sentences, paragraphs, and even whole books' worth of meaningful text. But they do not generate insight.
MIT is certainly not saying that large language models are useless. But the bigger-is-better approach only serves to make the language construction better, not the insight. In contrast, multimodal or "mixture of experts" models, trained on specific domains or subjects with the goal of getting facts and concepts about a limited scope correct, then assembled into a larger model framework, allow accuracy to improve dramatically.
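To make the "mixture of experts" idea concrete, here is a toy sketch, my own illustration rather than any production architecture (real MoE models route per token inside transformer layers): a learned gate scores how relevant each narrow-domain expert is to a given input, and the combined answer leans on whichever specialists the gate trusts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: three "experts", each a small linear model standing in for
# a specialist trained on one narrow domain.
n_features, n_experts = 4, 3
experts = [rng.normal(size=n_features) for _ in range(n_experts)]

# The gate is just another learned function that scores how relevant
# each expert is to a given input; here, a random linear map.
gate_weights = rng.normal(size=(n_experts, n_features))

def moe_predict(x: np.ndarray) -> float:
    scores = gate_weights @ x                        # one relevance score per expert
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over experts
    # Blend expert outputs, weighted by the gate's trust in each one.
    return sum(w * (e @ x) for w, e in zip(weights, experts))

print(moe_predict(rng.normal(size=n_features)))
```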
What this means, in the simplest terms, is that LLMs are a revolution, not in "intelligence," but in user interface. It has never been easier to have a conversation with your computer, and for the computer to come away with a useful, distilled model of the content and context of that conversation. That distillation is then a useful input to other tools that are suited to solving particular types of problems, which the LLM can in turn explain back to you in human terms.
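As a concrete illustration of that division of labor, here is a minimal sketch in Python. The `complete()` function is a hypothetical stand-in for whatever LLM API you use (it returns a canned answer here so the example runs end to end); the point is the pipeline: the LLM turns conversation into structured input, a conventional tool does the exact math, and the LLM explains the result back.

```python
import json

def complete(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; returns a canned
    # extraction so this sketch runs without a model behind it.
    if "Extract" in prompt:
        return '{"principal": 300000, "annual_rate": 0.06, "years": 30}'
    return prompt

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    # Deterministic tool: the standard amortized-loan payment formula.
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

user_request = "I'm borrowing $300k at 6% for 30 years. What will I pay monthly?"

# 1. The LLM distills the conversation into structured input for the tool.
params = json.loads(complete(
    "Extract principal, annual_rate (as a decimal), and years as JSON from: "
    + user_request
))

# 2. A conventional, exact tool does the actual computation.
payment = monthly_payment(params["principal"], params["annual_rate"], params["years"])

# 3. The LLM explains the result back in human terms.
print(complete(f"Tell the user their monthly payment is ${payment:,.2f}."))
```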
Vibe coding is not "AI writing your software for you"; it is the newest wave of "citizen developer" tools. Instead of drawing your app and wiring it up graphically, you can now explain how it should work, and the machine can quickly adapt and redefine the pieces as you go.
This is a huge advantage for the technical and non-technical alike, and it holds promise for rethinking and redeveloping the user interfaces of all manner of software we use daily, from phone apps and smart TVs to in-car applications and more.