AI Playtime is Over: It is Time to Get Serious
March 04, 2026
Story
Well, distinguished members of the (embedded industry) jury, I’m just a poor country lawyer (editor), and I don’t go in for all that newfangled LLM stuff that the big folks (you know who they are) are all on about. But the way I see it, as we get into embedded world, the time is about here to acknowledge that we’re doing serious work in the embedded industry, and we should probably put away the toys.
Leaving my very thin framing device behind, I’ll make my hot take plain: LLMs are pointless, toxic, and unsustainable toys for marketing teams, and they’re not worth engineers’ time. They consume resources that could go toward real solutions, like machine learning and practical AI. LLMs in general and Generative AI specifically have repeatedly introduced faults, latency, legal exposure, security and privacy risks, and gigantic, unrecoverable costs to nearly every system in which they’ve been introduced. [Editor's Note: This column originally appeared in the Spring 2026 issue of Embedded Computing Design Magazine.]
A few well-known, non-secret examples of these problems:
ChatGPT and OpenAI lose money on every query, and most financial experts agree the company stays afloat only because of infusions of outside money, all while producing (if I’m being generous) factually dubious answers to queries. And now they’re adding ads.
Grok is being used to make legally questionable adult content and monetize GenAI image functions.
Google claims it has no plans for Gemini to include ads, but a recent Adweek report says investors are being told a different story. Remember, search wasn’t always colored by ads, either.
Code written by GenAI is very likely a legal liability that could compromise patents, IP, and real value, as I warned in 2023: https://embeddedcomputing.com/technology/ai-machine-learning/using-generative-ai-for-code-can-be-a-big-risk
The environmental impact is a huge problem and a completely avoidable PR crisis.
Companies really need to examine whether the marketing value of hopping on the hype train of GenAI and LLM chatbots is worth the exposure and risk. Perhaps that accounts for the 95 percent failure-to-launch rate for AI pilots that MIT reported last year. Or maybe it’s just that LLMs don’t work.
Accentuate the Positive
Let’s leave LLMs where they belong (behind us) and consider more positive AI tools.
I get nervous when I hear about companies trying to tap into LLMs from the edge, because there is no current use case for it. The last year has been all about Edge AI, and I’ve seen some incredible innovations in powerful, compact processing and in energy efficiency. That innovation gives me hope, and it is exactly what I mean when I talk about “Practical AI Tools.”
Small Language Models, when trained on specific data sets for specific uses, can be very useful at the edge and elsewhere in the chain of work, and I encourage engineers to focus on these SLMs and how they might work to enhance operations at the edge, in vehicles, in the factory and warehouse, and even in homes or hospitals. (You’re really going to have to get serious about privacy and security, but that’s another column.)
Right now, the hottest thing in the AI space (and the last thing I’ll mention in this screed) is Physical AI. Jensen Huang of NVIDIA spent much of his CES keynote talking about it, and now every analyst and corporate marketer wants to talk about how they’re leveraging physical AI.
Hey. Physical AI is just embedded computing.
We can all acknowledge that, right? We didn’t need a new term for it. You’ve all been creating this “physical AI” your entire careers. Huang is trying to pull off what Cisco abandoned when it failed to rebrand IoT as the “Internet of Everything” back in 2013. And I don’t think it should be allowed to work just because NVIDIA is the (apparent) leader in AI chips.
Big AI has no real future. Consumers are turning away from it. Businesses are abandoning it. If we think AI really has value to enterprise operations, makes the world easier to navigate, and (perish the thought) actually improves the living conditions of people all over the world, we should leave the hype cycle behind. Products solve problems; they shouldn’t create them.
It’s time to put away the LLMs and the GenAI toys. They don’t belong in real products. Keep innovating and developing SLMs and smart embedded engineering, at the edge and in the server, from the factory to the refrigerator, and I have no doubt we’ll see that pilot-to-product percentage move in the other direction.
I rest my case.
