Most of the conversation about AI in business is coming from people selling it: pundits, software developers, and venture capitalists pushing the hype cycle. Very little is coming from the people who have to make it work and actually support businesses with technology.

(An obviously AI generated image)
As someone who has been providing technology systems to my business counterparts in Fortune 50 companies for 20+ years, I know a bit about how technology does and doesn’t work within corporate environments.
In 2018, my team installed our first AI system. We were using machine learning to identify actors’ faces, create transcripts, and identify text and objects in video files from television shows in production. This allowed creative teams to find scenes quickly and easily, speeding up the editing process and avoiding the drudgery of “logging tape”.
Over the last few years, I’ve followed the remarkable advances in LLMs, the “generative” tools under the AI moniker. I learned a lot about what they can and can’t do.
One caveat: software development is changing much faster than everything else. My thoughts apply to the rest of a modern enterprise.
Here are my three key takeaway points:
– Agents are in a nascent stage and can’t replace people
– LLMs make mistakes regularly
– AI costs are subsidized now, but won’t be forever
These aren’t abstract concerns. Each one has real consequences for how you deploy AI inside a company.
Agents are in a nascent stage and can’t replace people
The recent advances in agent capabilities are inspiring. Headlines are dominated by OpenClaw and NemoClaw, the current hot autonomous agent frameworks. Neither framework is an AI itself, which makes it easy to confuse these agents with the underlying LLMs.
The big idea being discussed is replacing roles with AI. After all, if OpenClaw seems to read, think, and respond to emails, why couldn’t it replace people?
We are seeing a lot of ‘AI washing’ right now with layoffs, but those layoffs aren’t really about replacing people with AI; they are about making Wall Street analysts happy.
The difficulty is that an “AI CFO” or “AI travel person” is not a true AI or agent. There isn’t a piece of AI software running 24/7 thinking about CFO issues. An “AI CFO” isn’t a sentient agent; it’s just a static prompt rerun each time, with no memory or context beyond that single interaction. It’s not a little computer homunculus waiting to leap into action.
An AI corporate person would require a triggering system of some sort, a database or jumble of JSON files to store everything it needs to know, and some kind of boundary on what it looks at so the context window stays reasonable. You simply cannot make an LLM read all of a business’s information every time it’s invoked.
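A bare-bones sketch of that missing plumbing might look something like the following. All the names, documents, and the character cap here are hypothetical stand-ins, not any real product’s design; the point is only that a trigger, a store, and a hard cap on context are the minimum you would need:

```python
# Hypothetical sketch: the minimum plumbing an "AI corporate person" needs.
# A knowledge store, a trigger that selects relevant documents, and a hard
# cap so the context window never receives "all the information of a business".

KNOWLEDGE_STORE = {
    "q3_forecast": "Revenue projected at $12M for Q3...",
    "travel_policy": "Economy class required for flights under 6 hours...",
    "office_lease": "Lease renewal decision due 2026-03-01...",
}

MAX_CONTEXT_CHARS = 200  # stand-in for a real token budget


def build_context(topic_keys):
    """Assemble only the documents relevant to this trigger, stopping at the cap."""
    context, used = [], 0
    for key in topic_keys:
        doc = KNOWLEDGE_STORE.get(key, "")
        if used + len(doc) > MAX_CONTEXT_CHARS:
            break  # over budget: stop rather than hand the LLM everything
        context.append(doc)
        used += len(doc)
    return "\n".join(context)


# Trigger: an email about travel arrives, so only travel documents go in.
print(build_context(["travel_policy"]))
```

Everything outside this selection step, the part that decides *which* keys are relevant, is exactly the unsolved, unstandardized piece.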
There is a hazy future of ideas that could help with these kinds of things and create some sort of standardized framework, but that does not exist right now. There are no gold-standard best practices. We are in the “throw stuff at the wall and see what sticks” phase.
LLMs make mistakes regularly
It’s often said that LLMs have hallucinations. I prefer to call them what they are: mistakes. LLMs are incredibly complex systems, but at the highest level they are very good guessing machines, basing their guesses on their training data. Even though they are extremely good, they are not perfect.
LLMs are not deterministic systems; they are probabilistic outputs wrapped in confident language. For this reason, I built my llm-discussion app, which has three different AI models debate and come to consensus on a question. Relying on a single LLM’s answer as gospel every time is a recipe for problems.
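The consensus idea can be sketched in a few lines. This is not the actual llm-discussion implementation, just an illustration of the principle: collect answers from several models (stubbed here as plain strings) and only accept one when a majority agrees:

```python
from collections import Counter


def consensus(answers, threshold=2):
    """Return the majority answer if at least `threshold` models agree, else None."""
    tally = Counter(a.strip().lower() for a in answers)
    answer, count = tally.most_common(1)[0]
    return answer if count >= threshold else None


# Stand-ins for three real model responses; the real app calls three APIs.
print(consensus(["$4.2M", "$4.2M", "$4.1M"]))  # two of three agree
print(consensus(["yes", "no", "maybe"]))       # no majority, so None
```

When the models disagree, the right move is usually to escalate to a human, not to pick one answer at random.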
The fallout from a quarterly report miscalculated because of an AI hallucination can have a huge negative impact on a company, causing long-lasting harm.
In the corporate world, mistakes are real problems. Financial spreadsheets need to be 100% correct. Presentations can’t have misspellings or incorrect logos.
AI costs are subsidized now, but won’t be forever
At the core of any LLM usage are tokens. You can think of tokens like counting each word in an email and charging per word. Buying access to the frontier LLMs is basically buying tokens to use.
Processing these tokens is what all those gigantic data centers are designed to do. Hundreds of billions of dollars in infrastructure spending has to be paid for somehow.
The truth of the matter is that the current cost of tokens does not reflect the actual cost of processing the tokens. In other words, AI companies lose money on every single interaction.
Currently, the price of using AI is subsidized and does not reflect the true cost. The gap is being covered with venture capital money and profit from adjacent lines of business: Google Search pays for Gemini, and Microsoft Azure pays for Copilot.
At some point the AI business has to make enough money to be profitable, and that means prices will rise.
We’ve seen this business cycle play out before. This follows a pattern Cory Doctorow describes as ‘enshittification.’ In that framing, we’re still in stage one.
Money is what corporate IT divisions are most concerned with. Yes, they have a nice PowerPoint about ‘value add’ and ROI, but their main role is cost containment. The slog through heavy ITIL process and standardization is all about saving money. IT groups will deploy a crappy $9 mouse instead of a nicer $30 mouse to save money. They’ll switch from Slack to the inferior Microsoft Teams without hesitation for the same reason. The user comes last in most of these calculations.
IT managers face a real dilemma when implementing AI tools. Today they can buy AI tools via SaaS implementations that are pay-by-the-seat, all-you-can-drink arrangements. But those arrangements simply will not last. The current subsidized situation is untenable, and eventually companies will need token budgets and a way for staff to spend against them.
Imagine the department that stresses over the cost of a mouse watching token bills skyrocket when a creative team makes hundreds of image generation requests in an afternoon. A single employee accidentally racking up a $5,000 bill just by asking an LLM to “analyze these 500 PDFs” is a nightmare scenario for ITIL-focused managers. There will be aneurysms.
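To make the 500-PDF arithmetic concrete (the document sizes and the blended rate below are illustrative assumptions, not measurements): long documents multiplied by per-token pricing get expensive fast:

```python
# Illustrative assumptions, not real vendor pricing.
PDFS = 500
TOKENS_PER_PDF = 300_000         # a long report can easily run this large
PRICE_PER_MILLION = 30.00        # blended input+output rate for a premium model

total_tokens = PDFS * TOKENS_PER_PDF
bill = total_tokens / 1_000_000 * PRICE_PER_MILLION
print(f"${bill:,.0f}")  # in the thousands of dollars, from one request
```

No seat-based license ever behaved like this, which is why per-seat SaaS pricing and metered tokens will collide.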
AI optimists will point out that token prices are plummeting, and they aren’t wrong. Cheap tokens during the land-grab phase are exactly how the “subsidized” playbook works. But in the enterprise, the Jevons Paradox usually wins: as a resource gets cheaper, we don’t save money, we find more ways to consume it. A 90% drop in token price doesn’t matter if your workforce increases usage by orders of magnitude.
Corporate email used to be measured in megabytes; now it’s measured in gigabytes. We didn’t save money on storage as it got cheaper; we just stopped deleting things.
I may sound dramatic, but we have to live in the real world. And in the real world, we are in the infancy of how AI will be used in business. On the timeline from the Wright brothers’ first flight to the SpaceX Dragon, we are at the World War I biplane era. Everything is made of cloth, wood, and glue.
There is tremendous opportunity, but also tremendous risk.
The winners won’t be the companies that replace people with AI. They’ll be the ones that make their people more effective with it, without blowing up costs, creating mistakes, or breaking processes.










