Hiring AI Like a Workforce — and Why the Frontier Labs Want to Trap You
Eric Brasher | April 21, 2026 | 11 min read
Implementing AI in your business is no different from building your workforce. Some roles you outsource. Some roles you hire in-house. The decision isn’t about cost — it’s about who owns the knowledge once the work is done. AI is the same calculation. And the frontier labs and the hyperscalers building on them — OpenAI, Anthropic, Microsoft — are betting you won’t notice that they’ve quietly tilted the table so the knowledge always ends up on their side of it.
The Workforce Frame
Think about how you actually staff a business. You outsource payroll, because the work is commodity and the regulations change faster than you want to track. You outsource your CPA at tax time, because the depth of expertise is too expensive to keep on the bench. You outsource cleaning, security monitoring, ad campaigns — anything where the value of the work is in doing it, not in knowing how you do it.
But you hire in-house for the things that make your business your business. Your operations lead. Your senior estimator. The dispatcher who knows which driver to send for which kind of customer. The salesperson who has the relationship. You don’t outsource these, because the value isn’t the task — it’s the accumulated judgment about your customers, your margins, your way of doing things. That judgment is the business.
AI sits in both columns. Some of it is genuinely commodity — transcribing a call, drafting a generic email, summarizing a PDF. Outsource that all day. But the AI that learns how you quote, how you triage tickets, how you handle the customer who always pays late but always pays — that’s an in-house hire. And right now, almost nobody is treating it that way.
What You’re Actually Hiring
When you sign up for a frontier-lab agent system, you are not buying software. You are not even buying a model. You are hiring a worker who promises to learn your business — and whose employment contract says everything they learn belongs to the staffing agency, not to you.
Read that again. The training, the corrections, the “no, that’s not how we do it here, do it this way,” the slow accumulation of context about your customers, your products, your tone, your edge cases — all of it gets absorbed into a system you don’t own. The agent gets smarter about your business every week. You get an invoice every month.
And when you ask the obvious follow-up — can I take what we’ve trained? — the answer is always some shape of no. You can export a conversation log. You can download a transcript. What you cannot do is extract the trained behavior, the prompts that work, the corrections you’ve already made, the personalization that took you four months to dial in. That all stays inside their walls.
The Master Plan, Stated Plainly
I don’t mean “master plan” in a conspiratorial sense. I mean it in the way every public-company strategy deck means it: there is a coherent commercial logic behind what these labs are building, and that logic ends with you unable to leave.
Here is the logic, line by line:
- Models alone are becoming a commodity. A good model from Lab A is now mostly interchangeable with a good model from Lab B. Margins on raw inference are compressing every quarter.
- Switching cost is the only durable moat. If a customer can move to a competitor in an afternoon, you have no pricing power. If a customer would have to rebuild six months of accumulated configuration, you do.
- The agent layer is where switching cost lives. The model is rentable. The agent — with its memory, its tools, its workflows, its hard-won corrections — is sticky.
- So build the agent layer to absorb the customer’s business behavior. Memory features. Custom GPTs. Projects. Connectors. Workspaces. Each one is sold as a convenience. Each one is a hook.
- Don’t offer export. Or offer an export that gives back the conversations but not the configuration. The customer can take the receipts but not the employee.
That’s it. That’s the play. It is being executed, in parallel, by every frontier lab and every hyperscaler with an agent strategy. They are not adversaries to each other — they all win the same way. They are adversaries to the idea that you should own what you trained.
The Cost of Realizing Too Late
Here is how it actually plays out in a small or mid-sized business. You spin up a frontier agent because someone on the team is excited about it. The first month is genuinely magical — it answers questions, drafts proposals, handles a category of work that used to land on a human. You start trusting it more. You start correcting it more. You build out a few custom workflows.
Three months in, you have meaningful productivity. You also have a quiet dependency. The agent is now part of how the business runs. New employees are trained against it. SOPs reference it. Some institutional knowledge has migrated from people’s heads into its memory and prompt set.
Six months in, a better option appears. Maybe a competitor lab releases something more capable. Maybe pricing changes. Maybe their roadmap diverges from yours. You go to evaluate the move and discover the truth: you can’t take the trained behavior. You can move the people, the data, the documents — but the agent has to start from zero. Six months of corrections, gone. The new agent will repeat every old mistake until you re-teach it.
That moment — the moment you understand the trap is closed — is exactly the moment they were designing for from day one.
Why This Is Worse Than Normal Vendor Lock-in
Software lock-in has always existed. ERP migrations are painful. CRMs are sticky. The difference with AI agents is that the lock-in target is your operational behavior — not your data, not your records, but the decision-making patterns you’ve invested in teaching a system to imitate.
You can export rows from a database. You can replatform a website. You can rebuild a spreadsheet. None of that is fun, but it is mechanical. Re-teaching a new agent how your business actually works is not mechanical. It is a months-long, judgment-intensive re-investment of the same human attention you spent the first time. The expensive resource isn’t storage or compute — it’s your team’s patience to correct the same mistakes again.
That’s why the trap works. The cost of leaving isn’t a port fee. It’s a tax on the most finite thing you have: your people’s time and attention.
The Vibe-Coder Pipeline (and Why Their Clients Are Already Captured)
There’s a second layer to this trap that’s easy to miss, because the people springing it usually don’t know they’re springing it. A new generation of “vibe coders” — freelancers and agencies building AI workflows in tools like Notion, Lovable, and n8n — are confidently delivering “custom AI solutions” to clients without realizing they are, in effect, just routing those clients’ data straight into the frontier labs through a third-party wrapper.
The platforms aren’t shady. They’re mostly honest about what they do. The problem is that the value proposition — speed, no-code, “just describe it and it builds it” — works precisely because the builder is one or two layers removed from the actual data flow. That distance is what hides the capture point.
Lovable
Lovable turns a prompt into a working app. Under the hood, code generation runs on Claude (Anthropic) by default, with GPT and Gemini as alternates. To Lovable’s credit, the generated code itself is yours — it syncs to your GitHub, uses standard React/TypeScript/Supabase, and you can theoretically eject. But the building experience — the conversation history, the iterative corrections, the project context that makes “just ask Lovable to add a feature” work — lives on Lovable’s platform and pipes through Anthropic. A vibe coder who hands a client a Lovable project has handed them a product whose ongoing development depends on three things: Lovable continuing to exist, Anthropic’s pricing not changing, and the client either continuing to pay both companies or hiring a real developer to take over a codebase the vibe coder couldn’t maintain themselves.
Notion
Notion AI is the cleanest example of the multi-party pipeline. Building “an AI workspace” for a client in Notion means their documents, customer notes, project plans, and internal knowledge are processed by Notion plus at least three model providers (Anthropic Claude, OpenAI GPT, Google Gemini, depending on which feature is invoked) plus a separate vector database provider for embeddings. On non-enterprise plans, those LLM providers retain prompt data for up to thirty days. Notion’s own default of not training on your content is genuinely good, but the data still fans out to four or more parties on every AI call. The vibe coder who set this up may not know any of that — they followed a YouTube tutorial. The client certainly doesn’t know.
n8n
n8n is the interesting one because it’s the only one of the three with a credible self-hosted option. If a developer self-hosts n8n on the client’s own infrastructure and connects it to a model endpoint the client controls, the result can actually be a legitimate, portable, non-trapping deployment. Almost no vibe coder does this. The default path is n8n Cloud, which holds workflow data, credentials, and execution logs on n8n’s servers, and routes every AI node through whichever frontier provider is configured — with the client’s production data riding through every step. n8n Cloud also doesn’t offer a signed BAA (Business Associate Agreement), which makes it a non-starter for any healthcare-adjacent business — a question the vibe coder probably never asked.
The Pattern
In each case the same thing is happening: a builder who isn’t deep enough in the stack to evaluate the data flow is selling a client “an AI solution” that is actually a thin orchestration over the frontier labs — using a hosted no-code platform as the glue. Three things are true at once:
- The platform captures the configuration and conversation history. The client can’t leave without rebuilding.
- The frontier labs see the operational data on every call. The client’s competitive context is now training surface for someone else’s commercial advantage, even when retention policies are clean.
- The builder usually can’t maintain the system below the no-code layer, so when the client does need to escape, the builder isn’t the one who gets them out — somebody else has to.
None of this means the tools are wrong to use. Lovable is a real productivity multiplier. n8n self-hosted is a serious automation platform. Notion is a great document tool. The problem is the gap between what these platforms genuinely do well and what gets sold to a client as “your custom AI system.” If you’re paying someone to build AI for your business, the question isn’t how clever the demo is. The question is the same one from earlier: if I leave this builder, this platform, or this lab tomorrow, what comes with me? If the builder can’t answer that without flinching, the trap is already set.
What “Working the Problem” Looks Like Instead
Vulcan365 doesn’t come at this as a frontier lab. We come at it as operators who would rather solve a problem than capture a customer. The principles that fall out of that orientation are not complicated:
- The behavior is yours. The prompts, the workflows, the corrections, the agent’s configured judgment — all of it lives in plain code and plain configuration files in repositories you control.
- The model is rentable. Today you might use Claude. Tomorrow GPT. Next year a model that hasn’t shipped yet, or a local one running on your own hardware. The behavior layer doesn’t care. Swap the engine; keep the car (there’s a code sketch of this below).
- The infrastructure is portable. Run it on Azure. Run it on AWS. Run it in your own datacenter. The agent framework is the same .NET code in every environment.
- The framework is open source. FabrCore is published under Apache 2.0 — forkable, auditable, runnable on your own infrastructure. If we disappeared tomorrow, your agents would keep running and you could keep developing them. That’s the actual definition of not trapped.
- The data is yours. Conversations, embeddings, knowledge bases — stored in databases you own, in tenants you control. We don’t harvest your operational data to train someone else’s model.
None of this is exotic. It’s how mature software has worked for thirty years. It only feels novel because the frontier labs have spent the last two years convincing the market that the new normal is renting your own behavior back from the company that learned it from you.
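To make the first two principles concrete, here is a minimal sketch of what a thin model contract can look like in .NET. The names here (`IChatModel`, `QuotingAgent`, the provider classes in the comments) are illustrative, not FabrCore’s actual API. The point is the shape: the vendor sits behind one small interface, and everything you’ve taught the agent lives on your side of it.

```csharp
using System.Threading.Tasks;

// The rentable part: any provider that can complete a chat turn.
// This interface is the entire surface area the vendor occupies.
public interface IChatModel
{
    Task<string> CompleteAsync(string systemPrompt, string userMessage);
}

// The part you own: the agent's behavior, in plain code in your repo.
// Prompts, business rules, and accumulated corrections live here and
// never change when the model behind IChatModel does.
public sealed class QuotingAgent
{
    private const string SystemPrompt =
        "You prepare customer quotes. Require a deposit from any " +
        "customer with two or more late payments. Never discount " +
        "below the posted floor.";

    private readonly IChatModel _model;

    public QuotingAgent(IChatModel model) => _model = model;

    public Task<string> DraftQuoteAsync(string request) =>
        _model.CompleteAsync(SystemPrompt, request);
}

// Swapping the engine is one line at the composition root:
//   var agent = new QuotingAgent(new AnthropicChatModel(key));
//   var agent = new QuotingAgent(new LocalModelAdapter(endpoint));
// (Both provider classes are hypothetical IChatModel implementations.)
```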
Outsource and In-House Together
The workforce frame doesn’t mean refusing to use frontier services. It means using them the way you use a payroll vendor: for the part that is genuinely commodity, on terms that don’t leak the parts that aren’t.
In practice, that looks like:
- Use the best frontier model you can afford for raw reasoning. That is the rentable part. Pay for it. Switch when something better arrives.
- Keep your agent definitions, your tools, your prompts, your domain knowledge, and your business rules in your own code, in your own repository, deployed to your own infrastructure (see the sketch after this list).
- Treat any vendor “memory,” “custom GPT,” or “workspace” feature as a convenience to be evaluated against the cost it adds to leaving. Sometimes the trade is worth it. Often it is not.
- Assume every frontier feature that increases stickiness will be priced or repositioned later. Plan for that day.
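Here is one sketch of what the second bullet can look like in practice. The record shape and names are illustrative, not a real FabrCore schema; the idea is that an agent definition is just data in your repository, with the vendor reduced to a single swappable field:

```csharp
// Illustrative only: one way to keep an agent's behavior as plain,
// version-controlled data. It gets reviewed in pull requests, diffed
// like any other code, and ports to any runtime that can read it.
public sealed record AgentDefinition(
    string Name,
    string Model,            // the only field that names a vendor
    string SystemPrompt,
    string[] Tools,
    string[] BusinessRules);

public static class Agents
{
    public static readonly AgentDefinition DispatchTriage = new(
        Name: "dispatch-triage",
        Model: "claude-sonnet",  // swap this string; keep everything else
        SystemPrompt: "Triage inbound service requests by urgency and region.",
        Tools: new[] { "crm.lookup", "calendar.check", "sms.notify" },
        BusinessRules: new[]
        {
            "Send the senior tech to any repeat-visit customer.",
            "Never promise same-day service after 2 p.m."
        });
}
```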
This is not a paranoid posture. It’s the same posture every operator already has toward their bank, their landlord, and their largest customer: trust the relationship while it works, but never let any single counterparty get into a position where leaving them would put you out of business.
The Quiet Test
If you want a single test for any AI vendor, ask one question: If I leave you tomorrow, what comes with me?
If the answer is “your conversation history as a JSON file,” you do not own the worker — you own a transcript of the worker. Everything else stayed.
If the answer is “the full agent definition, the prompts, the tool implementations, the workflows, the knowledge base, the configuration — in code, in your repo, runnable on any infrastructure” — that is what owning the worker actually looks like.
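Concretely, and purely as an illustration rather than a prescribed layout, a passing answer tends to look like a repository you could hand to any competent developer:

```
your-agents/
  agents/
    dispatch-triage.json    # definitions: prompts, tools, business rules
  tools/
    CrmLookupTool.cs        # tool implementations, plain .NET
  knowledge/
    pricing-floors.md       # domain knowledge the agents draw on
  infra/
    deploy.bicep            # or Terraform, or on-prem scripts
```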
That’s the line. Everything in this post is just the explanation for why that line matters.
Where We Stand
Vulcan365 is not a frontier lab and we have no ambition to become one. We build agent systems on .NET, on top of frameworks we’ve open-sourced, deployed into infrastructure our customers control. We pick the best model on the market for each job and we keep the contract with that model thin enough that we can swap it out the week after a better one ships.
We do this because it’s the right architecture for an operator’s business — not because it’s a marketing position. The architecture is the position. If a customer ever asks us “what comes with me if I leave,” the honest answer is everything, because there was never anything we were holding back.
The frontier labs are doing extraordinary work on models. We’ll keep using their models. What we won’t do is hand them the part of your business that you spent the last decade building — the way you actually run.
Keep Reading
FabrCore is open source. Read the documentation, fork the code, run it yourself. Or talk to us about how the workforce frame applies to your specific business.
About Vulcan365 AI: We build and maintain FabrCore — an open source .NET framework for distributed AI agents. We also provide consulting and development services for teams building AI agent systems. Based in Birmingham, Alabama.