James Duffy

The Next Evolution of Platform Engineering

I have been building platform teams for a while now. Long enough to have lived through the DevOps-to-Platform-Engineering transition firsthand. That same pattern is repeating, and this time the shift is bigger.

Platform Engineering is splitting. A new discipline is emerging that I am calling AI Platform Engineering, and if you work in a regulated industry like healthcare, you cannot afford to ignore it.

# We have seen this movie before

Remember when “DevOps” was the answer to everything? Every ops engineer became a DevOps engineer overnight. The title changed but the work stayed the same for a while. Then slowly, the real shift happened. Teams started building internal developer platforms. Golden paths. Self-service infrastructure. The role genuinely evolved into something new.

That same pressure is building again. AI is not just another workload to deploy. It introduces an entirely different set of problems: model governance, prompt injection risks, data privacy boundaries, cost management for inference, and compliance frameworks that regulators are still writing in real time.

Your traditional platform team is not equipped to handle all of this. They should not have to be.

*[Figure: Venn diagram showing Platform, AI Platform, and Security converging from 2026 to 2027]*

*The three disciplines are converging fast.*

# What AI Platform Engineering actually looks like

An AI Platform team is not a machine learning team. It is not training models. It is building the guardrails, pipelines, and self-service tooling that let every other engineering team adopt AI safely and quickly.

In practice, this means owning things like:

  • LLM Gateways — A centralized proxy layer that handles authentication, rate limiting, cost tracking, and content filtering for every model call in the org. Engineers should not be copy-pasting API keys into random services.
  • Prompt management and versioning — Treating prompts like infrastructure. Version controlled, tested, reviewed. Not scattered across application code.
  • Guardrails as code — Input and output validation that is defined centrally and enforced automatically. In healthcare, this is not optional. A model hallucinating medical advice is a compliance nightmare.
  • Model routing and fallback — Abstracting away which model is being called so teams can swap providers, manage costs, and handle outages without code changes.
  • Audit and observability — Every model interaction logged, traceable, and queryable. When a regulator asks what your AI told a patient, you need an answer in minutes, not weeks.
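To make the gateway idea concrete, here is a minimal in-process sketch. It is an illustration, not a production proxy: the `LLMGateway` class, the word-count token estimate, and the toy content filter are all assumptions made for the sake of the example.

```python
import time
from dataclasses import dataclass


@dataclass
class Usage:
    calls: int = 0
    tokens: int = 0
    cost_usd: float = 0.0


class RateLimitExceeded(Exception):
    pass


class LLMGateway:
    """Central chokepoint: every model call is authenticated, rate limited,
    cost-tracked, and content-filtered before it reaches a provider."""

    def __init__(self, api_keys, rate_limit_per_min=60, price_per_1k_tokens=0.002):
        self._api_keys = set(api_keys)   # team-scoped keys, not personal ones
        self._rate_limit = rate_limit_per_min
        self._price = price_per_1k_tokens
        self._calls = {}                 # key -> timestamps of recent calls
        self._usage = {}                 # key -> Usage

    def call(self, api_key, prompt, model="default"):
        if api_key not in self._api_keys:
            raise PermissionError("unknown API key")
        now = time.monotonic()
        window = [t for t in self._calls.get(api_key, []) if now - t < 60]
        if len(window) >= self._rate_limit:
            raise RateLimitExceeded("per-minute quota exhausted")
        window.append(now)
        self._calls[api_key] = window
        # Toy content filter: block an obvious PHI marker before it leaves the org.
        if "ssn:" in prompt.lower():
            raise ValueError("prompt blocked by content filter")
        tokens = len(prompt.split())     # crude token estimate for the sketch
        usage = self._usage.setdefault(api_key, Usage())
        usage.calls += 1
        usage.tokens += tokens
        usage.cost_usd += tokens / 1000 * self._price
        return f"[{model}] response to: {prompt[:20]}"  # stand-in for the provider call

    def usage(self, api_key):
        return self._usage.get(api_key, Usage())
```

The point is not the fifty lines of Python. It is that authentication, quotas, cost, and filtering live in one place the platform team owns, instead of being re-implemented (or skipped) in every service.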

None of this is theoretical. These are real problems I am seeing teams try to solve with duct tape and hope right now.
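Model routing and fallback, in particular, can start as small as an ordered list of providers behind one interface. A hedged sketch, where the provider names and the `complete()` interface are illustrative assumptions:

```python
class ProviderError(Exception):
    pass


def make_provider(name, healthy=True):
    """Stand-in for a real provider client; raises when 'unhealthy'."""
    def call(prompt):
        if not healthy:
            raise ProviderError(f"{name} unavailable")
        return f"{name}: {prompt}"
    return call


class ModelRouter:
    """Application code asks for a capability tier ('fast', 'accurate');
    the platform decides which provider actually serves it."""

    def __init__(self, routes):
        # routes: tier -> ordered list of provider callables, primary first
        self._routes = routes

    def complete(self, tier, prompt):
        last_err = None
        for provider in self._routes[tier]:
            try:
                return provider(prompt)   # first healthy provider wins
            except ProviderError as err:
                last_err = err            # fall through to the next provider
        raise last_err
```

Because teams depend on the tier name rather than the provider, swapping vendors or surviving an outage becomes a routing-table change, not a code change.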

# Why regulated industries cannot wait

In healthcare, the stakes are different. A chatbot giving bad medical guidance is not just a bad user experience. It is a liability. It is a potential HIPAA violation. It could hurt someone.

This is exactly why AI Platform Engineering matters most in environments like ours. The guardrails cannot be an afterthought bolted on after the product team ships something. They need to be baked into the platform from day one.

The teams that figure this out early gain a massive advantage. They move faster because their engineers are not scared to use AI. They have clear boundaries. They know what is allowed, what is logged, and what is blocked. That confidence unlocks velocity.

The teams that wait end up with shadow AI. Engineers using personal API keys. Prompts with PHI flowing through third-party services nobody vetted. It is the same shadow IT problem we dealt with a decade ago, just with higher stakes.

# The convergence

Look at that diagram again. In 2026, Platform, AI Platform, and Security are still somewhat separate concerns with some overlap. By 2027, I believe these circles will have pulled significantly closer together.

Security teams will need to understand AI-specific threat models. AI Platform teams will need deep infrastructure and networking knowledge. Traditional platform engineers will need to understand model serving, GPU scheduling, and inference optimization.

The boundaries blur. The best teams will be the ones that recognize this convergence early and start cross-training now.

# What to do about it

If you are leading a platform team today, start by acknowledging that AI infrastructure is a distinct problem space. Do not just hand it to your existing platform engineers and hope for the best.

Start small. Stand up an LLM gateway. Centralize your API keys. Build an audit trail. Get your security team in the room early. These are not massive investments, but they set the foundation.
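Of those first steps, the audit trail is the one teams most often under-build. A minimal sketch, assuming hashed payloads in an append-only store; the record fields and class names are illustrative:

```python
import hashlib
import time


class AuditLog:
    """Append-only record of every model interaction, queryable by team or model."""

    def __init__(self):
        self._records = []  # in production: durable, append-only storage

    def record(self, team, model, prompt, response):
        entry = {
            "ts": time.time(),
            "team": team,
            "model": model,
            # Hash payloads so the log is queryable without storing raw PHI;
            # the raw text can live in a separately access-controlled store.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        }
        self._records.append(entry)
        return entry

    def query(self, team=None, model=None):
        """Answer 'what did our AI say, when, on whose behalf' in minutes."""
        return [
            r for r in self._records
            if (team is None or r["team"] == team)
            and (model is None or r["model"] == model)
        ]
```

Even this toy version changes the regulator conversation: the question becomes a query, not an archaeology project across a dozen application logs.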

And if you are in healthcare or another regulated space, treat this as urgent. The compliance landscape for AI is forming right now. The organizations that build the right patterns today will be the ones writing the playbook everyone else follows.

The DevOps-to-Platform-Engineering shift took years to fully materialize. This one is going to move faster. AI adoption is accelerating too quickly for it not to.

The question is not whether AI Platform Engineering becomes a real discipline. It is whether your team is ready when it does.