Platform engineering in the age of AI

AI platform engineering can expand the capabilities of your AI agents the same way it does for your human devs. Learn how to go from tickets to prompts with AI.

August 28, 2025



If you’ve ever waited on approval for a new testing environment, you know what it means to be stuck: the thrill of working on a new feature quickly fades as you realize how many tickets you need to open. Maybe if the stars align you’ll get your testing environment in a week.

Platform engineering changes this, and AI kicks that change up a notch. AI lets developers interact with platforms using natural language, and it lets AI agents handle much of the heavy lifting of software development: writing code, unit tests, and root cause analyses.

What used to be a frustrating, manual process has the potential to become a smooth, almost frictionless experience with AI. But what really makes this possible is the underlying platform the AI uses to complete tasks: a clear, concise platform for orchestration that treats AI agents and human developers as equal partners in the process of building software. 

Port delivers on this orchestration model at scale, providing the same capabilities for AI agents that we've always provided for human devs. In this post, we'll talk about how to enable your AI agents smoothly and securely by applying platform engineering best practices.

What platform engineering can do for AI agents

Platform engineering is an evolution of DevOps practices. At Port, we believe that if humans can talk to AI the way they talk to each other, platform engineers will need to manage AI agents the same way they manage their human devs. This is where internal developer portals (IDPs) come in.

DevOps practices improved response times, sure, but developers didn't feel much of that time savings because they still had to open a ticket or ask for help. The cognitive load of resource provisioning shifted to devs, who had to explain what they wanted without really knowing what was possible.

Platform engineering emerged to bridge the gap between developers and infrastructure teams, using internal developer platforms and self-service portals. These orchestration layers gave platform engineers a concrete place and structure to build golden paths through their software development lifecycles, allowing devs to click buttons, fill out forms, and provision resources without opening tickets or waiting on DevOps for approval. 

IDPs gave developers autonomy and a unified vision of their SDLC:

Image description: On the left, software development stakeholders across the SDLC have no single source of truth. On the right, Port unifies all stakeholders by providing a single context lake for all engineering metadata.

We’ve come to learn a few things at Port about this model:

  1. IDPs are only as useful as they are fast: Your platform builders have to translate the efficiencies of DevOps transformations into meaningful time and complexity savings for developers.
  2. The platform-as-a-product approach works: When you regularly add new features to your portal, support edge cases, and keep up with your devs’ demand for more complex self-service actions, you get a happier dev team and faster software development.
  3. Humans and AI agents need to work from the same context: If you’ve built a platform or portal already, this could mean exposing your existing actions and controls to AI; if not, you may want to consider AI use cases and build toward a human-in-the-loop approach when building out your new platform. 

The only thing that has changed now is that your portals must include AI. The more developers rely on AI tools — and they’re using them, whether your organization has rolled any tools out officially or not — the more important it becomes that your platform team expands the portal’s capabilities to include AI as first-class users of the portal.

MCP x IDP: Adapting your portal for AI

Adding AI to your IDP isn’t just a cool UX upgrade. It’s a fundamental shift in how we build and ship software:

  • AI helps developers stay in flow, reduce friction, and focus on what matters: solving problems and building products.
  • AI helps platform teams scale operations without scaling headcount by letting machines handle the repetitive, reactive work in controlled environments.
  • Organizations benefit from the speed and safety of a better developer experience, without compromising on control.
  • Security teams worried about AI get guardrails against agentic chaos: central coordination, shared rules, and shared context keep multiple AI agents (coding, testing, security, etc.) from acting independently.

Internal developer portals do shift infrastructure ownership left, but they don’t eliminate all friction. When a portal can’t handle a new use case, developers are back in Slack, asking for help, waiting for manual support, or requesting a new feature. 

Orchestrating AI is just another use case platform engineers need to support in their portals. AI agents need to be orchestrated and governed, just as developers have historically needed context and guardrails to navigate their software development life cycle: 

Image description: A Venn diagram showing that AI agents need similar things to human developers, and platform engineering addresses that overlap.

Portals are a massive leap forward, no doubt, but they still sit slightly outside the day-to-day flow of development. Still, if you’ve already built out some functionality in your portal, you’re a step ahead of everyone else when it comes to adopting AI. 

Here’s what we mean: Imagine a scenario where you're deep in the zone, iterating on a feature with Cursor. You have to test something now, which means you have two options: 

  1. You can test it manually, submitting a ticket to get a testing environment from DevOps, writing your own tests, and asking Cursor to check your work. You hope for the best.
  2. You can direct Cursor to use Port’s MCP server to complete testing for you.

Option 2 is obviously faster. But it won’t really be more efficient if the tests fail, or if Cursor misses flaws you’ll have to fix later anyway. This is where Port’s MCP server stands out from other options.

Using Port’s remote MCP server

Port’s remote MCP server minimizes rework because it makes all of the self-service actions you’ve defined for developers accessible to AI agents. When you connect Cursor or any other AI tool to Port’s MCP server, the AI can see all of the tools you’ve built in Port and provided to your developers. 

In our case above, Cursor can use Port’s MCP server to find the action that will let it create a dev env. It will also see what it must provide or do in order to perform the action successfully, such as branch details. 

The MCP server gives AI agents a way to look up the necessary info to run the action, and returns a helpful, in-context, and more detailed response to the request to spin up a new environment, such as where it is and when it will expire.

Without the MCP server, the best response Cursor could give would be something like, “Go into Port and run the 'create dev env' action.”

Now, if you want to test something, all you need to do is chat to your AI agent pair programmer:

"Hey, spin up a new staging env for feature-payments, add Redis, and auto-delete it in 72 hours."

Seconds later, the AI agent will reply:

"Done. Here's your environment: [link]"

There’s no form to fill out, no ticket to submit, and no guesswork. You ask for what you need, and you get it.
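Under the hood, an exchange like this boils down to the AI client sending the MCP server a JSON-RPC 2.0 tool call. A minimal sketch of what that request might look like, where the tool name `create_dev_env` and its arguments are hypothetical stand-ins for whatever self-service actions you've actually defined in your portal:

```python
import json

# MCP tool invocations are JSON-RPC 2.0 requests using the "tools/call"
# method. The tool name and argument fields below are hypothetical; real
# names are discovered by the client via the MCP "tools/list" method.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_dev_env",  # hypothetical action name
        "arguments": {
            "service": "feature-payments",
            "addons": ["redis"],
            "ttl_hours": 72,
        },
    },
}

print(json.dumps(request, indent=2))
```

The agent never fills out a form; it maps your natural-language ask onto the structured arguments the action expects.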

This is the promise of AI agents, when thoughtfully controlled and guided by an internal developer portal. This is what the new normal could be: powered by AI agents, chat interfaces, and workflow engines that understand developer intent and translate it into infrastructure actions.

How MCP servers improve DevEx

As we mentioned, Port doesn’t just manage AI agents; it also empowers technical teams to lead in the areas they know best. Port serves as a control and orchestration layer that coordinates AI agents and human roles with clarity, context, and governance.

Port helps you build a robust, agentic system for software delivery, including a context lake that is completely machine-readable and searchable via the MCP server:

Image description: A new agentic workflow in Port for resolving a P3 bug using data inside Port to enrich the Jira ticket, provide the data to Copilot, and submit the fix for human review before deployment.

A context lake is a single conduit that agents use to:

  • Get the data they need from a software catalog
  • See the actions they can use, in the context of the specific task they perform
  • View and maintain proper permissions and guardrails while acting autonomously
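To make the second and third bullets concrete, here is a toy model of how an orchestration layer might scope the actions an agent can see to its role and the domain of its current task. The action names, roles, and fields are illustrative assumptions, not Port's actual schema:

```python
# Illustrative only: a minimal model of context-scoped action discovery.
# Every name and field here is hypothetical, not Port's real data model.
ACTIONS = [
    {"name": "create_dev_env", "domain": "environments", "requires": "developer"},
    {"name": "extend_ttl", "domain": "environments", "requires": "developer"},
    {"name": "update_pod_count", "domain": "sre", "requires": "sre"},
]

def visible_actions(role: str, domain: str) -> list[str]:
    """Return only the actions an agent may invoke for a given role and task domain."""
    return [
        a["name"]
        for a in ACTIONS
        if a["domain"] == domain and a["requires"] == role
    ]

# A developer-scoped agent working on environments never even sees SRE actions.
print(visible_actions("developer", "environments"))
```

The point of the sketch: guardrails live in the platform layer, so the agent can act autonomously only within the slice of the catalog it is allowed to touch.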

Port enables a human-in-the-loop approach to AI management. It keeps control in your team’s hands while agents accelerate everyday tasks. Every AI agent decision can be routed to the appropriate channels for review, approval, or revision and resubmission. 

Better yet, your developers, SREs, and managers can track the status of any and all agent-driven tasks, from coding to workflows, pipelines, and incident response, through a single interface that spans the full software lifecycle. This alignment keeps everyone informed. 

There are also many developer experience improvements to be gained when your portal treats AI as first-class users:

  • It’s fast: You get accurate, usable responses in seconds, not hours or days.

  • It’s conversational: Just ask! No need to learn internal CLI tools or dig through docs.

  • It’s context-aware: The bot knows your team, your repo, your service defaults. No prompt engineering or lengthy specs to write.

  • You’re asked smart follow-ups: Forgot to mention a region or resource limit? The bot will ask (nicely).

  • You can repeat it forever: Codify your actions into self-service tools, bookmark the interaction or save it to your chatbot’s memory, and reuse it later with one click.

  • It feels easy: This is what DevOps ideals once promised us: the feeling that the system is finally working with you instead of against you.

People and agents can work together in harmony

Port meets teams where they’re already working, and integrates AI seamlessly:

  • SREs can trigger agent actions (like spinning up cloud resources) directly from Cursor, and oversee automated CI fix attempts with clear success/fail indicators.
  • Developers can ask Slack, “How many customers are impacted by this bug?”, consume the agent’s output through Jira tickets with contextual insights (e.g., affected microservice, recent deploy logs, test coverage), or receive AI suggestions in Slack channels as they code or triage.
  • Managers get a real-time AI control center with a full view of agentic tasks: how often they’re completed, which agents are assigned to which task, and so on. That data can later seed new initiatives for agents and engineers to address together. 

You can see all of these features in action in our demo video, or take a stab at creating them yourself using our documentation guides.

Wrapping up

We’ve gone from submitting tickets to clicking buttons to simply asking for what we need. As a developer, you no longer need to know how the system works under the hood; with an AI-powered platform, you just describe what you want and the system figures out the rest.

Platform engineering needs to adapt to serve AI agents and users because they are — or shortly will become — essential parts of your SDLC. Port can help unify and organize your platform to make it easily accessible to AI agents alongside humans, and provide the context AI needs to perform up to par. Learn more about platform engineering and how developers' roles will change with AI.

{{survey-buttons}}

Get your survey template today

By clicking this button, you agree to our Terms of Use and Privacy Policy
{{survey}}

Download your survey template today

By clicking this button, you agree to our Terms of Use and Privacy Policy
{{roadmap}}

Free Roadmap planner for Platform Engineering teams

  • Set Clear Goals for Your Portal

  • Define Features and Milestones

  • Stay Aligned and Keep Moving Forward

{{rfp}}

Free RFP template for Internal Developer Portal

Creating an RFP for an internal developer portal doesn’t have to be complex. Our template gives you a streamlined path to start strong and ensure you’re covering all the key details.

{{ai_jq}}

Leverage AI to generate optimized JQ commands

Test them in real time and refine your approach instantly. This powerful tool lets you experiment, troubleshoot, and fine-tune your queries, taking your development workflow to the next level.

{{cta_1}}

Check out Port's pre-populated demo and see what it's all about.

Check live demo

No email required

{{cta_survey}}

Check out the 2025 State of Internal Developer Portals report

See the full report

No email required

{{cta_2}}

Contact sales for a technical product walkthrough

Let’s start
{{cta_3}}

Open a free Port account. No credit card required

Let’s start
{{cta_4}}

Watch Port live coding videos - setting up an internal developer portal & platform


{{cta-demo}}
{{reading-box-backstage-vs-port}}
{{cta-backstage-docs-button}}


Order Domain

{
  "properties": {},
  "relations": {},
  "title": "Orders",
  "identifier": "Orders"
}

Cart System

{
  "properties": {},
  "relations": {
    "domain": "Orders"
  },
  "identifier": "Cart",
  "title": "Cart"
}

Products System

{
  "properties": {},
  "relations": {
    "domain": "Orders"
  },
  "identifier": "Products",
  "title": "Products"
}

Cart Resource

{
  "properties": {
    "type": "postgress"
  },
  "relations": {},
  "icon": "GPU",
  "title": "Cart SQL database",
  "identifier": "cart-sql-sb"
}

Cart API

{
 "identifier": "CartAPI",
 "title": "Cart API",
 "blueprint": "API",
 "properties": {
   "type": "Open API"
 },
 "relations": {
   "provider": "CartService"
 },
 "icon": "Link"
}

Core Kafka Library

{
  "properties": {
    "type": "library"
  },
  "relations": {
    "system": "Cart"
  },
  "title": "Core Kafka Library",
  "identifier": "CoreKafkaLibrary"
}

Core Payment Library

{
  "properties": {
    "type": "library"
  },
  "relations": {
    "system": "Cart"
  },
  "title": "Core Payment Library",
  "identifier": "CorePaymentLibrary"
}

Cart Service JSON

{
 "identifier": "CartService",
 "title": "Cart Service",
 "blueprint": "Component",
 "properties": {
   "type": "service"
 },
 "relations": {
   "system": "Cart",
   "resources": [
     "cart-sql-sb"
   ],
   "consumesApi": [],
   "components": [
     "CorePaymentLibrary",
     "CoreKafkaLibrary"
   ]
 },
 "icon": "Cloud"
}

Products Service JSON

{
  "identifier": "ProductsService",
  "title": "Products Service",
  "blueprint": "Component",
  "properties": {
    "type": "service"
  },
  "relations": {
    "system": "Products",
    "consumesApi": [
      "CartAPI"
    ],
    "components": []
  }
}

Component Blueprint

{
 "identifier": "Component",
 "title": "Component",
 "icon": "Cloud",
 "schema": {
   "properties": {
     "type": {
       "enum": [
         "service",
         "library"
       ],
       "icon": "Docs",
       "type": "string",
       "enumColors": {
         "service": "blue",
         "library": "green"
       }
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "system": {
     "target": "System",
     "required": false,
     "many": false
   },
   "resources": {
     "target": "Resource",
     "required": false,
     "many": true
   },
   "consumesApi": {
     "target": "API",
     "required": false,
     "many": true
   },
   "components": {
     "target": "Component",
     "required": false,
     "many": true
   },
   "providesApi": {
     "target": "API",
     "required": false,
     "many": false
   }
 }
}

Resource Blueprint

{
 "identifier": "Resource",
 "title": "Resource",
 "icon": "DevopsTool",
 "schema": {
   "properties": {
     "type": {
       "enum": [
         "postgress",
         "kafka-topic",
         "rabbit-queue",
         "s3-bucket"
       ],
       "icon": "Docs",
       "type": "string"
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {}
}

API Blueprint

{
 "identifier": "API",
 "title": "API",
 "icon": "Link",
 "schema": {
   "properties": {
     "type": {
       "type": "string",
       "enum": [
         "Open API",
         "grpc"
       ]
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "provider": {
     "target": "Component",
     "required": true,
     "many": false
   }
 }
}

Domain Blueprint

{
 "identifier": "Domain",
 "title": "Domain",
 "icon": "Server",
 "schema": {
   "properties": {},
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {}
}

System Blueprint

{
 "identifier": "System",
 "title": "System",
 "icon": "DevopsTool",
 "schema": {
   "properties": {},
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "domain": {
     "target": "Domain",
     "required": true,
     "many": false
   }
 }
}
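As a sanity check, the entity JSONs above should line up with their blueprint's relation definitions. A minimal sketch of that check, using only the Cart Service entity and the Component blueprint relations shown above (this is an illustration of the data model, not Port's actual validation logic):

```python
# Relation definitions copied from the Component blueprint above.
blueprint_relations = {
    "system": {"target": "System", "required": False, "many": False},
    "resources": {"target": "Resource", "required": False, "many": True},
    "consumesApi": {"target": "API", "required": False, "many": True},
    "components": {"target": "Component", "required": False, "many": True},
    "providesApi": {"target": "API", "required": False, "many": False},
}

# Relations copied from the Cart Service entity above.
cart_service_relations = {
    "system": "Cart",
    "resources": ["cart-sql-sb"],
    "consumesApi": [],
    "components": ["CorePaymentLibrary", "CoreKafkaLibrary"],
}

def validate(entity_rels: dict, bp_rels: dict) -> list[str]:
    """Flag relations the blueprint doesn't define, and cardinality mismatches:
    a "many": true relation must hold a list, a "many": false relation a single value."""
    errors = []
    for name, value in entity_rels.items():
        if name not in bp_rels:
            errors.append(f"unknown relation: {name}")
        elif isinstance(value, list) != bp_rels[name]["many"]:
            errors.append(f"cardinality mismatch on: {name}")
    return errors

# An empty list means the entity is consistent with its blueprint.
print(validate(cart_service_relations, blueprint_relations))
```

This is the kind of structural consistency that lets both humans and AI agents trust the catalog: every relation an entity declares resolves to a definition on its blueprint.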
{{tabel-1}}

Microservices SDLC

  • Scaffold a new microservice

  • Deploy (canary or blue-green)

  • Feature flagging

  • Revert

  • Lock deployments

  • Add Secret

  • Force merge pull request (skip tests during a crisis)

  • Add environment variable to service

  • Add IaC to the service

  • Upgrade package version

Development environments

  • Spin up a developer environment for 5 days

  • ETL mock data to environment

  • Invite developer to the environment

  • Extend TTL by 3 days

Cloud resources

  • Provision a cloud resource

  • Modify a cloud resource

  • Get permissions to access cloud resource

SRE actions

  • Update pod count

  • Update auto-scaling group

  • Execute incident response runbook automation

Data Engineering

  • Add / Remove / Update Column to table

  • Run Airflow DAG

  • Duplicate table

Backoffice

  • Change customer configuration

  • Update customer software version

  • Upgrade / downgrade plan tier

  • Create / delete customer

Machine learning actions

  • Train model

  • Pre-process dataset

  • Deploy

  • A/B testing traffic route

  • Revert

  • Spin up remote Jupyter notebook

{{tabel-2}}

Engineering tools

  • Observability

  • Tasks management

  • CI/CD

  • On-Call management

  • Troubleshooting tools

  • DevSecOps

  • Runbooks

Infrastructure

  • Cloud Resources

  • K8S

  • Containers & Serverless

  • IaC

  • Databases

  • Environments

  • Regions

Software and more

  • Microservices

  • Docker Images

  • Docs

  • APIs

  • 3rd parties

  • Runbooks

  • Cron jobs

Starting with Port is simple, fast and free.

Let’s start