AI + Engineering intelligence: Measuring agentic impact and ROI

Learn more about how to adjust your measurement strategies in the AI era with Port.

Matan Grady
October 28, 2025

DORA metrics first emerged as a means of quantifying the impact DevOps teams had on throughput and stability. At the time, developer teams comprised several humans engaged in the same or similar tasks of coding, testing, and pushing releases to production. Now, with AI agents and workflows on the precipice of mainstream adoption, we can clearly see that things have changed since the original framework came out in 2013. 

Port has always embraced DORA metrics as a means of improving developer experience and efficiency. But the old DORA can’t paint the entire picture of AI’s impact, primarily because where and how AI is implemented matters a great deal to its success.

We knew we needed new metrics for AI success that go beyond the traditional set. The 2025 DORA State of AI-assisted Software Development Report, released in September, proved our point further.

In this post, we’ll take a look at the DORA AI Capabilities Model and offer our own insights and recommendations for measuring AI impact, efficiency, and ROI.

What is the DORA AI Capabilities Model?

The DORA AI Capabilities Model is a set of seven factors that “help amplify the benefits of AI adoption,” per the report’s authors. These seven capabilities are:

  1. A user-centric focus
  2. Strong version control practices
  3. AI-accessible internal data
  4. Working in small batches
  5. A clear and communicated AI stance
  6. Quality internal platforms
  7. Healthy data ecosystems
Source: 2025 State of AI-Assisted Software Development

These are clearly more qualitative points of consideration than hard-and-fast metrics. But there are many ways to tie metrics to each of these capabilities, even above and beyond the core DORA metrics.

Strong version control practices, for example, can be enforced and quantified using scorecards, a key pillar of internal developer portals (IDPs). Working in small batches makes it significantly easier to monitor your deployment frequency and change failure rate, two key DORA metrics that measure throughput. 
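
For instance, both of those throughput metrics fall out of simple arithmetic over your deployment records. Here's a minimal Python sketch, assuming hypothetical record fields rather than any particular tool's schema:

```python
from datetime import date

# Hypothetical deployment records; in practice these would come from
# your CI/CD system or portal catalog, not hard-coded dicts.
deployments = [
    {"day": date(2025, 10, 1), "caused_failure": False},
    {"day": date(2025, 10, 1), "caused_failure": False},
    {"day": date(2025, 10, 2), "caused_failure": True},
    {"day": date(2025, 10, 3), "caused_failure": False},
]

# Deployment frequency: deployments per day over the observed window.
window_days = (max(d["day"] for d in deployments)
               - min(d["day"] for d in deployments)).days + 1
deployment_frequency = len(deployments) / window_days

# Change failure rate: share of deployments that caused a production failure.
change_failure_rate = sum(d["caused_failure"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f}/day")  # 1.33/day
print(f"Change failure rate: {change_failure_rate:.0%}")        # 25%
```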

The other thing you may notice here is that nearly none of these capability factors involves much coding. Lines of code (LoC) used to be the ultimate signifier of productivity, but what does that matter when developers spend only 10 percent of their time writing code?

Source: Gartner, Emerging Tech: AI Developer Tools Must Span SDLC Phases to Deliver Value, 29 January 2025

If AI tools are left only to write code, without access to your systems, engineering context, or domain knowledge, their benefits will remain confined to coding problems. At Port, we embrace the idea that AI needs to be deeply integrated into your software development lifecycle (SDLC), not treated as just a coding and chat assistant.

The main takeaway we had from this year's DORA report is that, without the right context, permissions, guardrails, guidelines, and human involvement throughout the process, AI can’t reach its true transformative potential. Five of the seven capabilities point to a need for strong systems, engineering platforms, and domain-integrated context provided in an AI-accessible way. 

How Port delivers agentic engineering at scale

Port is specifically designed to help you fully embed and integrate AI agents and agentic workflows into your software development process. As first-class users of the platform, agents benefit from Port in the following ways:

| Feature | Description | Benefits | DORA metrics impact |
| --- | --- | --- | --- |
| Context lake | Offers AI agents engineering metadata, domain knowledge, and real-time operational status information alongside the actions agents can access independently. | Agents can act autonomously because they can see and understand your engineering environments. As a result, they can take deeper, more meaningful actions beyond simply writing code. | Reduce change failure rates with proper context integration; reduce MTTR with fewer mistakes pushed to production |
| Actions and workflows | Part of the context lake; a combination of single and chained self-service actions, exposed to AI agents via role-based access controls (RBAC). | Actions and workflows harden pathways to production, which gives both agents and humans repeatable, secure pipelines to navigate autonomously. | Shrink lead time for changes with agentic speed; increase deployment frequency with agentic releases |
| Scorecards | Set engineering standards and codify them in your platform to ensure every release meets protocol. | Scorecards help everyone improve their security posture and maintain engineering standards. Results allow agents to learn from past mistakes and prevent them in future runs. | Reduce change failure rate by creating production-ready releases on the first try |
| Access controls | Provides fine-grained RBAC for both humans and agents, modeled after your SSO systems. | Agents can only take actions that are human-approved. Workflows built with RBAC considerations also incorporate human-in-the-loop approvals directly in the platform. | Reduce change failure rates, including the impact of destructive actions |
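
To make the scorecards row concrete, here's an illustrative sketch, in plain Python rather than Port's actual scorecard schema, of how codified standards can be checked against a service:

```python
# Illustrative only -- not Port's actual scorecard schema. A scorecard is
# a set of rules evaluated against each service in the catalog.
scorecard = {
    "title": "Version control practices",
    "rules": [
        {"name": "Branch protection enabled",
         "check": lambda svc: svc["branch_protection"]},
        {"name": "PRs require review",
         "check": lambda svc: svc["required_reviewers"] >= 1},
        {"name": "No direct pushes to main",
         "check": lambda svc: not svc["allows_direct_push"]},
    ],
}

# A hypothetical service entity pulled from the catalog.
service = {"branch_protection": True, "required_reviewers": 2, "allows_direct_push": True}

for rule in scorecard["rules"]:
    status = "PASS" if rule["check"](service) else "FAIL"
    print(f"{status}: {rule['name']}")
```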

Two essential metrics for measuring AI adoption

As this year's DORA report indicates, engineering metrics need to evolve alongside AI agents and the infrastructure that emerges to support them. This is the primary reason we launched our Agentic Engineering Platform (AEP) at Port.

After working with dozens of strategic customers to develop the AEP, we maintain that DORA metrics offer a clear framework for measuring throughput and stability. But in addition to those metrics, we recommend adding two more to effectively measure AI’s impact:

  1. Execution time (how long it takes to complete a single task) 
  2. Time-to-market (how fast value reaches production, in terms of features released)

While these two metrics mostly relate to throughput, your stability index should not change when you introduce AI into your processes.

Effective AI implementations and agentic engineering should reduce execution time and time-to-market. When adopted successfully, your teams’ use of AI translates into features reaching production and the wider market faster, giving you a significant advantage over competitors.

But it also means delivering more features in the same amount of time. AI’s impact as a force-multiplier isn’t restricted to speed; it should also lift throughput, meaning deployment frequency should increase.

At the same time, however, AI should not decrease the stability of your product. With more code moving into production under an effective AI adoption process, a certain amount of instability may be considered within reason or statistically insignificant. But prolonged, consistent deterioration of your product’s stability likely indicates that the code AI is producing, or the actions it’s taking during autonomous work, need review and iteration.

To understand the impact of AI agents on your overall efficiency, we recommend comparing execution time and time-to-market durations pre- and post-AI implementation.
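
A minimal sketch of that pre/post comparison, assuming you log task start, completion, and production-release timestamps (the field names here are hypothetical):

```python
from datetime import datetime
from statistics import median

# Hypothetical task records: when work started, when the task finished,
# and when the resulting change reached production.
tasks_pre_ai = [
    {"started": datetime(2025, 3, 3, 9), "finished": datetime(2025, 3, 4, 17),
     "in_production": datetime(2025, 3, 10, 12)},
]
tasks_post_ai = [
    {"started": datetime(2025, 9, 1, 9), "finished": datetime(2025, 9, 1, 13),
     "in_production": datetime(2025, 9, 3, 15)},
]

def execution_time(tasks):
    """Median hours from task start to task completion."""
    return median((t["finished"] - t["started"]).total_seconds() / 3600 for t in tasks)

def time_to_market(tasks):
    """Median hours from task start until the change is live in production."""
    return median((t["in_production"] - t["started"]).total_seconds() / 3600 for t in tasks)

print(f"Execution time: {execution_time(tasks_pre_ai):.1f}h -> {execution_time(tasks_post_ai):.1f}h")
print(f"Time-to-market: {time_to_market(tasks_pre_ai):.1f}h -> {time_to_market(tasks_post_ai):.1f}h")
```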

An AI ROI framework for engineering teams

Outside of using the DORA framework to gauge throughput and stability, calculating the return on your investment into AI adoption is essential to understanding the full impact of your implementation. 

Building on our earlier work measuring the ROI of GenAI, we maintain that agentic AI must have an impact on your bottom line. To understand this impact, we recommend using metrics across three essential categories:

1. Usage and adoption

  • AI agent throughput: How many tasks are assigned to AI agents per week across all engineering tasks (e.g., three AI agents write three net-new docs each, per day)
  • AI agent utilization rate across teams: How many AI agents are used per team (e.g., the frontend team uses three agents per dev, but security uses two agents per dev)
  • Token costs mapped to business outcomes: How many agents used how many tokens to complete PR review in a single week 
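
Here's a hedged sketch of how you might roll token usage up to a business outcome; the event fields and the per-token price are illustrative assumptions, not Port's data model:

```python
from collections import defaultdict

# Hypothetical agent-activity events, each tagged with the business
# outcome it served; the assumed price per 1K tokens is illustrative only.
PRICE_PER_1K_TOKENS = 0.01
events = [
    {"agent": "pr-reviewer", "outcome": "PR review", "tokens": 42_000},
    {"agent": "doc-writer", "outcome": "feature documentation", "tokens": 18_500},
    {"agent": "pr-reviewer", "outcome": "PR review", "tokens": 37_200},
]

cost_by_outcome = defaultdict(float)
for e in events:
    cost_by_outcome[e["outcome"]] += e["tokens"] / 1000 * PRICE_PER_1K_TOKENS

for outcome, cost in sorted(cost_by_outcome.items()):
    print(f"{outcome}: ${cost:.2f} this week")
```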

2. Time impact

  • Execution savings: How quickly direct tasks are completed (e.g., net-new documentation creation drops from hours to minutes)
  • Flow efficiency: How the duration of wait states changes (e.g., PR review times drop from days to hours with AI-assisted reviews)
  • Time savings: Hours saved × frequency × developer cost (e.g., if unit testing took three devs two hours per week, AI saves both the development time and cost)
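
To make the time savings formula concrete, here's a quick worked example; the $100/hour loaded developer cost is an assumption for illustration:

```python
# Time savings = hours saved x frequency x developer cost.
hours_saved_per_run = 2          # unit testing took each dev 2 hours
devs_affected = 3                # three devs did this work
runs_per_week = 1                # once per week
developer_cost_per_hour = 100    # assumed loaded cost, USD

weekly_savings = hours_saved_per_run * devs_affected * runs_per_week * developer_cost_per_hour
print(f"${weekly_savings}/week, ~${weekly_savings * 52:,}/year")  # $600/week, ~$31,200/year
```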

3. Quality and trust

  • Human intervention rate per AI workflow (e.g., how many times humans had to revert agent-produced code or reorient agents in workflows)
  • Developer satisfaction surveys on AI assistance (e.g., has AI caused more frustration than helpful impact?)
  • Success rate of AI-completed tasks (e.g., how often AI-produced code makes it into production)
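
The first and third of these can be computed directly from per-run records of your AI workflows. A minimal sketch, again with hypothetical fields:

```python
# Hypothetical per-run records for a single AI workflow.
runs = [
    {"human_intervened": False, "reached_production": True},
    {"human_intervened": True,  "reached_production": True},
    {"human_intervened": False, "reached_production": False},
    {"human_intervened": False, "reached_production": True},
]

intervention_rate = sum(r["human_intervened"] for r in runs) / len(runs)
success_rate = sum(r["reached_production"] for r in runs) / len(runs)

print(f"Human intervention rate: {intervention_rate:.0%}")  # 25%
print(f"AI task success rate: {success_rate:.0%}")          # 75%
```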

How to measure the ROI of agentic workflows in Port

Alongside accelerating overall delivery speeds, Port tracks and stores data related to all of the previously mentioned metrics categories. The Agentic Work Management solution is built to surface your entire usage and adoption picture.

Image caption: The Agentic Work Management solution as seen in Port. A dashboard displays a count of PRs created by agents that need approvals.

To act on what you learn in Port, start by instrumenting your AI workflows with proper tracking. This pulls all relevant data into your Engineering Intelligence dashboard.

Image caption: The Engineering Intelligence solution as seen in Port. A dashboard displays a high AI-generated code PR merge rate, an increasing AI PR throughput, and a reduction in the need for human approvals. Deployment frequency increases as a result, and the average MTTR is 1.3 hours.

Through Port's audit logs, you can track data that answers questions like, “How many tokens did our AI agent consume to complete PR review?” and even query our native MCP server for answers. 

These trace logs also capture agent activity, token usage, and business context, allowing you to connect AI costs directly to outcomes like “feature documentation” or “security review.” 

This visibility enables you to benchmark agent adoption and scope improvements. For example, if documentation previously took a few days to create from end to end, you can now measure both the execution speedup and the overall delivery acceleration as separate metrics, which helps you understand the value AI provides.

Once you understand where your bottlenecks are, you can consider creating governed agentic workflows to reduce them.

How to start measuring your agentic impact

To decide where to start building, weigh impact × effort × risk and highlight high-ROI areas to invest in first. It’s important to look for high-value, low-risk initiatives that you can iterate on a few times before launching to your wider team. Starting small helps you gain confidence when building new functionality, build trust with your teams and stakeholders, and measure changes in controlled environments.

Developers are likely already experimenting with AI, so ask your teams where they’ve found it most useful. Implementing that same change across teams is another great way to bolster your confidence and trust in AI across the board.

Port can further help you determine where those high-ROI spots are via surveys, and it provides the platform where you can build and execute governed agentic workflows against them.
