Demystifying Kubernetes: CRDs and controllers for developers
Debunk common Kubernetes myths and see how the platform can be not only a container orchestrator but also a powerful, API-driven server. Then take a look at how Port enables seamless communication between custom resources and controllers within K8s.

This article is inspired by the talk Demystifying Why the World is Built on Kubernetes, originally given by Sébastien and Abby at KubeCon London in April 2025. You can watch the original talk here.
Many product teams struggle to build bespoke services on top of the Kubernetes (K8s) API server. Four common misconceptions about extending the K8s API make this harder than it needs to be, creating implementation hurdles for these teams.
This article debunks these myths about K8s and demonstrates how the platform can be not only a container orchestrator but also a powerful, API-driven server. We also examine how Port’s internal developer portal (IDP) enables seamless communication between custom resources and controllers within K8s.
What developers should know about the Kubernetes API server
We often hear that developers perceive K8s primarily as a container orchestrator. While the platform does this well, that only scratches the surface of its full capabilities.
Developers can use K8s’s powerful, extensible API to automate workflows and manage clusters. The API server supports several distinct interaction models:
- Developers extend the functionality of K8s by building Custom Resource Definitions (CRDs) and writing custom controllers or operators that act on API events. This gives them the flexibility to tailor K8s to specific infrastructure requirements and operational goals.
- Users, such as DevOps engineers, run and monitor applications on K8s and use the kubectl command-line interface (CLI) to check logs, manage pods, and confirm that apps are operating efficiently. Every kubectl command is ultimately an API request, and users can also send requests directly to the K8s API server (see the sketch after this list).
- Administrators manage the K8s infrastructure, cluster setup, node configuration, and role-based access control (RBAC). They ensure that environments are stable and performing well.
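To make that last point concrete, here is a minimal sketch of talking to the API server directly with the official Kubernetes Python client (the kubernetes package); the default namespace is just an illustrative choice.

```python
# pip install kubernetes
from kubernetes import client, config

# Load credentials the same way kubectl does (from ~/.kube/config);
# inside a cluster you would call config.load_incluster_config() instead.
config.load_kube_config()

core = client.CoreV1Api()

# Equivalent of `kubectl get pods -n default`: a GET request to the API server.
pods = core.list_namespaced_pod(namespace="default").items
for pod in pods:
    print(pod.metadata.name, pod.status.phase)

# Equivalent of `kubectl logs <pod>` for the first pod, if one exists.
if pods:
    print(core.read_namespaced_pod_log(name=pods[0].metadata.name, namespace="default"))
```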

What are Kubernetes custom resource definitions?
CRDs extend the K8s API using custom data models, enabling you to define unique resources alongside deployments, pods, and services. Think of CRDs as blueprints that describe the structure of your custom resources.
Here are the core components of the K8s CRD mechanism:
- Custom definitions allow you to create new resource types, or “kinds,” which you can then introduce and apply to K8s. These kinds are the new objects that you manage and work with.
- Native resources include deployments, pods, ingress, services, and CronJobs. These components are built-in and available out of the box in every K8s instance.
- Resource manifests define the desired state or spec of a resource using an OpenAPI schema. For instance, you can create a manifest to request “three replicas of this app” and apply it to your K8s clusters to boost resiliency.
During CRD creation, you populate identifying fields such as the resource name, the API group and version, and schema properties like “environment.” We recommend taking the time to establish these identifiers at the start of your project to make your custom resources easy to manage. The following CRD, for a hypothetical widgets.example.com resource, shows how these pieces fit together:
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                dbName:
                  type: string
                  default: postgres
                  description: A database will be created with this name.
                env:
                  type: string
                  default: dev
                  description: Prod gets backups and more resources.
                  enum:
                    - dev
                    - prod
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
```
Example: If you want to manage PostgreSQL databases, you can create a CRD that includes fields such as name, version, and environment (dev or prod).
At this stage, developers can easily create and manage these custom resources on K8s using an open-source platform engineering framework like Kratix. This platform automates resource provisioning through reusable APIs.
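Even without a framework, creating one of these custom resources is just another call to the K8s API server. The sketch below uses the official Kubernetes Python client to create an instance of the Widget kind defined above; it assumes the widgets.example.com CRD is already installed, and the resource name is a placeholder.

```python
from kubernetes import client, config

config.load_kube_config()
custom_api = client.CustomObjectsApi()

# An instance of the Widget kind described by the CRD above.
widget = {
    "apiVersion": "example.com/v1",
    "kind": "Widget",
    "metadata": {"name": "orders-db"},            # placeholder name
    "spec": {"dbName": "orders", "env": "prod"},
}

# Roughly equivalent to `kubectl create -f widget.yaml`.
custom_api.create_namespaced_custom_object(
    group="example.com",
    version="v1",
    namespace="default",
    plural="widgets",
    body=widget,
)
```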

After successfully defining the CRD, you can develop custom K8s controllers. Custom controllers are proactive watchers that respond to changes to your resources in the cluster. They continuously compare the current state of the system against the desired state declared in your resources and reconcile any drift.
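That reconcile loop can be surprisingly small. Below is an illustrative watch loop in Python for the hypothetical Widget resource defined earlier; a production controller would also need error handling, resyncs, and finalizers, which this sketch omits.

```python
from kubernetes import client, config, watch

config.load_kube_config()
custom_api = client.CustomObjectsApi()

def reconcile(widget):
    # Compare the desired state (the spec) with the actual state and close the gap.
    # A real controller would, for example, provision or update a database here.
    spec = widget.get("spec", {})
    name = widget["metadata"]["name"]
    print(f"Ensuring database '{spec.get('dbName')}' exists for Widget '{name}' in env '{spec.get('env')}'")

# Stream ADDED/MODIFIED/DELETED events for Widget objects from the API server.
watcher = watch.Watch()
for event in watcher.stream(custom_api.list_cluster_custom_object,
                            group="example.com", version="v1", plural="widgets"):
    if event["type"] in ("ADDED", "MODIFIED"):
        reconcile(event["object"])
```

This is ordinary client code talking to the API server, a point the myths below return to.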
Four myths about extending Kubernetes with controllers
Now that you know how CRDs and K8s controllers work together, let’s debunk a few myths about their flexibility and usage in production.
Myth 1: Controllers are only for native Kubernetes resources
Many engineers believe that controllers can only be used to manage K8s resources like pods. In reality, they can also interact with external APIs and manage bare metal (physical) servers. This flexibility enables you to improve inter-system communications, streamline workflows, eliminate manual processes, and elevate team productivity.
Myth 2: You can only write Kubernetes controllers in Go
False! You can write custom controllers (which typically run as pods in the cluster) in many popular programming languages, including Python, Java, Node.js, and even shell scripts. A controller is simply a client of the K8s API server, which manages your cluster resources, so you are not limited to Go.
The real issue arises when teams write K8s controllers in different languages without clear standards. You can define and enforce those standards manually, but many teams find it easier to use an IDP to ensure consistency.
For instance, you can use Port’s flexible IDP to:
- Specify the languages (e.g., Python, Java, and more) that developers use to write controllers in your software catalog.
- Leverage scorecards to restrict developers from writing and deploying controllers in unfamiliar languages.
These features help engineers across teams, disciplines, and levels of language expertise stay aligned, even when a controller is written in a language they rarely use.
Myth 3: You need to know everything before starting
The truth is that developers can learn and build K8s controllers incrementally, so they do not need to be experts from the outset. We recommend starting small with a simple controller that does one thing well, then iterating from there as your familiarity with the K8s API server grows.
Myth 4: Building on Kubernetes is only for vendors
Building on K8s is not reserved for cloud providers such as Microsoft Azure and AWS, or for other software vendors. In reality, any team can leverage CRDs and custom controllers to extend its IDP, streamline self-service operations, and more.
The portal can also make K8s accessible to multiple teams across your organization, including developers, DevOps, and platform engineers, while controlling access and resource allocation.
Streamline critical workflows with Port’s unified API catalog
Having discussed the power of K8s controllers, let’s now explore how Port uses a single API in our real-time software catalog.
Port’s API enables platform engineers to unify all their developer tools, including K8s, and manage them within a single, centralized software catalog. This powerful API catalog integrates seamlessly with K8s through CRDs and controllers, allowing engineers to:
- Access one source of truth for critical K8s data, reducing cognitive load, silos, and context switching.
- Ingest K8s controllers to automatically update and manage software, reducing manual effort and accelerating deployment cycles.
- Improve visibility of your clusters and maintain consistency across environments.
- Initiate rollbacks when version issues emerge.

Further, Port enables teams to view, test, and measure the performance of their entities within the portal. This helps identify and resolve issues early, enhancing operational efficiency. Teams also utilize Port’s granular RBAC to manage permissions and strengthen internal security.
Do more with the Kubernetes API server and Port
The K8s platform is much more than a container orchestrator; it provides a robust API server that developers can extend with custom data models and controllers.
Port’s flexible IDP solution augments the power of the K8s API server. For instance, Xceptor, a data automation provider, accelerated its K8s implementation and aligned teams by adopting Port’s developer portal early. Custom abstraction layers mitigated complexity, elevating engineer productivity and focus. Explore Xceptor’s experience using Port to speed up their K8s adoption here.