Windmill v1.661.0 introduces OpenTelemetry metrics support, enabling standardized observability across your workflow infrastructure. Here's what changed and why it matters for your setup.

Standardized workflow observability that integrates directly with your existing monitoring stack, eliminating custom integrations and visibility gaps.
Signal analysis
We tracked this Windmill release closely because OTel metrics support addresses a real gap in workflow automation observability. Windmill v1.661.0 adds native OpenTelemetry metrics export, meaning you can now push Windmill's internal metrics directly into any OTel-compatible backend - Prometheus, Datadog, New Relic, Grafana Cloud, or custom collectors.
This isn't about adding dashboards or polished UI elements. This is infrastructure-level observability. You get standardized metric instrumentation following the OTel specification, which means your metrics will look and behave the same way whether they're coming from Windmill, your application code, or your infrastructure layers.
The implementation includes key metrics you actually need: execution latency, error rates, queue depths, job counts, and resource utilization. These are exportable via standard protocols - OTLP gRPC and HTTP, Prometheus scraping endpoints, and other configured exporters.
Workflow automation has a visibility problem. Unlike traditional applications where you instrument code directly, workflow systems like Windmill are black boxes by default. You see job results, but not why a job took 45 seconds instead of 5, or whether your queue is backed up because of resource constraints or slow downstream systems.
OTel metrics solve this by standardizing how Windmill exposes internal state. Instead of parsing logs or building custom metrics queries, you can wire Windmill into your existing observability stack immediately. If you're already using Prometheus for infrastructure monitoring, Grafana for dashboards, or Datadog for cross-platform visibility, Windmill metrics now fit directly into that ecosystem.
This becomes critical at scale. Once you're running hundreds of concurrent workflows, you need to know: Are executions slow because of resource contention? Is a particular job definition degrading in performance? Are certain error patterns clustered? OTel metrics give you the instrumentation hooks to answer these questions without rebuilding your monitoring.
The broader signal here is that Windmill is converging on observability standards. This reduces operational friction - fewer custom integrations, fewer gaps in visibility, less time spent building connectors between Windmill and your monitoring layer.
Getting started requires three basic steps. First, enable OTel metrics export in your Windmill configuration (likely a simple environment variable or config flag pointing to your collector endpoint). Second, ensure your monitoring backend is listening on the expected protocol and port. Third, start shipping metrics - no code changes needed.
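A minimal sketch of that first step, assuming Windmill honors the standard OpenTelemetry SDK environment variables (the exact variables Windmill reads are an assumption here - verify against the v1.661.0 release notes):

```shell
# Assumed: standard OTel SDK environment variables; Windmill's actual
# config flags may differ - check the release notes for the authoritative names.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4317"  # your collector endpoint
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"                        # OTLP over gRPC
export OTEL_SERVICE_NAME="windmill"                              # service name tagged on metrics
echo "$OTEL_EXPORTER_OTLP_ENDPOINT"
```

With the endpoint set, restarting the Windmill server would pick up the exporter configuration - no changes to individual scripts or flows.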
For operators already running Prometheus, the path is straightforward: expose Windmill's Prometheus metrics endpoint, add a new job to your Prometheus scrape configuration, and within minutes you're collecting metrics. Dashboards for Windmill-specific metrics will likely appear in the community - this is the standard OTel adoption pattern.
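That scrape job could look something like the following (the target host and metrics port are placeholders, not Windmill's documented defaults):

```yaml
# Hypothetical scrape job - substitute your actual Windmill host
# and the metrics port from your deployment.
scrape_configs:
  - job_name: "windmill"
    scrape_interval: 15s
    static_configs:
      - targets: ["windmill-server:8001"]
```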
If you're using Datadog, New Relic, or Grafana Cloud, each has native OTel ingest endpoints. Configure Windmill to export OTLP gRPC (the standard protocol), point it at the right endpoint with your API key, and metrics flow automatically. The payoff is instant: you're now monitoring Windmill execution latency, error rates, and queue depth in the same platform where you track everything else.
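For a vendor backend, the same standard OTel environment variables apply, with the endpoint and auth header swapped in (the hostname and header key below are placeholders - each vendor documents its own OTLP ingest endpoint and authentication header):

```shell
# Placeholder endpoint and header - substitute your vendor's documented
# OTLP ingest endpoint and API-key header name.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp.example-vendor.com:4317"
export OTEL_EXPORTER_OTLP_HEADERS="api-key=YOUR_API_KEY"
echo "$OTEL_EXPORTER_OTLP_HEADERS"
```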
Advanced operators can use this to build sophisticated alerting strategies. Correlate Windmill job latency with infrastructure CPU usage. Track whether certain error types cluster with specific job definitions or times of day. Build custom metrics on top of OTel exports. The instrumentation is now there - what you do with it is your choice.
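As a sketch of that kind of alerting, a Prometheus rule over a hypothetical latency histogram might look like this (the metric name `windmill_job_duration_seconds_bucket` is an assumption for illustration, not a documented Windmill metric):

```yaml
# Hypothetical alert rule - the metric name is assumed, not taken
# from Windmill's documentation.
groups:
  - name: windmill
    rules:
      - alert: WindmillJobLatencyHigh
        expr: |
          histogram_quantile(0.95,
            sum(rate(windmill_job_duration_seconds_bucket[5m])) by (le)
          ) > 30
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "p95 Windmill job latency above 30s for 10 minutes"
```

The same pattern extends to error-rate and queue-depth alerts once you confirm the metric names Windmill actually exports.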
The momentum in this space continues to accelerate.