GitHub introduces new metrics to track active and passive Copilot code review users, giving developers and organizations clearer insight into how AI assistance is actually used.

Copilot usage metrics transform AI assistance from a black box into a measurable productivity factor, enabling data-driven decisions about AI tool investment and developer training.
Signal analysis
GitHub has launched comprehensive usage metrics for Copilot, providing organizations with visibility into how AI assistance impacts code review workflows. The new dashboard surfaces data on suggestion acceptance rates, lines of AI-generated code that survive review, and time-to-merge comparisons between AI-assisted and traditional pull requests. This data enables organizations to quantify Copilot's ROI beyond anecdotal productivity reports.
The metrics are available in the GitHub Enterprise admin panel under Copilot > Analytics. Three main dashboards provide different perspectives: individual developer patterns, team aggregate trends, and organizational comparisons. Each view includes filtering by repository, time period, and PR complexity. Historical data is available from the date Copilot was enabled for each organization.
Key tracked metrics include the suggestion acceptance rate (the percentage of Copilot suggestions developers keep), the code survival rate (the percentage of AI-generated code that passes review unchanged), the revision rate (how often AI code requires modification during review), and time metrics (median time from suggestion to commit, and PR open-to-merge times). Together these create a multi-dimensional view of AI effectiveness.
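To make those definitions concrete, here is a minimal Python sketch that computes all four from per-PR records. GitHub has not published a raw event schema, so every field name below is an illustrative assumption; the formulas simply mirror the definitions above.

```python
from statistics import median

def summarize_copilot_metrics(prs):
    """Aggregate the four headline metrics from hypothetical per-PR records.
    All field names are assumptions, not GitHub's actual schema."""
    shown = sum(p["suggestions_shown"] for p in prs)
    accepted = sum(p["suggestions_accepted"] for p in prs)
    ai_lines = sum(p["ai_lines_accepted"] for p in prs)
    unchanged = sum(p["ai_lines_merged_unchanged"] for p in prs)
    revised = sum(p["ai_lines_revised_in_review"] for p in prs)
    return {
        "acceptance_rate": accepted / shown,         # suggestions developers keep
        "code_survival_rate": unchanged / ai_lines,  # AI lines passing review as-is
        "revision_rate": revised / ai_lines,         # AI lines modified in review
        "median_hours_to_merge": median(p["hours_open_to_merge"] for p in prs),
    }
```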
Engineering managers gain quantitative data for AI investment decisions. Instead of relying on developer sentiment surveys, managers can demonstrate concrete productivity gains, or identify where Copilot isn't adding value. This data is essential for renewal decisions and for expanding Copilot rollout to additional teams. Boards and executives increasingly expect metrics-backed justification for AI tool spend.
Team leads can identify training opportunities. If a team shows low acceptance rates but high revision rates, developers may be accepting low-quality suggestions or struggling to evaluate suggestions effectively. This signals a need for Copilot best-practices training. Conversely, teams with high acceptance and high survival rates are using Copilot effectively and can share their patterns.
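As a sketch of how a team lead might turn those rates into a triage signal (the thresholds and categories are illustrative assumptions, not GitHub guidance):

```python
def triage_team(acceptance_rate, revision_rate, survival_rate,
                high=0.70, low=0.40):
    """Rough triage of a team's Copilot usage pattern.
    Thresholds are illustrative assumptions, not GitHub guidance."""
    if acceptance_rate < low and revision_rate > high:
        return "training"   # accepted code gets reworked: coach on evaluating suggestions
    if acceptance_rate > high and survival_rate > high:
        return "exemplar"   # effective usage worth sharing across teams
    return "baseline"       # no strong signal yet; keep watching trends
```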
Individual developers benefit from self-awareness about their AI usage patterns. Seeing that you accept 80% of suggestions but 30% of them get revised during review indicates an opportunity to be more critical at acceptance time. The metrics create feedback loops that weren't possible when AI assistance was invisible to analytics.
Access requires GitHub Enterprise license with Copilot for Business or Enterprise. Navigate to your organization settings, select Copilot in the sidebar, then Analytics. If you don't see the Analytics tab, verify your admin role and that your organization has been opted into the beta. General availability is expected Q3 2026 for all Enterprise customers.
The most actionable metric is the code survival rate, calculated as the percentage of Copilot-generated lines that exist unchanged in the merged PR. High survival rates (above 85%) indicate suggestions that add value without creating review burden. Lower rates suggest the AI is producing starting points that require significant human refinement. Compare this rate across teams to identify variance.
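A simplified Python sketch of the calculation for a single PR, assuming you already know which merged lines originated from Copilot. GitHub derives that attribution from its internal suggestion events, so the naive line-matching here is a stand-in:

```python
def survival_rate(ai_lines, merged_lines):
    """Share of AI-suggested lines that appear unchanged in the merged PR.
    A simplification: real attribution must track line identity through
    edits and rebases, which requires GitHub's internal suggestion events."""
    if not ai_lines:
        return 0.0
    merged = set(merged_lines)
    return sum(line in merged for line in ai_lines) / len(ai_lines)

# Example: both AI-suggested lines survive review untouched -> rate 1.0.
rate = survival_rate(
    ["def parse(raw):", "    return json.loads(raw)"],
    ["import json", "def parse(raw):", "    return json.loads(raw)"],
)
```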
For time metrics, focus on delta comparisons rather than absolute numbers. Compare median time-to-merge for PRs with high Copilot usage versus low usage within the same repository. Repository characteristics affect merge times, so cross-repo comparisons are less meaningful. A strong signal is high-Copilot PRs merging at least 20% faster within the same repo.
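A minimal sketch of that within-repo comparison, assuming each PR record carries a hypothetical copilot_share field indicating how much of its diff was AI-generated:

```python
from statistics import median

def within_repo_merge_delta(prs, split=0.5):
    """Median time-to-merge delta between high- and low-Copilot PRs in ONE
    repo. The copilot_share field and the 0.5 split are assumptions."""
    high = [p["hours_to_merge"] for p in prs if p["copilot_share"] >= split]
    low = [p["hours_to_merge"] for p in prs if p["copilot_share"] < split]
    if not high or not low:
        return None  # too little data for a within-repo comparison
    # 0.2 means high-Copilot PRs merge 20% faster; negative means slower.
    return 1 - median(high) / median(low)
```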
LinearB, Jellyfish, and similar developer analytics platforms have tracked PR metrics for years but lacked Copilot-specific data. These platforms show overall engineering metrics without distinguishing AI-assisted work. GitHub's native metrics provide AI-specific segmentation that third-party tools cannot replicate without GitHub's internal data about suggestion events.
Integration between GitHub's metrics and third-party platforms is limited in this initial release. Organizations using LinearB for engineering insights will need to manually correlate Copilot data with their existing dashboards. GitHub has announced API access for metrics is planned for GA release, which will enable third-party platform integration.
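Until that API ships, any client code is speculative. If the planned endpoints follow GitHub's existing REST conventions, a correlation job might start with something like this hypothetical fetch; the path and response shape are assumptions, not a published API:

```python
import requests

def fetch_copilot_review_metrics(org: str, token: str) -> dict:
    """Speculative client: GitHub has only announced that a metrics API is
    planned for GA, so this endpoint path and response are assumptions."""
    resp = requests.get(
        f"https://api.github.com/orgs/{org}/copilot/review-metrics",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # then join against LinearB/Jellyfish exports by repo and date
```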
For organizations already invested in developer analytics platforms, the decision is whether to maintain separate AI metrics or wait for integration. The metrics GitHub provides are unique, since no third party has visibility into suggestion events, so maintaining a separate view is justified for AI-focused analysis while using existing platforms for broader engineering metrics.
GitHub's roadmap indicates individual developer dashboards will launch in Q4 2026, allowing developers to see their own patterns without admin access. This democratizes the data while maintaining organizational rollups for management. Expect gamification risks: developers may optimize for acceptance rates rather than for effective AI collaboration.
Quality metrics beyond survival rate are planned for 2027. GitHub is exploring integrations with code quality tools to correlate Copilot usage with bug rates, security issues, and technical debt. This would provide the missing link between AI productivity and code quality outcomes that organizations need for comprehensive evaluation.
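That integration does not exist yet, but the analysis it would enable is easy to anticipate. A hedged sketch of the kind of correlation an organization might run once both data sets are available, with all field names hypothetical:

```python
from statistics import correlation  # Pearson's r, Python 3.10+

def copilot_quality_signal(repos):
    """Correlate per-repo Copilot usage share with post-merge defect density.
    Field names are hypothetical. A positive r would mean AI-heavy repos ship
    more bugs, a negative r fewer; either way, correlation is not causation."""
    usage = [r["copilot_share"] for r in repos]
    bugs = [r["bugs_per_kloc"] for r in repos]
    return correlation(usage, bugs)
```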
Competitive pressure will force similar offerings from JetBrains and VS Code extensions. Amazon CodeWhisperer's dashboard has existed longer but with less sophisticated metrics. Expect the developer tools market to converge on standard AI usage metrics similar to how CI/CD tools standardized on deployment frequency and lead time measures.