GitHub Status
All Systems Operational
Git Operations: Operational
API Requests: Operational
Webhooks: Operational
Visit www.githubstatus.com for more information: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational
Status key: Operational | Degraded Performance | Partial Outage | Major Outage | Maintenance
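The component states above are also available programmatically. A minimal sketch, assuming the standard Statuspage v2 endpoints that www.githubstatus.com serves (field names follow the Statuspage API):

```python
import requests  # third-party HTTP client, assumed installed

SUMMARY_URL = "https://www.githubstatus.com/api/v2/summary.json"

def print_github_status():
    """Print the overall status banner and each component's state."""
    summary = requests.get(SUMMARY_URL, timeout=10).json()
    print(summary["status"]["description"])  # e.g. "All Systems Operational"
    for component in summary["components"]:
        # status is one of: operational, degraded_performance,
        # partial_outage, major_outage, under_maintenance
        print(f'{component["name"]}: {component["status"]}')

if __name__ == "__main__":
    print_github_status()
```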
Past Incidents
Aug 2, 2024

No incidents reported today.

Aug 1, 2024

No incidents reported.

Jul 31, 2024
Resolved - This incident has been resolved.
Jul 31, 21:21 UTC
Update - The partner's service outage has been resolved, and our service has recovered.
Jul 31, 21:20 UTC
Update - We have identified that the issue is caused by a service outage at a partner, and we are working with the partner to resolve the incident.
Jul 31, 21:00 UTC
Update - We are investigating reports of errors in Billing functionality and on Billing pages.
Jul 31, 20:45 UTC
Investigating - We are currently investigating this issue.
Jul 31, 20:38 UTC
Resolved - This incident has been resolved.
Jul 31, 09:20 UTC
Update - Actions is operating normally.
Jul 31, 09:20 UTC
Update - We are continuing to see improvements in queuing and running Actions jobs and are monitoring for full recovery.
Jul 31, 09:13 UTC
Update - We've applied a mitigation to fix the issues with queuing and running Actions jobs. We are seeing improvements in telemetry and are monitoring for full recovery.
Jul 31, 08:28 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Jul 31, 08:07 UTC
Update - We are investigating reports of degraded performance in some Redis clusters.
Jul 31, 08:02 UTC
Investigating - We are currently investigating this issue.
Jul 31, 07:59 UTC
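During queueing incidents like this one, job wait times can be estimated from the REST API: each workflow run carries created_at and run_started_at timestamps. A minimal sketch; the owner, repo, and token below are placeholders:

```python
from datetime import datetime
import requests

OWNER, REPO = "your-org", "your-repo"  # placeholders
TOKEN = "ghp_..."                      # placeholder personal access token

def recent_queue_delays():
    """Print how long recent workflow runs waited before starting."""
    resp = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        params={"per_page": 20},
        timeout=10,
    )
    resp.raise_for_status()
    for run in resp.json()["workflow_runs"]:
        if not run.get("run_started_at"):
            continue  # still queued
        created = datetime.fromisoformat(run["created_at"].replace("Z", "+00:00"))
        started = datetime.fromisoformat(run["run_started_at"].replace("Z", "+00:00"))
        print(f'{run["name"]}: queued {(started - created).total_seconds():.0f}s')
```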
Resolved - This incident has been resolved.
Jul 31, 03:37 UTC
Update - The team is currently investigating a fix for issues with Codespaces. Impact continues to be limited to non-web clients. Customers receiving errors on desktop clients are encouraged to use the web client as a temporary workaround.
Jul 31, 02:55 UTC
Update - We continue investigating issues with Codespaces in multiple regions. Impact is limited to non-web clients. Customers receiving errors on desktop clients are encouraged to use the web client as a temporary workaround.
Jul 31, 01:49 UTC
Update - We are investigating issues with Codespaces in multiple regions. Some users may not be able to connect to their Codespaces at this time. We will update you on mitigation progress.
Jul 31, 01:23 UTC
Update - We are investigating reports of degraded performance for Codespaces.
Jul 31, 00:53 UTC
Investigating - We are investigating reports of degraded performance for Codespaces.
Jul 31, 00:52 UTC
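The suggested workaround (use the web client) can be scripted: the REST API lists a user's codespaces, and each entry is expected to include a web_url for the browser editor. A sketch; the token is a placeholder and needs the codespace scope:

```python
import requests

TOKEN = "ghp_..."  # placeholder token with the codespace scope

def list_codespace_web_urls():
    """Print each codespace's name, state, and browser URL."""
    resp = requests.get(
        "https://api.github.com/user/codespaces",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    for cs in resp.json()["codespaces"]:
        print(f'{cs["name"]} ({cs["state"]}): {cs["web_url"]}')
```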
Jul 30, 2024
Resolved - On July 30th, 2024, between 13:25 UTC and 18:15 UTC, customers using Larger Hosted Runners may have experienced extended queue times for jobs that depended on a Runner with VNet Injection enabled in a virtual network within the East US 2 region. Runners without VNet Injection, or with VNet Injection in other regions, were not affected. The issue was caused by a third-party provider outage that blocked a large percentage of VM allocations in the East US 2 region. Once the underlying provider issue was resolved, job queue times returned to normal. We are exploring support for customers to define VNet Injection Runners with VNets across multiple regions, to minimize the impact of an outage in a single region.
Jul 30, 22:10 UTC
Update - The mitigation for larger hosted runners has continued to be stable and all job delays are less than 5 minutes. We will be resolving this incident.
Jul 30, 22:09 UTC
Update - We are continuing to hold this incident open while the team ensures that the mitigation put in place is stable.
Jul 30, 21:44 UTC
Update - Larger hosted runners job starts are stable and starting within expected timeframes. We are monitoring job start times in preparation to resolve this incident. No enqueued larger hosted runner jobs were dropped during this incident.
Jul 30, 21:00 UTC
Update - Over the past 30 minutes, all larger hosted runner jobs have started in less than 5 minutes. We are continuing to investigate delays in larger hosted runner job starts.
Jul 30, 20:17 UTC
Update - We are still investigating delays in customers' larger hosted runner job starts. Nearly all jobs are starting in under 5 minutes; only one customer's larger hosted runner job was delayed by more than 5 minutes in the past 30 minutes.
Jul 30, 19:40 UTC
Update - We are seeing improvements in job start times for larger hosted runners. In the last 30 minutes, no customer jobs have been delayed by more than 5 minutes. We will continue monitoring for full recovery.
Jul 30, 19:04 UTC
Update - We are seeing run delays for larger hosted runners for a limited number of customers. We are deploying mitigations to address these delays.
Jul 30, 18:23 UTC
Investigating - We are currently investigating this issue.
Jul 30, 18:19 UTC
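Queue delays for a specific runner class, such as larger hosted runners, can also be checked per job: the jobs endpoint exposes each job's labels along with created_at and started_at. A sketch with placeholder names; the runner label is hypothetical:

```python
from datetime import datetime
import requests

OWNER, REPO = "your-org", "your-repo"  # placeholders
TOKEN = "ghp_..."                      # placeholder token
RUNNER_LABEL = "my-larger-runner"      # hypothetical larger-runner label

def _parse(ts):
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def job_queue_times(run_id):
    """Print queue time for jobs in a run that target the given label."""
    resp = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs/{run_id}/jobs",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    for job in resp.json()["jobs"]:
        if RUNNER_LABEL in job.get("labels", []) and job.get("started_at"):
            wait = (_parse(job["started_at"]) - _parse(job["created_at"])).total_seconds()
            print(f'{job["name"]}: queued {wait:.0f}s')
```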
Resolved - This incident has been resolved.
Jul 30, 14:22 UTC
Update - We are starting to see recovery for this issue and are monitoring things closely. We will keep this incident open for now until we are fully confident in complete recovery.
Jul 30, 14:16 UTC
Update - We have correlated the impact on Codespaces with an outage at a third-party service. We are continuing to investigate ways to reduce impact on our customers while we wait for that outage to be resolved.
Jul 30, 14:09 UTC
Update - We are seeing increased failure rates for creation and resumption of Codespaces in the UK South and West Europe regions.

We are working to resolve this issue and will update again soon.

Jul 30, 13:47 UTC
Investigating - We are investigating reports of degraded performance for Codespaces.
Jul 30, 13:36 UTC
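Since creation failures like these are region-scoped and transient, retrying with exponential backoff on server errors is a reasonable client-side mitigation. A sketch using the "create a codespace in a repository" REST endpoint, with placeholder values:

```python
import time
import requests

TOKEN = "ghp_..."  # placeholder token with the codespace scope

def create_codespace_with_retry(owner, repo, ref="main", attempts=5):
    """Retry codespace creation with exponential backoff on 5xx errors."""
    for attempt in range(attempts):
        resp = requests.post(
            f"https://api.github.com/repos/{owner}/{repo}/codespaces",
            headers={
                "Authorization": f"Bearer {TOKEN}",
                "Accept": "application/vnd.github+json",
            },
            json={"ref": ref},  # assumed branch name
            timeout=30,
        )
        if resp.status_code < 500:
            resp.raise_for_status()  # surface 4xx client errors immediately
            return resp.json()
        time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, 8s, ...
    resp.raise_for_status()
```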
Jul 29, 2024

No incidents reported.

Jul 28, 2024

No incidents reported.

Jul 27, 2024

No incidents reported.

Jul 26, 2024

No incidents reported.

Jul 25, 2024
Resolved - This incident has been resolved.
Jul 25, 21:05 UTC
Investigating - We are currently investigating this issue.
Jul 25, 21:04 UTC
Resolved - On July 25th, 2024, between 15:30 and 19:10 UTC, the Audit Log service experienced degraded write performance. During this period, Audit Log reads remained unaffected, but customers would have encountered delays in the availability of their current audit log data. There was no data loss as a result of this incident.

The issue was isolated to a single partition within the Audit Log datastore. Upon restarting the primary partition, we observed an immediate recovery and a subsequent increase in successful writes. The backlog of log messages was fully processed by approximately 00:40 UTC on July 26th.

We are working with our datastore team to ensure mitigation is in place to prevent future impact. Additionally, we will investigate whether there are any actions we can take on our end to reduce the impact and time to mitigate in the future.

Jul 25, 19:20 UTC
Update - We have applied a fix and are seeing recovery. (Point of clarification: Impact was constrained to Audit Log Events, not all categories of events.)
Jul 25, 19:16 UTC
Investigating - We are currently investigating this issue.
Jul 25, 18:44 UTC
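Write delays like this one show up as a growing gap between the current time and the newest visible audit log event. A sketch against the organization audit log API (available on GitHub Enterprise Cloud); the org name and token are placeholders, and the @timestamp field is assumed to be epoch milliseconds:

```python
import time
import requests

ORG = "your-org"   # placeholder
TOKEN = "ghp_..."  # placeholder token with audit log read access

def audit_log_lag_seconds():
    """Return seconds between now and the newest visible audit log event."""
    resp = requests.get(
        f"https://api.github.com/orgs/{ORG}/audit-log",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        params={"per_page": 1, "order": "desc"},
        timeout=10,
    )
    resp.raise_for_status()
    newest = resp.json()[0]
    return time.time() - newest["@timestamp"] / 1000  # @timestamp: epoch ms
```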
Jul 24, 2024

No incidents reported.

Jul 23, 2024
Resolved - This incident has been resolved.
Jul 23, 22:38 UTC
Update - We have mitigated the issue with Copilot Chat returning failures in some regions. Functionality has recovered for all Copilot Chat users.
Jul 23, 22:25 UTC
Update - We are seeing failures in Copilot Chat for users in some regions; about 20% of Copilot Chat requests are failing.
Jul 23, 21:52 UTC
Investigating - We are currently investigating this issue.
Jul 23, 21:40 UTC
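Worth noting for client-side handling of partial failure rates like this: if roughly 20% of requests fail and failures are independent (an assumption), a single retry cuts the observed failure rate to about 0.2 × 0.2 = 4%, and a second retry to under 1%.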
Jul 22, 2024

No incidents reported.

Jul 21, 2024

No incidents reported.

Jul 20, 2024

No incidents reported.

Jul 19, 2024
Resolved - On July 18, 2024, from 22:37 UTC to 04:47 UTC the following day, a service belonging to one of our providers experienced degradation, causing errors in Codespaces, particularly when starting the VSCode server and installing extensions. The error rate reached nearly 100%, resulting in a global outage of Codespaces. During this time, users worldwide were unable to connect to VSCode. However, other clients that do not rely on the VSCode server, such as GitHub CLI, remained functional.

We are actively working to enhance our detection and mitigation processes to improve our response time to similar issues in the future. Additionally, we are exploring ways to operate Codespaces in a more degraded state when one of our providers encounters issues, to prevent a complete outage.

Jul 19, 04:47 UTC
Update - Codespaces is still recovering, but the issue is trending positive. If impacted, please stop and start your Codespace: https://docs.github.com/en/codespaces/developing-in-a-codespace/stopping-and-starting-a-codespace?tool=webui
Jul 19, 03:54 UTC
Update - We are still investigating issues with Codespaces. Some users may not be able to connect to their Codespaces at this time. We will update you on mitigation progress.
Jul 19, 03:17 UTC
Update - We are investigating issues with Codespaces. Some users may not be able to connect to their Codespaces at this time. We will update you on mitigation progress.
Jul 19, 02:43 UTC
Investigating - We are investigating reports of degraded performance for Codespaces.
Jul 19, 02:10 UTC
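The stop-and-start workaround above can also be scripted: the REST API exposes stop and start endpoints for a user's codespace. A sketch with a placeholder token:

```python
import requests

TOKEN = "ghp_..."  # placeholder token with the codespace scope

def restart_codespace(name):
    """Stop, then start, the named codespace (the suggested workaround)."""
    headers = {
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    }
    base = f"https://api.github.com/user/codespaces/{name}"
    requests.post(f"{base}/stop", headers=headers, timeout=60).raise_for_status()
    requests.post(f"{base}/start", headers=headers, timeout=60).raise_for_status()
```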
Resolved - Beginning on July 18, 2024 at 22:38 UTC, network issues within an upstream provider led to degraded experiences across Actions, Copilot, and Pages services.

Up to 50% of Actions workflow jobs were stuck in the queuing state, including Pages deployments. Users were also not able to enable Actions or register self-hosted runners. This was caused by an unreachable backend resource in the Central US region. That resource is configured for geo-replication, but the replication configuration prevented resiliency when one region was unavailable. Updating the replication configuration mitigated the impact by allowing successful requests while one region was unavailable. By July 19 at 00:12 UTC, users saw some improvement in Actions jobs and full recovery of Pages. Standard hosted runners and self-hosted Actions workflows were healthy by 02:10 UTC, and large hosted runners fully recovered at 02:38 UTC.

Copilot requests were also impacted with up to 2% of Copilot Chat requests and 0.5% of Copilot Completions requests resulting in errors. Chat requests were routed to other regions after 20 minutes while Completions requests took 45 minutes to reroute.

We have identified improvements to detection to reduce the time to engage all impacted on-call teams and improvements to our replication configuration and failover workflows to be more resilient to unhealthy dependencies and reduce our time to failover and mitigate customer impact.

Jul 19, 02:38 UTC
Update - Actions is operating normally.
Jul 19, 02:38 UTC
Update - We have continued to apply mitigations to work around the outage. Customers may still experience run start delays for larger runners.
Jul 19, 02:25 UTC
Update - We've applied a mitigation to work around the outage. Customers may still experience run start delays.
Jul 19, 01:50 UTC
Update - We are making progress failing over to a different region to mitigate an outage.
Jul 19, 01:04 UTC
Update - We continue to mitigate an outage by failing over to a different region.
Jul 19, 00:30 UTC
Update - Pages is operating normally.
Jul 19, 00:24 UTC
Update - We are working to mitigate an outage by failing over to a different region.
Jul 18, 23:57 UTC
Update - Pages is experiencing degraded performance. We are continuing to investigate.
Jul 18, 23:23 UTC
Update - Some Actions customers may experience delays or failures in their runs. We are continuing to investigate.
Jul 18, 23:22 UTC
Investigating - We are investigating reports of degraded performance for Actions.
Jul 18, 22:47 UTC
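The root cause here was a replication configuration that could not tolerate one region being unreachable. As a generic illustration only (not GitHub's internal design), a client-side failover loop over regional replicas looks like this, with hypothetical endpoints:

```python
import requests

# Hypothetical regional replicas; the incident's actual backend is internal.
REPLICAS = [
    "https://centralus.example.com",
    "https://eastus2.example.com",
]

def get_with_failover(path):
    """Try each regional replica in order, failing over past unreachable regions."""
    last_error = None
    for base in REPLICAS:
        try:
            resp = requests.get(f"{base}{path}", timeout=5)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as err:
            last_error = err  # region unhealthy; try the next replica
    raise last_error
```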