GitHub Status

All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com

Git Operations: Operational
Webhooks: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational
Status key: Operational / Degraded Performance / Partial Outage / Major Outage / Maintenance
Mar 21, 2025
Resolved - This incident has been resolved.
Mar 21, 03:08 UTC
Update - Codespaces is operating normally.
Mar 21, 03:08 UTC
Update - We have seen full recovery in the last 15 minutes for Codespaces connections. GitHub Codespaces are healthy. For users who are still seeing connection problems, restarting the Codespace may help resolve the issue.
Mar 21, 03:08 UTC
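For users who prefer scripting that restart, a minimal sketch using the GitHub REST API is shown below; the codespace name and token are placeholders, and the Codespaces UI or gh CLI works equally well.

```python
# Minimal sketch (assumes a personal access token in GITHUB_TOKEN and a
# hypothetical codespace name): stop and restart a codespace via the REST API.
import os
import requests

headers = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
codespace = "my-codespace-name"  # placeholder; list yours with GET /user/codespaces

# Stop the codespace, then start it again to force a fresh connection.
requests.post(f"https://api.github.com/user/codespaces/{codespace}/stop",
              headers=headers).raise_for_status()
requests.post(f"https://api.github.com/user/codespaces/{codespace}/start",
              headers=headers).raise_for_status()
```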
Update - We are continuing to investigate issues with failed connections to Codespaces. We are seeing recovery over the last 10 minutes.
Mar 21, 02:53 UTC
Update - Customers may be experiencing issues connecting to Codespaces on GitHub.com. We are currently investigating the underlying issue.
Mar 21, 02:19 UTC
Investigating - We are investigating reports of degraded performance for Codespaces
Mar 21, 02:12 UTC
Mar 20, 2025
Resolved - This incident has been resolved.
Mar 20, 20:54 UTC
Update - We have resolved the issue for Pages. If you're still experiencing issues with your GitHub Pages site, please rebuild your site.
Mar 20, 20:53 UTC
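One way to trigger that rebuild without pushing a new commit is the Pages build endpoint of the REST API; a minimal sketch follows, with the repository name and token as placeholders.

```python
# Minimal sketch (hypothetical repository; token in GITHUB_TOKEN):
# request a fresh GitHub Pages build via POST /repos/{owner}/{repo}/pages/builds.
import os
import requests

headers = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
owner, repo = "octocat", "octocat.github.io"  # placeholders

resp = requests.post(
    f"https://api.github.com/repos/{owner}/{repo}/pages/builds",
    headers=headers,
)
resp.raise_for_status()
print(resp.json().get("status"))  # e.g. "queued"
```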
Update - Customers may not be able to create or make changes to their GitHub Pages sites. Customers who rely on webhook events from Pages builds might also experience a degraded experience.
Mar 20, 20:38 UTC
Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Mar 20, 20:33 UTC
Investigating - We are investigating reports of degraded performance for Pages
Mar 20, 20:04 UTC
Mar 19, 2025
Completed - The scheduled maintenance has been completed.
Mar 19, 05:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 18, 21:00 UTC
Scheduled - Migrations will be undergoing maintenance starting at 21:00 UTC on Tuesday, March 18 2025 with an expected duration of up to eight hours.

During this maintenance period, users will experience delays importing repositories into GitHub.

Once the maintenance period is complete, all pending imports will automatically proceed.

Mar 18, 19:28 UTC
Resolved - On March 18th, 2025, between 23:20 UTC and 00:15 UTC on March 19th, the Actions service experienced degradation, leading to run start delays. During the incident, about 0.3% of all workflow runs queued during that window failed to start, about 0.67% of all workflow runs were delayed by an average of 10 minutes, and about 0.16% of all workflow runs ultimately ended with an infrastructure failure. This was due to a networking issue with an underlying service provider. At 00:15 UTC the service provider mitigated their issue, and service was restored immediately for Actions. We are working to improve our resilience to downtime in this service provider to reduce the time to mitigate any future recurrences.
Mar 19, 00:55 UTC
Update - Actions is operating normally.
Mar 19, 00:55 UTC
Update - The provider has reported full mitigation of the underlying issue, and Actions has been healthy since approximately 00:15 UTC.
Mar 19, 00:55 UTC
Update - We are continuing to investigate issues with delayed or failed workflow runs with Actions. We are engaged with a third-party provider who is also investigating issues and has confirmed we are impacted.
Mar 19, 00:22 UTC
Update - Some customers may be experiencing delays or failures when queueing workflow runs
Mar 18, 23:45 UTC
Investigating - We are investigating reports of degraded performance for Actions
Mar 18, 23:45 UTC
Mar 18, 2025
Resolved - On March 18th, 2025, between 13:35 UTC and 17:45 UTC, some users of GitHub Copilot Chat in GitHub experienced intermittent failures when reading or writing messages in a thread, resulting in a degraded experience. The error rate peaked at 3% of requests to the service. This was due to an availability incident with a database provider. Around 16:15 UTC the upstream service provider mitigated their availability incident, and service was restored in the following hour.

We are working to improve our failover strategy for this database to reduce the time to mitigate similar incidents in the future.

Mar 18, 18:45 UTC
Update - We are seeing recovery and no new errors for the last 15 minutes.
Mar 18, 18:28 UTC
Update - We are still investigating infrastructure issues; our provider has acknowledged the issues and is working on a mitigation. Customers might still see errors when creating messages or new threads in Copilot Chat. Retries might be successful.
Mar 18, 17:42 UTC
Update - We are still investigating infrastructure issues and collaborating with providers. Customers might see some errors when creating messages or new threads in Copilot Chat. Retries might be successful.
Mar 18, 16:42 UTC
Update - We are experiencing issues with our underlying data store, which is causing a degraded experience for a small percentage of users of Copilot Chat on github.com.
Mar 18, 16:00 UTC
Investigating - We are currently investigating this issue.
Mar 18, 15:58 UTC
Resolved - On March 18, between 13:04 and 16:55 UTC, Actions workflows relying on hosted runners using the beta macOS 15 image experienced increased queue time waiting for available runners. An image update pushed the previous day included a performance regression. The slower performance caused longer average runtimes, exhausting our available Mac capacity for this image. This was mitigated by rolling back the image update. We have updated our capacity allocation to the beta and other Mac images and are improving monitoring in our canary environments to catch this kind of issue before it impacts customers.
Mar 18, 17:15 UTC
Update - We are seeing improvements in telemetry and are monitoring for full recovery.
Mar 18, 16:56 UTC
Update - We've applied a mitigation to fix the issues with queuing Actions jobs on the macos-15-arm64 hosted runner. We are monitoring.
Mar 18, 16:36 UTC
Update - The team continues to investigate issues with some Actions macos-15-arm64 Hosted jobs being queued for up to 15 minutes. We will continue providing updates on the progress towards mitigation.
Mar 18, 15:43 UTC
Investigating - We are currently investigating this issue.
Mar 18, 15:05 UTC
Mar 17, 2025
Resolved - Between March 17, 2025, 18:05 UTC and March 18, 2025, 09:50 UTC, GitHub.com experienced intermittent failures in web and API requests. These issues affected a small percentage of users (mostly related to pull requests and issues), with a peak error rate of 0.165% across all requests.

We identified a framework upgrade that caused kernel panics in our Kubernetes infrastructure as the root cause. We mitigated the incident by downgrading until we were able to disable a problematic feature. In response, we have investigated why the upgrade caused the unexpected issue, have taken steps to temporarily prevent it, and are working on longer term patch plans while improving our observability to ensure we can quickly react to similar classes of problems in the future.

Mar 17, 23:02 UTC
Update - We saw a spike in the error rate on Issues-related pages and API requests due to problems with restarts in our Kubernetes infrastructure, which at peak caused 0.165% of requests to see timeouts or errors on these API surfaces over a 15-minute period. At this time we see minimal impact and are continuing to investigate the cause of the issue.
Mar 17, 23:01 UTC
Update - We are investigating reports of issues with service(s): Issues. We're continuing to investigate. Users may see intermittent HTTP 500 responses when using Issues. Retrying the request may succeed.
Mar 17, 21:25 UTC
Update - We are investigating reports of issues with service(s): Issues. We're continuing to investigate. We will continue to keep users updated on progress towards mitigation.
Mar 17, 20:51 UTC
Update - We are investigating reports of issues with service(s): Issues. We will continue to keep users updated on progress towards mitigation.
Mar 17, 19:19 UTC
Investigating - We are investigating reports of degraded performance for Issues
Mar 17, 18:39 UTC
Mar 16, 2025

No incidents reported.

Mar 15, 2025

No incidents reported.

Mar 14, 2025

No incidents reported.

Mar 13, 2025

No incidents reported.

Mar 12, 2025
Resolved - On March 12, 2025, between 13:28 UTC and 14:07 UTC, the Actions service experienced degradation leading to run start delays. During the incident, about 0.6% of workflow runs failed to start, 0.8% of workflow runs were delayed by an average of one hour, and 0.1% of runs ultimately ended with an infrastructure failure. The issue stemmed from connectivity problems between the Actions services and certain nodes within one of our Redis clusters. The service began recovering once connectivity to the Redis cluster was restored at 13:41 UTC. These connectivity issues are typically not a concern because we can fail over to healthier replicas. However, due to an unrelated issue, there was a replication delay at the time of the incident, and failing over would have caused a greater impact on our customers. We are working on improving our resiliency and automation processes for this infrastructure to improve the speed of diagnosing and resolving similar issues in the future.
Mar 12, 14:07 UTC
Update - We have applied a mitigation for the affected Redis node, and are starting to see recovery with Action workflow executions.
Mar 12, 13:55 UTC
Investigating - We are investigating reports of degraded performance for Actions
Mar 12, 13:28 UTC
Mar 11, 2025

No incidents reported.

Mar 10, 2025

No incidents reported.

Mar 9, 2025

No incidents reported.

Mar 8, 2025
Resolved - On March 8, 2025, between 17:16 UTC and 18:02 UTC, GitHub Actions and Pages services experienced degraded performance leading to delays in workflow runs and Pages deployments. During this time, 34% of Actions workflow runs experienced delays, and a small percentage of runs using GitHub-hosted runners failed to start. Additionally, Pages deployments for sites without a custom Actions workflow (93% of them) did not run, preventing new changes from being deployed.

An unexpected data shape led to crashes in some of our pods. We mitigated the incident by excluding the affected pods and correcting the data that led to the crashes. We’ve fixed the source of the unexpected data shape and have improved the overall resilience of our service against such occurrences.

Mar 8, 18:11 UTC
Update - Actions is operating normally.
Mar 8, 18:11 UTC
Update - Actions run start delays are mitigated. Actions runs that failed will need to be re-run. Customers with impacted Pages updates will need to re-run their deployments.
Mar 8, 18:10 UTC
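A minimal sketch of re-running runs that failed during the incident window via the REST API; the repository, date filter, and token below are placeholders.

```python
# Minimal sketch (hypothetical repository; token in GITHUB_TOKEN):
# find workflow runs that failed on the incident day and re-run them.
import os
import requests

headers = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
owner, repo = "octocat", "example-repo"  # placeholders
base = f"https://api.github.com/repos/{owner}/{repo}/actions/runs"

# List failed runs created on or after the incident date (narrow the filter as needed).
runs = requests.get(
    base, headers=headers,
    params={"status": "failure", "created": ">=2025-03-08"},
).json()["workflow_runs"]

for run in runs:
    # POST /repos/{owner}/{repo}/actions/runs/{run_id}/rerun re-queues the run.
    requests.post(f"{base}/{run['id']}/rerun", headers=headers).raise_for_status()
```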
Update - Pages is operating normally.
Mar 8, 18:00 UTC
Update - We are investigating Actions run start delays; about 40% of runs are not starting within five minutes, and Pages deployments are impacted for GitHub-hosted runners.
Mar 8, 17:50 UTC
Investigating - We are investigating reports of degraded performance for Actions and Pages
Mar 8, 17:45 UTC
Mar 7, 2025
Resolved - On March 7, 2025, from 09:30 UTC to 11:07 UTC, we experienced a networking event that disrupted connectivity to our search infrastructure, impacting about 25% of search queries and indexing attempts. Searches for PRs, Issues, Actions workflow runs, Packages, Releases, and other products were impacted, resulting in failed requests or stale data. The connectivity issue self-resolved after 90 minutes. The backlog of indexing jobs was fully processed and saw recovery soon after, and queries to all indexes also saw an immediate return to normal throughput.

We are working with our cloud provider to identify the root cause and are researching additional layers of redundancy to reduce customer impact in the future for issues like this one. We are also exploring mitigation strategies for faster resolution.

Mar 7, 11:24 UTC
Update - We are continuing to investigate a degraded experience with searching for issues, pull requests, and Actions workflow runs.
Mar 7, 10:54 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Mar 7, 10:27 UTC
Update - Searches for issues and pull requests may be slower than normal and may time out for some users
Mar 7, 10:12 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Mar 7, 10:06 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Mar 7, 10:05 UTC
Investigating - We are currently investigating this issue.
Mar 7, 10:03 UTC