
Cloudflare Down – 6-Hour Massive Global Service Outage Leaves Customers Unreachable From the Internet


Cloudflare experienced a significant six-hour global service outage on February 20, 2026, causing major disruptions for customers utilizing its Bring Your Own IP (BYOIP) services.

The incident, which began at 17:48 UTC and lasted for six hours and seven minutes, unintentionally withdrew customer BGP routes from the Internet, rendering numerous services and applications unreachable.

The company confirmed the disruption was caused entirely by an internal configuration update, not a cyberattack or malicious activity. The incident affected 25 percent of all BYOIP prefixes globally and triggered HTTP 403 errors on the website of the 1.1.1.1 public recursive DNS resolver.

Technical Breakdown of the Addressing API Failure

The root cause of the outage was traced to an internal bug in Cloudflare’s Addressing API, introduced during the deployment of an automated cleanup sub-task.

This task was designed to replace manual removal processes for BYOIP prefixes as part of the company’s “Code Orange: Fail Small” resilience initiative.

Engineers deployed a system to periodically check for and remove pending objects from the network. However, the system executed an API query that passed the pending_delete flag with no assigned value, and the server interpreted the empty string as matching everything, queueing all returned BYOIP prefixes for deletion rather than only those slated for removal (see the sketch below).
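A minimal sketch of this failure mode, assuming typical query-string handling; all names here are hypothetical and do not come from Cloudflare’s code. The point is how an empty flag value slips past a truthiness check and silently widens the filter to match every prefix:

```python
# Hypothetical reconstruction of the failure mode; none of these names
# are from Cloudflare's actual codebase.
PREFIXES = [
    {"cidr": "198.51.100.0/24", "pending_delete": False},
    {"cidr": "203.0.113.0/24", "pending_delete": True},
]

def list_prefixes(pending_delete=None):
    # BUG: "" is falsy, so a flag sent with no value (?pending_delete=)
    # behaves exactly like "no filter supplied" and returns every prefix.
    if not pending_delete:
        return PREFIXES
    want = pending_delete == "true"
    return [p for p in PREFIXES if p["pending_delete"] is want]

def list_prefixes_strict(pending_delete=None):
    # FIX: validate against an explicit schema and reject unknown values,
    # rather than guessing what an empty string means.
    if pending_delete is None:
        return PREFIXES
    if pending_delete not in ("true", "false"):
        raise ValueError(f"invalid pending_delete value: {pending_delete!r}")
    want = pending_delete == "true"
    return [p for p in PREFIXES if p["pending_delete"] is want]

print(len(list_prefixes("")))             # 2 -- every prefix queued for deletion
print(len(list_prefixes_strict("true")))  # 1 -- only the prefix actually pending
# list_prefixes_strict("") raises ValueError instead of matching everything
```

The strict variant mirrors the kind of schema standardization Cloudflare says it will adopt: an ambiguous value fails loudly instead of defaulting to the widest possible match.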

This coding oversight systematically deleted approximately 1,100 BYOIP prefixes and their dependent service bindings before an engineer manually terminated the process.

The impacted connections immediately fell into a state known as BGP path hunting, in which routers respond to a route withdrawal by exploring progressively longer stale paths to the destination before converging on unreachability; end-user connections hang through this process until they time out and fail, as the toy example below illustrates.
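A toy illustration of that effect; this is not real routing code, just the shape of the behavior:

```python
# Toy sketch of BGP path hunting; an illustration, not a BGP implementation.
# The router's RIB holds paths to the withdrawn prefix, best first.
rib = [
    ["R1", "origin"],              # best: direct path
    ["R1", "R2", "origin"],        # stale alternative learned from a peer
    ["R1", "R2", "R3", "origin"],  # even longer stale alternative
]

# The origin has withdrawn the prefix. The router hunts through each stale
# path in turn; every attempt fails, so it withdraws that path and tries
# the next longer one until nothing is left.
while rib:
    print("trying path:", " -> ".join(rib.pop(0)), "... withdrawn")
print("no path remains: connections hang and time out")
```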

The blast radius extended across multiple core products that rely on BYOIP configurations for Internet advertisement:

Core CDN and Security Services: Traffic failed to route to Cloudflare, resulting in connection timeouts for advertised websites.
Spectrum: Applications operating on BYOIP completely failed to proxy traffic.
Dedicated Egress: Users leveraging BYOIP or dedicated IPs could not send outbound traffic to their destinations.
Magic Transit: End users connecting to protected applications experienced complete connection failures and timeouts.

Recovery Efforts and Planned Remediation

Recovery was severely delayed because the mass withdrawal affected customer prefixes differently, requiring intensive and varied data recovery operations.

While some users maintained the ability to self-remediate by toggling their advertisements back on via the Cloudflare dashboard, roughly 300 prefixes suffered complete removal of their service bindings.
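For customers in the self-remediation group, re-enabling advertisement is also possible programmatically. The sketch below follows the shape of Cloudflare’s public IP Address Management API, but treat the exact path and payload as an assumption and confirm them against the current API documentation:

```python
# Minimal sketch of programmatic re-advertisement. The endpoint shape is
# an assumption based on Cloudflare's IP Address Management API docs;
# verify the exact path and payload before use.
import requests

ACCOUNT_ID = "YOUR_ACCOUNT_ID"  # placeholder
PREFIX_ID = "YOUR_PREFIX_ID"    # placeholder
API_TOKEN = "YOUR_API_TOKEN"    # placeholder

resp = requests.patch(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}"
    f"/addressing/prefixes/{PREFIX_ID}/bgp/status",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"advertised": True},  # toggle the BGP advertisement back on
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```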

These severely impacted accounts required manual restoration by engineers who had to push global configuration updates to reapply settings across every machine on the edge network.

To prevent future catastrophic deployments, Cloudflare is accelerating several critical architecture changes under its Code Orange mandate.

The engineering team plans to standardize the API schema to prevent flag-interpretation errors, implement circuit breakers that detect abnormally fast BGP prefix deletions (sketched below), and establish health-mediated operational state snapshots that separate customer configurations from production rollouts.
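A minimal sketch of one way such a circuit breaker could work, assuming a sliding-window rate threshold; the class name and limits here are illustrative, not Cloudflare’s design:

```python
# Sketch of a deletion-rate circuit breaker with a sliding time window.
import time
from collections import deque

class DeletionCircuitBreaker:
    def __init__(self, max_deletions=50, window_seconds=60.0):
        self.max_deletions = max_deletions
        self.window = window_seconds
        self.events = deque()
        self.tripped = False

    def allow(self):
        """Record one deletion attempt; return False once the rate is abnormal."""
        now = time.monotonic()
        self.events.append(now)
        # Drop events that have aged out of the sliding window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) > self.max_deletions:
            self.tripped = True  # latch open: require human intervention
        return not self.tripped

breaker = DeletionCircuitBreaker(max_deletions=50, window_seconds=60.0)
for prefix in ["198.51.100.0/24"] * 100:  # a runaway cleanup job
    if not breaker.allow():
        print("circuit open: halting deletions and paging an operator")
        break
    # delete_prefix(prefix) would run here in the real job
```

The key design choice is that the breaker latches open: once tripped, the job cannot quietly resume, which bounds the blast radius of a bug like the one in this incident.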

Time (UTC) – Incident Event Description
17:56 – The broken sub-process executes, withdrawing prefixes and triggering the outage.
18:46 – An engineer identifies the flawed task, disables regular execution, and begins mitigation.
19:19 – Dashboard self-remediation becomes available, allowing some customers to restore service.
23:03 – Global machine configuration deployment completes, fully restoring the remaining removed prefixes.

Cloudflare concluded its official incident report with a direct apology to its users and the global internet community for the February 20 disruption. The company publicly acknowledged that the widespread outage undermined its core promise of delivering a highly resilient network.



Source: Cyber Security News

Published: 22.02.2026 00:56
