Cloudflare Global "Digital Blackout": Official Report Reveals Details of the Outage on November 18
On November 18, 2025, at 06:58 UTC, millions of websites and services experienced connection failures, primarily due to an internal error at the internet infrastructure provider Cloudflare. The company later released a full incident report, giving a transparent account of how the failure occurred, how it was handled, and what preventive measures it will take going forward.
The Outage Unfolds: Service Disruptions in Multiple Regions Worldwide
Cloudflare's outage began on November 18 at 06:58 UTC (2:58 PM Taiwan time) and affected many websites that rely on its CDN and DNS services, including major commercial platforms, news media, and web applications, which could not function normally. The disruption lasted nearly 40 minutes; some websites in certain regions became completely inaccessible, and users could not interact with backend servers via API.
The company noted that this was a network-level interruption affecting its global service infrastructure, not an issue confined to a single data center or region.
Root Cause: A BGP Configuration Error Leads to Disaster
Cloudflare further explained that the disruption was caused by a misconfigured change to its Border Gateway Protocol (BGP) settings. BGP is one of the core protocols that steer internet traffic, telling networks around the world how to route to a given destination.
The change was intended to update an internal route advertisement policy to improve infrastructure efficiency, but an error in the manually pushed configuration caused certain Cloudflare IP prefixes to no longer be announced to other ISPs via BGP, effectively making the routes to these services “disappear” from the internet.
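To make the failure mode concrete, the toy Python sketch below models a set of advertised prefixes and shows how an overly broad policy change can withdraw them all at once. The prefixes and filter logic are hypothetical illustrations based on the report's description, not Cloudflare's actual configuration or tooling.

```python
from ipaddress import ip_network

# Prefixes the network intends to announce to peer ISPs
# (hypothetical documentation addresses, not Cloudflare's real space).
intended_announcements = {
    ip_network("198.51.100.0/24"),
    ip_network("203.0.113.0/24"),
}

def apply_policy(announcements, withdraw_filter):
    """Return the prefixes still advertised after a policy change.

    `withdraw_filter` decides which prefixes the new policy stops
    announcing; an overly broad filter silently removes routes that
    peers need in order to reach the service.
    """
    return {p for p in announcements if not withdraw_filter(p)}

# Intended change: stop announcing only one experimental prefix.
narrow_filter = lambda p: p == ip_network("203.0.113.0/24")

# Erroneous change: the filter matches everything, so no prefixes are
# announced and the routes "disappear" from the rest of the internet.
broad_filter = lambda p: True

print(apply_policy(intended_announcements, narrow_filter))  # one prefix still reachable
print(apply_policy(intended_announcements, broad_filter))   # empty set: nothing reachable
```

Running the sketch, the intended change leaves one prefix advertised, while the erroneous filter leaves none, which is roughly what it means for routes to “disappear” from the internet.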
The error was not caught in real time by the internal automated deployment tooling, so it was pushed broadly across multiple regions before its impact became apparent.
Emergency Recovery: Quickly Rolling Back the Erroneous Settings
The Cloudflare engineering team detected the anomaly within minutes and immediately initiated recovery procedures. They began rolling back the erroneous BGP policy settings at around 07:15 UTC and completed the rollback at 07:28 UTC, by which point most services had returned to normal operation.
In total, the outage lasted roughly 30 to 40 minutes; according to the timeline Cloudflare provided, services were fully restored at 07:28 UTC.
Why Did Automation and Protection Mechanisms Fail to Prevent the Problem?
Cloudflare acknowledged that the error exposed room for improvement in its internal deployment processes. The existing automation included a “safety mechanism” meant to block incorrect BGP advertisements, but this update was applied at a lower layer of the system configuration and fell outside that protection's scope.
In addition, the change was intended to apply only to specific experimental network segments, yet it unexpectedly affected the main production environment. Cloudflare has begun correcting the deployment system's scope definitions and strengthening automatic detection of erroneous policies.
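The scope problem the report describes can be illustrated with a minimal pre-deployment guard: a check that refuses to roll out a change whose targets fall outside its declared experimental segments. The segment names and the `validate_scope` helper below are hypothetical sketches, not Cloudflare's real deployment system.

```python
# Hypothetical segment names for illustration only.
EXPERIMENTAL_SEGMENTS = {"test-pop-1", "test-pop-2"}

def validate_scope(change_targets, allowed):
    """Refuse to roll out a change that touches segments outside its
    declared scope, instead of discovering the mistake in production."""
    out_of_scope = change_targets - allowed
    if out_of_scope:
        raise ValueError(f"change touches out-of-scope segments: {sorted(out_of_scope)}")

# Intended rollout: experimental segments only -> passes silently.
validate_scope({"test-pop-1"}, EXPERIMENTAL_SEGMENTS)

# Misconfigured rollout: accidentally includes a production segment ->
# rejected before any BGP policy is pushed.
try:
    validate_scope({"test-pop-1", "prod-eu"}, EXPERIMENTAL_SEGMENTS)
except ValueError as err:
    print("blocked:", err)
```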
Cloudflare Promises Future Improvements
Cloudflare stated that it will take the following measures to prevent similar incidents from occurring again:
Strengthen verification of BGP-related settings to prevent unexpected route advertisements;
Clearly separate the permission settings for testing and production environments;
Expand automated alerting to respond to abnormal network traffic within seconds (a conceptual sketch of this kind of check follows the list);
Tighten auditing of internal changes and oversight of manual operation processes.
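As a rough illustration of the automated-alerting idea mentioned above, the sketch below compares the prefixes a network intends to advertise with what is actually observed externally (for example, via public route collectors) and flags any gap immediately. The values and the `check_advertisements` helper are hypothetical; this is a conceptual example, not a description of Cloudflare's monitoring stack.

```python
from ipaddress import ip_network

# Prefixes the network intends to keep advertised (hypothetical values).
INTENDED = {ip_network("198.51.100.0/24"), ip_network("203.0.113.0/24")}

def check_advertisements(observed):
    """Return alert messages for intended prefixes that are no longer
    visible externally (e.g. to public route collectors)."""
    missing = INTENDED - observed
    return [f"ALERT: {prefix} is no longer advertised" for prefix in sorted(missing, key=str)]

# Normal state: everything visible, no alerts.
print(check_advertisements(INTENDED))

# Outage state: a policy error withdrew one prefix; the check flags it
# within one polling interval so engineers can start a rollback quickly.
print(check_advertisements({ip_network("198.51.100.0/24")}))
```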
The company also emphasized that it will continue to improve transparency and, should any failures occur in the future, will promptly disclose relevant information to maintain user trust.
The Responsibilities and Challenges of Internet Giants
As one of the world's largest network infrastructure providers, Cloudflare supplies key components including CDN, DNS, network security, and DDoS protection, so a single BGP configuration error can cause a global “digital blackout.” Although this incident was handled quickly, it highlights the risks and challenges that come with the high centralization of internet infrastructure.
This article, “Cloudflare's Global ‘Digital Blackout’: Official Report Reveals Details of the Outage on November 18,” first appeared in Chain News ABMedia.