Internet Protocol is the protocol underlying all Internet communications; it’s what lets a packet of information get from one computer on the Internet to another.

Since the beginning of the Internet, Internet Protocol has permitted Computer A to send a packet of information to Computer B, regardless of whether Computer B wants that packet. Once Computer B receives the packet, it can decide whether or not to discard it.

The problem is that Computer B only has so much bandwidth available to it. If someone can acquire control over enough computers to act as Computer A, they can overwhelm Computer B’s bandwidth by having all of those computers send it packets of data at once; this is a distributed denial-of-service (DDoS) attack.

Any software running on a computer — a game, pretty much any sort of malware, whatever — normally has enough permission to send information to Computer B. In general, it hasn’t been terribly hard for people to acquire enough computers to perform such a DDoS attack.
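To make that concrete, here is a minimal Python sketch of just how little is required; the destination address is a placeholder from a documentation range, and nothing about this asks Computer B whether it wants the packet or requires any special privilege:

```python
import socket

# Any unprivileged program can do this: Internet Protocol will carry
# the datagram toward the destination whether or not the recipient
# wants it. 203.0.113.5 is a documentation address standing in for
# "Computer B"; port 9 is the traditional discard port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"unsolicited data", ("203.0.113.5", 9))
sock.close()
```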

There have been various routes to try to mitigate this in the past. If Computer B was on a home network or a business’s local network, its operators could ask their Internet service provider to stop sending them traffic from a given address. This wasn’t ideal: a smaller ISP could itself be overwhelmed, and filtering good traffic from bad isn’t necessarily a trivial task, especially for an ISP that doesn’t really specialize in this sort of thing.

As far as I can tell, the current norm in 2026 for dealing with DDoSes is basically “use CloudFlare”.

CloudFlare is a large American Content Delivery Network (CDN) company: that is, it operates servers in locations around the world that keep identical copies of data. When a user requests, say, an image from a website using the CDN, instead of that image being returned from a single fixed server somewhere in the world, the CDN uses several tricks to serve the content from a server it controls near the user. This sort of thing has generally helped to keep load on international datalinks low (e.g. a user in Australia doesn’t need to touch the submarine cables out of Australia if an Australian CloudFlare server already has the image they want) and to keep websites more responsive for users.
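To illustrate the idea (this is not CloudFlare’s actual mechanism; real CDNs lean on anycast routing and geography-aware DNS rather than explicit per-user measurements), a toy version of edge selection might simply pick the server with the lowest measured round-trip time to the user:

```python
# Toy illustration of CDN edge selection. The edge names and latency
# numbers are invented; the point is only that content gets served
# from whichever copy is "closest" to the user.
edges = {
    "sydney": 12.0,      # hypothetical round-trip times in milliseconds
    "singapore": 95.0,
    "california": 160.0,
}

def pick_edge(rtt_ms: dict[str, float]) -> str:
    """Return the edge server with the lowest round-trip time."""
    return min(rtt_ms, key=rtt_ms.get)

print(pick_edge(edges))  # -> "sydney"
```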

However, CDNs also carry privacy implications. Because so much traffic is routed through them, large CDNs can monitor a great deal of Internet traffic and see a single user’s activity spanning many websites. The original idea behind the Internet was that it would work by having many small organizations that talked to each other in a distributed fashion, rather than having one large company monitor and manage traffic Internet-wide.

A CDN is also in a position to cut off traffic from an abusive user relatively close to the source: a request is routed to the CDN server nearest the flooding machine, and the CDN can choose to simply not forward it. CloudFlare has decided to specialize in this DDoS-resistance service, and has become very popular. My understanding (I have not used CloudFlare myself) is that they also have a very low barrier to entry; they see it as a way to get small websites started, and then to be the path of least resistance when those sites later want commercial services.

Now, I have no technical issue with CloudFlare, and as far as I know, they’ve conducted themselves appropriately. They solve a real problem, and it is not a trivial problem to solve, at least not as the Internet is structured in 2026.

But.

If DDoSes are a problem that pretty much everyone has to be concerned about and the answer simply becomes “use CloudFlare”, that’s routing an awful lot of Internet traffic through CloudFlare. That’s handing CloudFlare an awful lot of information about what’s happening on the Internet, and giving it a lot of leverage. Certainly the Internet’s creators did not envision there basically being an “Internet, Incorporated” responsible for dealing with this sort of administrative issue.

We could, theoretically, have an Internet that solves the DDoS problem without such centralized companies. A host on the Internet could have control over who sends it traffic to a much greater degree than it does today: there could be some mechanism to let Computer B say “I don’t want to get traffic from this Computer A for some period of time”, with routers blocking that traffic as far back toward the source as possible.
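As a sketch of what such a mechanism might carry (entirely hypothetical; nothing like this exists in Internet Protocol today, and all the field names are made up), Computer B would emit a message naming the source it wants silenced and for how long, and each cooperating router on the path back toward Computer A would install a temporary drop rule before passing the message upstream:

```python
from dataclasses import dataclass

@dataclass
class BlockRequest:
    """Hypothetical "stop sending me this traffic" message.

    Emitted by the victim (Computer B) and propagated hop-by-hop
    toward the source (Computer A), with each cooperating router
    installing a temporary (src, dst) drop rule before forwarding
    the request upstream.
    """
    src: str          # address to stop accepting traffic from (Computer A)
    dst: str          # address requesting the block (Computer B)
    ttl_seconds: int  # how long routers should honor the block
    signature: bytes  # proof that dst issued this (see the signing sketch below)
```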

This is not a trivial problem. For one, determining that a DDoS is underway and identifying which machines are problematic is something of a specialized task, and software would have to be capable of doing it automatically.
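As a deliberately naive sketch of the kind of thing that software would have to do (the window and threshold numbers are made up for illustration), one could count packets per source address over a sliding window and flag any source that exceeds a rate limit:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
THRESHOLD = 1000  # packets per window; an arbitrary illustrative number

recent = defaultdict(deque)  # source address -> timestamps of recent packets

def observe(src: str, now: float | None = None) -> bool:
    """Record one packet from src; return True if src looks like a flooder.

    A naive per-source counter like this is easily defeated by a botnet
    in which each machine individually stays under the threshold, which
    is exactly why real detection is a specialized task.
    """
    now = time.monotonic() if now is None else now
    stamps = recent[src]
    stamps.append(now)
    while stamps and now - stamps[0] > WINDOW_SECONDS:
        stamps.popleft()
    return len(stamps) > THRESHOLD
```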

For another, there is currently little security at the Internet Protocol layer, where this sort of thing would need to happen. A host would need a way to prove that it is authoritative for, and responsible for, the IP address in question; one doesn’t want some Computer C to be able to blacklist traffic from Computer A to Computer B.
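One building block might be public-key signatures over such block requests. Here is a minimal sketch using the Python cryptography library’s Ed25519 support. Note that it only proves the request came from whoever holds a given key; it deliberately leaves open the hard part described above, binding that key to the IP address, which would need some authority along the lines of RPKI’s address-ownership attestations:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Computer B holds a private key and signs its block request.
# The addresses below are placeholders from documentation ranges.
private_key = ed25519.Ed25519PrivateKey.generate()
message = b"block src=198.51.100.7 dst=203.0.113.5 ttl=600"
signature = private_key.sign(message)

# A router that somehow knows B's public key can check integrity,
# but without an authority binding the key to 203.0.113.5, some
# Computer C could mint a key and forge such requests just as easily.
public_key = private_key.public_key()
try:
    public_key.verify(signature, message)
    print("signature valid: issued by this keyholder")
except InvalidSignature:
    print("corrupted or forged request: drop it")
```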

For another, many routers are relatively limited as computers. They are not equipped to maintain a terribly large table of (Computer A, Computer B) pairs to blacklist.
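To put rough numbers on it: a pair of IPv6 addresses is 32 bytes, so a naive table of ten million blacklisted (Computer A, Computer B) pairs costs over 300 MB before any overhead, which is a lot of fast-path memory. One possibility, sketched here with made-up sizing and not something I know any router to actually do, is an approximate structure like a Bloom filter, which trades a small false-positive rate for a fixed memory footprint:

```python
import hashlib

class PairBlocklist:
    """Bloom-filter sketch of a bounded-memory (src, dst) blocklist.

    False positives (occasionally dropping legitimate traffic) are the
    price of the fixed footprint; there are no false negatives. A real
    router would also need some way to expire entries, which a plain
    Bloom filter cannot do.
    """

    def __init__(self, bits: int = 8 * 1024 * 1024, hashes: int = 4):
        self.bits = bits                    # 8 Mbit = 1 MB of filter memory
        self.hashes = hashes
        self.bitmap = bytearray(bits // 8)

    def _positions(self, src: str, dst: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{src}->{dst}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def block(self, src: str, dst: str) -> None:
        for pos in self._positions(src, dst):
            self.bitmap[pos // 8] |= 1 << (pos % 8)

    def is_blocked(self, src: str, dst: str) -> bool:
        return all(self.bitmap[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(src, dst))

table = PairBlocklist()
table.block("198.51.100.7", "203.0.113.5")
print(table.is_blocked("198.51.100.7", "203.0.113.5"))  # True
print(table.is_blocked("192.0.2.1", "203.0.113.5"))     # False (almost certainly)
```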

However, if something like this does not happen, then my expectation is that we will continue to gradually drift down the path to having a large company controlling much of the traffic on the Internet, simply because we don’t have another great way to deal with a technical limitation inherent to Internet Protocol.

This has become somewhat more important recently, because various parties who would like to train AIs have been running badly-written Web spiders that aggressively scrape website content for their training corpora, often trying to hide that they are a single party to avoid being blocked. In many cases this has acted as a de facto distributed denial-of-service attack, so software like Anubis (whose mascot you may have seen on an increasing number of websites) has been deployed in an attempt to identify and block these spiders.

We’ve had some instances on the Threadiverse become overwhelmed and almost unusable under the load from such aggressive Web spiders in recent months. A number of Threadiverse instances have disabled their previously-public access and now require users to get accounts to view content as a way of mitigating this. In many cases, blocking traffic at the instance is sufficient: even though the AI Web spiders are aggressive, they aren’t aggressive enough to flood a website’s Internet connection if it simply doesn’t respond to them, so something like CloudFlare, or Internet Protocol-level support for mitigating DDoS attacks, isn’t necessarily required. But it does bring the DDoS issue, which has always been a problem for the Internet, back to prominence in a new way.

It would also solve some other problems. CloudFlare is appropriate for websites, but not all Internet activity is over HTTPS. DoS attacks have happened for a long time (IRC users with disputes would flood each other, for example, since IRC traditionally exposed user IP addresses), and it’d be nice to have a general solution to the problem that isn’t limited to HTTPS.

It could also potentially mitigate DoS attacks more effectively than CDNs do, since it would permit pushing a blacklist request further up the network than a CDN datacenter, up to the ISP level.

Thoughts?
