


February 16, 2009 · Engineering

Reckless Driving on the Internet


This weekend, John Markoff wrote an interesting piece for the New York Times entitled Do We Need a New Internet? While his emphasis was largely on security, or rather the lack thereof, the central point Markoff makes is that the Internet may be so hopelessly broken that it could be better to start over, rather than continue to apply band-aids. As if to emphasize this point, SuproNet, a local Czech provider, single-handedly caused a global Internet meltdown for upwards of an hour today. SuproNet accomplished this feat by sending out a rather unusual routing update, one which a lot of routers did not handle very well. The result was Internet bedlam.

Some Preliminaries

Routing on the Internet is strictly a cooperative affair. Neighboring routers tell each other what they know and that information ultimately propagates globally. Eventually everyone figures out how to reach everyone else. And what routers know are prefixes, i.e., blocks of IP addresses, that are routed in the same way. Since there is often more than one way to reach any given prefix, routing announcements include various attributes so that everyone can decide on their preferred path to each prefix. One such attribute is the Autonomous System (AS) path, i.e., the list of organizations that have to be traversed to reach the prefix. For example, I'm typing this blog from my home Verizon DSL connection. If I wanted to reach an IP address in Qatar served by Qtel, Verizon might hand that traffic off to Tata/Teleglobe (AS 6453), which in turn hands it off to Qtel (AS 8781). The prefix in question and its associated AS path are depicted graphically below.

AS path length is one important factor in route selection, with shorter paths favored over longer ones. Suppose that Qtel wanted to use Tata/Teleglobe as a backup provider for this prefix, only to be used when other alternatives failed. They could effect this by making the announced path artificially long. Instead of

  • 701 6453 8781

we might see paths like

  • 701 6453 8781 8781 8781

In this example, Qtel would have prepended its own AS to the path several times so that this particular route to this particular prefix would tend not to be selected by others.
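The selection logic described above can be sketched in a few lines of Python (a deliberate simplification: real BGP compares local preference, origin, MED, and more before it ever looks at path length; the ASNs are the ones from the example):

```python
# Simplified BGP route selection: among otherwise-equal candidates,
# the route with the shortest AS path wins. Real BGP consults local
# preference, origin, MED, etc. before it compares path lengths.

def best_path(candidates):
    """Return the candidate AS path with the fewest hops."""
    return min(candidates, key=len)

primary = [701, 6453, 8781]              # normal announcement
backup = [701, 6453, 8781, 8781, 8781]   # Qtel prepends itself twice more

# The prepended backup path loses, exactly as intended:
print(best_path([primary, backup]))      # [701, 6453, 8781]
```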

Now the average path length on the Internet is only around 4. That is, we are all fairly close to one another. So if I make any path seem just a little bit longer, by one or two ASes, it generally will not get selected and will accomplish the objective of being the path of last resort. Nothing stops you from prepending your own AS a dozen or even a hundred times, but it is not going to accomplish anything and will only pointlessly consume everyone else's router memory. It's also an indication that you don't know what you are doing. Which brings us to a central problem: you don't need a driver's license on the Information Superhighway.

Bedlam on the Internet

Now suppose you just got your Internet learner's permit yesterday and you really don't want your backup provider being used unless your main provider is down. You could prepend your AS a few times in the route announcements you make to your backup provider and that would do the trick, but to make really sure you go for a few hundred instead. In a perfect Internet, that wouldn't matter, but we don't have one of those. What we think happened next is the Internet equivalent of a massive buffer overflow. While most of the core routers run by major ISPs fared just fine, processing the ridiculous path and sending it on, others choked. Perhaps they weren't as well maintained or were running buggy software. These routers viewed the update as malformed and so tore down their session with whoever sent them the update. In other words, two routers that were happily exchanging traffic with each other just moments before suddenly stopped all communication. Traffic was lost, alternative paths were explored, and maybe the former cooperating routers recovered and re-established contact. Multiply this by thousands of routers around the world and you can begin to appreciate the ensuing pandemonium. At Renesys, we experienced an almost 100-fold increase in the rate of routing updates from our worldwide array of sensors.

The Details

SuproNet (AS 47868) normally announces a single prefix to a single provider, CD-Telematika (AS 25512). On February 16th at 16:23:30 UTC, we saw this same prefix via a different provider, Sloane Park Property Trust (AS 29113), but with an AS path exceeding 255 ASNs. Such messages continued for almost exactly one hour, until 17:23:00 UTC. We observed Level 3 (AS 3356), Tiscali (AS 3257) and TeliaSonera (AS 1299) propagating most of these routes globally, with a total of 230 unique ASes ultimately sending us the problematic announcements.
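The 255 figure is no accident: in the BGP wire format, each AS_PATH segment carries a one-octet ASN count, so any path longer than 255 ASNs must be split across multiple segments, and implementations that assumed one segment per path were exactly the ones at risk. A rough Python sketch of the splitting rule (illustrative only; the real encoding in RFC 4271 packs 2- or 4-byte ASNs into octets):

```python
# Each AS_PATH segment holds at most 255 ASNs because its length
# field is a single octet. A longer path must be carried as several
# AS_SEQUENCE segments; code assuming "one path, one segment" breaks.

AS_SEQUENCE = 2  # segment type code from the BGP spec

def split_as_path(asns, max_per_segment=255):
    """Split an AS path into (type, count, asns) segments of <= 255 ASNs."""
    return [
        (AS_SEQUENCE, len(asns[i:i + max_per_segment]), asns[i:i + max_per_segment])
        for i in range(0, len(asns), max_per_segment)
    ]

# A path resembling the incident: the origin AS prepended ~300 times.
long_path = [29113] + [47868] * 300
segments = split_as_path(long_path)
print(len(segments), segments[0][1], segments[1][1])  # 2 255 46
```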

This single Czech provider announcing a single prefix caused a huge increase in the global rate of updates, peaking at 107,780 updates per second. This peak occurred at 16:30:54 UTC, less than 8 minutes after the first announcement.

At Renesys, we call a prefix impacted in a given hour if it either suffers an outage or has a non-trivial amount of instability. In the hour before this event, there were 1215 impacted prefixes globally out of a total of 271,175. During the event, that number surged to 12,920, or 4.8% of all prefixes on earth. One announcement from one provider and we have a 10-fold increase in planetary routing instability for an hour. North America suffered the most, increasing from 0.35% to 4.76%, while South America suffered the least, increasing from 0.52% to 1.75%.
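For the record, the percentages quoted above follow directly from the raw counts:

```python
# Recomputing the instability figures from the counts in the text.
baseline_impacted = 1215    # impacted prefixes, hour before the event
during_impacted = 12920     # impacted prefixes, hour of the event
total_prefixes = 271175     # all prefixes in the global table

print(round(100 * during_impacted / total_prefixes, 1))   # 4.8 (% of all prefixes)
print(round(during_impacted / baseline_impacted, 1))      # 10.6 (-fold increase)
```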

Mapping the Damage

It's always the middle of the night somewhere, a time when ISPs perform their maintenance. And bad weather and backhoes roam the earth. So routing instability on the Internet is always present, as networks come and go. The following map shows instability levels by country in the hour before this event, starting at 15:00 UTC, computed as a percentage of all prefixes geo-locating to the country.

Global Instability by Country – Before

The next map shows instability levels by country during the hour of the event, starting at 16:00 UTC. Can you spot the outdated routers?

Global Instability by Country – During

Now what?

We were heartened to see that most of the Internet's core survived a single odd announcement, but this does speak to a lot of outdated equipment or software at the edge. And if you manage to get all of the edge routers to reset, you aren't going to have many people to talk to no matter what the core is doing. While it might be tempting to bash SuproNet, can anyone really defend a system where a failure at probably one of its weakest links can cause the entire system to unravel? Maybe we really do need a new Internet, and for more reasons than better security. The next one needs to come with an operating permit too.
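One concrete mitigation, which several commenters below also advocate, is a receive-side cap on AS-path length that quietly drops the one offending route instead of resetting the whole session. A minimal sketch; the 50-hop threshold is an illustrative assumption, not a value from any standard:

```python
# Sketch of a defensive receive-side check: discard and log routes with
# absurdly long AS paths rather than tearing down the BGP session.
# The 50-hop cap is an assumed, illustrative threshold.

MAX_AS_PATH_LEN = 50

def accept_route(as_path, log):
    """Return True if the route passes the sanity check; log rejects."""
    if len(as_path) > MAX_AS_PATH_LEN:
        log.append(f"dropped route with {len(as_path)}-hop AS path")
        return False
    return True

log = []
print(accept_route([701, 6453, 8781], log))        # True: a normal path
print(accept_route([29113] + [47868] * 300, log))  # False: the bad update
print(log)
```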


29 Responses to Reckless Driving on the Internet

  1. Rudolf says:

    So what do you propose?
    Where could you apply pressure?
    1. Most countries have liberalized telecom markets. Anyone can become a telco. –> stopping idiots here violates so many laws, can’t go there.
    2. RIRs hand out address blocks and AS numbers: mandatory BGP config classes when first receiving an AS?
    3. Transit providers need to check BGP driving license? (How do you check for clue? Will sales do it for you?)
    4. People building routers need to build good software
    5. People configuring routers need to know how to configure routers
    The trouble is that we have a global market that anyone can join and participate in. This is a great thing as it allows new parties with clue to enter the market and change the game in a way like never seen before. This is bad for the same reason with clueless people. The GSMA has a solution… they allow only their friends on the GRX. It kind of works with a minimum amount of parties involved. It is not good enough for a liberalized market.
    It’s comparable to the roads. Every idiot that can move can get on them. Yes we require people to have a drivers license for cars. However we still have major accidents with far reaching consequences. It is not the fault of the road system and requiring only qualified drivers on the road would be detrimental to the system of free travel (qualified means pro-driver, level Obama’s chauffeur)

  2. JZP says:

    Nice writeup. As usual, some things have to get glossed over to communicate to the general public, but it is useful to point out to those who might be getting their BGP driver's licenses and reading this article for things to avoid: other elements of the decision tree can and will override the path length, so don't expect that to work 100% for backups. Find out if your providers have localpref-tuning communities and use them in addition to a little prepending.
    @Rudolf: we only have 4 and 5 now. 3 is already a problem – how many times has a customer not been given your BGP or multihoming instructions, been promised something impossible by sales, and the salesperson is not held accountable, leaving technical people to create unsupportable one-offs?
    It behooves service providers to filter their customers. Given the diameter of the 'net, anything beyond a dozen or so ASNs is added noise. Permit customers to use communities to control your edge-prepends, disallow path forgery, and permit only a handful of customer prepends. Enforce your vendor's maximum as-path limit to something many multiples of the current AS diameter from your point of view (50 is popular). Push these conditions transitively down to your customers with BGP speakers behind them.

  3. Neil McRae says:

    Action is needed in the IETF IDR forum to change the behaviour of BGP so that these issues do not have this level of impact. This has been an issue for quite some time. A small group is working on a draft to try and ensure this doesn’t happen again. If you are interested in collaborating on this contact me at neil at

  4. Dinger says:

    You hit the nail on the head here: “Perhaps they weren’t as well maintained or were running buggy software.”
    This would have been nothing more than an FYI post or two to NANOG if everyone ran semi-recent code on their routers. The routers that were properly maintained didn't even hiccup.

  5. Ted Mittelstaedt says:

    Jim, pull your head out.
    The fact of the matter is that the administrators of the largest backbones identified this problem within 15-20 minutes of it happening, and, using the appropriate WHOIS records, contacted the responsible parties who got it corrected within 2 hours, maximum.
    I challenge anyone to find a global ANYTHING even half the size of the Internet, that identifies and corrects problems this rapidly.
    Contrast this story to the Salmonella contamination in the US Peanut products industry. That is a FOOD distribution network, not unlike the Internet in structure, and we are still finding contaminated products. And it’s just in the US only – not global – it’s VASTLY smaller.
    If the food industry responded as quickly as the Internet’s administrators do, after the first Salmonella death, there wouldn’t have been any other ones. Talk about drivers licenses and incompetence!!!!!!!!

  6. Brett says:

    So… almost anyone determined to recreate this can?

  7. Ted,
    the ability to limit the AS-path length has been available in (at least) Cisco IOS for years and obviously only a few service providers implemented it (read the thread on the NANOG mailing list).
    Furthermore, any decent ISP (and I guess there are not that many that would fit this particular criteria) should filter the updates from their customers. It’s not hard to limit customer’s AS-path prepending to a few AS-es, you just have to realize it might be an issue and implement the filters.
    However, neither countermeasure has been used as illustrated by the impact … and thus a single greenhorn small player from a mid-sized European country was able to make a global impact. Do you find that reasonable? I don’t.

  8. Brent2 says:

    @Ivan Thing is, there aren't hundreds of things that can go wrong. There are tens of thousands. Figuring out which one could is done all the time. Then another and another. Your router manufacturer thinks of a thousand, builds the hardware well enough to cover another ten thousand (by accident) and leaves the rest to you. Your OS manages to cover another 20K.
    Then the sys admins start going through options, adding their own counter-measures. One change can stop 100+ problems. Huge companies, if run well, have enough admins to hit almost anything. Almost.
    @Brett No. This particular one won't happen again, and probably a few hundred others will be hit by the fixes/updates. Others might. But I'm betting a new round of problems will be found, published and fixed over the next week. Our global community of obsessive admins (we love you lots) will see to it.
    Line from The West Wing, “The most costly mistakes always happen when the thing we take completely for granted stops working for a minute.”

  9. Chris Cappuccio says:

    This is all basically bullshit. The bug was caused by certain buggy implementations. BGP implementations have lots of bugs, and it’s up to the people who maintain their own routers to keep the software up to date. If it isn’t this, it’s something else. There’s really no stopping poorly written software.

  10. Josh Potter says:

    It's really easy to stand on a box and tell others to run the latest version of code; however, people do have valid reasons for being on older versions of code.
    But for the people who have the functionality but didn't turn it on, well yeah, let's shout at them please.

  11. Richard Steenbergen says:

    Nothing in the BGP spec says that > 255 AS-PATH hops is invalid; in fact, the spec specifically says how you should handle this case. 255 hops is certainly "unusual", but this was a software bug pure and simple. Sessions that shouldn't have been torn down were, and flapping -> dampening -> suffering ensued.
    This exact situation has happened at least a dozen times in the past several years, and the root of the problem is the "in the event of an error, drop the session" method of controlling the spread of invalid BGP attributes. The fatal flaw in this design is that it requires every BGP implementation in the world to correctly detect every possible error. As soon as any major BGP implementation allows the propagation of a particular problem attribute (either by failing to detect an invalid one, or by propagating a valid one that other routers incorrectly think is invalid), the problem quickly spreads to the entire global routing table.

  12. internet says:

    Reckless Driving on the Internet – Renesys Blog

    Bookmarked your post over at Blog!

  13. Scott says:

    This situation falls right into the same category as bogon filtering. If everyone followed bogon principles and filtered invalid sources and destinations at their border, there would be no source or destination address-spoofing anywhere on the Internet. Likewise, if everyone filtered AS path advertisements down to a "reasonable" number, this excessive path problem would never have occurred and would never occur in the future. Maybe it's time for a new RFC that not enough people will read and think "hey, that really IS a good idea"…

  14. bangky says:

    Hi, can someone from renesys confirm if 29113 was the only AS announcing this path at that time, or if the long AS path was in addition to the usual announcement via 25512?
    Feel free to ping me offlist. Thanks.
    Editor’s Note: 25512 was seen throughout the incident, but by fewer than 3% of our peers. Immediately after the incident, the situation reversed, with almost everyone going back to 25512, except for a few staying with 29113.

  15. A.Amor says:

    Maybe the Regional Internet Registries like RIPE and ARIN need to use AS number licences with penalty points. If you make mistakes then you will lose your AS number. Same concept as the driving licence.
    The other option is to upgrade BGP, with auto-detection features that allow the protocol to detect mistakes automatically. Time to move from BGP4 to BGPX.

  16. James Miner says:

    There is an easy solution for this case: major country ISPs will state that they will not receive nor propagate any BGP announcements with AS paths longer than 10, for example. Done, problem solved.

  17. Michal Margula says:

    I don't know why anybody is blaming the guy in that company that "caused" it. The truth is that he made it by mistake (probably) and caused some trouble. We should really blame the people running buggy software, because next time it will be done not by mistake by some newbie, but by someone doing it on purpose.

  18. I think that’s *ap*pend and not *pre*pend. :-)
    Perhaps the intentional appending of few hundred repetitions of one’s own AS “for good measure” *is* the seed here. But sometimes bad software feeds bad software. Consider this scenario: You’re specifying a value in your router (or router-management) UI (assuming that your router has one :-D). Maybe you’re even pasting or otherwise copying in a modestly sized block of repetitions. For whatever reason, a stuck key or whatever, the copy or write process loops–until whatever process yells “uncle!” first stops the looping. And say this bulked-up value nonetheless does indeed end up getting saved and actually applied (and then off we go…). But by now you may have specified a very long value that your router/management UI was not written to expect to usefully display or handle. One possible flavor of this is that the text box or field that displays/echoes applied values is coded to display the “left” (“big,” “most significant digit”) end of values, with the expectation that if you want to see the entire value specified, you must put your cursor in there and press RIGHT ARROW until you get to the end. Through such a mechanism, the operator may not have been able to superficially and immediately see their (unintentionally enormous) misspecification.
    I’m just sayin.’ :-)
    Of course, what happened next was that bad software elsewhere went on to do exactly what it was told to do. And so where’s that QA manual for the Internet? Sounds like the basics of a new test case…

  19. Internet Routing Pandemonium

    Here’s an interesting blog post explaining how a single routing error made by a small ISP in Europe caused

  20. Long AS paths causing commotion

    Last Monday long AS paths caused quite some commotion. A good technical explanation can be found at the Renesys and arbornetworks blog

  21. anon says:

    Actually, it is *pre*pend.
    While this usage may not make precise sense in natural language, it is the terminology used in the BGP specifications (rfc 1771 onwards), and by basically every BGP operator on the planet.

  22. Hashname says:

    Beautiful explanation… I'm someone with practically zero knowledge of networking, but you explained it with such conviction that I just kept reading… Bookmarked!

  23. Thibault says:

    “Be strict in what you send and liberal in what you receive”
    configuration errors may always happen.
    If an update is not understandable, ignore it, log an error so that the administrator will see it, and transmit it to their peering counterpart.
    I think the failure is in the software which should be more resilient.

  24. Thibault says:

    In my previous comment I spotted a technical cause in the software.
    The organisational solution in my view would be that a sub-organisation of the IETF should be responsible for certifying pieces of equipment, i.e., router interfaces destined for interfacing between ASes on the Internet.
    Tests would be made by the independent organisation, which would issue a certification entitling the router to be used as an ASBR on the Internet.
    Only certified pieces of equipment could be ASBRs on the Internet.
    I have been a developer of X.25 switches for the French X.25 operator Transpac, and of the DTRE X.75 NTI switch.
    Never would a piece of equipment be introduced into those public networks before thorough tests were made, including double-error cases.
    You may imagine that with X.75 being the "Noeud de Transit International" interconnecting all the other countries' X.25 networks, built from a wide range of suppliers with many different people configuring them, the validation tests were quite serious.
    I think this is what was lacking there.
    Routers which may participate in the Internet, with its level of interdependence, should not receive the same validation as routers for a private network.

  25. Robert says:

    I work in IT Security and this map (the second map) is outright SCARY. It visually shows the lack of upgrading to the backbone routers on the internet (managed by the big name telecoms, Verizon, Level 3, ATT, etc).
    This story even starts out with the NY Times piece on starting over on the internet.
    Map #2 is the most enlightening. It shows the effect of the damage caused by the Czech ISP SuproNet and even asks the question, "can you spot the outdated routers?". The telecoms are obviously aware of the outdated routers on their backbones and are choosing to do NOTHING about it. Telecoms have been CLEANING UP in the last several years and have the funds to keep up with updates to their backbone segments.
    It could be debated that the stimulus bill money will be used to update some of these routers since Obama “plans” to have more rural penetration with broadband. I have a perception problem with that, I do not see it happening. This is OBVIOUSLY Internet 2 money.
    SuproNet might have done it deliberately. Who knows, and really, who cares? In any case, it is another example of willful negligence on the side of big telecoms. They plan to start pushing Internet 2 soon and get people onto that, of course with a license and access to websites by government approval.

  26. Ed says:

    Good summary and layman's explanation. Nice work.

  27. Just to make sure we’re not spreading misinformation. It was not an “older IOS software” issue, it was a previously unknown bug and many (if not all) recent IOS releases were affected. If you’ve configured AS-path length limiting (available for years), you were safe. Obviously, not many people have done that.
    This particular incident was noticed because the AS-path length was just right to cause problems on core peerings. That’s why some regions were not as affected as others (or maybe their routers were better configured).
    See Oversized AS paths: Cisco IOS bug details
    for details. You might also want to know what caused the problem in the first place.


  29. adam says:

    Interesting that no ISP summarized the prefix into a supernet on its way across the globe, and that very few ISPs out there control the AS path length on the routes they receive.
    One would think these security measures are part of the standard BGP config templates of most ISPs.
    Can't wait for other RFC-defined BGP parameters to get carefully compromised and propagated throughout the Internet.
