DNS Amplification Attacks Show the Need for Application Distribution


DNS amplification attacks have been used for several years. The Spamhaus attack of 2013 was the first truly large-scale DDoS attack to use DNS amplification. It peaked at roughly 300Gbps and involved as many as 30,000 open DNS resolvers, and it lasted for more than a week before fully subsiding. Up until last year, the Spamhaus attack was probably the best known of all DDoS and DNS reflection attacks.

As my colleague covered here, the Dyn attack in October of 2016 caused an unprecedented level of Internet DNS disruption. As many as 100,000 IoT devices launched multiple attacks that disrupted the Dyn Managed DNS platform for several hours, making a large piece of the Internet unusable in certain parts of the world. This is one of the more famous attacks because of the popularity of the services that were taken offline. The world now knows that even the biggest services can be taken down by an attack, but unfortunately the weaknesses that attack exploited remain poorly understood and largely unresolved.

What happened?

In the 2013 Spamhaus incident, the attackers spoofed the Spamhaus IP and sent small requests for a large DNS zone file to open DNS resolvers. These resolvers responded by sending the much larger zone file to Spamhaus, each contributing about 2.5Mbps of traffic. That is the amplification effect: a tiny spoofed query triggers a response many times its size, so while each of the roughly 30,000 open resolvers sent only a modest stream, their combined flood overwhelmed Spamhaus. You can get the technical details in the Cloudflare blog here.
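To put those numbers in perspective, here is a quick back-of-the-envelope calculation. The query and response sizes below are estimates from the Cloudflare write-up linked above, not new measurements; per that write-up, the initial flood measured at the target was on the order of 75Gbps, with the ~300Gbps peak observed in later phases of the attack further upstream.

```python
# Rough math for the 2013 Spamhaus attack; sizes are Cloudflare's
# published estimates, so treat the output as illustrative.
QUERY_BYTES = 36          # tiny spoofed DNS query for a large zone
RESPONSE_BYTES = 3_000    # the zone data each open resolver returns
RESOLVERS = 30_000        # open resolvers participating in the attack
PER_RESOLVER_MBPS = 2.5   # average traffic each resolver sent the victim

amplification = RESPONSE_BYTES / QUERY_BYTES
aggregate_gbps = RESOLVERS * PER_RESOLVER_MBPS / 1_000

print(f"Amplification factor : ~{amplification:.0f}x")
print(f"Flood at the target  : ~{aggregate_gbps:.0f} Gbps")
print(f"Attacker bandwidth   : ~{aggregate_gbps / amplification:.2f} Gbps")
```

The last line is the real lesson: thanks to amplification, the attackers needed to generate less than a gigabit per second of spoofed queries to produce that flood.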

The Dyn attack three years later was a completely different beast. This attack was launched by the Mirai botnet, a collection of hundreds of thousands of compromised Internet of Things devices. These devices were instructed to flood the Dyn servers in a particular region with high volumes of garbage packets. As Dyn personnel responded with mitigation efforts, the attackers shifted to different regions of the world to continue the attack. When mitigation efforts began to succeed against these regional attacks, the botnet responded with a more globally distributed assault. Services were ultimately restored and remained unaffected by follow-on attacks, which continued for a few more days. See the Dyn report here for complete details.

Let’s consider the differences between these two attacks: 30,000 DNS resolvers responding to IP spoofing, versus 100,000 poorly secured smart ‘things’ turned into bots by software freely available on the Internet. 300Gbps versus 1.1Tbps. In both cases, systems behaving exactly as they were designed to behave. Only three years between them.

What will happen in another three years?

Why DNS?

The Domain Name System, or DNS, is like the phone book of the Internet. It allows users to connect with a website like Techspective by using the name, rather than a series of uninteresting numbers. Domain names are massively important to business (and personal) brands, and the really popular ones have even become verbs. “I’m going to Google the movie before I buy tickets.” “I’m Facebooking an old friend.” These things would not be possible without domain names, and domain names would not work without the Domain Name System.
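For the curious, this is all a DNS lookup does: turn a name into the addresses machines actually use. A minimal illustration in Python (techspective.net stands in for any hostname):

```python
import socket

# Translate a human-friendly name into numeric IP addresses.
for info in socket.getaddrinfo("techspective.net", 443,
                               proto=socket.IPPROTO_TCP):
    print(info[4][0])  # the resolved IP address
```

Every connection to a named service starts with a lookup like this, which is exactly why taking out the lookup takes out the service.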

Dyn is a huge DNS provider, and maintains the records for companies like Twitter, Reddit, Spotify, and more. When the Dyn DNS services went offline, these sites were unavailable to large parts of the world. Attacks on DNS give criminals a far larger effect on a victim than would normally be possible: amplification multiplies the attacker’s own bandwidth, and aiming at a managed DNS provider takes down whole groups of websites that rely on that service but are otherwise unrelated. Although 100,000 webcams and DVRs are not going to be enough firepower to launch a successful attack on Twitter, GitHub, Amazon, Netflix, Pinterest, Etsy, Reddit, PayPal, etc., all at once, they were enough to launch an attack on the single DNS provider that supports them.
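You can see this shared dependency for yourself by checking which nameservers a zone delegates to. A quick sketch using the dnspython library; providers may have changed since 2016, so treat the output as illustrative:

```python
import dns.resolver  # pip install dnspython

# List each site's NS records to reveal shared DNS-provider dependencies.
# At the time of the attack, many such zones pointed at Dyn.
for domain in ["twitter.com", "reddit.com", "spotify.com", "github.com"]:
    answer = dns.resolver.resolve(domain, "NS")
    print(domain, "->", sorted(rr.target.to_text() for rr in answer))
```

When several otherwise unrelated domains print the same provider’s nameservers, they share a single point of failure.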

Why do we care?

Companies that are victims of a DDoS attack lose money in several ways. Businesses still have to pay employees who are at work, even if their work can’t be done while services are down. Advertising revenue and transaction fees cannot be earned or collected, and savvy competitors may step in and start peeling customers away while the business is offline. In the Dyn attack, the company itself bore attack mitigation and customer service costs, and potentially SLA-related penalties as well. The owners of the Mirai-infected devices probably didn’t notice anything, but they may be in for a surprise down the road.

Businesses that were not targeted or taken offline also suffered from the Dyn attack. The attack took out more than just popular news and entertainment sites. There were interruptions to project management applications like Basecamp and Zoho, incentive systems like Starbucks Rewards, commenting systems like Disqus, business messaging applications like Yammer, incident resolution systems like PagerDuty, time tracking systems like NimbleSchedule, and multifactor authentication services like Okta. The loss of any one of these systems can bottleneck a workflow.

Multifactor authentication services in particular are mission critical to many companies. When a company uses two-factor authentication (2FA), it needs that second layer of credentials to access its resources. Imagine losing the ability to log in to a line-of-business SaaS application for several hours. Sales calls, tech support tickets, payroll applications … all of these things could be inaccessible, not because the service itself is offline, but because the second layer of information required to authenticate to the service cannot be contacted. A company that follows best practices and enforces 2FA is going to take a major productivity loss if that authentication provider is offline. As we connect more of our infrastructure and emergency systems to the Internet, protecting ourselves from these attacks will become less about workplace productivity and more about saving lives.

What does this mean?

There are several interesting lessons we can take from these incidents, and three that I think are particularly valuable. The first is that the public clearly has a role to play in securing the Internet of Things. It simply isn’t possible to secure all of the cameras, DVRs, refrigerators, doorbells, etc., without the participation of the consumers who use them. Unfortunately, the public still struggles to follow best practices for something as basic as password complexity. What are the odds that the general public will remember to secure their IoT devices? Security providers and the IT industry in general have to help the public understand the risks, while also assuming the greater responsibility of protecting the Internet.

The second lesson relates to the dispersion of the services that support our Internet-based services, sites, and applications. Cloud-based resources that are meant to be available worldwide should be deployed worldwide, across multiple hosting providers. While most companies deploy redundancies in their infrastructures, those redundancies may still share a single point of failure somewhere in the system. Multiple service providers on different backbones allow cloud-based resources to be configured so that they always have access to a working service. A system should not be unavailable due to an attack on one provider.
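As a sketch of what that looks like from the client side, the snippet below tries one DNS service and falls back to another when the first fails. The resolver addresses are well-known public resolvers used purely as stand-ins for “provider A” and “provider B”; in practice you would list your own providers’ servers.

```python
import dns.exception
import dns.resolver  # pip install dnspython

# Stand-in addresses for two independent DNS providers.
PROVIDERS = [["8.8.8.8", "8.8.4.4"], ["1.1.1.1", "1.0.0.1"]]

def resolve_with_failover(name: str) -> list[str]:
    last_error = None
    for nameservers in PROVIDERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = nameservers
        try:
            answer = resolver.resolve(name, "A", lifetime=2.0)
            return [rr.to_text() for rr in answer]
        except (dns.exception.Timeout, dns.resolver.NoNameservers) as exc:
            last_error = exc  # this provider is unreachable; try the next
    raise last_error

print(resolve_with_failover("example.com"))
```

The same principle applies on the authoritative side: delegating a zone to nameservers operated by two independent providers means one provider under attack does not take the zone offline.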

The third is about the time-to-live value, or TTL, associated with DNS records. Put simply, the TTL dictates how long a device keeps a DNS record in its cache. DNS attacks would not have such a severe impact on a resource if TTL behavior could be widely changed in response to the threat. For example, if a DNS client could not reach an authoritative server like Dyn Managed DNS, that client could allow all traffic to continue to the cached entry, even one that had expired. This would require a fail-open design from multiple operating system vendors, and an industry-wide effort to be successful.
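Here is a minimal sketch of that fail-open idea, assuming a hypothetical upstream_lookup function for the authoritative query. The industry later standardized a version of this behavior as “serve-stale” (RFC 8767).

```python
import time

cache: dict[str, tuple[str, float]] = {}  # name -> (address, expiry time)

def resolve(name, upstream_lookup, ttl=300):
    """Return a cached answer while fresh; fail open to stale data
    if the authoritative server cannot be reached."""
    now = time.time()
    entry = cache.get(name)
    if entry and entry[1] > now:           # fresh cache hit
        return entry[0]
    try:
        address = upstream_lookup(name)    # ask the authoritative server
        cache[name] = (address, now + ttl)
        return address
    except OSError:
        if entry:                          # upstream unreachable: serve
            return entry[0]                # the stale record (fail open)
        raise                              # nothing cached; must fail
```

Under this design, an attack that silences the authoritative servers degrades freshness rather than availability: clients keep reaching the last known address until the servers return.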

It will take more than individual effort to secure the Internet’s DNS and other critical systems. As we move toward an increasingly connected world, it is worth our while to bring vendors and academics together to collaborate on these efforts.

Fleming Shi is SVP of Advanced Technology Engineering at Barracuda.

