Fahmida Rashid explores the importance of securing the essential building blocks of the modern internet and highlights how that can be achieved
The modern internet is essential – used for everything from commerce and healthcare to education. When parts of the infrastructure go down, whether maliciously or by mistake, the ripple effects can bring parts of the world to a standstill. That is why it is more important than ever to strengthen the internet’s infrastructure and make it harder for a determined bad actor to eavesdrop on user activity, disrupt operations and cause outages.
For all its issues, the internet isn’t broken. In fact, it is working as designed: it was originally intended for a world where people could trust that other people were who they claimed to be and would access only the things they were supposed to. Academics and researchers in the early days didn’t prioritize making communications secure by default or establishing mechanisms to verify identity because they were relying on existing trust relationships. Those trust relationships no longer exist on the internet – an all-encompassing communications medium between strangers.
Starting over and building a more secure and private internet from scratch is not an option, so the next best thing is to retrofit existing protocols with security and privacy controls to make the internet a safer place to shop, work and live. There have already been some improvements, such as the fact that Secure Shell (SSH) has more or less displaced Telnet as the way to access remote systems. A significant portion of web traffic is now encrypted, with Transport Layer Security 1.3 addressing how web connections handle encryption and Let’s Encrypt making it easier to obtain and renew certificates. There are also initiatives to tackle critical components everyone relies on, namely internet routing, time synchronization and domain names.
Fixing Routing
Cyber-criminals who stole $17m worth of the cryptocurrency Ethereum from MyEtherWallet.com over a period of two hours in 2018 did so by abusing the Border Gateway Protocol (BGP) – the universal routing system that is the foundation of the internet. When a user wants to go to a website, the traffic from their computer can pass through any number of intermediate servers before reaching the web server. There are many paths the traffic can take, and BGP holds information on the best route from one location to another.
BGP is akin to a GPS navigation service that tells network operators which route web traffic should follow. Analogous to the highway system, there are many ways to drive to a destination, but “some roads are better,” says Aftab Siddiqui, a senior manager of internet technology at the Internet Society, an international non-profit organization focused on internet standards, education and policy. Siddiqui is also project lead at Mutually Agreed Norms for Routing Security (MANRS), which was founded in 2014.
Technically, there is no reason why web traffic can’t take the scenic route through many servers, but the website can time out or be very sluggish to use if the path is too long. Alternatively, bad actors can direct the traffic through servers they control, allowing them to see the contents of the packets or divert users to malicious sites. With MyEtherWallet.com, someone sent BGP messages to the core routers with instructions to send traffic intended for AWS to a rogue DNS server. Users wound up on phishing sites because that server gave out the wrong IP address for MyEtherWallet.com.
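To see why announcing a bogus, more-specific route is so effective, consider how routers choose among overlapping prefixes: the most specific match wins. The following toy Python sketch (not a real BGP implementation, and the prefixes are illustrative rather than the actual routes involved in the MyEtherWallet.com incident) shows that longest-prefix-match logic at work.

```python
# Toy model of longest-prefix matching; the prefixes are illustrative examples only.
import ipaddress

routing_table = {
    ipaddress.ip_network("205.251.192.0/21"): "legitimate route to the DNS provider",
    ipaddress.ip_network("205.251.192.0/24"): "hijacker's more-specific route",
}

destination = ipaddress.ip_address("205.251.192.10")
matches = [net for net in routing_table if destination in net]
best = max(matches, key=lambda net: net.prefixlen)  # the most specific prefix wins
print(f"Traffic to {destination} follows the {routing_table[best]}")
```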
“I want to know I am getting my time information from someone trustworthy and not injecting bogus information into my clock”
While BGP hijacking has been around for years, this attack was a “wake-up call for many people” because of the amount of money stolen in such a short period of time, Siddiqui says.
It is easy to send BGP messages with routing instructions, which is why bogus routes can spread so quickly. MANRS is a consortium of network operators, internet service providers and cloud service providers that have voluntarily adopted cryptographic route checks and filters so that incorrect route information doesn’t spread throughout their networks. Controls include defining a clear routing policy, enabling source address validation and deploying anti-spoofing filters. A total of nine network operators were members when MANRS launched in 2014 – now there are more than 500.
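The sketch below gives a rough idea of one of those controls – filtering a peer’s route announcements against the prefixes it is registered to originate. It is a simplified illustration with made-up prefixes, not how router software actually implements MANRS actions.

```python
# Simplified illustration of route filtering; the registered prefixes and the
# announcements being checked are hypothetical example data.
import ipaddress

# Prefixes the peer is registered to originate (e.g. in an IRR database or via RPKI)
registered_for_peer = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def accept_route(announced: str) -> bool:
    """Accept an announcement only if it falls within the peer's registered prefixes."""
    prefix = ipaddress.ip_network(announced)
    return any(prefix.subnet_of(allowed) for allowed in registered_for_peer)

print(accept_route("203.0.113.0/25"))  # True: inside a registered prefix
print(accept_route("192.0.2.0/24"))    # False: filtered out, not registered to this peer
```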
The number of reported routing incidents has declined as more operators join MANRS, from more than 5000 in 2017 to below 4000 in 2020, Siddiqui explains. More importantly, routing incidents are shorter because they are not spreading as widely to other networks and are being fixed sooner. MANRS membership isn’t the only reason for the decline; increased awareness of best practices for routing has also contributed, Siddiqui notes.
A new task force helps content delivery networks and other cloud services adopt the filters and controls needed to become MANRS-compliant.
MANRS is not intended to be an exclusive club, Siddiqui adds. There are organizations that have implemented some of the measures but are not official members, and there are other initiatives with the same goals but different approaches. “Routing security is a long game,” Siddiqui says.
Securing Time
Network Time Protocol (NTP) is another critical component of internet infrastructure that everyone relies on. The ubiquitous protocol is used to synchronize clocks on servers and devices to make sure they all have the same time. Without accurate time, there is no way to determine whether certificates are valid or whether two-factor authentication credentials have expired. If a computer is a few seconds too fast or a few milliseconds too slow, financial transactions cannot be reconciled properly and files may not be backed up or synchronized correctly across machines. However, NTP doesn’t authenticate the source of time information, so until recently there was no way to verify that it came from a trusted server.
A bad actor could send different time information from another IP address as part of a man-in-the-middle attack and trick the computer into accepting it as the correct response. The incorrect time can lead to issues with cryptographic signatures and result in incorrect timestamps on logs and transactions.
“I want to know I am getting my time information from someone trustworthy and not injecting bogus information into my clock,” says Daniel Franke, security architect at Akamai.
Network Time Security (NTS), which was published as a standard (RFC 8915) by the Internet Engineering Task Force (IETF) in 2020, uses Transport Layer Security (TLS) and Authenticated Encryption with Associated Data (AEAD) to provide cryptographic security for NTP. NTS uses the same public key infrastructure the internet already relies on to authenticate NTP servers, so that when a computer wants to talk to a time server, it can be sure it is reaching the server it intends to reach, explains Franke, who was one of the authors of the standard.
The security protocol has two phases: a key exchange, in which the NTP client and server perform a TLS handshake, and time synchronization, in which the results of that handshake are used to authenticate NTP packets via extension fields. In the first phase, the client (the computer) receives information about which server to query for time, secret keys for that server and a supply of private cookies; the TLS channel is closed once the handshake is complete. The client then sends the time server a query authenticated with one of the secret keys and accompanied by a cookie from the key exchange. The server uses the cookie to validate the query and to authenticate its response. The client verifies the incoming packet and knows that the time was sent by the correct server.
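The Python sketch below mimics that two-phase flow in heavily simplified form. The key exchange is mocked rather than performed over TLS, and a standard-library HMAC stands in for the AEAD protection RFC 8915 actually specifies, so this is an illustration of the idea rather than an NTS implementation.

```python
# Heavily simplified illustration of the NTS flow: a mocked key exchange followed by
# an authenticated time request. Real NTS uses TLS for phase one and AEAD for phase two.
import hashlib
import hmac
import os

def nts_key_exchange():
    """Phase 1 (mocked): in real NTS, a TLS handshake with the NTS-KE server yields
    AEAD keys and a batch of opaque cookies, after which the TLS channel is closed."""
    key = os.urandom(32)
    cookies = [os.urandom(16) for _ in range(8)]
    return key, cookies

def client_time_request(key, cookie):
    """Phase 2: the client sends an NTP packet plus a cookie and an authenticator tag."""
    ntp_packet = os.urandom(48)  # placeholder for a real NTP packet
    tag = hmac.new(key, ntp_packet + cookie, hashlib.sha256).digest()
    return ntp_packet + cookie, tag

def server_accepts(key, message, tag):
    """The server recovers the key via the cookie (not modelled here) and checks the tag;
    spoofed or tampered packets fail this check and are dropped."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key, cookies = nts_key_exchange()
message, tag = client_time_request(key, cookies[0])
print(server_accepts(key, message, tag))  # True for an authentic request
```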
“We’d like to protect the internet for tomorrow”
NTS can help mitigate some types of NTP amplification distributed denial-of-service attacks because the time server can drop spoofed packets that fail authentication. The security protocol can also protect user privacy by breaking ‘linkability’ – where different time requests can be linked to track the user’s location – provided the implementation is configured to use truly random values (and not sequential ones).
With NTS finalized as a standard, the next step is to help organizations implement the security layer in their networks and spend the next year learning how different groups use it, Franke says. The NTP daemons chrony and ntpsec both support NTS, but there is more left to do: operating systems need to incorporate NTS support and server administrators need to deploy NTS on their time servers.
“One hope is to see if NIST picks up the protocol,” Franke explains. “NIST operates the busiest time pool in the world, so there would be a lot to learn if NIST launches this.”
The Way Forward
Adding the security layers to these essential building blocks should not be treated as one-time fixes. As different groups adopt the controls, adjustments and enhancements will be necessary. The effort to encrypt the Domain Name System (DNS) is a good example.
DNS translates human-readable domain names into machine-readable IP addresses. This was originally done via cleartext queries over UDP port 53. No one thought it was a big deal that the name of the domain in a DNS lookup was visible, but it is now clear that multiple entities (governments, malicious actors, ISPs) can take advantage of this information.
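The cleartext nature of traditional DNS is easy to demonstrate. The short Python sketch below hand-builds a lookup for example.com and sends it to a public resolver (1.1.1.1 is used purely as an example) over UDP port 53; every byte of the query, including the domain name, crosses the network unencrypted.

```python
# Minimal cleartext DNS lookup over UDP port 53; the resolver address is an example.
# The queried name travels in plaintext, visible to anyone on the network path.
import socket
import struct

def build_query(name):
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # ID, flags (RD), 1 question
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)   # QTYPE=A, QCLASS=IN

query = build_query("example.com")
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(3)
    s.sendto(query, ("1.1.1.1", 53))   # sent unencrypted
    response, _ = s.recvfrom(512)
print(response.hex())                  # raw answer, also unencrypted
```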
There are two ways to encrypt DNS: DNS over HTTPS (DoH), which performs lookups over the secure HTTPS protocol, and DNS over TLS (DoT), which encrypts DNS packets using TLS and transmits them over TCP. DoH gained some traction in 2020, as Mozilla and Google both began offering it to Firefox users (using Cloudflare’s DNS service) and Chrome users (using Google’s DNS infrastructure). Growing adoption meant it was also necessary to address a significant concern – the centralization of DNS information.
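By contrast, a DoH lookup travels inside an ordinary HTTPS connection. The sketch below uses Cloudflare’s public JSON lookup endpoint as one example of a DoH-style query; other providers expose similar services, and production clients typically use the binary wire format defined in RFC 8484 rather than JSON.

```python
# DNS lookup over HTTPS using Cloudflare's public JSON endpoint as an example.
# An on-path observer sees only TLS traffic to the resolver, not the queried name.
import json
import urllib.request

url = "https://cloudflare-dns.com/dns-query?name=example.com&type=A"
req = urllib.request.Request(url, headers={"accept": "application/dns-json"})
with urllib.request.urlopen(req, timeout=5) as resp:
    answer = json.load(resp)

for record in answer.get("Answer", []):
    print(record["name"], record["data"])  # resolved name and IP address
```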
Large technology companies like Cloudflare and Google deploying DoH servers at scale meant they were sitting on a treasure trove of DNS data, which could be mined for insights into user behavior. A group of academics at the University of Washington and Cloudflare technologists recently published a paper proposing a new protocol called Oblivious DNS over HTTPS (ODoH) to address this issue.
ODoH introduces a proxy node to act as a broker during DNS lookups. The client creates an encrypted DNS request and sends it to the proxy along with the name of the desired DNS resolver. The proxy removes identifying information from the request, such as the IP address, and forwards the request to the DNS resolver. The DNS resolver decrypts the request and sends the encrypted response back to the proxy, which then forwards the response to the client. At no point does the proxy see the unencrypted request, nor does the resolver ever see the identity of the client.
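The toy model below captures that separation of duties. The ‘encryption’ is a simple placeholder (real ODoH uses HPKE) and the function names are invented for illustration, but it shows the key property: the proxy learns who asked without learning what was asked, and the resolver learns what was asked without learning who asked.

```python
# Toy model of ODoH's separation of duties; XOR is a placeholder for real encryption
# (HPKE in the actual protocol) and all names here are invented for illustration.
import os

RESOLVER_SECRET = os.urandom(16)  # stands in for the resolver's key pair

def seal_for_resolver(query):
    """Client side: encrypt the query so only the resolver can read it."""
    padded = query.encode().ljust(16, b"\x00")
    return bytes(a ^ b for a, b in zip(padded, RESOLVER_SECRET))

def proxy_forward(client_ip, sealed_query):
    """Proxy: knows the client's address but sees only opaque bytes, never the query."""
    print(f"proxy: forwarding {len(sealed_query)} opaque bytes on behalf of {client_ip}")
    return resolver_answer(sealed_query)  # the client's address is not passed along

def resolver_answer(sealed_query):
    """Resolver: can decrypt the query but never learns who sent it."""
    query = bytes(a ^ b for a, b in zip(sealed_query, RESOLVER_SECRET)).rstrip(b"\x00").decode()
    print(f"resolver: answering '{query}' without knowing the client")
    return b"<answer sealed for the client>"  # encrypted for the client in real ODoH

answer = proxy_forward("192.0.2.7", seal_for_resolver("example.com A"))
```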
“What ODoH is meant to do is separate the information about who is making the query and what the query is,” Nick Sullivan, Cloudflare’s head of research, explains in a post announcing ODoH on the company blog.
There are some components that sit so deep in the infrastructure that simply replacing them is not an option, nor is fixing them something any one entity can do alone. Improving the security of these critical internet technologies requires a multi-stakeholder approach involving individuals, corporations and governments, but with data breaches, outages and thefts on the rise, doing nothing is no longer an option. “We’d like to protect the internet for tomorrow,” Siddiqui concludes.