As the web has evolved, its security needs have changed too. Today, web applications are increasingly lightweight, composed of loosely coupled services and a mesh of dependencies, and the onus is on web developers to build websites and applications that deliver not only a better user experience but also better user security.
There are many techniques for doing this, and at this year’s JavaScript fwdays, Twilio frontend developer Stefan Judis demonstrated how we can use HTTP headers to add another layer of security and optimization to our applications and websites.
The browser has in recent years become a popular target for attacks. Back in 2017, Equifax, a massive credit rating agency, announced that criminals had exploited a vulnerability in its website to get access to the personal information of 143 million American consumers. Last year, more than 4,000 websites in the US and UK were found serving their visitors the CoinHive crypto miner, a JavaScript script designed to mine cryptocurrency at the expense of users’ CPU power, as the result of a “cryptojacking” attack.
Today, with so many open-source packages available on package managers like npm, hardly anyone codes everything from scratch when developing an application. Last November, we saw the case of malicious code in the npm event-stream package. The attacker used social engineering tactics to become the maintainer of the event-stream package and then exploited that position to add a malicious package as a direct dependency.
Looking at these cases, developers have become more aware of the different ways of securing their work, such as using a web application firewall, running web vulnerability scanners, hardening the web server, and more. A good place to start is ensuring security through HTTP security headers, and that’s what Judis explained in his talk.
https://www.youtube.com/watch?v=1jD7dBsg_Nw
HTTP, which stands for Hypertext Transfer Protocol, is the protocol that the client and server use to communicate. HTTP headers are the key-value pairs used to exchange additional information with each request and response. When a client requests a resource, it sends request headers along with the request, which include particulars such as the browser version, the client’s operating system, and so on. The server answers back with the resource along with response headers containing information like the type, date, and size of the file sent.
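To make that exchange concrete, here is a minimal sketch on the server side, assuming Node.js with TypeScript and the built-in http module (the port and response body are purely illustrative, not from Judis’ talk):

import * as http from "http";

const body = "<h1>Hello</h1>";

const server = http.createServer((req, res) => {
  // Request headers arrive as key-value pairs, e.g. the browser's User-Agent.
  console.log("User-Agent:", req.headers["user-agent"]);

  // Response headers describe what we send back: the type and size of the file.
  res.setHeader("Content-Type", "text/html; charset=utf-8");
  res.setHeader("Content-Length", Buffer.byteLength(body));
  res.end(body);
});

server.listen(8080);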
Below are some of the HTTP headers Judis demonstrated:
HTTPS, the secure version of HTTP, ensures that we are communicating over a secure channel with the help of the Transport Layer Security (TLS) protocol or its predecessor, Secure Sockets Layer (SSL), for encryption. It is not just about security, either: HTTPS is a requirement for many new browser features, especially those needed for progressive web apps.
Though browser vendors do mark non-HTTPS sites as not secure, we cannot really guarantee a safe connection all the time. “Unfortunately, we’re not browsing safe all the time. When someone wants to open a website they are not entering the protocol into the address bar (and why should they?). This action leads to an unencrypted HTTP request. Securely running sites then redirect to HTTPS. But what if someone intercepts the first unsecured request?,” wrote Judis in a blog post.
To address such cases you can use the HTTP Strict Transport Security (HSTS) response header, which lets you declare that your web server only accepts HTTPS requests. You can implement it this way:
Strict-Transport-Security: max-age=1000; includeSubDomains; preload
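If your site runs on Node.js, a hedged sketch of sending this header could look like the following (again assuming the built-in http module; the max-age of 1000 seconds is simply the value from the example above):

import * as http from "http";

http
  .createServer((req, res) => {
    // Ask browsers to talk to this host (and its subdomains) only over HTTPS.
    // Browsers only honour this header on responses that were served via HTTPS.
    res.setHeader(
      "Strict-Transport-Security",
      "max-age=1000; includeSubDomains; preload"
    );
    res.end("ok");
  })
  .listen(8080);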
Before you implement it, do read this helpful advice shared by a web developer on Hacker News:
“Be sure that you understand the concept of HSTS! Simply copy/pasting the example from this article will completely break subdomains that are not HTTPS enabled and preloading will break it permanently. I wish the authors made that more clear. Don't use includesubdomains and preload unless you know what you are doing.”
The security model of the web is based on the same-origin policy. This means that a web browser will only allow a script on one web page to access data on a second page if both pages have the same origin. Attacks like cross-site scripting (XSS), in which malicious scripts are injected into trusted websites, bypass this policy.
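As a quick illustration (a small sketch using the standard URL API; the hostnames are made up), two pages share an origin only when scheme, host, and port all match:

// The origin is the combination of scheme, host, and port.
const pageA = new URL("https://shop.example.com/cart");
const pageB = new URL("https://example.com/cart");

console.log(pageA.origin); // "https://shop.example.com"
console.log(pageB.origin); // "https://example.com"
console.log(pageA.origin === pageB.origin); // false – different origins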
You can use a Content Security Policy (CSP), which helps significantly minimize the risk and impact of XSS attacks. You can define a CSP using a meta element in your HTML or via HTTP headers:
Content-Security-Policy: upgrade-insecure-requests
This directive (upgrade-insecure-requests) upgrades all HTTP requests to HTTPS requests. CSP offers a wide variety of policy directives that let you control the sources from which a page is allowed to load resources. The ‘script-src’ directive, which controls the set of script-related privileges for a specific page, can prove very helpful against XSS attacks. Other examples include ‘img-src’, ‘media-src’, ‘object-src’, and more.
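To illustrate how such directives might be combined, here is a hedged sketch of a stricter policy sent from a Node.js server (the CDN hostname is a placeholder, and the exact directives you need depend entirely on your site):

import * as http from "http";

http
  .createServer((req, res) => {
    // Allow resources from our own origin by default, scripts additionally
    // from a hypothetical CDN, and block plugin content entirely.
    res.setHeader(
      "Content-Security-Policy",
      "default-src 'self'; script-src 'self' https://cdn.example.com; object-src 'none'"
    );
    res.setHeader("Content-Type", "text/html; charset=utf-8");
    res.end("<h1>CSP enabled</h1>");
  })
  .listen(8080);

While testing a policy, the Content-Security-Policy-Report-Only header lets you monitor violations without actually blocking anything.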
Before you implement CSP, take into account the following advice from a Hacker News user:
“CSP can be really hard to set up. For instance: if you include google analytics, you need to set a script-src and a img-src. The article does a good job of explaining you should use CSP monitoring (I recommend sentry), but it doesn't explain how deceptive it can be. You'll get tons of reports of CSP exceptions caused by browsers plugins that attempt to inject CSS or JS. You must learn to distinguish which errors you can fix, and which are out of your control.
Modern popular frontend frameworks will be broken by CSP as they rely heavily on injecting CSS (a concept known as JSS or 'styled components'). As these techniques are often adopted by less experienced devs, you'll see many 'solutions' on StackOverflow and Github that you should set unsafe-inline in your CSP. This is bad advise as it will basically disable CSP! I have attempted to raise awareness in the past but I always got the 'you're holding it wrong' reply (even on HN). The real solution is that your build system should separate the CSS from JS during build time. Not many popular build systems (such as create-react-app) support this.”
Despite these advantages, Judis highlighted in his talk that not many websites have put it to work; merely 6% are using it. He wrote in a post, “To see how many sites serve content with CSP I ran a query on HTTP Archive and found out that only 6% of the crawled sites use CSP. I think we can do better to make the web a safer place and to avoid our users mining cryptocurrency without knowing it.”
Judis believes that it is a web developer’s responsibility to ensure that their website or web app is not eating up a user’s data, to keep the “web affordable for everybody”. One way to do that is by using the Cache-Control header, which allows you to define response caching policies. This header controls how long a browser can cache an individual response.
Here is how you can define it:
Cache-Control: max-age=30, public
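A sketch of how this might look in a Node.js response handler, with different lifetimes for different kinds of responses (the paths and max-age values are illustrative only):

import * as http from "http";

http
  .createServer((req, res) => {
    if (req.url?.startsWith("/static/")) {
      // Fingerprinted assets can safely be cached for a long time.
      res.setHeader("Cache-Control", "max-age=31536000, public, immutable");
    } else {
      // HTML changes more often, so cache it only briefly.
      res.setHeader("Cache-Control", "max-age=30, public");
    }
    res.end("ok");
  })
  .listen(8080);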
These were some of the headers that Judis highlighted in his post. The article further explains the use of other headers like Accept-Encoding, Feature-Policy, and more. Go ahead and give it a read!