I Built a Serverless Blog in 30 Minutes

I’ve had the same godawful landing page on my personal site for 3 years now. Over those 3 years I’ve learned a lot, but never applied any of it to my personal work. My site was stuck on a premium shared hosting plan that cost me more money than I’d like to admit, the server was running an outdated cPanel install, and I had no use for the server outside of running my personal site.

My old personal site really needed a refresh.

So, why not build something more modern?

The current blog - what you’re reading this on - costs me a grand total of £0.00 to run, is far faster than anything I could have built previously, & runs completely serverless, meaning there is nothing to configure on my end.

The best part of it: I only wrote 62 lines of code to achieve this.

Why Serverless?

Serverless solutions are a bit weird, but the gist is that your code runs in an isolated sandbox alongside many other people’s code, spread across many servers. This has a few benefits:

  • No need to manage any infrastructure
  • Lower hosting costs, since you’re billed per usage
  • Easier scalability, as your code can just run on more servers

However, there are also some downsides:

  • You may not be able to use “standard solutions” - you could try and run WordPress, but that wouldn’t be the easiest thing in the world
  • If your site starts to get a lot of traffic, your wallet starts to cry

When I was looking at options for a blog to write in, serverless solutions started to look a lot more interesting to me. Instead of worrying about patching WordPress every month or dealing with a crappy page builder, I just wanted something that was simple, fast, & worked out of the box.

It became clear that my site would have three “layers” to it:

  • Serverless layer which routes traffic to static files
  • Static layer which stores all the files
  • Deployment layer which generates all the site content & deploys it to the static layer

In theory, the only layer that changes frequently is the static layer, and even that only changes via processes in the deployment layer. Effectively, there should be very little work I have to do!

Enter Hugo

Hugo is a static site generator built in Go. For all intents and purposes, the “static site generator” part is a lot more interesting than the “built in Go” part. Hugo had everything I was looking for:

  • A nice, easily editable theme
  • Creating site content in Markdown, which can be version controlled
  • Site content boiled down into static pages

I originally looked at running a localhost WordPress instance with a static site generator plugin to create the pages, but that becomes difficult to plug into a deploy process.

Creating the skeleton of my site in Hugo was as simple as running hugo new site mellen-blog and then hugo new posts/my-cool-post.md. I could instantly preview my draft posts on localhost by running hugo server -D.

This was incredible for me, as it was as simple as editing Markdown documents. I also went and picked out a theme which I liked and changed a few config options, but none of that was necessary to get my site running.

I now had a ridiculously simple static site, but how do I serve that to people?

Enter CloudFlare (…and Backblaze B2)

CloudFlare Workers are awesome. They’re a serverless solution that runs on CloudFlare’s edge, which means I get to leverage a lot of CloudFlare’s existing infrastructure. Their free tier is also a lot more forgiving than competitors such as Azure or AWS, as they strictly charge per request. More importantly, with zero configuration I can set my worker to run on my own domain.

Workers also deploy near-instantly, which is kinda cool too.

Finding a storage solution

We have a serverless solution; now we just need a storage solution. An S3-compatible service was the obvious requirement, but there are so many providers that I struggled to settle on one.

Eventually, one leapt out at me: Backblaze B2. They offer free bandwidth to CloudFlare, as well as 10GB of storage for free. Given that my site is, at its biggest, a few megabytes in size, I think I should be set.

I created a bucket on Backblaze & got to finally writing some code.

Routing with CloudFlare

Once I figured out the solutions I wanted to go with, the only part left was sorting out the routing. For the sake of simplicity, the routing consists of taking a URL such as https://mellen.io/posts/2019/my-cool-post and proxying it to a file location, such as https://my-blog.storage-service/posts/2019/my-cool-post/index.html.

With CloudFlare, however, there are some additional things to take into account, such as caching, hotlink protection, etc.; I’ll cover these in a follow-up post. Below is the barebones code that I started with to route everything:

const BASE_URL = 'S3_BUCKET_URL';

addEventListener('fetch', event => {
  event.respondWith(doFetch(event.request));
});

async function doFetch(request) {
  const url = new URL(request.url);
  const response = await fetch(BASE_URL + url.pathname);
  try {
    // B2 serves files with their own content types, but sends JSON bodies
    // for errors (a 404, a server error, etc.), so an 'application/*'
    // Content-Type suggests something went wrong
    const contentType = response.headers.get('Content-Type');
    if (contentType && contentType.startsWith('application')) {
      // Clone before reading; a response body can only be consumed once,
      // and we may still want to return the original below
      const jsonData = JSON.parse(await response.clone().text());
      // We might want to send a special response based on the status code
      if (200 !== jsonData.status) {
        switch (jsonData.status) {
          case 404:
            return new Response('Not found', { status: 404 });
        }
      }
    }
  }
  catch { } // Not JSON after all; fall through and return the body as-is
  return response;
}
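One detail the code above glosses over: Hugo writes each page out as <path>/index.html, so a pretty URL like /posts/2019/my-cool-post needs index.html appended before it hits the bucket, while asset paths like /css/style.css should pass through untouched. A hypothetical helper for that mapping (the name toBucketPath and the “no file extension means it’s a page” heuristic are my own assumptions):

```javascript
// Map a request pathname to the object key in the storage bucket.
// Paths whose last segment has no file extension are treated as Hugo
// pages, which live at <path>/index.html; everything else passes through.
function toBucketPath(pathname) {
  const lastSegment = pathname.split('/').pop();
  if (lastSegment.includes('.')) {
    return pathname; // e.g. /css/style.css - a real file, fetch directly
  }
  // Strip any trailing slash, then point at the page's index.html
  return pathname.replace(/\/?$/, '/index.html');
}
```

Inside doFetch, this would replace the bare pathname: fetch(BASE_URL + toBucketPath(url.pathname)).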

Now we have the static layer sorted (we can run hugo -s . to compile the static site) & the serverless layer sorted once the worker is deployed to CloudFlare.

Conclusion

Serverless solutions such as CloudFlare workers are very fun to work with & provide a new way of thinking about things. What I’ve done is only one of many potential use cases - they can be used to lower latency, provide robust caching & authentication options, or host full applications.

I’ll be writing a follow-up post detailing deployment, as well as the pitfalls I fell into after this initial solution.