Hosting a Backwards Compatible Website on IPFS

by: TheHans255

February 19, 2023

You may or may not have noticed this coming in, but this website is hosted on IPFS! IPFS, the "InterPlanetary File System", is a file-sharing network in which files - websites, images, programs, sounds, videos, and more - are addressed by their content rather than their name. As long as any node on the network is hosting a file, you can request that file from any other node in the world, and it will eventually reach you, where you can view it, store it, and redistribute it to your heart's content.

The principle is similar to torrent software such as BitTorrent, but the end result is more resilient, since all you need to access a file on IPFS is its "hash" (essentially just a unique number based on the file's contents). As such, it's a popular choice for cryptocurrency/NFT/Web3 content, where the files themselves are too large to store directly on the currency's associated blockchain. While my feelings on these sorts of projects are mixed at best, I'm a huge fan of IPFS and believe it will play a major part in the future of the Web, since it will help to disconnect Web content from the server it's hosted on.

Hosting a Website on IPFS

First things first, how do you host a static website on IPFS?

  1. Download and install an IPFS Client on the system you use to publish your site. This can either be IPFS Desktop or the IPFS CLI.
  2. Add the folder containing your website to IPFS. In the IPFS Desktop UI, this is found in the "FILES" menu, while in the CLI, this is done with the $ ipfs add -r <folder> command.
  3. Take note of the CID (Content ID) that IPFS gives to the top level folder. On IPFS Desktop, this is found underneath the folder, and can be accessed from the "..." menu with the "Copy CID" command. In the IPFS CLI, it's on the last line of the output from $ ipfs add -r. This is the hash that people will use to access your site.
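For the CLI route, the steps above can be sketched as a short script. This is only a sketch - it assumes your site lives in a folder named ./site and that the ipfs CLI is installed:

```shell
#!/bin/sh
# Publish a site folder to IPFS and capture its root CID.
# "./site" is a placeholder for your website's folder.
set -u

SITE_DIR="./site"
ADD_CMD="ipfs add -r -Q $SITE_DIR"   # -Q (quieter) prints only the root CID

if command -v ipfs >/dev/null 2>&1; then
    CID=$($ADD_CMD)
    echo "Share this CID with your visitors: $CID"
else
    echo "ipfs CLI not found; would run: $ADD_CMD" >&2
fi
```

The -Q flag saves you from fishing the root CID out of the last line of output by hand, which is handy if you want to feed the CID into later publishing steps.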

If all you're intending to do is post your website for general consumption on IPFS, you're done! Go to https://ipfs.io/ipfs/your_hash (where your_hash is the CID you took down earlier), and you can view your site! If you're using the IPFS Companion extension in your browser, it will automatically replace the public gateway with your own locally hosted node!

If you have a website domain (a .com, .org, etc.), you can attach your new IPFS website by creating a "DNSLink" entry in your website's DNS:

  1. Navigate to your website's DNS editing page. This will be connected with whoever your domain registrar is - if you bought your domain from Amazon AWS, for instance, this will be accessible from AWS Route 53.
  2. Add a TXT record for _dnslink.your.domain, where your.domain is your domain name. Set a reasonably short TTL (300 seconds usually suffices), and set the value to "dnslink=/ipfs/your_cid", where your_cid is the CID for your website's top-level folder.
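In standard zone-file notation (with example.com standing in for your domain, and your_cid for the CID), the record looks something like this:

```
_dnslink.example.com.  300  IN  TXT  "dnslink=/ipfs/your_cid"
```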

You're done! At this point, anyone using IPFS Companion and navigating to your.domain will find themselves on your website! If you ever change your website's content, simply add the folder to IPFS again and change the DNSLink record to match the new CID, and your website will update accordingly.

This whole process is quite simple, and only relies on having a work machine with the IPFS node running. There are plenty of articles about it online that go into this process in more detail, including these two in the IPFS docs. For cases when you don't have a machine you can leave running connected to the IPFS network, there are also pinning services that can keep the website available on IPFS for a fee.

There's just one problem: what happens if someone isn't using IPFS Companion and navigates to your.domain?

Unfortunately, if you just follow the steps above, users without IPFS will get nothing. Adding a website to IPFS does nothing to create a server that hosts it in the way that web browsers usually expect, which means that if we want to host an IPFS website for non-IPFS users, we need to fix that.

How do we do it?

Option 1: Use a Traditional Web Server

Probably the simplest option, if you already have a web server hosting your website with software such as Apache, nginx, or Express, is to simply keep that server running. With this method, users accessing your website without IPFS will continue to access your server, just as they've always done, while users accessing your website with IPFS will use the IPFS network. Whenever you publish a change to IPFS, simply upload your website directory to your server at the same time.
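As a sketch, an nginx server block for the traditional side of this setup might look like the following (the domain and paths are placeholders):

```nginx
# Serve the same folder that gets published to IPFS.
server {
    listen 80;
    server_name example.com;
    root /var/www/site;   # upload your site folder here on each publish
    index index.html;
}
```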

As far as cost goes, this option will continue to incur the costs associated with hosting a web server, along with the necessary maintenance (security updates, health checks, etc.) to keep it running. The cost associated with IPFS is very low, fortunately, so as long as your site is compatible with it, you could see IPFS as a nice bonus for visitors that use it.

Note that when using this option, you will need to take special care that your website's content works on both versions of the site - in particular, your website should not rely on calling any APIs on your server that are not CORS-enabled, since IPFS visitors will be loading the site from a different origin (and even then, you likely shouldn't rely on such APIs if they depend heavily on your visitors' data).

Option 2: Use a Static Hosting Service

Another option, and probably the best one if you're just starting out with web hosting, is to use a static web hosting service such as Amazon S3, Cloudflare Pages, or Github Pages.

The advantage of using a static service like this is that you get high speed and availability for a fraction of the price of deploying your custom server code in the same way, since the static service only needs to worry about delivering website content and not about executing code or accessing a database. To utilize a service like this in your publishing process, simply upload the folder to it at the same time you publish a change to IPFS.

This is the option that this website currently uses, with Amazon S3. The setup for Amazon S3 hosting is a bit involved - you have to create and configure a properly named bucket, then add a CloudFront distribution to get secure HTTPS access - but one thing AWS does well is provide an extensive CLI and SDK for automating the changes that would normally be done in the console. I have published an example script on GitHub that automates the process of uploading to IPFS, uploading to S3, updating the DNS record, and clearing the CloudFront cache, all in one execution.
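As a rough sketch of what such a publish script looks like (this is not the published script - the bucket name, distribution ID, and CID below are placeholders, and each step is printed rather than executed):

```shell
#!/bin/sh
# Dry-run sketch of a one-shot publish: each step is printed, not executed.
# All resource names below are placeholders.
set -u

SITE_DIR="./site"
BUCKET="s3://example.com"
DIST_ID="EDFDVBD6EXAMPLE"

run() { echo "+ $*"; }    # swap the echo for `"$@"` to actually execute

CID="your_cid"            # a real run would use: CID=$(ipfs add -r -Q "$SITE_DIR")
run ipfs add -r -Q "$SITE_DIR"
run aws s3 sync "$SITE_DIR" "$BUCKET" --delete
run aws cloudfront create-invalidation --distribution-id "$DIST_ID" --paths "/*"
# The DNSLink update sends "dnslink=/ipfs/$CID" to Route 53 via
# `aws route53 change-resource-record-sets` with a JSON change batch.
```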

Option 3: Pull automatically from IPFS

In combination with Option 1, you can streamline your publishing process a little further by having your traditional hosting update automatically whenever you change the DNSLink record. To do this, install the IPFS CLI on your web server. Then, either whenever you update the DNS record, or every few minutes via a scheduled task or cron job, run the command $ ipfs get -o serve_folder /ipns/your.domain, where your.domain is the domain you're serving and serve_folder is the content folder you're serving from. This will cause your server to update its content to match what's in IPFS.
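A sketch of such a sync job (the domain and folder below are placeholders):

```shell
#!/bin/sh
# Pull the latest published site from IPFS into the server's content folder.
# Run from cron, e.g.:  */5 * * * * /usr/local/bin/sync-site.sh
set -u

DOMAIN="example.com"
SERVE_DIR="/var/www/site"
GET_CMD="ipfs get -o $SERVE_DIR /ipns/$DOMAIN"

if command -v ipfs >/dev/null 2>&1; then
    $GET_CMD
else
    echo "ipfs CLI not found; would run: $GET_CMD" >&2
fi
```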

Depending on how efficient your upload process in Option 1 would be, this option can be slightly more cost-effective, because your server keeps its own IPFS repository and therefore downloads only the files that have changed. This option also does not require you to run a daemon (whose costs are expounded on in Option 4).

Some static hosting methods (as shown in Option 2) have their own facilities for automatically updating hosted content in the same way - for instance, AWS allows you to attach events to Route 53 updates to run Lambda functions, which can then update the contents of the S3 bucket.

Option 4: Mount IPFS in your web server

Another option that's closely related to Option 1 is to use the experimental ipfs mount command on your web server. This command allows you to mount /ipfs and /ipns as FUSE filesystems, which provide dynamic access to everything IPFS has to offer, including your website. From there, you simply point your server software directly at /ipns/your.domain, and it will automatically update whenever your website updates.
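Sketched out (the domain is a placeholder, and the mount command is shown rather than executed, since it requires FUSE and a running daemon):

```shell
#!/bin/sh
# Sketch: mount IPFS/IPNS via FUSE, then serve the site straight from /ipns.
set -u

DOMAIN="example.com"
DOCROOT="/ipns/$DOMAIN"

echo "would run: ipfs mount    # with the daemon running; mounts /ipfs and /ipns"
echo "point your web server's document root at: $DOCROOT"
```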

The major difference between this option and the previous options is that you will need to not only install an IPFS node, but also run an IPFS daemon, which makes your node a full participant in the IPFS network. In particular, this means that it will only download your website content when someone requests it, but it will also be making peer connections and forwarding IPFS content unrelated to your website if you just so happen to be in the most efficient transit path for it.

As such, the cost profile is much different, and if you are running in the cloud, it is likely not worth it while your website's content is small. This website was originally served using a similar method (Option 5, the gateway, explained below) on an AWS node in the cloud, and it cost approximately $22 US per month, mostly due to outgoing network charges. Understandably, this was a huge money sink, and while we may go back to this option in the future, it will likely only be when the website grows large enough that the cost of uploading it to S3 rivals this benchmark.

Note also that, with this method, your server is no longer intrinsically storing your website content, and if there are no machines online with copies of it (such as if your only development system is regularly put to sleep), your website will be unavailable. You will need to be sure to run $ ipfs pin add /ipns/your.domain either on this server machine, or on another IPFS-enabled machine that you can keep awake, every time the website changes, much as in Option 3.

A positive aspect of this option, however, is that it easily contributes back to IPFS, rather than simply exploiting it as a free service. You can also pin the website content to this server along with any other content that you are interested in making available, making it easy to reach even if you can't keep your development system awake and connected all the time.

Option 5: Host an IPFS Gateway

The final option, and probably the one that contributes the most to IPFS, is to run a full gateway in the cloud yourself.

All IPFS nodes, when running in daemon mode, can expose a gateway that allows traditional HTTP access to IPFS content. This typically listens on port 8080, although the address can be viewed and adjusted through $ ipfs config Addresses.Gateway. For this hosting option, you will set up a server machine, either on your premises or in the cloud, as ipfs.your.domain (or whatever domain you wish to use), and then run this daemon to make the gateway available.
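For example, to inspect the gateway address and bind it to all interfaces so it is reachable from outside the machine (a sketch; by default the gateway binds to localhost only):

```shell
#!/bin/sh
# Inspect and widen the gateway listen address.
set -u

GATEWAY_ADDR="/ip4/0.0.0.0/tcp/8080"

if command -v ipfs >/dev/null 2>&1; then
    ipfs config Addresses.Gateway                  # show the current setting
    ipfs config Addresses.Gateway "$GATEWAY_ADDR"  # listen on all interfaces
else
    echo "ipfs CLI not found; would set Addresses.Gateway to $GATEWAY_ADDR" >&2
fi
```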

From here, you can access IPFS content as you would normally, using https://ipfs.your.domain/ipfs/your_cid. With a simple second endpoint, you can also use this gateway to host your website - simply redirect all requests for your.domain/path to ipfs.your.domain/ipns/your.domain/path, where path is the path the visitor is trying to access. When this website used this hosting method, we used AWS API Gateway to build this redirection endpoint.
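If you front the gateway with nginx rather than AWS API Gateway, the redirection endpoint might look like this sketch (the domain is a placeholder):

```nginx
# Rewrite site requests onto the local gateway's /ipns namespace.
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:8080/ipns/example.com$request_uri;
    }
}
```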

As with Option 4, this method runs a full daemon in the cloud, which lazily downloads content but also establishes peer connections with extraneous network traffic. As such, the cost profile is similar, as is the need to pin your website content and the fact that your server is contributing to IPFS.

(As a side note to all this, while ipfs.io is a gateway in the same sense, it forbids users from using it for web hosting and will throttle them if they incur too much load. Setting up your own gateway is a good way to add hosting power to IPFS without taking away from this shared resource.)

Extra Option: Combining Traditional Hosting and IPFS

The js-ipfs package can be used to add IPFS capabilities to the browser, including running a node, downloading and adding files, and other use cases - in fact, it provides a CLI module that can be used in much the same way as the reference implementation in Go. While I have not personally verified this as of this writing, the package claims it can be combined with JavaScript service workers in order to fetch IPFS files in the browser.

This could be used to defer most of your content either exclusively or mostly to IPFS - your server only needs to serve the base web pages and service worker code, and any assets in the page after that can be fetched directly from IPFS, making IPFS take much the same role as a CDN. Your users will still fetch content directly from your server if they don't have JavaScript enabled or can't run service workers (and you will likely have the content on your server anyway if it's running an IPFS daemon), but as long as those users don't represent a significant portion of your audience, this deferment to IPFS should still improve much of your website's performance.


Which option you choose for hosting a backwards-compatible website on IPFS will depend on your service needs. For smaller websites, Options 1-3, where you host on a traditional server or a static site, are typically best. For larger websites, running a daemon, as in Options 4 and 5, becomes more attractive, as the server only lazily fetches content. Whichever option you pick, if you decide to add your website to IPFS, we hope you have fun doing it!

Copyright © 2022-2023, TheHans255. All rights reserved.