Migrating from IIS to Nginx

One webserver to another.

Background

I’ve been planning to migrate all the sites I host to a Linux server for a while. This blog and other sites I host are either static content, or some reverse proxies for something else internal - so they’re easy to migrate anywhere. But makemeapassword.ligos.net is based on .NET, so I first had to port it to .NET Core.

Why? Partly to dip my toes in the world of new and shiny, partly to move it off a Windows box which is used for too many other things, partly to gain more Linux experience, but mostly to save on licensing.

But the final thing which really pushed me over the edge was that the power supply in my current Windows box, which hosts everything, was dying. As in, the fan had seized and wasn’t spinning.

There’s nothing like an impending hardware failure to err… encourage me to migrate!

[Image: It's all your fault!]

Steps

I’d already done most of the work porting makemeapassword.ligos.net to .NET Core. (I’ll write a blog post about that later.)

These are the steps to migrate everything onto an old laptop running Debian 9 Stretch. My webserver of choice is Nginx.

[Image: My (old) new server! And my actual old server.]

Hosting On A Laptop? Really??

Why host on a laptop? Shouldn’t I be hosting on some beefy server?

Well, no.

I’m hosting from home, so my main constraint is my terrible ADSL connection (which will hopefully become a slightly less terrible HFC connection when the NBN is available in my area). The server needs to a) generate a few web pages, b) serve a bit of static content, and c) generate random numbers. None of those are CPU intensive in the slightest. The number of passwords actually generated by the site is in the order of 300-400 per day. And the .NET Core process, while using the most RAM of anything on the box, sits at about 180MB, with 1.8GB still available.

[Image: Actual resource usage. Or lack thereof.]

So a beefy server is overkill; the laptop is more than capable.

And it comes with a built-in UPS, and was free!

Create New DNS Records

First thing I did was create some new DNS records at DNSimple that I could point to my new “server”. Using correct DNS is essential on the modern web, as a single webserver may host multiple sites on the same IP address, and the hostname part of the URL is how it works out which site you want.

I have only 1 public IPv4 address, and it’s still busy serving the existing sites, so I can’t reuse it. I could use private addresses for internal testing (eg: 192.168.1.1), but I opted to use public IPv6 addresses instead.

home2.ligos.net      -> 2001:44b8:3168:9b02:f24d:a2ff:fe7c:1614
blog2.ligos.net      -> 2001:44b8:3168:9b02:f24d:a2ff:fe7c:1614
...
syncthing2.ligos.net -> 2001:44b8:3168:9b02:f24d:a2ff:fe7c:1614

Install Nginx

Debian has split Nginx into 3 flavours: light, full and extras. Light is enough for static content, but as this will be a more serious webserver, I chose full. A summary of the differences is at the Debian Nginx wiki page.

$ sudo apt install nginx-full

My main references for configuring Nginx at this stage were the Nginx beginners guide and the Nginx quick start. They do a fine job of getting you up and running.

There’s a more in-depth admin guide available for Nginx as well, which helped after the basics.

Default Site

Although web servers let you host multiple sites from the same server (via the server config directive), it’s still possible for a client not to request any particular site. Out of the box, Nginx serves a default Debian-flavoured page using the following config:

server {
    # Listen on port 80 (IPv4 and IPv6) as the default server.
    listen 80 default_server;
    listen [::]:80 default_server;

    # Serve files from this folder.
    root /var/www/html;

    # When requests arrive, try serving files, or give a 404 error.
    location / {
        try_files $uri $uri/ =404;
    }
}

But I want the ligos.net site to be the default, which is what Shodan and other port scanners will see when they come knocking. So I created a new site with a server section at /etc/nginx/sites-available/www.ligos.net with a few extra config entries:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/www.ligos.net;
    location / {
        try_files $uri $uri/ =404;
    }

    # Pick up index.html as the default page in any folder.
    index index.html;

    # Respond to these server names in a URL
    # (a bit redundant with default_server, but best practice anyway).
    server_name ligos.net www.ligos.net www2.ligos.net;

    # Separate log files for the site.
    access_log /var/log/nginx/ligos.net.access.log;
    error_log /var/log/nginx/ligos.net.error.log;
}

And then symlinked it to /etc/nginx/sites-enabled/www.ligos.net.
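That’s a one-liner with ln:

$ sudo ln -s /etc/nginx/sites-available/www.ligos.net /etc/nginx/sites-enabled/www.ligos.net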

Restart Nginx with sudo service nginx restart, browse to www2.ligos.net and it worked!

Static Content Sites

With DNS in place, Nginx installed and a default site ready, I can start adding my other static content sites. So I created some folders and set permissions so that I was the owner and could edit, and the webserver was allowed to read.

/var/www$ sudo mkdir blog.ligos.net
/var/www$ sudo mkdir home.ligos.net
/var/www$ ls -al
drwxr-xr-x 2 murray www-data 4096 Feb 23 15:04 blog.ligos.net
drwxr-xr-x 2 murray www-data 4096 Dec 26 22:45 cadbane.ligos.net
drwxr-xr-x 2 murray www-data 4096 Feb 23 15:04 www.ligos.net
drwxr-xr-x 2 murray www-data 4096 Feb 23 15:04 home.ligos.net
drwxrwxr-x 4 www-data www-data 4096 Dec 31 20:17 html
/var/www$ sudo chown murray:www-data blog.ligos.net/
/var/www$ sudo chown murray:www-data home.ligos.net/

I manually uploaded some content for home.ligos.net because it’s very static (as in, it consists of 5 files and never changes).

murray@cadbane:/var/www/home.ligos.net$ ls
403.html image001.png index.html robots.txt
404.html image002.png ligos.jpg

And I set up a basic configuration file for Nginx to serve content to my home2.ligos.net site. This is almost identical to the ligos.net site, but without the default_server directive and with a couple of security-related headers.

server {
    listen 80;
    listen [::]:80;
    root /var/www/home.ligos.net;
    server_name home.ligos.net home2.ligos.net;
    location / {
        try_files $uri $uri/ =404;
    }
    index index.html;
    access_log /var/log/nginx/home.ligos.net.access.log;
    error_log /var/log/nginx/home.ligos.net.error.log;

    # Security options, more for when we get to HTTPS.
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header Strict-Transport-Security max-age=15768000;
}

I learned quickly that you don’t need to restart Nginx to update its config file; you just need to sudo service nginx reload, which means you have no downtime when making config changes.
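
It’s also worth checking the config parses before you reload; nginx -t does that:

$ sudo nginx -t
$ sudo service nginx reload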

Static Content Sites - Part 2

This blog is static content generated by Hexo, but every time I add a new post, a whole bunch of files are added and changed. Previously, I deployed updates using robocopy, but I’ll use SFTP to accomplish the same aim on Linux. (I could also use rsync or unison; there’s a sketch after the WinSCP script below.)

On Windows I created a cmd file to invoke WinSCP to synchronize my blog to my server.

call hexo generate
"C:\Program Files (x86)\WinSCP\WinSCP.com" /script="deploy.cadbane.txt" /log="deploy.cadbane.log"

And a basic WinSCP script to deploy it:

open sftp://murray@cadbane.ligos.local:2223/ -privatekey="N:\ssh keys\murray.ppk" -hostkey="ssh-ed25519 256 9zbLZ+4HLP06m+M/FyShS6IvM+EVtvyk8poACFryvN0"

synchronize remote -transfer=binary -delete .\hexo_root\public /var/www/blog.ligos.net

exit
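
If I were deploying from a Linux box instead, an rsync equivalent (a rough sketch, assuming the same port, key and paths as the WinSCP script above) would be:

$ rsync -avz --delete -e "ssh -p 2223" ./hexo_root/public/ murray@cadbane.ligos.local:/var/www/blog.ligos.net/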

The Nginx config is identical, except the server_name directive is blog.ligos.net.

Reverse Proxy Sites

I run several internal services which I occasionally like to access remotely. To do this, I create a reverse proxy from Nginx to the internal server. Previously, I’ve written how to do this from IIS, but it’s even easier with Nginx.

The following example is for my Syncthing server, which is a bit of a cross between BitTorrent and DropBox, letting you synchronise files between your own servers and devices.

server {
    listen 80;
    listen [::]:80;

    # Note the root is for the default site.
    # As the whole site is proxied to Syncthing, there are no files to serve.
    root /var/www/www.ligos.net;

    server_name syncthing.ligos.net syncthing2.ligos.net;

    location / {
        # proxy_pass does all the magic, letting nginx pass requests
        # to another server and send the responses back.
        proxy_pass http://loki.ligos.local:8384;

        # These are nice for the actual service to know the real host, IP address and protocol.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    access_log /var/log/nginx/syncthing.ligos.net.access.log;
    error_log /var/log/nginx/syncthing.ligos.net.error.log;
}

The key is proxy_pass which tells Nginx to receive requests and pass them through to the listed server. It then buffers the responses back to the original requester.

Gotcha: Nginx will fail hard (as in, refuse to start) if it can’t resolve the server listed in proxy_pass. You should a) use an IP address if possible, b) make darned sure the name will resolve, c) really make sure by putting it in /etc/hosts, and d) change the systemd settings to restart Nginx automatically when it crashes.
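
For d), a minimal sketch of a systemd drop-in (my assumption of a sensible override, not something Nginx ships with):

# /etc/systemd/system/nginx.service.d/override.conf
[Service]
# Restart Nginx if it exits with an error (eg: a proxy_pass DNS failure at startup).
Restart=on-failure
RestartSec=5s

Run sudo systemctl daemon-reload afterwards so systemd picks it up.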

.NET Core - makemeapassword.ligos.net

OK, static content and reverse proxies are the easy part. Deploying my ported .NET Core site for makemeapassword.ligos.net is a bit harder. Fortunately, Microsoft’s documentation is pretty good, giving step-by-step instructions (well, step-by-step enough for me).

An ASP.NET Core site is deployed as its own standalone web server (Kestrel), and you use Nginx as a reverse proxy in front of it. Fortunately, I just learned how to do reverse proxies! Here’s the relevant part of the Nginx config:

location / {
    proxy_pass http://localhost:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

There were a few additional headers and directives recommended by Microsoft. But the main change is proxy_pass points to localhost:5000.

OK, now for the hard part: actually getting an ASP.NET Core site running. First, install .NET Core:

$ sudo apt install aspnetcore-runtime-2.2
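
(That package comes from Microsoft’s repository rather than Debian’s, so their repo needs registering first. From memory, per Microsoft’s Debian 9 instructions, roughly:)

$ wget -q https://packages.microsoft.com/config/debian/9/packages-microsoft-prod.deb
$ sudo dpkg -i packages-microsoft-prod.deb
$ sudo apt update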

Next, build your site:

PS> dotnet publish Web.NetCore -c Release -f netcoreapp2.1

I also delete any config files at this point, so there’s no chance that I accidentally overwrite production. Yes, I know there are other ways of managing configs for different environments, but the fewer files and environment variables the better.

Then, copy the files (I used the SFTP GUI which comes with the Bitvise SSH Client, but if I was deploying more regularly I’d make a script):

murray@cadbane:/var/www/makemeapassword.ligos.net$ ls

Test by running from the command line (you might need to open firewall ports or use SSH port forwarding to open the site in a browser):

murray@cadbane:/var/www/makemeapassword.ligos.net$ dotnet MurrayGrant.MakeMeAPassword.Web.NetCore.dll
Hosting environment: Production
Content root path: /var/www/makemeapassword.ligos.net
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

Finally, create a systemd config for it, which will manage the Kestrel server. (I’m just going to link to Microsoft’s documentation because I don’t claim to understand systemd services; I just followed the instructions).
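
For the curious, the gist of it is below. A minimal sketch along the lines of Microsoft’s template; the unit name, User and environment are my assumptions, not gospel.

# /etc/systemd/system/makemeapassword.service
[Unit]
Description=Make Me A Password (ASP.NET Core)

[Service]
WorkingDirectory=/var/www/makemeapassword.ligos.net
ExecStart=/usr/bin/dotnet /var/www/makemeapassword.ligos.net/MurrayGrant.MakeMeAPassword.Web.NetCore.dll
Restart=always
RestartSec=10
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production

[Install]
WantedBy=multi-user.target

Then sudo systemctl enable makemeapassword and sudo systemctl start makemeapassword.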

That’s it!

Let's Encrypt

All my sites have been using Let's Encrypt certificates on Windows for years, so I need to do the same on Linux. Fortunately, there is certbot to do that, with instructions for Debian 9 & Nginx.

Their instructions are perfectly fine, so I won’t reproduce them here. It boils down to: 1) install certbot, 2) run sudo certbot, 3) choose the site to add SSL to.
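
In concrete terms, it was something like this (package names as per certbot’s Debian 9 + Nginx instructions at the time):

$ sudo apt install certbot python-certbot-nginx
$ sudo certbot --nginx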

It does mess your pretty config files up a bit, but you can hand edit them afterwards.

There was an option to do DNS-based authentication, which even supported DNSimple (where my domain is hosted). However, in spite of the possibility of issuing a wildcard certificate which I could deploy for all my sites in one hit, I decided against it. The problem is you need to give certbot an access token which allows global changes to my DNS. And, if that token ever leaked, then I could kiss my domain goodbye.

Here’s an example config after certbot has done its thing:


server {
    if ($host = home2.ligos.net) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;

    server_name home2.ligos.net;
    return 404; # managed by Certbot
}

server {
    server_name home2.ligos.net;
    root /var/www/home.ligos.net;
    index index.html;

    ...

    listen [::]:443 ssl http2; # managed by Certbot
    listen 443 ssl http2; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/home.ligos.net/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/home.ligos.net/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

HTTP2

HTTP2 is new and shiny. I’m sure it does something good for users, but it’s not on by default. Fixing that is really easy!

Just add http2 in your ssl listen directive:

server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
}

Geoblocking

As I basically had all my sites working at this point, I started thinking about security. Particularly for sites like syncthing.ligos.net, which really shouldn’t be very public.

There are documentation and guides for geoblocking in Nginx. That is, restricting access to sites depending on what country your visitor is from. And I thought that only allowing access to my internal sites from Australia would help stop bad guys.

Except it doesn’t work with IPv6. The IP address lists which drive the geolocation of IP addresses don’t support IPv6. (And, when my test sites are only accessible via IPv6, that means I just blocked myself from my own sites!)

So scratch that idea.

Basic Auth & IP Blocking

Instead, I turned to a more tried and true method of keeping bad guys out: IP whitelists and HTTP basic authentication. Again, Nginx has some documentation around basic auth and IP address filtering.

So, requests from my internal addresses (IPv4 or IPv6) don’t require authentication, but if I want to remotely muck with my syncthing.ligos.net server, I need a password (which is kept in my password manager).

Here’s the config:


server {
    root /var/www/cadbane.ligos.net;
    server_name syncthing.ligos.net;

    ...

    location / {
        satisfy any; # Either IP whitelist OR basic auth.

        # Whitelist of internal & public addresses.
        allow 192.168.1.0/24;
        allow fe80::/64;
        allow 2001:44b8:3168:9b00::0/56;
        allow 150.101.201.180;
        deny all;

        # Basic auth.
        auth_basic "Syncthing Admin";
        auth_basic_user_file /etc/nginx/sites-available/syncthing.ligos.net.htpasswd;

        proxy_pass http://loki.ligos.local:8384;
        ...
    }
}

Gotcha: BCrypt isn’t supported. Use MD5 or SHA1.

When you create a password file, you use the Apache htpasswd command, which has an option to use BCrypt when saving the password hash. Of course, I’m going to choose the most secure password hash; why wouldn’t you pick the best on offer?
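
For reference, htpasswd comes in Debian’s apache2-utils package, and the file name matches the auth_basic_user_file directive above:

$ sudo apt install apache2-utils
# -B selects BCrypt: the most secure on offer, but Nginx can't verify it.
$ htpasswd -c -B /etc/nginx/sites-available/syncthing.ligos.net.htpasswd murray
# The default (an MD5-based scheme) works with Nginx.
$ htpasswd -c /etc/nginx/sites-available/syncthing.ligos.net.htpasswd murray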

But here’s the log you see when Nginx fails to verify your BCrypt password hash:

2019/03/02 13:00:53 [crit] 5384#5384: *1083 crypt_r() failed (22: Invalid argument), 
client: 2001:44b8:3168:9b01:6162:b1a2:92e4:2e5e,
server: syncthing.ligos.net,
request: "GET /admin/ HTTP/2.0",
host: "syncthing.ligos.net"

Fortunately, my password is crazy long, so even SHA1 should be OK. At least for a few years.

Cut Over

At this point, I’ve got all my sites working with alternate DNS names (eg: home2.ligos.net) with SSL enabled. Time to cut across to the new server… err… laptop.

Because of how Let's Encrypt works, I couldn’t pre-load valid certificates until the sites were live. So I knew I’d be taking some level of downtime, but hopefully just a few minutes. (Well, technically I could have extracted the certs from my Windows box and manually loaded them into Nginx, but I didn’t trust I’d get that working first time.)

I updated all the config for the sites to the real domain, ie: home2.ligos.net -> home.ligos.net.

I changed DNS for my internal sites first (eg: syncthing.ligos.net); because they were reachable via IPv6, I could generate certificates for them straight away. And I checked they were working OK.

Finally, the moment of truth: I changed the IPv4 port forwarding from my Windows server to my Debian box for HTTP and HTTPS. From this point, any IPv4 clients would be presented with invalid certificates, so the race was on!

I changed DNS for my remaining sites (which meant IPv6 clients were directed to the new server).

Then I quickly ran certbot for each site and obtained new certificates. Success - in less than a few minutes! Fast enough that Uptime Robot didn’t notice!

Then I tested each site on my computer (and my work computer, to test access from outside my network), and monitored access log files to see if anything unusual appeared. A few minor corrections and all was well.

Conclusion

IIS is a fantastic webserver, but it restricts you to Windows. Nginx + Debian Linux gives me the option of hosting all my stuff on a cheaper and less power-hungry box. Migrating is always stressful, but with just a few minutes of downtime, all went well.

And now my amazing Debian server… err… laptop is happily serving this blog, makemeapassword.ligos.net and everything else I host.

(And I could replace the power supply in my Windows box).