I just spent about 9 hours moving my website from Apache2 to Nginx. Good God, was it awful.
Pro-tip for a site migration: run it on an identical virtual machine while keeping the original server up and running, because you have no idea how long it might take. Luckily, as I described in my post about how I’ve configured my infrastructure, I already run everything as a virtual machine, so that wasn’t too hard.
Why I Switched
My original setup ran three or four websites on a single Apache2 server. As you may or may not know, Apache2 is vulnerable to something called a Slowloris attack, as I described in my post about taking down websites with next to no bandwidth.
That’s not the only problem with Apache2, either. In fact, the reason it’s vulnerable to Slowloris is that it ties up a worker for every open connection, which eats up a lot of unnecessary memory and doesn’t handle simultaneous connections very efficiently. Nginx, by contrast, uses an event-driven model and can easily scale to thousands of simultaneous connections on a very low-end web host.
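Nginx’s connection capacity is governed by a couple of directives in its events block. A minimal sketch of what that looks like (the numbers here are illustrative defaults, not tuning advice):

```nginx
# /etc/nginx/nginx.conf (fragment)
worker_processes auto;    # one worker process per CPU core

events {
    # each worker multiplexes this many simultaneous connections
    # in a single event loop, rather than dedicating a worker
    # per client the way Apache's prefork MPM does
    worker_connections 1024;
}
```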
The Process
Beware: if your website uses PHP, this is not going to be pleasant.
If you just serve basic HTML, CSS, and maybe a couple of JavaScript files, it’ll honestly be a cakewalk.
From an organizational standpoint, Apache2 and Nginx handle websites pretty much identically. Their configuration folders look the same, they both use modules, and most importantly, they both use virtual hosts.
A virtual host is a configuration where a single computer runs a single instance of Nginx (Apache does this too; I’m just using Nginx as the example), but depending on which IP address or domain name it is accessed by, it serves a different set of files. For example, this WordPress site is hosted on the same Nginx server as my Nextcloud installation, along with a few other sites. You’ll note that they are entirely separate websites, even though they’re both zachkline.us addresses (that’s just because I get subdomains for free with my setup, so that’s what I do).
In Apache2, a set of virtual hosts may look something like this:
<VirtualHost *:443>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/mainsite
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem
</VirtualHost>

<VirtualHost *:443>
    ServerName wiki.example.com
    DocumentRoot /var/www/wiki
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/wiki.example.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/wiki.example.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/wiki.example.com/chain.pem
</VirtualHost>

<VirtualHost *:443>
    ServerName forum.example.com
    DocumentRoot /var/www/forum
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/forum.example.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/forum.example.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/forum.example.com/chain.pem
</VirtualHost>
This way, you can assign multiple websites to the same IP address, and just have that server handle all of it. It’s cheaper than having multiple servers.
Nginx entries look pretty similar, but with a much more professional look:
server {
    listen 443 ssl;
    server_name example.com www.example.com;
    root /var/www/mainsite;
    index index.html index.htm;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}

server {
    listen 443 ssl;
    server_name wiki.example.com;
    root /var/www/wiki;
    index index.html index.htm;
    ssl_certificate /etc/letsencrypt/live/wiki.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/wiki.example.com/privkey.pem;
}

server {
    listen 443 ssl;
    server_name forum.example.com;
    root /var/www/forum;
    index index.html index.htm;
    ssl_certificate /etc/letsencrypt/live/forum.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/forum.example.com/privkey.pem;
}
These configurations would be perfectly adequate for serving just basic HTML over an HTTPS connection. But when you want to start including PHP pages? Well, that’s when things get tricky.
See, Apache2 has a simple module that it uses to manage PHP, so you really just need to issue a quick “a2enmod php” and you’re done. But part of why Nginx is so lightweight is that it offloads PHP handling to a separate program called PHP-FPM. And PHP-FPM is complicated.
It’s not a single installation and you’re done, either. After installing, you need to figure out exactly how your site’s PHP scripts behave, because you need to develop a specific configuration per virtual host. There is no one-size-fits-all for PHP-FPM.
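For orientation, PHP-FPM’s per-pool settings live in files like /etc/php/7.0/fpm/pool.d/www.conf on Debian (the exact path depends on your PHP version). This fragment sketches the stock Debian defaults, which define the Unix socket that the Nginx side will point at:

```ini
; /etc/php/7.0/fpm/pool.d/www.conf (fragment; Debian-style paths assumed)
[www]
user = www-data
group = www-data

; the Unix socket that nginx's fastcgi_pass directive connects to
listen = /run/php/php7.0-fpm.sock
listen.owner = www-data
listen.group = www-data

; process manager: spawn and reap workers on demand to keep memory low
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```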
The closest thing to very basic PHP functionality that I found was this configuration inside the server block:
location / {
    try_files $uri $uri/ /index.php?$args;
}

location ~* \.php$ {
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
}
This works for most basic PHP scripts, and it’s the configuration I use for this WordPress site. But when you get into more complicated things like Drupal or Nextcloud? Well, you’ll have to consult their documentation. What I found for Nextcloud was that they give you a pre-built virtual host you can just copy and paste. Thank God for that; I diddled around for about two hours trying to figure it out on my own.
Work smarter, not harder.
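One habit that saved me from a few self-inflicted outages while fiddling with all these virtual hosts (the commands assume a systemd-based Debian box): have Nginx validate the configuration before you reload it.

```shell
# check every config file for syntax errors without touching the running server
sudo nginx -t

# only reload if the test passed; the reload is graceful,
# so existing connections are allowed to finish
sudo nginx -t && sudo systemctl reload nginx
```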
Overall, Nginx performs better than Apache2 with a significantly decreased memory footprint. Even running my WordPress and Nextcloud installations simultaneously, MySQL and Nginx take up about 100 MB of memory each, leaving plenty of room on a system with just 512 MB of RAM to run a headless Debian installation.