This site

This site was built more as a learning experience than out of necessity or anything else.

What follows is definitely not a how-to or a step-by-step guide, but rather an overview of my decision making around some technologies and my thoughts about them.

1. Server – Amazon EC2

Initially I planned to build my personal site on some form of managed hosting service, but then I thought it would be an interesting project to put it on a VPS that I administer myself.

So I chose an Amazon t2.micro Elastic Compute Cloud (EC2) instance. There are cheaper VPS alternatives, of course, but I felt that the EC2 service gives me flexibility beyond other solutions. For example, tomorrow I may decide to migrate to a much bigger server and have it done in a minute.
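Resizing, for instance, boils down to a stop/modify/start cycle. A rough sketch with the AWS CLI could look like this (the instance ID and target type are placeholders):

# stop the instance, change its type, then start it again
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type Value=t2.large
aws ec2 start-instances --instance-ids i-0123456789abcdef0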

[Figure: EC2 architecture overview]

The EC2 architecture is pretty modular and makes it easy to abstract or migrate parts of your server infrastructure – storage, load balancing, instance cloning, stopping and starting… the list goes on.

The t2 family of EC2 instances also operates on an interesting “burst” principle. They typically run on a single Xeon 2.4 GHz core, but are only entitled to about 10% of its capacity (roughly 240 MHz worth); in return they earn “CPU credits” while they sit idle, and each accumulated credit can be spent on a stretch of full-speed operation. This is really convenient, since for all practical purposes a web server will never operate at full load all the time. So you get a single Xeon-like core (with sweet new SIMD instructions), 1 GB of RAM and 20 GB of reasonably fast storage (it is called “General Purpose SSD”, but in practice it behaves more like a quick HDD – about 80 MB/s). All this costs about 10 euro per month on on-demand pricing; prepaid is even less.
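If you are curious how many credits you have left, the balance is exposed as a CloudWatch metric, so a query along these lines should show it (instance ID and time window are placeholders):

# fetch the CPUCreditBalance metric for one instance over a one-hour window
aws cloudwatch get-metric-statistics --namespace AWS/EC2 --metric-name CPUCreditBalance \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time 2016-10-01T00:00:00Z --end-time 2016-10-01T01:00:00Z \
    --period 300 --statistics Average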

2. OS – Arch Linux


For the OS I chose Arch Linux, since I use it as my desktop distro and find it very convenient. It has an excellent community wiki, which shows up near the top of almost any Google search about Linux how-tos in general. Its bleeding-edge nature means you practically always have the most recent version of any software package you can think of. The official package repository is not that big, but with the addition of the AUR (the community-maintained package repository), you have de facto access to the vast majority of software written for Linux. It also has a great packaging system that makes it easy, for any given package, to download the sources, build them with optimal options for your architecture, and install the result – all with a one-line command.

Sometimes, though, bleeding edge is more bleeding than edge – the most recent versions can have bugs that are not yet well known, and you can run into package incompatibilities. There is also no hand-holding, and you are expected to know about every part of your system.


This may seem very intimidating to new users, but it actually forces you to learn a great deal about Arch and Linux in general.

Finding a “template OS image” to start from – an Amazon Machine Image (AMI), as it is called – is pretty easy. The guys from uplinklabs maintain up-to-date Arch Linux AMIs for EC2.

After launching an EC2 instance from it, it is time to turn to the goodies that Arch provides.

System setup

The really nice part of Arch is that I can compile the most recent versions of the server software from source and make sure I squeeze the absolute maximum out of the hardware I have at my disposal.

This is accomplished with the yaourt utility, a wrapper around the Arch package manager pacman and the package builder makepkg. It shares most of their command-line options and adds some pretty useful extras, like the above-mentioned building from source.

So let's say we want to install the latest version of the htop utility. With plain pacman, this would be done as:

pacman -S htop

If instead we want to build and install it from source, we would do it with yaourt as:

yaourt -Sb htop

This downloads the package's PKGBUILD (the package definition) from either the official repositories or the AUR, builds it with the makepkg utility, archives the result into a package and then installs it.

So since makepkg is responsible for the actual building, we want to set some optimal options in its configuration (/etc/makepkg.conf).

CFLAGS="-march=native -mtune=native -O3 -pipe -fomit-frame-pointer"
CXXFLAGS="-march=native -mtune=native -O3 -pipe -fomit-frame-pointer"

I could perhaps tinker with some exotic stuff like LTO or fast math, but I am not sure it would be worth the hassle; -O3 and -march/-mtune=native are the single most significant optimization options on GCC.
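As a sanity check, GCC can tell you what -march=native actually resolves to on the instance:

# print the CPU family that -march=native expands to on this machine
gcc -march=native -Q --help=target | grep -- '-march='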

With those options specified in the makepkg configuration and the latest stable GCC version in place, we are ready to install the actual web-server related packages we need.

3. LAMP setup

I followed pretty closely this guide from Linode – a hosting company that also offers Arch Linux VPS hosting.

Apache


The majority of web servers (51.7% at the time) run Apache, so out of popularity and laziness (most newbie guides, like the one above, are written with Apache in mind) I had it built from source and installed.

For its configuration, the Arch Wiki is again an excellent resource.
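For reference, a bare-bones virtual host following Arch's default layout might look something like this (the domain and paths are purely illustrative):

# /etc/httpd/conf/extra/httpd-vhosts.conf
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot "/srv/http/example.com"
    ErrorLog "/var/log/httpd/example.com-error.log"
    CustomLog "/var/log/httpd/example.com-access.log" common
</VirtualHost>

The extra vhosts file is then pulled into the main httpd.conf with an Include directive.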

MariaDB


Basically a “better” MySQL, MariaDB is proving to be a strong competitor to its ubiquitous predecessor. It is generally recognized to have better performance and more active development.
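Getting it up and running follows the usual Arch steps – roughly something like this (the exact name of the initialization script has changed between versions, so check the wiki):

# install, initialize the data directory, start the service and harden the defaults
pacman -S mariadb
mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
systemctl enable --now mariadb
mysql_secure_installation

As with the rest of the stack, yaourt -Sb mariadb would build it from source instead, if you have the patience for that on a t2.micro.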

PHP


Arch's latest PHP version is 7.0.11; at the time I tried to build it from source I hit some errors, so I decided to install it from the prebuilt binaries.
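The binary install is a one-liner (package names as in the Arch repos):

# PHP itself, the Apache module and the FPM daemon (used later)
pacman -S php php-apache php-fpm
systemctl enable --now php-fpm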

4. CMS – WordPress


With everything installed and configured, the next choice I faced was which Content Management System to use. Since I had some positive experience with WordPress in the past, and since I am, again… lazy, I went with it, and aside from some filesystem permission hiccups I had it running in no time.
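The hiccups mostly came down to ownership – on Arch the Apache process runs as the http user, so something like this sorted it out (the install path is whatever you chose):

# hand the WordPress tree to the web server user and normalize permissions
chown -R http:http /srv/http/wordpress
find /srv/http/wordpress -type d -exec chmod 755 {} \;
find /srv/http/wordpress -type f -exec chmod 644 {} \;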

This is where it got ugly – I began to hunt for a “minimalistic”, “responsive” and “cool” theme, only to become completely depressed and frustrated after trying the bloated stuff that is apparently popular these days. After a lot of headaches trying to bend some “cool” themes to look the way I wanted, I gave up and settled on a prebuilt WP theme. After some experimentation and CSS tweaking, I managed to make it look okay…ish. But hey – cut me some slack – I am no web designer.

5. Performance

Since I put so much effort into running the most efficient software stack, you would think it wise to actually configure it right.

As always, before attempting any optimization one should measure. Chrome and Firefox have built-in developer tools that can facilitate such measurements, but using a third-party service is not a bad choice by any means, especially if, like me, you are new to these technologies and want quick pointers on what to improve and how. There are a lot of such third-party site analysis tools, but gtmetrix.com has grown on me. Using it, I was able to identify several major areas where I could gain a speedup:

  • enabling gzip compression – compressing content before serving it to the client is standard these days. In Apache this is done with the mod_deflate module (see the config sketch after this list).
  • minifying text content – since HTML, CSS and JS contain a lot of whitespace and other formatting, removing it can have a positive impact on your load times. More importantly, if you have 4 .css and 8 .js files, you are making a request for each one of them; minification also merges these different files into one for an additional reduction in overhead. I would also imagine this helps the gzip compression, since common patterns across multiple .js files end up in one stream. For this purpose I recommend the Fast Velocity Minify plugin for WordPress, which does all of the above. Take care with enabling HTML minification, however – I had issues with some text formatting in articles that relied on raw HTML markup, which got minified and looked all wrong.
  • expire headers – this server-wide configuration tells clients how long the static resources they fetch (.html, .css, .js) remain valid. It doesn't affect the initial load time, but significantly reduces subsequent ones. Apache supports this feature with its mod_expires module (also included in the sketch after this list).
  • multiprocessing optimization – by default Apache is configured with mpm_prefork_module + mod_php. This is a “should work” configuration, but it is rightfully deprecated, because it is wasteful and inefficient.
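For the first and third points, the relevant Apache configuration is only a few lines. A sketch along these lines should do (module paths follow the stock layout; the MIME types and lifetimes are just examples):

# enable compression and expiry headers
LoadModule deflate_module modules/mod_deflate.so
LoadModule expires_module modules/mod_expires.so

# compress the usual text content types before sending them out
AddOutputFilterByType DEFLATE text/html text/css application/javascript

# let clients cache static assets for a month
ExpiresActive On
ExpiresByType text/css "access plus 1 month"
ExpiresByType application/javascript "access plus 1 month"
ExpiresByType image/png "access plus 1 month"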


To expand on that last point: mod_php is loaded into every httpd process all the time, so even when httpd is serving static, non-PHP content, that memory is in use. Moreover, mod_php is not thread safe and forces you to stick with the prefork MPM (multiple processes, no threads), which is the slowest possible configuration.

A much more reasonable solution is to switch to mpm_event_module + php-fpm. I did just that and, in addition, configured the minimum and maximum number of child processes to be exactly 1, since I am running this on a single-core instance.
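The switch itself is mostly an exercise in httpd.conf. A sketch of the relevant bits, assuming the Arch default php-fpm socket path:

# use the event MPM instead of prefork and drop mod_php entirely
# (comment out the mpm_prefork and libphp lines that were there before)
LoadModule mpm_event_module modules/mod_mpm_event.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so

# hand every .php request to the php-fpm daemon over its unix socket
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php-fpm/php-fpm.sock|fcgi://localhost/"
</FilesMatch>

In the php-fpm pool configuration (/etc/php/php-fpm.d/www.conf) the worker count can then be pinned, for example with:

; a static pool of exactly one worker
pm = static
pm.max_children = 1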

[Figure: LoadImpact results before the change]
Using the LoadImpact tool for performance analysis, before this change I had fast response times (<2 s) until I hit 20 concurrent users. Then the server struggled and the response time exploded into tens of seconds.

[Figure: LoadImpact results after switching to mpm_event + php-fpm]
After the change to mpm_event + php-fpm, the server improved dramatically and the site load time stabilized (note that the sub-second response time here is mainly due to the testing nodes being located in Europe).

  • HTTPS and HTTP/2 – at this point I was happy with my site load times, but using an unencrypted connection bothered me. Sending my cookies and passwords around naked was less than ideal. In order to serve HTTP over an SSL/TLS connection, you need a certificate. You can generate a self-signed one, but then every user who opens your site will receive a warning (depending on the browser) that the site's authenticity can't be verified. I thought SSL certificates were pretty expensive, but it turns out that for domain validation only, there are some pretty cheap options. Installing the SSL certificate was pretty straightforward, and enabling HTTP/2 was also easy (a minimal configuration sketch follows below).
    [Figure: LoadImpact results after the final configuration]
    Evidently, the performance is much more stable, and I am pretty sure it can handle at least double the number of users, since the CPU load never went beyond 50% during the whole test.
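For completeness, the TLS and HTTP/2 side is a couple of modules plus a virtual host on port 443 – a sketch, with the certificate paths and domain as placeholders:

# enable TLS and HTTP/2 support
LoadModule ssl_module modules/mod_ssl.so
LoadModule http2_module modules/mod_http2.so
Protocols h2 http/1.1

<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    SSLCertificateFile "/etc/httpd/ssl/example.com.crt"
    SSLCertificateKeyFile "/etc/httpd/ssl/example.com.key"
</VirtualHost>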


A personal space