Change LUKS disk encryption key

This morning I wanted to clean up my work laptop to hand it over to a new colleague on the team. Then I remembered that I had used my LUKS password in other applications. Unwilling to share this password, I decided to take the plunge and change it! These are the steps needed:

1. Determine the device with LUKS encryption:
╭─root@darktech  ~
╰─$ dmsetup ls
fedora-win8     (253:9)
fedora-backtrack        (253:6)
fedora-swap     (253:1)
fedora-root     (253:2)
luks-eca67822-2122-44ab-9dc7-2f13c8e94d6f       (253:0)
fedora-data     (253:7)
fedora-winxp    (253:4)
fedora-backup   (253:8)
fedora-f18      (253:5)
fedora-home     (253:3)
╭─root@darktech  ~
╰─$ dmsetup info luks-eca67822-2122-44ab-9dc7-2f13c8e94d6f
Name:              luks-eca67822-2122-44ab-9dc7-2f13c8e94d6f
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        9
Event number:      0
Major, minor:      253, 0
Number of targets: 1
UUID: CRYPT-LUKS1-eca67822212244ab9dc72f13c8e94d6f-luks-eca67822-2122-44ab-9dc7-2f13c8e94d6f

╭─root@darktech  ~
╰─$ pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/luks-eca67822-2122-44ab-9dc7-2f13c8e94d6f
  VG Name               fedora
  PV Size               465.08 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              119059
  Free PE               0
  Allocated PE          119059
  PV UUID               yXSVeR-pwc4-nkJt-EXWq-CiLA-2XPs-qwdQVu

2. Create a new LUKS password:
╭─root@darktech  ~
╰─$ cryptsetup luksAddKey /dev/sda3                                                                                                                                                          
Enter any existing passphrase:
Enter new passphrase for key slot:
Verify passphrase:
╭─root@darktech  ~

3. Remove the first slot:
╭─root@darktech  ~
╰─$ cryptsetup luksKillSlot /dev/sda3 0                                                                                                                                                      
Enter any remaining passphrase:

The first slot is "Slot 0", second is 1, etc. Make sure you don't remove the wrong slot or you will lose your data permanently!
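To verify which key slots are occupied before and after the change, luksDump helps (a sketch, using the device found above; run as root):

```
$ cryptsetup luksDump /dev/sda3 | grep -i 'key slot'
```

A slot showing ENABLED still holds a passphrase; after luksKillSlot, slot 0 should show DISABLED.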

2015 Malta Eurovision Song Contest - a review

Times of Malta have posted a lighthearted "Idiots' guide to the 2015 Malta Eurovision Song Contest" article this morning.

Particularly amusing are the tongue-in-cheek reference to Lawrence Grey's "never giving up" experience with the contest, and the nod to Ludwig's sleek image:

Lawrence Grey - The One That You Love

“I’m never gonna be the one that you love. Is there a way that I can win your heart?”
I love the way our Lawrence never gives up and now he’s even spelling it all out to the audience, in a quite catchy, croony song.
The question is, though, Will the audience get the message?

Trilogy - Chasing A Dream

The trio is made up of Eleanor Spiteri, Roger Tirazona and Ludwig Galea.
Roger and Eleanor and diction don’t go hand in hand: Traaying Harrrd.
I love Ludwig and his sleek, blow-dried, Japanese straightened mane, and the way his vein in the neck pops when he reaches high notes and the way he throws himself, back, then to one side, then to another, then back again, when he’s reaching all the other notes.

Improving your database backup policy

In this brief article I will reflect on a strategy I normally adopt in enterprises that require a solid database backup policy, with frequent, regular backups of their data set.

First and foremost get familiar with a filesystem snapshot technology; whether an enterprise tool such as VMware's Veeam, or something like LVM. Secondly, get familiar with a replication technology of your database system, for example in MySQL we have master-master or master-slave.

The strategy can be explained in these simple terms:
  1. Stop replication
  2. Stop database service on slave/secondary node
  3. Take a snapshot of the slave/secondary node
  4. Start replication
  5. Repeat steps 1 - 4 every hour
  6. Repeat steps 1 - 4 every day at midnight
  7. Repeat steps 1 - 4 every first day of the month at midnight
  8. Every Sunday at midnight, purge all hourly backups of the previous week
  9. Every first day of the month at midnight, purge all daily backups of the previous month excluding the first day
Now you have a nice catalog of backups which also allows you to do point-in-time recovery by applying the binary logs between backups. Adjust backup retention as necessary.
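As a sketch, the purge steps (8 and 9) can be automated with cron and find. The paths, the one-directory-per-snapshot naming scheme (YYYY-mm-dd for dailies) and the exact retention windows below are assumptions:

```shell
#!/bin/bash
# Sketch of the purge steps above. Assumes one snapshot directory per backup
# under $BACKUP_ROOT/hourly and $BACKUP_ROOT/daily (daily dirs named YYYY-mm-dd).
BACKUP_ROOT="${BACKUP_ROOT:-/backups}"

# Step 8: every Sunday at midnight, purge hourly snapshots older than a week
purge_hourly() {
    find "$BACKUP_ROOT/hourly" -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -rf {} +
}

# Step 9: every first of the month at midnight, purge daily snapshots older
# than a month, keeping those taken on the first day of a month (*-01)
purge_daily() {
    find "$BACKUP_ROOT/daily" -mindepth 1 -maxdepth 1 -mtime +31 \
        ! -name '*-01' -exec rm -rf {} +
}
```

Wire purge_hourly into a Sunday-midnight crontab entry (0 0 * * 0) and purge_daily into a first-of-month entry (0 0 1 * *).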

Did my first 12km run ever!

Or to be precise, my first 12.56km. Before that, I had run only about nine times in my life, with the first run leaving me knackered after the first 5 minutes!

I started running as soon as I started my MBA programme to help me keep focused and sharp between work and studying activities. Thereafter it became more like "the thing to do" after work.

Goes without saying that I do have a goal: to be able to run the half marathon next February. I am halfway there, consistently improving and pushing myself within reasonable limits. Some stats:

Personal Development assessment

This morning I submitted the Personal Development Stage 1 assessment, having worked on it over the last 30 hours. With a mere pen and paper I managed to get intimate with my thoughts - I used the Henley Star model to define some objectives and how the Henley MBA can help me reach them. When defining the objectives, I used the SMART goal approach.

Writing a timeline of my personal development from birth till now made me reflect on how my family and community have shaped the person I am today. When I mapped these life events to the psychosocial 8-stage model (Erikson, 1950), I found that stage 5 was crucial, and some key learning events surfaced thanks to decisions taken during that stage.

I will use this blog as my personal journal to do my best, reflect, and succeed in life.

First workshop at Henley

Our honeymoon period at Henley Business School is approaching its end. During this first workshop for the Personal Development module we had the opportunity to reflect on ourselves and to identify what brought us here to pursue the MBA programme.

I personally found these last couple of days more interesting than I was expecting. Some of my favorite highlights:

1. The tutors are fun and very communicative, and we managed to crunch through a wealth of information in just a few days!

2. My intake group, MT09, is a mix of interesting people from Malta, Lebanon, Italy, Libya and Spain. Moreover, I was surprised to meet a good number of fellows from different backgrounds and experiences - I was personally expecting a bunch of boring accountants ;)

3. The team learning exercises made me reflect on who I really believe I am and helped me shape my future goals and aspirations a bit better. Dialogue with my peers also made me realize that I share the same issues, concerns and goals as many others.

4. The course content looks tough, but with that in mind I am pretty sure we can do it - and I am being realistic here. I believe that with a solid time schedule and 12 hours a week to spare for the MBA, we should have a good chance of keeping up a good pace. This is like a marathon, after all.

Starting the Henley MBA experience

In a few hours I will start the MBA adventure. Together with my new Maltese colleagues, we will prepare for the flight to the UK which will bring us to Henley Business School at the University of Reading.

Having completed a revamped time scheduling exercise in the last couple of days, I managed to free up 16 more hours to allocate for my study plans. This means that I do not need to sacrifice my present commitments such as house activities, family, music and daily running in order to complete my MBA, if I stick to the plan!

My mind and spirit are already geared up for the MBA. Bring it on...

How to protect against the Bash Bug (ShellShock)

I will not explain in technical detail how yesterday's freshly released Bash Bug exploit works, as there are already millions of articles spreading like wildfire on the subject. But in a nutshell, the problem is that while it's okay to define a function in an environment variable, bash is not supposed to execute the code that follows it.

Let me give you an easy example to determine whether my Bash version is affected by this vulnerability. If I run the code below, it should echo "Vulnerable" if indeed it is:
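This is the classic one-liner test as widely circulated at the time: a function definition is placed in an environment variable with an extra command after it, and a new bash is started. A vulnerable bash executes the trailing echo Vulnerable; a patched one only prints the harmless test string.

```shell
# Vulnerable bash prints "Vulnerable" before "this is a test";
# patched bash prints only "this is a test".
env x='() { :;}; echo Vulnerable' bash -c "echo this is a test"
```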

The patch was released yesterday, so there's no excuse for not fixing your servers. RedHat/CentOS/Fedora/Debian/Ubuntu and the like have already updated their bash packages. Updating bash stops the previous code from running any commands after the function definition:
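For example, on a patched bash the trailing command is ignored and only the intended output appears:

```
$ env x='() { :;}; echo Vulnerable' bash -c "echo this is a test"
this is a test
```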

Read ebooks on Linux

If you are an avid ebook reader, an essential tool is an epub reader. Check out FBReader. It is one of the best free tools I have used so far and works on many platforms, including Windows, OS X and Linux.

MySQL with HAProxy for Failover and Load Balancing

As discussed in a previous blog post about different types of MySQL HA architectures, using HAProxy for failover and load balancing over clusters of MySQL can be very effective in most situations where a "transparent application manual failover" is required (OK, I coined this term after years of working on Oracle systems). In this article I will explain how to set up an architecture similar to Figure 1.

Figure 1 - Two MySQL "clusters" load balanced by HAProxy

For simplicity, the cluster will be made up of one node (the master), and for this setup we will use three machines:

  • m-mysql01 - First MySQL cluster
  • m-mysql02 - Second MySQL cluster
  • mysql-cluster - HAProxy

In this article I will assume that you have already installed MySQL on m-mysql01 and m-mysql02 and set them up in master-master replication. In the next steps we will create a user for HAProxy to determine the status of the MySQL servers (a non-privileged user with a blank password, accessible only from the HAProxy machine) and another user for the application to connect through HAProxy:
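Something along these lines, run on m-mysql01 (the usernames, the HAProxy host IP 10.0.0.10 and the password are assumptions; the check user must have a blank password for HAProxy's MySQL check to work):

```
$ mysql -u root -p -e "CREATE USER 'haproxy_check'@'10.0.0.10';"
$ mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'haproxy_app'@'%' IDENTIFIED BY 'apppassword';"
```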

Please note that these users will be automatically replicated to the other node. Before we start looking at the HAProxy part, let's install the MySQL client and test the connectivity to both nodes:
If you are able to list the databases, we can move on to install HAProxy:
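A sketch of both steps on the HAProxy machine (the client package comes from the CentOS base repo, HAProxy from base or EPEL; the app user is the one assumed above):

```
$ sudo yum install mysql
$ mysql -h m-mysql01 -u haproxy_app -p -e 'show databases'
$ mysql -h m-mysql02 -u haproxy_app -p -e 'show databases'
$ sudo yum install haproxy
```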

Create a new HAProxy configuration:
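A minimal /etc/haproxy/haproxy.cfg along these lines (the IPs, ports and stats credentials are assumptions):

```
listen mysql-cluster
    bind 0.0.0.0:3306
    mode tcp
    option mysql-check user haproxy_check
    balance roundrobin
    server m-mysql01 10.0.0.11:3306 check
    # server m-mysql02 10.0.0.12:3306 check   # enable on failover

listen stats
    bind 0.0.0.0:8080
    mode http
    stats enable
    stats uri /
    stats auth admin:admin
```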

A bit of explanation of this configuration might be handy, especially if you're not familiar with HAProxy. The most important blocks are the two listen sections, where we tell HAProxy to listen on the network interface and forward requests based on the rules. In the first listen block we accept MySQL connections from the application. HAProxy does not understand the MySQL protocol, but it understands TCP (hence mode tcp). The "option" line is used to determine the node status by attempting a connection with the "haproxy_check" user. In the next two lines we define the MySQL nodes. Since in my particular case I want only one active server at any time (the application is not robust enough to handle a node crashing with async replication), I am commenting out the second server.

In the second listen block I am configuring a simple stats application which comes by default on HAProxy. It is now time to start HAProxy:
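On CentOS this is just (chkconfig so it survives a reboot):

```
$ sudo service haproxy start
$ sudo chkconfig haproxy on
```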

When I point my browser to the stats page, I can see the "cluster" status:

A green row indicates that HAProxy is able to communicate with the MySQL node. We can also perform another test using MySQL protocol (i.e. MySQL client):
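For example, selecting the hostname through the load balancer shows which backend answered (host and user names as assumed above):

```
$ mysql -h mysql-cluster -u haproxy_app -p -e 'select @@hostname'
```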

And that's it! Now we can test failover by commenting out m-mysql01 and activating m-mysql02 instead. For stress tests I use the "mysqlslap" tool.

Stress testing against HAProxy:

Stress testing against one of the MySQL nodes directly:
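Both runs can be sketched with mysqlslap, pointed first at HAProxy and then at a node directly (hostnames and credentials as assumed above; the mixed load type issues both INSERT and SELECT statements):

```
$ mysqlslap -h mysql-cluster -u haproxy_app -p \
    --auto-generate-sql --auto-generate-sql-load-type=mixed \
    --concurrency=50 --iterations=5

$ mysqlslap -h m-mysql01 -u haproxy_app -p \
    --auto-generate-sql --auto-generate-sql-load-type=mixed \
    --concurrency=50 --iterations=5
```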

The stress test, which was run a number of times on both cold and warm instances, shows that HAProxy actually managed connections better, resulting in faster queries. Note that the stress test issues both INSERT and SELECT statements.

Another cool thing you can do with HAProxy is limit the maximum number of connections to the MySQL servers. This makes sense not just to protect against DoS attacks but to actually improve performance, especially if your data files are not on a multi-disk SAN. I normally set the maximum connections to 20, but this depends on your environment:
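In the configuration this is a per-server maxconn on the backend line (again, IPs are assumptions):

```
listen mysql-cluster
    bind 0.0.0.0:3306
    mode tcp
    option mysql-check user haproxy_check
    server m-mysql01 10.0.0.11:3306 check maxconn 20
```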

A Comparison of MySQL HA Architectures

I was recently asked to design a new MySQL HA architecture for an internal project which currently runs on a master-slave setup. The acceptance criteria were pretty much defined and agreed:

  • Provide High Availability (no need to be automatic failover)
  • Easy Failover (everyone should be able to do it without being a DBA)
  • Seamless Failover (the application should not be modified on a failover)
  • Scale Reads (for reports and DWH applications)
  • The performance should be reasonably good, or at least not worse than the current setup
  • The application is not robust enough to handle crash failures in an async master-master setup (i.e. distributing writes is out of the question)

With this in mind, we were discussing several setups:

Setup #1 MHA:

This is a very popular setup in the MySQL community, and if set up well it provides you with zero downtime if one node crashes.

In this architecture we have 2 elements: the MHA manager (x1) and MHA nodes (x3 in our case). The MHA manager can run on a separate server and can manage multiple master-slave clusters. It polls the nodes of the clusters and, on finding a non-functioning master, promotes the most up-to-date slave to be the new master and then redirects all the other slaves to it.

The failover is transparent to the application. An MHA node runs on each MySQL server and facilitates failover with scripts that handle the parsing and purging of logs. The architecture being proposed here is shown in figure 1.
Figure 1 - MHA with 3 nodes

The problem with this setup is that it is very difficult to maintain if something goes wrong. Also I hate Perl.

Setup #2 Manual Managed Replication (Master-Slave-Delayed Slave):

In this architecture we make use of traditional self-managed master-slave replication with an additional delayed slave. The application always points to the master and, should the master go down, we have to manually promote the slave to master and point the application to it.

Doing a failover entails DBA knowledge - the downtime, in comparison to Setup #1, will be a bit longer. The benefit of this architecture is its simplicity and lack of Perl scripts. This architecture is shown in figure 2.

Figure 2 - Simple MySQL replica with a delayed Slave

A delayed slave is useful if a MySQL user accidentally drops a table or deletes a row. The problem with this setup is that on a failover the application needs to be changed to point to the new master.

Setup #3 Percona XtraDB Cluster:

I will talk about this setup in more detail in a future blog post. I personally installed my first Percona XtraDB/Galera cluster in April 2014 on Rackspace infrastructure. Writes and reads scaled beautifully and failover was seamless. But I was experiencing random node failures, network partitioning and corruption. Was it a bug in Percona XtraDB or Galera? Was it due to the Rackspace infrastructure? I filed bug reports but never had the time to investigate further, so I ditched this setup completely. I feel this product needs to mature a bit more before it is production-ready.

Setup #4 MySQL Clustered by HAProxy:

When designing this architecture, I kept in mind all the pitfalls of the previous setups. Here we make use of HAProxy to handle failover between two clusters of master-slave nodes. The application writes to just one cluster at any point in time, but master-master replication between the clusters makes sure they are always in sync. To fail over, we point HAProxy to the other cluster, as depicted in Figure 3.

Note that during the failover there is no downtime on the application. This can therefore be used to make real-time, zero-downtime changes to the application-database stack.

Figure 3 - Failover using HAProxy and master-master replication between the clusters

This is personally my favorite setup due to my positive experience with HAProxy. Additionally, this setup ticks all the boxes for the requirements of the new architecture. As an extra bonus we can set up one of the slaves as a delayed slave. While writes will not be scaled (to satisfy our acceptance criteria), the reads can be scaled if we wanted to.

How would I make this setup so foolproof that even non-DBAs can fail over? Simple - we can use a Jenkins or Bamboo task to switch the HAProxy cluster as per the diagram.
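As a sketch, such a task could keep one HAProxy configuration file per cluster and simply copy the desired one into place before reloading (file names are assumptions):

```
$ sudo cp /etc/haproxy/haproxy-cluster2.cfg /etc/haproxy/haproxy.cfg
$ sudo service haproxy reload
```

A reload (rather than a restart) lets HAProxy finish serving existing connections while new ones go to the other cluster.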

In the next blog post I will show in detail how to set up an HAProxy load balanced MySQL cluster.

Installing a DigiCert star SSL certificate in AWS Load Balancer

This should be quite a straightforward task, especially since I have installed countless HAProxy SSL-terminated load balancers. When I read that setting up an AWS load balancer with SSL can be a royal pain, I confess my first reaction was 'n00bs!'.

However I want to be quite clear here: the load balancer dashboard on AWS is a bit buggy. Let me take you through the process of setting up the load balancer for SSL termination as documented by AWS:

So we first start by creating the port mapping between the ELB and the instances. If you want to terminate the SSL on port 80, you can set both ports to 80 on the instance. I prefer to terminate on different ports so I can make an explicit rewrite from HTTP to HTTPS. Example: I set the instance ports to 80 and 81, the latter being the "SSL" one (although in reality it carries standard HTTP internally). If someone requests a resource over http, a rewrite to https kicks in, and the ELB redirects that to port 81.

After you follow the next screens (read: click click click) you get to a point where you "upload" (read: copy paste) your SSL certificates. Now this is the trickiest part, which in reality it should not be - so I do not know whether this is a bug in AWS or an integration problem between DigiCert star certificates and AWS.

The dialog asks you to enter four pieces of information:

  • Certificate Name – The name you want to use to keep track of the certificate within the AWS console.
  • Private Key – The key file you generated as part of your request for certificate.
  • Public Key Certificate – The public facing certificate provided by your certificate authority.
  • Certificate Chain – An optional group of certificates to validate your certificate.

The private key is normally called star_<domain_name>.key, the public key certificate star_<domain_name>.crt, and the certificate chain is a concatenation of the previous two plus the DigiCertCA.crt intermediate certificate. But here comes the cock-up. When you arrive at this screen, fill in only the Private Key and Public Key Certificate and click Create.

Once the Load Balancer is created, go to the Listeners tab and click Change SSL certificate. Upload a "new" one by repeating the same process as before, but this time fill in the Certificate Chain too. Unlike a traditional certificate chain, AWS expects just the intermediate certificate here, so paste only the contents of DigiCertCA.crt.

Note: You might ask why we don't just paste the Certificate Chain during the LB setup instead of repeating the last step. This is why I stated that AWS might be buggy - if you paste the Certificate Chain at ELB setup, a cryptic error occurs stating that the intermediate certificate is not valid. This is the only way I know that works (and I haven't seen it documented anywhere on the interwebs).

To check that you have the Chain installed correctly, use curl:
╭─james@darktech  ~
╰─$ curl -v https://<domain_name>.com
* Rebuilt URL to: https://<domain_name>.com/
* Adding handle: conn: 0x23f2970
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x23f2970) send_pipe: 1, recv_pipe: 0
* About to connect() to <domain_name>.com port 443 (#0)
*   Trying
* Connected to ( port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLS_RSA_WITH_AES_128_CBC_SHA
* Server certificate:
* subject: CN=*.<domain_name>.com,OU=IT,O=acme Limited,L=Sliema,C=MT
* start date: Dec 06 00:00:00 2013 GMT
* expire date: Dec 11 12:00:00 2014 GMT
* common name: *
* issuer: CN=DigiCert High Assurance CA-3,,O=DigiCert Inc,C=US

The subject, issuer and cipher lines above should state the details of the CA, the signed certificate and the encryption cipher.

The 12 Factor App

Quoted from the Twelve-Factor App manifesto, this is how an application infrastructure should be built - no exceptions to the rule!

I. Codebase

One codebase tracked in revision control, many deploys

II. Dependencies

Explicitly declare and isolate dependencies

III. Config

Store config in the environment

IV. Backing Services

Treat backing services as attached resources

V. Build, release, run

Strictly separate build and run stages

VI. Processes

Execute the app as one or more stateless processes

VII. Port binding

Export services via port binding

VIII. Concurrency

Scale out via the process model

IX. Disposability

Maximize robustness with fast startup and graceful shutdown

X. Dev/prod parity

Keep development, staging, and production as similar as possible

XI. Logs

Treat logs as event streams

XII. Admin processes

Run admin/management tasks as one-off processes

If you can RTFM, we WANT YOU!

So yesterday I was tasked with putting together a job description for a devops engineer in our team. This is what I came up with, and to my surprise, even non-techies enjoyed it:

Easy way to confirm that Centos is patched against Heartbleed

This post should have been published earlier, but here it is anyway... If you run a CentOS box you'll notice that packages are not updated as regularly as on other distros like Ubuntu. However, since the Heartbleed vulnerability is pretty sick, the CentOS developers issued a patch. A simple yum update openssl should fix it. To confirm:
╭─james@darktech  ~ 
╰─$ for i in `seq 1 4`; do ssh root@tech-qa0$i "rpm -q --changelog openssl | grep CVE-2014-0160"; done

- fix CVE-2014-0160 - information disclosure in TLS heartbeat extension
- fix CVE-2014-0160 - information disclosure in TLS heartbeat extension
- fix CVE-2014-0160 - information disclosure in TLS heartbeat extension
- fix CVE-2014-0160 - information disclosure in TLS heartbeat extension

The Internet of Things

Today I stumbled across a concept which, although not new to me, I never realized had a name - The Internet of Things.
In a seminal 2009 article for the RFID Journal, "That 'Internet of Things' Thing", Ashton made the following assessment:
Today computers—and, therefore, the Internet—are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the Internet were first captured and created by human beings—by typing, pressing a record button, taking a digital picture, or scanning a bar code. Conventional diagrams of the Internet ... leave out the most numerous and important routers of all - people. The problem is, people have limited time, attention and accuracy—all of which means they are not very good at capturing data about things in the real world. And that's a big deal. We're physical, and so is our environment ... You can't eat bits, burn them to stay warm or put them in your gas tank. Ideas and information are important, but things matter much more. Yet today's information technology is so dependent on data originated by people that our computers know more about ideas than things. If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best. The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so.
—Kevin Ashton, 'That 'Internet of Things' Thing', RFID Journal, July 22, 2009

Compiling FontForge2 on Centos 6

As a devops engineer I often get challenged by PHP developers to install cutting-edge (almost bleeding!) packages on a conservative and stable distro like CentOS. Recently I was asked to install grunt-webfont on one of the deployment servers, which runs on CentOS 6.

This npm package requires FontForge 2, which is not available in the CentOS 6 base repo (the packaged version dates back to 2009!). Looking for prebuilt RPMs proved difficult, if not pointless, so I decided to take a shot at compiling from source. Since compiling FontForge 2 on Linux is a royal pain in the ass, I hope this document will save you some time (read: hours!):

Install these packages with yum:

$ sudo yum install libtool libtool-ltdl libtool-ltdl-devel libuninameslist-devel libXt-devel xorg-x11-proto-devel gettext pango-devel cairo-devel freetype-devel libxml2 libxml2-devel libpng libpng-devel giflib-devel giflib libjpeg-turbo-devel libjpeg-turbo libtiff-devel libtiff libspiro-devel libspiro cairo

Install autoconf 2.69 from rpm:
$ wget

$ sudo yum localinstall autoconf-2.69-12.2.noarch.rpm

Install the iPython module:
$ pip2.7 install ipython

Install bdwgc:
$ git clone

$ pushd bdwgc

$ git clone

$ pushd libatomic_ops && ./configure

$ make && sudo make install

$ popd

$ ./autogen.sh && ./configure

$ make && sudo make install

$ popd

Install FontForge 2:
$ git clone

$ pushd fontforge && ./bootstrap

$ ./configure

$ make && sudo make install

Hedonism (Skunk Anansie) - Bass Cover

What a hiatus!

So for those who follow my blog, you might have wondered what I have been up to. Let's face it, my last blog post was published last April, almost nine months ago. Many things have happened and I think it's best to summarize them as follows:

1. I am back in Malta! Yes I missed this tiny island so much.
2. I am working in a new company, as a Senior DevOps engineer.
3. I joined a new heavy metal band, we are still in the process of finding a singer but I think we are almost there!
4. I started taking photography more seriously and now I have my own flickr account! I think photography was a long dormant passion inside me!

I have so much stuff to share with you on this blog, mostly IT, photography and music related, so I'll do that in future posts.

Peace and stay tuned!