Upgrading from MySQL to MariaDB on Ubuntu Server 16.04 LTS

Let’s see which version of MySQL is installed:

root@mysql-bak:~# dpkg -l | grep mysql
ii  mysql-client-5.7                 5.7.22-0ubuntu0.16.04.1  
ii  mysql-client-core-5.7            5.7.22-0ubuntu0.16.04.1  
ii  mysql-common                     5.7.22-0ubuntu0.16.04.1
ii  mysql-server                     5.7.22-0ubuntu0.16.04.1 
ii  mysql-server-5.7                 5.7.22-0ubuntu0.16.04.1  
ii  mysql-server-core-5.7            5.7.22-0ubuntu0.16.04.1

Rumour has it that MariaDB is a “drop-in” replacement for MySQL. Let’s try installing MariaDB as is.

root@mysql-bak:~# apt-get install mariadb-server
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  libevent-core-2.0-5
Use 'apt autoremove' to remove it.
The following additional packages will be installed:
  libdbd-mysql-perl libdbi-perl libmysqlclient20 libterm-readkey-perl mariadb-client-10.0 mariadb-client-core-10.0 mariadb-common mariadb-server-10.0 mariadb-server-core-10.0
Suggested packages:
  libclone-perl libmldbm-perl libnet-daemon-perl libsql-statement-perl mailx mariadb-test tinyca
The following packages will be REMOVED:
  mysql-client-5.7 mysql-client-core-5.7 mysql-server mysql-server-5.7 mysql-server-core-5.7
The following NEW packages will be installed:
  libdbd-mysql-perl libdbi-perl libmysqlclient20 libterm-readkey-perl mariadb-client-10.0 mariadb-client-core-10.0 mariadb-common mariadb-server mariadb-server-10.0 mariadb-server-core-10.0
0 upgraded, 10 newly installed, 5 to remove and 0 not upgraded.
Need to get 16.3 MB of archives.
After this operation, 15.2 MB disk space will be freed.
Do you want to continue? [Y/n]

And we press Y!

The installer then shows this warning: “The old data directory will be saved at new location. A file named /var/lib/mysql/debian-*.flag exists on this system. The number indicates a database binary format version that cannot automatically be upgraded. Therefore the previous data directory will be renamed to /var/lib/mysql-* and a new data directory will be initialized at /var/lib/mysql. Please manually export/import your data (e.g. with mysqldump) if needed.”

And just like that, your data is no longer accessible.

You’ll need MariaDB 10.1 or higher, which can import MySQL data. Let’s add the MariaDB 10.2 repository and install it from there:

sudo apt-get install software-properties-common
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
sudo add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://mariadb.mirrors.ovh.net/MariaDB/repo/10.2/ubuntu xenial main'
sudo apt update
sudo apt install mariadb-server

Your database should now be running MariaDB 😉
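A quick way to double-check (assuming root can still log in via sudo and the unix_socket plugin; otherwise add `-u root -p`):

# Print the server version and bring the system tables up to date after the switch
sudo mysql -e "SELECT VERSION();"
sudo mysql_upgrade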

Elasticsearch on Docker Swarm with NGINX

On all Hosts:

sudo sysctl -w vm.max_map_count=262144
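`sysctl -w` only lasts until the next reboot. To make the setting persistent, append it to /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/):

# Persist the Elasticsearch mmap requirement across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p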

On Host 1:

1. We initialize a docker swarm. Add `--advertise-addr X.X.X.X` if inside a private network

# docker swarm init

2. We create an overlay network in Docker

# docker network create --driver overlay --subnet 10.0.10.0/24 --opt encrypted elastics

“Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other.” [2]
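To confirm the network was created (just a sanity check, not part of the original recipe):

# List overlay networks and inspect the one we just created
docker network ls --filter driver=overlay
docker network inspect elastics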

3. We create the Elasticsearch service with 3 replicas

docker service create --name elasticsearch --network=elastics \
  --replicas 3 \
  --env SERVICE_NAME=elasticsearch \
  --env "ES_JAVA_OPTS=-Xms256m -Xmx256m -XX:-AssumeMP" \
  --publish 9200:9200 \
  --publish 9300:9300 \
  youngbe/docker-swarm-elasticsearch:5.5.0
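You can watch the replicas being scheduled with the standard swarm commands:

# Show services and where each replica is running
docker service ls
docker service ps elasticsearch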

4. We get the command that workers will use to join the swarm

# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-TOKEN \
    X.X.X.X:2377

On the Worker Hosts:

1. Run the join command printed on Host 1 in the previous step

# docker swarm join \
    --token TOKEN \
    X.X.X.X:2377
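On the manager you can then check that the worker has joined and that the Elasticsearch nodes see each other (the curl call assumes port 9200 is reachable on the host, as published in step 3):

# List swarm nodes and query the Elasticsearch cluster health
docker node ls
curl -s http://localhost:9200/_cluster/health?pretty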

Back on Host 1 (the swarm manager):

1. We now set up NGINX

docker service create --name meranginx --network=elastics  nginx
docker service create --name nginx --network=elastics --mount type=bind,source=/root/meradockernginx/elasticsearch.conf,destination=/etc/nginx/conf.d/elasticsearch.conf nginx

To be continued…
#TODO: make a conf file for nginx which listens on port 9200 and uses `elasticsearch` as backend server
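Until then, here is a rough, untested sketch of what that conf file could look like. The path matches the bind mount above; the proxy settings themselves are my assumption, relying on the `elastics` overlay network to resolve the `elasticsearch` service name:

# Write a minimal proxy config: NGINX listens on 9200 and forwards to the service VIP
cat > /root/meradockernginx/elasticsearch.conf <<'EOF'
server {
    listen 9200;
    location / {
        proxy_pass http://elasticsearch:9200;
        proxy_set_header Host $host;
    }
}
EOF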

References:

[1] https://github.com/imyoungyang/docker-swarm-elasticsearch
[2] https://docs.docker.com/network/#network-drivers

How to check your MyT internet usage #Mauritius

Mauritius’s biggest Internet Service Provider still caps poor people’s internet at 1 Mbps once they exceed their quota. Here’s how to check how much data you have left.

1. Go to myt.mu > my.t home > Check My Account

You’ll get a page like this:

2. Call 8900 from your mobile phone and ask them for the password.

3. Once logged in, you’ll see how much data allowance you have left:

Happy Independence Day to the Mauritian Ministers

You are free to steal as much as you want.
Steal, in terms of money and trips.
You are free to imprison anyone who questions you.
You are free in traffic jams; your motorcycle escorts make way for you. You are free to sell Mauritian beaches without worrying about where Mauritians will go to swim. Give us some biryani and we will come celebrate you at the Champs de Mars.

Duplicate Monit IDs in MMonit

When you’re using MMonit with multiple VMs cloned from a template that already has monit installed, two VMs sometimes end up with the same monit ID. You’ll notice errors on your MMonit dashboard which disappear after a while.

To view the monit ID of a VM, run the following command in a terminal:

# monit -i

What do you do if you have hundreds or thousands of VMs? How will you know which ones have duplicate IDs?

I implemented a solution using SQL Triggers.

CREATE TABLE `duplicate_monitids` (
  `ipaddrin` varchar(255) NOT NULL DEFAULT '',
  `monitid` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`ipaddrin`)
);

delimiter //
CREATE TRIGGER duplicate_monitids AFTER UPDATE
ON host
FOR EACH ROW
BEGIN
  -- Upsert so repeated updates for the same host don't abort on the primary key
  INSERT INTO duplicate_monitids(ipaddrin, monitid) VALUES(NEW.ipaddrin, NEW.monitid)
  ON DUPLICATE KEY UPDATE monitid = NEW.monitid;
END//
delimiter ;

Then, to view the VMs which have duplicate IDs, run the following SQL query:

SELECT ipaddrin FROM duplicate_monitids WHERE monitid IN (SELECT monitid FROM duplicate_monitids GROUP BY monitid HAVING COUNT(*) > 1);
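Once you have the list, one way to fix a duplicate (a sketch; the idfile location depends on the `set idfile` directive in your monitrc) is to remove the ID file on the affected VM and restart monit so it generates a fresh one:

# Path is an example -- check `set idfile` in /etc/monit/monitrc first
rm /var/lib/monit/id
systemctl restart monit
monit -i    # confirm the new ID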