PeerVPN

What is a VPN?
VPN stands for Virtual Private Network. It is a way to connect multiple machines located in different regions as if they were on the same LAN. For example, in Mauritius, if you have Orange's MyT at home, your router's internal IP is most likely `192.168.100.1`.

Your laptop, mobile phone or smart home appliances will have an IP in the range `192.168.100.2` to `192.168.100.254`. You cannot access these devices from outside your home without some tricks in your router's configuration. This is where VPNs come in: if you are at work or on the move, you can still access your devices as if you were actually at home.
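As an aside, you can check whether an address falls in such a private range with a few lines of Python using the `ipaddress` module (a standalone illustration; the addresses are just examples):

import ipaddress

home_lan = ipaddress.ip_network(u'192.168.100.0/24')  # typical MyT home subnet
laptop = ipaddress.ip_address(u'192.168.100.42')      # one device on that LAN

print(laptop in home_lan)  # True: reachable directly on the LAN
print(laptop.is_private)   # True: not routable from the public internet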

This makes even more sense for businesses that operate in multiple regions but would still like all their IT devices to share information freely as if they were in the same building. An example would be a manager printing a document on the office's printer while travelling by bus.

The problem with popular VPNs
Popular VPN solutions are centralised, meaning they depend on a single point such as a known server. The problem is that when that server goes out of service, the whole VPN goes down with it. Furthermore, all the traffic is routed through that single server before being dispatched to its respective recipients.

PeerVPN comes in
PeerVPN is a very lightweight peer-to-peer VPN. You can bootstrap it with 2 nodes, and once more nodes have joined, it no longer matters whether the first 2 are still in.

PeerVPN is so small that it took less than 1 minute to compile on my Raspberry Pi 3. You can find the code here: https://github.com/Nayar/peervpn
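Setting up a node mostly comes down to a small config file. Here's a minimal sketch, loosely based on the sample config that ships with the project (the network name, PSK, addresses and peer hostname are all made up):

# node1.conf -- hypothetical first node
networkname MYVPN              # must match on every node
psk         mysecretpassword   # shared key used to encrypt traffic
port        7000
enabletunneling yes
interface   peervpn0
ifconfig4   10.8.0.1/24        # VPN-internal address of this node

# On the second node, use e.g. ifconfig4 10.8.0.2/24 and point
# initpeers at the first node's public address:
# initpeers first-node.example.com 7000

Each node is then started with its config file as the only argument, e.g. `peervpn node1.conf`.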

Performance
I noticed an increase of about 4-5 ms when pinging between my VPSes in the cloud. However, pings from my Raspberry Pi at home to my VPSes were 25 ms faster.

The HAProxy 75th percentile backend response time increased by 10 ms. I think that's not bad compared to the encryption and ease of use it provides.

Drawbacks
– The author hasn't updated the code for 2 years now.
– Security might consequently not be up to date.

I hope this project gets revived. I still gotta test Meshbird to see how it compares to PeerVPN. Have you ever used any of these types of VPNs?

Hands on LoRa IoT Network

IoT (Internet of Things) has become a reality. Its adoption is increasing at exponential rates in almost all areas. I'm lucky that a friend of mine lent me his "Dragino LoRa IoT Kit" to develop a prototype for his startup. The kit comes in this beautiful box.

Getting the Arduino Uno board to work on my MacBook was quite a hassle. I had to download a firmware from a website which I don't remember for it to work. Yep, I was so desperate that I ignored the security concerns. But in the end, my Arduino IDE finally recognized the board. I connected an RFID reader to it, but unfortunately it couldn't read any of my tags or cards. I'm guessing it's simply a faulty reader. I will get a new one soon.

LoRa is a long-distance, low-power IoT network. It is similar to the SigFox IoT network, which has already been deployed in Mauritius. The difference between the 2 is that SigFox sells its network while LoRa sells its chips. This means anyone can run his own private LoRa network but has to buy the hardware from Semtech. On the other hand, anyone can manufacture SigFox equipment, but it has to connect to SigFox's official network only.

The Dragino kit includes a LoRa gateway along with 2 Arduino Unos coupled with LoRa shields. The results:

It's just a simple hello world app, but now is when the fun starts. Next, I plan to make the LoRa server pair with my custom MQTT server and send logs to my Elasticsearch cluster so I can start analyzing data, roughly as sketched below.
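The MQTT-to-Elasticsearch part could look something like this (a rough sketch using the paho-mqtt and elasticsearch Python libraries; the broker address, topic, index name and document shape are all placeholders):

import json
import datetime

import paho.mqtt.client as mqtt
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://localhost:9200'])  # placeholder cluster address

def on_message(client, userdata, msg):
    # Assume each LoRa uplink arrives as a JSON payload on the MQTT topic.
    doc = json.loads(msg.payload)
    doc['received_at'] = datetime.datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f')
    es.index(index='lora-logs', doc_type='uplink', body=doc)

client = mqtt.Client()
client.on_message = on_message
client.connect('localhost', 1883)  # placeholder MQTT broker
client.subscribe('lora/+/up')      # placeholder topic for node uplinks
client.loop_forever()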

If you have more ideas on how we could use this technology, feel free to have a little chat 😉

Chilli’s Pack Zenes is awesome (MTML)

For Rs 75, you get 750 MB of data for 1 month. Isn't that great?

The signal quality is awesome. Last week I went to Belle Mare beach. A friend called me on WhatsApp and the sound was crystal clear. Browsing the internet is super fast for a mobile phone.

I've been using the service for 2 weeks now and I'm really satisfied. I'd strongly recommend the Pack Zenes, unless you live in a really remote location, in which case you might want to ask someone else for feedback first.

Great service MTML 🙂

New Shock Absorbers on my VW Polo 1996

My left shock absorber had leaked most of its oil. When going over bumps and humps, I would feel the left side of my car bouncing for over 2 seconds. This could be very dangerous on motorways.

Braking distance

Without good shock absorbers, the emergency braking distance at speed is greatly increased, which is dangerous.

Comfort

I must say the broken shocks were more comfortable to ride on. They allowed the springs to cushion out road defects without hindrance. But safety comes first: they had to be replaced.

Cost

The 2 new shocks cost me Rs 5300 (excluding the installation fee). They don't come with dust covers, though, so I bought those at Rs 700 for both.

The shocks are of the brand "Optimal", model A-3859H. I haven't seen many reviews for them on the internet. Let's see how they fare.

I'd say Mauritius should, like some European countries have done, remove the humps from our roads. They increase noise pollution and air pollution, and they cause spinal problems in elderly people, to name only a few of the problems associated with them. How can we get the government of Mauritius to do this?

Elasticsearch error with Python datetime: "Caused by: java.lang.IllegalArgumentException: Invalid format: "" is too short"

During normal usage, at about 200 entries per second, we got no loss of information on our Elasticsearch cluster. We then stress tested the platform by doing bulk inserts at 7000 entries/s and started seeing data getting lost.
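For context, the ingestion path looked roughly like this (a simplified sketch using the elasticsearch Python library's bulk helper; apart from the index name and the `date.date_answer` field visible in the log below, the document shape is a placeholder):

import datetime

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch(['http://localhost:9200'])  # placeholder cluster address

entries = [  # stand-in for the real stream of log entries
    {'answered_at': datetime.datetime.now(), 'query': 'example.com'},
]

def make_actions(entries):
    for entry in entries:
        yield {
            '_index': 'proxydns-2017-05-30',
            '_type': 'entry',
            '_source': {
                'date': {'date_answer': entry['answered_at'].isoformat(' ')},
                'query': entry['query'],
            },
        }

# bulk() batches the generated actions and sends them to the cluster
bulk(es, make_actions(entries))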

It seems like the date format is not valid, but the code has not been changed and there is no if-else statement where the datetime could have been echoed through a different branch. We return the time with the Python function `datetime.isoformat(' ')`. Why would the same line of code produce 2 different outputs under heavy stress?

[2017-05-30T15:03:30,562][DEBUG][o.e.a.b.TransportShardBulkAction] [idc7-citadelle-0-es0] [proxydns-2017-05-30][0] failed to execute bulk item (index) BulkShardRequest [[proxydns-2017-05-30][0]] containing [1353] requests
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [date.date_answer]
        at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:298) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:450) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:576) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:396) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:373) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:447) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:467) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:383) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:373) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:93) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:66) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:277) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:532) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.shard.IndexShard.prepareIndexOnPrimary(IndexShard.java:509) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.prepareIndexOperationOnPrimary(TransportShardBulkAction.java:447) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:455) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:143) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:113) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:939) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:908) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:322) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:264) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:888) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:885) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:147) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1654) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:897) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.access$400(TransportReplicationAction.java:93) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:281) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:252) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:618) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:613) [elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.3.0.jar:5.3.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: java.lang.IllegalArgumentException: Invalid format: "2017-05-30 13:03:11" is too short
        at org.joda.time.format.DateTimeParserBucket.doParseMillis(DateTimeParserBucket.java:187) ~[joda-time-2.9.5.jar:2.9.5]
        at org.joda.time.format.DateTimeFormatter.parseMillis(DateTimeFormatter.java:826) ~[joda-time-2.9.5.jar:2.9.5]
        at org.elasticsearch.index.mapper.DateFieldMapper$DateFieldType.parse(DateFieldMapper.java:242) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.DateFieldMapper.parseCreateField(DateFieldMapper.java:462) ~[elasticsearch-5.3.0.jar:5.3.0]
        at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:287) ~[elasticsearch-5.3.0.jar:5.3.0]
        ... 40 more

It turns out that during stress tests, logs with a wider range of timestamps get produced. Lemme illustrate with this little Python script.

import datetime

lol = datetime.datetime.fromtimestamp(1496211943.123)
print(lol.isoformat(' '))  # 2017-05-31 10:25:43.123000
lol = lol + datetime.timedelta(milliseconds=500)  # we add 500 milliseconds to it
print(lol.isoformat(' '))  # 2017-05-31 10:25:43.623000
lol = lol - datetime.timedelta(milliseconds=623)  # we subtract 623 milliseconds; what will the result be?
print(lol.isoformat(' '))  # 2017-05-31 10:25:43

Noticed how the format changed? I was expecting the output to be `2017-05-31 10:25:43.000000`. During normal usage, it is highly unlikely for a request to land at exactly 0 microseconds, and even if one did, the data loss would be tiny and undetectable. By doing the stress test, what was very improbable became much more probable, and we detected it.

But why did the stupid format have to change? Reading the official documentation of Python's datetime library, I saw this:

datetime.isoformat([sep])

Return a string representing the date and time in ISO 8601 format, YYYY-MM-DDTHH:MM:SS.mmmmmm or, if microsecond is 0, YYYY-MM-DDTHH:MM:SS [1]

Why did they choose to do this? No idea. Anyway, I resorted to writing my own format, with which I can be sure this problem won't happen again.

print(lol.strftime('%Y-%m-%d %H:%M:%S.%f'))
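A quick standalone check that `strftime` keeps the fractional part even when the microseconds are exactly 0 (the `%f` directive always pads to 6 digits):

import datetime

t = datetime.datetime(2017, 5, 31, 10, 25, 43, 0)  # microsecond == 0
print(t.isoformat(' '))                    # 2017-05-31 10:25:43 (fraction dropped)
print(t.strftime('%Y-%m-%d %H:%M:%S.%f'))  # 2017-05-31 10:25:43.000000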

[1] https://docs.python.org/2/library/datetime.html