Webdevlist.net – The Developer’s Resource Collection

 

I just put up a new project of mine, which is called Webdevlist.

So what is Webdevlist?

I'm pretty sure most developers know the situation all too well: you browse Twitter, Reddit or Stack Overflow late on a Sunday evening and come across some cool programming framework or web service. You think, "Wow, that's cool! Could be helpful some time. I need to remember it." Then you put the link into some messy text file or the like and never visit it again. Webdevlist's aim is to become a comprehensive, community-maintained collection of useful resources around software development. That includes frameworks and libraries for any language, dev tools, software applications, apps, web services such as PaaS, SaaS or IaaS offerings, and also learning resources like guides and tutorials. Every time you find a cool tool on the web, just post it to the list. Every time you're looking for a tool to help with your development problems, come visit the list. Easy enough.

Tech facts

Webdevlist's frontend is built with Angular 2, which just had its first stable release. The backend uses LoopbackJS, a powerful framework for building REST APIs with a minimum of boilerplate code. Additionally, the site is served over HTTP/2 for a bit of extra speed.

Is it finished, yet?

No, it isn't. Actually, it probably never will be. I'm continuously going to add new technology to Webdevlist's stack and keep changing and refactoring things. Currently I'm considering a switch to GraphQL. Right now, Webdevlist is in a kind of beta state, meaning it might still have bugs.

I'd really appreciate feedback on this project!

>> Webdevlist.net
>> Webdevlist on GitHub

Migrate Maildir to new server using imapsync

This is a little tutorial for mail server administrators who want to migrate to a new server while keeping all e-mails. It applies to mail servers whose MDA uses the Maildir format – like Dovecot does by default – and that have IMAP enabled.
This tutorial does not cover how to set up and configure a new mail server on a new machine based on the old one's configuration, but only how to migrate the e-mails. Simply tarring the Maildir folder and untarring it again on the new machine usually won't work. But don't worry, there is a cleaner way that abstracts away all mail-server- and file-level details by using nothing but the IMAP protocol. For that we use a tool called imapsync, which is written in Perl. It acts as an ordinary IMAP client – just like Outlook or Thunderbird – that connects to both mail servers, the old and the new one. All it needs to know is how to authenticate the respective user with both servers. Actually, one "manual" way to migrate the mails would be to set up both accounts in Outlook or Thunderbird, let it download the mails via IMAP from the old server and Ctrl+A and drag & drop them over to the new one. imapsync does just that – only automatically and without Outlook or Thunderbird.

First we need to install imapsync. You could install it on your local PC, just as you would Outlook or Thunderbird, but then there would be an unnecessary detour from server 1 over your PC to server 2. And since your local internet connection is probably way slower than the servers', your PC would become a bottleneck. So I recommend installing imapsync on either the old or the new mail server's host machine. Let's do it.

  1. Clone the imapsync repository to any folder on your machine, e.g. /opt/imapsync: git clone https://github.com/imapsync/imapsync
  2. Read the installation notes for your specific operating system at https://github.com/imapsync/imapsync/tree/master/INSTALL.d and follow exactly what's described there. Usually you will need to install some dependencies and the like.
  3. Now you should be able to execute ./imapsync from within the directory you cloned it to, e.g. /opt/imapsync. You should see a description of how to use the program.

Let's now assume that you want to migrate the mails of user "foo@example.org" with password "suchsecret" from your old server with IP 12.34.45.78 to your new server with IP 98.76.54.32. A prerequisite is that the mail server is up and running on both machines and the respective user is configured. Further, let's assume that on the new machine the user is, sensibly, called "foo@example.org" again, but his password is now "ssshhhhh", and that both MDAs require a TLS-secured connection, use the standard PLAIN login method and listen on port 143.

To perform the migration now, run the following command:

./imapsync --host1 12.34.45.78 --user1 foo@example.org --password1 suchsecret --authmech1 PLAIN --tls1 --host2 98.76.54.32 --user2 foo@example.org --password2 ssshhhhh --authmech2 PLAIN --tls2

Now all mails should be transferred from host1 through the imapsync client to host2, using nothing but the IMAP protocol. If you first want to test that everything works before actually transferring any data, you can add the --dry option to the above command.
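
For example, a dry run with the setup from above looks like this – identical parameters, just with --dry appended, so nothing is actually transferred:

./imapsync --host1 12.34.45.78 --user1 foo@example.org --password1 suchsecret --authmech1 PLAIN --tls1 --host2 98.76.54.32 --user2 foo@example.org --password2 ssshhhhh --authmech2 PLAIN --tls2 --dry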

To migrate multiple accounts at once, you could write a small script that takes username-password combinations from a text file, as described at https://wiki.ubuntuusers.de/imapsync/#Massenmigration (the article is in German, but the code should be clear).
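
A minimal sketch of such a batch script could look like the following. It assumes an accounts.txt file (a made-up name and format, not taken from the linked article) with one account per line in the form olduser;oldpassword;newuser;newpassword:

#!/bin/bash
# Sync every account listed in accounts.txt from the old host to the new one.
# Assumed line format: olduser;oldpassword;newuser;newpassword
while IFS=';' read -r USER1 PASS1 USER2 PASS2; do
    ./imapsync --host1 12.34.45.78 --user1 "$USER1" --password1 "$PASS1" --authmech1 PLAIN --tls1 \
               --host2 98.76.54.32 --user2 "$USER2" --password2 "$PASS2" --authmech2 PLAIN --tls2
done < accounts.txt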

Blank page in Firefox using nginx with SSL

Recently I ran into the following problem. I have a web application running on my server and an nginx sitting in front of it to handle incoming requests as a reverse proxy. Additionally, I configured nginx to enforce HTTPS by answering all requests on port 80 with a 301 Moved Permanently pointing to the HTTPS URL. The config looked roughly like this:

server {
    listen 80;
    server_name example.org www.example.org;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name example.org www.example.org;

    ssl on;
    ssl_certificate /etc/ssl/example.bundle.crt;
    ssl_certificate_key /etc/ssl/example.key;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:3000/;
    }
}

Everything worked fine with Chrome and Edge, but not with Firefox. When trying to access my page, Firefox simply displayed a blank page. In the developer tools I could see that the request wasn't even performed: there was a request, but no response and zero processing time. The nginx logs didn't show anything either. After some googling, I found out that I needed to explicitly specify the supported SSL ciphers and protocols. To be honest, I'm not too familiar with the inner workings of SSL/TLS, but adding the following three lines to the HTTPS server block solved the problem for me.

ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH $
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
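
For context, these directives belong inside the server block that listens on 443, so the relevant part of the config roughly ends up like this (cipher list abbreviated):

server {
    listen 443 ssl;
    server_name example.org www.example.org;

    ssl on;
    ssl_certificate /etc/ssl/example.bundle.crt;
    ssl_certificate_key /etc/ssl/example.key;

    # explicitly restrict ciphers and protocols so every browser can negotiate the handshake
    ssl_ciphers "...";
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:3000/;
    }
}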

Docker Beta for Windows: Docker.Backend.HyperVException

Docker recently released a beta version of the new Docker for Windows and Mac. It replaces Docker Machine, no longer uses VirtualBox but Hyper-V or xhyve instead, and should therefore be more performant and efficient.

I wanted to try the new Docker on my Windows 10 Pro machine (as far as I know, you need the Pro edition because of the Hyper-V requirement), so I downloaded and installed it. Once installed and after a restart, it prompted me to install Hyper-V. The Docker installer handled the download, but the remaining installation then failed with an error message containing Docker.Backend.HyperVException.

The solution was to install Hyper-V "manually" through the Turn Windows features on or off dialog in the control panel. After a reboot, Docker actually seems to work now.
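
If you prefer the command line over clicking through the control panel, enabling Hyper-V from an elevated PowerShell should do the same thing (I took the GUI route myself, so treat this as an untested alternative):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All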

To be perfectly honest, there was another problem in between. After enabling Hyper-V (including both the tools and the platform!), my PC didn't boot up anymore. It got stuck at the Windows logo screen, without the progress circle or anything. I did a hard reset to make the automatic repair come up at the next boot, which then failed. But from there I could choose Disable early-launch anti-malware protection from the Advanced boot options. That solved the problem for me, and my PC was able to boot again. In some forum posts I also read about people who had to disable USB 3.0 support in their BIOS (mostly in combination with Gigabyte mainboards – like mine) to fix the exact same problem as described above.

Good luck and have fun with the new Docker for Windows. I'm going to give it a deeper try over the next few days.

Innovation in Germany – not

Over the last while I have been confronted with this topic quite frequently. Eventually this German article – or more precisely the comments below it – prompted me to write my own little post on my personal perception of how innovation happens in Germany. For the non-German readers among us, here is a brief summary of the article. The German internet provider Unitymedia recently came up with the idea of providing free WiFi hotspots for everyone, since internet access in public places is pretty much a sad story in Germany. The key point is how they intend to implement these so-called "WifiSpots": every Unitymedia customer who has a WiFi-capable router at home becomes such a hotspot, while the company promises that the hotspot network is completely isolated from the customer's own network – in terms of both security and bandwidth. Personally I like the idea, because I consider it quite efficient. Why put effort into distributing routers in public places if there already is full WiFi coverage? Most people's routers are nowhere near working at capacity anyway. And if it's guaranteed that the public traffic doesn't affect you at all, why not? The article reports that the consumer protection center of North Rhine-Westphalia recently admonished Unitymedia for these plans. One of the top comments below the article says roughly the following:

That's exactly the reason why innovation isn't possible in Germany. As soon as a company tries to solve people's problems, everybody goes to the barricades. You get punished for experimenting – no wonder nobody wants to found a company.

Even though the comment received a lot of negative responses, it sums the issue up quite well for me. In my opinion Germany is way too sluggish and conservative when it comes to adopting something new. Even though most representatives of German industry consider themselves progressive, I actually don't think they are – at least not as much as they once were. I went to a conference on Industry 4.0 last week where one speaker claimed that Germany was at least one to two years ahead of other countries in Industry 4.0 topics, but what I picked up between the lines is that quite the opposite is true. While the Germans keep trying to define standards, develop well-defined business processes, clarify legal aspects and the like, other countries are just doing it. They simply try things out, without taking on too much risk, and if it fails, it fails. But if it succeeds – and I claim that new technology, unless applied totally wrong, is likely to – they gain a competitive advantage while the current big players' lead melts away. Of course, with this attitude you are more likely to fail than if you fussily examine every little aspect beforehand. But you're also much more likely to win big. As a professor at university used to say: "think big!"

If your dreams do not scare you, they are not big enough! – Ellen Johnson Sirleaf

That's what is often referred to as the Silicon Valley mindset. In fact, as the statistics linked below show, most startups are founded in the U.S., and I guess that if there was a ranking of really big and successful startups (like Uber, WhatsApp, Tesla, …) the contrast would be even more dramatic. Take Elon Musk – founder of SpaceX and the driving force behind Tesla Motors – as an example (I really recommend his biography). He thought big and he obviously won (admittedly, he took a really high risk). I don't think they evaluate and plan new technology (like VR and stuff) that extensively at SpaceX – they just do it.

http://www.statista.com/statistics/268786/start-ups-in-leading-economic-nations/

Another example of Germany's innovative power is the following. I've worked for two different companies as a working student – both software manufacturers. One was a typical German medium-sized company, the other an American corporation. In one of them, the second-latest version of Internet Explorer was the only browser installed on every employee's computer, and if you wanted another one, you had to open a ticket so that the software distribution department would install an outdated version of Firefox a few hours later. I don't want to blame that company – they did a great job at what they did. But the overall way of thinking there was old-fashioned, strict and not open-minded at all. It is primarily the people's mindset that sets those two companies apart. I can't really imagine that the first company is a workplace where you feel truly comfortable and look forward to a workday. At the time I worked there, they were about to release a little mobile app whose development effort I would estimate at only a few weeks of intensive work by a small group of developers. The app's purpose was great, but unfortunately it was modern and innovative, so there were too few people willing to stand behind it. On top of that, the processes were way too sluggish for rapid development. As a result, the app is still not released – while in the meantime two American companies each released an app with pretty much the exact same purpose. So much for the mindset in German companies – of course, as always, there definitely are exceptions (a really big global player originating in Germany, where one department prints out documents to hand them over to another department, where a trainee then types them up to make them digital again, not being one of them).

Another alarming fact worth mentioning in this context is that the average internet speed in Germany is far behind countries like Sri Lanka and Vietnam.

Once a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road. — Stewart Brand

What this all amounts to is that Germany should really watch out not to get overtaken by countries where people are more ambitious and motivated and less conservative and formal. We should never rest on our laurels but try to permanently improve in a continuous evolution – or, to quote many speakers at the conference mentioned above, to not fall into the process of disruptive self-destruction.

Telegram: ExpenseBot & DoodlerBot

In 2015, the Telegram messenger announced its bots. Basically, they are pieces of software that in many ways act like a normal chat user. They can have any functionality, from helping with daily tasks to simple games or trivia – all within an ordinary Telegram chat. You send them messages, they give answers – some more and some less intelligent. Recently, other companies – like Facebook or Microsoft – announced similar bots for their messaging apps. Bots are sometimes even considered a kind of next step after native and web applications.

From a developer's perspective, making a bot is fun, because there are almost no restrictions on how to develop it. All communication with Telegram happens through a single REST API provided by them. Choices like which programming language and framework to use and how to structure the code are completely up to the developer. A Telegram bot can theoretically be built in any programming language. The only requirements are being able to make HTTP requests from the application and having a server to host the bot on.
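
To get a feeling for how simple that API is: sending a message boils down to a single HTTP call. The sketch below uses curl with a placeholder bot token (as handed out by @BotFather) and a made-up chat id:

curl -s "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/sendMessage" -d chat_id=123456789 -d text="Hello from my bot"

Receiving messages works the same way, e.g. by polling the getUpdates method of the same API.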

I’ve recently created two bots for Telegram that should each help with a daily task.

 

ExpenseBot – Keep track of your finances


This bot's purpose is to help people manage their daily expenses and keep track of their financial situation. Users can add expenses from wherever they are, using a few simple commands within the chat, and keep an eye on how much they have spent in a day or a month. This obviates the need for confusing Excel spreadsheets or paper notes. You can reach the bot by sending a message to @ExpenseBot in Telegram.

 

 

DoodlerBot – Coordinate group appointments

My second bot helps a group of people coordinate and find the right date for a common appointment, just like you might know it from doodle.com (even though it doesn't have anything to do with that commercial service, apart from fulfilling the same need). Open a new doodle and let your mates in the group chat vote for their preferred date to finally make the best decision for everyone. You can reach the bot by sending a message to @DoodlerBot in Telegram.

 

 

Both projects are completely independent, non-commercial and privately operated. If you have any questions or found a bug (both bots are still in a beta phase and therefore might show some unexpected behavior), please contact me at @n1try or via e-mail. In either case, you should first read the basic introduction on how to use the respective bot by sending it a /help message.

If you like my bots, I'd be really happy if you rated them at https://storebot.me/bot/expensebot and https://storebot.me/bot/doodlerbot. Have fun!

Free webserver SSL certificates made incredibly easy

Let's Encrypt: Setting up SSL encryption on a private webserver used to be fairly laborious.

As a private website owner maintaining your own little server with an Apache or nginx on it, you might have felt quite uncomfortable whenever you had to set up HTTPS for your webserver. First you needed to find a provider for free certificates, since you probably don't want to spend 30 bucks a month just for encrypting connections to your tiny website. Besides startssl.com and cacert.org there were only a few other providers offering free, commonly trusted certs (e.g. namecheap.com, but only if you buy a domain from them, or one.com, but only if you buy webspace from them). With each of them the setup was anything but trivial: you had to generate your own private key, paste it into an incredibly poor web interface, verify your domain ownership in complicated ways (like setting up a particular e-mail address specifically for the domain you request the cert for), copy everything to your server, add the certificate chain and modify the webserver's configs.

This is over now – I have discovered a super easy way to do all of that automatically. There's a new provider for free SSL certs: Let's Encrypt. All you need to do is download a small client tool to your Linux server and execute it with a few parameters, like the domain you want the certificate for. It then automatically verifies your ownership, generates and saves a private key, the certificate, and another file combining the certificate and the entire chain in one. If you use the Apache2 webserver, it even sets everything up for you. If you use nginx or a standalone webserver (like a publicly facing Node.js app), you need to include the generated cert files in the configuration yourself, but that's really everything you have to do by hand. You don't even need to create an account or the like.
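
On a typical Linux server, requesting a certificate boils down to something like the following (the exact client name and flags may vary slightly depending on the version you downloaded; the standalone mode needs port 80 to be free for a moment):

./letsencrypt-auto certonly --standalone -d example.org -d www.example.org

Afterwards the key, the certificate and the full chain end up under /etc/letsencrypt/live/example.org/, ready to be referenced from your nginx or Node.js configuration.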

How does this work?

Technically, the validation (proving that you actually control the domain) basically works like this: the client tool asks the Let's Encrypt server to make a simple HTTP request to a specific path on your domain, and at the same time it starts a little webserver / listener on exactly that path (that's why you have to stop your own webserver for a short while when the client tool performs its verification – it needs port 80 to listen on). If that request arrives, it is proven that you control the domain. This explanation is simplified; if you want to know how it works in detail, see https://letsencrypt.org/how-it-works/.

Why use such a certificate?

For a purely static webpage you won't necessarily need SSL / HTTPS; there is little to gain there. But as soon as there's any kind of private data flowing FROM the client TO your server, you should consider it. Especially if there's a way for users to log in or sign up on your page, you should encrypt the connection so that passwords aren't sent over the internet in plain text.

I love that such annoying processes, which traditionally required quite an amount of technical knowledge, get simplified and automated more and more towards a click-and-go experience. Two other examples that spontaneously come to mind are mailcow.email and nvm.sh, or of course any of the cloud hosting providers like DigitalOcean etc.

Unhosted.org applications with remoteStorage.io and WebFinger.net

Lately, as an interested web developer, you might have heard or read about something called unhosted applications, mostly with a reference to unhosted.org. These are basically web apps running in your browser that do not rely on any kind of backend. Most apps you use on the web store data in a backend service on the provider's servers, regardless of whether that's Google Docs, Evernote, Wunderlist or also Anchr.io. Obviously this makes you dependent on the provider in terms of availability and security. These apps are also usually online-only, meaning you can only use them with an internet connection. Another type of app has no specific backend but still uses a central, cloud-based data store, e.g. Firebase. In this case your data is still somewhere out there in the cloud at a provider you potentially cannot trust (though not on some dubious privately hosted server but on a certified platform of one of the big players – whether that is better or worse is up to you to decide). These apps are usually offline-capable, so you can use them without an internet connection, and once the connection comes back, the data is synced to the data platform.

And then there's the third kind of apps, the ones they call unhosted. These are intended to be completely static, without needing any kind of backend. Theoretically you could download a zip of their HTML, JavaScript and CSS files to your computer and run the app perfectly well without a webserver and a database. Well, ok, you'll probably still want a webserver, but it only needs to serve static files (like Apache2, nginx or http-server), nothing else – no PHP, no Node.js… Those apps (e.g. a simple todo list) store all their data in your browser's localStorage or IndexedDB. As a result, the data obviously is only available on your local computer and only until you clear your browser data. After all, you might still want to sync your data across devices – but without giving it away to an untrusted provider.


This is where a fairly new thing called remoteStorage comes in. It's a protocol for storing per-user data on the web in an unhosted fashion. Anything implementing the remoteStorage protocol can be your data repository, which basically works as a simple key-value store. So you can implement your own remoteStorage server or, for example, take the "official" PHP library to host one on your own machine. There are also a few providers out there already (like 5apps) which you can use – but don't have to, if you don't trust them. An unhosted app that is remoteStorage-capable can then connect to your remoteStorage, e.g. https://rs.yourserver.com/user1/appXyz, and sync its data there. remoteStorage works together with WebFinger. So what is WebFinger? WebFinger is a kind of registry on the web where unique identifiers (usually in e-mail-address style, like user1@yourserver.com) are mapped to URLs for a certain type of relation. In this case, you give your unhosted app such an e-mail-address-like identifier, which maps to your remoteStorage endpoint for the relation type "remotestorage". The app queries WebFinger for that identifier and follows the registered URL it gets back to the data store. In this example, the identifier tony@5apps.com maps to a remoteStorage located at https://storage.5apps.com/tony. This makes the whole thing as decentralized as possible. Note that you could switch your remoteStorage anytime by simply registering a new URL with WebFinger (you usually don't have to do this registration yourself – the remoteStorage server implementation handles that for you). Authorization – i.e. which app may access which sub-keys on the remoteStorage – is handled via OAuth by your remoteStorage server implementation, where you can grant or revoke access of certain apps to certain storage keys.
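
You can even try the discovery step by hand: WebFinger lookups go against a well-known URL on the identifier's domain. For the example identifier above, a lookup could look roughly like this (the exact JSON you get back depends on the provider):

curl "https://5apps.com/.well-known/webfinger?resource=acct:tony@5apps.com"

The response contains a links entry for the remoteStorage relation pointing to https://storage.5apps.com/tony – and that URL is what the unhosted app talks to from then on.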

Two apps you can try are litewrite.net and Grouptabs. If you just want to play around with remoteStorage, the easiest way is probably to use 5apps' remoteStorage for this.

How do WhatsApp’s end-to-end encrypted group chats work?

A few days ago WhatsApp announced end-to-end encryption for all chats, which basically means that messages are encrypted in a way that nobody except the recipient can read them. Previously, only the communication channels between you and the server and between the server and your chat partner were encrypted, but the messages lay on the server in clear text (though, in the case of WhatsApp, not persistently stored).
Every end-to-end encryption scheme is based on asymmetric cryptography, where a private and a public key exist. Messages encrypted with the public key can only be decrypted using the corresponding private key and vice versa. The private key is always kept private (as the name implies) and should never leave the user's device, while the public key is sent to every chat partner, who uses it for messages addressed to you. Nobody without your private key will ever be able to read them. In turn, you are in possession of your chat partners' public keys to send secure messages to them. So far so good, but there's a problem with group chats.
Assume a group with three members: you (A), B and C. B (or C) can only read messages encrypted with pubkey_B (or pubkey_C). This would mean you had to encrypt each message twice, once with each public key. As a consequence you would also have to send it twice, which makes your traffic for group messaging grow linearly with the number of group members. Traditionally you only had to send the exact same message to the server once, which then did a fan-out to all group members. Now you would have to send x (number of group members) different (since differently encrypted) messages, which isn't a great solution, since it increases your mobile data traffic. Threema does it that way anyway. WhatsApp takes another approach, which I will explain in a slightly simplified form.

According to their cryptography whitepaper, it works roughly as follows:
1. When joining a group, you generate a group-specific key pair, which we will call myGroupPubkey and myGroupPrivkey from now on.
2. You encrypt it individually with every other group member's public key (similar to what you would do with a normal message) and send it to them. This is a client-side fan-out where you actually send as many messages as there are members in the group, but that's acceptable since it only happens once when joining a new group, not every time a message is sent.
3. The partners decrypt this key message using their respective private keys and get your myGroupPubkey out.
4. You encrypt every subsequent message using myGroupPrivkey (note that traditionally you encrypt messages using the partner's public key, while here you encrypt them using your own private key) and send it to the server (which can't read it), which fans it out to the group.
5. Every group member uses your myGroupPubkey to decrypt it.

It is important to note that you encrypt with your private key, which is usually not what it is intended for (but what it is technically perfectly capable of), and which is insecure in the sense that everyone in possession of your public key could read the message payload. The sticking point is that this public key was itself transferred in asymmetrically encrypted form, so it is guaranteed that only group members possess it. If the group changes, those keys are re-generated.

To be precise, this is only a simplified description, but it explains the fundamental concept. For instance, among other technical details, the messages are actually first encrypted symmetrically and then signed asymmetrically.
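
Just to make the fan-out idea tangible, here is a loose illustration with plain openssl on the shell. It uses a shared symmetric group key for simplicity and is in no way WhatsApp's actual Signal-protocol implementation – it only mirrors the principle that the key material is distributed once, encrypted per member, while every subsequent message is encrypted just once:

# one-time setup: member B has a key pair; A already knows B's public key
openssl genrsa -out b_private.pem 2048
openssl rsa -in b_private.pem -pubout -out b_public.pem

# A generates a random group key and sends it to B, encrypted with B's public key
# (this step is repeated once per group member – the client-side fanout)
openssl rand -out group.key 32
openssl rsautl -encrypt -pubin -inkey b_public.pem -in group.key -out group.key.for_b

# from now on, every group message is encrypted only once with the shared group key
echo "hello group" > msg.txt
openssl enc -aes-256-cbc -salt -pass file:./group.key -in msg.txt -out msg.enc

# B recovers the group key once and can then decrypt all group messages
openssl rsautl -decrypt -inkey b_private.pem -in group.key.for_b -out group.key.b
openssl enc -d -aes-256-cbc -pass file:./group.key.b -in msg.enc -out msg.dec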

Disclaimer: There is no guarantee whatsoever that this explanation is correct. This is how I have understood the process, but that doesn't mean it is actually true. Do not rely on it. If you have a cryptography background, have read the official whitepaper and think I got something wrong, please let me know.

DigitalOcean – My preferred Cloud Hosting Provider

DigitalOcean.com is a service that offers on-demand virtual server instances you can use to host any server application, be it a simple webpage, a Node.js backend, a Docker container or anything else.

It is especially useful if you have developed a web application and want to bring it to the internet without owning a root server. In that case you can go to DigitalOcean, create a new virtual, cloud-hosted machine (a Droplet, as they call it), pick a boilerplate image for it, choose the datacenter region closest to you or your customers, add your SSH keys for quick access and hit the create button. Within less than a minute your machine is up and running with a dedicated IPv4 address assigned, and you can SSH in.

As a template / boilerplate you can either choose from the common plain Linux distributions (even CoreOS) in almost any version, or take one of various pre-configured environments, like a machine already running Docker, Node.js, ownCloud, Joomla or plenty of other runtimes and applications.

For scalability you can choose between different sizes, which basically means different amounts of memory, CPU cores, SSD capacity and traffic.

A feature I so far only know from DigitalOcean is the ability to create a cluster of multiple machines (Droplets) with private networking, meaning each node can communicate with every other node in the cluster while being largely isolated from the internet. I haven't tried this feature much yet, but it is similar to what you might know from linking multiple Docker containers together.

What I also like about the service is the ultra simple, minimalistic and intuitive web interface that abstracts away all the technical complexity happening in the background between a single click on a button and the moment a pre-installed machine comes up.

DigitalOcean is my personal favorite service of this type, but I also want to mention some alternatives: Microsoft Azure, Google Compute Engine, Amazon EC2, Linode or, in a wider sense, also JiffyBox.de.

If you want to give DigitalOcean a try (and support me), follow this referral link to get $10 in credit, which is enough to run the smallest droplet for two months. I will get $25 in credit if you in turn spend $25 on credit. Of course I would be very pleased if you did so :-)