To Home page

Set up build environments

Virtual Box

To build a cross platform application, you need to build in a cross platform environment.

Set up Oracle VirtualBox. Because you will have numerous copies of operating systems, you may not want VirtualBox's default machine folder to be on your C drive. Change the default machine folder in preferences.

To make a copy of the wallet, use the regular wallet saving interface to save it on a hard drive outside the virtual drive. To make a copy of the blockchain that can survive destruction of the virtual machine: on Windows the blockchain is stored in %APPDATA%\Bitcoin, on Linux in ~/.bitcoin/

Setting up Ubuntu in Virtual Box

Having a whole lot of different versions of different machines, with a whole lot of snapshots, can suck up a remarkable amount of disk space mighty fast. Even if your virtual disk is quite small, your snapshots wind up eating a huge amount of space, so you really need some capacious disk drives. And you are not going to be able to back up all this enormous stuff, so you have to document how to recreate it.

Each snapshot that you intend to keep around long term needs to correspond to a documented path from install to that snapshot.

When creating a VirtualBox machine, make sure to set the network adapter to paravirtualization, set the default machine folder (in preferences under the file menu) to the D drive, the virtual hard disk to the D drive, and the snapshot directory to the D drive. The virtual hard disk location is chosen when creating it; the snapshot directory is set in settings/general/advanced (which also allows you to enable clipboard sharing).

apt-get -qy update && apt-get -qy upgrade
# Fetches the list of available updates and
# strictly upgrades the current packages

To install guest additions, and thus allow full communication between host and virtual machine, update Ubuntu, then, while Ubuntu is running, simulate placing the guest additions CD in the simulated optical drive. Then Ubuntu will correctly activate and run the guest additions install.

Installing guest additions frequently runs into trouble. Debian especially tends to have security in place to stop random people from sticking in CDs that get root access to the OS to run code to amend the OS in ways the developers did not anticipate.

Setting up Debian in Virtual Box

To install guest additions on Debian:

su -l root
apt-get -qy update && apt-get -qy upgrade && apt-get -qy install build-essential module-assistant whiptail rsync && m-a -qi prepare
mount -t iso9660 /dev/sr0 /media/cdrom
cd /media/cdrom && sh ./
usermod -a -G vboxsf «username»

You will need to do another m-a prepare and reinstall guest additions after an apt-get -qy dist-upgrade. Sometimes you need to do this after a mere upgrade to Debian or to Guest Additions. Every now and then, guest additions gets mysteriously broken on Debian due to automatic operating system updates in the background: the system will not shut down correctly, or copy and paste mysteriously stops working, and guest additions has to be reinstalled, followed by a shutdown -r.

On Debian lightdm mate, go to System/Control Center/Look and Feel/Screensaver and turn off the screensaver screen lock.

Go to System/Control Center/Hardware/Power Management and turn off the computer and screen sleep.

To set automatic login on lightdm-mate

nano /etc/lightdm/lightdm.conf

In the [Seat:*] section of the configuration file (there is another section of this configuration file where these changes have no apparent effect) edit

autologin-user=«username»
autologin-user-timeout=0
In the shared directory, I have a copy of /etc and ~/.ssh ready to roll, so I just copy them over, chmod .ssh, and reboot.

cp -rv /shared/.ssh . && chmod 700 .ssh && chmod 600 .ssh/*
cp -rv /shared/etc /

Set the hostname

Check the hostname and dns domain name with

hostname && domainname -s && hostnamectl status

And if need be, set them with

domainname -b «»
hostnamectl set-hostname «»

Your /etc/hosts file should contain

127.0.0.1       localhost       «»
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

To change the host ssh key, so that different hosts have different host keys after I copied everything to a new instance:

cd /etc/ssh
cat sshd* |grep HostKey
#Make sure that `/etc/ssh/sshd_config` has the line
#     HostKey /etc/ssh/ssh_host_ed25519_key
rm -v ssh_host*
ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key

If the host has a domain name, the default in /etc/bash.bashrc will not display it in full at the prompt, which can lead to you being confused about which host on the internet you are commanding.

nano /etc/bash.bashrc

Change the lower case h in

  PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '

to an upper case H:

  PS1='${debian_chroot:+($debian_chroot)}\u@\H:\w\$ '
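If you script your machine setup, the same edit can be made non-interactively with sed. A minimal sketch, run against a scratch copy of the stock Debian prompt line rather than the live /etc/bash.bashrc:

```shell
# Write the stock prompt line to a scratch file (quoted heredoc: no expansion).
cat > bashrc.sample <<'EOF'
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
EOF
# Change the short hostname escape \h to the fully qualified \H.
sed -i 's/\\u@\\h:/\\u@\\H:/' bashrc.sample
cat bashrc.sample
```

Run the same sed against /etc/bash.bashrc once you have checked it does what you expect.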

VM pretending to be cloud server

To have it look like a cloud server, but one you can easily snapshot and restore, set it up in bridged mode. Note the MAC address. After it is running as a normal system, and you can browse the web with it, after guest additions and all that, shut it down, go to your router, and give it a static IP and a new entry in hosts.

Then configure ssh access to root, so that you can go ssh <server> as if on your real cloud system. See setting up a server in the cloud.

On a system that only I have physical access to, and which runs no services that can be accessed from outside my local network, my username is always the same and the password always a short, easily guessed single word. Obviously, if your system is accessible to the outside world, you need a strong password. An easy password could be really bad if openssh-server is installed and ssh can be accessed from outside. If building a headless machine with openssh-server (the typical cloud or remote system), you need to set up public key sign in only, if the machine should contain anything valuable. Passwords are just not good enough. You want your private ssh key on a machine that only you have physical access to, that runs no services that anyone on the internet has access to, and that you don’t use for anything that might get it infected with malware, and you use that private key to access more exposed machines by the ssh public key corresponding to it.

apt-get -qy update && apt-get -qy upgrade
# Fetches the list of available updates and
# strictly upgrades the current packages

To automatically start virtual machines on bootup, which we will need to do if publishing them: open VirtualBox, right click on the VM you want to autostart, click the option to create a shortcut on the desktop, and cut the shortcut. Open the Windows 10 “Run” box (Win+R), enter shell:startup, and paste the shortcut. But all this is far too much work if we are not publishing them.

If a virtual machine is always running, make sure that the close default is to save state, for otherwise shutdown might take too long, and windows might kill it when updating.

If we have a gui, don’t install openssh. The terminal comes up with Ctrl+Alt+T.

Directory Structure


/usr
    Secondary hierarchy for read-only user data; contains the majority of (multi-)user utilities and applications.
/usr/bin
    Non-essential command binaries (not needed in single user mode); for all users.
/usr/include
    Standard include files grouped in subdirectories, for example /usr/include/boost
/usr/lib
    Libraries for the binaries in /usr/bin and /usr/sbin.
/usr/lib32
    Alternate format libraries, e.g. /usr/lib32 for 32-bit libraries on a 64-bit machine (optional)
/usr/local
    Tertiary hierarchy for local data, specific to this host. Typically has further subdirectories, e.g., bin, lib, share.
/usr/sbin
    Non-essential system binaries, e.g., daemons for various network-services. Blockchain daemon goes here.
/usr/share
    Architecture-independent (shared) data. Blockchain goes in a subdirectory here.
/usr/src
    Source code. Generally release versions of source code. Source code that the particular user is actively working on goes in the particular user’s ~/src/ directory, not this directory.
~/.«program»
    Data maintained by and for specific programs for the particular user, for example in unix ~/.Bitcoin is the equivalent of %APPDATA%\Bitcoin in Windows.
~/.config
    Config data maintained by and for specific programs for the particular user, so that the user’s home directory does not get cluttered with a hundred .«program» directories.
~/.local/share
    Files maintained by and for specific programs for the particular user.
~/src
    Source code that you, the particular user, are actively working on, the equivalent of %HOMEPATH%\src\ in Windows.
~/include
    Header files, so that they can be referenced in your source code by the expected header path; thus for example this directory will contain, by copying or hard linking, the boost directory so that standard boost includes work.
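Boost headers can be put in place by copying or hard linking, as the last entry describes. A sketch using throwaway paths standing in for /usr/include/boost and ~/include (the fake_ names are stand-ins for the demo, not real locations):

```shell
# Stand-ins for the real locations.
mkdir -p fake_usr_include/boost fake_home/include
echo '// stand-in for a boost header' > fake_usr_include/boost/version.hpp
# cp -al copies the tree as hard links, so no disk space is wasted and
# an include like <boost/version.hpp> resolves against fake_home/include.
cp -al fake_usr_include/boost fake_home/include/
ls fake_home/include/boost
```

With the real paths, the same command is cp -al /usr/include/boost ~/include/ (hard links require both trees to be on the same filesystem; otherwise copy).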

Directory Structure Microsoft Windows

%HOMEPATH%
    Hierarchy for user data.
%LOCALAPPDATA%
    Where data for a particular program and particular user on a particular machine lives. Normally LOCALAPPDATA=%HOMEPATH%\AppData\Local
Environment variable pointing to the directory that Gsl was cloned into. Typically %HOMEPATH%\libs\gsl
Environment variable pointing to the directory that Libsodium was cloned into. Typically %HOMEPATH%\libs\libsodium
Environment variable pointing to the directory that wxWidgets was downloaded into. Typically %HOMEPATH%\libs\wxWidgets-3.1.2

Source code directory structure

Per-installation: Tools/Options/Projects and Solutions/VC++ Directories).

Per-project: Project/Properties/Configuration Properties/“C/C++”/General/Additional Include Directories.

No matter where the header files actually are on any particular computer, every source file on everyone’s system refers to them in the same way, on both Windows and Linux. Everyone’s Visual Studio project file refers to them relative to environment variables that have the same name in everyone’s Windows environment, so that everyone can use the same source files in all environments, and the same Visual Studio project files in all Visual Studio environments.

Also set the C++ dialect to C++17 in Solution Explorer/Properties/All Configurations/Configuration Properties/C-C++/All Options:

C++ Language Standard: ISO C++17
Common Language Runtime Support: No Common Language Runtime Support
Conformance mode: Yes (Permissive-)
Consume Windows Runtime Extension: No

Turning C++17 on turns standards compliant automatic memory management on, and turning off Common Language Runtime turns off Microsoft specific automatic memory management.

Similarly, Conformance mode Yes turns off Microsoft specific language extensions, and turning off Windows runtime extensions turns off some Microsoft specific operating system extensions.

In Git Bash:

mkdir src && cd src
git clone git@«»:~/wallet
cd .. && mkdir libs && cd libs
git clone
git clone

Building a wxWidgets project

see wxWidgets. Start from the samples directory.

Setting up a headless server in the cloud

On any cloud machine, indeed any Linux machine, you need to set up ssh login. On cloud machines, you want a login secured by ssh. ssh is not necessarily installed by default:

apt-get -qy install openssh-server
service ssh status
service ssh stop
service ssh status
nano /etc/ssh/sshd_config
service ssh start
service ssh status
service ssh restart
service ssh status

This does not necessarily create the .ssh directory, nor necessarily enable password login over ssh.

Setting up SSH

Login by password is second class: there are a bunch of esoteric special cases where it does not quite work, because stuff wants to log you in automatically without asking for input.

Putty is the windows ssh client, but you can use the Linux ssh client in windows in the git bash shell, and the Linux remote file copy utility scp is way better than the putty utility PSFTP.

Usually a command line interface is a pain and error prone, with a multitude of mysterious and inexplicable options and parameters, and one typo or out of order command causing your system to unrecoverably die, but even though Putty has a windowed interface, the command line interface of bash is easier to use.

It is easier in practice to use the bash (or, on Windows, git-bash) to manage keys than PuTTYgen. You generate a key pair with

ssh-keygen -t ed25519 -f keyfile

On Windows, your secret key should be in %HOMEPATH%/.ssh, on Linux in /home/«username»/.ssh, as is your config file for your ssh client, listing the keys for hosts. The public keys of your authorized keys are in /home/«username»/.ssh/authorized_keys, enabling you to login from afar as that user over the internet. The Linux system for remote login is cleaner and simpler than the multitude of mysterious, complicated, and failure prone facilities for remote Windows login, which is a major reason why everyone is using Linux hosts in the cloud.

In Debian, I create the directory ~/.ssh for the user, and, using the editor nano, the file authorized_keys

mkdir ~/.ssh
nano ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/*
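The whole flow, from key generation to authorizing the public key, can be scripted. A sketch using a throwaway directory (demo_ssh, a stand-in for ~/.ssh in this demo); -N '' gives the key an empty passphrase, fine for a demo, not for a real exposed key:

```shell
# Throwaway stand-in for ~/.ssh.
mkdir -p demo_ssh && chmod 700 demo_ssh
# Generate a client key pair non-interactively.
ssh-keygen -q -t ed25519 -N '' -f demo_ssh/keyfile
# Authorize the public key for login as this user.
cat demo_ssh/keyfile.pub >> demo_ssh/authorized_keys
chmod 600 demo_ssh/authorized_keys demo_ssh/keyfile
```

On a real server the target is ~/.ssh/authorized_keys, and the private half of the pair stays on your own machine.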

I set the ssh session host IP under /Session, the auto login username under /Connection/data, the autologin private key under /Connection/SSH/Auth.

If I need KeepAlive I set that under /Connection

I make sure auto login works, which enables me to make ssh do all sorts of things, then I disable ssh password login and restrict the root login to be permitted only via ssh keys.

In order to do this, open up the sshd config file (which is the ssh daemon config, not ssh_config. If you edit these settings into the ssh_config file, everything goes to hell in a handbasket. ssh_config is the global .ssh/config file):

nano /etc/ssh/sshd_config

Your config file should have in it

HostKey /etc/ssh/ssh_host_ed25519_key
X11Forwarding yes
AllowAgentForwarding yes
AllowTcpForwarding yes
TCPKeepAlive yes
AllowStreamLocalForwarding yes
GatewayPorts yes
PermitTunnel yes
PasswordAuthentication no
PermitRootLogin prohibit-password

PermitRootLogin defaults to prohibit-password, but it is best to set it explicitly. Within that file, find the line that includes PermitRootLogin and, if enabled, modify it to ensure that users can only connect with their SSH key.

For no good reason, I prefer to have the host identify itself with the autogenerated ed25519 key – it is shorter and quicker, a premature optimization, but waste offends me.

To put these changes into effect:

shutdown -r

Now that putty can do a non interactive login, you can use plink to have a script in a client window execute a program on the server, and echo the output to the client, and psftp to transfer files, though scp in the Git Bash window is better, and rsync (Unix to Unix only, requires rsync running on both computers) is the best. scp and rsync, like git, get their keys from ~/.ssh/config.

On windows, FileZilla uses putty private keys to do scp. This is a much more user friendly and safer interface than using scp – it is harder to issue a catastrophic command, but rsync is more broadly capable.

Life is simpler if you run FileZilla under linux, whereupon it uses the same keys and config as everyone else.

All in all, on windows, it is handier to interact with Linux machines using the Git Bash command window, than using putty, once you have set up ~/.ssh/config on windows.

Of course windows machines are insecure, and it is safer to have your keys and your ~/.ssh/config on Linux.

Putty on Windows is not bad when you figure out how to use it, but ssh in Git Bash shell is better:
you paste stuff into the terminal window with right click, drag stuff out of the terminal window with the mouse, and use nano to edit stuff in the ssh terminal window.

Backing up a cloud server

rsync is the standard utility to synchronize directories locally and remotely; over the network it runs on top of ssh.

Assume rsync is installed on both machines, and you have root logon access by openssh to the remote_host

Shut down any daemons that might cause a disk write during backup, which would be disastrous. Login as root at both ends, or else files cannot be accessed at one end, nor permissions preserved at the other.

rsync -aAXvzP  --delete remote_host:/ --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/media/*","/lost+found"} local_backup

Of course, being root at both ends enables you to easily cause catastrophe at both ends with a single typo in rsync.

To simply logon with ssh

ssh remote_host

To synchronize just one directory.

rsync -aAXvzP  --delete remote_host:~/name .

To make sure the files are truly identical:

rsync -aAXvzc  --delete remote_host:~/name .

rsync, ssh, git and so forth know how to logon from ~/.ssh/config (not to be confused with /etc/ssh/sshd_config or /etc/ssh/ssh_config):

Host remote_host
HostName remote_host
Port 22
IdentityFile ~/.ssh/id_ed25519
User root
ServerAliveInterval 60
TCPKeepAlive yes

Git on Windows uses %HOMEPATH%/.ssh/config and that is how it knows what key to use.

To locally do a backup of the entire machine, excluding of course your /local_backup directory which would cause an infinite loop:

rsync -aAXv --delete --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/media/*","/lost+found","/local_backup"} / /local_backup

The a and X options mean copy the exact file structure, with permissions and all that, recursively. The z option is for compression of data in motion; the data is uncompressed at the destination, so when backing up local data locally, we don’t use it.

To locally just copy stuff from the Linux file system to the windows file system

rsync -acv --del source dest/

Which will result in the directory structure dest/source

To merge two directories which might both have updates:

    rsync -acv source dest/

A common error and source of confusion is that:

rsync -a dir1/ dir2

means make dir2 contain the same contents as dir1, while

rsync -a dir1 dir2

is going to put a copy of dir1 inside dir2

Since a copy can potentially take a very long time, you need the -v flag.

The -P flag (which probably should be used with the -c flag) keeps partially transferred files and shows progress, so that an interrupted copy can resume where it left off, just updating stuff that has been changed. The -z flag does compression, which is good if your destination is far away.


To bring up apache virtual hosting

Apache2 html files are at /var/www/<domain_name>/.

Apache’s virtual hosts are defined in /etc/apache2/sites-available/ and enabled by symlinks in /etc/apache2/sites-enabled/.
The apache2 directory looks like:

apache2.conf  conf-available  conf-enabled  envvars  magic  mods-available  mods-enabled  ports.conf  sites-available  sites-enabled
The sites-available directory looks like

000-default.conf  default-ssl.conf  «».conf
The sites enabled directory looks like

000-default.conf -> ../sites-available/000-default.conf

And the contents of «».conf are (before the https thingy has worked its magic)

<VirtualHost *:80>
    ServerName «»
    ServerAlias www.«»
    ServerAlias «»
    ServerAlias «»
    ServerAdmin «me@mysite»
    DocumentRoot /var/www/«»

    <Directory /var/www/«»>
        Options -Indexes +FollowSymLinks
        AllowOverride All
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/«»-error.log
    CustomLog ${APACHE_LOG_DIR}/«»-access.log combined

    RewriteEngine on
    RewriteCond %{HTTP_HOST} ^www\.example\.com [NC]
    RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]
</VirtualHost>

All the other files don’t matter. The conf file gets you to the named server. The contents of /var/www/ are the html files, the important one being index.html.

To get free, automatically installed and configured, ssl certificates and configuration

apt-get -qy install certbot python-certbot-apache
certbot --apache

If you have set up http virtual apache hosts for every name supported by your nameservers, and only those names, certbot automagically converts these from http virtual hosts to https virtual hosts and sets up a redirect from http to https.

If you have an alias server such as www.«», certbot will guess you also have the domain name «» and get a certificate for that.

Thus, after certbot has worked its magic, your conf file looks like

    <VirtualHost *:80>
        ServerName «»
        ServerAlias «»
        ServerAlias «»
        ServerAdmin me@mysite
        DocumentRoot /var/www/«»

        <Directory /var/www/«»>
            Options -Indexes +FollowSymLinks
            AllowOverride All
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/«»-error.log
        CustomLog ${APACHE_LOG_DIR}/«»-access.log combined

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^www\.example\.com [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]
        RewriteCond %{SERVER_NAME} =«» [OR]
        RewriteCond %{SERVER_NAME} =«»
        RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
    </VirtualHost>

Lemp stack on Debian

apt-get -qy install nginx mariadb-server  php php-cli php-xml php-mbstring php-mysql php7.3-fpm ufw

Browse to your server, and check that the nginx web page is working. Your browser will probably give you an error page, merely because it defaults to https, and https is not yet working. Make sure you are testing http, not https. We will get https working shortly.

Mariadb and ufw

ufw default deny incoming && ufw default allow outgoing
ufw allow SSH &&  ufw allow 'Nginx Full' && ufw limit ssh/tcp 
# edit /etc/default/ufw so that MANAGE_BUILTINS=yes
# "no" is bug compatibility with software long obsolete
ufw enable && ufw status verbose

Run mariadb as root. You should now receive a message that you are in the MariaDB console:

CREATE DATABASE example_database;
GRANT ALL ON example_database.* TO 'example_user'@'localhost'
IDENTIFIED BY 'mypassword';
exit
mariadb -u example_user --password=mypassword example_database
CREATE TABLE todo_list (
    item_id INT AUTO_INCREMENT,
    content VARCHAR(255),
    PRIMARY KEY(item_id)
);
INSERT INTO todo_list (content) VALUES
("My first important item");
INSERT INTO todo_list (content) VALUES
("My second important item");
SELECT * FROM todo_list;

OK, MariaDB is working. We will use this trivial database and easily guessed example_user with the easily guessed password mypassword for more testing later. Delete him and his database when your site has your actual content on it.

domain names and PHP under nginx

Check again that the default nginx web page comes up when you browse to the server.

Create the directories /var/www/«» and /var/www/«» and put some html files in them, substituting your actual domains for the example domains.

mkdir /var/www/«» && nano /var/www/«»/index.html
mkdir /var/www/«» && nano /var/www/«»/index.html
<!DOCTYPE html>
<meta charset="utf-8" />
<body><h1>«» index file</h1></body>

Delete the default in /etc/nginx/sites-enabled, and create a file, which I arbitrarily name config, that specifies how your domain names are to be handled, and how php is to be executed for each domain name.

This config file assumes your domain is called «» and your service is called php7.3-fpm.service. Create the following config file, substituting your actual domains for the example domains, and your actual php fpm service for the fpm service.

nginx -t
# find the name of your php fpm service
systemctl status php* | grep fpm.service
# substitute the actual php fpm service for 
# php7.3-fpm.sock in the configuration file.
rm -v /etc/nginx/sites-enabled/*
nano /etc/nginx/sites-enabled/config
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    return 301 $scheme://«»$request_uri;
}
server {
    listen 80;
    listen [::]:80;
    server_name «»;
    root /var/www/«»;
    index index.php index.html;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    }
    location = /favicon.ico { access_log off; }
    location = /robots.txt { access_log off; allow all; }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
    }
}
server {
    listen 80;
    listen [::]:80;
    server_name «»;
    root /var/www/«»;
    index index.php index.html;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    }
    location = /favicon.ico { access_log off; }
    location = /robots.txt { access_log off; allow all; }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
    }
}
server {
    server_name *.«»;
    return 301 $scheme://«»$request_uri;
}

The first server is the default if no domain is recognized, and redirects the request to an actual server, the next two servers are the actual domains served, and the last server redirects to the second domain name if the domain name looks a bit like the second domain name. Notice that this eliminates those pesky wwws.

The root tells it where to find the actual files.

The first location tells nginx that if a file name is not found, give a 404 rather than doing the disastrously clever stuff that it is apt to do, and the second location tells it that if a file name ends in .php, pass it to php7.3-fpm.sock (you did substitute your actual php fpm service for php7.3-fpm.sock, right?)

Now check that your configuration is OK with nginx -t, and restart nginx to read your configuration.

nginx -t
systemctl restart nginx

Browse to those domains, and check that the web pages come up, and that www gets redirected.

Now we will create some php files in those directories to check that php works.

echo "<?php phpinfo(); ?>"  |tee /var/www/«»/info.php

Then take a look at info.php in a browser.

If that works, then create the file /var/www/«»/index.php containing:

<?php
$user = "example_user";
$password = "mypassword";
$database = "example_database";
$table = "todo_list";
try {
    $db = new PDO("mysql:host=localhost;dbname=$database", $user, $password);
    echo "<h2>TODO</h2><ol>";
    foreach($db->query("SELECT content FROM $table") as $row) {
        echo "<li>" . $row['content'] . "</li>";
    }
    echo "</ol>";
} catch (PDOException $e) {
    print "Error!: " . $e->getMessage() . "<br/>";
}

Browse to http://«» If that works, delete the info.php file as it reveals private information. You now have domain names being served by lemp. Your database now is accessible over the internet through PHP on those domain names.


Certbot provides a very easy utility for installing ssl certificates, and if your domain name is already publicly pointing to your new host, that is great. Not so great if you are setting up a new server, and want the old server to keep on servicing people while you set up the new server, so here is the hard way, where you prove that you, personally, control the DNS records, but do not prove that the server that certbot is modifying is right now publicly connected as that domain name.

(Obviously on your network the domain name should map to the new server. Meanwhile, for the rest of the world, the domain name continues to map to the old server, until the new server works.)

apt-get -qy install certbot python-certbot-nginx 
certbot register  --agree-tos -m EMAIL
certbot run -a manual --preferred-challenges dns -i nginx -d «» -d «»
nginx -t

If instead you already have a certificate, because you copied over your /etc/letsencrypt directory

apt-get -qy install certbot python-certbot-nginx 
certbot install  -i nginx
nginx -t

To backup and restore letsencrypt, to move your certificates from one server to another, rsync -HAaX the /etc/letsencrypt tree, as root on the computer which will receive the backup. The letsencrypt directory gets mangled by tar, scp and sftp.

Again, browse to your server. You should get redirected to https, and https should work.

Backup the directory tree /etc/letsencrypt/, or else you can get into situations where renewal is a problem. Only Linux to Linux backups work, and even they do not exactly work – things go wrong. Certbot needs to fix its backup and restore process, which is broken. Apparently you should backup certain directories but not others, but backing up and restoring the whole tree works well enough for certbot install -i nginx.

The certbot modified file for your ssl enabled domain should now look like

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    return 301 https://«»$request_uri;
}
server {
    server_name «»;
    root /var/www/«»;
    index index.php index.html;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    }
    location = /favicon.ico { access_log off; }
    location = /robots.txt { access_log off; allow all; }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
    }
    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/«»/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/«»/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    server_name «»;
    root /var/www/«»;
    index index.php index.html;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    }
    location = /favicon.ico { access_log off; }
    location = /robots.txt { access_log off; allow all; }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
    }
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/«»/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/«»/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    server_name *.«»;
    return 301 https://«»$request_uri;
}
server {
    server_name «»;
    return 301 https://$host$request_uri;
    listen 80;
    listen [::]:80;
}
server {
    server_name «»;
    return 301 https://$host$request_uri;
    listen 80;
    listen [::]:80;
}

You may need to clean a few things up after certbot is done.

The important lines that certbot created in the file are the ssl_certificate lines, the additional servers listening on port 80 which exist to redirect http to the https servers listening on port 443, and that all redirects should be https instead of $scheme (fix them if they are not).

nginx starts as root, but runs as the unprivileged user www-data, who needs to have read permissions to every relevant directory. If you want to give php write permissions to a directory, or restrict www-data and php’s read permissions to some directories and not others, you could do clever stuff with groups and users, creating users that php scripts act as, and making www-data a member of their group, but that is complicated and easy to get wrong.

A quick fix is to chown -R www-data:www-data the directories that your web server needs to write to, and only those directories, though I can hear security gurus gritting their teeth when I say this.

For all the directories that www-data merely needs to read:

find /var/www -type d -exec chmod 755 {} \;
find /var/www -type f -exec chmod 644 {} \;
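On a throwaway tree standing in for /var/www, the sweep behaves like this (wwwdemo is a demo name):

```shell
# Build a small tree with deliberately wrong permissions.
rm -rf wwwdemo && mkdir -p wwwdemo/site
echo hi > wwwdemo/site/index.html
chmod 700 wwwdemo/site && chmod 600 wwwdemo/site/index.html
# Directories need the execute bit to be traversed; files only need read.
find wwwdemo -type d -exec chmod 755 {} \;
find wwwdemo -type f -exec chmod 644 {} \;
stat -c '%a %n' wwwdemo/site wwwdemo/site/index.html
```

The split between -type d and -type f is the point: a blanket chmod -R 755 would mark every html file executable, and chmod -R 644 would make directories untraversable.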

Now you should delete the example user and the example database:

DROP USER 'example_user'@'localhost';
DROP DATABASE example_database;

Wordpress on Lemp

apt-get -qy install php-curl php-gd php-intl php-mbstring php-soap php-xml php-xmlrpc zip php-zip
systemctl status php* | grep fpm.service
# restart the service indicated above
systemctl stop nginx
systemctl stop php7.3-fpm.service
mariadb
CREATE DATABASE wordpress DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
GRANT ALL ON wordpress.* TO 'wordpress_user'@'localhost'
IDENTIFIED BY '«password»';

The lemp server block that will handle the wordpress domain needs to pass urls to index.php instead of returning a 404. (Handle your 404s and redirects issues with the Redirections Wordpress plugin, which is a whole lot easier, safer, and more convenient than editing redirects into your /etc/nginx/sites-enabled/* files.)

server {
    . . .
    location / {
        #try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php$is_args$args;
    }
    . . .
}
nginx -t
mkdir temp
cd temp
curl -LO
tar -xzvf latest.tar.gz
cp -v wordpress/wp-config-sample.php wordpress/wp-config.php
cp -av wordpress/. /var/www/«»
chown -R www-data:www-data /var/www/«» && find /var/www -type d -exec chmod 755 {} \; && find /var/www -type f -exec chmod 644 {} \;
# so that wordpress can write to the directory
curl -s https://api.wordpress.org/secret-key/1.1/salt/
nano /var/www/«»/wp-config.php

Replace the defines that are there
define('LOGGED_IN_KEY', 'put your unique phrase here');
with the defines you just downloaded from wordpress.


/** MariaDB settings */
/** The name of the database for WordPress */
define('DB_NAME', 'wordpress');
/** MySQL database username */
define('DB_USER', 'wordpress_user');
/** MySQL database password */
define('DB_PASSWORD', '«password»');
/** MySQL hostname */
define( 'DB_HOST', 'localhost' );
/** Database Charset to use in creating database tables. */
define( 'DB_CHARSET', 'utf8mb4' );
/** The Database Collate type. */
define( 'DB_COLLATE', 'utf8mb4_unicode_ci' );
systemctl start php7.3-fpm.service
systemctl start nginx

It should now be possible to navigate to your wordpress domain in your web browser and finish the setup there.

Exporting databases

Interacting directly with your database through the MariaDB command line is apt to lead to disaster.

Installing PhpMyAdmin has a little gotcha on Debian 9, which is covered in this tutorial, but I just do not use PhpMyAdmin even though it is easier and safer.

To export by command line
systemctl stop nginx
systemctl stop php7.3-fpm.service
mkdir temp && cd temp
mysqldump -u $dbuser --password=$dbpass $db > $fn.sql
head -n 30 $fn.sql
zip $fn.zip $fn.sql
systemctl start php7.3-fpm.service
systemctl start nginx
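The commands above assume the shell variables $db, $dbuser, $dbpass and $fn have already been set; a hypothetical sketch (the values and the backup naming scheme are my own illustration):

```shell
# Illustrative values only; use your real database name and credentials.
db=wordpress
dbuser=wordpress_user
dbpass='«password»'
# base name for the dump, e.g. wordpress-backup-2021-05-04
fn=wordpress-backup-$(date +%F)
echo "$fn"
```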

Moving a wordpress blog to new lemp server

Prerequisite: you have configured Wordpress on Lemp

Copy everything from the web server source directory of the previous wordpress installation to the web server of the new wordpress installation.

chown -R www-data:www-data /var/www/«»

Replace the defines for DB_NAME, DB_USER, and DB_PASSWORD in wp-config.php, as described in Wordpress on Lemp.

To import the database by command line
systemctl stop nginx
systemctl stop php7.3-fpm.service
# we don’t want anyone browsing the blog while we are setting it up
# nor the wordpress update service running.
CREATE DATABASE wordpress DEFAULT CHARACTER SET
utf8mb4 COLLATE utf8mb4_unicode_ci;
GRANT ALL ON wordpress.* TO 'wordpress_user'@'localhost'
IDENTIFIED BY '«password»';

At this point, the database is still empty, so if you start nginx and browse to the blog, you will get the wordpress five minute install, as in Wordpress on Lemp. Don’t do that, or if you start nginx and do that to make sure everything is working, then start over by deleting and recreating the database as above.

Now we will populate the database.

unzip $fn.zip
mv *.sql $fn.sql
mariadb -u $dbuser --password=$dbpass $db < $fn.sql
mariadb -u $dbuser --password=$dbpass $db
SELECT COUNT(*) FROM wp_posts;
SELECT * FROM wp_posts LIMIT 20;

Adjust $table_prefix = 'wp_'; in wp-config.php if necessary.

systemctl start php7.3-fpm.service
systemctl start nginx

Inside the sql file are references to the old directories (search for 'recently_edited'), and to the old user who had the privilege to create views (search for DEFINER=). Replace them with the new directories and the new database user, in this example wordpress_user.
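A sketch of doing those replacements with sed — the old path and old user name here are invented for illustration, and the demonstration input stands in for the real dump:

```shell
# Demonstration input standing in for the real dump (names are made up):
printf 'DEFINER=`old_user`@`localhost`\n/var/www/old-domain/uploads\n' > old.sql
# Rewrite the old directory and the old DEFINER user in one pass:
sed -e 's|/var/www/old-domain|/var/www/new-domain|g' \
    -e 's|`old_user`|`wordpress_user`|g' old.sql > new.sql
cat new.sql
```

On the real dump, run the same two substitutions with your actual old path and old user, and check the result with grep before importing.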

Edit the siteurl, admin_email and new_admin_email fields of the blog database to the new domain and new admin email.

mariadb -u $dbuser --password=$dbpass $db < $db.sql
mariadb -u $dbuser --password=$dbpass $db
SELECT COUNT(*) FROM wp_comments;
SELECT * FROM wp_comments LIMIT 10;

Adjust $table_prefix = 'wp_'; in wp-config.php if necessary.

systemctl start php7.3-fpm.service
systemctl start nginx

Your blog should now work.

Logging and awstats


First create, in the standard and expected location, a place for nginx to log stuff.

mkdir /var/log/nginx
chown -R www-data:www-data /var/log/nginx

Then edit the virtual servers to be logged, which are in the directory /etc/nginx/sites-enabled and in this example in the file /etc/nginx/sites-enabled/config

server {
    server_name «»;
    root /var/www/«»;
    access_log  /var/log/nginx/«».access.log;
    error_log  /var/log/nginx/«».error.log;

The default log file format logs the IPs, which on a server located in the cloud might be a problem. People who do not have your best interests at heart might get them.

So you might want a custom format that does not log the remote address. On the other hand, Awstats is not going to be happy with that format. A compromise is to create a cron job that cuts the logs daily, a cron job that runs Awstats, and a cron job that then deletes the cut log when Awstats is done with it.
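A sketch of what those three cron jobs might look like as /etc/cron.d entries — the times, file names, and the awstats invocation are assumptions to adapt to your install:

```
# 04:00 cut the log, 04:10 run awstats over the cut log, 04:20 delete it
0 4 * * *  root  mv /var/log/nginx/«».access.log /var/log/nginx/«».cut.log && systemctl reload nginx
10 4 * * * root  /usr/lib/cgi-bin/awstats.pl -config=«» -update
20 4 * * * root  rm -f /var/log/nginx/«».cut.log
```

The reload makes nginx reopen its log files, so it starts a fresh access log; awstats must be configured to read the cut log rather than the live one.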

There is no point to leaving a gigantic pile of data, that could hang you and your friends, sitting around wasting space.


Postfix

Postfix is a matter of installing it, but again it expects an MX record in your nameserver. It automagically does the right thing for users that have Linux accounts, using their names and passwords to set up mailboxes. It gets complicated only when people start to pile supposedly more advanced mail systems, databases, and webmail on top of it.

Postfix is not a Pop3 or IMAP server. It sends, receives, and forwards emails. You cannot set up an email client such as Thunderbird to remotely access your emails – they are only available to people logged in on the server who are never going to look at them anyway, because there is no useful UI to read them. To receive your emails in an actually useful form, you are going to have to forward them or set up a Dovecot service. Dovecot provides Pop3 and IMAP, though IMAP is useless unless your email server is on a local network. Postfix does not provide any of that.

In addition to the MX record, you will also need a PTR record for your durable IP, or everyone will reject your emails as spam. You set up PTR records in the same place as your MX record and A record, and only set it up if you have an MX record.
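For illustration, a PTR record in a reverse zone file looks like this — the IP 203.0.113.25 and the host name are made-up examples:

```
; reverse zone 113.0.203.in-addr.arpa
25.113.0.203.in-addr.arpa.   IN   PTR   mail.example.com.
```

The name the PTR points to should match the name your mail server announces in its HELO, and that name’s A record should point back at the same IP.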

PTR records may require a DNS zone, so that reverse lookup can find frequently changing IPs without looking up every DNS on the planet. Which zone may or may not already exist. If it does not exist, you may have problems creating it.

Unfortunately, you are likely to need the cooperation of the guy who provides you your IP address, and not all hosting providers are set up to provide reverse IP.

Debian instructions: Note that these differ significantly from instructions found elsewhere, different enough that some people, probably a lot of people, are wrong in ways likely to break your mail service. Apart from reverse lookup, worry about spam prevention later, as misconfiguring spam protection is likely to break your mail server. Break only one thing at a time.

No end of people place garbage setup instructions on the internet for search engine optimization purposes, and then exchange links with each other so that they look authoritative to search engines.

apt-get -qy update && apt-get -qy upgrade
apt-get -qy install postfix
systemctl start postfix
systemctl enable postfix
ufw allow Postfix
nano /etc/postfix/virtual
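The /etc/postfix/virtual file maps incoming addresses to local users; a hypothetical sketch (the addresses and users are illustrative), assuming main.cf points virtual_alias_maps at this file:

```
# /etc/postfix/virtual
postmaster@«»    root
«user»@«»        «user»
```

After editing, run postmap /etc/postfix/virtual to rebuild the lookup table, then reload postfix.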

Postfix needs to be connected up to your lets encrypt certificates that were generated for your apache website, by editing the main postfix configuration file in /etc/postfix/:

nano /etc/postfix/
myhostname = «»
mydomain = «»
myorigin = $mydomain

smtpd_tls_loglevel = 1

# Use TLS if available
# "yes" is an invalid option, and all 
# other options are too clever by half
# due  to ten zillion mail recipient configurations

# keys
smtp_tls_cert_file = /etc/letsencrypt/live/«»/fullchain.pem
smtp_tls_key_file = /etc/letsencrypt/live/«»/privkey.pem

smtpd_recipient_restrictions = permit_mynetworks
smtpd_sender_restrictions = reject_unknown_sender_domain

The file is documented. Spamhelp will check for open relay.

spam filtering instructions

Gitlab also has instructions for a more complete installation.

Your SSH client

Your cloud server is going to keep timing you out and shutting you down, so if you are using OpenSSH you need to set up ~/.ssh/config to read

    ForwardX11 yes
    Protocol 2
    TCPKeepAlive yes
    ServerAliveInterval 10

Putty has this stuff in the connection configuration, not in the config file. Which makes it easier to get wrong, rather than harder.

A cloud server that does not shut you down

Your cloud server is probably virtual private server, a vps running on KVM, XEN, or OpenVZ.

KVM is a real virtual private server, XEN is sort of almost a virtual server, and OpenVZ is a jumped up guest account on someone else’s server.

KVM vps is more expensive, because when they say you get 2048 meg, you actually do get 2048 meg. OpenVZ will allocate up to 2048 meg if it has some to spare – which it probably does not. So if you are running OpenVZ you can, and these guys regularly do, put far too many virtual private servers on one physical machine. Someone can have a 32 Gigabyte bare metal server with eight cores, and then allocate one hundred virtual servers each supposedly with two gigabytes and one core on it, while if he is running KVM, he can only allocate as much ram as he actually has.

Debian on the cloud

Debian is significantly more lightweight than Ubuntu, harder to configure and use, and will crash and burn if you connect it up to a software repository configured for Ubuntu in order to get the latest and greatest software. You generally cannot get the latest and greatest software, and if you try to do so, likely your system will die on its ass, or plunge you into expert mode, where no one sufficiently expert can be found.

Furthermore, in the course of setting up Debian, you are highly likely to break it irretrievably and have to restart from scratch. After each change, reboot, and after each successful reboot, take a snapshot, so that you do not have to restart all the way from scratch.

But, running stuff that is supposed to run, which is not always the latest and greatest, Debian is more stable and reliable than Ubuntu. Provided it boots up successfully after you are done configuring, it will likely go on booting up reliably and not break for strange and unforeseeable reasons. It will only break because you try to install or reconfigure software and somehow screw up. Which you will do with great regularity.

On a small virtual server, Debian provides substantial advantages.

Go Debian with ssh and no GUI for servers, and Debian with lightdm Mate for your laptop, so that your local environment is similar to your server environment.

On any debian you need to run through the apt-get cycle till it stops updating:

apt-get -qy update && apt-get -qy upgrade
apt-get -qy install whiptail dialog nano build-essential
apt-get -qy install rsync linux-headers-generic

On windows, edit the command line of the startup icon for a Virtual Box machine that you have iconized to add the command line option --type headless, for example

"C:\Program Files\Oracle\VirtualBox\VirtualBoxVM.exe" --comment "vmname" --startvm "{873e0c62-acd2-4850-9faa-1aa5f0ac9c98}" --type headless

To uninstall a package

apt-get -qy --purge remove <package>

To uninstall a package without removing the settings

apt-get -qy remove <package>

On your home computer, Ubuntu has significant ease of use advantages. On the cloud, where computing power costs and you are apt to have a quite large number of quite small servers, Debian has significant cost advantages, so perhaps you should have Debian locally, despite its gross pain-in-the-ass problems, in order to have the same system in the cloud and locally.

Installing a Wordpress blog on a new domain

Assuming you have a backup of the files and the database.

Create a freshly installed empty blog on the target site using one of the many easy Wordpress setups.

Copy over all the old files except wp-config.php

Edit the wp-config.php file so that the table_prefix agrees with the original blog’s wp-config.php table_prefix.

Delete the new blog’s tables, those with the new blog’s table prefix, from the new blog’s database, and upload the old blog’s tables into the new blog’s database.

Upload the sql file to the new blog database.

It should now just work.


OpenVPN has long ceased to be open, and all the other setup guides for setting up your own VPN server are a pain in the ass.

Opensource has a list of tools for setting up your own vpn. I have not tried them yet.

Wireguard has a simple and elegant design based on modern cryptographic concepts, free from any NSA approved cryptographic algorithms. Looks like the best, and its concepts need to be imitated and copied.

Integrated Development Environments

The cross platform open source gcc compiler produces the best object code, but is not debugger friendly. The cross platform Clang is debugger friendly, but this is not that useful unless you are using an IDE designed for Clang. Visual Studio has the best debugger – but you are going to have to debug on windows.

In Clang and gcc, use valgrind.

In Visual Studio, enable the CRT debugger.

#ifdef _MSC_VER
#ifdef _DEBUG
//  Testing for memory leaks under Microsoft Visual Studio
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
#endif
#endif

At the start of program

#ifdef _MSC_VER
#ifdef _DEBUG
//  Testing for memory leaks under Microsoft Visual Studio
_CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
#endif
#endif

At the end of program

#ifdef _MSC_VER
#ifdef _DEBUG
//  Testing for memory leaks under Microsoft Visual Studio
_CrtDumpMemoryLeaks();
#endif
#endif

However, this memory leak detection is incompatible with wxWidgets, which does its own memory leak detection.

And since you are going to spend a lot of time in the debugger, Windows Visual Studio is recommended as the main line of development. But, for cross compilation, wxWidgets is recommended.

Code::Blocks (wxSmith GUI developer) is one open source cross platform IDE, which may be out of date.

The CodeLite IDE (wxCrafter GUI developer) is more modern, and undergoing more active development, but may as yet have a smaller user base. But it is based on wxWidgets, so may be more useful for wxWidgets development.

wxCrafter and wxSmith appear to be both of them dead in the water.

Develop in Visual Studio, in Code::Blocks or CodeLite using the Visual Studio compiler, in Code::Blocks using the MinGW compiler, and in Linux using Code::Blocks, using wxWidgets for your UI in all environments.

I am not sure whether Code::Blocks (wxSmith) or CodeLite (wxCrafter GUI developer) is the best cross platform IDE. Their chief competitor, QT Creator, has its own idiosyncratic way of doing things that differs from the C++ way, and its own idiosyncratic UI, which differs from what users on Windows are likely to expect. It also locks you in by taking a long time to learn, providing a whole lot of goodies that bind you to QT Creator, which investment and goodies eventually lead you to needing to buy QT Creator.

I am more than a little worried that development for Code::Blocks wxSmith seems to have been mighty quiet for a year, and I cannot seem to find too many live wxSmith projects: Github has three wxSmith projects, nine hundred wxWidgets projects.

wxSQLite3 incorporates SQLite3 into wxWidgets, also provides ChaCha20 Poly1305 encryption. There is also a wrapper that wraps SQLite3 into modern (memory managed) C++.

wxSQLite3 is undergoing development right now, indicating that wxWidgets and SQLite3 are undergoing development right now. wxSmith and Tk are dead.

Model View Controller Architecture

This design pattern separates the UI program from the main program, which is thus more environment independent and easier to move between different operating systems.

The Model-view-controller design pattern makes sense with peers on the server, and clients on the end user’s computer. But I am not sure it makes sense in terms of where the power is. We want the power to be in the client, where the secrets are.

Model: The central component of the pattern. It is the application’s dynamic data structure, independent of the user interface. It directly manages the data, logic and rules of the application.
View: Any representation of information such as a chart, diagram or table. Multiple views of the same information are possible, such as a bar chart for management and a tabular view for accountants.
Controller: Accepts input and converts it to commands for the model or view.

So, a common design pattern is to put as much of the code as possible into the daemon, and as little into the gui.

Now it makes sense that the Daemon would be assembling and checking large numbers of transactions, but the client has to be assembling and checking the end user’s transaction, so this model looks like massive code duplication.

If we follow the Model-View-Controller architecture then the Daemon provides the model, and, on command, provides the model view to a system running on the same hardware, the model view being a subset of the model that the view knows how to depict to the end user. The GUI is View and Command, a graphical program, which sends binary commands to the model.

Store the master secret and any valuable secrets in the GUI, since wxWidgets provides a secret storing mechanism. But the daemon needs to be able to run on a headless server, so it needs to store its own secrets – but these secrets will be generated by and known to the master wallet, which can initialize a new server to be identical to the first. Since the server can likely be accessed by lots of people, we will make its secrets lower value.

We also write an (intentionally hard to use) command line view and command line control, to act as prototypes for the graphical view and control, and test beds for test instrumentation.


CMake is the best cross platform build tool, but my experience is that it is too painful to use, is not genuinely cross platform, and that useful projects rely primarily on autotools to build on linux, and on Visual Studio to build on Windows.

And since I rely primarily on a pile of libraries that rely primarily on autotools on linux and Visual Studio on windows …

Windows, Git, Cmake, autotools and Mingw

Cmake in theory provides a universal build that will build stuff on both Windows and linux, but when I tried using it, it was a pain and created extravagantly fragile and complicated makefiles.

Libsodium does not support CMake, but rather uses autotools on linux like systems and visual studio project files on Windows systems.

wxWidgets in theory supports CMake, but I could not get it working, and most people use wxWidgets with autotools on linux like systems, and visual studio project files on Windows systems. Maybe they could not get it working either.

Far from being robustly environment agnostic and shielding you from the unique characteristics of each environment, CMake seems to require a whole lot of hand tuning to each particular build environment to do anything useful.

  1. Install 7zip.

  2. Install Notepad++.

  3. Install MinGW using TDM-GCC, as the MinGW install is user hostile, and the Code::Blocks install of MinGW is broken. Also, wxWidgets tells you to use the TDM environment.

  4. Download Git from Git for Windows and install it. (This is the successor to msysgit, which has a walkthrough.) Select Notepad++ as the editor.

    Note that in any command line environment where one can issue Git commands, the commands git gui and git gui citool are available.

  5. Download your target project using Git.

  6. Open a Windows PowerShell and navigate to the folder where you just put your target project.

  7. Execute the following commands:

cd build
cmake .. -G "MinGW Makefiles"


There is no satisfactory Android emulation running under Oracle Virtual Box. However, Google supports development environments running under windows.

Trouble is, for android clients, you will want to develop primarily in JavaScript with a bit of Java.


JavaScript delivers the write once, run anywhere, promised by Java, and, unlike Java, delivers distributed computing. This, however, requires the entire JavaScript ecology, plus html and css. And I don’t know JavaScript. But more importantly, I don’t know the JavaScript ecology:

The JavaScript ecology is large and getting larger, as parodied by Hackernoon.

For an intro into JavaScript and the accompanying (large) ecology, telling you what small parts of the forest you actually need to get an app up: A study plan to cure JavaScript fatigue.


To set up Git on the cloud, see and to use git on the cloud see.

On my system, I ssh into the remote system «» as the user git and then in the git home directory:

mkdir MyProject.git
cd MyProject.git
git init --bare

and on my local system I launch the git bash shell, and go to the MyProject directory. I copy a useful .gitignore and a useful .gitattributes file into that directory, then:

git init
git add *
git commit -m"this is a project to so and so"
git remote -v
git remote add origin git@«»:~/MyProject.git
git remote -v
git push -u origin --all # pushes up the repo and its refs for the first time
git push -u origin --tags

Push, of course, requires that I have the ssh keys in putty format where putty can find them, and another copy in openssh format where git can find them. Git expects the ssh keys in ~/.ssh

If you ssh into the other system instead of puttying into it, you only need your keys in one place, which is simpler and safer.

Invoke ssh-keygen -t ed25519 -C comment under git bash to automagically set up everything on the client side, then replace the private key with the putty key using puttygen’s convert key, and the public key with the puttygen copy-and-paste public key.

Make sure the config file ~/.ssh/config contains

Host «»
    HostName «»
    Port 22
    IdentityFile ~/.ssh/id_ed25519

Host is the petname, and HostName the globally unique name.
An example of the use of petnames is

Host project3
    User git
    IdentityFile ~/.ssh/project3-key
Host publicid
    User git
    IdentityFile ~/.ssh/publicid-key
Host github.com
    User git
    IdentityFile ~/.ssh/github-key

Putty likes its keys in a different format to git and open ssh, and created pageant and plink so that git and openssh could handle that format, but pageant and plink are broken. Convert format works, plink hangs. Just make sure that there is one copy as expected by git and openssh, and one copy as expected by Putty.

Save the private key in ssh format with no three letter extension, and the corresponding public key in puttygen’s copy and paste format with the three letter extension.

Git Workflow

You need a .gitignore file to stop crap from piling up in the repository, and because not everyone is going to handle eol and locales the same way, you need a .gitattributes file, which makes sure that text files that are going to be used in both windows and Linux are lf and utf-8, while text files that will be used only in windows are crlf.
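A sketch of a .gitattributes along those lines — the file patterns are illustrative and should be adapted to your project:

```
# let git decide text vs binary by default
*       text=auto
# text files shared between windows and Linux: committed with lf
*.c     text eol=lf
*.cpp   text eol=lf
*.sh    text eol=lf
# windows-only files: committed with crlf
*.bat   text eol=crlf
*.sln   text eol=crlf
```

Later lines override earlier ones for the same path, so the specific patterns must come after the catch-all.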

At «», create a new repository

cd \development\MyProject
git init
git config --global user.name «»
git config --global core.editor "'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin"
git config --global user.email «»
git config --global core.autocrlf false
git remote add origin «»
git add --all --dry-run
git add --all
git diff --cached
git commit -m "Initial revision"
git push origin master

After I make a change and test it and it seems to work:

    git pull origin master

Test that the application still works after pulling and merging other developers’ changes

git diff .
git add .
git diff --staged HEAD
git commit -m "My change"
git push origin master

For a more complete list of commands, with useful examples.

To make a git repository world readable, you need git daemon running, but that is a half measure, for if you publish your code to the world, you want the world to contribute, and you will need gitlab to manage that.

A simpler way of making it public is to have the post-update hook turn it into plain old dumb files, and then put a symlink to your repository directory in your apache directories, whereupon the clone command takes as its argument the directory url (with no trailing slash).
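A sketch of that dumb-files publishing mechanism — the repository name and paths are illustrative; the post-update hook only needs to run git update-server-info after each push:

```shell
# Create a throwaway bare repo standing in for the real one:
repo=$(mktemp -d)/myproject.git
git init --bare -q "$repo"
# Install the post-update hook:
printf '#!/bin/sh\nexec git update-server-info\n' > "$repo/hooks/post-update"
chmod +x "$repo/hooks/post-update"
# What the hook does on each push: regenerate the files dumb clients need.
git --git-dir="$repo" update-server-info
test -f "$repo/info/refs" && echo published
```

Once info/refs exists, any plain web server that can serve the repository directory as static files is enough for git clone to work against its url.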

Lamp stack

Installing Apache is documented above. After installing Apache, make sure you can see the Apache install page, and modify it to make sure you are seeing the correct page in the correct place.

Install Mariadb

After installing Mariadb, make sure you can get into the Mariadb shell.

PHP, and PhpMyAdmin

PhpMyAdmin has the gotcha that by default its gui is reachable only from the server itself, so to use it you have to use

ssh -L 8080:localhost:80 «»

whereupon you can access it on your local computer as http://localhost:8080

Make sure you can access the PhpMyAdmin shell


First install your lamp stack. Or for Phabricator and Comen, nginx.

Sharing git repositories

Git Daemon

git-daemon will listen on port 9418. By default, it will allow access to any directory that looks like a git directory and contains the magic file git-daemon-export-ok.

This is by far the simplest and most direct way of allowing the world to get at your git repository.
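A sketch of exporting a repository this way — the paths are illustrative, and a throwaway repo stands in for the real one:

```shell
# Throwaway bare repo standing in for the real one:
base=$(mktemp -d)
git init --bare -q "$base/myrepo.git"
# The magic file that marks the repo as exportable:
touch "$base/myrepo.git/git-daemon-export-ok"
# To serve it:    git daemon --base-path="$base" --reuseaddr
# Clients clone:  git clone git://«host»/myrepo.git
test -f "$base/myrepo.git/git-daemon-export-ok" && echo exportable
```

With --base-path set, the path in the git:// url is resolved relative to that directory, and only repositories containing git-daemon-export-ok are served.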


Gitweb

Gitweb does much the same thing as git-daemon, makes your repository public with a prettier user interface, and a somewhat less efficient protocol.

Gitweb provides a great deal of UI for viewing and interacting with your repository, while git-daemon just allows people to clone it, and then they can look at it.


Gitolite

It seems that the lightweight way for small group cooperation on public projects is Gitolite, git-daemon, and Gitweb.

Gitolite allows you to easily give people, identified by their ssh public key and the filename of the file containing their public key, write access to certain branches and not others.

On a Debian host, apt-get install gitolite3, though someone complains this version is not up to date and you should install from github.

It then requests your public key, and you subsequently administer it through the cloned repository gitolite-admin on your local machine.

It likes to start with a brand new empty git account, because it is going to manage the authorized_keys file and it is going to construct the git repositories.

Adding existing bare git repositories (and all git repositories it manages have to be bare) is a little bit complex.

So, you give everyone working on the project their set of branches on your repository, and they can do the same on their repositories.

This seems to be a far simpler and more lightweight solution than Phabricator or Gitlab. It also respects Git’s inherently decentralized model. Phabricator and Gitlab provide a great big pile of special purpose collaboration tools, which Gitolite fails to provide, but you have to use those tools and not other tools. Gitolite seems to be overtaking Phabricator. KDE seems to be abandoning Phabricator:

The KDE project uses gitolite (in combination with redmine for issue tracking and reviewboard for code review). Apart from the usual access control, the KDE folks are heavy users of the “ad hoc repo creation” features enabled by wildrepos and the accompanying commands. Several of the changes to the “admin defined commands” were also inspired by KDE’s needs. See section 5 and section 6 of the above linked page for details.

So they are using three small tools, gitolite, redmine, and reviewboard, instead of one big monolithic highly integrated tool. Since we are creating a messaging system where messages can carry money and prove promises and context, the eat-your-own-dogfood principle suggests that pull requests and code reviews should come over that messaging system.

Gitolite is designed around giving identified users controlled read and write access, but can provide world read access through gitweb and git-daemon.

Gitea and Gogs

Gitea is the fork, and Gogs is abandonware. Installation seems a little scary, but far less scary than Gitlab or Phabricator. Like Gitolite, it expects an empty git user, and, unlike Gitolite, it expects a minimal ssh setup. If you have several users with several existing keys, pain awaits.

It expects to run on lemp.

Gitea, like Gitolite, likes to manage people’s ssh keys

It comes with password based membership utilities and web UI ready to roll, unlike Gitolite, which for all its power just declares all the key management stuff that it is so deeply involved in out of scope, with the result that everyone rolls their own solution ad-hoc.

These involve fewer hosting headaches than the great weighty offerings of Gitlab and Phabricator. They can run on a Raspberry Pi, and are great ads for the capability of the Go language.


Phabricator

Server Size: 2GB Ram – 1 CPU Core – 50GB SSD. If you have more than five users, this may not suffice, but you can limp along OK with one gigabyte.

Installation and configuration of Phabricator (which is also likely to be useful in installing PhpMyAdmin, because it covers configuring your “MySQL” (actually MariaDB) database).

Configuring phabricator requires a whole lot of configuration of ssh and git, so that ssh and git will work with it.

Phabricator notifications require node.js, and will not run with apache. Ugh. But on the other hand, Comen needs node.js regardless. But wordpress requires PHP, and I am not sure that is going to play nice with node.js. Node.js does not play well with apache PHP, and Phabricator seems to use both of them, but likely only uses node.js for notifications, which can wait. The usual gimmick is to use Apache’s ProxyPass directive to pass stuff to node.js. Running both node.js and apache/PHP is likely to need a bigger server. Apache 2.4.6 has support for proxying websockets: mod_proxy_wstunnel.

The phabricator website suggests nginx + php-fpm + “MySQL” (MariaDB) + PHP. Probably this will suffice for Wordpress. nginx is the highest performance web server, but it is not node.js

Apache, node.js, and nginx can all coexist, by routing stuff to each other (usually with the highest performing one, nginx, on top) but you will need a bigger server that way

Blender is a huge open source success, breaking into the big time, being a free and open tool for threedee drawing. Its development platform is of course self hosted, but not on Gitlab, on Phabricator. Maybe they know what they are doing.

Phabricator is a development environment provided by Phacility. It is written in PHP, in a phabricator development environment, and is designed to support very large development communities and giant corporations. Being written in PHP makes me instantly suspicious. But it is free, and, unlike GitLab, genuinely open source.

Being written in PHP, it assumes a Lamp stack: apache2, php 5.2 or later, mysql 5.5 or later.

Phabricator assumes and enforces a single truth, it throws away a lot of the inherent decentralization of git. You always need to have one copy of the database, which is the King – which people spontaneously do with git, but with git it is spontaneous, and subject to change.

KDE seems to be moving away from Phabricator to Gitolite, but they have an enormously complex system, written in house, for managing access, of which Gitolite is part. Perhaps, Phabricator, being too big, and doing too much was inflexible and got in their way.

Github clients spontaneously choose one git repository as the single truth, but then you have the problem that Github itself is likely to be hostile. An easy solution is to have the Github repository be a clone of the remote repository, without write privileges.

Gitlab repository

Git, like email, automatically works – provided that all users have ssh login to the git user, but it is rather bare bones; better to fork out the extra cash and support gitlab – but gitlab is far from automagic, and expects one git address for git and one chat address for Mattermost, and I assume expects an MX record also.

Gitlab is a gigantic memory hog, and needs an absolute minimum of one core and four gig, with two cores and eight gig strongly recommended for anything except testing it out and toying with it. It will absolutely crash and burn on less than four gig. If you are running gitlab, there is no cost advantage to running it on debian. But for my own private vps, huge cost advantage.

Gitlab absolutely requires postgreSQL. They made a half assed effort to stay MySQL compliant, but fell short. It is not a big disk space hog – ten gigabytes spare will do, so it is fine on a thirty two gigabyte system.

Gitlab Omnibus edition contains the postfix server, thus can send and receive email at its host address.

Setup Gitlab with Protected branch flow

With the protected branch flow everybody works within the same GitLab project. The project maintainers get Maintainer access and the regular developers get Developer access. The maintainers mark the authoritative branches as ’Protected’. The developers push feature branches to the project and create merge requests to have their feature branches reviewed and merged into one of the protected branches. By default, only users with Maintainer access can merge changes into a protected branch. Each new project requires non-trivial manual setup.

But it seems likely to be easier just to use the main gitlab site, at least until my project is far enough advanced that the evil of github is likely a threat.

Gitlab is intended to be hosted on your own system, but to learn how to use it in a correctly configured gitlab environment – and to learn what a correctly configured gitlab environment looks like and how it is used – you are going to need an account on gitlab.

Gitlab requires that the OpenSSH port 22, the http port 80, and the https port 443 be forwarded. Http should always get an automatic redirect to https, governed by a Let’s Encrypt certificate.

GitLab Mattermost expects to run on its own virtual host, so in your DNS you would have two entries pointing to the same machine, e.g. «» and «». GitLab Mattermost is disabled by default; to enable it, just put the external url in the configuration file.
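As a sketch of what that configuration looks like (the domain names are placeholders; `external_url` and `mattermost_external_url` are the relevant Omnibus settings in `/etc/gitlab/gitlab.rb`):

```ruby
# /etc/gitlab/gitlab.rb -- hedged sketch, domain names are placeholders.
# GitLab itself answers on one virtual host:
external_url "https://git.example.com"

# Setting a Mattermost external url enables the bundled Mattermost
# on its own virtual host (second DNS entry, same machine):
mattermost_external_url "https://chat.example.com"
```

After editing, `sudo gitlab-ctl reconfigure` applies the change.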

Github, on the other hand, allows you to point your own domain name to your custom (static) github website and git repository as if on your own system.

I am suspicious of placing your own website on someone else’s system, especially a system owned by social justice warriors, and the restriction to static web pages is likely intended to facilitate political censorship and law enforcement, and physical attacks on dissidents by nominally private but state department supported thugs and Soros supported thugs.

Gitlab on the cloud

The Omnibus edition of gitlab is usually available already configured from cloud providers.

Instructions for installing it yourself.

It is likely advisable to set up apache, the apache virtual hosts, and the apache certificates first.

Gitlab markdown, like Github markdown, can mostly succeed in handling html tags.

We are not going to build on the cloud, but we will have source code, chat, and code of conduct on the cloud, probably on and

Configuring gitlab is non-trivial. You want anyone to be able to branch, and anyone to be able to issue a pull request, but you only want authorized users to be able to merge into the master.

Since we want to be open to the world, implement recaptcha for signups, but allow anyone in the world to pull without signing up. To create a branch, they have to sign up. Having created a branch, they can issue a pull request for an authorized user to pull their branch into the master.

Gitlab workflow is that you have a master branch with protected access, a stable branch with protected access, and an issue tracker.

You create the issue before you push the branch containing fixes for the issue. There must be at most one branch per issue.

Developers create a branch for any issues they are trying to fix, and their final commit should say something like “fixes #14” or “closes #67.” and then, when the branch is merged into the master, the issue will be marked fixed or closed.
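As a sketch of that flow in plain git (the issue number and branch name are hypothetical; in real use the repo is a clone of the GitLab project and you push the branch and open a merge request against the protected master):

```shell
set -e
# Throwaway local repo standing in for a clone of the GitLab project.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "initial"

# One branch per issue: a branch for (hypothetical) issue #14.
git checkout -q -b 14-fix-wallet-crash
echo fix > wallet.c
git add wallet.c
# The final commit message closes the issue once the branch is merged.
git commit -q -m "Fix wallet crash on empty config. Fixes #14"
git log -1 --pretty=%s
```

In real use you would finish with `git push -u origin 14-fix-wallet-crash` and create the merge request in the GitLab web interface.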

Digital Ocean offers a free entry, and a quite cheap system

But virmach, even cheaper.

Eight gig and two cores, which you will need to run gitlab for everyone, is Debian 9 enterprise edition at $40 per month.

Also, vpn on the cloud.

The currency project should be hosted on digital ocean at $20 per month (four gig, two cores), using the Gitlab free omnibus edition. They suggest configuring your own Postfix email server on the machine also, but should this not be automatic? It is probably already in the DigitalOcean Gitlab droplet. Postfix is a sendmail implementation.

If you have a static IP address, you can point your subdomain A record at it. Digital Ocean provides static IPs. DNS A record for IP address, DNS AAAA record for IPv6 address.
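As a zone-file sketch of those two records (name, TTL, and addresses are placeholders from the documentation ranges):

```
; hedged sketch -- placeholder name, documentation addresses
git.example.com.   300  IN  A     203.0.113.7
git.example.com.   300  IN  AAAA  2001:db8::7
```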

How to use the gitlab one click install image to manage git repositories.

You will need to get your SSL certificate from cyberultra and supply it to Gitlab (though gitlab has built in Let’s Encrypt automation, so maybe not).

Subdomains are a nameserver responsibility, so you really have to point your domain name nameservers to cyberultra, or else move everything to digital ocean.

All digital ocean ips, except floating ips, are static. You will need an A record for the IPv4 address and an AAAA record for the IPv6 address.

Getting started with gitlab and digitalocean.

Gitlab did not want to support fully browsable public repositories, but they have been supported since 6.2

Gitlab omnibus edition comes integrated with Mattermost, but mattermost is turned off by default.

Before spending money and going public, you might want to install locally and run on your local system, and enable Mattermost.

Since gitlab will have the root web page on its own domain name, you will need another DNS entry pointing at the same host for Mattermost. Though both are on the same machine, one is the root http page when accessed by one domain name, the other the root page when accessed by another domain name.

Implementing Gui in linux

coupling to the desktop

To couple to the desktop requires a pile of information and configuration, which most people ignore most of the time. To the extent that they provide it, they seem to write it for the Gnome based desktops, Cinnamon and Mate – more for Mate because it is older and has changed less.

Since wxWidgets is written for GTK in its linux version, it is written for Gnome.

Gnome3, the default Debian desktop, is broken, and stays broken largely because they refuse to acknowledge that it is broken, so the most standard linux environment – the one in which your practices are least likely to break on other linuxes – is Debian with Lightdm and Mate (pronounced Mah-tay).

Looks to me that KDE may be on the way out, hard to tell, Gnome3 is definitely on the way out, and every other desktop other than Cinnamon and Mate is rather idiosyncratic and non standard.

Lightdm-Mate has automatic login in a rather obscure and random spot. Linux has its command line features polished and stable, but is still wandering around somewhat lost figuring out how desktops should work.

Under Mate and KDE Plasma, bitcoin implements run-on-login by generating a bitcoin.desktop file and writing it into ~/.config/autostart

It does not, however, place the bitcoin.desktop file in any of the expected other places.

Under Mate and KDE Plasma, bitcoin stores its configuration data in ~/.config/Bitcoin/Bitcoin-Qt.conf, rather than in ~/.bitcoin

wxWidgets attempts to store its configuration data in an environment appropriate location under an environment appropriate filename.

It does not, however, seem to have anything to handle or generate desktop files.

Desktop files are the Linux desktop standard way for a gui program to integrate itself into the linux desktop, used to ensure your program appears in the main application menu, the linux equivalent of the windows start menu.
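A minimal sketch of such a desktop file (the program name and contents are hypothetical). Dropping it into `~/.local/share/applications` puts the program in the menu; writing it into `~/.config/autostart`, as bitcoin does, makes it run on login:

```shell
# Hedged sketch: write a minimal .desktop file for a hypothetical
# "mywallet" program into the autostart directory.
mkdir -p "$HOME/.config/autostart"
cat > "$HOME/.config/autostart/mywallet.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=My Wallet
Exec=mywallet --minimized
Icon=mywallet
Terminal=false
Categories=Network;Finance;
EOF
```

The same `[Desktop Entry]` format serves both the menu and autostart cases; only the directory differs.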

Getting your desktop file into the startup menu is slightly different in KDE to the way it is in Gnome, but there are substantial similarities. FreeDesktop tries to maintain and promote uniformity. Gnome rather casually changed the mechanism in a minor release, breaking all previous desktop applications.


Every Linux desktop is different, and programs written for one desktop have a tendency to die, mess up, or crash the desktop when running on another Linux desktop.

Flatpak is a sandboxing environment designed to make every desktop look to the program like the program’s native desktop (which for wxWidgets is Gnome), and every program look to the desktop like a program written for that particular desktop.

Flatpak simulates Gnome or KDE desktops to the program, and then translates Gnome or KDE behaviour to whatever the actual desktop expects. To do this, it requires some additional KDE configuration for Gnome desktop programs, some additional Gnome information for KDE desktop programs, and some additional information to cover the other 101 desktops.

wxWidgets tries to make all desktops look alike to the programmer, and Flatpak tries to make all desktops look alike to the program, but they cover different aspects of program and desktop behaviour, so both are needed. Flatpak covers interaction with the launcher, iconization, the install procedure, and so forth, which wxWidgets does not cover.
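A hedged sketch of what a Flatpak manifest for such a program might look like (the app id, module, and runtime version are assumptions; the Gnome runtime matches wxWidgets’ GTK backend):

```yaml
# flatpak-builder manifest sketch -- names and versions are placeholders
app-id: com.example.MyWallet
runtime: org.gnome.Platform
runtime-version: '45'
sdk: org.gnome.Sdk
command: mywallet
finish-args:
  - --share=network          # a wallet needs the network
  - --socket=fallback-x11    # display access on X11 desktops
  - --socket=wayland         # display access on Wayland desktops
modules:
  - name: mywallet
    buildsystem: simple
    build-commands:
      - install -D mywallet /app/bin/mywallet
```

The `finish-args` are the holes deliberately punched in the sandbox; everything not listed stays inaccessible to the program.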

Linux installs tend to be wildly idiosyncratic, and the installed program winds up never being integrated with the desktop, and never updated.

Flatpak provides package management and automatic updates, all the way from the git repository to the end user’s desktop, which wxWidgets cannot.

This is vital, since we want every wallet to talk the same language as every other wallet.

Flatpak also makes all IPC look alike, so you can have your desktop program talk to a service, and it will be talking Gnome IPC on every linux.

Unfortunately Flatpak does all this by running programs inside a virtual machine with a virtual file system, which denies the program access to the real machine, and denies the real machine access to the program’s environment. So the end user cannot easily do stuff like edit the program’s config file, or copy bitcoin’s wallet file or list of blocks.

A program written for the real machine, but actually running in the emulated Flatpak environment, is not going to have the same behaviors. The programmer has total control over the environment in which his program runs – which means that the end user does not.

Censorship resistant internet

My planned system


Namecoin plus bittorrent based websites. Allows dynamic content.

Not compatible with wordpress or phabricator. You have to write your own dynamic site in python and CoffeeScript.


Messaging system, email replacement, with proof of work and hidden servers.

Non instant text messages. Everyone receives all messages in a stream, but only the intended recipients can read them, making tracing impossible, hence no need to hide one’s IP. Proof of work requires a lot of work, to prevent streams from being spammed.

Not much work has been done on this project recently, though development and maintenance continues in a desultory fashion.

Tcl Tk

An absolutely brilliant and ingenious language for producing cross platform UI. Unfortunately, mostly dead. The documentation for a key tool and system you would need to develop in a Tcl Tk environment was last updated twenty years ago. Therefore, the way to go is wxWidgets. I am a tad suspicious about Code::Blocks wxSmith.

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.