Category Archives: How To

How to run Git pull in all subdirectories

Git pull allows you to retrieve the latest changes from the remote repository.

If you ever need to run git pull in all subdirectories but don’t want to do it manually for each of them, you can create a bash script as follows:

#!/usr/bin/env bash
for dir in ./*/
do
    cd "${dir}" || continue
    # only pull if this directory is a git repository
    if git status >/dev/null 2>&1; then
        echo "Updating ${dir%*/}..."
        git pull
    fi
    cd ..
done

Make the script executable:

chmod +x git-pull.sh

And run ./git-pull.sh



Primary node election in distributed computing

A primary node can help coordinate multiple nodes deployed in a cloud. One of the projects I’ve worked on had a NodeJS worker that ran multiple types of tasks. I wanted to upgrade this setup so that it scales easily, has a primary node (or coordinator) that triggers the tasks, and continues processing even when some of the nodes fail.

The Checklist

  • All nodes may participate in an “election” to choose the coordinator
  • Support any number of nodes, including 1 node setup
  • Handle node failure (even of the coordinator node) without impacting the flow
  • Allow new nodes to join the party
  • Don’t depend on an expensive technology (paid or resource-hungry)

Besides the main scope of the solution I also needed to ensure that the election of the coordinator follows these 3 basic principles:

  • Termination: the election process must complete in a finite time
  • Uniqueness: only one node can be coordinator
  • Agreement: all other nodes know who’s the coordinator.

With all of these scenarios in mind, I started investigating the algorithms and solutions the market uses (Apache ZooKeeper, port locking, ring networks). However, most of these require a lot of setup or were incompatible with a multi-server deployment, and I also wanted to embrace a KISS approach, so continue reading to see the solution.

The Primary Node Election Algorithm

  1. The node generates a random numeric id
  2. The node retrieves the COORDINATOR_ID key from Redis
  3. If the key is not NULL
    • We have a coordinator
    • Wait Z (e.g. Z = 1 hour)
    • Go to Step 2
  4. If the key is NULL
    • No coordinator is announced
    • Push the id from Step 1 into a Redis list
    • Wait X seconds (depending on how long the deployment takes, e.g. 10 seconds)
    • Retrieve all the items in the list and extract the highest number
    • If the result === the node id
      • The current node is primary
      • Set the Redis key COORDINATOR_ID with expiry Z+X
      • Do all the hard work 🙂
    • Wait Z
    • Go to Step 2
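
To make the steps above more concrete, here is a minimal sketch of the election loop in NodeJS, assuming the node-redis v4 client; the CANDIDATE_IDS list name and doCoordinatorWork() are placeholders for this example, not part of the original setup:

// A minimal sketch of the election loop, assuming the node-redis v4 client.
// CANDIDATE_IDS and doCoordinatorWork() are placeholders for this example.
const { createClient } = require('redis');

const Z_SECONDS = 60 * 60; // Z: how often a node re-checks the coordinator (e.g. 1 hour)
const X_SECONDS = 10;      // X: how long to wait for other candidates (e.g. 10 seconds)

const sleep = (seconds) => new Promise((resolve) => setTimeout(resolve, seconds * 1000));

async function electionLoop() {
    const client = createClient();
    await client.connect();

    // Step 1: generate a random numeric id for this node
    const nodeId = Math.floor(Math.random() * Number.MAX_SAFE_INTEGER);

    while (true) {
        // Step 2: is a coordinator already announced?
        const coordinatorId = await client.get('COORDINATOR_ID');

        if (coordinatorId === null) {
            // Step 4: no coordinator announced, register as a candidate
            await client.rPush('CANDIDATE_IDS', String(nodeId));
            await sleep(X_SECONDS); // give the other nodes time to register

            // the highest candidate id wins the election
            const candidates = await client.lRange('CANDIDATE_IDS', 0, -1);
            const winner = Math.max(...candidates.map(Number));

            if (winner === nodeId) {
                // announce this node as coordinator; the key expires after Z+X
                await client.set('COORDINATOR_ID', String(nodeId), { EX: Z_SECONDS + X_SECONDS });
                await doCoordinatorWork(); // placeholder: trigger the tasks
            }
        }

        // Step 3 / end of Step 4: wait Z, then check again
        await sleep(Z_SECONDS);
    }
}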

The downside of this solution is that if the coordinator node fails, it can take up to 2*Z until a new election takes place: the COORDINATOR_ID key only expires after Z+X, and each surviving node re-checks it only once every Z, so in the worst case a node has just seen the key and waits almost another full Z before noticing it is gone.

There’s room for improvement, so please don’t hesitate to leave feedback 🙂

Docker Compose YAML – Most Wanted Fields

This article describes the basic fields that can be configured in a Docker Compose YAML. It should help you bootstrap your Compose YAML and get your services up and running.

The fields described are available in Docker Compose YAML version 3.7. If your YAML version is different or your Docker Engine is too old, you might not be able to use all the fields.

We’re going to build up the YAML from scratch, adding new fields as we need them.

Our initial docker-compose.yml looks like this:

version: '3.7'
services:
    database:

image

In order to run a container you need an image that describes what the container includes. You can specify it using the image field (key). The value can be a string that refers to an image from the public Docker Hub or a URL pointing to any other registry.

For example, if you want to use the official postgres image from Docker Hub, you would specify the name like this:

version: '3.7'
services:
    database:
        image: postgres

build

Sometimes we need custom images built from our own Dockerfile. You can specify which Dockerfile to use and set the build context using the context and dockerfile fields (the dockerfile path is resolved relative to the build context).

version: '3.7'
services:
    database:
        build:
            context: ./my_project_folder
            dockerfile: Dockerfile

If you specify both image and build, then docker-compose names the resulting image from the build using the name specified as the image value.

ports

The ports field allows configuration of the ports mapped between the container and the host machine.

You can specify multiple ports in the format: <HOST_PORT>:<CONTAINER_PORT>

version: '3.7'
services:
    database:
        image: postgres
        ports:
            - 5432:5432

Also check out these articles if you’re not familiar with Docker or Docker Compose.

How to expose multiple ports from Docker Container

One important aspect of an HTTP web server is being able to handle both HTTPS and HTTP requests, which requires binding to multiple ports.

However, handling plain HTTP requests should be kept to a minimum, which means these requests should simply be redirected to the HTTPS server handler.

My server listens to both 443 and 80, but it redirects all 80 requests to 443.
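
For context, here is a minimal sketch of that setup using Node’s built-in http and https modules; the certificate paths and the response body are placeholders:

const fs = require('fs');
const http = require('http');
const https = require('https');

// HTTPS server doing the real work on port 443 (certificate paths are placeholders)
const options = {
    key: fs.readFileSync('/path/to/privkey.pem'),
    cert: fs.readFileSync('/path/to/fullchain.pem')
};
https.createServer(options, (req, res) => {
    res.writeHead(200);
    res.end('Hello over HTTPS\n');
}).listen(443);

// minimal HTTP server on port 80 that only redirects to the HTTPS server
http.createServer((req, res) => {
    const host = (req.headers.host || '').replace(/:\d+$/, '');
    res.writeHead(301, { Location: `https://${host}${req.url}` });
    res.end();
}).listen(80);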

All good, until the production deploy using a Docker container.

The problem I was facing, and for which I did not find a quick solution in the Docker documentation: how to bind multiple host:container ports.

It turns out this is much simpler than it first sounds:

docker run ... -p <HOST_PORT1>:<CONTAINER_PORT1> -p <HOST_PORT2>:<CONTAINER_PORT2>

That’s all.

You can specify as many ports as needed.

OSX Global NPM Module command not found

In case you ended up in a situation where you just installed a global NPM module, but it still throws command not found, here’s what you have to do:

Find out where the global NPM modules are installed by running:

npm prefix -g

Double check that your $PATH does not already contain that value:

echo $PATH

If the value is not included, you must update your /etc/paths with the NPM location:

sudo vi /etc/paths

Add the value returned by npm prefix -g with /bin appended.

e.g. /Users/catalinmunteanu/.npm-global/bin

Save the file and exit.

Open a new terminal tab/window and retry the command.

Cheers!

Node n permission denied without sudo

Each time I do a fresh install of the tj/n Node version manager, I end up getting permission denied errors when running npm install.

If you also ran into this issue, well, there’s a quick fix.

The issue is caused by the n installation being owned by root.

The two following steps will help you continue in peace 😀

which n

Returns the install location of the n package, e.g. /Users/username/n

sudo chown -R $(whoami) <PATH_WHICH_N>

Sets the current user as owner.

You can now install the NPM packages without the power of sudo.

How to send signal to NodeJS process running in Linux

There might be times when you want to enable debugging or just change some NodeJS configuration without restarting the process. Scroll down to learn how to send signals to a NodeJS process running on Linux.


One extremely easy way of doing this is to send a signal to the running Node pid.

You can get the process pid by running:

lsof -i :<PORT>

This lsof command works if your process is running in a Docker container as well.

We can send the Linux signal using:

# kill -s <SIGNAL> <PID>
kill -s SIGUSR2 999

As you can see, it’s quite easy to send a signal. Check out the official docs to find out more about how Node treats signals.

Now that we know how to send signals, we have to handle them in our NodeJS service:

function handle(signal) {
    console.log(`Received ${signal}`)
    // take some action depending on the signal received
}
process.on('<SIGNAL>', handle)

That’s all.

You can now do whatever you want when a signal occurs, without restarting the Node process: reload the configuration, enable debugging, logs and so on.

In one of my apps I used the signal to enable debugging of all HTTP requests.
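
As an illustration, here is a minimal sketch of that idea: toggling request logging whenever the process receives SIGUSR2 (the signal used in the kill example above); the port and the log format are just assumptions for the example:

const http = require('http');

let debugRequests = false;

// toggle request debugging every time the process receives SIGUSR2
process.on('SIGUSR2', () => {
    debugRequests = !debugRequests;
    console.log(`Request debugging is now ${debugRequests ? 'enabled' : 'disabled'}`);
});

http.createServer((req, res) => {
    if (debugRequests) {
        console.log(`${new Date().toISOString()} ${req.method} ${req.url}`);
    }
    res.end('OK\n');
}).listen(3000, () => console.log(`Listening on 3000, pid ${process.pid}`));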

Docker Tutorial – Getting started with Docker

This Docker Tutorial should help you get started with Docker.

  1. How to get a running Docker Machine on your computer
  2. How to create and manage Docker Images
  3. How to create and manage Docker containers
  4. How to delete Docker images and containers
  5. How to login to Docker Hub
  6. How to publish images to Docker Hub

You’ll learn how to start a Docker Machine on your PC, create images and containers, clean up your Docker Machine, log in to Docker Hub, and tag and publish images.

1. How to get a running Docker Machine on your computer

First thing you need to do before starting to manage images and containers is to have a Docker Machine up and running. For this tutorial I’m using VirtualBox for running the Docker Machine.

The next command creates a Docker Machine named dev. The command will check for and download the required dependencies, create the VM, create an SSH key, start the VM, assign an IP, and generate certificates.

docker-machine create --driver virtualbox dev

Running the above command is only needed the first time you create a Docker Machine. When you just need to start it you must run:

docker-machine start dev

After you have the Docker Machine running, you’ll be able to see the PORT and IP by running the next command. The printed result also contains a short command that you must run in order to set Docker Machine configuration in your environment.

docker-machine env dev

You must now set the environment with the Docker Machine configuration for the current terminal session by running:

eval $(docker-machine env dev)

That’s all for the first step. Starting a Docker Machine is quite easy.

In case you’re getting an IP allocation error, check out this article. It might help.

2. How to create and manage Docker Images

Now that we have the machine up and running we can list all the available images with:

docker images

In order to create a new image we must have a Dockerfile. I won’t get into how to write a Dockerfile in this article, but I’ll continue the tutorial using a basic Nginx Dockerfile. Place this content in a file named Dockerfile:

FROM nginx:latest
RUN nginx -v
EXPOSE 80
CMD ["bash", "-c", "nginx -g 'daemon off;'"]

Now you have a base Dockerfile and you can create an image from it by running:

docker build -t the_tag .

This command requires the Dockerfile to be in the cwd. The dot at the end specifies the build context (the current directory). You can also build using a Dockerfile that’s not in the cwd by specifying its path with -f:

docker build -f Dockerfile -t the_tag .

For both of the above commands you’ll see the following steps being executed:

Sending build context to Docker daemon 338.1MB
Step 1/4 : FROM nginx:latest
 ---> f68d6e55e065
Step 2/4 : RUN nginx -v
 ---> Using cache
 ---> 46c77d837d51
Step 3/4 : EXPOSE 80
 ---> Using cache
 ---> b31d714e67f1
Step 4/4 : CMD ["bash", "-c", "nginx -g 'daemon off;'"]
 ---> Using cache
 ---> c3f3bfb92c45
Successfully built c3f3bfb92c45
Successfully tagged the_tag:latest

If something goes wrong in any step the build will fail and the error will be printed. If you now run docker images again you’ll see:

REPOSITORY       TAG           IMAGE ID          CREATED             SIZE
the_tag          latest        c3f3bfb92c45      7 minutes ago       109MB

3. How to create and manage Docker containers

We now have a Docker image created and we’re ready to create and manage containers. You can list all available containers by running:

docker ps

Create a new container using the image we built by running:

docker run -d --name docker-nginx -p 80:80 the_tag

-d runs the container in detached mode
--name lets you easily identify the container by setting a label
-p publishes the HOST:CONTAINER ports.

If you now run docker ps you’ll see something like this:

CONTAINER ID    IMAGE        COMMAND     CREATED          STATUS          PORTS                NAMES
c2e1bdcf12a0    the_tag      "bash -c"   3 seconds ago    Up 2 seconds    0.0.0.0:80->80/tcp   docker-nginx

You may now stop and delete this container by running:

docker stop c2e1bdcf12a0
docker rm c2e1bdcf12a0

4. How to delete Docker images and containers

In some articles I previously wrote I described how to keep your Docker Machine clean and how to delete certain images and containers or do a full cleanup.

Check out this article if you’re on OSX/Linux.

Check out this article if you’re using Windows.

5. How to login to Docker Hub

If you plan to publish your images to Docker Hub you’ll probably need to login.

docker login --username username@example.com --password THE_PASSWORD docker-hub.example.com:PORT

Be careful to specify the PORT. If you don’t, the authentication will work but publishing images will not find a token for the host.

6. How to publish images to Docker Hub

You now have an image and are authenticated to your Docker Hub. You can publish the image and make it available for others to use as well.

docker images # prints the hash
docker tag HASH docker-hub.example.com:443/<IMAGE_REPOSITORY_NAME>:0.0.1
docker push docker-hub.example.com:443/<IMAGE_REPOSITORY_NAME>

That’s it. The image is now on the hub.

How to clear Docker Machine on Linux

A few intensive hours building Docker containers quickly used up all the allocated space.

Deleting Docker images can be done using -f to force the deletion of images that are still referenced by existing containers.

docker images
docker rmi <IMAGE_ID> -f

Deleting Docker containers:

docker ps
docker rm <CONTAINER_ID>

Easy right?

What if you got tens of images? Still easy.

There are multiple ways of deleting Docker images depending on how they’re used.

Dangling images are not tagged and not used by any container. You can remove these images using (the -f flag skips the confirmation prompt):

docker image prune -f

If you instead want to remove all images that are not referenced by any container:

docker image prune -a

Cleaning containers is an easy task, but you can only remove containers that are not running. The following command will stop all running containers:

docker container stop $(docker container ls -aq)

Now that we don’t have any running containers we can completely remove them using:

docker container rm $(docker container ls -aq)

Check this post and learn how to quickly delete Docker containers and images if you’re running on Windows.

How to generate X-WSSE Token using Java

Learn how to generate an X-WSSE Token and how to authorize requests using X-WSSE header authentication.

If you’re not familiar with X-WSSE Token Authentication and why you should use it, go ahead and read this article that contains the basics of this type of authentication.

In this article I’ll describe how to generate an X-WSSE Token using Java.

import java.io.UnsupportedEncodingException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Random;
import java.util.TimeZone;
import javax.xml.bind.DatatypeConverter;
 
public class XWsse {

    // lookup table used to convert raw bytes to a lowercase hex string
    protected static final char[] hexArray = "0123456789abcdef".toCharArray();

    public static void main(String[] args) {
        String xwsse = getWsseHeader("CLIENT_ID", "CLIENT_SECRET");
        System.out.println(xwsse);
    }

    // builds the full X-WSSE header value from the client id and secret
    private static String getWsseHeader(String username, String secret) {
        String nonce = getNonce();
        String created = getUTCTimestamp();
        String digest = getPasswordDigest(nonce, created, secret);

        return String.format("UsernameToken Username=\"%s\", PasswordDigest=\"%s\", Nonce=\"%s\", Created=\"%s\"", username, digest, nonce, created);
    }

    // current time in UTC, formatted as an ISO-8601 style timestamp
    private static String getUTCTimestamp() {
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZ");
        sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
        return sdf.format(new Date());
    }

    // 16 random bytes, hex encoded, used once per request
    private static String getNonce() {
        byte[] nonceBytes = new byte[16];
        new Random().nextBytes(nonceBytes);
        return bytesToHex(nonceBytes);
    }

    // PasswordDigest = Base64(hex(SHA-256(nonce + created + secret)))
    private static String getPasswordDigest(String nonce, String created, String secret) {
        String digest = "";
        try {
            MessageDigest messageDigest = MessageDigest.getInstance("SHA-256");
            messageDigest.reset();
            String hashedString = String.format("%s%s%s", nonce, created, secret);
            messageDigest.update(hashedString.getBytes("UTF-8"));
            String sha256Sum = bytesToHex(messageDigest.digest());
            digest = DatatypeConverter.printBase64Binary(sha256Sum.getBytes("UTF-8"));
        } catch (NoSuchAlgorithmException ex) {
            System.out.println("No SHA-256 algorithm found");
        } catch (UnsupportedEncodingException ex) {
            System.out.println("Unable to use UTF-8 encoding");
        }
        return digest;
    }

    // converts a byte array to its lowercase hex representation
    private static String bytesToHex(byte[] bytes) {
        char[] hexChars = new char[bytes.length * 2];
        for (int j = 0; j < bytes.length; j++) {
            int v = bytes[j] & 0xFF;
            hexChars[j * 2] = hexArray[v >>> 4];
            hexChars[j * 2 + 1] = hexArray[v & 0x0F];
        }
        return new String(hexChars);
    }

}

That’s it. Check my other X-WSSE Articles and learn how to generate the token using other programming languages.