Category Archives: Linux


Dynamic Linking Error: /usr/lib/x86_64-linux-gnu/libcurl.so.4: version `CURL_OPENSSL_4' not found

  1. Why is this happening?
  2. Am I in the same situation?
  3. The Million $ Solution

Why is this happening?

Well, if you ended up on this page, you are most likely in a bad situation.

It took me a good few hours to debug and solve this problem, but maybe you can close this chapter much faster. You are encountering this issue because a native module that you are using depends on CURL_OPENSSL_4.

Am I in the same situation?

There are multiple reasons why you may encounter this error: CURL_OPENSSL_4 not found.

The most common ones are:

  1. The symbolic link /usr/lib/x86_64-linux-gnu/libcurl.so.4 -> /usr/lib/x86_64-linux-gnu/libcurl.so.4.X.X is broken. You might need to recreate it (see the sketch right after this list).
  2. libcurl is not installed -> easy fix -> install it
  3. The installed libcurl version is wrong -> this is what we’re gonna cover
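For the first case, here is a hedged sketch of recreating the link; the exact libcurl.so.4.X.X file name depends on what is actually installed on your system:

# list the real libcurl shared objects on disk
ls -l /usr/lib/x86_64-linux-gnu/libcurl.so.4*
# recreate the link, replacing 4.6.0 with the version you actually have
sudo ln -sf /usr/lib/x86_64-linux-gnu/libcurl.so.4.6.0 /usr/lib/x86_64-linux-gnu/libcurl.so.4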

If you are a Linux user who is also into system-level software development, you may find yourself in situations where you need information about the symbols in an object file. You’ll be glad to know there exists a command line utility – nm – that you can use in these situations.

The nm command line utility basically lists symbols from object files. Here’s the tool’s syntax:

nm [OPTIONS] OBJECT-FILENAME

Run:

nm -D /usr/lib/x86_64-linux-gnu/libcurl.so.4 | grep CURL

What you should see is:

0000000000000000 A CURL_OPENSSL_4

What you might see:

0000000000000000 A CURL_OPENSSL_3

Well, clearly things don’t look good. Your Linux distribution ships another libcurl version. You may attempt to uninstall it and install a different version, but that might break apt.

The Million $ Solution

I switched to a Linux distribution that includes libcurl4.

I was using node:12-slim, which is built on Debian Stretch. You cannot install libcurl4 on this version since its libcurl is built with CURL_OPENSSL_3.

I switched to node:12-buster-slim, which is built on top of Debian Buster, installed libcurl4, and things started working.

apt-get install libcurl4 -y
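To confirm the fix, you can re-run the nm check from earlier; the CURL_OPENSSL_4 symbol should now be listed:

nm -D /usr/lib/x86_64-linux-gnu/libcurl.so.4 | grep CURL
# 0000000000000000 A CURL_OPENSSL_4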

That’s all 🙂

How to run Git pull in all subdirectories

Git pull allows you to retrieve the latest changes from the remote repository.

If you ever need to run git pull in all subdirectories but you don’t want to do it manually for each of them, you can create a bash script as follows:

#!/usr/bin/env bash
for dir in ./*/
do
    cd "${dir}" || continue
    # check that we're inside a git repository before pulling
    if git status >/dev/null 2>&1; then
        echo "Updating ${dir%*/}..."
        git pull
    fi
    cd ..
done

Set the permissions:

chmod +x git-pull.sh

And run ./git-pull.sh

Source
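If you prefer not to keep a separate script around, a rough one-liner along the same lines (it simply skips directories that don’t contain a .git folder):

for dir in ./*/; do [ -d "${dir}.git" ] && echo "Updating ${dir%*/}..." && git -C "${dir}" pull; done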


Primary node election in distributed computing

A primary node can help with the coordination of multiple nodes deployed in the cloud. One of the projects I’ve worked on had a NodeJS worker that ran multiple types of tasks. I wanted to upgrade this setup so that it would be easily scalable, have a primary node (or coordinator) that triggers the tasks, and continue processing even when some of the nodes fail.

The Checklist

  • All nodes may participate in an “election” to choose the coordinator
  • Support any number of nodes, including a single-node setup
  • Handle node failures (even of the coordinator node) without impacting the flow
  • Allow new nodes to join the party
  • Don’t depend on an expensive technology (paid or resource-hungry)

Besides the main scope of the solution, I also needed to ensure that the election of the coordinator follows these three basic principles:

  • Termination: the election process must complete in a finite time
  • Uniqueness: only one node can be coordinator
  • Agreement: all other nodes know who the coordinator is.

Once I had all the scenarios in mind, I started investigating the different algorithms and solutions used in the industry (Apache ZooKeeper, port locking, ring networks). However, most of these require a lot of setup or are incompatible with a multi-server deployment, and I also wanted to embrace a KISS approach, so continue reading to see the solution.

The Primary Node Election Algorithm

  1. The node generates a random numeric id
  2. The node retrieves the COORDINATOR_ID key from Redis
  3. If the key is not NULL
    • We have a coordinator
    • Wait Z minutes (e.g. Z = 1 hour)
    • Go to Step 2
  4. If the key is NULL
    • No coordinator has been announced
    • Push the id from Step 1 to a Redis list
    • Wait X seconds (depending on how long the deployment takes, e.g. 10 seconds)
    • Retrieve all items in the list and extract the highest number
    • If the result === the node id
      • The current node is primary
      • Set the Redis key COORDINATOR_ID with expiry Z+X
      • Do all the hard work 🙂
    • Wait Z minutes
    • Go to Step 2
The downside of this solution is that, if the coordinator node fails, it can take up to 2*Z minutes until a new election takes place.

There’s room for improvement, so please don’t hesitate to leave feedback 🙂

Docker Compose Tutorial – Getting Started

This tutorial should help people with no Docker Compose knowledge as well as those who just need to refresh their memory 😀

  1. What is Docker Compose?
  2. What do I need? The prerequisites
  3. The docker-compose.yml
  4. Docker Compose commands

1. What is Docker Compose?

Docker Compose is a tool that can be used to define and run multi-container Docker applications.

The application’s services are defined and configured using a YAML file.

With a single command you may manage all your application’s services.

2. What do I need? The prerequisites

Docker Compose

Obviously you need to have Docker Compose installed on your machine. There are already plenty of articles out there and even the official Docker Compose Install documentation so we’re gonna skip that part.

Docker Machine

You can’t run any container without a Docker Machine up and running. If you don’t know how to do this, check out my Docker Tutorial.

3. The docker-compose.yml

In order to use Docker Compose you need a YAML file which defines how your applications run, how they communicate, what images (or Dockerfiles) they use and other aspects.

For the following tutorial steps you may either use your own YAML or the sample YAML from below:

version: '3.7'
services:
    db:
        image: postgres
        restart: always
        environment:
            POSTGRES_PASSWORD: THEPASSWORD
        ports:
            - 5432:5432
    adminer:
        image: adminer
        restart: always
        ports:
            - 8080:8080

If you want to learn more about what the docker-compose.yml offers, you can go ahead and read this article.

4. Docker Compose Commands

All YAML Defined Services

Starting an application with all associated services:

docker-compose up --build

The above command builds any images that were never built or that have changed since the last run. After the build is done, all containers are started and the console remains attached to them.

However, running the containers with this command doesn’t allow you to detach from them without stopping them. So you should specify that you want to run in detached mode:

docker-compose up --build -d

Stopping all containers:

docker-compose stop

Stopping all containers, removing them and the networks:

docker-compose down

Stopping and removing all containers, networks and volumes:

docker-compose down --volumes

Specific YAML Defined Service

Now that we have all containers running, if we want to manage only one container, we have the following commands:

Stopping a container:

docker-compose stop db

Starting a container:

docker-compose start db

Rebuilding a container:

docker-compose up --build db

Restarting a container:

docker-compose restart db

Viewing logs of a single container:

docker-compose logs db
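If you want to keep streaming the output instead of getting a one-off dump, the logs command also accepts a follow flag:

docker-compose logs -f db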

Hopefully this Docker Compose tutorial helps you understand what Compose is and how to manage your containers with it.

If you’re not bored yet, check out my other Docker Articles.

Docker Compose YAML – Most Wanted Fields

This article describes the basic fields that can be configured in a Docker Compose YAML. It should help you bootstrap your Compose YAML and get your services up and running.

The fields described are available in Docker Compose YAML version 3.7. If your YAML version is different or your Docker Engine is too old, you might not be able to use all the fields.

What we’re gonna do is build up the YAML from scratch, adding new fields as we require them.

Our initial docker-compose.yml looks like this:

version: '3.7'
services:
    database:

image

In order to run a container you need an image which describes what the container includes. You can specify it using the image field (key). The value can be a string that refers to an image from the public Docker Hub or a URL pointing to any other registry.

For example if you want to use the official postgres image from here, you would specify the name like this:

version: '3.7'
services:
    database:
        image: postgres

build

Sometimes we need to use custom images that are created from a custom-made Dockerfile. You can specify which Dockerfile to use and which build context to allocate using the context and dockerfile fields.

version: '3.7'
services:
    database:
        build:
            context: ./my_project_folder
            dockerfile: Dockerfile

If you specify both image and build, docker-compose builds the image and names it using the value specified in the image field.

ports

The ports field allows configuration of the ports mapped between the container and the host machine.

You can specify multiple ports in the format: <HOST_PORT>:<CONTAINER_PORT>

version: '3.7'
services:
    database:
        image: postgres
        ports:
            - 5432:5432

Also check out these articles if you’re not familiar with Docker or Docker Compose.

How to expose multiple ports from Docker Container

One important aspect of an HTTP web server is being able to handle both HTTPS and HTTP requests, which requires binding to multiple ports.

However, handling HTTP requests should be done at a minimal level, which means these requests should be redirected to the HTTPS server handler.

My server listens to both 443 and 80, but it redirects all 80 requests to 443.

All was good until the production deployment using a Docker container.

The problem I was facing, and for which I did not find any quick solution in the Docker documentation, was how to bind multiple host:container ports.

It turns out this is much simpler than it first sounds:

docker run ... -p <HOST_PORT1>:<CONTAINER_PORT1> -p <HOST_PORT2>:<CONTAINER_PORT2>

That’s all.

You can specify as many ports as needed.
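For the HTTP/HTTPS case described above, the command looks roughly like this (the image name my-web-server is just a placeholder):

docker run -d -p 80:80 -p 443:443 my-web-server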

This machine has been allocated an IP address, but Docker Machine could not reach it successfully – Docker Machine Error

I recently encountered this docker machine IP allocation error while trying to power up my docker machine.

This machine has been allocated an IP address, but Docker Machine could not
reach it successfully.
SSH for the machine should still work, but connecting to exposed ports, such as
the Docker daemon port (usually :2376), may not work properly.
You may need to add the route manually, or use another related workaround.
This could be due to a VPN, proxy, or host file configuration issue.
You also might want to clear any VirtualBox host only interfaces you are not using.
Checking connection to Docker…
Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.100.101:2376": dial tcp 192.168.100.101:2376: i/o timeout
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.

I tried to regenerate the certs, both with the existing network adapter config and with a new one, but the error kept reproducing.

The only fix that worked for me was to restart the VirtualBox network adapter. Open the VirtualBox UI, go to Preferences -> Network and view the name of the adapter.

It’s usually vboxnet0 or vboxnet1.

After manually restarting that network adapter things should work nicely:

sudo ifconfig vboxnet0 down && sudo ifconfig vboxnet0 up

This did the trick for me.

OSX Global NPM Module command not found

In case you ended up in a situation where you just installed a global NPM module, but it still throws command not found, here’s what you have to do:

Find out where the global NPM modules are installed by running:

npm prefix -g

Double check that your $PATH does not already contain that value:

echo $PATH

If the value is not included, you must update your /etc/paths with the NPM location:

sudo vi /etc/paths

Add the value returned by npm prefix -g with /bin appended.

e.g. /Users/catalinmunteanu/.npm-global/bin

Save the file and exit.

Open a new terminal tab/window and retry the command.
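Alternatively, if you’d rather not edit /etc/paths, you can append the npm bin directory to your PATH from your shell profile (~/.zshrc or ~/.bash_profile, depending on your shell); a rough sketch:

# prepend the global npm bin directory to PATH for the current shell
export PATH="$(npm prefix -g)/bin:$PATH"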

Cheers!

Node n permission denied without sudo

Each time I do a fresh install of the Node version manager tj/n, I end up getting permission denied when running npm install.

If you also ran into this issue, well, there’s a quick fix.

The issue is caused by the n installation being owned by root.

The following two steps will help you continue in peace 😀

which n

This returns the install location of the n package, e.g. /Users/username/n

sudo chown -R $(whoami) <PATH_WHICH_N>

Sets the current user as owner.

You can now install the NPM packages without the power of sudo.
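The two steps above can also be combined into a single command, assuming which n points at the location you want to re-own:

# make the current user the owner of the n installation in one go
sudo chown -R "$(whoami)" "$(which n)"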

How to send signal to NodeJS process running in Linux

There might be times when you want to enable debugging or just change some NodeJS configuration without restarting the process. Scroll down to learn how to send signals to a NodeJS process running on Linux.


One extremely easy way of doing this is to send a signal to the running Node pid.

You can get the process pid by running:

lsof -i :<PORT>

This lsof command works if your process is running in a Docker container as well.

We can send the Linux signal using:

# kill -s <SIGNAL> <PID>
kill -s SIGUSR2 999

As you can see, it’s quite easy to send a signal. Check out the official docs to find out more about how Node treats signals.

Now that we know how to send signals, we have to handle them in our NodeJS service:

function handle(signal) {
    console.log(`Received ${signal}`)
    // take some action depending on the signal received
}
// register the handler for the signal you plan to send, e.g. SIGUSR2
process.on('SIGUSR2', handle)

That’s all.

You can now do whatever you want when a signal occurs, without restarting the Node process: reload the configuration, enable debugging or logging, and so on.

In one of my apps I used the signal to enable debugging of all HTTP requests.