How to run PHP in a Docker Container

If you ended up here, you probably want a fast solution to get your PHP scripts running without installing Apache on your host PC.

Well, there you go (docker-compose.yml):

version: '3.8'
services:
    php-apache:
        container_name: php-apache
        image: php:8.0-apache
        volumes:
            - ./source/code:/var/www/html/
        ports:
            - 8000:80

Run docker-compose up and voila 😀


Vue PIN code input component

By the end of this post you'll know how to implement a basic Vue PIN code input component.

One of my projects required an authorization layer for protecting some resources.

I ended up building a quite simple Vue component that can be integrated into any app and easily extended to fit any needs.

The template of the Vue Pin Code Component looks like this:

<template>
    <div>
        <div class="input-group">
            <input v-model.number="pin_0"
                   v-on:keyup.right="pin_focus('pin_1')"
                   v-on:keypress="is_valid_pin_value($event, 'pin_0')"
                   ref="pin_0" type="text" placeholder="0">
            <input v-model.number="pin_1"
                   v-on:keyup.left="pin_focus('pin_0')"
                   v-on:keyup.right="pin_focus('pin_2')"
                   v-on:keypress="is_valid_pin_value($event, 'pin_1')"
                   ref="pin_1" type="text" placeholder="0"">
            <input v-model.number="pin_2"
                   v-on:keyup.left="pin_focus('pin_1')"
                   v-on:keyup.right="pin_focus('pin_3')"
                   v-on:keypress="is_valid_pin_value($event, 'pin_2')"
                   ref="pin_2" type="text" placeholder="0">
            <input v-model.number="pin_3"
                   v-on:keyup.left="pin_focus('pin_2')"
                   v-on:keypress="is_valid_pin_value($event, 'pin_3')"
                   ref="pin_3" type="text" placeholder="0">
        </div>
    </div>
</template>

The controller of the component:

export default {
     data: function () {
         return {
             pin_0: null,
             pin_1: null,
             pin_2: null,
             pin_3: null
         }
     },
     computed: {
         pin: function () {
             return `${this.pin_0}${this.pin_1}${this.pin_2}${this.pin_3}`
         }
     },
     watch: {
         pin: function () {
             this.$bus.$emit('PIN/change', this.pin)
         },
         pin_0: function (nv) {
             if (nv.toString().length !== 0) {
                 this.$refs.pin_1.focus()
                 this.$refs.pin_1.select()
             }
         },
         pin_1: function (nv) {
             if (nv.toString().length !== 0) {
                 this.$refs.pin_2.focus()
                 this.$refs.pin_2.select()
             }
         },
         pin_2: function (nv) {
             if (nv.toString().length !== 0) {
                 this.$refs.pin_3.focus()
                 this.$refs.pin_3.select()
             }
         }
     },
     methods: {
         pin_focus: function (ref) {
             this.$refs[ref].focus()
             this.$refs[ref].select()
         },
         is_valid_pin_value: function (e, pin_N) {
             const char = String.fromCharCode(e.keyCode)
             const is_value_selected = this[pin_N] !== null && this.$refs[pin_N].selectionStart === 0 && this.$refs[pin_N].selectionEnd === this[pin_N].toString().length
             if ((this[pin_N] === null || this[pin_N].toString().length === 0 || is_value_selected) && parseInt(char, 10) >= 0 && parseInt(char, 10) <= 9) {
                 return true
             }
             e.preventDefault()
         }
     }
 }

What it does:

The data function returns the properties required for binding the current PIN values between the template and the controller.

A computed property pin is kept up to date whenever any of the data properties changes.

We're using the keyup.left, keyup.right and keypress events to move the focus between the PIN inputs.

Each time the pin computed property changes, we $emit an event. See my previous post to learn how to implement the Observer Pattern in Vue.

Integrating the PIN Component in our app is as easy as:

<InputPIN />

this.$bus.$on('PIN/change', function (value) {
    console.log('the pin:', value)
})
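
Note that $bus is not part of Vue itself. It refers to a simple global event bus, as covered in the Observer Pattern post mentioned above. Here is a minimal sketch of what this could look like in a Vue 2 app (the prototype property name and the main.js wiring are assumptions for illustration, not code from the original component):

// main.js - a minimal global event bus sketch for Vue 2
import Vue from 'vue'
import App from './App.vue'

// every component can now emit and listen through this.$bus
Vue.prototype.$bus = new Vue()

new Vue({ render: h => h(App) }).$mount('#app')

If you register the listener inside a component's mounted hook, remember to remove it with this.$bus.$off('PIN/change') in beforeDestroy so you don't leak handlers.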

That’s all. Our Vue PIN Input component is ready.


Dynamic Linking Error: /usr/lib/x86_64-linux-gnu/libcurl.so.4: version `CURL_OPENSSL_4' not found

  1. Why is this happening?
  2. Am I in the same situation?
  3. The Million $ Solution

Why is this happening?

Well, if you ended up on this page, you are surely in a bad situation.

It took me some good hours to debug and solve this problem but maybe you can close this chapter much faster. You are encountering this issue because a native module that you are using depends on CURL_OPENSSL_4.

Am I in the same situation?

There are multiple reasons why you may encounter this error: CURL_OPENSSL_4 not found.

The most common ones are:

  1. Symbolic Link from /usr/lib/x86_64-linux-gnu/libcurl.so.4 -> /usr/lib/x86_64-linux-gnu/libcurl.so.4.X.X is bad. You might need to recreate it.
  2. libcurl version is not installed -> Easy fix -> install it
  3. libcurl version is wrong -> This is what we’re gonna cover

If you are a Linux user who is also into system-level software development, you may find yourself in situations where you need information about the symbols in an object file. You'll be glad to know there is a command line utility, nm, that you can use in these situations.

The nm command line utility basically lists symbols from object files. Here’s the tool’s syntax:

nm [OPTIONS] OBJECT-FILENAME

Run:

nm -D /usr/lib/x86_64-linux-gnu/libcurl.so.4 | grep CURL

What you should see is:

0000000000000000 A CURL_OPENSSL_4

What you might see:

0000000000000000 A CURL_OPENSSL_3

Well, clearly things don't look good. Your Linux distribution was built with a different libcurl version. You may attempt to uninstall it and install another version, but that might break apt.

The Million $ Solution

I switched to a Linux distribution that includes libcurl4.

I was using node:12-slim, which is built on Debian Stretch. You cannot install libcurl4 on this version since its libcurl is built with CURL_OPENSSL_3.

I switched to node:12-buster-slim, which is built on top of Debian Buster, installed libcurl4, and things started working.

apt-get install libcurl4 -y

That’s all 🙂


How to list database privileges in PostgreSQL

When creating a new database in PostgreSQL, or when creating a new connection to an existing database from a new client, it is recommended to use a dedicated database user.

Each database user should only be granted the permissions it needs to fulfill its purpose.

The following SQL command allows you to obtain the current PostgreSQL privileges configured in the active database:

SELECT grantee, table_name, privilege_type 
FROM information_schema.role_table_grants

This command will output something like:

grantee               | table_name | privilege_type
management_username   | users      | INSERT
management_username   | users      | SELECT
management_username   | users      | UPDATE
management_username   | users      | DELETE
management_username   | users      | TRUNCATE
management_username   | users      | REFERENCES
management_username   | users      | TRIGGER
statistics_username   | sessions   | SELECT

Sample Privileges

However, you may only execute this SQL and list all of the PostgreSQL privileges when connected as an admin user.


SEO Tutorial – Search Engine Optimization

SEO Tutorial – Learn how to improve your website for Search Engine Optimization.

What is SEO?

SEO stands for “Search Engine Optimization”.

It is the process of improving your site's ranking to increase its visibility for relevant searches. Its purpose is to improve the quality of the website's content and the quantity of its traffic.

SEO targets the increase of unpaid (organic) traffic rather than paid traffic or direct traffic.

SEO Key Aspects

Below are some key aspects that you should go through when you are not pleased with your position in the search engine results. Have fun 😀

Website Meta Tags

Meta tags are crucial to the health of a website. They define how the visitor's browser should treat your website. If a website had a mandatory configuration file, the meta tags would be it.

Your website must have at least the meta tags below defined, but if quality optimization is what you need, you should be aware of all available meta tags and their purpose.

Charset – The charset attribute of the <meta> element specifies the character encoding for the HTML document.

Viewport – The viewport meta tag controls the width and scaling of the viewport so that the page is sized correctly on all devices.

Title – The title of an HTML document, displayed both in search engine result snippets and in the page's browser tab.

See a complete list of the most important meta tags and their definitions here.

Website content is the advertised product. Search Engines will rank your website depending on the quality of the content. However, more content doesn’t mean higher ranking.

Duplicate content actually decreases the ranking, so we need to let search engines know which page holds the original content when duplicates exist.

For example, if we have a product on the website and multiple URLs open that product's page, we need to reference the main product page from all secondary pages using canonical links.

This page's URL is https://www.catalinmunteanu.com/seo-tutorial/ but it can also be accessed using the post id: https://www.catalinmunteanu.com?post=591. When it is accessed using the post id, we need to set the canonical URL:

<link rel="canonical" href="https://www.catalinmunteanu.com/seo-tutorial/" />

Heading and Description

Images

Images have an important role in SEO. One might say they can replace a thousand words… or was it pixels… See below some aspects that can be optimized.

Image Size Optimization

Your images need to be optimized for the web. You can't expect all visitors to have a high-bandwidth connection, so instead of including images larger than 50-100 KB, you should optimize them first.

There are plenty of tools out there that can help with image optimization. Consider using progressive JPEG (PJPEG) if you cannot reduce the file size any further.

Image Attributes

Every image on your website needs its attributes set: at a minimum, the alt and title attributes. Their values must describe the content of the image in words.

Image Lazy Loading

You can never know if a visitor will scroll all the way down or stop as soon as they find what they were searching for. So for the short period in which your visitor browses, make sure everything goes smoothly.

One way of achieving this is to avoid loading images that are not yet visible in the viewport and to lazy load them instead.
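
Modern browsers also support the native loading="lazy" attribute on images, but if you need finer control you can use the IntersectionObserver API. Below is a minimal sketch, assuming the images keep their real URL in a data-src attribute (an illustrative convention, not a requirement):

// lazy-load.js - a minimal IntersectionObserver sketch
// assumes images are written as <img data-src="..." alt="..." title="...">
const lazyImages = document.querySelectorAll('img[data-src]')

const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach((entry) => {
        if (entry.isIntersecting) {
            const img = entry.target
            img.src = img.dataset.src   // load the real image only when it enters the viewport
            obs.unobserve(img)          // stop watching it afterwards
        }
    })
})

lazyImages.forEach((img) => observer.observe(img))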

Sitemaps

Sitemaps serve a single purpose: helping search engine bots understand what pages your website exposes. They have a huge role in the Search Engine Optimization process and should be one of the first actions you take in this direction.

Some search engines provide a Web Console in which you can submit your sitemaps which will end up being used by their bots. Google offers the Search Console where you can manage your sitemaps.

There are two types of sitemaps: Sitemap Index and Sitemap.

Sitemap Index should be used when your website has multiple Sitemaps and should link to each Sitemap URL. Sample Sitemap Index below:

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <sitemap>
      <loc>https://www.catalinmunteanu.com/sitemap-1.xml</loc>
      <lastmod>2020-10-25</lastmod>
   </sitemap>
   <sitemap>
      <loc>https://www.catalinmunteanu.com/sitemap-2.xml</loc>
      <lastmod>2020-10-25</lastmod>
   </sitemap>
</sitemapindex>

The Sitemap contains the actual links to each page available on the website and some optional properties regarding each page. Sample Sitemap below:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <url>
      <loc>https://www.catalinmunteanu.com/</loc>
      <changefreq>daily</changefreq>
      <priority>1.0</priority>
   </url>
   <url>
      <loc>https://www.catalinmunteanu.com/about/</loc>
      <changefreq>weekly</changefreq>
   </url>
</urlset>

Additional information can be found at https://www.sitemaps.org.

Mobile Compatibility

Some Search Engines give extra points for responsive or adaptive websites. So having a mobile-optimized website will increase your SEO rating.

A few things that might affect your mobile compatibility:

  • viewport meta tag not set
  • viewport width is not responsive
  • content does not fit in the viewport and requires horizontal scrolling
  • interactive elements are not optimized for touch devices
  • font is not readable – maybe too small or colors not optimized

Even if you plan on building native mobile applications, you should consider configuring your website as a PWA. Additional details about Progressive Web apps here.
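
For example, besides a web app manifest, a PWA needs a service worker. Registering one only takes a few lines; the sketch below assumes a hypothetical sw.js file served from the site root:

// register-sw.js - a minimal service worker registration sketch (sw.js is a hypothetical file)
if ('serviceWorker' in navigator) {
    window.addEventListener('load', () => {
        navigator.serviceWorker.register('/sw.js')
            .then(() => console.log('Service worker registered'))
            .catch((err) => console.error('Service worker registration failed:', err))
    })
}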

Optimizing your website for mobile is an easy task and there are plenty of tutorials out there. However, check this great article on how to start.

Social Platform Integration (Twitter, Facebook Open Graph)

Your website needs to have Twitter Cards and the Facebook Open Graph tags configured. You'll need to create a Facebook Application by accessing https://developers.facebook.com. For Twitter the configuration doesn't require any custom app. There are plenty of tutorials out there where you can learn how to do this. After you have configured both integrations, you can check their validity using:

Facebook Open Graph -> https://developers.facebook.com/tools/debug/

Twitter Cards -> https://cards-dev.twitter.com/validator

If your Twitter Card is valid, the validator will render a preview of the card in its console.

The Facebook Open Graph integration is valid if the Facebook Debug tool doesn't list any errors.

You can also use https://www.opengraph.xyz to quickly review the integration of both social networks.

Website SSL Certificate

A key aspect that should be addressed before all others is the SSL Certificate. A missing SSL Certificate or a poor configuration results in a low SEO score.

Your website needs to have a valid and trusted SSL Certificate. You might get more ranking points if your website has TLSv1.0 and TLSv1.1 disabled. It is good practice to disable deprecated protocols and to prepare for the future.

You can check your website's configuration using the following tools:

Basic check -> https://www.sslshopper.com/ssl-checker.html

Comprehensive check -> https://www.ssllabs.com/ssltest/

Certificate chain check -> https://whatsmychaincert.com


Your target is to get an A+ rating, so do everything you can to achieve it. A lower rating will impact the website's overall Search Engine Optimization.

Website Content

Maybe you have thoroughly configured all the aspects listed above but you're still not in the top 10 search results?

The question is: Do you have content? Does your website provide useful, clear, properly formatted information? The content displayed on your website is as important as any other SEO Key Aspect.

Your website's content is the most important aspect, so make sure you provide quality information. However, it's not impossible to get into the top 10 search results with a blank page. Leave me a comment if you know the secrets.

Conclusion

Congratulations! You have come a long way 🙂

Search Engine Optimization is about calibration, precision, dedication.

Each aspect needs to be carefully optimized and if you ignore a single aspect, your SEO rating will be impacted.

Some people consider SEO to be an art. It’s like manually crafting a piece of jewelry.

How to run Git pull in all subdirectories

Git pull allows you to retrieve the latest changes from the remote repository.

If you ever need to run git pull in all subdirectories but you don't want to do it manually for each of them, you can create a bash script as follows:

#!/usr/bin/env bash
for dir in ./*/
do
    cd "${dir}" || continue
    # check if we're inside a git repo before pulling
    if git status >/dev/null 2>&1; then
        echo "Updating ${dir%*/}..."
        git pull
    fi
    cd ..
done

Set the permissions:

chmod +x git-pull.sh

And run ./git-pull.sh

Source


Primary node election in distributed computing

A primary node can help with the coordination of multiple nodes deployed in a cloud. One of the projects I've worked on had a NodeJS worker that ran multiple types of tasks. I wanted to upgrade this setup so it would be easily scalable, have a primary node (or coordinator) that triggers the tasks, and continue processing even when some of the nodes fail.

The Checklist

  • All nodes may participate in an “election” to choose the coordinator
  • Support any number of nodes, including 1 node setup
  • Handle node fail (even the coordinator node) without impacting the flow
  • Allow new nodes to join the party
  • Don’t depend on an expensive technology (paid or resource hungry)

Besides the main scope of the solution I also needed to ensure that the election of the coordinator follows these 3 basic principles:

  • Termination: the election process must complete in a finite time
  • Uniqueness: only one node can be coordinator
  • Agreement: all other nodes know who the coordinator is.

Once I had all the scenarios in mind, I started investigating different algorithms and solutions used in the industry (Apache ZooKeeper, port locking, ring networks). However, most of these require a lot of setup or were incompatible with a multi-server setup, and I also wanted to embrace a KISS approach, so continue reading to see the solution.

The Primary Node Election Algorithm

  1. The node generates a random numeric id
  2. The node retrieves the COORDINATOR_ID key from Redis
  3. If the key is not NULL
    • We already have a coordinator
    • Wait Z minutes (e.g. Z = 1 hour)
    • Go to Step 2
  4. If the key is NULL
    • No coordinator has been announced
    • Push the id from Step 1 into a Redis list
    • Wait X seconds (depending on how long the deployment takes, e.g. 10 seconds)
    • Retrieve all items in the list and extract the highest number
    • If the result === the node id
      • The current node is the coordinator
      • Set the Redis key COORDINATOR_ID with expiry Z+X
      • Do all the hard work 🙂
    • Wait Z minutes
    • Go to Step 2

The downside of this solution is that if the coordinator node fails, it can take up to 2*Z until a new election takes place.
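
To make the steps above more concrete, here is a minimal Node.js sketch of the election loop. It is only an illustration, not the exact production code: the ioredis client, the CANDIDATES list name and the doCoordinatorWork() stub are assumptions.

// election.js - a sketch of the primary node election loop (assumes the ioredis client)
const Redis = require('ioredis')
const redis = new Redis()

const NODE_ID = Math.floor(Math.random() * 1e9)   // Step 1: random numeric id
const X = 10                                      // seconds to wait for other candidates
const Z = 60 * 60                                 // seconds between rounds (the Z from the steps, e.g. 1 hour)

const sleep = (seconds) => new Promise((resolve) => setTimeout(resolve, seconds * 1000))

async function doCoordinatorWork () {
    // trigger the scheduled tasks here
}

async function electionLoop () {
    while (true) {
        const coordinator = await redis.get('COORDINATOR_ID')           // Step 2
        if (coordinator === null) {                                     // Step 4: no coordinator announced
            await redis.rpush('CANDIDATES', NODE_ID)                    // announce our candidacy
            await redis.expire('CANDIDATES', X * 2)                     // let the list clean itself up (extra, not in the steps)
            await sleep(X)                                              // give the other nodes time to join
            const candidates = await redis.lrange('CANDIDATES', 0, -1)
            const winner = Math.max(...candidates.map(Number))
            if (winner === NODE_ID) {
                await redis.set('COORDINATOR_ID', NODE_ID, 'EX', Z + X) // claim coordination for Z+X
                await doCoordinatorWork()
            }
        }
        await sleep(Z)                                                  // Step 3: wait Z before checking again
    }
}

electionLoop().catch(console.error)

Picking the highest random id as the winner keeps the election deterministic once every candidate has announced itself within the X-second window.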

There's room for improvement, so please don't hesitate to leave feedback 🙂

Docker Swarm Tutorial – Getting Started

  1. Activating Docker Swarm on Manager Node
  2. Attaching Worker Nodes to the Swarm

A great alternative to Kubernetes is Docker Swarm. It allows you to orchestrate services deployed on multiple nodes that are controlled by a node called manager.

First of all, you need to have the Docker tools installed on all nodes.

Activating Docker Swarm on Manager Node

docker swarm init --advertise-addr <IP_OF_MANAGER_NODE>

Attaching Worker Nodes to the Swarm

After you activate the Manager Node, you will receive a token that can be used to attach worker nodes to the swarm.

docker swarm join --token <TOKEN> <IP_OF_MANAGER_NODE>:<PORT> (2377 is the default port)

After you run this command, you should see the message "This node joined a swarm as a worker."

Docker Compose Tutorial – Getting Started

This tutorial should help people with no Docker Compose knowledge and those who just need to refresh their memory 😀

  1. What is Docker Compose?
  2. What do I need? The prerequisites
  3. The docker-compose.yml
  4. Docker Compose commands

1. What is Docker Compose?

Docker Compose is a tool that can be used to define and run multi-container Docker applications.

The application’s services are defined and configured using a YAML file.

With a single command you may manage all your application’s services.

2. What do I need? The prerequisites

Docker Compose

Obviously you need to have Docker Compose installed on your machine. There are already plenty of articles out there and even the official Docker Compose Install documentation so we’re gonna skip that part.

Docker Machine

You can’t run any container without a Docker Machine up and running. If you don’t know how to do this, check out my Docker Tutorial.

3. The docker-compose.yml

In order to use Docker Compose you need a YAML file which defines how your application's services run, how they communicate, what images (or Dockerfiles) they use, and other aspects.

For the following tutorial steps you may either use your own YAML or the sample YAML below:

version: '3.7'
services:
    db:
        image: postgres
        restart: always
        environment:
            POSTGRES_PASSWORD: THEPASSWORD
        ports:
            - 5432:5432
    adminer:
        image: adminer
        restart: always
        ports:
            - 8080:8080

If you want to learn more about what the docker-compose.yml offers, you can go ahead and read this article.

4. Docker Compose Commands

All YAML Defined Services

Starting an application with all associated services:

docker-compose up --build

The above command builds images that were never built or that have changed since the last run. After the build is done, all containers are started and the console remains attached to all of them.

However, running the containers with this command doesn’t allow you to detach from them without stopping them. So you should specify that you want to run in detached mode:

docker-compose up --build -d

Stopping all containers:

docker-compose stop

Stopping all containers and removing them together with the networks:

docker-compose down

Stopping and removing all containers, networks, and volumes:

docker-compose down --volumes

Specific YAML Defined Service

Now that we have all containers running, if we want to manage only one container we have the following commands:

Stopping a container:

docker-compose stop db

Starting a container:

docker-compose start db

Rebuilding a container's image:

docker-compose build db

Restarting a container:

docker-compose restart db

Viewing logs of a single container:

docker-compose logs db

Hopefully this Docker Compose tutorial helps you understand what Compose is and how to manage your containers with it.

If you’re not bored yet, check out my other Docker Articles.

Docker Compose YAML – Most Wanted Fields

This article describes the basic fields that can be configured in a Docker Compose YAML. It should help you bootstrap your Compose YAML and get your services up and running.

The fields described are available in Docker Compose YAML version 3.7. If your YAML version is different or your Docker Engine is too old, you might not be able to use all the fields.

What we're gonna do is build up the YAML from scratch, adding new fields as we require them.

Our initial docker-compose.yml looks like this:

version: '3.7'
services:
    database:

image

In order to run a container you need an image which describes what the container includes. You can specify it using the image field (key). The value can be a string that refers to an image from the public Docker Hub or a URL for any other registry.

For example if you want to use the official postgres image from here, you would specify the name like this:

version: '3.7'
services:
    database:
        image: postgres

build

Sometimes we need to use custom images that are created from a custom Dockerfile. You can specify which Dockerfile to use and set the build context using the context and dockerfile fields.

version: '3.7'
services:
    database:
        build:
            context: ./my_project_folder
            dockerfile: Dockerfile

If you specify both image and build, then docker-compose names the resulting image from the build using the name specified as the image value.

ports

The ports field allows configuration of the ports mapped between the container and the host machine.

You can specify multiple ports in the format: <HOST_PORT>:<CONTAINER_PORT>

version: '3.7'
services:
    database:
        image: postgres
        ports:
            - 5432:5432

Also check out these articles if you’re not familiar with Docker or Docker Compose.