
SEO Tutorial – Search Engine Optimization

SEO Tutorial – Learn how to improve your website for Search Engine Optimization.

What is SEO?

SEO stands for “Search Engine Optimization”.

It is the process of improving your site's ranking to increase its visibility for relevant searches. Its purpose is to improve both the quality of your website's content and the quantity of its traffic.

SEO targets the growth of unpaid (organic) traffic rather than paid or direct traffic.

SEO Key Aspects

Below are some key aspects you should go through when you're not pleased with your position in the search engine results. Have fun 😀

Website Meta Tags

Meta tags are crucial to the health of a website. They define how the visitor's browser should treat your website. If a website had a mandatory configuration file, the meta tags would be it.

Your website must have at least the meta tags below defined, but if quality optimization is what you're after, you should be aware of all available meta tags and their purpose.

Charset – The charset attribute of the <meta> element specifies the character encoding for the HTML document.

Viewport – The viewport meta tag controls the width and scaling of the viewport so that it's sized correctly on all devices.

Title – The title of an HTML document, displayed both in search engine result snippets and in the page's browser tab.

See a complete list of the most important meta tags and their definitions here.
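
As a quick reference, a minimal <head> carrying these three tags could look like this (the title value is just a placeholder):

<head>
   <meta charset="UTF-8">
   <meta name="viewport" content="width=device-width, initial-scale=1">
   <title>SEO Tutorial – Search Engine Optimization</title>
</head>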

Canonical Links

Website content is the advertised product. Search engines will rank your website depending on the quality of the content. However, more content doesn't mean higher ranking.

Duplicate content actually decreases the ranking, so we need to let search engines know which version of duplicated content is the original.

For example, if we have a product on the website and multiple URLs open that product's page, we need to reference the main product page from all secondary pages using canonical links.

This page's URL is https://www.catalinmunteanu.com/seo-tutorial/ but it can also be accessed using the post id: https://www.catalinmunteanu.com?post=591. When the page is accessed using the post id we need to set the canonical URL:

<link rel="canonical" href="https://www.catalinmunteanu.com/seo-tutorial/" />

Heading and Description

Images

Images have an important role in SEO. One might say they can replace a thousand words… or was it pixels… See below some aspects that can be optimized.

Image Size Optimization

Your images need to be optimized for the web. You can't expect all visitors to have a high-bandwidth connection, so instead of including images larger than 50–100 KB, you should optimize them first.

There are plenty of tools out there that can help with image optimization. Consider using progressive JPEG (PJPEG) if you can't reduce the file size any other way.
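
For example, assuming you have ImageMagick installed, you can recompress an image and make it a progressive JPEG in one step (file names are placeholders):

convert photo.jpg -quality 80 -interlace Plane photo-optimized.jpg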

Image Attributes

Any image on your website needs to have its attributes set. You need to set the alt and title attributes, and the values must describe the content of the image in words.
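
For example (the file name is a placeholder):

<img src="/images/red-sneakers.jpg" alt="Red canvas sneakers with white soles" title="Red canvas sneakers">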

Image Lazy Loading

You can never know if a visitor will scroll all the way down or stop as soon as they find what they were searching for. So for the short period in which your visitor browses, make sure everything goes smoothly.

One way of achieving this is to not load the images that are not yet visible in the viewport, but to lazy load them instead.
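
One simple way to do this is the native loading attribute, which modern browsers support out of the box (the file name is a placeholder):

<img src="/images/gallery-photo.jpg" alt="Gallery photo" loading="lazy">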

Sitemaps

Sitemaps serve a single purpose: helping search engine bots understand what pages your website exposes. They have a huge role in the Search Engine Optimization process, and adding one should be one of the first actions you take in this direction.

Some search engines provide a web console in which you can submit your sitemaps so their bots can use them. Google offers the Search Console, where you can manage your sitemaps.

There are two types of sitemaps: Sitemap Index and Sitemap.

A Sitemap Index should be used when your website has multiple Sitemaps; it links to each Sitemap URL. Sample Sitemap Index below:

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <sitemap>
      <loc>https://www.catalinmunteanu.com/sitemap-1.xml</loc>
      <lastmod>2020-10-25</lastmod>
   </sitemap>
   <sitemap>
      <loc>https://www.catalinmunteanu.com/sitemap-2.xml</loc>
      <lastmod>2020-10-25</lastmod>
   </sitemap>
</sitemapindex>

The Sitemap contains the actual links to each page available on the website, along with some optional properties for each page. Sample Sitemap below:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <url>
      <loc>https://www.catalinmunteanu.com/</loc>
      <changefreq>daily</changefreq>
      <priority>1.0</priority>
   </url>
   <url>
      <loc>https://www.catalinmunteanu.com/about/</loc>
      <changefreq>weekly</changefreq>
   </url>
</urlset>

Additional information can be found at https://www.sitemaps.org.

Mobile Compatibility

Some search engines give extra points to responsive or adaptive websites, so having a mobile-optimized website will increase your SEO rating.

A few things that might affect your mobile compatibility:

  • viewport meta tag not set
  • viewport width is not responsive
  • content doesn't fit in the viewport and requires horizontal scrolling
  • interactive elements are not optimized for touch devices
  • font is not readable – maybe too small or colors not optimized

Even if you plan on building native mobile applications, you should consider configuring your website as a PWA. Additional details about Progressive Web Apps here.

Optimizing your website for mobile is an easy task and there are plenty of tutorials out there. Check this great article on how to start.

Social Platform Integration (Twitter, Facebook Open Graph)

Your website needs to have the Twitter Card and Facebook OG configured. You'll need to create a Facebook Application by accessing https://developers.facebook.com. For Twitter the configuration doesn't require any custom app. There are plenty of tutorials out there where you can learn how to do this. After you have configured both integrations, you can check the validity using:

Facebook Open Graph -> https://developers.facebook.com/tools/debug/

Twitter Cards -> https://cards-dev.twitter.com/validator

If your Twitter Card is valid, the validator will render a preview of the card in its console. The Facebook OG integration is valid if the Facebook Debug tool doesn't list any errors.

You can also use https://www.opengraph.xyz to quickly review the integration of both social networks.
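
For reference, a typical set of Open Graph and Twitter Card meta tags looks something like this (the values are placeholders):

<meta property="og:title" content="SEO Tutorial – Search Engine Optimization">
<meta property="og:description" content="Learn how to improve your website for Search Engine Optimization.">
<meta property="og:image" content="https://www.catalinmunteanu.com/images/seo-cover.png">
<meta property="og:url" content="https://www.catalinmunteanu.com/seo-tutorial/">
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="SEO Tutorial – Search Engine Optimization">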

Website SSL Certificate

A key aspect that should be addressed before all others is the SSL certificate. A missing SSL certificate or a poor configuration results in a low SEO score.

Your website needs to have a valid and trusted SSL certificate. You might get more ranking points if your website has TLSv1.0 and TLSv1.1 disabled. It is good practice to disable deprecated protocols and prepare for the future.
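
For example, if your site happens to be served by nginx, disabling the deprecated protocols comes down to a single directive (this is just a sketch; adapt it to your own server setup):

ssl_protocols TLSv1.2 TLSv1.3;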

You can check your website's configuration using the following tools:

Basic check -> https://www.sslshopper.com/ssl-checker.html

Comprehensive check -> https://www.ssllabs.com/ssltest/

Certificate chain check -> https://whatsmychaincert.com

SSL Certificate Rating A+

Your target is an A+ rating, so do everything you can to achieve it. Having a lower rating will impact the website's overall Search Engine Optimization.

Website Content

Maybe you have thoroughly configured all the aspects listed above but you're still not in the top 10 search results?

The question is: Do you have content? Does your website provide useful, clear, properly formatted information? The content displayed on your website is as important as any other SEO Key Aspect.

Your website's content is the most important aspect, so make sure you provide quality information. That said, it's not impossible to get into the top 10 search results with a blank page. Leave me a comment if you know the secrets.

Conclusion

Congratulations! You have come a long way 🙂

Search Engine Optimization is about calibration, precision, dedication.

Each aspect needs to be carefully optimized; ignoring even a single one will impact your SEO rating.

Some people consider SEO to be an art. It’s like manually crafting a piece of jewelry.

How to run Git pull in all subdirectories

Git pull allows you to retrieve the latest changes from the remote repository.

If you ever need to run git pull in all subdirectories but you don't want to do it manually for each of them, you can create a bash script as follows:

#!/usr/bin/env bash
for dir in ./*/
do
    # run git pull only if the subdirectory is a git repository
    if git -C "$dir" rev-parse --git-dir >/dev/null 2>&1; then
        echo "Updating ${dir%*/}..."
        git -C "$dir" pull
    fi
done

Set the permissions:

chmod +x git-pull.sh

And run ./git-pull.sh



Primary node election in distributed computing

A primary node can help coordinate multiple nodes deployed in a cloud. One of the projects I've worked on had a NodeJS worker that ran multiple types of tasks. I wanted to upgrade this setup so that it would scale easily, have a primary node (or coordinator) that triggers the tasks, and continue processing even when some of the nodes fail.

The Checklist

  • All nodes may participate in an “election” to choose the coordinator
  • Support any number of nodes, including 1 node setup
  • Handle node fail (even the coordinator node) without impacting the flow
  • Allow new nodes to join the party
  • Don’t depend on an expensive technology (paid or resource hungry)

Besides the main scope of the solution I also needed to ensure that the election of the coordinator follows these 3 basic principles:

  • Termination: the election process must complete in a finite time
  • Uniqueness: only one node can be coordinator
  • Agreement: all other nodes know who the coordinator is.

Once I had all the scenarios in mind, I started investigating different algorithms and solutions used in the industry (Apache ZooKeeper, port locking, ring networks). However, most of these require a lot of setup or were incompatible with a multi-server setup, and I also wanted to embrace a KISS approach, so continue reading to see the solution.

The Primary Node Election Algorithm

  1. The node generates a random numeric id
  2. The node retrieves the COORDINATOR_ID key from Redis
  3. If the key is not NULL
    • We have a coordinator
    • Wait Z (e.g. Z = 1 hour)
    • Go to step 2
  4. If the key is NULL
    • No coordinator has been announced
    • Push the id from step 1 into a Redis list
    • Wait X seconds (depending on how long the deployment takes, e.g. 10 seconds)
    • Retrieve all items in the list and extract the highest number
    • If the result === the node's id
      • The current node is the coordinator
      • Set the Redis key COORDINATOR_ID with expiry Z+X
      • Do all the hard work 🙂
    • Wait Z
    • Go to step 2
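
Below is a minimal Node.js sketch of this election loop. It assumes the ioredis client, and the key names (COORDINATOR_ID, CANDIDATES) and timings are illustrative; treat it as a starting point, not a production implementation:

const Redis = require('ioredis');

const redis = new Redis(); // assumes Redis is reachable on localhost:6379
const NODE_ID = Math.floor(Math.random() * 1e9); // step 1: random numeric id
const X = 10 * 1000;      // election window, e.g. 10 seconds
const Z = 60 * 60 * 1000; // re-check interval, e.g. 1 hour

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function doCoordinatorWork() {
  // trigger the scheduled tasks here
}

async function electionLoop() {
  for (;;) {
    // step 2: check if a coordinator has already been announced
    const coordinatorId = await redis.get('COORDINATOR_ID');
    if (coordinatorId === null) {
      // step 4: no coordinator, so join the election
      await redis.rpush('CANDIDATES', NODE_ID);
      await redis.pexpire('CANDIDATES', Z + X); // let stale candidate lists expire
      await sleep(X); // wait for the other nodes to register
      const candidates = await redis.lrange('CANDIDATES', 0, -1);
      const highest = Math.max(...candidates.map(Number));
      if (highest === NODE_ID) {
        // the current node wins: announce itself with expiry Z+X
        await redis.set('COORDINATOR_ID', String(NODE_ID), 'PX', Z + X);
        await doCoordinatorWork();
      }
    }
    // step 3 / end of step 4: wait Z and re-check
    await sleep(Z);
  }
}

electionLoop();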

The downside of this solution is that if the coordinator node fails, it can take up to 2*Z until a new election takes place.

There's room for improvement, so please don't hesitate to leave feedback 🙂

Docker Swarm Tutorial – Getting Started

  1. Activating Docker Swarm on Manager Node
  2. Attaching Worker Nodes to the Swarm

A great alternative to Kubernetes is Docker Swarm. It allows you to orchestrate services deployed on multiple nodes that are controlled by a node called the manager.

First of all you need to have Docker Tools installed on all nodes.

Activating Docker Swarm on Manager Node

docker swarm init --advertise-addr <IP_OF_MANAGER_NODE>

Attaching Worker Nodes to the Swarm

After you activate the manager node you will receive a token that can be used to attach worker nodes to the manager node.

docker swarm join --token <TOKEN> <IP_OF_MANAGER_NODE>:<PORT>

The default port is 2377.

After you run this command, you should see the message "This node joined a swarm as a worker."
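
You can verify that the node was attached by listing all nodes from the manager node:

docker node ls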

Docker Compose Tutorial – Getting Started

This tutorial should help people with no Docker Compose knowledge and those who just need to refresh their memory 😀

  1. What is Docker Compose?
  2. What do I need? The prerequisites
  3. The docker-compose.yml
  4. Docker Compose commands

1. What is Docker Compose?

Docker Compose is a tool that can be used to define and run multi-container Docker applications.

The application’s services are defined and configured using a YAML file.

With a single command you may manage all your application’s services.

2. What do I need? The prerequisites

Docker Compose

Obviously you need to have Docker Compose installed on your machine. There are already plenty of articles out there and even the official Docker Compose Install documentation so we’re gonna skip that part.

Docker Machine

You can’t run any container without a Docker Machine up and running. If you don’t know how to do this, check out my Docker Tutorial.

3. The docker-compose.yml

In order to use Docker Compose you need a YAML file which defines how your application's services run, how they communicate, what images (or Dockerfiles) they use, and other aspects.

For the following tutorial steps you may either use your own YAML or the sample YAML from below:

version: '3.7'
services:
    db:
        image: postgres
        restart: always
        environment:
            POSTGRES_PASSWORD: THEPASSWORD
        ports:
            - 5432:5432
    adminer:
        image: adminer
        restart: always
        ports:
            - 8080:8080

If you want to learn more about what the docker-compose.yml offers, you can go ahead and read this article.

Docker Compose Commands

All YAML Defined Services

Starting an application with all associated services:

docker-compose up --build

The above command builds images that were never built or that have changed since the last run. After the build is done, all containers are started and the console remains attached to all of them.

However, running the containers with this command doesn’t allow you to detach from them without stopping them. So you should specify that you want to run in detached mode:

docker-compose up --build -d

Stopping all containers:

docker-compose stop

Stopping and removing all containers and the networks:

docker-compose down

Stopping and removing all containers, networks, and volumes:

docker-compose down --volumes

Specific YAML Defined Service

Now that we have all containers running, if we want to manage only one container we have the following commands:

Stopping a container:

docker-compose stop db

Starting a container:

docker-compose start db

Rebuilding a container:

docker-compose up -d --build db

Restarting a container:

docker-compose restart db

Viewing logs of a single container:

docker-compose logs db

Hopefully this Docker Compose tutorial helps you understand what Compose is and how to manage your containers with it.

If you’re not bored yet, check out my other Docker Articles.

Docker Compose YAML – Most Wanted Fields

This article describes the basic fields that can be configured in a Docker Compose YAML. It should help you bootstrap your Compose YAML and get your services up and running.

The fields described are available in Docker Compose YAML version 3.7. If your YAML version is different or your Docker Engine is too old, you might not be able to use all the fields.

What we're gonna do is build up the YAML from scratch, adding new fields as we need them.

Our initial docker-compose.yml looks like this:

version: '3.7'
services:
    database:

image

In order to run a container you need an image which describes what the container includes. You can specify it using the image field (key). The value can be a string that refers to an image from the public Docker Hub or a URL for any other registry.

For example, if you want to use the official postgres image from Docker Hub, you would specify the name like this:

version: '3.7'
services:
    database:
        image: postgres

build

Sometimes we need to use custom images created from our own Dockerfile. You can specify which Dockerfile to use and set the build context using the context and dockerfile fields.

version: '3.7'
services:
    database:
        build:
            context: ./my_project_folder
            dockerfile: Dockerfile

Note that the dockerfile path is resolved relative to the build context, so it shouldn't repeat the context folder.

If you specify both image and build, then docker-compose names the resulting image from the build using the name specified as the image value.

ports

The ports field allows configuration of the ports mapped between the container and the host machine.

You can specify multiple ports in the format: <HOST_PORT>:<CONTAINER_PORT>

version: '3.7'
services:
    database:
        image: postgres
        ports:
            - 5432:5432

Also check out these articles if you’re not familiar with Docker or Docker Compose.

How to expose multiple ports from Docker Container

One important aspect of an HTTP web server is being able to handle both HTTPS and HTTP requests, which requires binding to multiple ports.

However, plain HTTP requests should be handled at a minimal level, meaning they should simply be redirected to the HTTPS handler.

My server listens on both 443 and 80, but redirects all port 80 requests to 443.

All good, until the production deployment using a Docker container.

The problem I was facing, for which I did not find a quick solution in the Docker documentation: how to bind multiple host:container ports.

It turns out this is much simpler than it first sounds:

docker run ... -p <HOST_PORT1>:<CONTAINER_PORT1> -p <HOST_PORT2>:<CONTAINER_PORT2>

That’s all.

You can specify as many ports as needed.
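
For example, assuming an nginx container serving both protocols, mapping HTTP and HTTPS looks like this:

docker run -d -p 80:80 -p 443:443 nginx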

This machine has been allocated an IP address, but Docker Machine could not reach it successfully – Docker Machine Error

I recently encountered this docker machine IP allocation error while trying to power up my docker machine.

This machine has been allocated an IP address, but Docker Machine could not
reach it successfully.
SSH for the machine should still work, but connecting to exposed ports, such as
the Docker daemon port (usually :2376), may not work properly.
You may need to add the route manually, or use another related workaround.
This could be due to a VPN, proxy, or host file configuration issue.
You also might want to clear any VirtualBox host only interfaces you are not using.
Checking connection to Docker…
Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.100.101:2376": dial tcp 192.168.100.101:2376: i/o timeout
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.

I tried to regenerate the certs, both with the existing network adapter config and with a new one, but the error kept occurring.

The only fix that worked for me was to restart the VirtualBox network adapter. Open the VirtualBox UI and go to Preferences -> Network to view the name of the adapter.

It’s usually vboxnet0 or vboxnet1.

After manually restarting that network adapter things should work nicely:

sudo ifconfig vboxnet0 down && sudo ifconfig vboxnet0 up

This did the trick for me.

OSX Global NPM Module command not found

In case you ended up in a situation where you just installed a global NPM module, but it still throws command not found, here’s what you have to do:

Find out where the global NPM modules are installed by running:

npm prefix -g

Double check that your $PATH does not already contain that value:

echo $PATH

If the value is not included, you must update your /etc/paths with the NPM location:

sudo vi /etc/paths

Add the value returned by npm prefix -g, with /bin appended.

e.g. /Users/catalinmunteanu/.npm-global/bin

Save the file and exit.

Open a new terminal tab/window and retry the command.
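
Alternatively, if you'd rather not touch /etc/paths, you can append the location to your PATH from your shell profile (e.g. ~/.zshrc or ~/.bash_profile):

export PATH="$PATH:$(npm prefix -g)/bin"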

Cheers!

Node n permission denied without sudo

Each time I do a fresh install of the tj/n Node version manager I end up getting permission denied errors when running npm install.

If you also ran into this issue, well, there’s a quick fix.

The issue is caused by the n install location being owned by root.

The following two steps will help you continue in peace 😀

which n

This returns the install location of the n package, e.g. /Users/username/n.

sudo chown -R $(whoami) <PATH_WHICH_N>

This sets the current user as the owner.

You can now install NPM packages without the power of sudo.