Tag Archives: docker-containers

How to run PHP in Docker Container

If you ended up here, you probably want a fast way to get your PHP scripts running without installing Apache on your host PC.

Well, there you go (docker-compose.yml):

version: '3.8'
services:
    php-apache:
        container_name: php-apache
        image: php:8.0-apache
        volumes:
            - ./source/code:/var/www/html/
        ports:
            - 8000:80

Run docker-compose up and voila 😀
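To verify the setup, you can drop a small test script into the mounted folder and request it over HTTP. This is just a sketch: the ./source/code path matches the volume mapping above, and the script contents are a placeholder.

```shell
# Create a minimal test script in the directory mounted into the container
# (./source/code is the host side of the volumes mapping above)
mkdir -p source/code
cat > source/code/index.php <<'EOF'
<?php
echo "Hello from PHP " . PHP_VERSION;
EOF

# Start the container in the background and request the page
docker-compose up -d
curl http://localhost:8000/
```

Any file you place in ./source/code is immediately served by Apache inside the container, no rebuild needed.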

Docker Swarm Tutorial – Getting Started

  1. Activating Docker Swarm on Manager Node
  2. Attaching Worker Nodes to the Swarm

A great alternative to Kubernetes is Docker Swarm. It allows you to orchestrate services deployed on multiple nodes that are coordinated by a special node called the manager.

First of all, you need to have Docker Engine installed on all nodes.

Activating Docker Swarm on Manager Node

docker swarm init --advertise-addr <IP_OF_MANAGER_NODE>

Attaching Worker Nodes to the Swarm

After you activate the Manager Node you will receive a token that can be used to attach worker nodes to the Manager Node.

docker swarm join --token <TOKEN> <IP_OF_MANAGER_NODE>:<PORT> (the default port is 2377)

After you run this command, you should see the message “This node joined a swarm as a worker.”
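To sanity-check the swarm, you can deploy a small service from the manager node. This sketch uses the public nginx image as a stand-in; the service name and ports are arbitrary choices.

```shell
# Run on the manager node: create a service with 2 replicas,
# publishing container port 80 on port 8080 of every swarm node
docker service create --name web --replicas 2 -p 8080:80 nginx

# List services and check that the replicas converged (e.g. 2/2)
docker service ls

# Inspect which nodes the tasks were scheduled on
docker service ps web
```

The swarm takes care of scheduling the replicas across your worker nodes and rescheduling them if a node goes down.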

Docker Compose Tutorial – Getting Started

This tutorial should help people with no Docker Compose knowledge and those that just need to freshen their memory 😀

  1. What is Docker Compose?
  2. What do I need? The prerequisites
  3. The docker-compose.yml
  4. Docker Compose commands

1. What is Docker Compose?

Docker Compose is a tool that can be used to define and run multi-container Docker applications.

The application’s services are defined and configured using a YAML file.

With a single command you may manage all your application’s services.

2. What do I need? The prerequisites

Docker Compose

Obviously you need to have Docker Compose installed on your machine. There are already plenty of articles out there and even the official Docker Compose Install documentation so we’re gonna skip that part.

Docker Machine

You can’t run any container without a Docker Machine up and running. If you don’t know how to do this, check out my Docker Tutorial.

3. The docker-compose.yml

In order to use Docker Compose you need a YAML file which defines how your applications run, how they communicate, what images (or Dockerfiles) they use, and other aspects.

For the following tutorial steps you may either use your own YAML or the sample YAML from below:

version: '3.7'
services:
    db:
        image: postgres
        restart: always
        environment:
            POSTGRES_PASSWORD: THEPASSWORD
        ports:
            - 5432:5432
    adminer:
        image: adminer
        restart: always
        ports:
            - 8080:8080

If you want to learn more about what the docker-compose.yml offers, you can go ahead and read this article.

Docker Compose Commands

All YAML Defined Services

Starting an application with all associated services:

docker-compose up --build

The above command builds images that were never built or that have changed since the last run. After the build is done, all containers are started and the console remains attached to them.

However, running the containers with this command doesn’t allow you to detach from them without stopping them. So you should specify that you want to run in detached mode:

docker-compose up --build -d

Stopping all containers:

docker-compose stop

Stopping all containers, remove them and the networks:

docker-compose down

Stopping and removing all containers and networks and volumes:

docker-compose down --volumes

Specific YAML Defined Service

Now that we have all containers running, if we want to manage only one container we have the following commands:

Stopping a container:

docker-compose stop db

Starting a container:

docker-compose start db

Rebuilding a service’s image and recreating its container:

docker-compose up --build -d db

Restarting a container:

docker-compose restart db

Viewing logs of a single container:

docker-compose logs db

Hopefully this Docker Compose tutorial helps you understand what Compose is and how to manage your containers with it.

If you’re not bored yet, check out my other Docker Articles.

Docker Compose YAML – Most Wanted Fields

This article describes the basic fields that can be configured in a Docker Compose YAML. It should help you bootstrap your Compose YAML and get your services up and running.

The fields described are available in Docker Compose YAML version 3.7. If your YAML version is different or your Docker Engine is too old, you might not be able to use all the fields.

What we’re gonna do is build up the YAML from scratch, adding new fields as we require them.

Our initial docker-compose.yml looks like this:

version: '3.7'
services:
    database:

image

In order to run a container you need an image which describes what the container includes. You can specify it using the image field (key). The value can be the name of an image from the public Docker Hub or a URL for any other registry.

For example if you want to use the official postgres image from here, you would specify the name like this:

version: '3.7'
services:
    database:
        image: postgres

build

Sometimes we need to use custom images that are created from a custom Dockerfile. You can specify which Dockerfile to use and set the build context using the context and dockerfile fields. Note that the dockerfile path is resolved relative to the context.

version: '3.7'
services:
    database:
        build:
            context: ./my_project_folder
            dockerfile: Dockerfile

If you specify both image and build, then docker-compose names the resulting image from the build using the name specified as the image value.
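For example, a sketch combining the two fields (the paths and image name are placeholders):

```yaml
version: '3.7'
services:
    database:
        build:
            context: ./my_project_folder
            dockerfile: Dockerfile
        # the built image will be tagged with this name
        image: my-registry.example.com/my-database:1.0
```

This is handy when you build locally but want the image named and ready for pushing to a registry.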

ports

The ports field allows configuration of the ports mapped between the container and the host machine.

You can specify multiple ports in the format: <HOST_PORT>:<CONTAINER_PORT>

version: '3.7'
services:
    database:
        image: postgres
        ports:
            - 5432:5432
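Since Compose file version 3.2 the ports field also supports a long syntax that makes each part of the mapping explicit; a sketch of the same mapping:

```yaml
version: '3.7'
services:
    database:
        image: postgres
        ports:
            - target: 5432      # port inside the container
              published: 5432   # port on the host
              protocol: tcp
```

The long syntax is more verbose but leaves no ambiguity about which side is the host and which is the container.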

Also check out these articles if you’re not familiar with Docker or Docker Compose.

How to expose multiple ports from Docker Container

One important aspect of an HTTP web server is being able to handle both HTTPS and HTTP requests, which requires binding to multiple ports.

However, handling HTTP requests should be done at a minimal level, which means these requests should be redirected to the HTTPS server handler.

My server listens to both 443 and 80, but it redirects all 80 requests to 443.

All good until production deploy using Docker container.

The problem I was facing, and for which I did not find any quick solution in the Docker documentation, was how to bind multiple host:container ports.

It turns out this is much simpler than it first sounds:

docker run ... -p <HOST_PORT1>:<CONTAINER_PORT1> -p <HOST_PORT2>:<CONTAINER_PORT2>

That’s all.

You can specify as many ports as needed.
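As a concrete sketch, assuming an image that serves HTTP on 80 and HTTPS on 443 (nginx is used here only as a stand-in):

```shell
# Publish both ports of the container:
# host 80 -> container 80, host 443 -> container 443
docker run -d --name web -p 80:80 -p 443:443 nginx

# Verify both bindings are active
docker port web
```

Each -p flag adds one independent mapping, so mixing ports and even protocols (e.g. -p 53:53/udp) works the same way.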

This machine has been allocated an IP address, but Docker Machine could not reach it successfully – Docker Machine Error

I recently encountered this docker machine IP allocation error while trying to power up my docker machine.

This machine has been allocated an IP address, but Docker Machine could not
reach it successfully.
SSH for the machine should still work, but connecting to exposed ports, such as
the Docker daemon port (usually :2376), may not work properly.
You may need to add the route manually, or use another related workaround.
This could be due to a VPN, proxy, or host file configuration issue.
You also might want to clear any VirtualBox host only interfaces you are not using.
Checking connection to Docker…
Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.100.101:2376": dial tcp 192.168.100.101:2376: i/o timeout
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.

I tried regenerating the certs, both with the existing network adapter config and with a new one, but the errors kept reproducing.

The only fix that worked for me was to restart the VirtualBox network adapter. Open the VirtualBox UI, go to Preferences -> Network and note the name of the adapter.

It’s usually vboxnet0 or vboxnet1.

After manually restarting that network adapter things should work nicely:

sudo ifconfig vboxnet0 down && sudo ifconfig vboxnet0 up

This did the trick for me.

Docker Tutorial – Getting started with Docker

This Docker Tutorial should help you get started with Docker.

  1. How to get a running Docker Machine on your computer
  2. How to create and manage Docker Images
  3. How to create and manage Docker containers
  4. How to delete Docker images and containers
  5. How to login to Docker Hub
  6. How to publish images to Docker Hub

You’ll learn how to start a Docker Machine on your PC, how to create images and containers, how to clean up your Docker Machine, and how to log in to Docker Hub, then tag and publish images.

1. How to get a running Docker Machine on your computer

First thing you need to do before starting to manage images and containers is to have a Docker Machine up and running. For this tutorial I’m using Virtual Box for running the Docker Machine.

The next command creates a Docker Machine named dev. The command will check and download the required dependencies, create the VM, create an SSH key, start the VM, assign an IP, and generate certificates.

docker-machine create --driver virtualbox dev

Running the above command is only needed the first time you create a Docker Machine. When you just need to start it you must run:

docker-machine start dev

After you have the Docker Machine running, you’ll be able to see the PORT and IP by running the next command. The printed result also contains a short command that you must run in order to set Docker Machine configuration in your environment.

docker-machine env dev

You must now set the environment with the Docker Machine configuration for the current terminal session by running:

eval $(docker-machine env dev)

That’s all for the first step. Starting a Docker Machine is quite easy.
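To confirm your shell is actually talking to the machine, a quick check (this assumes the machine is named dev as above):

```shell
# Show the IP assigned to the machine
docker-machine ip dev

# If the environment is set correctly, this queries the
# Docker daemon inside the VM, not your local host
docker info --format '{{.ServerVersion}}'
```

If docker info hangs or errors out, the environment variables from the eval step are likely missing in the current terminal session.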

In case you’re getting an IP allocation error, check out this article. It might help.

2. How to create and manage Docker Images

Now that we have the machine up and running we can list all the available images with:

docker images

In order to create a new image we must have a Dockerfile. I won’t get into how to create a Dockerfile in this article, but I’ll continue the tutorial using a basic Nginx Dockerfile. Place this content in a file named Dockerfile:

FROM nginx:latest
RUN nginx -v
EXPOSE 80
CMD ["bash", "-c", "nginx -g 'daemon off;'"]

Now you have a base Dockerfile and you can create an image from it by running:

docker build -t the_tag .

This command requires the Dockerfile to be in the current working directory; the dot at the end sets the build context to the current directory. You can also build with a Dockerfile located elsewhere by passing its path to the -f flag:

docker build -f <PATH_TO_DOCKERFILE> -t the_tag .

For both of the above commands you’ll see the following steps being executed:

Sending build context to Docker daemon 338.1MB
Step 1/4 : FROM nginx:latest
 ---> f68d6e55e065
Step 2/4 : RUN nginx -v
 ---> Using cache
 ---> 46c77d837d51
Step 3/4 : EXPOSE 80
 ---> Using cache
 ---> b31d714e67f1
Step 4/4 : CMD ["bash", "-c", "nginx -g 'daemon off;'"]
 ---> Using cache
 ---> c3f3bfb92c45
Successfully built c3f3bfb92c45
Successfully tagged the_tag:latest

If something goes wrong in any step the build will fail and the error will be printed. If you now run docker images again you’ll see:

REPOSITORY       TAG           IMAGE ID          CREATED             SIZE
the_tag          latest        c3f3bfb92c45      7 minutes ago       109MB

3. How to create and manage Docker containers

We now have a Docker image created and we’re ready to create and manage containers. You can list all running containers with:

docker ps

Create a new container using the image we built by running:

docker run -d --name docker-nginx -p 80:80 the_tag

-d runs the container in detached mode
--name lets you easily identify the container by setting a name
-p publishes the HOST:CONTAINER ports

If you now run docker ps you’ll see something like this:

CONTAINER ID    IMAGE        COMMAND     CREATED          STATUS          PORTS                NAMES
c2e1bdcf12a0    the_tag      "bash -c"   3 seconds ago    Up 2 seconds    0.0.0.0:80->80/tcp   docker-nginx

You may now stop and delete this container by running:

docker stop c2e1bdcf12a0
docker rm c2e1bdcf12a0

4. How to delete Docker images and containers

In some articles I previously wrote I described how to keep your Docker Machine clean and how to delete certain images and containers or do a full cleanup.

Check out this article if you’re on OSX/Linux.

Check out this article if you’re using Windows.
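If you just need the basic commands, a minimal sketch of a cleanup session looks like this:

```shell
# Stop and remove all containers (running or not)
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)

# Remove all images
docker rmi $(docker images -q)

# Or let Docker remove everything unused in one go
# (-f skips the confirmation prompt)
docker system prune -a -f
```

Be careful: these commands wipe every container and image on the machine, not just the ones from this tutorial.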

5. How to login to Docker Hub

If you plan to publish your images to Docker Hub you’ll probably need to login.

docker login --username username@example.com --password THE_PASSWORD docker-hub.example.com:PORT

Be careful to specify the PORT. If you don’t, the authentication will succeed, but pushing images will fail because no token is stored for that host.

6. How to publish images to Docker Hub

You now have an image and are authenticated to your Docker Hub. You can publish the image and make it available for others to use as well.

docker images # prints the hash
docker tag HASH docker-hub.example.com:443/<IMAGE_REPOSITORY_NAME>:0.0.1
docker push docker-hub.example.com:443/<IMAGE_REPOSITORY_NAME>

That’s it. The image is now on the hub.