You’ve probably seen mentions of Docker over the past few years. This guide explains the basics so you can get up and running with Docker for PHP in your local development environment.
In the dim and distant past, when a developer wanted to make a change to production code, they may have SSHed onto the server and changed the code manually. (SSH stands for Secure Shell, a protocol for operating network services securely over an unsecured network.) Of course, back then there was very little established guidance on good testing and build pipelines for PHP. Thankfully, this has changed.
Today, development teams expect to be able to test code in production-like environments. Doing this allows teams to deploy code with confidence. For this to happen, local development environments need to be identical (or as close as possible) to the production environments the code runs on.
Virtual machines were a step in this direction. With these, developers can spin up versions of production software on their local machines. VMs work fine, but they’re slow and use a lot of system resources.
Docker fixes these problems and many more. Isolated environments for each project, easy testing against different software versions, and a small resource footprint are only a few of the reasons Docker has won my heart.
First, make sure you have Docker installed on your local system. I think the documentation on the website is clear for all operating systems.
The other thing you're going to need for this tutorial is Composer, the PHP package manager.
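If you want to confirm both tools are available before you start, a quick version check in the terminal will tell you:
docker --version
composer --version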
A few tools will provide you with a Docker configuration without the need to understand it yourself. I’ve tried a couple of these tools, and my experience has been mixed. I like to be in control of everything that’s part of my build to make sure only the essential tools are installed. Like most other developers, I also like to understand what the code is doing rather than rely on a “magic” command. So while I set up this configuration, I’ll try to explain everything that’s happening.
Let’s use a simple Laravel app to get up and running. This guide will work with all PHP software and frameworks, though.
Install an empty Laravel project by opening a new terminal and running the following command:
composer create-project laravel/laravel docker-tutorial
Now create the three files you’ll need for the Docker configuration:
mkdir .docker # A hidden folder to keep most of the Docker configuration.
touch .docker/Dockerfile .docker/vhost.conf
touch docker-compose.yml # This has to be in the root directory.
That’s it—three files and one directory!
If you use this setup with every framework or application you prepare, you'll always know where to look and what to check.
Let’s go through how each of these files contributes toward your configuration. We’ll also look at some of the key Docker commands.
A Dockerfile is a text document that has the instructions needed to build a Docker image. Once built, containers run instances of these images. In almost every scenario, your image will be built on top of another. Docker Hub has lots of images to choose from. I like to use the official PHP images as the basis for my projects.
For this tutorial, let’s use an Apache server:
# Start from the official PHP 7.3 image with Apache preinstalled
FROM php:7.3-apache
# Copy the application code and the virtual host configuration into the image
COPY . /app
COPY .docker/vhost.conf /etc/apache2/sites-available/000-default.conf
# Give the Apache user ownership of the code and enable mod_rewrite for Laravel's routing
RUN chown -R www-data:www-data /app && a2enmod rewrite
Dockerfile instructions rely on only a few main keywords; this file uses three of them: FROM, COPY, and RUN.
Next up is the virtual host configuration in .docker/vhost.conf. There's not much in this file: it directs the web server to the Laravel entry point and collects the logs.
<VirtualHost *:80>
    DocumentRoot /app/public
    <Directory "/app/public">
        AllowOverride all
        Require all granted
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
The log files are piped to STDOUT, so you can watch them in real time as the application runs. This is handy for debugging errors locally.
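Once a container is running (we'll get to that shortly), you can follow that output from another terminal with docker logs:
docker ps                      # find the container ID
docker logs -f <container-id>  # follow the Apache access and error logs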
Docker manages containers, and each container that Docker runs depends on an image. The image is a build artifact that can be rigorously tested and is identical for all servers. This testing and sameness give much greater confidence in the quality and stability of deployed code.
To make your build image, run the following command from the project root:
docker build --file .docker/Dockerfile \
    -t docker-tutorial .
The --file flag tells Docker where your Dockerfile is, and the -t flag tags your image. Tagging is useful so you can keep track of which images are on your system and differentiate between projects. It also simplifies how you refer to your images later.
The final "." defines the context in which the build happens. This context is important in particular for the COPY instructions in your build process.
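Once the build finishes, you can confirm the tagged image exists on your system:
docker image ls docker-tutorial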
Now you can spin up a container using this image and finally see your application.
docker run --rm -p 8080:80 docker-tutorial
If you’ve followed along, navigate to localhost:8080. You should see the Laravel project splash page.
Well done! You’ve built a Dockerfile and container from scratch.
The --rm flag means the container will be removed when you quit the Docker process. The -p flag maps a port on your localhost to a port inside the Docker container. The last argument is the image you want to run, referenced by the tag you added earlier.
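If port 8080 is already in use on your machine, you can map any other free host port to the container's port 80; for example:
docker run --rm -p 8081:80 docker-tutorial
The application would then be available at localhost:8081 instead.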
When you create the build image, you can copy your application files and make them part of the artifact. These files are static after build, and changing them locally won’t change them inside the docker image.
During development, you’ll probably want the files to be updated in your container. To achieve that, use docker-compose.
The docker-compose file orchestrates the creation of multiple Docker containers and deals with the networking between them. It’s good practice to have one container for each job, so you’d normally have a container for the webserver and one for a database. You can declare and coordinate both of these in the docker-compose file.
version: '3'
services:
  docker-tutorial:
    build:
      context: .
      dockerfile: .docker/Dockerfile
    image: docker-tutorial
    ports:
      - 8080:80
The version declaration at the top is telling docker-compose which configuration format you’re using.
At this point, you can start declaring services. Docker spins up a separate container for each service. For now, you have just one container named docker-tutorial. The rest of the configuration should look similar to the command line execution.
For the build step, you can give the context (in this case, the present directory). Then you can point to the correct Dockerfile. You’ve also got the same port bindings that you had earlier.
From the project root, run:
docker-compose up
You’ve managed to get docker-compose to bring you exactly where you were before! But you still don’t have access to the local file systems. To do that, you’ll need to mount a volume and alter the build step in your Dockerfile.
…
services:
  docker-tutorial:
    …
    volumes:
      - .:/app
In the Dockerfile, instead of copying the files into the image, you only need to create the /app directory. So
COPY . /app
becomes
RUN mkdir /app
Now, run your docker-compose command again. Because you've updated the Dockerfile, you need to make sure the image gets rebuilt; do this by adding the --build flag to the docker-compose command.
docker-compose up --build
When you visit the site on localhost:8080, it should be serving the live local files. Any changes you make locally will be immediately visible in the running container.
In some instances, you might get an error because the Apache web user can’t write to the logs. If this happens, you’ll need to change the permissions on the storage and bootstrap directories, like so:
chmod -R o+rw bootstrap/ storage/
Everything should work fine after that.
Most applications will have a database for persistent storage. Let’s add a MySQL database to your setup. You’ll need to make some changes to your docker-compose.yml file.
version: '3'
services:
  docker-tutorial:
    …
    links:
      - mysql
    environment:
      DB_HOST: mysql
      DB_DATABASE: docker
      DB_USERNAME: docker
      DB_PASSWORD: docker
  mysql:
    image: mysql:5.7
    ports:
      - 13306:3306
    environment:
      MYSQL_DATABASE: docker
      MYSQL_USER: docker
      MYSQL_PASSWORD: docker
      MYSQL_ROOT_PASSWORD: docker
There’s quite a bit more content to your docker-compose.yml file, but hopefully it’s clear what’s happening.
The links key is declaring all the containers you want your docker-tutorial container to link to. In this case, it’s just the MySQL server, but it could be any number of other services. Docker handles all the internal routing between the containers with this declaration.
We then declared some environment variables. These will override any variables declared in Laravel's .env file.
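For reference, these are the same keys Laravel reads from its .env file. The equivalent entries for this setup would look like the fragment below; either place works, but the values set in docker-compose.yml win inside the container:
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=docker
DB_USERNAME=docker
DB_PASSWORD=docker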
The second service to spin up is mysql. The image is the official MySQL image, pinned to a specific version.
Finally, you can add the port mapping and the environment variables.
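Because of the 13306:3306 mapping, you can also reach the database from your host machine. For example, if you have a MySQL client installed locally:
mysql -h 127.0.0.1 -P 13306 -u docker -pdocker docker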
The Docker containers are up and running! The final thing to look at is running commands. To get a command prompt inside the docker-tutorial container, try this:
docker-compose exec docker-tutorial /bin/bash
This gives a bash prompt in the /var/www/html directory, the standard directory for apache2 hosting. Since this application lives in /app, change into that directory and run the database migrations.
cd /app
php artisan migrate
Uh-oh! You don't have a database driver. Why not? Because the official PHP image doesn't ship with the pdo_mysql extension that Laravel needs to talk to MySQL.
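You can confirm this from the container's bash prompt by listing the loaded PDO drivers; pdo_mysql won't be in the output yet:
php -m | grep -i pdo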
Your last configuration before you finish is to change the Dockerfile to install the PHP extensions required for Laravel.
These are the final changes to get your Docker environment up and running. Laravel 5.8 requires a number of PHP extensions to be installed on the server. Here’s what the final file looks like:
FROM php:7.3-apache
# Install the system libraries the PHP extensions depend on
RUN apt-get update
RUN apt-get install -y libzip-dev libjpeg62-turbo-dev libpng-dev libfreetype6-dev
# Install the PHP extensions Laravel requires
RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
RUN docker-php-ext-install gd
# Create the application directory; the code itself is mounted as a volume
RUN mkdir /app
COPY .docker/vhost.conf /etc/apache2/sites-available/000-default.conf
WORKDIR /app
RUN chown -R www-data:www-data /app && a2enmod rewrite
We’ve installed some required packages and then installed the PHP extensions. The only other difference is adding the WORKDIR instruction. Now when you spin up a bash prompt, you’ll be in the /app directory and not have to go anywhere before you execute any PHP artisan commands.
Let’s give it a go:
docker-compose up -d --build
Here we’re rebuilding the image and spinning up containers. The -d flag is telling Docker that you want to detach from the containers once they’re up and running.
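While the containers run in the background, you can still interact with them through docker-compose subcommands:
docker-compose ps        # list the running containers
docker-compose logs -f   # follow the logs piped to STDOUT
docker-compose down      # stop and remove the containers when you're finished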
Now, let’s get back to those migrations.
docker-compose exec docker-tutorial /bin/bash
php artisan migrate
Phew! All up and running now!
In this guide, we’ve walked through building a Docker image, running containers, and orchestrating those containers with docker-compose. The logs are being piped to STDOUT. So, in production you can easily aggregate, analyze, and process the logs.
Retrace is an excellent service to do this. It provides application performance data that lets you find bugs and increase performance. For PHP, it provides tools to help with code quality. Retrace is also optimized to support Docker containers, providing excellent documentation.