Introduction

Complex web applications are often made up of multiple layers that need to work together – for example, a database, a backend and a UI layer that must all fit together and work in harmony for the overall application to function.

One key way to ensure that your application works correctly is to implement full stack testing. However, this becomes more complicated as more pieces are involved. Docker is a fantastic tool that makes it significantly easier to run all of these pieces together on the same system.

In this article we are going to build just such an application, write a test suite for it, and set up a Docker Compose cluster that allows us to execute these full stack tests in a reliable and repeatable manner, free from outside interactions that could interfere with them.

Prerequisites

Our application is going to be built in JavaScript with a MongoDB data store. As such, it will be necessary to have a recent version of Node.js – either the latest LTS or Current releases will suffice. You will also need either NPM or Yarn installed, and an understanding of how to use them. You will also need Docker and Docker Compose installed.

Note: Throughout the article we are going to be using Yarn. However, NPM is a perfectly suitable alternative if that is your preference.

In order to run the tests locally – without using Docker – you will also need a Selenium server available. The tests use Google Chrome by default – though you can change that if desired – so you will need Chrome installed, along with the Selenium ChromeDriver available on your system path.

Note: The Selenium server is a Java application and will need a recent JVM installed in order for it to run. Once that is done, it can be launched simply by executing java -jar selenium-server.jar

It is assumed that these tools are already set up and available, and this article does not cover installation, configuration and debugging of them.

Writing the application

Our application is going to be the traditional To-Do List, using Express.js for the backend, MongoDB for the data store and React for the UI. This is going to be set up such that the UI is a Single Page Application, with the JavaScript calling the backend directly for reading and changing the data as needed.

Creating the backend

First, we want to create our backend application. This means setting up a new Node.js project with our required dependencies. Create a new directory and initialise a new project:

    $ mkdir backend
    $ cd backend
    $ yarn init -y

Then we’ll install our required dependencies:

    $ yarn add express cors body-parser dotenv mongodb

These give us:

  • express – the de facto web framework for Node.js
  • cors – Express middleware to support Cross Origin Resource Sharing. Specifically this will allow the web browser to access our backend from a different origin URL.
  • body-parser – Express middleware to allow us to consume JSON payloads on incoming requests
  • dotenv – support for local configuration in .env files
  • mongodb – official Node.js drivers for communicating with MongoDB.

Once these are installed, we can start writing our application. The first part is a DAO (data access object) layer for interacting with the MongoDB data store to read and write our ToDos. For this, create a new file called todoDao.js at the root of the project as follows:

    const MongoClient = require('mongodb').MongoClient;
    const ObjectID = require('mongodb').ObjectID;

    let mongodb;

    function getMongoConnection() {
        if (!mongodb) {
            mongodb = new Promise((resolve, reject) => {
                MongoClient.connect(process.env.MONGODB_URL, {
                    poolSize: 10,
                    autoReconnect: true,
                    reconnectTries: 60,
                    reconnectInterval: 1000
                }, (err, client) => {
                    if (err) {
                        console.log('Error connecting to MongoDB');
                        console.log(err);
                        reject(err);
                    } else {
                        console.log('Connected to MongoDB');
                        resolve(client.db(process.env.MONGODB_DATABASE));
                    }
                });
            });
        }
        return mongodb;
    }

    function listTodos() {
        return getMongoConnection()
            .then((db) => db.collection('todos'))
            .then((col) => col.find().toArray());
    }

    function getTodoById(id) {
        return getMongoConnection()
            .then((db) => db.collection('todos'))
            .then((col) => col.findOne({_id: new ObjectID(id)}));
    }

    function createTodo(todo) {
        return getMongoConnection()
            .then((db) => db.collection('todos'))
            .then((col) => col.insertOne({
                title: todo.title,
                status: todo.status === true ? true : false
            }))
            .then((r) => r.ops[0]);
    }

    function deleteTodo(id) {
        return getMongoConnection()
            .then((db) => db.collection('todos'))
            .then((col) => col.findOneAndDelete({_id: new ObjectID(id)}));
    }

    function updateTodo(id, todo) {
        return getMongoConnection()
            .then((db) => db.collection('todos'))
            .then((col) => col.findOneAndUpdate({_id: new ObjectID(id)}, {
                title: todo.title,
                status: todo.status === true ? true : false
            }, {
                returnOriginal: false
            }))
            .then((r) => r.value);
    }

    module.exports = {
        listTodos,
        getTodoById,
        createTodo,
        deleteTodo,
        updateTodo
    }

This exposes functions for the standard CRUD activities we wish to perform on our data.
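One detail worth noting is that getMongoConnection creates the connection promise only once and hands the same promise back on every later call, so all of the DAO functions share a single connection pool. The memoisation pattern can be seen in isolation with this hedged sketch (independent of MongoDB, using a hypothetical factory function):

```javascript
// Generic "create once, share forever" helper, mirroring how
// getMongoConnection caches its connection promise in a closure.
function memoisePromise(factory) {
    let cached;
    return function () {
        if (!cached) {
            cached = factory();
        }
        return cached;
    };
}

// Hypothetical expensive setup: counts how many times it really runs.
let connections = 0;
const getConnection = memoisePromise(() => {
    connections += 1;
    return Promise.resolve({ id: connections });
});

// Every caller receives the exact same promise, so the factory
// only ever executes once.
const same = getConnection() === getConnection();
```

Because the cached value is a promise rather than the connection itself, callers that arrive while the connection is still being established simply wait on the same in-flight promise instead of opening a second connection.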

Next we can write the actual REST service in terms of this DAO. Create a new file called index.js as follows:

    require('dotenv').config();

    const express = require('express');
    const bodyParser = require('body-parser');
    const cors = require('cors');
    const todoDao = require('./todoDao');

    const app = express();
    app.use(cors());
    app.use(bodyParser.json());

    function translateTodo(todo) {
        return todo && {
            title: todo.title,
            status: todo.status,
            _meta: {
                id: todo._id
            }
        };
    }

    app.get('/todos', (req, res) => {
        todoDao.listTodos()
            .then((v) => v.map(translateTodo))
            .then((v) => res.send(v))
            .catch((e) => {
                console.log(e);
                res.status(500);
                res.send(e);
            });
    });
    app.get('/todos/:id', (req, res) => {
        todoDao.getTodoById(req.params.id)
            .then(translateTodo)
            .then((v) => {
                if (v) {
                    res.send(v)
                } else {
                    res.status(404);
                    res.send();
                }
            })
            .catch((e) => {
                console.log(e);
                res.status(500);
                res.send(e);
            });
    });
    app.post('/todos', (req, res) => {
        todoDao.createTodo(req.body)
            .then(translateTodo)
            .then((v) => res.send(v))
            .catch((e) => {
                console.log(e);
                res.status(500);
                res.send(e);
            });
    });
    app.delete('/todos/:id', (req, res) => {
        todoDao.deleteTodo(req.params.id)
            .then((v) => {
                res.status(204);
                res.send();
            })
            .catch((e) => {
                console.log(e);
                res.status(500);
                res.send(e);
            });
    });
    app.put('/todos/:id', (req, res) => {
        const updated = {
            title: req.body.title,
            status: req.body.status
        };
        todoDao.updateTodo(req.params.id, updated)
            .then(translateTodo)
            .then((v) => {
                if (v) {
                    res.send(v)
                } else {
                    res.status(404);
                    res.send();
                }
            })
            .catch((e) => {
                console.log(e);
                res.status(500);
                res.send(e);
            });
    });

    app.listen(process.env.PORT, () => console.log(`Listening on port ${process.env.PORT}!`));

This exposes the following routes that can be used:

  • GET /todos – return a list of all of the ToDos in the system
  • GET /todos/:id – return the single specified ToDo
  • POST /todos – create a new ToDo
  • DELETE /todos/:id – delete the single specified ToDo
  • PUT /todos/:id – update the single specified ToDo to match the provided data
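Every route passes database documents through the translateTodo helper from index.js above, which hides the Mongo-specific _id field behind a _meta block on the wire format. The mapping is small enough to check in isolation:

```javascript
// Reshape a stored document into the wire format: the Mongo _id
// moves under _meta.id, and a null/undefined input passes through
// unchanged thanks to the short-circuiting && operator.
function translateTodo(todo) {
    return todo && {
        title: todo.title,
        status: todo.status,
        _meta: {
            id: todo._id
        }
    };
}

const wire = translateTodo({ _id: 'abc123', title: 'Buy milk', status: false });
// wire => { title: 'Buy milk', status: false, _meta: { id: 'abc123' } }
```

The null pass-through is what allows the GET /todos/:id route to detect a missing document and respond with a 404.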

Finally, in order to actually run the application, we need some configuration. Create a new file called .env to contain this. The dotenv module will automatically load this and make it available as properties on process.env, but in such a way that actual environment properties take precedence. Our example file will read as follows:

    MONGODB_URL=mongodb://localhost:27017
    MONGODB_DATABASE=todo
    PORT=4000

Note: This assumes you are running MongoDB locally and can access it on localhost:27017. If this is not the case then change the URL as needed.
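The precedence rule dotenv follows is "the real environment wins": a key from the .env file is only applied when process.env does not already define it. A simplified sketch of that rule (not the real dotenv implementation, and using hypothetical values):

```javascript
// Apply parsed .env key/value pairs without overriding anything the
// real environment already defines - the precedence dotenv uses.
function applyDotenv(parsed, env) {
    Object.keys(parsed).forEach((key) => {
        if (!(key in env)) {
            env[key] = parsed[key];
        }
    });
    return env;
}

// PORT is already set by the shell, so the .env value for PORT is
// ignored while MONGODB_URL is filled in from the file.
const env = applyDotenv(
    { PORT: '4000', MONGODB_URL: 'mongodb://localhost:27017' },
    { PORT: '5000' }
);
```

This is what lets the same code run unchanged inside Docker later on, where the real environment variables are supplied by the container configuration.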

At this point, the backend application can be started up:

    $ node index.js
    Listening on port 4000!

Creating the frontend

Once we’ve got our backend written, we can move on to the UI. This will be developed using Create React App, which gives us a good starting point and can easily generate static files that can be deployed to any web server.

Start out by creating our project:

    $ create-react-app webapp
    $ cd webapp

Then install the few additional dependencies we want:

    $ yarn add axios semantic-ui-react semantic-ui-css

Semantic UI is a CSS framework with easy-to-use React bindings, and Axios is an HTTP client that is easy to configure and use.

The first thing we want to do is create a couple of components for our UI. The ones we need are a form for creating a new ToDo entry, and a list for displaying the existing entries.

The form for the new ToDo entry will go in a file called src/NewTodo.js as follows:

    import React from 'react';
    import { Form, Button } from 'semantic-ui-react';

    export default class NewTodo extends React.Component {
        state = {
            value: ''
        };

        render() {
            const { value } = this.state;
            return (
                <Form onSubmit={(e) => this.onSubmit(e)}>
                    <Form.Group inline>
                        <input placeholder='New ToDo' value={value} onChange={(e) => this.onChange(e)} />
                        <Button>Add</Button>
                    </Form.Group>
                </Form>
            );
        }

        onChange(e) {
            this.setState({
                value: e.target.value
            });
        }

        onSubmit(e) {
            this.props.onSubmit(this.state.value);
            this.setState({
                value: ''
            });
        }
    }

Note that this is entirely self-contained except for the callback function when the form is submitted. This will be provided from the outer component and will do the work of calling our API to create the new entry.
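Stripped of the React rendering, the component is a tiny state machine: typing updates the stored value, and submitting reports that value to the parent callback and resets it. The transitions can be sketched in plain JavaScript to illustrate the contract with the onSubmit prop (a hypothetical model object, not the component itself):

```javascript
// Plain-object model of NewTodo's behaviour: onChange stores the
// draft text, onSubmit hands it to the parent callback and clears it.
function createNewTodoModel(onSubmit) {
    let value = '';
    return {
        getValue: () => value,
        onChange: (newValue) => { value = newValue; },
        onSubmit: () => {
            onSubmit(value);
            value = '';
        }
    };
}

// Parent-side usage: collect the submitted titles.
const submitted = [];
const form = createNewTodoModel((title) => submitted.push(title));
form.onChange('Run first test');
form.onSubmit();
// submitted => ['Run first test'], and the draft value is '' again
```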

Next is the list of existing entries. This goes in a file called src/TodoList.js as follows:

    import React from 'react';
    import { List, Checkbox } from 'semantic-ui-react';
    import './TodoList.css';

    function TodoItem({id, title, status, onToggle}) {
        const className = status ? 'finished-todo' : '';

        return (
            <List.Item>
                <Checkbox toggle label={title} checked={status} className={className} onChange={(e) => onToggle(id, !status)} />
            </List.Item>
        );
    }

    export default function TodoList({ todos, onToggle }) {
        return (
            <List>
                {
                    todos.map(({id, title, status}) => <TodoItem key={id} id={id} title={title} status={status} onToggle={ onToggle } />)
                }
            </List>
        );
    }

This receives the list of ToDo entries to render, and a callback function to use to change the state of an existing entry. We also need a CSS file to define some styles here – specifically so that ToDo entries that are finished get a strike through them. This goes in src/TodoList.css as follows:

    .finished-todo label {
      text-decoration: line-through;
    }

Now we just need the overarching application to tie it all together. This belongs in the standard src/App.js file as follows:

    import React, { Component } from 'react';
    import 'semantic-ui-css/semantic.min.css';
    import { Container, Header } from 'semantic-ui-react';
    import axios from 'axios';
    import NewTodo from './NewTodo';
    import TodoList from './TodoList';

    const httpClient = axios.create({
      baseURL: process.env.REACT_APP_API_URL || window.API_URL_BASE,
      timeout: 1000
    });

    class App extends Component {
      state = {
        todos: []
      }

      render() {
        return (
          <Container>
            <Header as='h1'>ToDo List</Header>
            <NewTodo onSubmit={(title) => this.createNewTodo(title)} />
            <TodoList todos={this.state.todos} onToggle={(id, newState) => this.toggleTodo(id, newState)} />
          </Container>
        );
      }

      componentDidMount() {
        this.loadTodos();
      }

      loadTodos() {
        httpClient.get('/todos')
          .then((response) => response.data)
          .then((response) => response.map((todo) => {
            return {
              title: todo.title,
              status: todo.status,
              id: todo._meta.id
            }
          }))
          .then((todos) => {
            this.setState({
              todos: todos
            })
          });
      }

      createNewTodo(title) {
        httpClient.post('/todos', {
          title: title
        })
          .then(() => this.loadTodos());
      }

      toggleTodo(id, newState) {
        httpClient.get('/todos/' + id)
          .then((response) => response.data)
          .then((todo) => {
            return httpClient.put('/todos/' + id, {
              title: todo.title,
              status: newState
            })
          })
          .then(() => this.loadTodos());
      }
    }

    export default App;

This renders both of our other components as well as providing the callbacks necessary for the API interactions. It does all of this using a configured Axios instance, which is given the base URL to use.

You will notice that the base URL comes from a slightly unusual construct – process.env.REACT_APP_API_URL || window.API_URL_BASE. This allows us to configure it on a per-deploy basis.
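The resolution order is simply: build-time environment variable first, then the runtime global. As a small sketch with hypothetical inputs:

```javascript
// Pick the backend base URL: a REACT_APP_API_URL baked in at build
// time wins; otherwise fall back to the window.API_URL_BASE global
// that the deployed api.js script provides.
function resolveBaseUrl(env, win) {
    return env.REACT_APP_API_URL || win.API_URL_BASE;
}

// During local development the build-time variable is present.
const devUrl = resolveBaseUrl(
    { REACT_APP_API_URL: 'http://localhost:4000' },
    { API_URL_BASE: 'http://prod.example.com' }
);

// In a deployed build it is absent, so the runtime global is used.
const deployedUrl = resolveBaseUrl({}, { API_URL_BASE: 'http://prod.example.com' });
```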

When running the application in development mode, the process.env object is populated from a .env.development file automatically. This allows us to specify where our backend is located whilst we’re running locally. As such, create the file .env.development in the root of the project as follows:

    REACT_APP_API_URL=http://localhost:4000

When this is not specified, we fall back to the global variable window.API_URL_BASE. This will be populated by creating a file called public/api.js as follows:

    window.API_URL_BASE = '<Place URL Here>';

This placeholder can then be replaced with a real URL when the files are deployed to a real server. The script is loaded by adding the following to the head block of public/index.html:

    <script src="%PUBLIC_URL%/api.js"></script>

We can now start the UI up by executing yarn start and, assuming your backend and data store are running as needed, everything will work.

Note: If you see JavaScript errors stating that “GET http://localhost:3000/%3CPlace%20URL%20Here%3E/todos 404 (Not Found)” then the webapp has failed to load the .env.development file and thus does not know where to find the backend.

Testing the application

Now that we’ve got our application, we need to be able to prove that it works. This is where our end-to-end tests will come in. The tests that we write here will not be completely comprehensive, but will be enough to give an example of how such a setup can work.

Ensure that the complete application – backend, frontend and database – are running for now, so that we have something to execute our tests against. Later on when we do this using Docker this will become less important – the Docker cluster will automatically start and stop everything – but for now we need something running to test against.

For our tests, we will be using the Nightwatch.js library, a Node.js library for Selenium-based browser testing.

In order to get started, we need another new project:

    $ mkdir e2e
    $ cd e2e
    $ yarn init -y

We then need our dependencies:

    $ yarn add nightwatch mongodb

Note that in addition to Nightwatch.js we are also including the MongoDB drivers again. This is because we want our tests to be able to interact with the database – in our case to reset the data, but potentially to insert test data or to assert that data was created or updated as appropriate.

Nightwatch.js is powered by a central configuration file – nightwatch.json. This tells it everything it needs to know in order to find the appropriate source files and execute the tests. Ours will look like this:

    {
      "src_folders" : ["src/tests"],
      "custom_commands_path" : "src/commands",
      "output_folder" : "target/reports",
      "page_objects_path" : "src/pages",
      "globals_path" : "",
      "test_workers": false,
      "live_output": false,

      "test_settings" : {
        "default" : {
          "selenium_host"  : "localhost",
          "selenium_port"  : 4444,
          "silent": true,
          "screenshots" : {
            "enabled" : true,
            "on_failure": true,
            "on_error": true,
            "path" : "target/reports"
          },
          "desiredCapabilities": {
            "browserName": "chrome"
          }
        },

        "local": {
          "launch_url" : "http://localhost:3000",
          "globals": {
            "mongo_uri": "mongodb://localhost:27017",
            "mongo_database": "todo"
          }
        }
      }
    }

Note that at the top it refers to some source directories. These need to exist for the test runner to work, so let’s create them:

    $ mkdir -p src/tests src/commands src/pages

These directories are used as follows:

  • src/tests – this is where the actual tests will live.
  • src/commands – this is where any custom commands will live.
  • src/pages – this is where our page objects will live.

The target directory does not need to exist, and will be created automatically when the tests are run to store the output.

First test – checking the page loads

At this point we are ready to write our first test. This will simply check that the page loads, not that any functionality works. This is always a good first test to write, since if it fails then everything else is going to fail as well.

In order for this to work, we need to write a couple of page objects to describe areas of the page to interact with.

The first of these represents the page as a whole, and is in src/pages/main.js as follows:

    module.exports = {
        elements: {
            body: ".container"
        }
    }

The second represents the Add ToDo form, and goes in src/pages/add.js as follows:

    module.exports = {
        elements: {
            input: "form input",
            submit: "form button"
        }
    }

Now we’re ready to write our test. This will go in src/tests/loadPage.js as follows:

    module.exports = {
        'Load Page' : function (browser) {
            browser.url(browser.launchUrl);

            browser.page.main().expect.element("@body").to.be.visible;
            browser.page.add().expect.element('@input').to.be.visible;

            browser.end();
        }
    };

Very simply, this loads the page, and checks that there are a couple of targeted elements present – the main body and the input box on the New ToDo form.
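Under the hood, Nightwatch resolves @-prefixed names such as @body against the elements map of the page object before handing a plain CSS selector to Selenium. Conceptually it works like this simplified sketch (not Nightwatch’s actual code):

```javascript
// Resolve an "@name" alias to its CSS selector using a page
// object's elements map; plain selectors pass through untouched.
function resolveSelector(page, selector) {
    if (selector.charAt(0) === '@') {
        const name = selector.slice(1);
        if (!(name in page.elements)) {
            throw new Error('Unknown element alias: ' + selector);
        }
        return page.elements[name];
    }
    return selector;
}

// Using the main.js page object defined above.
const mainPage = { elements: { body: '.container' } };
const resolved = resolveSelector(mainPage, '@body');         // '.container'
const passthrough = resolveSelector(mainPage, 'form input'); // unchanged
```

Keeping the selectors in one place means that a change to the page markup only requires updating the page object, not every test that touches that element.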

Second test – adding a ToDo entry

The second test we will write is to actually interact with the page and add a new entry. This will involve typing into the New ToDo form, submitting the form, and then checking that the new entry appears in the list correctly.

Firstly, we want a new page object to represent the list of ToDo entries. This goes in src/pages/todos.js as follows:

    module.exports = {
        elements: {
            list: ".list"
        }
    }

We are also going to write a custom command that can be used to access the MongoDB data store. This goes in src/commands/mongo.js as follows:

    const util = require('util');
    const events = require('events');
    const MongoClient = require('mongodb').MongoClient;

    function Mongo() {
        events.EventEmitter.call(this);
    }

    util.inherits(Mongo, events.EventEmitter);

    Mongo.prototype.command = function (handler, cb) {
        var self = this;

        new Promise((resolve, reject) => {
            MongoClient.connect(self.api.globals.mongo_uri, (err, client) => {
                if (err) {
                    console.log('Error connecting to MongoDB');
                    reject(err);
                } else {
                    console.log('Connected to MongoDB');
                    resolve(client);
                }
            });
        }).then((client) => {
            return new Promise((resolve, reject) => {
                resolve(client.db(self.api.globals.mongo_database));
            }).then((db) => handler(db))
            .catch((err) => {
                console.log('An error occurred');
                console.log(err);
            })
            .then(() => {
                client.close();

                if (cb) {
                    cb.call(self.client.api);
                }

                self.emit('complete');
            });
        });
        return this;
    };

    module.exports = Mongo;

This gets the details of the MongoDB database to access from the globals section of the Nightwatch configuration, and allows any of our tests to access the database.

We can now write our test. This goes in src/tests/addTodo.js as follows:

    module.exports = {
        'Add a new Todo' : function (browser) {
            browser.mongo(function(db) {
                console.log('Dropping all Todos');
                const col = db.collection('todos');
            return col.deleteMany({});
            });

            browser.url(browser.launchUrl);

            const addTodoForm = browser.page.add();
            addTodoForm.setValue('@input', 'Run first test');
            addTodoForm.click('@submit');

            addTodoForm.expect.element('@input').value.to.equal('');

            const todosList = browser.page.todos();
            todosList.expect.element('.item:nth-child(1) label').text.to.equal('Run first test');
            todosList.assert.cssClassNotPresent('.item:nth-child(1) .checkbox', 'checked');

            browser.end();
        }
    };

Notice right at the start of the test we use our custom command to drop every record from the todos collection. This guarantees that we start from a clean slate, but it does mean that we can never safely run this against any environment where the data would be important. This will be solved later on by building an entire test environment using Docker Compose every time.

We can now run our test suite against our running application and ensure that everything is working correctly. You will need to have a Selenium Server running locally, and then simply execute the tests as follows:

    $ ./node_modules/.bin/nightwatch -e local

    [Add Todo] Test Suite
    =========================

    Running:  Add a new Todo
    Connected to MongoDB
    Dropping all Todos
     ✔ Expected element <form input> to have value equal: ""
     ✔ Expected element <.item:nth-child(1) label> text to equal: "Run first test"
     ✔ Testing if element <.item:nth-child(1) .checkbox> does not have css class: "checked".

    OK. 3 assertions passed. (5.593s)

    [Load Page] Test Suite
    ==========================

    Running:  Load Page
     ✔ Expected element <.container> to be visible
     ✔ Expected element <form input> to be visible

    OK. 2 assertions passed. (2.327s)

    OK. 5  total assertions passed. (8.127s)

Setting up Docker

At this point we have our application and end-to-end tests. But this does not give us an easily repeatable experience. Any new user who wants to work with this needs to get their system set up, it’s not possible to run the end-to-end tests against a live database without damaging the data, and so on.

What we want to do next is to set up a Docker infrastructure to run the application and to run the tests against it. This is surprisingly easy if you’ve already got the tools installed.

Creating Docker images

The first step is to create the Docker images for our application. There are three images that we want to build:

  • todos/backend – The backend application
  • todos/webapp – The UI for the application
  • todos/e2e – The end-to-end tests

Each of these is done by writing a Dockerfile inside the appropriate project and then requesting that it is built.

First the backend. Inside this project, create our Dockerfile as follows:

    FROM node:9.9.0-alpine

    COPY index.js todoDao.js package.json yarn.lock /opt/todos/
    WORKDIR /opt/todos
    RUN yarn install

    ENV MONGODB_URL mongodb://mongo:27017
    ENV MONGODB_DATABASE todos
    ENV PORT 4000

    EXPOSE 4000/tcp

    CMD node index.js

This creates an image based on the Node.js base image, copies our application into it and builds it – which downloads all of the dependencies inside the image. We then set the environment properties needed for database access to some defaults – they can be overridden at runtime if needed – and declare that we are going to expose port 4000 for external applications to call.

In order to build this image, we execute the following:

    $ docker build -t todos/backend .
    Sending build context to Docker daemon  4.438MB
    Step 1/8 : FROM node:9.9.0-alpine
     ---> 3e60aa6db49b
    Step 2/8 : COPY index.js todoDao.js package.json yarn.lock /opt/todos/
     ---> 527036c179bf
    Step 3/8 : WORKDIR /opt/todos
    Removing intermediate container 43a95995e43a
     ---> 63555efe5304
    Step 4/8 : RUN yarn install
     ---> Running in c3581351fb6a
    yarn install v1.5.1
    [1/4] Resolving packages...
    [2/4] Fetching packages...
    [3/4] Linking dependencies...
    [4/4] Building fresh packages...
    Done in 1.84s.
    Removing intermediate container c3581351fb6a
     ---> 1f134ed46d2a
    Step 5/8 : ENV MONGODB_URL mongodb://mongo:27017
     ---> Running in ea9d5b1e738b
    Removing intermediate container ea9d5b1e738b
     ---> 623de75a61c9
    Step 6/8 : ENV MONGODB_DATABASE todos
     ---> Running in f3ba07cafbb9
    Removing intermediate container f3ba07cafbb9
     ---> fbcd2e9d89af
    Step 7/8 : EXPOSE 4000/tcp
     ---> Running in ff2e2c920316
    Removing intermediate container ff2e2c920316
     ---> 2a74be827d8a
    Step 8/8 : CMD node index.js
     ---> Running in 1c23fef6aee3
    Removing intermediate container 1c23fef6aee3
     ---> 2c007489f6cc
    Successfully built 2c007489f6cc
    Successfully tagged todos/backend:latest

Next is the UI. Inside this project, create our Dockerfile as follows:

    FROM nginx:1.13.7

    COPY ./build /usr/share/nginx/html

    ENV API_URI=
    CMD echo "window.API_URL_BASE = '$API_URI';" > /usr/share/nginx/html/api.js && nginx -g 'daemon off;'

This is significantly easier. Notice that the CMD line creates a new api.js file before starting our web server. If you remember earlier, this file can be used to tell the UI where the backend application resides, and this is generated using an environment property that is provided at runtime.
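In JavaScript terms, the CMD line performs a one-line templating step: take the API_URI environment value and write out the api.js file the UI expects. A sketch of the equivalent transformation:

```javascript
// Produce the api.js contents that the nginx container writes at
// start-up from the API_URI environment variable.
function renderApiJs(apiUri) {
    return "window.API_URL_BASE = '" + apiUri + "';";
}

const contents = renderApiJs('http://todos-backend:4000');
// contents => "window.API_URL_BASE = 'http://todos-backend:4000';"
```

Doing this at container start rather than at build time means the same webapp image can be pointed at different backends in different environments.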

Note as well that we are copying the entire build directory into the container. Create React App creates this when you run yarn build, and it contains static files that are ready to use. As such, building this container is done as follows:

    $ yarn build
    yarn run v1.5.1
    $ react-scripts build
    Creating an optimized production build...
    Compiled successfully.

    File sizes after gzip:

      122.54 KB  build/static/js/main.c05a9237.js
      99.01 KB   build/static/css/main.e2f12779.css
    ✨  Done in 34.71s.

    $ docker build -t todos/webapp .
    Sending build context to Docker daemon  142.8MB
    Step 1/4 : FROM nginx:1.13.7
     ---> f895b3fb9e30
    Step 2/4 : COPY ./build /usr/share/nginx/html
     ---> Using cache
     ---> f44495dd8a9a
    Step 3/4 : ENV API_URI=
     ---> Using cache
     ---> b2e16917f2ba
    Step 4/4 : CMD echo "window.API_URL_BASE = '$API_URI';" > /usr/share/nginx/html/api.js && nginx -g 'daemon off;'
     ---> Using cache
     ---> 10fb3b31a053
    Successfully built 10fb3b31a053
    Successfully tagged todos/webapp:latest

Finally we have our end-to-end tests. For these, we want a simple shell script that can be used to ensure that other services have started before the tests run. This is called wait-for-it.sh and can be downloaded from its GitHub repository. Drop this file into the e2e project, and then write our Dockerfile as follows:

    FROM node:9.9.0

    COPY package.json yarn.lock nightwatch.json wait-for-it.sh /opt/tests/e2e/
    COPY src /opt/tests/e2e/src/

    WORKDIR /opt/tests/e2e

    VOLUME /opt/tests/e2e/target

    RUN chmod +x wait-for-it.sh
    RUN yarn install

Note: We do not use the Alpine image here because our wait-for-it.sh script requires that bash is available, which isn’t the case in Alpine.

We also want to extend our nightwatch.json file slightly, so it knows about running tests inside our cluster. Add the following to it, alongside the local block:

        "local": {
          "launch_url" : "http://localhost:3000",
          "globals": {
            "mongo_uri": "mongodb://localhost:27017",
            "mongo_database": "todo"
          }
        },

        "integration_chrome": {
          "launch_url": "http://todos-webapp",
          "selenium_host"  : "todos-selenium-chrome",
          "selenium_port"  : 4444,
          "desiredCapabilities": {
            "browserName": "chrome"
          },
          "globals": {
            "mongo_uri": "mongodb://todos-mongo:27017",
            "mongo_database": "todos"
          }
        }

As before, building this image is done as follows:

    $ docker build -t todos/e2e .
    Sending build context to Docker daemon  9.273MB
    Step 1/5 : FROM node:9.9.0
     ---> 4885ab8871c2
    Step 2/5 : COPY package.json yarn.lock nightwatch.json wait-for-it.sh src /opt/tests/e2e/
     ---> 9da9fae297d0
    Step 3/5 : WORKDIR /opt/tests/e2e
    Removing intermediate container 5ff8169cb44a
     ---> f7e9027a0ba5
    Step 4/5 : VOLUME /opt/tests/e2e/target
     ---> Running in d6d3b69a9789
    Removing intermediate container d6d3b69a9789
     ---> a8a10a0ecff6
    Step 5/5 : RUN yarn install
     ---> Running in 4e7938ee2c23
    yarn install v1.5.1
    [1/4] Resolving packages...
    [2/4] Fetching packages...
    [3/4] Linking dependencies...
    [4/4] Building fresh packages...
    Done in 2.38s.
    Removing intermediate container 4e7938ee2c23
     ---> 0cee76b31526
    Successfully built 0cee76b31526
    Successfully tagged todos/e2e:latest

Creating the application cluster

Now that we have our Docker images, we want to use them. We could just start them up manually every time, but that’s a lot of hassle. Instead we will use Docker Compose to orchestrate this.

Note: Docker Compose allows you to define a set of Docker containers that are all started up together as one cluster.

For our application, we will write a docker-compose.yml file as follows:

    version: '3'
    services:
        todos-mongo:
            image: mongo
            ports:
                - "127.0.0.1:27017:27017"
        todos-backend:
            image: todos/backend:latest
            ports:
                - "127.0.0.1:4000:4000"
            environment:
                MONGODB_URL: mongodb://todos-mongo:27017
                MONGODB_DATABASE: todos
        todos-webapp:
            image: todos/webapp:latest
            ports:
                - "127.0.0.1:3000:80"
            environment:
                API_URI: http://localhost:4000

This starts up three containers – our todos/backend and todos/webapp ones that we have just built, and a mongo image to act as the database. It also configures the todos/backend container to know where the database is, and the todos/webapp container to know where the backend is.

At this point, it’s possible to execute docker-compose up and visit http://localhost:3000 to see a fully working application:

Todo List Preview

Creating the test cluster

Finally, we want to create a cluster that extends this and allows us to run the tests against it. Fortunately, Docker Compose allows for multiple configuration files to be used together, and it will combine them.

For this, we will write a docker-compose.e2e.yml alongside our previous docker-compose.yml file, as follows:

    version: '3'
    services:
        todos-webapp:
            environment:
                API_URI: http://todos-backend:4000
        todos-selenium-chrome:
            image: selenium/standalone-chrome
        todos-e2e:
            image: todos/e2e:latest
            volumes:
                - ./target:/opt/tests/e2e/target
            command: ./wait-for-it.sh todos-selenium-chrome:4444 -- ./wait-for-it.sh todos-webapp:80 -- ./wait-for-it.sh todos-backend:4000 -- ./node_modules/.bin/nightwatch -e integration_chrome

You will notice that the todos-e2e container has a complicated command that is a chain of several calls to wait-for-it.sh. This ensures that the various components we depend on are all available and running before we run our tests.
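wait-for-it.sh is essentially a retry loop: attempt to reach a host and port, wait, and try again until either the port accepts connections or a timeout expires. The same retry-until-success shape, sketched generically in JavaScript (synchronous and without real sockets, for illustration only):

```javascript
// Keep invoking attempt() until it succeeds or maxAttempts is
// exhausted - the pattern wait-for-it.sh applies to TCP ports.
function retryUntilReady(attempt, maxAttempts) {
    let lastError;
    for (let i = 1; i <= maxAttempts; i += 1) {
        try {
            return { result: attempt(), attempts: i };
        } catch (err) {
            lastError = err;
        }
    }
    throw lastError;
}

// Hypothetical dependency that only comes up on the third attempt.
let calls = 0;
const outcome = retryUntilReady(() => {
    calls += 1;
    if (calls < 3) {
        throw new Error('connection refused');
    }
    return 'ready';
}, 10);
// outcome => { result: 'ready', attempts: 3 }
```

Chaining several of these checks, as the command line does, guarantees each dependency is listening before the next check – and finally the test runner – starts.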

We also specify a volumes entry. This ensures that the reports from the tests are available on the filesystem outside of the container, which is essential to know what happened and diagnose any problems.

We can now run the tests by executing:

    $ docker-compose -f docker-compose.yml -f docker-compose.e2e.yml up --exit-code-from todos-e2e

The --exit-code-from flag here is the special trick. It will cause Docker Compose to start all of the containers, but when the command from todos-e2e finishes it will then shut everything down again. At the same time, the exit code from this container is used as the exit code from the entire command, meaning that – for example – it will cause builds to fail if this container returns a failing exit code.

Conclusion

This article highlights a way that Docker and Docker Compose can be used to produce a 100% repeatable end-to-end testing environment, either on a developer’s workstation or on a CI system. The only requirements are a working Docker setup and access to the images that were built as part of the individual applications.

Our setup only has three layers, but the only limit is your imagination (and your system resources). Why not try expanding on the tests here, or adding more complexity to the infrastructure – maybe a Redis cache as well as the MongoDB data store?

Full source code for this application is available on GitHub.