
The Python package testcontainers solves two problems that are common to Python-based applications. We develop Python applications and deploy them using the AWS ECS-CLI, which means we deploy a docker-compose configuration directly into AWS ECS. That configuration needs to be tested locally as well, and I haven't found a proper solution for that other than the testcontainers package.
If you don't work with docker-compose but with Kubernetes or some other Docker orchestrator, you will almost certainly encounter the second use case: spinning up a local container with Postgres, NGINX or Redis to run a small integration-level test with real queries and real responses. On deployment, those services might be replaced with managed cloud services or with external dependencies owned by other teams. That is the case testcontainers was apparently designed for.
Of course, you can always use a combination of bash scripts and Pytest for both use cases, but for Python-based applications, my feeling is that test code should be written in Python as well.
In this post, I will walk you through some code snippets rather than a complete example as usual. I believe this is so dependent on your use case that it's better to see how things work in general and how you could structure such tests with testcontainers.
Let’s look at a simple example. Say you have an app that talks to a Postgres database, which might be an AWS RDS or some independently deployed instance.
The docker-compose.yml might look like this.
version: "3"
services:
  app:
    image: "${APP_IMAGE}"
    ports:
      - "8000:8000"
    networks:
      - backend
    environment:
      - DB_HOST=postgres
  postgres:
    image: "postgres:11.3-alpine"
    ports:
      - "5432:5432"
    networks:
      - backend
networks:
  backend:
    driver: "bridge"
Of course, ${APP_IMAGE} should be replaced with your application's image name, exported as an environment variable. So you could start this with
$ export APP_IMAGE=my-image:0.0.2; docker-compose up
Being thorough developers, we already have a bunch of unit tests in place, some of which mock away the Postgres instance. Now let's test the configuration. For that, we can start by installing testcontainers via pip.
$ pip install testcontainers
Or with pipenv
$ pipenv install testcontainers
A first test could look like this.
import time

import testcontainers.compose

COMPOSE_PATH = "ecs/compose"  # the folder containing docker-compose.yml

def get_db_conn():
    """Function returning the DB psycopg2 connection."""
    ...
    return conn

def setup_module():
    compose = testcontainers.compose.DockerCompose(COMPOSE_PATH)
    compose.start()
    time.sleep(10)  # give the containers some time to spin up
    return compose, get_db_conn()

def teardown_module(compose):
    compose.stop()

def test_db():
    compose, conn = setup_module()
    # Test 1: Check DB accepts connections
    cur = conn.cursor()
    cur.execute("SELECT 'foo'")
    assert cur.fetchone()[0] == "foo", "Database is not running"
    cur.close()
    teardown_module(compose)
I'm assuming you have some way of getting a DB connection, since your app needs one as well. In fact, I would find it even better if you imported and reused the same function your app uses.
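For illustration, here is a minimal sketch of such a helper built on psycopg2; the host, port, credentials and database name are assumptions on my part and need to match whatever your compose file actually configures.
import psycopg2

def get_db_conn():
    """Return a psycopg2 connection to the Postgres container."""
    return psycopg2.connect(
        host="localhost",     # port 5432 is published on localhost by the compose file
        port=5432,
        user="postgres",      # default user of the official Postgres image
        password="postgres",  # assumed; would be set via POSTGRES_PASSWORD in the compose file
        dbname="postgres",
    )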
If you run this test with pytest, you should see that your database is running and accepting connections. Next, we can check whether the app responds to HTTP GET requests. For that, I like the requests package.
import requests

import testcontainers.compose

…

def test_db_api():
    compose, conn = setup_module()
    # Test 1: Check DB accepts connections
    …
    # Test 2: Check that the API started
    r = requests.get("http://localhost:8000")
    assert r.status_code == 200, "API did not start correctly"
    teardown_module(compose)
Again, you can run this with pytest. Once we know both Docker containers start and are functioning, let's finally check whether they work together. Of course, that only makes sense if your app actually triggers some write to the database. Let's assume for this example that a POST request to the app creates a new object and commits it to the database.
    # Test 3: App accepts POST requests (requires `import json` at the top of the file)
    r = requests.post(
        "http://localhost:8000/request",
        data=json.dumps({"foo": ["bar"]}),
    )
    assert r.status_code == 200
    # Test 4: Data gets inserted into the DB
    cur = conn.cursor()
    cur.execute("SELECT * FROM foo")
    assert cur.fetchone()[0] == "bar", "Database did not get the record"
    cur.close()
And voilà, we have tested the complete workflow of a basic app inside a local docker-compose configuration, using only Python.
Note that we do need docker-compose installed. If you're worried about that, you can add it as a dependency of the project, in your dev-requirements.txt for instance, since docker-compose is pip-installable.
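As a sketch, such a dev-requirements.txt could simply list docker-compose alongside the rest of the test tooling:
# dev-requirements.txt (example; pin versions as appropriate for your project)
docker-compose
testcontainers
pytest
requests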
Also, notice that the setup function is called setup_module(). We can use pytest's usual setup/teardown (or fixture) functionality so that we don't have to call those functions manually in each test.
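A minimal sketch of what that could look like with a module-scoped pytest fixture, reusing COMPOSE_PATH and get_db_conn from above (the fixture name and the ten-second wait are my own choices, not prescribed by the package):
import time

import pytest
from testcontainers.compose import DockerCompose

COMPOSE_PATH = "ecs/compose"

@pytest.fixture(scope="module")
def compose():
    """Start the docker-compose stack once per test module and tear it down afterwards."""
    stack = DockerCompose(COMPOSE_PATH)
    stack.start()
    time.sleep(10)  # crude wait for the containers to come up
    yield stack
    stack.stop()

def test_db(compose):
    conn = get_db_conn()
    cur = conn.cursor()
    cur.execute("SELECT 'foo'")
    assert cur.fetchone()[0] == "foo", "Database is not running"
    cur.close()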
More On The Testcontainers Package
Some general notes on the testcontainers package, as I really like it, but its documentation wasn't quite up to date last time I checked.
Instead of testing your own docker-compose configuration, you can also simply let testcontainers spin up the containers for you. That's probably the best idea if you only want to test against an external dependency, the second use case I mentioned above. In that case, you don't have to write a docker-compose.yml; instead, you import the corresponding class, e.g. the Postgres one, and use that. The advantage of this approach is that those classes can have functions to wait for the container to spin up and to set up credentials on creation. If a class doesn't, I suggest you extend it accordingly to avoid that nasty time.sleep(10) line above.
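A minimal sketch of that style, assuming the PostgresContainer class; double-check the import path and methods against the documentation of your installed version:
import sqlalchemy
from testcontainers.postgres import PostgresContainer

def test_postgres_version():
    # The context manager starts the container and removes it again afterwards.
    with PostgresContainer("postgres:11.3-alpine") as postgres:
        engine = sqlalchemy.create_engine(postgres.get_connection_url())
        with engine.connect() as conn:
            row = conn.execute(sqlalchemy.text("SELECT version()")).fetchone()
            assert "PostgreSQL" in row[0]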
More Notes
- The Oracle database class is broken because Oracle removed the underlying Docker image from Docker Hub. You'll have to obtain that image yourself, via Oracle, and then hack it into the class or use the docker-compose wrapper.
- There's also a much more popular Java version of testcontainers, which supports JUnit-style tests and is how I discovered the Python version.
- More ready-to-use classes include NGINX, Redis, Selenium, etc.; you can check out the full list on GitHub.
Do I Really Need To Test That?
This is my personal opinion, but it has worked out well so far. I like to use local unit tests on the function and class level of the Python app, then local integration tests of the kind described here, as well as post-deployment tests. Some people might say unit tests and post-deployment tests are enough, and they might be right in terms of test coverage.
But for me, what really takes it home is debugging and development speed. Both are accelerated quite a bit by pushing more production-like testing onto my local machine. With post-deployment tests, you usually have to dig through log files in various places, while with testcontainers you can simply debug right into the running setup. So with that in mind, I like to use:
- Unit tests in the app code, where I mock external dependencies like AWS S3 or a Postgres, e.g. by straight mocking/stubbing or by using moto or localstack. That makes sure my own code works, but tells me nothing about the network level.
- Locally run integration tests using testcontainers, where I use the framework either to "mock" a database that is later replaced by an RDS instance, or to run the future cluster on my laptop.
- Post-deployment tests, which are then only needed to figure out the quirks of the cloud provider's magic.
I hope this helps you find a few blind spots or makes you debug and develop faster. Enjoy!
Resources
- The AWS ECS-CLI can be found on GitHub (https://github.com/aws/amazon-ecs-cli).
- The testcontainers framework for Python can be found at https://github.com/testcontainers/testcontainers-python, and the Java version at https://github.com/testcontainers/testcontainers-java.