Posts in this Series
- Getting Started
- Static Code Analysis
- Unit Testing
- Functional Testing - What you're reading now.
Previously, in the Unit Testing article, we added cloud-radar to unit test our template. Today we will look at functional testing our CloudFormation template using Cloud-Radar.
Functional Testing
Functional testing for infrastructure means deploying your infrastructure and then interacting with it to make sure it behaves as expected. Some teams skip this step and rely on functional testing of their application to verify the infrastructure. I prefer to test the infrastructure separately because when tests fail at the application layer, it's not always obvious whether the problem is the application or the infra. The requirements for the infra may also differ from the application's. For example, our template is meant to be reusable and deployed in many AWS regions, while testing of the application would most likely not span multiple regions.
If you need a template to test, you can try out the companion repo and the functional-test-start branch.
Setup
You will need to configure AWS credentials in order to deploy the template in multiple regions. Check out the boto3 docs for help with configuring credentials.
Since functional tests take some time to complete, we usually don't want them to run by default. In pytest we can control which tests run using markers.
```shell
touch tests/conftest.py
```
In conftest.py, let's create an e2e marker.
```python
import pytest


def pytest_configure(config):
    # Register the custom marker so pytest doesn't warn about unknown marks.
    config.addinivalue_line("markers", "e2e: Run tests that deploy resources on AWS.")
```
Now if we run pytest with `-m e2e`, no tests should be selected, since we haven't marked any yet.
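To see the marker filter in action, these standard pytest invocations select or skip the marked tests:

```shell
# Run only tests marked with @pytest.mark.e2e
pytest -m e2e

# Run everything except the e2e tests
pytest -m 'not e2e'
```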
Writing Tests
cloud-radar uses AWS taskcat under the hood to handle creating and managing stacks across multiple regions. Since most of the tedious details, like resource names, were already covered by unit tests, our functional tests will be very simple.
Let's create a directory structure to hold our tests.
```shell
mkdir -p tests/functional
touch tests/functional/test_stacks.py
```
In test_stacks.py we will start with the imports and fixtures.
```python
from pathlib import Path
from typing import Dict, List

import pytest
from cloud_radar.cf.e2e._stack import Stack


@pytest.fixture(scope="session")
def template_path() -> Path:
    base_path = Path(__file__).parent
    template_path = base_path / Path("../../templates/log-bucket.template.yaml")
    return template_path.resolve()


@pytest.fixture()
def default_params() -> Dict[str, str]:
    parameters = {
        "BucketPrefix": "taskcat-$[taskcat_random-string]",
        "KeepBucket": "FALSE",
    }
    return parameters


@pytest.fixture()
def regions() -> List[str]:
    return ["us-west-1", "us-west-2"]
```
Let's add our first test. It just checks that a bucket is created and then destroyed along with the stack.
```python
@pytest.mark.e2e
def test_ephemeral_bucket(template_path: Path, default_params, regions):
    buckets = []

    with Stack(template_path, default_params, regions) as stacks:
        for stack in stacks:
            session = stack.region.session
            s3 = session.resource("s3")

            bucket_name = ""
            for output in stack.outputs:
                if output.key == "LogsBucketName":
                    bucket_name = output.value
                    break

            bucket = s3.Bucket(bucket_name)
            bucket.wait_until_exists()
            buckets.append(bucket)

        assert len(stacks) == 2

    for bucket in buckets:
        bucket.wait_until_not_exists()
```
Now let's run our test.
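Assuming the e2e marker from our conftest.py, the run looks something like:

```shell
pytest -m e2e tests/functional/test_stacks.py
```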
LogsBucketPolicy Invalid policy syntax.
😦 I have a bug. This actually took a long time to track down and ended up in this PR to the AWS docs repo.
If I were doing this for a company that was going to test hundreds of CloudFormation templates, I would create a test function that, given a policy document, validates that it is correct. Since this is only an example repo, I'm going to update my template and my now-failing unit tests and move on.
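As a sketch of what such a helper could look like (the function name and the structural rules below are my own invention, not from the repo; a production version might call the AWS Access Analyzer ValidatePolicy API instead of hand-rolled checks):

```python
import json


def validate_bucket_policy(policy_document: str) -> list:
    """Return a list of problems found in a JSON policy document.

    A minimal structural check only: it verifies the document parses as
    JSON and that each statement carries the required keys.
    """
    problems = []
    try:
        policy = json.loads(policy_document)
    except json.JSONDecodeError as err:
        return [f"Policy is not valid JSON: {err}"]

    if "Statement" not in policy:
        return ["Policy is missing a Statement block"]

    statements = policy["Statement"]
    # A single statement may be written as a bare object instead of a list.
    if isinstance(statements, dict):
        statements = [statements]

    for i, statement in enumerate(statements):
        for key in ("Effect", "Action", "Resource"):
            if key not in statement:
                problems.append(f"Statement {i} is missing '{key}'")
        if statement.get("Effect") not in ("Allow", "Deny"):
            problems.append(f"Statement {i} has an invalid Effect")

    return problems
```

A helper like this could run in the unit-test suite, catching syntax problems long before a slow e2e deploy does.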
In our next test we will create the stacks, delete the stacks, and then verify that the buckets were not deleted.
```python
@pytest.mark.e2e
def test_retain_bucket(template_path: Path, default_params, regions):
    default_params["KeepBucket"] = "TRUE"

    with Stack(template_path, default_params, regions) as stacks:
        pass

    # The stacks are deleted now, but the buckets should remain.
    for stack in stacks:
        session = stack.region.session
        s3 = session.resource("s3")

        for output in stack.outputs:
            if output.key == "LogsBucketName":
                bucket = s3.Bucket(output.value)
                bucket.wait_until_exists()

                # Clean up the retained bucket ourselves.
                bucket.delete()
                bucket.wait_until_not_exists()
                break

    assert len(stacks) == 2
```
We can now run our e2e tests.
Awesome! I'm now 100% confident in how this template will behave. Let's update pre-commit so we are not running the e2e tests on every commit, since they can sometimes take a while to complete.
```diff
git diff .pre-commit-config.yaml
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 0b631cc..b532e0d 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -20,7 +20,7 @@ repos:
       - id: pytest
         name: pytest
-        entry: pytest
+        entry: pytest -m 'not e2e'
         language: system
         pass_filenames: false
         types_or: [python, yaml]
```
All that's left is to commit and push. If you have been following along with the companion repo, your branch should look similar to this. This will be the last section for now. I hope this guide has inspired you to add some linting and testing to your CloudFormation repos. If you would like to see me create a CI/CD pipeline using Jenkins or GitHub Actions, let me know.