
Steps to Follow When Creating an App

In order to run your app on Akinon Commerce Cloud (ACC), it needs to be containerized. How this is done depends on the language and the framework you are using. You can search for "dockerize $language $framework" to find out the best practices when containerizing your app.

ACC builds your application as a Docker container, then runs it on a Kubernetes cluster. Therefore, we need a couple of pieces of information from you: how to build your app, and how to run it.

However, ACC doesn't support Dockerfiles. Instead, it expects an app manifest consisting of two files: akinon.json and Procfile.

akinon.json

This file describes what your app is, how it can be built and run, and what it needs to run properly (database, broker, etc.). This file must exist at the root of the app repo.

A minimal example of akinon.json is as follows:

{
  "name": "mcapp-9000",
  "description": "My cool app",
  "scripts": {
    "build": "build.sh"
  },
  "runtime": "python:3.10-slim",
  "formation": {
    "web": {
      "healthcheck": "/healthz"
    }
  },
  "...": "..."
}

Name and description are pretty self-explanatory: they're used to identify your app in the UI.

Scripts are used to build & run your app in different stages of its lifecycle. Since they're executed inside the container, they can use any shell available in the container (sh, bash, xonsh, etc.).

We'll explore other fields and their uses in the following sections.

Converting a Dockerfile into a build script

ACC abstracts the Dockerfile into a script that is run while building the Docker container. It effectively combines all RUN statements into one. This prevents users from generating a massive Docker image, often caused by creating too many layers by mistake.

It's best to start from a Dockerfile and later convert it to an app manifest that ACC can understand.

Take this minimal (though not optimized) Dockerfile as an example of a Python application served by Gunicorn.

# Dockerfile
FROM python:3.10-slim

# python:3.10-slim is Debian-based, so we install the Debian package names
RUN apt-get update && apt-get install -y python3-dev libpq-dev libjpeg-dev g++
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["gunicorn", "app:app", "-b", "0.0.0.0:8000"]

We have a number of RUN commands in the Dockerfile, and a CMD instruction that starts the main process.

To create a build script, we combine all RUN statements. Files inside the repo are copied to the Docker container, and the script is run at the root of the repo.

#!/bin/bash
# build.sh
set -euo pipefail

apt-get update && apt-get install -y python3-dev libpq-dev libjpeg-dev g++
pip install -r requirements.txt

# Remember to clean up unnecessary files to reduce container size.
# For example, g++ and python3-dev are only needed at build time, so they
# can be uninstalled. This reduces the container size significantly
# and speeds up builds & deployments.

apt-get remove -y python3-dev g++

Then this is specified in akinon.json under $.scripts.build, and the base image under $.runtime:

{
  "...": "...",
  "scripts": {
    "build": "build.sh" // runs `sh build.sh` at the repo root
  },
  "runtime": "python:3.10-slim" // used as the base image for the Docker container
}

This also means you can put your scripts under a subdirectory and point to it in $.scripts.build:

{
  "...": "...",
  "scripts": {
    "build": "./scripts/build.sh"
  }
}

Release script

The release script is an optional script that runs just before your app is deployed. It is usually used to run migrations.

Keep in mind that this script can and likely will be run multiple times, so it must be idempotent: it should expect that some changes have already been made by previous runs.

{
  "...": "...",
  "scripts": {
    "build": "build.sh",
    "release": "release.sh"
  }
}

Its contents could look like this:

#!/bin/bash
# release.sh
set -euo pipefail

# Only run migrations if the database is not already migrated.
# This check is usually performed by the ORM you're using.

is_migrated() {
    # ...
    if test "$migrated" -eq 0; then
        return 0 # yes
    else
        return 1 # no
    fi
}

is_migrated || migrate

Formations and Procfile

The formation defines how the app is deployed on a Kubernetes cluster: how many replicas it starts with and how many instances it can scale to.

{
  "...": "...",
  "formation": {
    "web": {
      "min": 2, // app is deployed with 2 replicas
      "max": "auto", // and scales up as needed
      "healthcheck": "/healthz"
    },
    "beat": {
      "min": 1, // only a single instance is deployed
      "max": 1
    },
    "worker": {
      "min": 1, // only a single instance is deployed
      "max": "auto" // but can scale up as needed
    }
  }
}

The keys of the formation object are the names of the processes defined in the Procfile. We expect to find this file at the repo root.

Procfile is a file that contains a list of processes that can be run inside the container.

# format: <process_name>: <command>
web: gunicorn app:app -b 0.0.0.0:8008
worker: python worker.py

This command is used as the CMD statement in the Dockerfile.

For example, if you have a Django app, it is usually served by Gunicorn. But it also needs a worker process to run background tasks.

So we define two processes, each of which will be deployed as a separate container. This means they cannot share state with each other without a shared database or a broker like Redis.
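For instance, such a Django app's Procfile could look like the sketch below. The project name (myproject) and the use of Celery for the worker are assumptions for illustration, not part of the original example:

```
web: gunicorn myproject.wsgi:application -b 0.0.0.0:8008
worker: celery -A myproject worker --loglevel=info
```

Both processes are built from the same container image; only the command differs.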

Healthcheck

If the app is expected to be accessed from the internet, it must have a web process which listens on all interfaces (0.0.0.0) at port 8008, and it must have a healthcheck endpoint.

A healthcheck endpoint is a path that responds to HTTP GET requests to confirm whether the app is ready to receive traffic.

It is equivalent to the following command run inside the container:

curl -XGET http://localhost:$PORT/healthz

After deployment, once the app starts returning HTTP 200 responses from this endpoint, it is assumed to be healthy and ready to serve traffic.

This means that if the app is not ready to serve traffic (for example, it cannot access the database or other upstream services & APIs it depends on), it should not return HTTP 200 responses from this endpoint. If it does, traffic will be routed to the app, which will most likely crash and return HTTP 500 responses.

{
  "...": "...",
  "formation": {
    "web": {
      "...": "...",
      "healthcheck": "/healthz"
    },
    "...": "..."
  }
}
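As a minimal sketch, a healthcheck endpoint can be wired up with nothing but the Python standard library. check_dependencies below is a hypothetical stand-in for real checks, such as running SELECT 1 against the database configured via the addon variables:

```python
def check_dependencies():
    # Hypothetical: open a DB connection using the DB_* variables and run
    # SELECT 1; here we simply assume everything is reachable.
    return True

def app(environ, start_response):
    # Plain WSGI app with a /healthz endpoint; servable by Gunicorn as `app:app`
    if environ.get("PATH_INFO") == "/healthz":
        if check_dependencies():
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"ok"]
        start_response("503 Service Unavailable", [("Content-Type", "text/plain")])
        return [b"not ready"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

Returning 503 while a dependency is down keeps the platform from routing traffic to an instance that would only produce errors.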

Addons

Now with all this information, the app can be deployed. But most apps require a database to store state. This is provided by addons.

It's an array of objects, each of which defines an addon (with optional configuration).

{
  "...": "...",
  "addons": [
    {
      "plan": "postgresql"
      // "options": {
      //   "instance_type": "db.r5.large",
      //   "instance_count": 1
      // }
    }
  ]
}

Each addon causes a number of environment variables to be passed to the container, and the app needs to read these environment variables to configure itself.

You can define the same type of addon multiple times, but each instance must have a different role defined in the as field.

{
  "...": "...",
  "addons": [
    {
      "plan": "redis",
      "as": "cache"
    },
    {
      "plan": "redis",
      "as": "broker"
    }
  ]
}

This will cause environment variables to have different prefixes. In this case, the app will have two different Redis instances, and their details will be stored in CACHE_* and BROKER_* environment variables.
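The app can assemble both DSNs from those prefixed variables. A sketch, assuming the *_HOST, *_PORT and *_DATABASE_INDEX variables described in the Redis addon section below; the concrete hostnames are illustrative, not values ACC guarantees:

```python
import os

def redis_dsn(prefix):
    # Build a Redis DSN from the <PREFIX>_HOST/_PORT/_DATABASE_INDEX variables
    host = os.environ[f"{prefix}_HOST"]
    port = os.environ[f"{prefix}_PORT"]
    index = os.environ.get(f"{prefix}_DATABASE_INDEX", "0")
    return f"redis://{host}:{port}/{index}"

# Illustrative values, as the two addons would inject them
os.environ.setdefault("CACHE_HOST", "cache.internal")
os.environ.setdefault("CACHE_PORT", "6379")
os.environ.setdefault("BROKER_HOST", "broker.internal")
os.environ.setdefault("BROKER_PORT", "6379")

CACHE_URL = redis_dsn("CACHE")
BROKER_URL = redis_dsn("BROKER")
```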

PostgreSQL addon

This addon provides a PostgreSQL database. It can be defined as follows:

{
  "plan": "postgresql"
  // "as": "db", // optional, set to "db" by default
  // "options": {
  //   "instance_type": "db.r5.large",
  //   "instance_count": 1
  // }
}

This will pass the following environment variables to the container:

  • DB_HOST: the hostname of the database
  • DB_PORT: the port to connect to the host
  • DB_NAME: the name of the database
  • DB_USER: the username to connect to the database
  • DB_PASSWORD: the password of the user
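A minimal sketch of turning these variables into a single connection string. The postgresql:// URL form is a common convention rather than something ACC mandates, and the concrete values below are illustrative:

```python
import os
from urllib.parse import quote

# Illustrative values, as the addon would inject them
os.environ.setdefault("DB_HOST", "db.internal")
os.environ.setdefault("DB_PORT", "5432")
os.environ.setdefault("DB_NAME", "mcapp")
os.environ.setdefault("DB_USER", "mcapp")
os.environ.setdefault("DB_PASSWORD", "s3cret")

def database_url():
    # Quote the password in case it contains URL-reserved characters
    return "postgresql://{user}:{password}@{host}:{port}/{name}".format(
        user=os.environ["DB_USER"],
        password=quote(os.environ["DB_PASSWORD"], safe=""),
        host=os.environ["DB_HOST"],
        port=os.environ["DB_PORT"],
        name=os.environ["DB_NAME"],
    )
```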

Redis addon

This addon provides a Redis instance to the app. It can be defined as follows:

{
  "plan": "redis"
  // "as": "cache", // optional, set to "cache" by default
  // "options": {
  //   "instance_type": "cache.r4.large",
  //   "instance_count": 1
  // }
}

This will pass the following environment variables to the container:

  • CACHE_HOST: the hostname of the Redis instance
  • CACHE_PORT: the port to connect to the host
  • CACHE_DATABASE_INDEX: the index of the database to use

Combining these, you'll need to prepare a Redis DSN as follows:

redis://$CACHE_HOST:$CACHE_PORT/$CACHE_DATABASE_INDEX

Sentry addon

This addon provides a Sentry DSN to send error logs to. It can be defined as follows:

{
"plan": "sentry"
}

This will pass the following environment variables to the container:

  • SENTRY_DSN: the DSN of the Sentry instance

Mail addon

This addon provides SMTP details to allow the app to send emails. It's included with every app by default, so it doesn't need to be defined in akinon.json.

This will pass the following environment variables to the container:

  • EMAIL_HOST: the hostname of the SMTP server
  • EMAIL_PORT: the port to connect to the host
  • EMAIL_HOST_USER: the username to connect to the SMTP server
  • EMAIL_HOST_PASSWORD: the password of the user
  • EMAIL_USE_TLS: whether to use TLS or not
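Note that EMAIL_USE_TLS arrives as a string like any environment variable, so the app must parse it into a boolean itself. A small sketch, assuming common truthy spellings:

```python
import os

def email_use_tls():
    # Environment variables are always strings; map common truthy
    # spellings ("True", "true", "1", "yes") to a boolean
    value = os.environ.get("EMAIL_USE_TLS", "False")
    return value.strip().lower() in ("1", "true", "yes")
```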

CDN addon

This addon provides a CDN instance to the app. It can be defined as follows:

{
  "plan": "cdn"
  // "scope": "project" // optional
}

This will pass the following environment variables to the container:

  • CDN_DOMAIN: the domain of the CDN server
  • S3_BUCKET_NAME: the name of the AWS S3 bucket
  • S3_REGION_NAME: the AWS S3 region
  • AWS_ACCESS_KEY_ID: the access key ID of the AWS account
  • S3_SIGNATURE_VERSION: the AWS S3 signature version
  • AWS_SECRET_ACCESS_KEY: the secret access key of the AWS account
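If the app uses django-storages' S3 backend, these variables map onto its settings roughly as follows. The setting names are django-storages conventions and an assumption here, not something ACC mandates, and the concrete values are illustrative:

```python
import os

# Illustrative values, as the addon would inject them
os.environ.setdefault("CDN_DOMAIN", "cdn.example.com")
os.environ.setdefault("S3_BUCKET_NAME", "mcapp-media")
os.environ.setdefault("S3_REGION_NAME", "eu-west-1")
os.environ.setdefault("S3_SIGNATURE_VERSION", "s3v4")
os.environ.setdefault("AWS_ACCESS_KEY_ID", "example-key-id")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "example-secret")

# django-storages (S3Boto3Storage) style settings
AWS_S3_CUSTOM_DOMAIN = os.environ["CDN_DOMAIN"]
AWS_STORAGE_BUCKET_NAME = os.environ["S3_BUCKET_NAME"]
AWS_S3_REGION_NAME = os.environ["S3_REGION_NAME"]
AWS_S3_SIGNATURE_VERSION = os.environ["S3_SIGNATURE_VERSION"]
AWS_ACCESS_KEY_ID = os.environ["AWS_ACCESS_KEY_ID"]
AWS_SECRET_ACCESS_KEY = os.environ["AWS_SECRET_ACCESS_KEY"]
```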

Static CDN addon

This addon provides a Static CDN instance to the app. It can be defined as follows:

{
"plan": "static_cdn"
}

This will pass the following environment variables to the container:

  • BASE_STATIC_URL: the URL of the Static CDN server

Elasticsearch addon

This addon provides an Elasticsearch instance to the app. It can be defined as follows:

{
  "plan": "elasticsearch"
  // "options": {
  //   "version": "5.5",
  //   "instance_type": "t2.medium.elasticsearch",
  //   "instance_count": 1
  // }
}

This will pass the following environment variables to the container:

  • ES_HOST: the hostname of the Elasticsearch server
  • ES_PORT: the port to connect to the host