# Integrations

## Docker

![Overview](assets/img/integration-docker.svg){ align=center }
Docker integration

Using BunkerWeb as a Docker container is a quick and easy way to test and use it, as long as you are familiar with Docker.

We provide ready-to-use prebuilt images for x64 and x86 architectures (armv7 and armv8 are not supported at the moment) on Docker Hub :

```shell
docker pull bunkerity/bunkerweb:1.4.0
```

Alternatively, you can build the Docker image directly from the source (and grab a coffee, because it may take a while depending on your hardware) :

```shell
git clone https://github.com/bunkerity/bunkerweb.git && \
cd bunkerweb && \
docker build -t my-bunkerweb .
```

Usage and configuration of the BunkerWeb container are based on :

- Environment variables to configure BunkerWeb and meet your use cases
- Volume to cache important data and mount custom configuration files
- Networks to expose ports for clients and connect to upstream web services
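
Putting these three pieces together, here is a minimal sketch of a single `docker run` command (the setting name MY_SETTING, the bw-data volume and the mynetwork network are placeholders, each detailed in the subsections below) :

```shell
# minimal sketch : one setting, the persistent data volume and a user-defined network
# MY_SETTING, bw-data and mynetwork are placeholders explained in the next subsections
docker run -d \
    --name mybunker \
    -e MY_SETTING=value \
    -v bw-data:/data \
    --network mynetwork \
    -p 80:8080 \
    -p 443:8443 \
    bunkerity/bunkerweb:1.4.0
```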

### Environment variables

Settings are passed to BunkerWeb using Docker environment variables. You can use the `-e` flag :

```shell
docker run \
    ...
    -e MY_SETTING=value \
    -e "MY_OTHER_SETTING=value with spaces" \
    ...
    bunkerity/bunkerweb:1.4.0
```

Here is the docker-compose equivalent :

```yaml
...
services:
  mybunker:
    image: bunkerity/bunkerweb:1.4.0
    environment:
      - MY_SETTING=value
```

!!! info "Full list" For the complete list of environment variables, see the settings section of the documentation.

### Volume

A volume is used to share data with BunkerWeb and store persistent data like certificates, cached files, and so on.

The easiest way of managing the volume is by using a named one. You will first need to create it :

```shell
docker volume create bw-data
```

Once it's created, you can mount it on /data when running the container :

```shell
docker run \
    ...
    -v bw-data:/data \
    ...
    bunkerity/bunkerweb:1.4.0
```

Here is the docker-compose equivalent :

```yaml
...
services:
  mybunker:
    image: bunkerity/bunkerweb:1.4.0
    volumes:
      - bw-data:/data
...
volumes:
  bw-data:
```

!!! warning
    BunkerWeb runs as an unprivileged user with UID 101 and GID 101 inside the container. The reason behind this is security : in case a vulnerability is exploited, the attacker won't have full root (UID/GID 0) privileges. But there is a downside : if you use a local folder for the persistent data, you will need to set the correct permissions so the unprivileged user can write data to it. Something like that should do the trick :

    ```shell
    mkdir bw-data && \
    chown root:101 bw-data && \
    chmod 770 bw-data
    ```

Alternatively, if the folder already exists :

```shell
chown -R root:101 bw-data && \
chmod -R 770 bw-data
```

Mounting the folder :

```shell
docker run \
    ...
    -v ./bw-data:/data \
    ...
    bunkerity/bunkerweb:1.4.0
```

Here is the docker-compose equivalent :


```yaml
...
services:
  mybunker:
    image: bunkerity/bunkerweb:1.4.0
    volumes:
      - ./bw-data:/data
```

### Networks

The easiest way to connect BunkerWeb to web applications is by using Docker networks.

First of all, you will need to create a network :

```shell
docker network create mynetwork
```

Once it's created, you will need to connect the container to that network :

```shell
docker run \
    ...
    --network mynetwork \
    ...
    bunkerity/bunkerweb:1.4.0
```

You will also need to do the same with your web application(s). Please note that the other containers are accessible using their name as the hostname.
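
For example, assuming your application runs in a container named myapp built from the illustrative mywebapp:4.2 image used later in this documentation, you would attach it to the same network so BunkerWeb can reach it using the hostname myapp :

```shell
# illustrative application container attached to the shared network
# the container name (myapp) becomes the hostname BunkerWeb can use
docker run -d \
    --name myapp \
    --network mynetwork \
    mywebapp:4.2
```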

Here is the docker-compose equivalent :

```yaml
...
services:
  mybunker:
    image: bunkerity/bunkerweb:1.4.0
    networks:
      - bw-net
...
networks:
  bw-net:
```

## Docker autoconf

![Overview](assets/img/integration-autoconf.svg){ align=center }
Docker autoconf integration

!!! info "Docker integration" The Docker autoconf integration is an "evolution" of the Docker one. Please read the Docker integration section first if needed.

The downside of using environment variables is that the container needs to be recreated each time there is an update, which is not very convenient. To counter that issue, you can use another image called autoconf which will listen for Docker events and automatically reconfigure BunkerWeb in real-time without recreating the container.

Instead of defining environment variables for the BunkerWeb container, you simply add labels to your web application containers and the autoconf will "automagically" take care of the rest.

!!! info "Multisite mode" The Docker autoconf integration implies the use of multisite mode. Please refer to the multisite section of the documentation for more information.

First of all, you will need to create the volume used to share the persistent data, mounted on /data :

```shell
docker volume create bw-data
```

Then, you can create two networks (replace 10.20.30.0/24 with an unused subnet of your choice) :

```shell
docker network create --subnet 10.20.30.0/24 bw-autoconf && \
docker network create bw-services
```

- One for communication between BunkerWeb and autoconf
- Another one for communication between BunkerWeb and web applications

You can now create the BunkerWeb container with the AUTOCONF_MODE=yes setting and the bunkerweb.AUTOCONF label (replace 10.20.30.0/24 with the subnet specified before) :

```shell
docker run \
    -d \
    --name mybunker \
    --network bw-autoconf \
    -p 80:8080 \
    -p 443:8443 \
    -e AUTOCONF_MODE=yes \
    -e MULTISITE=yes \
    -e SERVER_NAME= \
    -e "API_WHITELIST_IP=127.0.0.0/8 10.20.30.0/24" \
    -l bunkerweb.AUTOCONF \
    bunkerity/bunkerweb:1.4.0 && \
docker network connect bw-services mybunker
```

And the autoconf one :

```shell
docker run \
    -d \
    --name myautoconf \
    --network bw-autoconf \
    -v bw-data:/data \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    bunkerity/bunkerweb-autoconf:1.4.0
```

Here is the docker-compose equivalent for the BunkerWeb autoconf stack :

```yaml
version: '3'

services:

  mybunker:
    image: bunkerity/bunkerweb:1.4.0
    ports:
      - 80:8080
      - 443:8443
    volumes:
      - bw-data:/data
    environment:
      - AUTOCONF_MODE=yes
      - MULTISITE=yes
      - SERVER_NAME=
      - API_WHITELIST_IP=127.0.0.0/8 10.20.30.0/24
    labels:
      - "bunkerweb.AUTOCONF"
    networks:
      - bw-autoconf
      - bw-services

  myautoconf:
    image: bunkerity/bunkerweb-autoconf:1.4.0
    volumes:
      - bw-data:/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - bw-autoconf

volumes:
  bw-data:

networks:
  bw-autoconf:
    ipam:
      driver: default
      config:
        - subnet: 10.20.30.0/24
  bw-services:
    name: bw-services
```
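
To bring the stack up, you can use docker-compose and then check the autoconf logs to make sure everything started correctly (the file name docker-compose.yml is an assumption ; adapt it to wherever you saved the configuration above) :

```shell
# start the autoconf stack in the background, then follow the autoconf logs
docker-compose -f docker-compose.yml up -d && \
docker-compose -f docker-compose.yml logs -f myautoconf
```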

Once the stack is set up, you can now create the web application container and add the settings as labels using the "bunkerweb." prefix in order to automatically set up BunkerWeb :

```shell
docker run \
    -d \
    --name myapp \
    --network bw-services \
    -l bunkerweb.MY_SETTING_1=value1 \
    -l bunkerweb.MY_SETTING_2=value2 \
    ...
    mywebapp:4.2
```

Here is the docker-compose equivalent :

```yaml
...

services:

  myapp:
    image: mywebapp:4.2
    networks:
      bw-services:
        aliases:
          - myapp
    labels:
      - "bunkerweb.MY_SETTING_1=value1"
      - "bunkerweb.MY_SETTING_2=value2"

...

networks:
  bw-services:
    external:
      name: bw-services

...
```

## Swarm

![Overview](assets/img/integration-swarm.svg){ align=center }
Docker Swarm integration

!!! info "Docker autoconf" The Docker autoconf integration is similar of the Docker autoconf one (but with services instead of containers). Please read the Docker autoconf integration section first if needed.

To automatically configure BunkerWeb instances, a special service, called autoconf, will be scheduled on a manager node. That service will listen for Docker Swarm events like service creation or deletion and automatically configure the BunkerWeb instances in real-time without downtime.

Like the Docker autoconf integration, configuration for web services is defined using labels starting with the special bunkerweb. prefix.

The recommended setup is to schedule the BunkerWeb service as a global service on all worker nodes and the autoconf service as a single replicated service on a manager node.
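
Before creating the services, you can double-check which nodes are managers and which are workers, since the placement constraints used below rely on node roles :

```shell
# list the Swarm nodes and their roles (to be run from a manager node)
docker node ls
```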

First of all, you will need to create two networks (replace 10.20.30.0/24 with an unused subnet of your choice) :

```shell
docker network create -d overlay --attachable --subnet 10.20.30.0/24 bw-autoconf && \
docker network create -d overlay --attachable bw-services
```

- One for communication between BunkerWeb and autoconf
- Another one for communication between BunkerWeb and web applications

You can now create the BunkerWeb service (replace 10.20.30.0/24 with the subnet specified before) :

```shell
docker service create \
    --name mybunker \
    --mode global \
    --constraint node.role==worker \
    --network bw-autoconf \
    --network bw-services \
    -p published=80,target=8080,mode=host \
    -p published=443,target=8443,mode=host \
    -e SWARM_MODE=yes \
    -e SERVER_NAME= \
    -e MULTISITE=yes \
    -e "API_WHITELIST_IP=127.0.0.0/8 10.20.30.0/24" \
    -l bunkerweb.AUTOCONF \
    bunkerity/bunkerweb:1.4.0
```

And the autoconf one :

```shell
docker service create \
    --name myautoconf \
    --constraint node.role==manager \
    --network bw-autoconf \
    --mount type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock,ro \
    --mount type=volume,source=bw-data,destination=/data \
    -e SWARM_MODE=yes \
    bunkerity/bunkerweb-autoconf:1.4.0
```

Here is the docker-compose equivalent (using docker stack deploy) :

```yaml
version: '3.5'

services:

  mybunker:
    image: bunkerity/bunkerweb:1.4.0
    ports:
      - published: 80
        target: 8080
        mode: host
        protocol: tcp
      - published: 443
        target: 8443
        mode: host
        protocol: tcp
    environment:
      - SWARM_MODE=yes
      - SERVER_NAME=
      - MULTISITE=yes
      - API_WHITELIST_IP=127.0.0.0/8 10.20.30.0/24
    networks:
      - bw-autoconf
      - bw-services
    deploy:
      mode: global
      placement:
        constraints:
          - "node.role==worker"
      labels:
        - "bunkerweb.AUTOCONF"

  myautoconf:
    image: bunkerity/bunkerweb-autoconf:1.4.0
    environment:
      - SWARM_MODE=yes
    volumes:
      - bw-data:/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - bw-autoconf
    deploy:
      replicas: 1
      placement:
        constraints:
          - "node.role==manager"

networks:
  bw-autoconf:
    driver: overlay
    attachable: true
    name: bw-autoconf
    ipam:
      config:
        - subnet: 10.20.30.0/24
  bw-services:
    driver: overlay
    attachable: true
    name: bw-services

volumes:
  bw-data:
```
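
To deploy the stack, save the file and use docker stack deploy from a manager node, then follow the autoconf service logs (the file name bw-stack.yml and the stack name bunkerweb are assumptions) :

```shell
# deploy the stack, then follow the autoconf service logs
# service names are prefixed with the stack name, hence bunkerweb_myautoconf
docker stack deploy -c bw-stack.yml bunkerweb && \
docker service logs -f bunkerweb_myautoconf
```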

Once the BunkerWeb Swarm stack is set up and running (see autoconf logs for more information), you can now deploy web applications in the cluster and use labels to dynamically configure BunkerWeb :

```shell
docker service create \
    --name myapp \
    --network bw-services \
    -l bunkerweb.MY_SETTING_1=value1 \
    -l bunkerweb.MY_SETTING_2=value2 \
    ...
    mywebapp:4.2
```

Here is the docker-compose equivalent (using docker stack deploy) :

```yaml
...
services:
  myapp:
    image: mywebapp:4.2
    networks:
      - bw-services
    deploy:
      placement:
        constraints:
          - "node.role==worker"
      labels:
        - "bunkerweb.MY_SETTING_1=value1"
        - "bunkerweb.MY_SETTING_2=value2"
...
networks:
  bw-services:
    external:
      name: bw-services
```

## Kubernetes

![Overview](assets/img/integration-kubernetes.svg){ align=center }
Kubernetes integration

The autoconf acts as an Ingress controller and will configure the BunkerWeb instances according to the Ingress resources. It also monitors other Kubernetes objects like ConfigMap for custom configurations.

The first step to install BunkerWeb on a Kubernetes cluster is to add the ClusterRole, ServiceAccount and ClusterRoleBinding that give the autoconf read access to the cluster resources it watches :

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cr-bunkerweb
rules:
- apiGroups: [""]
  resources: ["services", "pods", "configmaps"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-bunkerweb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crb-bunkerweb
subjects:
- kind: ServiceAccount
  name: sa-bunkerweb
  namespace: default
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: cr-bunkerweb
  apiGroup: rbac.authorization.k8s.io

The recommended way of deploying BunkerWeb is using a DaemonSet which means each node in the cluster will run an instance of BunkerWeb :

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: bunkerweb
spec:
  selector:
    matchLabels:
      app: bunkerweb
  template:
    metadata:
      labels:
        app: bunkerweb
      # mandatory annotation
      annotations:
        bunkerweb.io/AUTOCONF: "yes"
    spec:
      containers:
      - name: bunkerweb
        image: bunkerity/bunkerweb
        securityContext:
          runAsUser: 101
          runAsGroup: 101
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
        ports:
        - containerPort: 8080
          hostPort: 80
        - containerPort: 8443
          hostPort: 443
        env:
        - name: KUBERNETES_MODE
          value: "yes"
        # replace with your DNS resolvers
        # e.g. : kube-dns.kube-system.svc.cluster.local
        - name: DNS_RESOLVERS
          value: "coredns.kube-system.svc.cluster.local"
        - name: USE_API
          value: "yes"
        # 10.0.0.0/8 is the cluster internal subnet
        - name: API_WHITELIST_IP
          value: "127.0.0.0/8 10.0.0.0/8"
        - name: SERVER_NAME
          value: ""
        - name: MULTISITE
          value: "yes"
        livenessProbe:
          exec:
            command:
            - /opt/bunkerweb/helpers/healthcheck.sh
          initialDelaySeconds: 30
          periodSeconds: 5
          timeoutSeconds: 1
          failureThreshold: 3
        readinessProbe:
          exec:
            command:
            - /opt/bunkerweb/helpers/healthcheck.sh
          initialDelaySeconds: 30
          periodSeconds: 1
          timeoutSeconds: 1
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: svc-bunkerweb
spec:
  clusterIP: None
  selector:
    app: bunkerweb

In order to store persistent data, you will need a PersistentVolumeClaim :

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-bunkerweb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Now, you can start the autoconf as a single replica Deployment :

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bunkerweb-controller
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: bunkerweb-controller
  template:
    metadata:
      labels:
        app: bunkerweb-controller
    spec:
      serviceAccountName: sa-bunkerweb
      volumes:
      - name: vol-bunkerweb
        persistentVolumeClaim:
          claimName: pvc-bunkerweb
      containers:
      - name: bunkerweb-controller
        image: bunkerity/bunkerweb-autoconf
        imagePullPolicy: Always
        env:
        - name: KUBERNETES_MODE
          value: "yes"
        volumeMounts:
        - name: vol-bunkerweb
          mountPath: /data

Once the BunkerWeb Kubernetes stack is set up and running (see the autoconf logs for more information), you can deploy web applications in the cluster and declare your Ingress resources. Please note that settings need to be set as annotations on the Ingress resource, using the special bunkerweb.io value for the domain part :

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    bunkerweb.io/MY_SETTING_1: "value1"
    bunkerweb.io/MY_SETTING_2: "value2"
spec:
  rules:
...
```
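
For illustration, here is a minimal sketch of what a complete Ingress resource could look like, routing a hostname to a web application Service (the host www.example.com, the Service name svc-myapp and port 80 are assumptions to adapt to your own deployment) :

```yaml
# illustrative Ingress : one BunkerWeb setting as annotation, one host routed to a Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    bunkerweb.io/MY_SETTING_1: "value1"
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-myapp
            port:
              number: 80
```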

## Linux

![Overview](assets/img/integration-linux.svg){ align=center }
Linux integration

List of supported Linux distros :

- Debian 11 "Bullseye"
- Ubuntu 22.04 "Jammy"
- Fedora 36
- CentOS Stream 8

Please note that you will need to install NGINX 1.20.2 before BunkerWeb. For all distros except Fedora, using the prebuilt packages from the official NGINX repository is mandatory. Compiling NGINX from source or using packages from other repositories won't work with the officially supported way of installing BunkerWeb on Linux.

Linux packages for BunkerWeb are available on PackageCloud, which provides a bash script to automatically add and trust the repository (but you can also follow the manual installation instructions if you prefer).

=== "Debian"

    The first step is to add the official NGINX repository :

    ```shell
    sudo apt install curl gnupg2 ca-certificates lsb-release debian-archive-keyring && \
    curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
    | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null && \
    echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
    http://nginx.org/packages/debian `lsb_release -cs` nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list
    ```

    You should now be able to install NGINX 1.20.2 :

    ```shell
    sudo apt update && \
    sudo apt install nginx=1.20.2-1~bullseye
    ```

    And finally install BunkerWeb 1.4.0 :

    ```shell
    curl -s https://packagecloud.io/install/repositories/bunkerity/bunkerweb/script.deb.sh | sudo bash && \
    sudo apt update && \
    sudo apt install -y bunkerweb=1.4.0
    ```

=== "Ubuntu"

    The first step is to add the official NGINX repository :

    ```shell
    sudo apt install curl gnupg2 ca-certificates lsb-release ubuntu-keyring && \
    curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
    | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null && \
    echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
    http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list
    ```

    You should now be able to install NGINX 1.20.2 :

    ```shell
    sudo apt update && \
    sudo apt install nginx=1.20.2-1~jammy
    ```

    And finally install BunkerWeb 1.4.0 :

    ```shell
    curl -s https://packagecloud.io/install/repositories/bunkerity/bunkerweb/script.deb.sh | sudo bash && \
    sudo apt update && \
    sudo apt install -y bunkerweb=1.4.0
    ```

=== "Fedora"

    Fedora already provides NGINX 1.20.2, which is the version we support :

    ```shell
    sudo dnf install nginx-1.20.2
    ```

    And finally install BunkerWeb 1.4.0 :

    ```shell
    curl -s https://packagecloud.io/install/repositories/bunkerity/bunkerweb/script.rpm.sh | sudo bash && \
    sudo dnf check-update && \
    sudo dnf install -y bunkerweb-1.4.0
    ```

=== "CentOS Stream"

    The first step is to add the official NGINX repository. To do so, create the following file at `/etc/yum.repos.d/nginx.repo` :

    ```conf
    [nginx-stable]
    name=nginx stable repo
    baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
    gpgcheck=1
    enabled=1
    gpgkey=https://nginx.org/keys/nginx_signing.key
    module_hotfixes=true
    ```

    You should now be able to install NGINX 1.20.2 :

    ```shell
    sudo dnf install nginx-1.20.2
    ```

    And finally install BunkerWeb 1.4.0 :

    ```shell
    sudo dnf install -y epel-release && \
    curl -s https://packagecloud.io/install/repositories/bunkerity/bunkerweb/script.rpm.sh | sudo bash && \
    sudo dnf check-update && \
    sudo dnf install -y bunkerweb-1.4.0
    ```

Configuration of BunkerWeb is done by editing the `/opt/bunkerweb/variables.env` file :

```conf
MY_SETTING_1=value1
MY_SETTING_2=value2
...
```

BunkerWeb is managed using systemctl :

- Check BunkerWeb status : `systemctl status bunkerweb`
- Reload the configuration : `systemctl reload bunkerweb`
- Start it if it's stopped : `systemctl start bunkerweb`
- Stop it if it's started : `systemctl stop bunkerweb`
- And restart : `systemctl restart bunkerweb`
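
If something doesn't behave as expected, the service logs are a good place to look. A minimal sketch, assuming the systemd unit is named bunkerweb as above :

```shell
# follow the BunkerWeb service logs through systemd's journal
journalctl -u bunkerweb -f
```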