Merge (soft) 1.4 branch into dev branch

parent f8e31f2878
commit 916caf2d6a

CHANGELOG.md (40 changes)
@@ -1,6 +1,44 @@
 # Changelog

-## v1.4.3 -
+## v1.4.6
+
+- Fix error in the UI when a service has multiple domains
+- Fix bwcli bans command
+- Fix documentation about Linux Fedora install
+- Fix DISABLE_DEFAULT_SERVER=yes not working with HTTPS
+- Add INTERCEPTED_ERROR_CODES setting
+
+## v1.4.5 - 2022/11/26
+
+- Fix bwcli syntax error
+- Fix UI not working using Linux integration
+- Fix missing openssl dep in autoconf
+- Fix typo in selfsigned job
+
+## v1.4.4 - 2022/11/10
+
+- Fix k8s controller not watching the events when there is an exception
+- Fix python dependencies bug in CentOS and Fedora
+- Fix incorrect log when reloading nginx using Linux integration
+- Fix UI dev mode, production mode is now the default
+- Fix wrong exposed port in the UI container
+- Fix endless loading in the UI
+- Fix \*_CUSTOM_CONF_\* disappearing when jobs are executed
+- Fix various typos in documentation
+- Fix warning about StartLimitIntervalSec directive when using Linux
+- Fix incorrect log when issuing certbot renew
+- Fix certbot renew error when using Linux or Docker integration
+- Add greylist core feature
+- Add BLACKLIST_IGNORE_\* settings
+- Add automatic change of SecRequestBodyLimit modsec directive based on MAX_CLIENT_SIZE setting
+- Add MODSECURITY_SEC_RULE_ENGINE and MODSECURITY_SEC_AUDIT_LOG_PARTS settings
+- Add manual ban and get bans to the API/CLI
+- Add Brawdunoir community example
+- Improve core plugins order and add documentation about it
+- Improve overall documentation
+- Improve CI/CD
+
+## v1.4.3 - 2022/08/26

 - Fix various documentation errors/typos and add various enhancements
 - Fix ui.env not read when using Linux integration
README.md (25 changes)
@@ -1,15 +1,12 @@
 <p align="center">
-  <img alt="BunkerWeb logo" src="https://github.com/bunkerity/bunkerweb/raw/master/misc/logo.png" />
+  <img alt="BunkerWeb logo" src="https://github.com/bunkerity/bunkerweb/raw/master/logo.png" />
 </p>

 <p align="center">
   <img src="https://img.shields.io/github/license/bunkerity/bunkerweb?color=40bb6b" />
   <img src="https://img.shields.io/github/release/bunkerity/bunkerweb?color=085577" />
   <img src="https://img.shields.io/github/downloads/bunkerity/bunkerweb/total">
   <img src="https://img.shields.io/docker/pulls/bunkerity/bunkerweb?color=085577">
   <img src="https://img.shields.io/badge/bunkerweb-1.4.6-blue" />
   <img src="https://img.shields.io/github/last-commit/bunkerity/bunkerweb" />
-  <img src="https://img.shields.io/github/workflow/status/bunkerity/bunkerweb/Automatic%20test%2C%20build%2C%20push%20and%20deploy%20%28DEV%29?label=CI%2FCD%20dev" />
-  <img src="https://img.shields.io/github/workflow/status/bunkerity/bunkerweb/Automatic%20test%2C%20build%2C%20push%20and%20deploy%20%28PROD%29?label=CI%2FCD%20prod" />
+  <img src="https://img.shields.io/github/actions/workflow/status/bunkerity/bunkerweb/dev.yml?label=CI%2FCD%20dev&branch=dev" />
+  <img src="https://img.shields.io/github/actions/workflow/status/bunkerity/bunkerweb/dev.yml?label=CI%2FCD%20prod" />
   <img src="https://img.shields.io/github/issues/bunkerity/bunkerweb">
   <img src="https://img.shields.io/github/issues-pr/bunkerity/bunkerweb">
 </p>

@@ -30,7 +27,7 @@

 > Make security by default great again !

-# Bunkerweb
+# BunkerWeb

 <p align="center">
   <img alt="overview" src="https://github.com/bunkerity/bunkerweb/raw/master/docs/assets/img/intro-overview.svg" />

@@ -215,7 +212,7 @@ List of supported Linux distros :

 [Ansible](https://docs.ansible.com/ansible/latest/index.html) is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.

-A specific BunkerWeb Ansible role is available on [Ansible Galaxy](https://galaxy.ansible.com/fl0ppy_d1sk/bunkerweb) (source code is available [here](https://github.com/bunkerity/bunkerweb-ansible)).
+A specific BunkerWeb Ansible role is available on [Ansible Galaxy](https://galaxy.ansible.com/bunkerity/bunkerweb) (source code is available [here](https://github.com/bunkerity/bunkerweb-ansible)).

 You will find more information in the [Ansible section](https://docs.bunkerweb.io/latest/integrations/#ansible) of the documentation.

@@ -265,12 +262,12 @@ BunkerWeb comes with a plugin system to make it possible to easily add new featu

 Here is the list of "official" plugins that we maintain (see the [bunkerweb-plugins](https://github.com/bunkerity/bunkerweb-plugins) repository for more information) :

-| Name | Version | Description | Link |
-| :--: | :-----: | :---------- | :--: |
+| Name | Version | Description | Link |
+| :--: | :-----: | :---------- | :--: |
 | **ClamAV** | 0.1 | Automatically scans uploaded files with the ClamAV antivirus engine and denies the request when a file is detected as malicious. | [bunkerweb-plugins/clamav](https://github.com/bunkerity/bunkerweb-plugins/tree/main/clamav) |
 | **CrowdSec** | 0.1 | CrowdSec bouncer for BunkerWeb. | [bunkerweb-plugins/crowdsec](https://github.com/bunkerity/bunkerweb-plugins/tree/main/crowdsec) |
-| **Discord** | 0.1 | Send security notifications to a Discord channel using a Webhook. | [bunkerweb-plugins/discord](https://github.com/bunkerity/bunkerweb-plugins/tree/main/discord) |
-| **Slack** | 0.1 | Send security notifications to a Slack channel using a Webhook. | [bunkerweb-plugins/slack](https://github.com/bunkerity/bunkerweb-plugins/tree/main/slack) |
+| **Discord** | 0.1 | Send security notifications to a Discord channel using a Webhook. | [bunkerweb-plugins/discord](https://github.com/bunkerity/bunkerweb-plugins/tree/main/discord) |
+| **Slack** | 0.1 | Send security notifications to a Slack channel using a Webhook. | [bunkerweb-plugins/slack](https://github.com/bunkerity/bunkerweb-plugins/tree/main/slack) |
 | **VirusTotal** | 0.1 | Automatically scans uploaded files with the VirusTotal API and denies the request when a file is detected as malicious. | [bunkerweb-plugins/virustotal](https://github.com/bunkerity/bunkerweb-plugins/tree/main/virustotal) |

 You will find more information in the [plugins section](https://docs.bunkerweb.io/latest/plugins) of the documentation.

@@ -309,4 +306,4 @@ If you would like to contribute to the plugins you can read the [contributing gu

 # Security policy

-We take security bugs as serious issues and encourage responsible disclosure, see our [security policy](https://github.com/bunkerity/bunkerweb/tree/master/SECURITY.md) for more information.
+We take security bugs as serious issues and encourage responsible disclosure, see our [security policy](https://github.com/bunkerity/bunkerweb/tree/master/SECURITY.md) for more information.
@@ -17,13 +17,13 @@
 sudo dnf install nginx-1.20.2
 ```

-And finally install BunkerWeb 1.4.4 :
+And finally install BunkerWeb 1.4.6 :
 ```shell
 wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
 rpm -Uvh epel-release*rpm && \
 curl -s https://packagecloud.io/install/repositories/bunkerity/bunkerweb/script.rpm.sh | sudo bash && \
 sudo dnf check-update && \
-sudo dnf install -y bunkerweb-1.4.4
+sudo dnf install -y bunkerweb-1.4.6
 ```

 To prevent upgrading NGINX and/or BunkerWeb packages when executing `dnf upgrade`, you can use the following command :
@@ -12,7 +12,7 @@ Using BunkerWeb as a [Docker](https://www.docker.com/) container is a quick and
 We provide ready-to-use prebuilt images for x64, x86, armv8 and armv7 architectures on [Docker Hub](https://hub.docker.com/r/bunkerity/bunkerweb) :

 ```shell
-docker pull bunkerity/bunkerweb:1.4.4
+docker pull bunkerity/bunkerweb:1.4.6
 ```

 Alternatively, you can build the Docker images directly from the [source](https://github.com/bunkerity/bunkerweb) (and get a coffee ☕ because it may take a long time depending on your hardware) :

@@ -39,7 +39,7 @@ docker run \
   -e MY_SETTING=value \
   -e "MY_OTHER_SETTING=value with spaces" \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent :

@@ -48,7 +48,7 @@ Here is the docker-compose equivalent :
 ...
 services:
   mybunker:
-    image: bunkerity/bunkerweb:1.4.4
+    image: bunkerity/bunkerweb:1.4.6
     environment:
       - MY_SETTING=value
 ```

@@ -73,7 +73,7 @@ docker run \
   ...
   -v bw_data:/data \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent :

@@ -82,7 +82,7 @@ Here is the docker-compose equivalent :
 ...
 services:
   mybunker:
-    image: bunkerity/bunkerweb:1.4.4
+    image: bunkerity/bunkerweb:1.4.6
     volumes:
       - bw_data:/data
     ...

@@ -152,7 +152,7 @@ docker run \
   ...
   --network mynetwork \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 You will also need to do the same with your web application(s). Please note that the other containers are accessible using their name as the hostname.

@@ -163,7 +163,7 @@ Here is the docker-compose equivalent :
 ...
 services:
   mybunker:
-    image: bunkerity/bunkerweb:1.4.4
+    image: bunkerity/bunkerweb:1.4.6
     networks:
       - bw-net
     ...

@@ -218,7 +218,7 @@ docker run \
   -e SERVER_NAME= \
   -e "API_WHITELIST_IP=127.0.0.0/8 10.20.30.0/24" \
   -l bunkerweb.AUTOCONF \
-  bunkerity/bunkerweb:1.4.4 && \
+  bunkerity/bunkerweb:1.4.6 && \

 docker network connect bw-services mybunker
 ```

@@ -235,7 +235,7 @@ docker run \
   --network bw-autoconf \
   -v bw-data:/data \
   -v /var/run/docker.sock:/var/run/docker.sock:ro \
-  bunkerity/bunkerweb-autoconf:1.4.4
+  bunkerity/bunkerweb-autoconf:1.4.6
 ```

 Here is the docker-compose equivalent for the BunkerWeb autoconf stack :

@@ -246,7 +246,7 @@ version: '3.5'
 services:

   mybunker:
-    image: bunkerity/bunkerweb:1.4.4
+    image: bunkerity/bunkerweb:1.4.6
     ports:
       - 80:8080
       - 443:8443

@@ -262,7 +262,7 @@ services:
       - bw-services

   myautoconf:
-    image: bunkerity/bunkerweb-autoconf:1.4.4
+    image: bunkerity/bunkerweb-autoconf:1.4.6
     volumes:
       - bw-data:/data
       - /var/run/docker.sock:/var/run/docker.sock:ro

@@ -364,7 +364,7 @@ docker service create \
   -e MULTISITE=yes \
   -e "API_WHITELIST_IP=127.0.0.0/8 10.20.30.0/24" \
   -l bunkerweb.AUTOCONF \
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 And the autoconf one :

@@ -378,7 +378,7 @@ docker service \
   --mount type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock,ro \
   --mount type=volume,source=bw-data,destination=/data \
   -e SWARM_MODE=yes \
-  bunkerity/bunkerweb-autoconf:1.4.4
+  bunkerity/bunkerweb-autoconf:1.4.6
 ```

 Here is the docker-compose equivalent (using `docker stack deploy`) :

@@ -389,7 +389,7 @@ version: '3.5'
 services:

   mybunker:
-    image: bunkerity/bunkerweb:1.4.4
+    image: bunkerity/bunkerweb:1.4.6
     ports:
       - published: 80
         target: 8080

@@ -416,7 +416,7 @@ services:
       - "bunkerweb.AUTOCONF"

   myautoconf:
-    image: bunkerity/bunkerweb-autoconf:1.4.4
+    image: bunkerity/bunkerweb-autoconf:1.4.6
     environment:
       - SWARM_MODE=yes
     volumes:

@@ -706,11 +706,11 @@ Repositories of Linux packages for BunkerWeb are available on [PackageCloud](htt
 sudo apt install -y nginx=1.20.2-1~$(lsb_release -cs)
 ```

-And finally install BunkerWeb 1.4.4 :
+And finally install BunkerWeb 1.4.6 :
 ```shell
 curl -s https://packagecloud.io/install/repositories/bunkerity/bunkerweb/script.deb.sh | sudo bash && \
 sudo apt update && \
-sudo apt install -y bunkerweb=1.4.4
+sudo apt install -y bunkerweb=1.4.6
 ```

 To prevent upgrading NGINX and/or BunkerWeb packages when executing `apt upgrade`, you can use the following command :
@@ -736,11 +736,11 @@ Repositories of Linux packages for BunkerWeb are available on [PackageCloud](htt
 sudo apt install -y nginx=1.20.2-1~jammy
 ```

-And finally install BunkerWeb 1.4.4 :
+And finally install BunkerWeb 1.4.6 :
 ```shell
 curl -s https://packagecloud.io/install/repositories/bunkerity/bunkerweb/script.deb.sh | sudo bash && \
 sudo apt update && \
-sudo apt install -y bunkerweb=1.4.4
+sudo apt install -y bunkerweb=1.4.6
 ```

 To prevent upgrading NGINX and/or BunkerWeb packages when executing `apt upgrade`, you can use the following command :

@@ -755,13 +755,13 @@ Repositories of Linux packages for BunkerWeb are available on [PackageCloud](htt
 sudo dnf install -y nginx-1.20.2
 ```

-And finally install BunkerWeb 1.4.4 :
+And finally install BunkerWeb 1.4.6 :
 ```shell
 curl -s https://packagecloud.io/install/repositories/bunkerity/bunkerweb/script.rpm.sh | \
   sed 's/yum install -y pygpgme --disablerepo='\''bunkerity_bunkerweb'\''/yum install -y python-gnupg/g' | \
   sed 's/pypgpme_check=`rpm -qa | grep -qw pygpgme`/python-gnupg_check=`rpm -qa | grep -qw python-gnupg`/g' | sudo bash && \
 sudo dnf makecache && \
-sudo dnf install -y bunkerweb-1.4.4
+sudo dnf install -y bunkerweb-1.4.6
 ```

 To prevent upgrading NGINX and/or BunkerWeb packages when executing `dnf upgrade`, you can use the following command :

@@ -788,12 +788,12 @@ Repositories of Linux packages for BunkerWeb are available on [PackageCloud](htt
 sudo dnf install nginx-1.20.2
 ```

-And finally install BunkerWeb 1.4.4 :
+And finally install BunkerWeb 1.4.6 :
 ```shell
 dnf install -y epel-release && \
 curl -s https://packagecloud.io/install/repositories/bunkerity/bunkerweb/script.rpm.sh | sudo bash && \
 sudo dnf check-update && \
-sudo dnf install -y bunkerweb-1.4.4
+sudo dnf install -y bunkerweb-1.4.6
 ```

 To prevent upgrading NGINX and/or BunkerWeb packages when executing `dnf upgrade`, you can use the following command :

@@ -931,7 +931,7 @@ Configuration of BunkerWeb is done by using specific role variables :

 | Name | Type | Description | Default value |
 | :--: | :--: | :---------- | :------------ |
-| `bunkerweb_version` | string | Version of BunkerWeb to install. | `1.4.4` |
+| `bunkerweb_version` | string | Version of BunkerWeb to install. | `1.4.6` |
 | `nginx_version` | string | Version of NGINX to install. | `1.20.2` |
 | `freeze_versions` | boolean | Prevent upgrade of BunkerWeb and NGINX when performing packages upgrades. | `true` |
 | `variables_env` | string | Path of the variables.env file to configure BunkerWeb. | `files/variables.env` |
@@ -53,13 +53,13 @@ The first step is to install the plugin by putting the plugin files inside the c
   ...
   -v "${PWD}/bw-data:/data" \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent :
 ```yaml
 mybunker:
-  image: bunkerity/bunkerweb:1.4.4
+  image: bunkerity/bunkerweb:1.4.6
   volumes:
     - ./bw-data:/data
   ...
@@ -54,7 +54,7 @@ You will find more settings about reverse proxy in the [settings section](/1.4/s
   -e USE_REVERSE_PROXY=yes \
   -e REVERSE_PROXY_URL=/ \
   -e REVERSE_PROXY_HOST=http://myapp \
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent :

@@ -64,7 +64,7 @@ You will find more settings about reverse proxy in the [settings section](/1.4/s
 services:

   mybunker:
-    image: bunkerity/bunkerweb:1.4.4
+    image: bunkerity/bunkerweb:1.4.6
     ports:
       - 80:8080
       - 443:8443

@@ -379,7 +379,7 @@ You will find more settings about reverse proxy in the [settings section](/1.4/s
   -e app1.example.com_REVERSE_PROXY_HOST=http://myapp1 \
   -e app2.example.com_REVERSE_PROXY_HOST=http://myapp2 \
   -e app3.example.com_REVERSE_PROXY_HOST=http://myapp3 \
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent :

@@ -389,7 +389,7 @@ You will find more settings about reverse proxy in the [settings section](/1.4/s
 services:

   mybunker:
-    image: bunkerity/bunkerweb:1.4.4
+    image: bunkerity/bunkerweb:1.4.6
     ports:
       - 80:8080
       - 443:8443
@@ -981,13 +981,13 @@ REAL_IP_HEADER=X-Forwarded-For
   -e "REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16" \
   -e REAL_IP_HEADER=X-Forwarded-For \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent :
 ```yaml
 mybunker:
-  image: bunkerity/bunkerweb:1.4.4
+  image: bunkerity/bunkerweb:1.4.6
   ...
   environment:
     - USE_REAL_IP=yes

@@ -1006,13 +1006,13 @@ REAL_IP_HEADER=X-Forwarded-For
   -e "REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16" \
   -e REAL_IP_HEADER=X-Forwarded-For \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent :
 ```yaml
 mybunker:
-  image: bunkerity/bunkerweb:1.4.4
+  image: bunkerity/bunkerweb:1.4.6
   ...
   environment:
     - USE_REAL_IP=yes

@@ -1031,13 +1031,13 @@ REAL_IP_HEADER=X-Forwarded-For
   -e "REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16" \
   -e REAL_IP_HEADER=X-Forwarded-For \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent (using `docker stack deploy`) :
 ```yaml
 mybunker:
-  image: bunkerity/bunkerweb:1.4.4
+  image: bunkerity/bunkerweb:1.4.6
   ...
   environment:
     - USE_REAL_IP=yes

@@ -1062,7 +1062,7 @@ REAL_IP_HEADER=X-Forwarded-For
 spec:
   containers:
     - name: bunkerweb
-      image: bunkerity/bunkerweb:1.4.4
+      image: bunkerity/bunkerweb:1.4.6
       ...
       env:
         - name: USE_REAL_IP

@@ -1146,13 +1146,13 @@ REAL_IP_HEADER=proxy_protocol
   -e "REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16" \
   -e REAL_IP_HEADER=proxy_protocol \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent :
 ```yaml
 mybunker:
-  image: bunkerity/bunkerweb:1.4.4
+  image: bunkerity/bunkerweb:1.4.6
   ...
   environment:
     - USE_REAL_IP=yes

@@ -1173,13 +1173,13 @@ REAL_IP_HEADER=proxy_protocol
   -e "REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16" \
   -e REAL_IP_HEADER=proxy_protocol \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent :
 ```yaml
 mybunker:
-  image: bunkerity/bunkerweb:1.4.4
+  image: bunkerity/bunkerweb:1.4.6
   ...
   environment:
     - USE_REAL_IP=yes

@@ -1200,13 +1200,13 @@ REAL_IP_HEADER=proxy_protocol
   -e "REAL_IP_FROM=1.2.3.0/24 100.64.0.0/16" \
   -e REAL_IP_HEADER=proxy_protocol \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent (using `docker stack deploy`) :
 ```yaml
 mybunker:
-  image: bunkerity/bunkerweb:1.4.4
+  image: bunkerity/bunkerweb:1.4.6
   ...
   environment:
     - USE_REAL_IP=yes

@@ -1232,7 +1232,7 @@ REAL_IP_HEADER=proxy_protocol
 spec:
   containers:
     - name: bunkerweb
-      image: bunkerity/bunkerweb:1.4.4
+      image: bunkerity/bunkerweb:1.4.6
       ...
       env:
         - name: USE_REAL_IP
@@ -1327,7 +1327,7 @@ Some integrations offer a more convenient way of applying configurations such as
 Here is a dummy example using a docker-compose file :
 ```yaml
 mybunker:
-  image: bunkerity/bunkerweb:1.4.4
+  image: bunkerity/bunkerweb:1.4.6
   environment:
     - |
       CUSTOM_CONF_SERVER_HTTP_hello-world=

@@ -1369,13 +1369,13 @@ Some integrations offer a more convenient way of applying configurations such as
   ...
   -v "${PWD}/bw-data:/data" \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent :
 ```yaml
 mybunker:
-  image: bunkerity/bunkerweb:1.4.4
+  image: bunkerity/bunkerweb:1.4.6
   volumes:
     - ./bw-data:/data
   ...

@@ -1436,13 +1436,13 @@ Some integrations offer a more convenient way of applying configurations such as
   ...
   -v "${PWD}/bw-data:/data" \
   ...
-  bunkerity/bunkerweb-autoconf:1.4.4
+  bunkerity/bunkerweb-autoconf:1.4.6
 ```

 Here is the docker-compose equivalent :
 ```yaml
 myautoconf:
-  image: bunkerity/bunkerweb-autoconf:1.4.4
+  image: bunkerity/bunkerweb-autoconf:1.4.6
   volumes:
     - ./bw-data:/data
   ...
@@ -1622,7 +1622,7 @@ BunkerWeb supports PHP using external or remote [PHP-FPM](https://www.php.net/ma
   -e AUTO_LETS_ENCRYPT=yes \
   -e REMOTE_PHP=myphp \
   -e REMOTE_PHP_PATH=/app \
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent :

@@ -1632,7 +1632,7 @@ BunkerWeb supports PHP using external or remote [PHP-FPM](https://www.php.net/ma
 services:

   mybunker:
-    image: bunkerity/bunkerweb:1.4.4
+    image: bunkerity/bunkerweb:1.4.6
     ports:
       - 80:8080
       - 443:8443

@@ -1674,7 +1674,7 @@ BunkerWeb supports PHP using external or remote [PHP-FPM](https://www.php.net/ma
   ...
   -v "${PWD}/myapp:/app" \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Once BunkerWeb and autoconf are ready, you will be able to create the PHP-FPM container, mount the application folder inside the container and configure it using specific labels :

@@ -1738,7 +1738,7 @@ BunkerWeb supports PHP using external or remote [PHP-FPM](https://www.php.net/ma
   ...
   -v "/shared/myapp:/app" \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Once BunkerWeb and autoconf are ready, you will be able to create the PHP-FPM service, mount the application folder inside the container and configure it using specific labels :

@@ -1984,7 +1984,7 @@ BunkerWeb supports PHP using external or remote [PHP-FPM](https://www.php.net/ma
   -e app2.example.com_REMOTE_PHP_PATH=/app \
   -e app3.example.com_REMOTE_PHP=myphp3 \
   -e app3.example.com_REMOTE_PHP_PATH=/app \
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Here is the docker-compose equivalent :

@@ -1994,7 +1994,7 @@ BunkerWeb supports PHP using external or remote [PHP-FPM](https://www.php.net/ma
 services:

   mybunker:
-    image: bunkerity/bunkerweb:1.4.4
+    image: bunkerity/bunkerweb:1.4.6
     ports:
       - 80:8080
       - 443:8443

@@ -2055,7 +2055,7 @@ BunkerWeb supports PHP using external or remote [PHP-FPM](https://www.php.net/ma
   ...
   -v "${PWD}/myapps:/apps" \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Once BunkerWeb and autoconf are ready, you will be able to create the PHP-FPM containers, mount the right application folder inside each container and configure them using specific labels :

@@ -2179,7 +2179,7 @@ BunkerWeb supports PHP using external or remote [PHP-FPM](https://www.php.net/ma
   ...
   -v "/shared/myapps:/apps" \
   ...
-  bunkerity/bunkerweb:1.4.4
+  bunkerity/bunkerweb:1.4.6
 ```

 Once BunkerWeb and autoconf are ready, you will be able to create the PHP-FPM service, mount the application folder inside the container and configure it using specific labels :
@@ -76,7 +76,7 @@ Because the web UI is a web application, the recommended installation procedure
   -e "bwadm.example.com_REVERSE_PROXY_HEADERS=X-Script-Name /changeme" \
   -e bwadm.example.com_REVERSE_PROXY_INTERCEPT_ERRORS=no \
   -l bunkerweb.UI \
-  bunkerity/bunkerweb:1.4.4 && \
+  bunkerity/bunkerweb:1.4.6 && \
 docker network connect bw-ui mybunker
 ```

@@ -115,7 +115,7 @@ Because the web UI is a web application, the recommended installation procedure
   -e ADMIN_USERNAME=admin \
   -e ADMIN_PASSWORD=changeme \
   -e ABSOLUTE_URI=http(s)://bwadm.example.com/changeme/ \
-  bunkerity/bunkerweb-ui:1.4.4 && \
+  bunkerity/bunkerweb-ui:1.4.6 && \
 docker network connect bw-docker myui
 ```

@@ -131,7 +131,7 @@ Because the web UI is a web application, the recommended installation procedure
 services:

   mybunker:
-    image: bunkerity/bunkerweb:1.4.4
+    image: bunkerity/bunkerweb:1.4.6
     networks:
       - bw-services
       - bw-ui

@@ -154,7 +154,7 @@ Because the web UI is a web application, the recommended installation procedure
       - "bunkerweb.UI"

   myui:
-    image: bunkerity/bunkerweb-ui:1.4.4
+    image: bunkerity/bunkerweb-ui:1.4.6
     depends_on:
       - mydocker
     networks:
@@ -1,6 +1,7 @@
 {
   "name": "autoconf-configs",
   "kinds": ["autoconf"],
+  "delay": 60,
   "timeout": 60,
   "tests": [
     {
@@ -13,7 +13,7 @@ else
   echo "❌ No PHP user found"
   exit 1
 fi
-curl https://www.drupal.org/download-latest/tar.gz -Lo /tmp/drupal.tar.gz
+curl https://ftp.drupal.org/files/projects/drupal-9.5.3.tar.gz -Lo /tmp/drupal.tar.gz
 tar -xzf /tmp/drupal.tar.gz -C /tmp
 current_dir="$(pwd)"
 cd /tmp/drupal-*
@@ -2,7 +2,7 @@
   "name": "ghost",
   "kinds": ["docker", "autoconf", "swarm", "kubernetes"],
   "timeout": 60,
-  "delay": 180,
+  "delay": 240,
   "tests": [
     {
       "type": "string",
@@ -7,7 +7,7 @@ metadata:
     bunkerweb.io/www.example.com_MAX_CLIENT_SIZE: "10G"
     bunkerweb.io/www.example.com_ALLOWED_METHODS: "GET|POST|HEAD|COPY|DELETE|LOCK|MKCOL|MOVE|PROPFIND|PROPPATCH|PUT|UNLOCK|OPTIONS"
     bunkerweb.io/www.example.com_X_FRAME_OPTIONS: "SAMEORIGIN"
-    bunkerweb.io/www.example.com_BAD_BEHAVIOR_STATUS_CODES: "400 401.4.4 405 444"
+    bunkerweb.io/www.example.com_BAD_BEHAVIOR_STATUS_CODES: "400 401 405 444"
    bunkerweb.io/www.example.com_LIMIT_REQ_URL_1: "/apps"
    bunkerweb.io/www.example.com_LIMIT_REQ_RATE_1: "5r/s"
    bunkerweb.io/www.example.com_LIMIT_REQ_URL_2: "/apps/text/session/sync"
@@ -10,6 +10,15 @@ server {
 	listen 0.0.0.0:{{ HTTP_PORT }} default_server {% if USE_PROXY_PROTOCOL == "yes" %}proxy_protocol{% endif %};
 {% endif %}

+	# HTTPS listen
+	{% set os = import("os") %}
+	{% if os.path.isfile("/var/cache/bunkerweb/default-server-cert/cert.pem") +%}
+	{% if has_variable(all, "USE_CUSTOM_HTTPS", "yes") or has_variable(all, "AUTO_LETS_ENCRYPT", "yes") or has_variable(all, "GENERATE_SELF_SIGNED_SSL", "yes") +%}
+	listen 0.0.0.0:{{ HTTPS_PORT }} ssl {% if HTTP2 == "yes" %}http2{% endif %} default_server {% if USE_PROXY_PROTOCOL == "yes" %}proxy_protocol{% endif %};
+	ssl_certificate /var/cache/bunkerweb/selfsigned/{{ SERVER_NAME.split(" ")[0] }}.pem;
+	ssl_certificate_key /var/cache/bunkerweb/selfsigned/{{ SERVER_NAME.split(" ")[0] }}.key;
+	{% endif %}
+	{% endif %}

 {% if IS_LOADING == "yes" +%}
 	root /usr/share/bunkerweb/loading;
@@ -20,7 +20,7 @@
       "help": "List of HTTP error code intercepted by Bunkerweb",
       "id": "intercepted-error-codes",
       "label": "Intercepted error codes",
-      "regex": "^.*$",
+      "regex": "^( *([1-5]\\d{2})(?!.*\\2) *)+$",
       "type": "text"
     }
   }
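The tightened `INTERCEPTED_ERROR_CODES` regex above accepts a space-separated list of HTTP status codes in the 100-599 range and uses a backreference inside a negative lookahead to reject duplicates. A quick sketch of its behaviour (Python `re`, for illustration only):

```python
import re

# The JSON value "^( *([1-5]\\d{2})(?!.*\\2) *)+$" unescapes to this pattern:
# one or more 3-digit codes whose first digit is 1-5, space-separated;
# the lookahead (?!.*\2) fails if the just-matched code appears again later.
pattern = re.compile(r"^( *([1-5]\d{2})(?!.*\2) *)+$")

print(bool(pattern.match("400 401 405 444")))  # True: valid, unique codes
print(bool(pattern.match("400 400")))          # False: duplicate code
print(bool(pattern.match("400 999")))          # False: 999 is out of range
```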
default-server-cert.py (new file)

@@ -0,0 +1,78 @@
+#!/usr/bin/python3
+
+from os import getenv, makedirs
+from os.path import isfile
+from subprocess import DEVNULL, STDOUT, run
+from sys import exit as sys_exit, path as sys_path
+from traceback import format_exc
+
+sys_path.extend(
+    (
+        "/usr/share/bunkerweb/deps/python",
+        "/usr/share/bunkerweb/utils",
+    )
+)
+
+from logger import setup_logger
+
+logger = setup_logger("DEFAULT-SERVER-CERT", getenv("LOG_LEVEL", "INFO"))
+status = 0
+
+try:
+    # Check if we need to generate a self-signed default cert for non-SNI "clients"
+    need_default_cert = False
+    if getenv("MULTISITE", "no") == "yes":
+        for first_server in getenv("SERVER_NAME", "").split(" "):
+            for check_var in [
+                "USE_CUSTOM_HTTPS",
+                "AUTO_LETS_ENCRYPT",
+                "GENERATE_SELF_SIGNED_SSL",
+            ]:
+                if (
+                    getenv(f"{first_server}_{check_var}", getenv(check_var, "no"))
+                    == "yes"
+                ):
+                    need_default_cert = True
+                    break
+            if need_default_cert:
+                break
+    elif getenv("DISABLE_DEFAULT_SERVER", "no") == "yes" and (
+        getenv("USE_CUSTOM_HTTPS", "no") == "yes"
+        or getenv("AUTO_LETS_ENCRYPT", "no") == "yes"
+        or getenv("GENERATE_SELF_SIGNED_SSL", "no") == "yes"
+    ):
+        need_default_cert = True
+
+    # Generate the self-signed certificate
+    if need_default_cert:
+        makedirs("/var/cache/bunkerweb/default-server-cert", exist_ok=True)
+        if not isfile("/var/cache/bunkerweb/default-server-cert/cert.pem"):
+            cmd = "openssl req -nodes -x509 -newkey rsa:4096 -keyout /var/cache/bunkerweb/default-server-cert/cert.key -out /var/cache/bunkerweb/default-server-cert/cert.pem -days 3650".split(
+                " "
+            )
+            cmd.extend(["-subj", "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/"])
+            proc = run(cmd, stdin=DEVNULL, stderr=STDOUT)
+            if proc.returncode != 0:
+                logger.error(
+                    "Self-signed certificate generation failed for default server",
+                )
+                status = 2
+            else:
+                logger.info(
+                    "Successfully generated self-signed certificate for default server",
+                )
+        else:
+            logger.info(
+                "Skipping generation of self-signed certificate for default server (already present)",
+            )
+    else:
+        logger.info(
+            "Skipping generation of self-signed certificate for default server (not needed)",
+        )
+
+except:
+    status = 2
+    logger.error(f"Exception while running default-server-cert.py :\n{format_exc()}")
+
+sys_exit(status)
@ -159,5 +159,13 @@
        "type": "select",
        "select": ["403", "444"]
      }
    }
  },
  "jobs": [
    {
      "name": "default-server-cert",
      "file": "default-server-cert.py",
      "every": "once",
      "reload": false
    }
  ]
}
@ -6,8 +6,8 @@ export PYTHONPATH=/usr/share/bunkerweb/deps/python
# Create the ui.env file if it doesn't exist
if [ ! -f /etc/bunkerweb/ui.env ]; then
    echo "ADMIN_USERNAME=admin" > /etc/bunkerweb/ui.env
-    echo "ADMIN_PASSWORD=PasswordChanged" >> /etc/bunkerweb/ui.env
-    echo "ABSOLUTE_URI=" >> /etc/bunkerweb/ui.env
+    echo "ADMIN_PASSWORD=changeme" >> /etc/bunkerweb/ui.env
+    echo "ABSOLUTE_URI=http://mydomain.ext/mypath/" >> /etc/bunkerweb/ui.env
fi

# Function to start the UI

@ -18,7 +18,7 @@ start() {
    fi
-    source /etc/bunkerweb/ui.env
+    export $(cat /etc/bunkerweb/ui.env)
-    python3 -m gunicorn --bind=127.0.0.1:7000 --chdir /usr/share/bunkerweb/ui/ --workers=1 --threads=2 main:app &
+    python3 -m gunicorn --graceful-timeout=0 --bind=127.0.0.1:7000 --chdir /usr/share/bunkerweb/ui/ --workers=1 --threads=2 main:app &
    echo $! > /var/tmp/bunkerweb/ui.pid
}
@ -48,7 +48,11 @@ class AutoconfTest(Test):
            "10.20.1.1:5000/bw-autoconf-tests:latest",
        )
        Test.replace_in_file(compose, r"\./bw\-data:/", "/tmp/bw-data:/")
-        proc = run("docker-compose pull", cwd="/tmp/autoconf", shell=True)
+        proc = run(
+            "docker-compose pull --ignore-pull-failures",
+            cwd="/tmp/autoconf",
+            shell=True,
+        )
        if proc.returncode != 0:
            raise (Exception("docker-compose pull failed (autoconf stack)"))
        proc = run("docker-compose up -d", cwd="/tmp/autoconf", shell=True)

@ -123,7 +127,11 @@ class AutoconfTest(Test):
            )
            if proc.returncode != 0:
                raise (Exception("cp bw-data failed"))
-            proc = run("docker-compose -f autoconf.yml pull", shell=True, cwd=test)
+            proc = run(
+                "docker-compose -f autoconf.yml pull --ignore-pull-failures",
+                shell=True,
+                cwd=test,
+            )
            if proc.returncode != 0:
                raise (Exception("docker-compose pull failed"))
            proc = run("docker-compose -f autoconf.yml up -d", shell=True, cwd=test)

@ -153,6 +161,10 @@ class AutoconfTest(Test):

    def _debug_fail(self):
        autoconf = "/tmp/autoconf"
-        run("docker-compose logs", shell=True, cwd=autoconf)
+        proc = run("docker-compose logs", shell=True, cwd=autoconf)
+        if proc.returncode != 0:
+            raise (Exception("docker-compose logs failed"))
        test = f"/tmp/tests/{self._name}"
-        run("docker-compose -f autoconf.yml logs", shell=True, cwd=test)
+        proc = run("docker-compose -f autoconf.yml logs", shell=True, cwd=test)
+        if proc.returncode != 0:
+            raise (Exception("docker-compose -f autoconf.yml logs failed"))
@ -51,6 +51,11 @@ class DockerTest(Test):
        )
-        Test.replace_in_file(compose, r"\./bw\-data:/", "/tmp/bw-data:/")
+        Test.replace_in_file(compose, r"\- bw_data:/", "- /tmp/bw-data:/")
+        Test.replace_in_file(
+            compose,
+            r"AUTO_LETS_ENCRYPT=yes",
+            "AUTO_LETS_ENCRYPT=yes\n - USE_LETS_ENCRYPT_STAGING=yes",
+        )
        for ex_domain, test_domain in self._domains.items():
            Test.replace_in_files(test, ex_domain, test_domain)
            Test.rename(test, ex_domain, test_domain)

@ -67,7 +72,9 @@ class DockerTest(Test):
            )
            if proc.returncode != 0:
                raise (Exception("cp bw-data failed"))
-            proc = run("docker-compose pull", shell=True, cwd=test)
+            proc = run(
+                "docker-compose pull --ignore-pull-failures", shell=True, cwd=test
+            )
            if proc.returncode != 0:
                raise (Exception("docker-compose pull failed"))
            proc = run("docker-compose up -d", shell=True, cwd=test)
@ -1,6 +1,6 @@
from Test import Test
from os.path import isdir, isfile
-from os import getenv
+from os import getenv, mkdir
from shutil import copytree, rmtree, copy
from traceback import format_exc
from subprocess import run

@ -25,35 +25,21 @@ class KubernetesTest(Test):
        try:
            if not Test.init():
                return False
-            proc = run("sudo chown -R root:root /tmp/bw-data", shell=True)
-            if proc.returncode != 0:
-                raise (Exception("chown failed (k8s stack)"))
-            if isdir("/tmp/kubernetes"):
-                rmtree("/tmp/kubernetes")
-            copytree("./integrations/kubernetes", "/tmp/kubernetes")
-            copy("./tests/utils/k8s.yml", "/tmp/kubernetes")
+            mkdir("/tmp/kubernetes")
+            copy("./tests/utils/bunkerweb.yml", "/tmp/kubernetes")
            deploy = "/tmp/kubernetes/bunkerweb.yml"
            Test.replace_in_file(
-                deploy, r"bunkerity/bunkerweb:.*$", "10.20.1.1:5000/bw-tests:latest"
+                deploy,
+                r"bunkerity/bunkerweb:.*$",
+                f"{getenv('PRIVATE_REGISTRY')}/infra/bunkerweb-tests-amd64:{getenv('IMAGE_TAG')}",
            )
            Test.replace_in_file(
                deploy,
                r"bunkerity/bunkerweb-autoconf:.*$",
-                "10.20.1.1:5000/bw-autoconf-tests:latest",
+                f"{getenv('PRIVATE_REGISTRY')}/infra/bunkerweb-autoconf-tests-amd64:{getenv('IMAGE_TAG')}",
            )
            Test.replace_in_file(deploy, r"ifNotPresent", "Always")
-            proc = run(
-                "sudo kubectl apply -f k8s.yml", cwd="/tmp/kubernetes", shell=True
-            )
-            if proc.returncode != 0:
-                raise (Exception("kubectl apply k8s failed (k8s stack)"))
-            proc = run(
-                "sudo kubectl apply -f rbac.yml", cwd="/tmp/kubernetes", shell=True
-            )
-            if proc.returncode != 0:
-                raise (Exception("kubectl apply rbac failed (k8s stack)"))
            proc = run(
-                "sudo kubectl apply -f bunkerweb.yml", cwd="/tmp/kubernetes", shell=True
+                "kubectl apply -f bunkerweb.yml", cwd="/tmp/kubernetes", shell=True
            )
            if proc.returncode != 0:
                raise (Exception("kubectl apply bunkerweb failed (k8s stack)"))

@ -61,7 +47,7 @@ class KubernetesTest(Test):
            i = 0
            while i < 30:
                proc = run(
-                    "sudo kubectl get pods | grep bunkerweb | grep -v Running",
+                    "kubectl get pods | grep bunkerweb | grep -v Running",
                    shell=True,
                    capture_output=True,
                )

@ -87,20 +73,10 @@ class KubernetesTest(Test):
        if not Test.end():
            return False
        proc = run(
-            "sudo kubectl delete -f bunkerweb.yml",
+            "kubectl delete -f bunkerweb.yml",
            cwd="/tmp/kubernetes",
            shell=True,
        )
        if proc.returncode != 0:
            ret = False
-        proc = run(
-            "sudo kubectl delete -f rbac.yml", cwd="/tmp/kubernetes", shell=True
-        )
-        if proc.returncode != 0:
-            ret = False
-        proc = run(
-            "sudo kubectl delete -f k8s.yml", cwd="/tmp/kubernetes", shell=True
-        )
-        if proc.returncode != 0:
-            ret = False
        rmtree("/tmp/kubernetes")

@ -121,10 +97,10 @@ class KubernetesTest(Test):
            Test.replace_in_files(test, "example.com", getenv("ROOT_DOMAIN"))
            setup = f"{test}/setup-kubernetes.sh"
            if isfile(setup):
-                proc = run("sudo ./setup-kubernetes.sh", cwd=test, shell=True)
+                proc = run("./setup-kubernetes.sh", cwd=test, shell=True)
                if proc.returncode != 0:
                    raise (Exception("setup-kubernetes failed"))
-            proc = run("sudo kubectl apply -f kubernetes.yml", shell=True, cwd=test)
+            proc = run("kubectl apply -f kubernetes.yml", shell=True, cwd=test)
            if proc.returncode != 0:
                raise (Exception("kubectl apply failed"))
        except:

@ -140,10 +116,10 @@ class KubernetesTest(Test):
        test = f"/tmp/tests/{self._name}"
        cleanup = f"{test}/cleanup-kubernetes.sh"
        if isfile(cleanup):
-            proc = run("sudo ./cleanup-kubernetes.sh", cwd=test, shell=True)
+            proc = run("./cleanup-kubernetes.sh", cwd=test, shell=True)
            if proc.returncode != 0:
                raise (Exception("cleanup-kubernetes failed"))
-        proc = run("sudo kubectl delete -f kubernetes.yml", shell=True, cwd=test)
+        proc = run("kubectl delete -f kubernetes.yml", shell=True, cwd=test)
        if proc.returncode != 0:
            raise (Exception("kubectl delete failed"))
        super()._cleanup_test()

@ -156,9 +132,9 @@ class KubernetesTest(Test):

    def _debug_fail(self):
        proc = run(
-            'sudo kubectl get pods --no-headers -o custom-columns=":metadata.name"',
+            'kubectl get pods --no-headers -o custom-columns=":metadata.name"',
            shell=True,
            capture_output=True,
        )
        for pod in proc.stdout.decode().splitlines():
-            run(f"sudo kubectl logs {pod}", shell=True)
+            run(f"kubectl logs {pod}", shell=True)
@ -1,7 +1,6 @@
from Test import Test
-from os.path import isdir, isfile
-from os import getenv, mkdir, chmod
-from shutil import rmtree
+from os.path import isfile
+from os import getenv
from traceback import format_exc
from subprocess import run
from time import sleep

@ -28,15 +27,7 @@ class LinuxTest(Test):
        try:
            if not Test.init():
                return False
-            # TODO : find the nginx uid/gid on Docker images
-            proc = run("sudo chown -R root:root /tmp/bw-data", shell=True)
-            if proc.returncode != 0:
-                raise Exception("chown failed (autoconf stack)")
-            if isdir("/tmp/linux"):
-                rmtree("/tmp/linux")
-            mkdir("/tmp/linux")
-            chmod("/tmp/linux", 0o0777)
-            cmd = f"docker run -p 80:80 -p 443:443 --rm --name linux-{distro} -d --tmpfs /tmp --tmpfs /run --tmpfs /run/lock -v /sys/fs/cgroup:/sys/fs/cgroup:ro bw-{distro}"
+            cmd = f"docker run -p 80:80 -p 443:443 --rm --name linux-{distro} -d --tmpfs /tmp --tmpfs /run --tmpfs /run/lock -v /sys/fs/cgroup:/sys/fs/cgroup:rw --cgroupns=host --tty local/bw-{distro}:latest"
            proc = run(cmd, shell=True)
            if proc.returncode != 0:
                raise Exception("docker run failed (linux stack)")

@ -50,20 +41,6 @@ class LinuxTest(Test):
            proc = LinuxTest.docker_exec(distro, "systemctl start bunkerweb")
            if proc.returncode != 0:
                raise Exception("docker exec systemctl start failed (linux stack)")
-            cp_dirs = {
-                "/tmp/bw-data/letsencrypt": "/etc/letsencrypt",
-                "/tmp/bw-data/cache": "/var/cache/bunkerweb",
-            }
-            for src, dst in cp_dirs.items():
-                proc = LinuxTest.docker_cp(distro, src, dst)
-                if proc.returncode != 0:
-                    raise Exception(f"docker cp failed for {src} (linux stack)")
-                proc = LinuxTest.docker_exec(distro, f"chown -R nginx:nginx {dst}/*")
-                if proc.returncode != 0:
-                    raise Exception(
-                        f"docker exec failed for directory {src} (linux stack)"
-                    )
-
            if distro in ("ubuntu", "debian"):
                LinuxTest.docker_exec(
                    distro,

@ -128,22 +105,28 @@ class LinuxTest(Test):
            Test.replace_in_files(test, ex_domain, test_domain)
            Test.rename(test, ex_domain, test_domain)
        Test.replace_in_files(test, "example.com", getenv("ROOT_DOMAIN"))
-        proc = LinuxTest.docker_cp(self.__distro, test, f"/opt/{self._name}")
+        proc = self.docker_cp(self.__distro, test, f"/opt/{self._name}")
        if proc.returncode != 0:
            raise Exception("docker cp failed (test)")
        setup = test + "/setup-linux.sh"
        if isfile(setup):
-            proc = LinuxTest.docker_exec(
+            proc = self.docker_exec(
                self.__distro, f"cd /opt/{self._name} && ./setup-linux.sh"
            )
            if proc.returncode != 0:
                raise Exception("docker exec setup failed (test)")
-        proc = LinuxTest.docker_exec(
+        proc = self.docker_exec(
            self.__distro, f"cp /opt/{self._name}/variables.env /etc/bunkerweb/"
        )
        if proc.returncode != 0:
            raise Exception("docker exec cp variables.env failed (test)")
-        proc = LinuxTest.docker_exec(
+        proc = self.docker_exec(
            self.__distro,
            "echo '' >> /opt/bunkerweb/variables.env ; echo 'USE_LETS_ENCRYPT_STAGING=yes' >> /opt/bunkerweb/variables.env",
        )
        if proc.returncode != 0:
            raise (Exception("docker exec append variables.env failed (test)"))
+        proc = self.docker_exec(
+            self.__distro, "systemctl stop bunkerweb ; systemctl start bunkerweb"
+        )
+        if proc.returncode != 0:

@ -159,7 +142,7 @@ class LinuxTest(Test):

    def _cleanup_test(self):
        try:
-            proc = LinuxTest.docker_exec(
+            proc = self.docker_exec(
                self.__distro,
                f"cd /opt/{self._name} ; ./cleanup-linux.sh ; rm -rf /etc/bunkerweb/configs/* ; rm -rf /etc/bunkerweb/plugins/*",
            )

@ -174,16 +157,18 @@ class LinuxTest(Test):
        return True

    def _debug_fail(self):
-        LinuxTest.docker_exec(
+        self.docker_exec(
            self.__distro,
            "cat /var/log/nginx/access.log ; cat /var/log/nginx/error.log ; journalctl -u bunkerweb --no-pager",
        )

    @staticmethod
    def docker_exec(distro, cmd_linux):
        return run(
            f'docker exec linux-{distro} /bin/bash -c "{cmd_linux}"',
            shell=True,
        )

    @staticmethod
    def docker_cp(distro, src, dst):
        return run(f"sudo docker cp {src} linux-{distro}:{dst}", shell=True)
@ -33,12 +33,14 @@ class SwarmTest(Test):
        copytree("./integrations/swarm", "/tmp/swarm")
        compose = "/tmp/swarm/stack.yml"
        Test.replace_in_file(
-            compose, r"bunkerity/bunkerweb:.*$", "10.20.1.1:5000/bw-tests:latest"
+            compose,
+            r"bunkerity/bunkerweb:.*$",
+            "192.168.42.100:5000/bw-tests:latest",
        )
        Test.replace_in_file(
            compose,
            r"bunkerity/bunkerweb-autoconf:.*$",
-            "10.20.1.1:5000/bw-autoconf-tests:latest",
+            "192.168.42.100:5000/bw-autoconf-tests:latest",
        )
        Test.replace_in_file(compose, r"bw\-data:/", "/tmp/bw-data:/")
        proc = run(

@ -50,19 +52,28 @@ class SwarmTest(Test):
            raise (Exception("docker stack deploy failed (swarm stack)"))
        i = 0
        healthy = False
-        while i < 45:
+        while i < 90:
            proc = run(
                'docker stack ps --no-trunc --format "{{ .CurrentState }}" bunkerweb | grep -v "Running"',
                cwd="/tmp/swarm",
                shell=True,
                capture_output=True,
            )
-            if "" == proc.stdout.decode():
+            if not proc.stdout.decode():
                healthy = True
                break
            sleep(1)
            i += 1
        if not healthy:
+            proc = run(
+                "docker service logs bunkerweb_mybunker ; docker service logs bunkerweb_myautoconf",
+                cwd="/tmp/swarm",
+                shell=True,
+                capture_output=True,
+            )
+            logger = setup_logger("Swarm_test", getenv("LOGLEVEL", "INFO"))
+            logger.error(f"stdout logs = {proc.stdout.decode()}")
+            logger.error(f"stderr logs = {proc.stderr.decode()}")
            raise (Exception("swarm stack is not healthy"))
        sleep(60)
    except:
@ -19,6 +19,7 @@ class Test(ABC):
        self.__tests = tests
+        self._no_copy_container = no_copy_container
        self.__delay = delay
        self._domains = {}
        self.__logger = setup_logger("Test", getenv("LOG_LEVEL", "INFO"))
        self.__logger.info(
            f"instantiated with {len(tests)} tests and timeout of {timeout}s for {self._name}",

@ -143,11 +144,13 @@ class Test(ABC):
                f"Can't replace file {path}"
            )

    @staticmethod
    def replace_in_files(path, old, new):
-        for root, dirs, files in walk(path):
+        for root, _, files in walk(path):
            for name in files:
                Test.replace_in_file(join(root, name), old, new)

    @staticmethod
    def rename(path, old, new):
        for root, dirs, files in walk(path):
            for name in dirs + files:
@ -0,0 +1,41 @@
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Wait 300 seconds for port 22 to become open and contain "OpenSSH"
      wait_for:
        port: 22
        host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
        search_regex: OpenSSH
        delay: 10
      connection: local

- hosts: all
  name: Provisioning tasks
  roles:
    - common
    - docker

- hosts: all
  name: Install GH runner
  vars:
    - runner_user: user
    - access_token: "{{ lookup('env', 'GITHUB_TOKEN') }}"
    - runner_extra_config_args: "--ephemeral"
    - runner_name: "bw-autoconf-{{ ansible_date_time.iso8601_micro | to_uuid }}"
    - github_account: "{{ lookup('env', 'GITHUB_ACCOUNT') }}"
    - github_owner: bunkerity
    - github_repo: bunkerweb
    - runner_labels:
        - bw-autoconf
  roles:
    - monolithprojects.github_actions_runner

- hosts: all
  name: Restart GH runner (dirty hack because sometimes the runner is in inactive state)
  tasks:
    - name: Wait 60 seconds just in case
      ansible.builtin.pause:
        seconds: 60
    - name: Restart GH runner
      shell: systemctl restart actions.runner.*

@ -0,0 +1,41 @@
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Wait 300 seconds for port 22 to become open and contain "OpenSSH"
      wait_for:
        port: 22
        host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
        search_regex: OpenSSH
        delay: 10
      connection: local

- hosts: all
  name: Provisioning tasks
  roles:
    - common
    - docker

- hosts: all
  name: Install GH runner
  vars:
    - runner_user: user
    - access_token: "{{ lookup('env', 'GITHUB_TOKEN') }}"
    - runner_extra_config_args: "--ephemeral"
    - runner_name: "bw-docker-{{ ansible_date_time.iso8601_micro | to_uuid }}"
    - github_account: "{{ lookup('env', 'GITHUB_ACCOUNT') }}"
    - github_owner: bunkerity
    - github_repo: bunkerweb
    - runner_labels:
        - bw-docker
  roles:
    - monolithprojects.github_actions_runner

- hosts: all
  name: Restart GH runner (dirty hack because sometimes the runner is in inactive state)
  tasks:
    - name: Wait 60 seconds just in case
      ansible.builtin.pause:
        seconds: 60
    - name: Restart GH runner
      shell: systemctl restart actions.runner.*

@ -0,0 +1,41 @@
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Wait 300 seconds for port 22 to become open and contain "OpenSSH"
      wait_for:
        port: 22
        host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
        search_regex: OpenSSH
        delay: 10
      connection: local

- hosts: all
  name: Provisioning tasks
  roles:
    - common
    - docker

- hosts: all
  name: Install GH runner
  vars:
    - runner_user: user
    - access_token: "{{ lookup('env', 'GITHUB_TOKEN') }}"
    - runner_extra_config_args: "--ephemeral"
    - runner_name: "bw-linux-{{ ansible_date_time.iso8601_micro | to_uuid }}"
    - github_account: "{{ lookup('env', 'GITHUB_ACCOUNT') }}"
    - github_owner: bunkerity
    - github_repo: bunkerweb
    - runner_labels:
        - bw-linux
  roles:
    - monolithprojects.github_actions_runner

- hosts: all
  name: Restart GH runner (dirty hack because sometimes the runner is in inactive state)
  tasks:
    - name: Wait 60 seconds just in case
      ansible.builtin.pause:
        seconds: 60
    - name: Restart GH runner
      shell: systemctl restart actions.runner.*
@ -0,0 +1,2 @@
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

@ -0,0 +1,3 @@
Unattended-Upgrade::Origins-Pattern {
    "origin=Debian,codename=${distro_codename},label=Debian-Security";
};

@ -0,0 +1 @@
network: {config: disabled}

@ -0,0 +1,6 @@
[sshd]
enabled = true
port = 22
findtime = 10m
bantime = 24h
maxretry = 3

@ -0,0 +1,5 @@
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.ens3.disable_ipv6 = 1
net.ipv6.conf.ens4.disable_ipv6 = 1

@ -0,0 +1,3 @@
deb http://deb.debian.org/debian bullseye main
deb http://deb.debian.org/debian-security/ bullseye-security main
deb http://deb.debian.org/debian bullseye-updates main

@ -0,0 +1,8 @@
---
- name: Restart networking
  service:
    name: networking
    state: restarted

- name: Reload sysctl
  shell: sysctl -p -f /etc/sysctl.d/70-disable-ipv6.conf
@ -0,0 +1,27 @@
---
- name: Update /etc/apt/sources.list
  copy:
    src: sources.list
    dest: /etc/apt/sources.list
    owner: root
    group: root
    mode: '0644'

- name: Update APT cache and install dependencies
  shell: apt update && apt autoclean && apt install -y unattended-upgrades python3-apt rename python3-pip

- name: copy 50unattended-upgrades
  copy:
    src: 50unattended-upgrades
    dest: /etc/apt/apt.conf.d/50unattended-upgrades
    owner: root
    group: root
    mode: '0644'

- name: copy 20auto-upgrades
  copy:
    src: 20auto-upgrades
    dest: /etc/apt/apt.conf.d/20auto-upgrades
    owner: root
    group: root
    mode: '0644'

@ -0,0 +1,13 @@
---
- name: Install fail2ban
  apt:
    name: fail2ban
    state: present

- name: Update /etc/fail2ban/jail.d/defaults-debian.conf
  copy:
    src: defaults-debian.conf
    dest: /etc/fail2ban/jail.d/defaults-debian.conf
    owner: root
    group: root
    mode: '0644'

@ -0,0 +1,4 @@
---
- name: Set the hostname
  hostname:
    name: "{{ inventory_hostname }}"

@ -0,0 +1,5 @@
---
- include_tasks: network.yml
- include_tasks: apt.yml
- include_tasks: hostname.yml
- include_tasks: fail2ban.yml
@ -0,0 +1,29 @@
---
- name: Update /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
  copy:
    src: 99-disable-network-config.cfg
    dest: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
    owner: root
    group: root
    mode: '0644'

- name: Update /etc/network/interfaces.d/50-cloud-init
  template:
    src: 50-cloud-init
    dest: /etc/network/interfaces.d/50-cloud-init
    owner: root
    group: root
    mode: '0644'
  notify:
    - Restart networking

- name: Update /etc/sysctl.d/70-disable-ipv6.conf
  copy:
    src: ipv6.conf
    dest: /etc/sysctl.d/70-disable-ipv6.conf
    owner: root
    group: root
    mode: '0644'
  notify:
    - Reload sysctl

@ -0,0 +1,13 @@
auto lo
iface lo inet loopback
dns-nameservers 213.186.33.99 0.0.0.0

auto ens3
iface ens3 inet dhcp
    accept_ra 0
    mtu 1500

auto ens3:0
iface ens3:0 inet static
    address {{ failover_ip }}
    netmask 255.255.255.255

@ -0,0 +1 @@
deb [arch=amd64] https://download.docker.com/linux/debian bullseye stable
@ -0,0 +1,38 @@
---
- name: Install docker dependencies
  apt:
    name:
      - ca-certificates
      - gnupg
    update_cache: yes
    state: present

- name: Update /etc/apt/sources.list.d/docker.list
  copy:
    src: docker.list
    dest: /etc/apt/sources.list.d/docker.list
    owner: root
    group: root
    mode: '0644'

- name: Trust docker key
  apt_key:
    url: https://download.docker.com/linux/debian/gpg
    state: present

- name: Install docker
  apt:
    name:
      - docker-ce
      - docker-ce-cli
    update_cache: yes
    state: present

- name: Install /usr/local/bin/docker-compose
  shell: curl -L https://github.com/docker/compose/releases/download/v2.12.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose

- name: Add debian user to docker group
  user:
    name: debian
    groups: docker
    append: yes

@ -0,0 +1,11 @@
---
- name: Install ruby
  apt:
    name:
      - ruby-full
    state: present

- name: Install package_cloud package
  community.general.gem:
    name: package_cloud
    state: present

@ -0,0 +1,5 @@
---
- name: Restart networking
  service:
    name: networking
    state: restarted

@ -0,0 +1,2 @@
---
- include_tasks: network.yml
@ -0,0 +1,10 @@
---
- name: Update /etc/network/interfaces.d/ens4
  template:
    src: ens4
    dest: /etc/network/interfaces.d/ens4
    owner: root
    group: root
    mode: '0644'
  notify:
    - Restart networking

@ -0,0 +1,5 @@
auto ens4
allow-hotplug ens4
iface ens4 inet static
    address {{ local_ip }}/24
    mtu 9000

@ -0,0 +1,64 @@
---
- name: Install pip
  apt:
    name:
      - python3
      - python3-pip
      - virtualenv
      - python3-setuptools
      - python
      - python-setuptools

- name: Upgrade pip3
  pip:
    name: pip
    state: latest
    executable: pip3

- name: Install dockerpy for py3
  pip:
    name: docker[tls]
    state: forcereinstall
    executable: pip3

- name: Init Docker Swarm
  community.general.docker_swarm:
    advertise_addr: "{{ local_ip }}"
    listen_addr: "{{ local_ip }}"
    ssl_version: "1.3"
    validate_certs: yes
    state: present
  register: result
  when: inventory_hostname == groups['managers'][0]

- name: Get join-token for manager nodes
  set_fact:
    join_token_manager: "{{ hostvars[groups['managers'][0]].result.swarm_facts.JoinTokens.Manager }}"

- name: Get join-token for worker nodes
  set_fact:
    join_token_worker: "{{ hostvars[groups['managers'][0]].result.swarm_facts.JoinTokens.Worker }}"

- name: Join Swarm as managers
  community.general.docker_swarm:
    advertise_addr: "{{ local_ip }}"
    listen_addr: "{{ local_ip }}"
    ssl_version: "1.3"
    validate_certs: yes
    state: join
    join_token: "{{ join_token_manager }}"
    remote_addrs: ["{{ hostvars[groups['managers'][0]].local_ip }}:2377"]
  when:
    - inventory_hostname in groups['managers']
    - inventory_hostname != groups['managers'][0]

- name: Join Swarm as workers
  community.general.docker_swarm:
    advertise_addr: "{{ local_ip }}"
    listen_addr: "{{ local_ip }}"
    ssl_version: 1.3
    validate_certs: yes
    state: join
    join_token: "{{ join_token_worker }}"
    remote_addrs: ["{{ hostvars[groups['managers'][0]].local_ip }}:2377"]
  when: inventory_hostname in groups['workers']
@ -0,0 +1 @@
network: {config: disabled}

@ -0,0 +1,5 @@
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.ens3.disable_ipv6 = 1
net.ipv6.conf.ens4.disable_ipv6 = 1

@ -0,0 +1,3 @@
deb http://deb.debian.org/debian bullseye main
deb http://deb.debian.org/debian-security/ bullseye-security main
deb http://deb.debian.org/debian bullseye-updates main

@ -0,0 +1,8 @@
---
- name: Restart networking
  service:
    name: networking
    state: restarted

- name: Reload sysctl
  shell: sysctl -p -f /etc/sysctl.d/70-disable-ipv6.conf

@ -0,0 +1,11 @@
---
- name: Update /etc/apt/sources.list
  copy:
    src: sources.list
    dest: /etc/apt/sources.list
    owner: root
    group: root
    mode: '0644'

- name: Update APT cache and install dependencies
  shell: apt update && apt autoclean && apt install -y python3-apt rename python3-pip sudo

@ -0,0 +1,4 @@
---
- name: Set the hostname
  hostname:
    name: "{{ inventory_hostname }}"

@ -0,0 +1,5 @@
---
#- include_tasks: network.yml
- include_tasks: user.yml
- include_tasks: apt.yml
- include_tasks: hostname.yml
@ -0,0 +1,29 @@
---
- name: Update /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
  copy:
    src: 99-disable-network-config.cfg
    dest: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
    owner: root
    group: root
    mode: '0644'

- name: Update /etc/network/interfaces.d/50-cloud-init
  template:
    src: 50-cloud-init
    dest: /etc/network/interfaces.d/50-cloud-init
    owner: root
    group: root
    mode: '0644'
  notify:
    - Restart networking

- name: Update /etc/sysctl.d/70-disable-ipv6.conf
  copy:
    src: ipv6.conf
    dest: /etc/sysctl.d/70-disable-ipv6.conf
    owner: root
    group: root
    mode: '0644'
  notify:
    - Reload sysctl

@ -0,0 +1,11 @@
- name: Create user
  user:
    name: user
    shell: /bin/bash
- name: Configuring sudoer access
  community.general.sudoers:
    name: allow-all-sudo
    state: present
    user: "user"
    commands: ALL
    nopassword: true

@ -0,0 +1,13 @@
auto lo
iface lo inet loopback
dns-nameservers 213.186.33.99 0.0.0.0

auto ens3
iface ens3 inet dhcp
    accept_ra 0
    mtu 1500

auto ens3:0
iface ens3:0 inet static
    address {{ failover_ip }}
    netmask 255.255.255.255
@ -0,0 +1 @@
|
|||
deb [arch=amd64] https://download.docker.com/linux/debian bullseye stable
|
|
@@ -0,0 +1,38 @@
---
- name: Install docker dependencies
  apt:
    name:
      - ca-certificates
      - gnupg
    update_cache: yes
    state: present

- name: Update /etc/apt/sources.list.d/docker.list
  copy:
    src: docker.list
    dest: /etc/apt/sources.list.d/docker.list
    owner: root
    group: root
    mode: '0644'

- name: Trust docker key
  apt_key:
    url: https://download.docker.com/linux/debian/gpg
    state: present

- name: Install docker
  apt:
    name:
      - docker-ce
      - docker-ce-cli
    update_cache: yes
    state: present

- name: Install /usr/local/bin/docker-compose
  shell: curl -L https://github.com/docker/compose/releases/download/v2.12.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose

- name: Add user to docker group
  user:
    name: user
    groups: docker
    append: yes
@@ -0,0 +1,11 @@
---
- name: Install ruby
  apt:
    name:
      - ruby-full
    state: present

- name: Install package_cloud package
  community.general.gem:
    name: package_cloud
    state: present
@@ -0,0 +1,5 @@
---
- name: Restart networking
  service:
    name: networking
    state: restarted
@@ -0,0 +1,2 @@
---
- include_tasks: network.yml
@@ -0,0 +1,10 @@
---
- name: Update /etc/network/interfaces.d/60-ens5-vpc
  template:
    src: 60-ens5-vpc
    dest: /etc/network/interfaces.d/60-ens5-vpc
    owner: root
    group: root
    mode: '0644'
  notify:
    - Restart networking
@@ -0,0 +1,4 @@
auto ens5
allow-hotplug ens5
iface ens5 inet static
    address {{ local_ip }}/24
@@ -0,0 +1,12 @@
---
- name: Create Docker Registry
  docker_container:
    name: registry
    image: registry:2
    state: started
    restart_policy: unless-stopped
    volumes:
      - /etc/docker/registry:/var/lib/registry
    published_ports:
      - "192.168.42.100:5000:5000"
  when: inventory_hostname == groups['managers'][0]
@@ -0,0 +1,3 @@
{
  "insecure-registries" : ["192.168.42.100:5000"]
}
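A malformed daemon.json will keep the Docker daemon from starting, so the fragment above must stay valid JSON after any edit; a minimal Python sanity check:

```python
import json

# The daemon.json fragment above, verbatim.
daemon_json = '{ "insecure-registries" : ["192.168.42.100:5000"] }'

# json.loads raises ValueError/JSONDecodeError if the file is malformed.
cfg = json.loads(daemon_json)
print(cfg["insecure-registries"])  # -> ['192.168.42.100:5000']
```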
@@ -0,0 +1,78 @@
---
- name: Install pip
  apt:
    name:
      - python3
      - python3-pip
      - virtualenv
      - python3-setuptools
      - python
      - python-setuptools

- name: Upgrade pip3
  pip:
    name: pip
    state: latest
    executable: pip3

- name: Install dockerpy for py3
  pip:
    name: docker[tls]
    state: forcereinstall
    executable: pip3

- name: Init Docker Swarm
  community.general.docker_swarm:
    advertise_addr: "{{ local_ip }}"
    listen_addr: "{{ local_ip }}"
    ssl_version: "1.3"
    validate_certs: yes
    state: present
  register: result
  when: inventory_hostname == groups['managers'][0]

- name: Get join-token for manager nodes
  set_fact:
    join_token_manager: "{{ hostvars[groups['managers'][0]].result.swarm_facts.JoinTokens.Manager }}"

- name: Get join-token for worker nodes
  set_fact:
    join_token_worker: "{{ hostvars[groups['managers'][0]].result.swarm_facts.JoinTokens.Worker }}"

- name: Join Swarm as managers
  community.general.docker_swarm:
    advertise_addr: "{{ local_ip }}"
    listen_addr: "{{ local_ip }}"
    ssl_version: "1.3"
    validate_certs: yes
    state: join
    join_token: "{{ join_token_manager }}"
    remote_addrs: ["{{ hostvars[groups['managers'][0]].local_ip }}:2377"]
  when:
    - inventory_hostname in groups['managers']
    - inventory_hostname != groups['managers'][0]

- name: Join Swarm as workers
  community.general.docker_swarm:
    advertise_addr: "{{ local_ip }}"
    listen_addr: "{{ local_ip }}"
    ssl_version: "1.3"
    validate_certs: yes
    state: join
    join_token: "{{ join_token_worker }}"
    remote_addrs: ["{{ hostvars[groups['managers'][0]].local_ip }}:2377"]
  when: inventory_hostname in groups['workers']

- name: Update daemon.json
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json
    owner: root
    group: root
    mode: '0644'

- name: Reload docker
  service:
    name: docker
    enabled: true
    state: restarted
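The two set_fact tasks in this file fetch tokens that were registered on the first manager by indexing into hostvars; the lookup boils down to a nested dictionary access. A minimal Python sketch (the token strings are fabricated placeholders, not real Swarm tokens):

```python
# Hypothetical snapshot of what Ansible exposes: per-host variables,
# including results registered on that host. Token values are made up.
hostvars = {
    "manager": {
        "result": {
            "swarm_facts": {
                "JoinTokens": {
                    "Manager": "SWMTKN-1-manager-example",
                    "Worker": "SWMTKN-1-worker-example",
                }
            }
        }
    }
}
groups = {"managers": ["manager"], "workers": ["worker1", "worker2"]}

# Equivalent of:
#   hostvars[groups['managers'][0]].result.swarm_facts.JoinTokens.Manager
first_manager = groups["managers"][0]
join_token_manager = hostvars[first_manager]["result"]["swarm_facts"]["JoinTokens"]["Manager"]
join_token_worker = hostvars[first_manager]["result"]["swarm_facts"]["JoinTokens"]["Worker"]
```

Because the set_fact tasks run on every host but always index the first manager, all nodes end up holding the same pair of tokens.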
@@ -0,0 +1,52 @@
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Wait 300 seconds for port 22 to become open and contain "OpenSSH"
      wait_for:
        port: 22
        host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
        search_regex: OpenSSH
        delay: 10
      connection: local

- hosts: all
  name: Provisioning tasks
  roles:
    - common
    - docker
    - private_net

- hosts: all
  name: Setup swarm
  roles:
    - swarm

- hosts: managers[0]
  name: Setup local registry
  roles:
    - registry

- hosts: managers[0]
  name: Install GH runner
  vars:
    - runner_user: user
    - access_token: "{{ lookup('env', 'GITHUB_TOKEN') }}"
    - runner_extra_config_args: "--ephemeral"
    - runner_name: "bw-swarm-{{ ansible_date_time.iso8601_micro | to_uuid }}"
    - github_account: "{{ lookup('env', 'GITHUB_ACCOUNT') }}"
    - github_owner: bunkerity
    - github_repo: bunkerweb
    - runner_labels:
        - bw-swarm
  roles:
    - monolithprojects.github_actions_runner

- hosts: managers[0]
  name: Restart GH runner (dirty hack because sometimes the runner is in inactive state)
  tasks:
    - name: Wait 60 seconds just in case
      ansible.builtin.pause:
        seconds: 60
    - name: Restart GH runner
      shell: systemctl restart actions.runner.*
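The wait_for gate at the top of this playbook polls port 22 until an OpenSSH banner appears. The check it performs is roughly the following (a hedged Python sketch of the idea, not the module's actual implementation):

```python
import socket


def ssh_banner_ready(host: str, port: int = 22, timeout: float = 5.0) -> bool:
    """Return True if the TCP port is open and the greeting contains 'OpenSSH'.

    SSH servers send their version banner (e.g. "SSH-2.0-OpenSSH_8.4p1")
    immediately after the TCP handshake, so reading the first bytes is enough.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            banner = sock.recv(64).decode(errors="replace")
    except OSError:
        # Connection refused / timed out: the host is not ready yet.
        return False
    return "OpenSSH" in banner
```

The real wait_for module additionally retries for up to 300 seconds with the configured delay; the sketch only shows a single probe.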
@@ -0,0 +1,25 @@
FROM quay.io/centos/centos:stream8

RUN yum install -y initscripts # for old "service"

ENV container=docker

RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
    rm -f /lib/systemd/system/multi-user.target.wants/*; \
    rm -f /etc/systemd/system/*.wants/*; \
    rm -f /lib/systemd/system/local-fs.target.wants/*; \
    rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
    rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
    rm -f /lib/systemd/system/basic.target.wants/*; \
    rm -f /lib/systemd/system/anaconda.target.wants/*;

COPY linux/nginx.repo /etc/yum.repos.d/nginx.repo

RUN dnf install php-fpm curl yum-utils epel-release -y && \
    dnf install nginx-1.20.2 -y

COPY ./package-centos/*.rpm /opt

VOLUME /run /tmp

CMD /usr/sbin/init
@@ -0,0 +1,38 @@
FROM debian:bullseye

ENV container docker
ENV LC_ALL C
ENV DEBIAN_FRONTEND noninteractive
ENV NGINX_VERSION 1.20.2

RUN apt-get update \
    && apt-get install -y systemd systemd-sysv \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN cd /lib/systemd/system/sysinit.target.wants/ \
    && rm $(ls | grep -v systemd-tmpfiles-setup)

RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
    /etc/systemd/system/*.wants/* \
    /lib/systemd/system/local-fs.target.wants/* \
    /lib/systemd/system/sockets.target.wants/*udev* \
    /lib/systemd/system/sockets.target.wants/*initctl* \
    /lib/systemd/system/basic.target.wants/* \
    /lib/systemd/system/anaconda.target.wants/* \
    /lib/systemd/system/plymouth* \
    /lib/systemd/system/systemd-update-utmp*

RUN apt update && \
    apt-get install php-fpm curl gnupg2 ca-certificates python3-pip -y && \
    echo "deb https://nginx.org/packages/debian/ bullseye nginx" > /etc/apt/sources.list.d/nginx.list && \
    echo "deb-src https://nginx.org/packages/debian/ bullseye nginx" >> /etc/apt/sources.list.d/nginx.list && \
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys ABF5BD827BD9BF62 && \
    apt-get update && \
    apt-get install -y --no-install-recommends nginx=${NGINX_VERSION}-1~bullseye

COPY ./package-debian/*.deb /opt

VOLUME ["/sys/fs/cgroup"]

CMD ["/lib/systemd/systemd"]
@@ -0,0 +1,29 @@
FROM fedora:36

ENV container docker

RUN dnf -y update \
    && dnf -y install systemd \
    && dnf clean all

RUN cd /lib/systemd/system/sysinit.target.wants/; \
    for i in *; do [ $i = systemd-tmpfiles-setup.service ] || rm -f $i; done

RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
    /etc/systemd/system/*.wants/* \
    /lib/systemd/system/local-fs.target.wants/* \
    /lib/systemd/system/sockets.target.wants/*udev* \
    /lib/systemd/system/sockets.target.wants/*initctl* \
    /lib/systemd/system/basic.target.wants/* \
    /lib/systemd/system/anaconda.target.wants/*

# Nginx
RUN dnf update -y && \
    dnf install -y php-fpm curl gnupg2 ca-certificates redhat-lsb-core python3-pip which && \
    dnf install nginx-1.20.2 -y

COPY ./package-fedora/*.rpm /opt

VOLUME ["/sys/fs/cgroup"]

CMD ["/usr/sbin/init"]
@@ -0,0 +1,38 @@
FROM ubuntu:22.04

ENV container docker
ENV LC_ALL C
ENV DEBIAN_FRONTEND noninteractive
ENV NGINX_VERSION 1.20.2

RUN apt-get update \
    && apt-get install -y systemd systemd-sysv \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN cd /lib/systemd/system/sysinit.target.wants/ \
    && rm $(ls | grep -v systemd-tmpfiles-setup)

RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
    /etc/systemd/system/*.wants/* \
    /lib/systemd/system/local-fs.target.wants/* \
    /lib/systemd/system/sockets.target.wants/*udev* \
    /lib/systemd/system/sockets.target.wants/*initctl* \
    /lib/systemd/system/basic.target.wants/* \
    /lib/systemd/system/anaconda.target.wants/* \
    /lib/systemd/system/plymouth* \
    /lib/systemd/system/systemd-update-utmp*

RUN apt update && \
    apt-get install php-fpm curl gnupg2 ca-certificates lsb-release ubuntu-keyring software-properties-common python3-pip -y && \
    echo "deb https://nginx.org/packages/ubuntu/ jammy nginx" > /etc/apt/sources.list.d/nginx.list && \
    echo "deb-src https://nginx.org/packages/ubuntu/ jammy nginx" >> /etc/apt/sources.list.d/nginx.list && \
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys ABF5BD827BD9BF62 && \
    apt-get update && \
    apt-get install -y --no-install-recommends nginx=${NGINX_VERSION}-1~jammy

COPY ./package-ubuntu/*.deb /opt

VOLUME ["/sys/fs/cgroup"]

CMD ["/lib/systemd/systemd"]
@@ -11,7 +11,6 @@ from subprocess import run

path.extend((f"{Path.cwd()}/utils", f"{Path.cwd()}/tests"))

from Test import Test
from DockerTest import DockerTest
from AutoconfTest import AutoconfTest
from SwarmTest import SwarmTest

@@ -0,0 +1,32 @@
# Variables
variable "autoconf_ip" {
  type     = string
  nullable = false
}
variable "autoconf_ip_id" {
  type     = string
  nullable = false
}

# Create cicd_bw_autoconf SSH key
resource "scaleway_account_ssh_key" "ssh_key" {
  name       = "cicd_bw_autoconf"
  public_key = file("~/.ssh/id_rsa.pub")
}

# Create cicd_bw_autoconf instance
resource "scaleway_instance_server" "instance" {
  depends_on = [scaleway_account_ssh_key.ssh_key]
  name       = "cicd_bw_autoconf"
  type       = "DEV1-M"
  image      = "debian_bullseye"
  ip_id      = var.autoconf_ip_id
}

# Create Ansible inventory file
resource "local_file" "ansible_inventory" {
  content = templatefile("templates/autoconf_inventory.tftpl", {
    public_ip = var.autoconf_ip
  })
  filename = "/tmp/autoconf_inventory"
}
@@ -0,0 +1,32 @@
# Variables
variable "docker_ip" {
  type     = string
  nullable = false
}
variable "docker_ip_id" {
  type     = string
  nullable = false
}

# Create cicd_bw_docker SSH key
resource "scaleway_account_ssh_key" "ssh_key" {
  name       = "cicd_bw_docker"
  public_key = file("~/.ssh/id_rsa.pub")
}

# Create cicd_bw_docker instance
resource "scaleway_instance_server" "instance" {
  depends_on = [scaleway_account_ssh_key.ssh_key]
  name       = "cicd_bw_docker"
  type       = "DEV1-M"
  image      = "debian_bullseye"
  ip_id      = var.docker_ip_id
}

# Create Ansible inventory file
resource "local_file" "ansible_inventory" {
  content = templatefile("templates/docker_inventory.tftpl", {
    public_ip = var.docker_ip
  })
  filename = "/tmp/docker_inventory"
}
@@ -0,0 +1,62 @@
# Variables
variable "k8s_ip" {
  type     = string
  nullable = false
}
variable "k8s_dockerconfigjson" {
  type     = string
  nullable = false
}

# Create k8s cluster
resource "scaleway_k8s_cluster" "cluster" {
  type    = "kapsule"
  name    = "bw_k8s"
  version = "1.24.7"
  cni     = "cilium"
}

# Create k8s pool
resource "scaleway_k8s_pool" "pool" {
  cluster_id          = scaleway_k8s_cluster.cluster.id
  name                = "bw_k8s"
  node_type           = "DEV1-M"
  size                = 3
  wait_for_pool_ready = true
}

# Get kubeconfig file
resource "local_file" "kubeconfig" {
  depends_on = [scaleway_k8s_pool.pool]
  content    = scaleway_k8s_cluster.cluster.kubeconfig[0].config_file
  filename   = "/tmp/k8s/kubeconfig"
}
provider "kubectl" {
  config_path = "${local_file.kubeconfig.filename}"
}

# Setup LB
resource "local_file" "lb_yml" {
  depends_on = [local_file.kubeconfig]
  content = templatefile("templates/lb.yml.tftpl", {
    lb_ip = var.k8s_ip
  })
  filename = "/tmp/k8s/lb.yml"
}
resource "kubectl_manifest" "lb" {
  depends_on = [local_file.lb_yml]
  yaml_body  = local_file.lb_yml.content
}

# Setup registry
resource "local_file" "reg_yml" {
  depends_on = [local_file.kubeconfig]
  content = templatefile("templates/reg.yml.tftpl", {
    dockerconfigjson = var.k8s_dockerconfigjson
  })
  filename = "/tmp/k8s/reg.yml"
}
resource "kubectl_manifest" "reg" {
  depends_on = [local_file.reg_yml]
  yaml_body  = local_file.reg_yml.content
}
@@ -0,0 +1,32 @@
# Variables
variable "linux_ip" {
  type     = string
  nullable = false
}
variable "linux_ip_id" {
  type     = string
  nullable = false
}

# Create cicd_bw_linux SSH key
resource "scaleway_account_ssh_key" "ssh_key" {
  name       = "cicd_bw_linux"
  public_key = file("~/.ssh/id_rsa.pub")
}

# Create cicd_bw_linux instance
resource "scaleway_instance_server" "instance" {
  depends_on = [scaleway_account_ssh_key.ssh_key]
  name       = "cicd_bw_linux"
  type       = "DEV1-M"
  image      = "debian_bullseye"
  ip_id      = var.linux_ip_id
}

# Create Ansible inventory file
resource "local_file" "ansible_inventory" {
  content = templatefile("templates/linux_inventory.tftpl", {
    public_ip = var.linux_ip
  })
  filename = "/tmp/linux_inventory"
}
@@ -0,0 +1,41 @@
# Variables
variable "autoconf_ip" {
  type     = string
  nullable = false
}

# Create cicd_bw_autoconf SSH key
resource "openstack_compute_keypair_v2" "ssh_key" {
  provider   = openstack.openstack
  name       = "cicd_bw_autoconf"
  public_key = file("~/.ssh/id_rsa.pub")
}

# Create cicd_bw_autoconf instance
resource "openstack_compute_instance_v2" "instance" {
  provider    = openstack.openstack
  name        = "cicd_bw_autoconf"
  image_name  = "Debian 11"
  flavor_name = "d2-4"
  region      = "SBG5"
  key_pair    = openstack_compute_keypair_v2.ssh_key.name
  network {
    name = "Ext-Net"
  }
}

# Attach failover IP to cicd_bw_autoconf instance
#resource "ovh_cloud_project_failover_ip_attach" "failover_ip" {
#  provider  = ovh.ovh
#  ip        = var.autoconf_ip
#  routed_to = openstack_compute_instance_v2.instance.name
#}

# Create Ansible inventory file
resource "local_file" "ansible_inventory" {
  content = templatefile("templates/autoconf_inventory.tftpl", {
    public_ip   = openstack_compute_instance_v2.instance.access_ip_v4,
    failover_ip = var.autoconf_ip
  })
  filename = "/tmp/autoconf_inventory"
}
@@ -0,0 +1,41 @@
# Variables
variable "docker_ip" {
  type     = string
  nullable = false
}

# Create cicd_bw_docker SSH key
resource "openstack_compute_keypair_v2" "ssh_key" {
  provider   = openstack.openstack
  name       = "cicd_bw_docker"
  public_key = file("~/.ssh/id_rsa.pub")
}

# Create cicd_bw_docker instance
resource "openstack_compute_instance_v2" "instance" {
  provider    = openstack.openstack
  name        = "cicd_bw_docker"
  image_name  = "Debian 11"
  flavor_name = "d2-4"
  region      = "SBG5"
  key_pair    = openstack_compute_keypair_v2.ssh_key.name
  network {
    name = "Ext-Net"
  }
}

# Attach failover IP to cicd_bw_docker instance
#resource "ovh_cloud_project_failover_ip_attach" "failover_ip" {
#  provider  = ovh.ovh
#  ip        = var.docker_ip
#  routed_to = openstack_compute_instance_v2.instance.name
#}

# Create Ansible inventory file
resource "local_file" "ansible_inventory" {
  content = templatefile("templates/docker_inventory.tftpl", {
    public_ip   = openstack_compute_instance_v2.instance.access_ip_v4,
    failover_ip = var.docker_ip
  })
  filename = "/tmp/docker_inventory"
}
@@ -0,0 +1,51 @@
# Create cicd_bw_k8s network
resource "ovh_cloud_project_network_private" "network" {
  provider = ovh.ovh
  name     = "cicd_bw_k8s"
  regions  = ["SBG5"]
  vlan_id  = 60
}
resource "ovh_cloud_project_network_private_subnet" "subnet" {
  provider   = ovh.ovh
  depends_on = [ovh_cloud_project_network_private.network]
  network_id = ovh_cloud_project_network_private.network.id
  start      = "192.168.42.100"
  end        = "192.168.42.200"
  network    = "192.168.42.0/24"
  region     = "SBG5"
  dhcp       = true
  no_gateway = false
}

# Create k8s cluster
resource "ovh_cloud_project_kube" "cluster" {
  provider           = ovh.ovh
  depends_on         = [ovh_cloud_project_network_private_subnet.subnet]
  name               = "cicd_bw_k8s"
  region             = "SBG5"
  version            = "1.24"
  private_network_id = tolist(ovh_cloud_project_network_private.network.regions_attributes[*].openstackid)[0]
  private_network_configuration {
    default_vrack_gateway              = ""
    private_network_routing_as_default = false
  }
}

# Create nodepool
resource "ovh_cloud_project_kube_nodepool" "pool" {
  provider       = ovh.ovh
  kube_id        = ovh_cloud_project_kube.cluster.id
  name           = "pool"
  flavor_name    = "d2-4"
  desired_nodes  = 3
  min_nodes      = 3
  max_nodes      = 3
  monthly_billed = false
  autoscale      = false
}

# Get kubeconfig file
resource "local_file" "kubeconfig" {
  content  = ovh_cloud_project_kube.cluster.kubeconfig
  filename = "/tmp/kubeconfig"
}
@@ -0,0 +1,41 @@
# Variables
variable "linux_ip" {
  type     = string
  nullable = false
}

# Create cicd_bw_linux SSH key
resource "openstack_compute_keypair_v2" "ssh_key" {
  provider   = openstack.openstack
  name       = "cicd_bw_linux"
  public_key = file("~/.ssh/id_rsa.pub")
}

# Create cicd_bw_linux instance
resource "openstack_compute_instance_v2" "instance" {
  provider    = openstack.openstack
  name        = "cicd_bw_linux"
  image_name  = "Debian 11"
  flavor_name = "d2-4"
  region      = "SBG5"
  key_pair    = openstack_compute_keypair_v2.ssh_key.name
  network {
    name = "Ext-Net"
  }
}

# Attach failover IP to cicd_bw_linux instance
#resource "ovh_cloud_project_failover_ip_attach" "failover_ip" {
#  provider  = ovh.ovh
#  ip        = var.linux_ip
#  routed_to = openstack_compute_instance_v2.instance.name
#}

# Create Ansible inventory file
resource "local_file" "ansible_inventory" {
  content = templatefile("templates/linux_inventory.tftpl", {
    public_ip   = openstack_compute_instance_v2.instance.access_ip_v4,
    failover_ip = var.linux_ip
  })
  filename = "/tmp/linux_inventory"
}
@@ -0,0 +1,22 @@
terraform {
  required_version = ">= 0.14.0"
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.48.0"
    }
    ovh = {
      source  = "ovh/ovh"
      version = ">= 0.13.0"
    }
  }
}

provider "openstack" {
  alias = "openstack"
}

provider "ovh" {
  alias = "ovh"
}
@@ -0,0 +1,64 @@
# Variables
variable "swarm_ips" {
  type     = list(string)
  nullable = false
}

# Create cicd_bw_swarm SSH key
resource "openstack_compute_keypair_v2" "ssh_key" {
  provider   = openstack.openstack
  name       = "cicd_bw_swarm"
  public_key = file("~/.ssh/id_rsa.pub")
}

# Create cicd_bw_swarm network
resource "ovh_cloud_project_network_private" "network" {
  provider = ovh.ovh
  name     = "cicd_bw_swarm"
  regions  = ["SBG5"]
  vlan_id  = 50
}
resource "ovh_cloud_project_network_private_subnet" "subnet" {
  provider   = ovh.ovh
  network_id = ovh_cloud_project_network_private.network.id
  start      = "192.168.42.1"
  end        = "192.168.42.254"
  network    = "192.168.42.0/24"
  region     = "SBG5"
  no_gateway = true
}

# Create cicd_bw_swarm_[1-3] instances
resource "openstack_compute_instance_v2" "instances" {
  provider    = openstack.openstack
  depends_on  = [ovh_cloud_project_network_private_subnet.subnet]
  count       = 3
  name        = "cicd_bw_swarm_${count.index}"
  image_name  = "Debian 11"
  flavor_name = "d2-4"
  region      = "SBG5"
  key_pair    = openstack_compute_keypair_v2.ssh_key.name
  network {
    name = "Ext-Net"
  }
  network {
    name = ovh_cloud_project_network_private.network.name
  }
}

# Attach failover IPs to cicd_bw_swarm_[1-3] instances
#resource "ovh_cloud_project_failover_ip_attach" "failover_ip" {
#  provider  = ovh.ovh
#  count     = 3
#  ip        = var.swarm_ips[count.index]
#  routed_to = openstack_compute_instance_v2.instances[count.index].name
#}

# Create Ansible inventory file
resource "local_file" "ansible_inventory" {
  content = templatefile("templates/swarm_inventory.tftpl", {
    instances    = openstack_compute_instance_v2.instances,
    failover_ips = var.swarm_ips
  })
  filename = "/tmp/swarm_inventory"
}
@@ -0,0 +1,2 @@
[autoconf]
autoconf ansible_host=${public_ip} ansible_user=debian failover_ip=${failover_ip}
@@ -0,0 +1,2 @@
[docker]
docker ansible_host=${public_ip} ansible_user=debian failover_ip=${failover_ip}
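Terraform's templatefile() fills in the ${...} placeholders in these inventory templates. Python's string.Template happens to use the same ${name} syntax, so the substitution can be sketched as follows (the 192.0.2.x addresses are TEST-NET documentation addresses, used here as stand-ins for real instance IPs):

```python
from string import Template

# The docker inventory line above, with the same ${...} placeholders
# that Terraform's templatefile() substitutes.
tpl = Template(
    "docker ansible_host=${public_ip} ansible_user=debian failover_ip=${failover_ip}"
)

# Stand-in values; in the real setup these come from the OpenStack instance
# attributes and the failover IP variable.
line = tpl.substitute(public_ip="192.0.2.10", failover_ip="192.0.2.20")
print(line)
# -> docker ansible_host=192.0.2.10 ansible_user=debian failover_ip=192.0.2.20
```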
@@ -0,0 +1,2 @@
[linux]
linux ansible_host=${public_ip} ansible_user=debian failover_ip=${failover_ip}
@@ -0,0 +1,6 @@
[managers]
manager ansible_host=${instances[0].access_ip_v4} ansible_user=debian failover_ip=${failover_ips[0]} local_ip=192.168.42.100

[workers]
worker1 ansible_host=${instances[1].access_ip_v4} ansible_user=debian failover_ip=${failover_ips[1]} local_ip=192.168.42.101
worker2 ansible_host=${instances[2].access_ip_v4} ansible_user=debian failover_ip=${failover_ips[2]} local_ip=192.168.42.102
@@ -0,0 +1,12 @@
terraform {
  required_providers {
    scaleway = {
      source  = "scaleway/scaleway"
      version = "2.5.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
  }
}
@@ -0,0 +1,41 @@
# Variables
variable "swarm_ips" {
  type     = list(string)
  nullable = false
}
variable "swarm_ips_id" {
  type     = list(string)
  nullable = false
}

# Create cicd_bw_swarm SSH key
resource "scaleway_account_ssh_key" "ssh_key" {
  name       = "cicd_bw_swarm"
  public_key = file("~/.ssh/id_rsa.pub")
}

# Create cicd_bw_swarm private network
resource "scaleway_vpc_private_network" "pn" {
  name = "cicd_bw_swarm"
}

# Create cicd_bw_swarm_[1-3] instances
resource "scaleway_instance_server" "instances" {
  count      = 3
  depends_on = [scaleway_account_ssh_key.ssh_key]
  name       = "cicd_bw_swarm_${count.index}"
  type       = "DEV1-M"
  image      = "debian_bullseye"
  ip_id      = var.swarm_ips_id[count.index]
  private_network {
    pn_id = scaleway_vpc_private_network.pn.id
  }
}

# Create Ansible inventory file
resource "local_file" "ansible_inventory" {
  content = templatefile("templates/swarm_inventory.tftpl", {
    public_ips = var.swarm_ips
  })
  filename = "/tmp/swarm_inventory"
}
@@ -0,0 +1,2 @@
[autoconf]
autoconf ansible_host=${public_ip} ansible_user=root
@@ -0,0 +1,2 @@
[docker]
docker ansible_host=${public_ip} ansible_user=root
@@ -0,0 +1,18 @@
kind: Service
apiVersion: v1
metadata:
  name: bw-lb
  annotations:
    service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v2: "true"
spec:
  selector:
    app: bunkerweb
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
  loadBalancerIP: ${lb_ip}
@@ -0,0 +1,2 @@
[linux]
linux ansible_host=${public_ip} ansible_user=root
@@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
  name: secret-registry
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ${dockerconfigjson}
@@ -0,0 +1,6 @@
[managers]
manager ansible_host=${public_ips[0]} ansible_user=root local_ip=192.168.42.100

[workers]
worker1 ansible_host=${public_ips[1]} ansible_user=root local_ip=192.168.42.101
worker2 ansible_host=${public_ips[2]} ansible_user=root local_ip=192.168.42.102
@@ -0,0 +1,157 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cr-bunkerweb
rules:
  - apiGroups: [""]
    resources: ["services", "pods", "configmaps"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-bunkerweb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crb-bunkerweb
subjects:
  - kind: ServiceAccount
    name: sa-bunkerweb
    namespace: default
    apiGroup: ""
roleRef:
  kind: ClusterRole
  name: cr-bunkerweb
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: bunkerweb
spec:
  selector:
    matchLabels:
      app: bunkerweb
  template:
    metadata:
      labels:
        app: bunkerweb
      annotations:
        bunkerweb.io/AUTOCONF: "yes"
    spec:
      containers:
        - name: bunkerweb
          image: bunkerity/bunkerweb:1.4.6
          imagePullPolicy: Always
          securityContext:
            runAsUser: 101
            runAsGroup: 101
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
          ports:
            - containerPort: 8080
            - containerPort: 8443
          env:
            - name: KUBERNETES_MODE
              value: "yes"
            # replace with your DNS resolvers
            # e.g. : kube-dns.kube-system.svc.cluster.local
            - name: DNS_RESOLVERS
              value: "coredns.kube-system.svc.cluster.local"
            - name: USE_API
              value: "yes"
            - name: API_WHITELIST_IP
              value: "10.0.0.0/8 192.168.0.0/16 172.16.0.0/12 100.64.0.0/10"
            - name: SERVER_NAME
              value: ""
            - name: MULTISITE
              value: "yes"
            - name: USE_REAL_IP
              value: "yes"
            - name: USE_PROXY_PROTOCOL
              value: "yes"
            - name: REAL_IP_HEADER
              value: "proxy_protocol"
            - name: REAL_IP_FROM
              value: "10.0.0.0/8 192.168.0.0/16 172.16.0.0/12 100.64.0.0/10"
            - name: USE_LETS_ENCRYPT_STAGING
              value: "yes"
          livenessProbe:
            exec:
              command:
                - /opt/bunkerweb/helpers/healthcheck.sh
            initialDelaySeconds: 30
            periodSeconds: 5
            timeoutSeconds: 1
            failureThreshold: 3
          readinessProbe:
            exec:
              command:
                - /opt/bunkerweb/helpers/healthcheck.sh
            initialDelaySeconds: 30
            periodSeconds: 1
            timeoutSeconds: 1
            failureThreshold: 3
      imagePullSecrets:
        - name: secret-registry
---
apiVersion: v1
kind: Service
metadata:
  name: svc-bunkerweb
spec:
  clusterIP: None
  selector:
    app: bunkerweb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-bunkerweb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bunkerweb-controller
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: bunkerweb-controller
  template:
    metadata:
      labels:
        app: bunkerweb-controller
    spec:
      serviceAccountName: sa-bunkerweb
      volumes:
        - name: vol-bunkerweb
          persistentVolumeClaim:
            claimName: pvc-bunkerweb
      containers:
        - name: bunkerweb-controller
          image: bunkerity/bunkerweb-autoconf:1.4.6
          imagePullPolicy: Always
          env:
            - name: KUBERNETES_MODE
              value: "yes"
          volumeMounts:
            - name: vol-bunkerweb
              mountPath: /data
      imagePullSecrets:
        - name: secret-registry
@@ -1,11 +0,0 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-bunkerweb
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/bw-data"