Merge pull request #452 from bunkerity/dev

Dev
This commit is contained in:
Théophile Diot 2023-04-26 15:35:21 +02:00 committed by GitHub
commit df94bc4af7
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
65 changed files with 1267 additions and 925 deletions

View File

@ -2,7 +2,6 @@
<figure markdown>
![Overview](assets/img/concepts.svg){ align=center }
</figure>
## Integrations
@ -11,18 +10,19 @@ The first concept is the integration of BunkerWeb into the target environment. W
The following integrations are officially supported :
- [Docker](/1.4/integrations/#docker)
- [Docker autoconf](/1.4/integrations/#docker-autoconf)
- [Swarm](/1.4/integrations/#swarm)
- [Docker](/1.5.0-beta/integrations/#docker)
- [Docker autoconf](/1.5.0-beta/integrations/#docker-autoconf)
- [Swarm](/1.5.0-beta/integrations/#swarm)
- [Kubernetes](/1.4/integrations/#kubernetes)
- [Linux](/1.4/integrations/#linux)
- [Ansible](/1.4/integrations/#ansible)
- [Linux](/1.5.0-beta/integrations/#linux)
- [Ansible](/1.5.0-beta/integrations/#ansible)
- [Vagrant](/1.5.0-beta/integrations/#vagrant)
If you think that a new integration should be supported, do not hesitate to open a [new issue](https://github.com/bunkerity/bunkerweb/issues) on the GitHub repository.
!!! info "Going further"
The technical details of all BunkerWeb integrations are available in the [integrations section](/1.4/integrations) of the documentation.
The technical details of all BunkerWeb integrations are available in the [integrations section](/1.5.0-beta/integrations) of the documentation.
## Settings
@ -79,11 +79,11 @@ app3.example.com_USE_BAD_BEHAVIOR=no
!!! info "Going further"
You will find concrete examples of multisite mode in the [quickstart guide](/1.4/quickstart-guide) of the documentation and the [examples](https://github.com/bunkerity/bunkerweb/tree/master/examples) directory of the repository.
You will find concrete examples of multisite mode in the [quickstart guide](/1.5.0-beta/quickstart-guide) of the documentation and the [examples](https://github.com/bunkerity/bunkerweb/tree/master/examples) directory of the repository.
## Custom configurations
Because meeting all the use cases only using the settings is not an option (even with [external plugins](/1.4/plugins)), you can use custom configurations to solve your specific challenges.
Because meeting all the use cases only using the settings is not an option (even with [external plugins](/1.5.0-beta/plugins)), you can use custom configurations to solve your specific challenges.
Under the hood, BunkerWeb uses the well-known NGINX web server, which is why you can leverage its configuration system for your specific needs. Custom NGINX configurations can be included in different [contexts](https://docs.nginx.com/nginx/admin-guide/basic-functionality/managing-configuration-files/#contexts) like HTTP or server (all servers and/or specific server block).
@ -91,4 +91,36 @@ Another core component of BunkerWeb is the ModSecurity Web Application Firewall
!!! info "Going further"
You will find concrete examples of custom configurations in the [quickstart guide](/1.4/quickstart-guide) of the documentation and the [examples](https://github.com/bunkerity/bunkerweb/tree/master/examples) directory of the repository.
You will find concrete examples of custom configurations in the [quickstart guide](/1.5.0-beta/quickstart-guide) of the documentation and the [examples](https://github.com/bunkerity/bunkerweb/tree/master/examples) directory of the repository.
## Database
The state of the current BunkerWeb configuration is stored in a backend database which contains the following data :
- Settings defined for all the services
- Custom configurations
- BunkerWeb instances
- Metadata about jobs execution
- Cached files
Under the hood, when you edit a setting or add a new configuration, everything is stored in the database. We currently support SQLite, MariaDB, MySQL and PostgreSQL as backends.
Database configuration is done by using the `DATABASE_URI` setting which respects the following formats :
- SQLite : `sqlite:///var/lib/bunkerweb/db.sqlite3`
- MariaDB : `mariadb+pymysql://bunkerweb:changeme@bw-db:3306/db`
- MySQL : `mysql+pymysql://bunkerweb:changeme@bw-db:3306/db`
- PostgreSQL : `postgresql://bunkerweb:changeme@bw-db:5432/db`
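As a quick illustration (not part of BunkerWeb itself), the anatomy of these SQLAlchemy-style URIs can be inspected with Python's standard `urllib.parse` module; the `describe_uri` helper below is hypothetical:

```python
from urllib.parse import urlsplit

def describe_uri(uri: str) -> dict:
    """Split a database URI into its components.

    The scheme may embed a driver after "+", e.g. "mariadb+pymysql".
    """
    parts = urlsplit(uri)
    dialect, _, driver = parts.scheme.partition("+")
    return {
        "dialect": dialect,        # sqlite, mariadb, mysql or postgresql
        "driver": driver or None,  # e.g. pymysql, or None for the default
        "user": parts.username,
        "host": parts.hostname,
        "port": parts.port,
        # database name, or the database file path for SQLite
        "database": parts.path.lstrip("/"),
    }

print(describe_uri("mariadb+pymysql://bunkerweb:changeme@bw-db:3306/db"))
print(describe_uri("sqlite:///var/lib/bunkerweb/db.sqlite3"))
```

Note that for the network backends the host part (`bw-db` above) must resolve to your database container or server from wherever the scheduler runs.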
## Scheduler
To make things automagically work together, a dedicated service called the scheduler is in charge of :
- Storing the settings and custom configurations inside the database
- Executing various tasks (called jobs)
- Generating a configuration which is understood by BunkerWeb
- Being the intermediary for other services (like web UI or autoconf)
In other words, the scheduler is the brain of BunkerWeb.
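The job-running part of such a service can be sketched as a loop that executes every job whose interval has elapsed. The snippet below is a simplified, hypothetical illustration of the idea, not BunkerWeb's actual scheduler code:

```python
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Job:
    name: str
    interval: float              # seconds between runs
    action: Callable[[], None]   # the task itself
    last_run: float = field(default=0.0)

def run_due_jobs(jobs: list[Job], now: float) -> list[str]:
    """Execute every job whose interval has elapsed; return their names."""
    executed = []
    for job in jobs:
        if now - job.last_run >= job.interval:
            job.action()
            job.last_run = now
            executed.append(job.name)
    return executed

# Hypothetical jobs standing in for real ones
# (certificate renewal, blacklist downloads, ...)
log: list[str] = []
jobs = [
    Job("renew-certs", interval=3600, action=lambda: log.append("certs")),
    Job("update-blacklists", interval=60, action=lambda: log.append("blacklists")),
]
print(run_due_jobs(jobs, now=time.time()))  # both jobs are due on the first pass
```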
When using container-based integrations, the scheduler is executed in its own container. For Linux-based integrations, on the other hand, the scheduler is self-contained in the `bunkerweb` service.

View File

@ -7,17 +7,19 @@
<figcaption>Make your web services secure by default !</figcaption>
</figure>
BunkerWeb is a web server based on the notorious [NGINX](https://nginx.org/) and focused on security.
BunkerWeb is a next-generation and open-source Web Application Firewall (WAF).
It integrates into existing environments ([Linux](/1.4/integrations/#linux), [Docker](/1.4/integrations/#docker), [Swarm](/1.4/integrations/#swarm), [Kubernetes](/1.4/integrations/#Kubernetes), …) to make your web services "secure by default" without any hassle. The security best practices are automatically applied for you while keeping control of every setting to meet your use case.
Being a full-featured web server (based on [NGINX](https://nginx.org/) under the hood), it will protect your web services to make them "secure by default". BunkerWeb integrates seamlessly into your existing environments ([Linux](/1.5.0-beta/integrations/#linux), [Docker](/1.5.0-beta/integrations/#docker), [Swarm](/1.5.0-beta/integrations/#swarm), [Kubernetes](/1.5.0-beta/integrations/#kubernetes), …) and is fully configurable (don't panic, there is an [awesome web UI](/1.5.0-beta/web-ui/) if you don't like the CLI) to meet your own use cases. In other words, cybersecurity is no longer a hassle.
BunkerWeb contains primary [security features](/1.4/security-tuning) as part of the core but can be easily extended with additional ones thanks to a [plugin system](/1.4/plugins).
BunkerWeb contains primary [security features](/1.5.0-beta/security-tuning/) as part of the core but can be easily extended with additional ones thanks to a [plugin system](/1.5.0-beta/plugins/).
## Why BunkerWeb ?
- **Easy integration into existing environments** : support for Linux, Docker, Swarm and Kubernetes
- **Easy integration into existing environments** : support for Linux, Docker, Swarm, Kubernetes, Ansible, Vagrant, ...
- **Highly customizable** : enable, disable and configure features easily to meet your use case
- **Secure by default** : offers out-of-the-box and hassle-free minimal security for your web services
- **Awesome web UI** : keep control of everything more efficiently without the need of the CLI
- **Plugin system** : extend BunkerWeb to meet your own use-cases
- **Free as in "freedom"** : licensed under the free [AGPLv3 license](https://www.gnu.org/licenses/agpl-3.0.en.html)
## Security features
@ -33,7 +35,7 @@ A non-exhaustive list of security features :
- **Block known bad IPs** with external blacklists and DNSBL
- And much more ...
Learn more about the core security features in the [security tuning](security-tuning) section of the documentation.
Learn more about the core security features in the [security tuning](/1.5.0-beta/security-tuning) section of the documentation.
## Demo

View File

@ -15,19 +15,22 @@ We provide ready-to-use prebuilt images for x64, x86 armv8 and armv7 architectur
docker pull bunkerity/bunkerweb:1.5.0-beta
```
Alternatively, you can build the Docker images directly from the [source](https://github.com/bunkerity/bunkerweb) (and get a coffee ☕ because it may take a long time depending on your hardware) :
Alternatively, you can build the Docker image directly from the [source](https://github.com/bunkerity/bunkerweb) (and get a coffee ☕ because it may take a long time depending on your hardware) :
```shell
git clone https://github.com/bunkerity/bunkerweb.git && \
cd bunkerweb && \
docker build -t my-bunkerweb .
docker build -t my-bunkerweb -f src/bunkerweb/Dockerfile .
```
BunkerWeb container's usage and configuration are based on :
The key concepts of the Docker integration are :
- **Environment variables** to configure BunkerWeb and meet your use cases
- **Volume** to cache important data and mount custom configuration files
- **Networks** to expose ports for clients and connect to upstream web services
- **Environment variables** to configure BunkerWeb
- **Scheduler** container to store configuration and execute jobs
- **Networks** to expose ports for clients and connect to upstream web services
!!! info "Database backend"
Please note that we assume you are using SQLite as the database backend (which is the default for the `DATABASE_URI` setting). Other backends for this integration are not documented here but are still possible if you need them.
### Environment variables
@ -56,14 +59,26 @@ services:
!!! info "Full list"
For the complete list of environment variables, see the [settings section](/1.4/settings) of the documentation.
### Volume
### Scheduler
A volume is used to share data with BunkerWeb and store persistent data like certificates, cached files, ...
The easiest way of managing the volume is by using a named one. You will first need to create it :
The [scheduler](/1.5.0-beta/concepts/#scheduler) is executed in its own container which is also available on Docker Hub :
```shell
docker volume create bw_data
docker pull bunkerity/bunkerweb-scheduler:1.5.0-beta
```
Alternatively, you can build the Docker image directly from the [source](https://github.com/bunkerity/bunkerweb) (less coffee ☕ needed than for the BunkerWeb image) :
```shell
git clone https://github.com/bunkerity/bunkerweb.git && \
cd bunkerweb && \
docker build -t my-scheduler -f src/scheduler/Dockerfile .
```
A volume is needed to store the SQLite database that will be used by the scheduler :
```shell
docker volume create bw-data
```
Once it's created, you will be able to mount it on `/data` when running the container :
@ -71,9 +86,9 @@ Once it's created, you will be able to mount it on `/data` when running the cont
```shell
docker run \
...
-v bw_data:/data \
-v bw-data:/data \
...
bunkerity/bunkerweb:1.5.0-beta
bunkerity/bunkerweb-scheduler:1.5.0-beta
```
Here is the docker-compose equivalent :
@ -82,17 +97,17 @@ Here is the docker-compose equivalent :
...
services:
mybunker:
image: bunkerity/bunkerweb:1.5.0-beta
image: bunkerity/bunkerweb-scheduler:1.5.0-beta
volumes:
- bw_data:/data
- bw-data:/data
...
volumes:
bw_data:
bw-data:
```
!!! warning "Using local folder for persistent data"
BunkerWeb runs as an **unprivileged user with UID 101 and GID 101** inside the container. The reason behind this is security : in case a vulnerability is exploited, the attacker won't have full root (UID/GID 0) privileges.
The scheduler runs as an **unprivileged user with UID 101 and GID 101** inside the container. The reason behind this is security : in case a vulnerability is exploited, the attacker won't have full root (UID/GID 0) privileges.
But there is a downside : if you use a **local folder for the persistent data**, you will need to **set the correct permissions** so the unprivileged user can write data to it. Something like that should do the trick :
```shell
mkdir bw-data && \
@ -125,6 +140,8 @@ volumes:
chmod -R 770 bw-data
```
TODO
### Networks
By default, the BunkerWeb container listens (inside the container) on **8080/tcp** for **HTTP** and **8443/tcp** for **HTTPS**.
@ -137,7 +154,7 @@ By default, BunkerWeb container is listening (inside the container) on **8080/tc
sudo sysctl net.ipv4.ip_unprivileged_port_start=1
```
The easiest way to connect BunkerWeb to web applications is by using Docker networks.
The easiest way to connect BunkerWeb to web applications is by using Docker networks.
First of all, you will need to create a network :

View File

@ -1,36 +1,41 @@
# Migrating from bunkerized
# Migrating from 1.4.X
!!! warning "Read this if you were a bunkerized user"
!!! warning "Read this if you were a 1.4.X user"
A lot of things changed since the last bunkerized release. If you want to upgrade (which we recommend, because BunkerWeb is by far better than bunkerized), please read this section carefully as well as the whole documentation.
A lot of things changed since the 1.4.X releases. Container-based integration stacks contain more services but, trust us, the fundamental principles of BunkerWeb are still there.
## Volumes
## Scheduler
When using container-based integrations like [Docker](/1.4/integrations/#docker), [Docker autoconf](/1.4/integrations/#docker-autoconf), [Swarm](/1.4/integrations/#swarm) or [Kubernetes](/1.4/integrations/#kubernetes), volumes for storing data like certificates, cache or custom configurations have changed. We now have a single "bw-data" volume which contains everything and should be easier to manage than bunkerized.
Back in the 1.4.X releases, jobs (like Let's Encrypt certificate generation/renewal or blacklist downloads) **were executed in the same container as BunkerWeb**. For the purpose of [separation of concerns](https://en.wikipedia.org/wiki/Separation_of_concerns), we decided to create a **separate service** which is now responsible for managing jobs.
## Removed features
Called **Scheduler**, this service also generates the final configuration used by BunkerWeb and acts as an intermediary between autoconf and BunkerWeb. In other words, the scheduler is the **brain of the BunkerWeb 1.5.X stack**.
We decided to drop the following features :
You will find more information about the scheduler [here](/1.5.0-beta/concepts/#scheduler).
- Blocking "bad" referrers : we may add it again in the future
- ROOT_SITE_SUBFOLDER : we will need to redesign this in the future
## Database
## Changed Authelia support
BunkerWeb configuration is **no longer stored in a plain file** (located at `/etc/nginx/variables.env` if you didn't know it). That's right, we now support a **fully-featured database as a backend** to store settings, cache, custom configs, ... 🥳
Instead of supporting only Authelia, we decided to support generic auth request settings. See the new [authelia example](https://github.com/bunkerity/bunkerweb/tree/master/examples/authelia) and [auth request documentation](https://docs.bunkerweb.io/1.4/security-tuning/#auth-request) for more information.
Using a real database offers many advantages :
## Replaced BLOCK_\*, WHITELIST_\* and BLACKLIST_\* settings
- Backup of the current configuration
- Usage with multiple services (scheduler, web UI, ...)
- Upgrade to a new BunkerWeb version
The blocking mechanisms have been completely redesigned. We have detected that a lot of false positives came from the default blacklists hardcoded into bunkerized. That's why we now give users the possibility of choosing their own blacklists (and also whitelists) for IP address, reverse DNS, user-agent, URI and ASN, see the [Blacklisting and whitelisting](/1.4/security-tuning/#blacklisting-and-whitelisting) section of the [security tuning](/1.4/security-tuning).
Please note that we currently support **SQLite**, **MySQL**, **MariaDB** and **PostgreSQL** as backends.
## Changed WHITELIST_USER_AGENT setting behavior
You will find more information about the database [here](/1.5.0-beta/concepts/#database).
The new behavior of the WHITELIST_USER_AGENT setting is to **disable completely security checks** if the User-Agent value of a client matches any of the patterns. In bunkerized it was used to ignore specific User-Agent values when `BLOCK_USER_AGENT` was set to `yes` to avoid false positives. You can select the blacklist of your choice to avoid FP (see previous section).
## Redis
## Changed PROXY_REAL_IP_* settings
When BunkerWeb 1.4.X was used in cluster mode (Swarm or Kubernetes integrations), **data were not shared among the nodes**. For example, if an attacker was banned via the "bad behavior" feature on a specific node, **they could still connect to the other nodes**.
To avoid any confusion between reverse proxy and real IP, we decided to rename the `PROXY_REAL_IP_*` settings, you will find more information on the subject [here](/1.4/quickstart-guide/#behind-load-balancer-or-reverse-proxy).
Security is not the only reason to have a shared data store for clustered integrations : **caching** is another one. We can now **store the results** of time-consuming operations like (reverse) DNS lookups so they are **available for other nodes**.
We currently support **Redis** as a backend for the shared data store.
See the list of [redis settings](/1.5.0-beta/settings/#redis) and the corresponding documentation of your integration for more information.
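The idea behind the shared ban store can be illustrated with a tiny TTL-based sketch. This is a plain in-memory stand-in written for illustration only (a real deployment points every node at the same Redis instance, which handles expiry natively) :

```python
import time

class BanStore:
    """Minimal in-memory stand-in for a shared ban store with per-entry
    TTL, mimicking what a Redis SET with an EX expiry provides."""

    def __init__(self) -> None:
        # ip -> (reason, absolute expiry time)
        self._bans: dict[str, tuple[str, float]] = {}

    def ban(self, ip: str, reason: str, ttl: float) -> None:
        """Ban an IP for `ttl` seconds."""
        self._bans[ip] = (reason, time.monotonic() + ttl)

    def is_banned(self, ip: str) -> bool:
        entry = self._bans.get(ip)
        if entry is None:
            return False
        if time.monotonic() >= entry[1]:  # expired : drop and allow
            del self._bans[ip]
            return False
        return True

# Any node checking the shared store sees a ban placed by another node.
store = BanStore()
store.ban("203.0.113.7", "bad behavior", ttl=86400)
print(store.is_banned("203.0.113.7"))   # True
print(store.is_banned("198.51.100.1"))  # False
```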
## Default values and new settings
The default value of some settings have changed and we have added many other settings, we recommend you read the [security tuning](/1.4/security-tuning) and [settings](/1.4/settings) sections of the documentation.
The default values of some settings have changed and we have added many other settings, so we recommend reading the [security tuning](/1.5.0-beta/security-tuning) and [settings](/1.5.0-beta/settings) sections of the documentation.

View File

@ -1,5 +1,5 @@
mkdocs==1.4.2
mkdocs-material==9.1.7
mkdocs-material==9.1.8
pytablewriter==0.64.2
mike==1.1.2
jinja2<3.1.0

View File

@ -279,6 +279,22 @@ You can use the following settings to set up whitelisting :
| `WHITELIST_USER_AGENT_URLS` | | List of URLs containing User-Agent to whitelist. |
| `WHITELIST_URI` | | List of requests URI to whitelist. |
| `WHITELIST_URI_URLS` | | List of URLs containing request(s) URI to whitelist. |
## ReverseScan
ReverseScan is a feature designed to detect open ports by establishing TCP connections with clients' IP addresses.
Consider enabling this feature if you want to detect possible open proxies or connections from servers.
We provide a list of suspicious ports by default, but it can be modified to fit your needs. Be mindful that adding too many ports to the list can significantly slow down clients' connections due to the caching process. If a listed port is open, the client's access will be denied.
Please be aware that this feature is new and further improvements will be added soon.
Here is the list of settings related to ReverseScan:
| Setting | Default | Description |
| :----------: | :--------------------------------------------------------------------------: | :--------------------------------------------- |
| `USE_REVERSE_SCAN` | `no` | When set to `yes`, will enable ReverseScan. |
| `REVERSE_SCAN_PORTS` | `22 80 443 3128 8000 8080` | List of suspicious ports to scan. |
| `REVERSE_SCAN_TIMEOUT` | `500` | Specify the maximum timeout (in ms) when scanning a port. |
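The underlying check boils down to attempting a TCP connection back to the client's IP with a short timeout. Here is a rough, hypothetical sketch of the technique (not BunkerWeb's actual Lua implementation) :

```python
import socket

def scan_ports(ip: str, ports: list[int], timeout_ms: int = 500) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `ip`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout_ms / 1000)
            # connect_ex returns 0 on success instead of raising
            if sock.connect_ex((ip, port)) == 0:
                open_ports.append(port)
    return open_ports

# Self-contained demo : open a local listening socket and detect it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # pick a free ephemeral port
listener.listen(1)
open_port = listener.getsockname()[1]
found = scan_ports("127.0.0.1", [open_port])
listener.close()
print(found == [open_port])  # True : the open port was detected
```

This also makes the cost visible : each closed port can burn up to `REVERSE_SCAN_TIMEOUT` milliseconds, which is why results are cached and why the port list should stay short.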
## BunkerNet

View File

@ -69,7 +69,7 @@ Because the web UI is a web application, the recommended installation procedure
-e bwadm.example.com_REVERSE_PROXY_URL=/changeme/ \
-e bwadm.example.com_REVERSE_PROXY_HOST=http://bw-ui:7000 \
-e "bwadm.example.com_REVERSE_PROXY_HEADERS=X-Script-Name /changeme" \
-e bwadm.example.com_INTERCEPTED_ERROR_CODES="400 401 405 413 429 500 501 502 503 504" \
-e bwadm.example.com_INTERCEPTED_ERROR_CODES="400 401 404 405 413 429 500 501 502 503 504" \
-l bunkerweb.INSTANCE \
bunkerity/bunkerweb:1.5.0-beta && \
docker network connect bw-universe bunkerweb
@ -645,7 +645,7 @@ Because the web UI is a web application, the recommended installation procedure
- bunkerweb.REVERSE_PROXY_URL=/changeme
- bunkerweb.REVERSE_PROXY_HOST=http://bw-ui:7000
- bunkerweb.REVERSE_PROXY_HEADERS=X-Script-Name /changeme
- bunkerweb.INTERCEPTED_ERROR_CODES=400 401 405 413 429 500 501 502 503 504
- bunkerweb.INTERCEPTED_ERROR_CODES=400 401 404 405 413 429 500 501 502 503 504
volumes:
bw-data:

View File

@ -0,0 +1,78 @@
version: "3"
services:
mybunker:
image: bunkerity/bunkerweb:1.5.0-beta
ports:
- 80:8080 # required to resolve let's encrypt challenges
- 10000:10000 # app1 without SSL/TLS
- 10001:10001 # app1 with SSL/TLS
- 20000:20000 # app2 without SSL/TLS
- 20001:20001 # app2 with SSL/TLS
environment:
- MULTISITE=yes
- SERVER_NAME=app1.example.com app2.example.com # replace with your domains
- API_WHITELIST_IP=127.0.0.0/8 10.20.30.0/24
- SERVE_FILES=no
- DISABLE_DEFAULT_SERVER=yes
- AUTO_LETS_ENCRYPT=yes
- USE_CLIENT_CACHE=yes
- USE_GZIP=yes
- USE_REVERSE_PROXY=yes
- SERVER_TYPE=stream
- app1.example.com_REVERSE_PROXY_HOST=app1:9000
- app1.example.com_LISTEN_STREAM_PORT=10000
- app1.example.com_LISTEN_STREAM_PORT_SSL=10001
- app2.example.com_REVERSE_PROXY_HOST=app2:9000
- app2.example.com_LISTEN_STREAM_PORT=20000
- app2.example.com_LISTEN_STREAM_PORT_SSL=20001
labels:
- "bunkerweb.INSTANCE" # required for the scheduler to recognize the container
networks:
- bw-universe
- bw-services
bw-scheduler:
image: bunkerity/bunkerweb-scheduler:1.5.0-beta
depends_on:
- mybunker
environment:
- DOCKER_HOST=tcp://bw-docker-proxy:2375
volumes:
- bw-data:/data
networks:
- bw-universe
- bw-docker
bw-docker-proxy:
image: tecnativa/docker-socket-proxy:0.1
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
- CONTAINERS=1
networks:
- bw-docker
app1:
image: istio/tcp-echo-server:1.2
command: [ "9000", "app1" ]
networks:
- bw-services
app2:
image: istio/tcp-echo-server:1.2
command: [ "9000", "app2" ]
networks:
- bw-services
volumes:
bw-data:
networks:
bw-services:
bw-universe:
ipam:
driver: default
config:
- subnet: 10.20.30.0/24
bw-docker:

View File

@ -236,7 +236,11 @@ spec:
- name: "ADMIN_PASSWORD"
value: "changeme"
- name: "ABSOLUTE_URI"
value: "http://www.example.com/admin"
value: "http://www.example.com/admin/"
- name: KUBERNETES_MODE
value: "YES"
- name: "DATABASE_URI"
value: "mariadb+pymysql://bunkerweb:testor@svc-bunkerweb-db:3306/db"
---
apiVersion: v1
kind: Service
@ -308,13 +312,13 @@ metadata:
name: ingress
annotations:
bunkerweb.io/www.example.com_USE_UI: "yes"
bunkerweb.io/www.example.com_REVERSE_PROXY_HEADERS: "X-Script-Name /admin"
bunkerweb.io/www.example.com_REVERSE_PROXY_HEADERS_1: "X-Script-Name /admin"
spec:
rules:
- host: www.example.com
http:
paths:
- path: /admin
- path: /admin/
pathType: Prefix
backend:
service:

View File

@ -294,7 +294,11 @@ spec:
- name: "ADMIN_PASSWORD"
value: "changeme"
- name: "ABSOLUTE_URI"
value: "http://www.example.com/admin"
value: "http://www.example.com/admin/"
- name: KUBERNETES_MODE
value: "YES"
- name: "DATABASE_URI"
value: "mariadb+pymysql://bunkerweb:testor@svc-bunkerweb-db:3306/db"
---
apiVersion: v1
kind: Service
@ -366,13 +370,13 @@ metadata:
name: ingress
annotations:
bunkerweb.io/www.example.com_USE_UI: "yes"
bunkerweb.io/www.example.com_REVERSE_PROXY_HEADERS: "X-Script-Name /admin"
bunkerweb.io/www.example.com_REVERSE_PROXY_HEADERS_1: "X-Script-Name /admin"
spec:
rules:
- host: www.example.com
http:
paths:
- path: /admin
- path: /admin/
pathType: Prefix
backend:
service:

View File

@ -234,7 +234,11 @@ spec:
- name: "ADMIN_PASSWORD"
value: "changeme"
- name: "ABSOLUTE_URI"
value: "http://www.example.com/admin"
value: "http://www.example.com/admin/"
- name: KUBERNETES_MODE
value: "YES"
- name: "DATABASE_URI"
value: "mariadb+pymysql://bunkerweb:testor@svc-bunkerweb-db:3306/db"
---
apiVersion: v1
kind: Service
@ -319,13 +323,13 @@ metadata:
name: ingress
annotations:
bunkerweb.io/www.example.com_USE_UI: "yes"
bunkerweb.io/www.example.com_REVERSE_PROXY_HEADERS: "X-Script-Name /admin"
bunkerweb.io/www.example.com_REVERSE_PROXY_HEADERS_1: "X-Script-Name /admin"
spec:
rules:
- host: www.example.com
http:
paths:
- path: /admin
- path: /admin/
pathType: Prefix
backend:
service:

View File

@ -8,7 +8,7 @@ copyright: Bunkerity
nav:
- Introduction: 'index.md'
- Migrating from bunkerized: 'migrating.md'
- Migrating from 1.4.X: 'migrating.md'
- Concepts: 'concepts.md'
- Integrations: 'integrations.md'
- Quickstart guide: 'quickstart-guide.md'

View File

@ -43,12 +43,11 @@ RUN apk add --no-cache bash && \
for dir in $(echo "configs/http configs/stream configs/server-http configs/server-stream configs/default-server-http configs/default-server-stream configs/modsec configs/modsec-crs") ; do mkdir "/data/${dir}" ; done && \
chown -R root:nginx /data && \
chmod -R 770 /data && \
chown -R root:nginx /usr/share/bunkerweb /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb && \
chown -R root:nginx /usr/share/bunkerweb /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb /usr/bin/bwcli && \
find /usr/share/bunkerweb -type f -exec chmod 0740 {} \; && \
find /usr/share/bunkerweb -type d -exec chmod 0750 {} \; && \
chmod -R 770 /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb && \
chmod 750 /usr/share/bunkerweb/cli/main.py /usr/share/bunkerweb/helpers/*.sh /usr/bin/bwcli /usr/share/bunkerweb/autoconf/main.py /usr/share/bunkerweb/deps/python/bin/* && \
chown root:nginx /usr/bin/bwcli && \
mkdir /var/log/letsencrypt /var/lib/letsencrypt && \
chown root:nginx /var/log/letsencrypt /var/lib/letsencrypt && \
chmod 770 /var/log/letsencrypt /var/lib/letsencrypt

View File

@ -36,13 +36,12 @@ COPY src/common/api /usr/share/bunkerweb/api
COPY src/common/cli /usr/share/bunkerweb/cli
COPY src/common/confs /usr/share/bunkerweb/confs
COPY src/common/core /usr/share/bunkerweb/core
COPY src/common/db /usr/share/bunkerweb/db
COPY src/common/gen /usr/share/bunkerweb/gen
COPY src/common/helpers /usr/share/bunkerweb/helpers
COPY src/common/settings.json /usr/share/bunkerweb/settings.json
COPY src/common/utils /usr/share/bunkerweb/utils
COPY src/VERSION /usr/share/bunkerweb/VERSION
COPY misc/*.ascii /usr/share/bunkerweb/
COPY misc/*.ascii /usr/share/bunkerweb/misc/
# Install runtime dependencies, pypi packages, move bwcli, create data folders and set permissions
RUN apk add --no-cache pcre bash python3 && \

View File

@ -2,7 +2,7 @@
. /usr/share/bunkerweb/helpers/utils.sh
ascii_array=($(ls /usr/share/bunkerweb/*.ascii))
ascii_array=($(ls /usr/share/bunkerweb/misc/*.ascii))
cat ${ascii_array[$(($RANDOM % ${#ascii_array[@]}))]}
log "ENTRYPOINT" "" "Starting BunkerWeb v$(cat /usr/share/bunkerweb/VERSION) ..."

View File

@ -41,17 +41,17 @@ api.global.POST["^/stop$"] = function(self)
end
api.global.POST["^/confs$"] = function(self)
local tmp = "/var/tmp/bunkerweb/api_" .. ngx.var.uri:sub(2) .. ".tar.gz"
local destination = "/usr/share/bunkerweb/" .. ngx.var.uri:sub(2)
if ngx.var.uri == "/confs" then
local tmp = "/var/tmp/bunkerweb/api_" .. ngx.ctx.bw.uri:sub(2) .. ".tar.gz"
local destination = "/usr/share/bunkerweb/" .. ngx.ctx.bw.uri:sub(2)
if ngx.ctx.bw.uri == "/confs" then
destination = "/etc/nginx"
elseif ngx.var.uri == "/data" then
elseif ngx.ctx.bw.uri == "/data" then
destination = "/data"
elseif ngx.var.uri == "/cache" then
elseif ngx.ctx.bw.uri == "/cache" then
destination = "/data/cache"
elseif ngx.var.uri == "/custom_configs" then
elseif ngx.ctx.bw.uri == "/custom_configs" then
destination = "/data/configs"
elseif ngx.var.uri == "/plugins" then
elseif ngx.ctx.bw.uri == "/plugins" then
destination = "/data/plugins"
end
local form, err = upload:new(4096)
@ -136,17 +136,17 @@ api.global.GET["^/bans$"] = function(self)
local data = {}
for i, k in ipairs(self.datastore:keys()) do
if k:find("^bans_ip_") then
local ret, reason = datastore:get(k)
local ret, reason = self.datastore:get(k)
if not ret then
return self:response(ngx.HTTP_INTERNAL_SERVER_ERROR, "error",
"can't access " .. k .. " from datastore : " .. reason)
end
local ret, exp = self.datastore:exp(k)
if not ret then
local ttl, err = self.datastore:ttl(k)
if not ttl then
return self:response(ngx.HTTP_INTERNAL_SERVER_ERROR, "error",
"can't access exp " .. k .. " from datastore : " + exp)
"can't access ttl " .. k .. " from datastore : " .. err)
end
local ban = { ip = k:sub(9, #k), reason = reason, exp = exp }
local ban = { ip = k:sub(9, #k), reason = reason, exp = ttl }
table.insert(data, ban)
end
end
@ -158,16 +158,16 @@ function api:is_allowed_ip()
if not data then
return false, "can't access api_allowed_ips in datastore"
end
if utils.is_ip_in_networks(ngx.var.remote_addr, cjson.decode(data)) then
if utils.is_ip_in_networks(ngx.ctx.bw.remote_addr, cjson.decode(data)) then
return true, "ok"
end
return false, "IP is not in API_WHITELIST_IP"
end
function api:do_api_call()
if self.global[ngx.var.request_method] ~= nil then
for uri, api_fun in pairs(self.global[ngx.var.request_method]) do
if string.match(ngx.var.uri, uri) then
if self.global[ngx.ctx.bw.request_method] ~= nil then
for uri, api_fun in pairs(self.global[ngx.ctx.bw.request_method]) do
if string.match(ngx.ctx.bw.uri, uri) then
local status, resp = api_fun(self)
local ret = true
if status ~= ngx.HTTP_OK then

View File

@ -96,7 +96,7 @@ helpers.fill_ctx = function()
local data = {}
-- Common vars
data.kind = "http"
if not ngx.shared.cachestore then
if ngx.shared.datastore_stream then
data.kind = "stream"
end
data.remote_addr = ngx.var.remote_addr

View File

@ -33,6 +33,11 @@ function plugin:initialize(id)
self.logger:log(ngx.ERR, "can't get IS_LOADING variable : " .. err)
end
self.is_loading = is_loading == "yes"
-- Kind of server
self.kind = "http"
if ngx.shared.datastore_stream then
self.kind = "stream"
end
end
function plugin:get_id()

View File

@ -1,10 +1,17 @@
from pathlib import Path
from os import getenv
from dotenv import dotenv_values
from docker import DockerClient
from kubernetes import client, config
from pathlib import Path
from redis import StrictRedis
from sys import path as sys_path
from typing import Tuple
if "/usr/share/bunkerweb/utils" not in sys_path:
sys_path.append("/usr/share/bunkerweb/utils")
from ApiCaller import ApiCaller
from API import API
from ApiCaller import ApiCaller
from logger import setup_logger
def format_remaining_time(seconds):
@ -29,129 +36,174 @@ def format_remaining_time(seconds):
class CLI(ApiCaller):
def __init__(self):
self.__variables = dotenv_values("/etc/nginx/variables.env")
self.__logger = setup_logger("CLI", getenv("LOG_LEVEL", "INFO"))
if not Path("/usr/share/bunkerweb/db").is_dir():
self.__variables = dotenv_values("/etc/nginx/variables.env")
else:
if "/usr/share/bunkerweb/db" not in sys_path:
sys_path.append("/usr/share/bunkerweb/db")
from Database import Database
db = Database(
self.__logger,
sqlalchemy_string=getenv("DATABASE_URI", None),
)
self.__variables = db.get_config()
self.__integration = self.__detect_integration()
super().__init__(self.__get_apis())
def __detect_integration(self):
distrib = ""
if Path("/etc/os-release").is_file():
with open("/etc/os-release", "r") as f:
if "Alpine" in f.read():
distrib = "alpine"
else:
distrib = "other"
# Docker case
if distrib == "alpine" and Path("/usr/sbin/nginx").is_file():
return "docker"
# Linux case
if distrib == "other":
return "linux"
# Swarm case
if self.__variables.get("SWARM_MODE", "no") == "yes":
return "swarm"
# Kubernetes case
if self.__variables.get("KUBERNETES_MODE", "no") == "yes":
return "kubernetes"
# Autoconf case
if distrib == "alpine":
return "autoconf"
raise Exception("Can't detect integration")
def __get_apis(self):
# Docker case
if self.__integration in ("docker", "linux"):
return [
API(
f"http://127.0.0.1:{self.__variables['API_HTTP_PORT']}",
host=self.__variables["API_SERVER_NAME"],
)
]
# Autoconf case
if self.__integration == "autoconf":
docker_client = DockerClient()
apis = []
for container in docker_client.containers.list(
filters={"label": "bunkerweb.INSTANCE"}
):
port = "5000"
host = "bwapi"
for env in container.attrs["Config"]["Env"]:
if env.startswith("API_HTTP_PORT="):
port = env.split("=")[1]
elif env.startswith("API_SERVER_NAME="):
host = env.split("=")[1]
apis.append(API(f"http://{container.name}:{port}", host=host))
return apis
# Swarm case
if self.__integration == "swarm":
docker_client = DockerClient()
apis = []
for service in self.__client.services.list(
filters={"label": "bunkerweb.INSTANCE"}
):
port = "5000"
host = "bwapi"
for env in service.attrs["Spec"]["TaskTemplate"]["ContainerSpec"][
"Env"
]:
if env.startswith("API_HTTP_PORT="):
port = env.split("=")[1]
elif env.startswith("API_SERVER_NAME="):
host = env.split("=")[1]
for task in service.tasks():
apis.append(
API(
f"http://{service.name}.{task['NodeID']}.{task['ID']}:{port}",
host=host,
)
self.__use_redis = self.__variables.get("USE_REDIS", "no") == "yes"
self.__redis = None
if self.__use_redis:
redis_host = self.__variables.get("REDIS_HOST")
if redis_host:
redis_port = self.__variables.get("REDIS_PORT", "6379")
if not redis_port.isdigit():
self.__logger.error(
f"REDIS_PORT is not a valid port number: {redis_port}, defaulting to 6379"
)
return apis
redis_port = "6379"
redis_port = int(redis_port)
# Kubernetes case
if self.__integration == "kubernetes":
config.load_incluster_config()
corev1 = client.CoreV1Api()
apis = []
for pod in corev1.list_pod_for_all_namespaces(watch=False).items:
if (
pod.metadata.annotations != None
and "bunkerweb.io/INSTANCE" in pod.metadata.annotations
and pod.status.pod_ip
):
port = "5000"
host = "bwapi"
for env in pod.spec.containers[0].env:
if env.name == "API_HTTP_PORT":
port = env.value
elif env.name == "API_SERVER_NAME":
host = env.value
apis.append(API(f"http://{pod.status.pod_ip}:{port}", host=host))
return apis
redis_db = self.__variables.get("REDIS_DB", "0")
if not redis_db.isdigit():
self.__logger.error(
f"REDIS_DB is not a valid database number: {redis_db}, defaulting to 0"
)
redis_db = "0"
redis_db = int(redis_db)
redis_timeout = self.__variables.get("REDIS_TIMEOUT", "1000.0")
if redis_timeout:
try:
redis_timeout = float(redis_timeout)
except ValueError:
self.__logger.error(
f"REDIS_TIMEOUT is not a valid timeout: {redis_timeout}, defaulting to 1000 ms"
)
redis_timeout = 1000.0
redis_keepalive_pool = self.__variables.get(
"REDIS_KEEPALIVE_POOL", "10"
)
if not redis_keepalive_pool.isdigit():
self.__logger.error(
f"REDIS_KEEPALIVE_POOL is not a valid number of connections: {redis_keepalive_pool}, defaulting to 10"
)
redis_keepalive_pool = "10"
redis_keepalive_pool = int(redis_keepalive_pool)
self.__redis = StrictRedis(
host=redis_host,
port=redis_port,
db=redis_db,
socket_timeout=redis_timeout,
socket_connect_timeout=redis_timeout,
socket_keepalive=True,
max_connections=redis_keepalive_pool,
ssl=self.__variables.get("REDIS_SSL", "no") == "yes",
)
else:
self.__logger.error(
"USE_REDIS is set to yes but REDIS_HOST is not set, disabling redis"
)
self.__use_redis = False
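The Redis settings above are all parsed defensively: an invalid value is logged and replaced by a safe default instead of aborting. A minimal standalone sketch of that pattern (the helper name `parse_int_setting` is illustrative, not part of BunkerWeb):

```python
def parse_int_setting(variables, key, default):
    """Return variables[key] as an int, falling back to default when the
    value is missing or not a valid non-negative integer."""
    value = variables.get(key, str(default))
    if not value.isdigit():
        # mirror the CLI behavior above: report the problem, keep going
        print(f"{key} is not a valid number: {value}, defaulting to {default}")
        return default
    return int(value)

settings = {"REDIS_PORT": "abc", "REDIS_DB": "2"}
port = parse_int_setting(settings, "REDIS_PORT", 6379)  # invalid, falls back
db = parse_int_setting(settings, "REDIS_DB", 0)
```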
if not Path("/usr/share/bunkerweb/db").is_dir() or self.__integration not in (
"kubernetes",
"swarm",
"autoconf",
):
# Docker & Linux case
super().__init__(
apis=[
API(
f"http://127.0.0.1:{self.__variables.get('API_HTTP_PORT', '5000')}",
host=self.__variables.get("API_SERVER_NAME", "bwapi"),
)
]
)
else:
super().__init__()
self.auto_setup(self.__integration)
def __detect_integration(self) -> str:
if self.__variables.get("KUBERNETES_MODE", "no") == "yes":
return "kubernetes"
elif self.__variables.get("SWARM_MODE", "no") == "yes":
return "swarm"
elif self.__variables.get("AUTOCONF_MODE", "no") == "yes":
return "autoconf"
elif Path("/usr/share/bunkerweb/INTEGRATION").is_file():
return Path("/usr/share/bunkerweb/INTEGRATION").read_text().strip().lower()
elif (
Path("/etc/os-release").is_file()
and "Alpine" in Path("/etc/os-release").read_text()
):
return "docker"
return "linux"
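The detection order above matters: explicit `*_MODE` variables win over filesystem probes. A sketch of the same precedence in plain Python, with a `root` parameter added here purely for testability:

```python
from pathlib import Path

def detect_integration(variables, root="/"):
    """First match wins: explicit *_MODE variables, then the INTEGRATION
    marker file, then an Alpine-based image check, else plain Linux."""
    if variables.get("KUBERNETES_MODE", "no") == "yes":
        return "kubernetes"
    if variables.get("SWARM_MODE", "no") == "yes":
        return "swarm"
    if variables.get("AUTOCONF_MODE", "no") == "yes":
        return "autoconf"
    marker = Path(root) / "usr/share/bunkerweb/INTEGRATION"
    if marker.is_file():
        return marker.read_text().strip().lower()
    os_release = Path(root) / "etc/os-release"
    if os_release.is_file() and "Alpine" in os_release.read_text():
        return "docker"
    return "linux"
```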
def unban(self, ip: str) -> Tuple[bool, str]:
if self.__redis:
ok = self.__redis.delete(f"bans_ip_{ip}")
if not ok:
self.__logger.error(f"Failed to delete ban for {ip} from redis")
def unban(self, ip):
if self._send_to_apis("POST", "/unban", data={"ip": ip}):
return True, f"IP {ip} has been unbanned"
return False, "error"
def ban(self, ip, exp):
def ban(self, ip: str, exp: float) -> Tuple[bool, str]:
if self.__redis:
ok = self.__redis.set(
f"bans_ip_{ip}",
"manual",
ex=exp,
)
if not ok:
self.__logger.error(f"Failed to ban {ip} in redis")
if self._send_to_apis("POST", "/ban", data={"ip": ip, "exp": exp}):
return True, f"IP {ip} has been banned"
return (
True,
f"IP {ip} has been banned for {format_remaining_time(exp)}",
)
return False, "error"
def bans(self):
def bans(self) -> Tuple[bool, str]:
servers = {}
ret, resp = self._send_to_apis("GET", "/bans", response=True)
if ret:
bans = resp.get("data", [])
if not ret:
return False, "error"
if len(bans) == 0:
return True, "No ban found"
for k, v in resp.items():
servers[k] = v.get("data", [])
if self.__redis:
servers["redis"] = []
for key in self.__redis.scan_iter("bans_ip_*"):
ip = key.decode("utf-8").replace("bans_ip_", "")
exp = self.__redis.ttl(key)
servers["redis"].append(
{
"ip": ip,
"exp": exp,
"reason": "manual",
}
)
cli_str = ""
for server, bans in servers.items():
cli_str += f"List of bans for {server}:\n"
if not bans:
cli_str += "No ban found\n"
cli_str = "List of bans :\n"
for ban in bans:
cli_str += f"- {ban['ip']} for {format_remaining_time(ban['exp'])} : {ban.get('reason', 'no reason given')}\n"
return True, cli_str
return False, "error"
else:
cli_str += "\n"
return True, cli_str
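The reworked `bans()` groups results per server (including a synthetic `redis` entry) and prints one line per ban. The report-building part can be sketched standalone; the real CLI supplies its own `format_remaining_time()`, a trivial stand-in is assumed here:

```python
def format_bans(servers, format_remaining_time=lambda s: f"{s}s"):
    """Build the per-server ban report assembled above: a header per server,
    'No ban found' for empty lists, one '- ip for time : reason' line per ban."""
    cli_str = ""
    for server, bans in servers.items():
        cli_str += f"List of bans for {server}:\n"
        if not bans:
            cli_str += "No ban found\n"
        for ban in bans:
            cli_str += (
                f"- {ban['ip']} for {format_remaining_time(ban['exp'])} : "
                f"{ban.get('reason', 'no reason given')}\n"
            )
        cli_str += "\n"
    return cli_str
```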

View File

@@ -1,7 +1,7 @@
#!/usr/bin/env python3
from argparse import ArgumentParser
from os import _exit
from os import _exit, getenv
from sys import exit as sys_exit, path
from traceback import format_exc
@@ -34,11 +34,17 @@ if __name__ == "__main__":
# Ban subparser
parser_ban = subparsers.add_parser("ban", help="add a ban to the cache")
parser_ban.add_argument("ip", type=str, help="IP address to ban")
ban_time = getenv("BAD_BEHAVIOR_BAN_TIME", "86400")
if not ban_time.isdigit():
ban_time = "86400"
ban_time = int(ban_time)
parser_ban.add_argument(
"exp",
"-exp",
type=int,
help="banning time in seconds (default : 86400)",
default=86400,
help=f"banning time in seconds (default : {ban_time})",
default=ban_time,
)
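The change above turns the hard-coded ban duration into an environment-driven argparse default. A self-contained sketch of the same pattern (a single parser stands in for the `ban` subparser here):

```python
from argparse import ArgumentParser
from os import getenv

# Default ban time comes from BAD_BEHAVIOR_BAN_TIME, with 86400 used when
# the variable is unset or not a valid integer.
ban_time = getenv("BAD_BEHAVIOR_BAN_TIME", "86400")
if not ban_time.isdigit():
    ban_time = "86400"
ban_time = int(ban_time)

parser_ban = ArgumentParser()
parser_ban.add_argument("ip", type=str, help="IP address to ban")
parser_ban.add_argument(
    "-exp",
    type=int,
    help=f"banning time in seconds (default : {ban_time})",
    default=ban_time,
)
args = parser_ban.parse_args(["1.2.3.4"])
```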
# Bans subparser

View File

@@ -86,7 +86,7 @@ server_tokens off;
{% set x = found.update({"res" : true}) %}
{% endif %}
{% endfor %}
{% if not found["res"] and all[server_name + "_SERVER_TYPE"] == "stream" %}
{% if not found["res"] and all[server_name + "_SERVER_TYPE"] == "http" %}
{% set x = map_servers.update({server_name : [server_name]}) %}
{% endif %}
{% endif %}

View File

@@ -1,118 +1,138 @@
init_by_lua_block {
local logger = require "logger"
local datastore = require "datastore"
local plugins = require "plugins"
local utils = require "utils"
local cjson = require "cjson"
local class = require "middleclass"
local clogger = require "bunkerweb.logger"
local helpers = require "bunkerweb.helpers"
local cdatastore = require "bunkerweb.datastore"
local cjson = require "cjson"
logger.log(ngx.NOTICE, "INIT-STREAM", "Init phase started")
-- Start init phase
local logger = clogger:new("INIT-STREAM")
local datastore = cdatastore:new()
logger:log(ngx.NOTICE, "init-stream phase started")
-- Remove previous data from the datastore
logger:log(ngx.NOTICE, "deleting old keys from datastore ...")
local data_keys = {"^plugin_", "^variable_", "^plugins$", "^api_", "^misc_"}
for i, key in pairs(data_keys) do
local ok, err = datastore:delete_all(key)
if not ok then
logger.log(ngx.ERR, "INIT-STREAM", "Can't delete " .. key .. " from datastore : " .. err)
logger:log(ngx.ERR, "can't delete " .. key .. " from datastore : " .. err)
return false
end
logger.log(ngx.INFO, "INIT-STREAM", "Deleted " .. key .. " from datastore")
logger:log(ngx.INFO, "deleted " .. key .. " from datastore")
end
logger:log(ngx.NOTICE, "deleted old keys from datastore")
-- Load variables into the datastore
logger:log(ngx.NOTICE, "saving variables into datastore ...")
local file = io.open("/etc/nginx/variables.env")
if not file then
logger.log(ngx.ERR, "INIT-STREAM", "Can't open /etc/nginx/variables.env file")
logger:log(ngx.ERR, "can't open /etc/nginx/variables.env file")
return false
end
file:close()
for line in io.lines("/etc/nginx/variables.env") do
local variable, value = line:match("(.+)=(.*)")
ok, err = datastore:set("variable_" .. variable, value)
local ok, err = datastore:set("variable_" .. variable, value)
if not ok then
logger.log(ngx.ERR, "INIT-STREAM", "Can't save variable " .. variable .. " into datastore")
logger:log(ngx.ERR, "can't save variable " .. variable .. " into datastore : " .. err)
return false
end
logger:log(ngx.INFO, "saved variable " .. variable .. "=" .. value .. " into datastore")
end
logger:log(ngx.NOTICE, "saved variables into datastore")
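The `line:match("(.+)=(.*)")` pattern used above is greedy: the key extends to the last `=` on the line, and lines without `=` are skipped. A Python equivalent of that parsing, for illustration:

```python
import re

def parse_env(text):
    """Parse KEY=VALUE lines with the same greedy pattern the Lua code uses,
    (.+)=(.*): the key extends to the last '=' on the line."""
    variables = {}
    for line in text.splitlines():
        match = re.match(r"(.+)=(.*)", line)
        if match:
            variables[match.group(1)] = match.group(2)
    return variables

parse_env("SERVER_NAME=example.com\nUSE_REDIS=yes")
```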
-- Set default values into the datastore
ok, err = datastore:set("plugins", cjson.encode({}))
if not ok then
logger.log(ngx.ERR, "INIT-STREAM", "Can't set default value for plugins into the datastore : " .. err)
return false
end
ok, err = utils.set_values()
if not ok then
logger.log(ngx.ERR, "INIT-STREAM", "Error while setting default values : " .. err)
return false
end
-- API setup
-- Set API values into the datastore
logger:log(ngx.NOTICE, "saving API values into datastore ...")
local value, err = datastore:get("variable_USE_API")
if not value then
logger.log(ngx.ERR, "INIT-STREAM", "Can't get variable USE_API from the datastore")
logger:log(ngx.ERR, "can't get variable USE_API from the datastore : " .. err)
return false
end
if value == "yes" then
value, err = datastore:get("variable_API_WHITELIST_IP")
local value, err = datastore:get("variable_API_WHITELIST_IP")
if not value then
logger.log(ngx.ERR, "INIT-STREAM", "Can't get variable API_WHITELIST_IP from the datastore")
logger:log(ngx.ERR, "can't get variable API_WHITELIST_IP from the datastore : " .. err)
return false
end
local whitelists = { data = {}}
local whitelists = {}
for whitelist in value:gmatch("%S+") do
table.insert(whitelists.data, whitelist)
table.insert(whitelists, whitelist)
end
ok, err = datastore:set("api_whitelist_ip", cjson.encode(whitelists))
local ok, err = datastore:set("api_whitelist_ip", cjson.encode(whitelists))
if not ok then
logger.log(ngx.ERR, "INIT-STREAM", "Can't save api_whitelist_ip to datastore : " .. err)
logger:log(ngx.ERR, "can't save API whitelist_ip to datastore : " .. err)
return false
end
logger:log(ngx.INFO, "saved API whitelist_ip into datastore")
end
logger:log(ngx.NOTICE, "saved API values into datastore")
-- Load plugins into the datastore
logger:log(ngx.NOTICE, "saving plugins into datastore ...")
local plugins = {}
local plugin_paths = {"/usr/share/bunkerweb/core", "/etc/bunkerweb/plugins"}
for i, plugin_path in ipairs(plugin_paths) do
local paths = io.popen("find -L " .. plugin_path .. " -maxdepth 1 -type d ! -path " .. plugin_path)
for path in paths:lines() do
plugin, err = plugins:load(path)
if not plugin then
logger.log(ngx.ERR, "INIT-STREAM", "Error while loading plugin from " .. path .. " : " .. err)
return false
local ok, plugin = helpers.load_plugin(path .. "/plugin.json")
if not ok then
logger:log(ngx.ERR, plugin)
else
local ok, err = datastore:set("plugin_" .. plugin.id, cjson.encode(plugin))
if not ok then
logger:log(ngx.ERR, "can't save " .. plugin.id .. " into datastore : " .. err)
else
table.insert(plugins, plugin)
table.sort(plugins, function (a, b)
return a.order < b.order
end)
logger:log(ngx.NOTICE, "loaded plugin " .. plugin.id .. " v" .. plugin.version)
end
end
logger.log(ngx.NOTICE, "INIT-STREAM", "Loaded plugin " .. plugin.id .. " v" .. plugin.version)
end
end
-- Call init method of plugins
local list, err = plugins:list()
if not list then
logger.log(ngx.ERR, "INIT-STREAM", "Can't list loaded plugins : " .. err)
list = {}
local ok, err = datastore:set("plugins", cjson.encode(plugins))
if not ok then
logger:log(ngx.ERR, "can't save plugins into datastore : " .. err)
return false
end
for i, plugin in ipairs(list) do
local ret, plugin_lua = pcall(require, plugin.id .. "/" .. plugin.id)
if ret then
local plugin_obj = plugin_lua.new()
if plugin_obj.init ~= nil then
ok, err = plugin_obj:init()
logger:log(ngx.NOTICE, "saved plugins into datastore")
-- Call init() methods of plugins
logger:log(ngx.NOTICE, "calling init() methods of plugins ...")
for i, plugin in ipairs(plugins) do
-- Require call
local plugin_lua, err = helpers.require_plugin(plugin.id)
if plugin_lua == false then
logger:log(ngx.ERR, err)
elseif plugin_lua == nil then
logger:log(ngx.NOTICE, err)
else
-- Check if plugin has init method
if plugin_lua.init ~= nil then
-- New call
local ok, plugin_obj = helpers.new_plugin(plugin_lua)
if not ok then
logger.log(ngx.ERR, "INIT-STREAM", "Plugin " .. plugin.id .. " failed on init() : " .. err)
logger:log(ngx.ERR, plugin_obj)
else
logger.log(ngx.INFO, "INIT-STREAM", "Successfull init() call for plugin " .. plugin.id .. " : " .. err)
local ok, ret = helpers.call_plugin(plugin_obj, "init")
if not ok then
logger:log(ngx.ERR, ret)
elseif not ret.ret then
logger:log(ngx.ERR, plugin.id .. ":init() call failed : " .. ret.msg)
else
logger:log(ngx.NOTICE, plugin.id .. ":init() call successful : " .. ret.msg)
end
end
else
logger.log(ngx.INFO, "INIT-STREAM", "init() method not found in " .. plugin.id .. ", skipped execution")
end
else
if plugin_lua:match("not found") then
logger.log(ngx.INFO, "INIT-STREAM", "can't require " .. plugin.id .. " : not found")
else
logger.log(ngx.ERR, "INIT-STREAM", "can't require " .. plugin.id .. " : " .. plugin_lua)
logger:log(ngx.NOTICE, "skipped execution of " .. plugin.id .. " because method init() is not defined")
end
end
end
logger:log(ngx.NOTICE, "called init() methods of plugins")
logger.log(ngx.NOTICE, "INIT-STREAM", "Init phase ended")
logger:log(ngx.NOTICE, "init-stream phase ended")
}
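The plugin-loading loop in the block above reads each `plugin.json`, skips entries that fail to load, and keeps the list sorted by the plugin's `order` field. In spirit (a Python sketch under those assumptions, not the actual loader):

```python
import json
from pathlib import Path

def load_plugins(plugin_dirs):
    """Collect plugin.json metadata from each plugin directory and keep the
    list sorted by the 'order' field, as the Lua loop above does."""
    plugins = []
    for plugin_dir in plugin_dirs:
        for path in sorted(Path(plugin_dir).glob("*/plugin.json")):
            try:
                plugin = json.loads(path.read_text())
            except (OSError, ValueError):
                continue  # mirror the "log the error and skip" behavior
            plugins.append(plugin)
            plugins.sort(key=lambda p: p["order"])
    return plugins
```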

View File

@@ -5,7 +5,7 @@ init_worker_by_lua_block {
-- Our timer function
local ready_log = function(premature)
-- Instantiate objects
local logger = require "bunkerweb.logger":new("INIT")
local logger = require "bunkerweb.logger":new("INIT-STREAM")
local datastore = require "bunkerweb.datastore":new()
-- Don't print the ready log if we are in loading state
local is_loading, err = require "bunkerweb.utils".get_variable("IS_LOADING", false)

View File

@@ -0,0 +1,48 @@
lua_shared_dict ready_lock_stream 16k;
init_worker_by_lua_block {
-- Our timer function
local ready_log = function(premature)
-- Instantiate objects
local logger = require "bunkerweb.logger":new("INIT")
local datastore = require "bunkerweb.datastore":new()
-- Don't print the ready log if we are in loading state
local is_loading, err = require "bunkerweb.utils".get_variable("IS_LOADING", false)
if not is_loading then
logger:log(ngx.ERR, "utils.get_variable() failed : " .. err)
return
elseif is_loading == "yes" then
return
end
-- Instantiate lock
local lock = require "resty.lock":new("ready_lock_stream")
if not lock then
logger:log(ngx.ERR, "lock:new() failed : " .. err)
return
end
-- Acquire lock
local elapsed, err = lock:lock("ready")
if elapsed == nil then
logger:log(ngx.ERR, "lock:lock() failed : " .. err)
else
-- Display ready log
local ok, err = datastore:get("misc_ready")
if not ok and err ~= "not found" then
logger:log(ngx.ERR, "datastore:get() failed : " .. err)
elseif not ok and err == "not found" then
logger:log(ngx.NOTICE, "BunkerWeb is ready to fool hackers ! 🚀")
local ok, err = datastore:set("misc_ready", "ok")
if not ok then
logger:log(ngx.ERR, "datastore:set() failed : " .. err)
end
end
end
-- Release lock
lock:unlock()
end
-- Start timer
ngx.timer.at(5, ready_log)
}
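The new worker block above combines a shared lock with a `misc_ready` flag so that only one worker prints the ready banner. The same idea in Python, using a plain lock and dict as stand-ins for the nginx lock and datastore:

```python
import threading

_ready_lock = threading.Lock()
_datastore = {}

def ready_log(log=print):
    """Emit the ready banner exactly once across workers: take the shared
    lock, check the 'misc_ready' flag, and log + set it only if unset."""
    with _ready_lock:
        if "misc_ready" not in _datastore:
            log("BunkerWeb is ready to fool hackers ! 🚀")
            _datastore["misc_ready"] = "ok"
```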

View File

@@ -66,7 +66,7 @@ logger:log(ngx.INFO, "called log() methods of plugins")
-- Display reason at info level
if ngx.ctx.reason then
logger:log(ngx.INFO, "client was denied with reason : " .. reason)
logger:log(ngx.INFO, "client was denied with reason : " .. ngx.ctx.reason)
end
logger:log(ngx.INFO, "log phase ended")

View File

@@ -1,44 +1,74 @@
log_by_lua_block {
local utils = require "utils"
local logger = require "logger"
local datastore = require "datastore"
local plugins = require "plugins"
local class = require "middleclass"
local clogger = require "bunkerweb.logger"
local helpers = require "bunkerweb.helpers"
local cdatastore = require "bunkerweb.datastore"
local cjson = require "cjson"
logger.log(ngx.INFO, "LOG", "Log phase started")
-- Start log phase
local logger = clogger:new("LOG")
local datastore = cdatastore:new()
logger:log(ngx.INFO, "log phase started")
-- List all plugins
local list, err = plugins:list()
if not list then
logger.log(ngx.ERR, "LOG", "Can't list loaded plugins : " .. err)
list = {}
-- Fill ctx
logger:log(ngx.INFO, "filling ngx.ctx ...")
local ok, ret, errors = helpers.fill_ctx()
if not ok then
logger:log(ngx.ERR, "fill_ctx() failed : " .. ret)
elseif errors then
for i, error in ipairs(errors) do
logger:log(ngx.ERR, "fill_ctx() error " .. tostring(i) .. " : " .. error)
end
end
logger:log(ngx.INFO, "ngx.ctx filled (ret = " .. ret .. ")")
-- Call log method of plugins
for i, plugin in ipairs(list) do
local ret, plugin_lua = pcall(require, plugin.id .. "/" .. plugin.id)
if ret then
local plugin_obj = plugin_lua.new()
if plugin_obj.log ~= nil then
logger.log(ngx.INFO, "LOG", "Executing log() of " .. plugin.id)
local ok, err = plugin_obj:log()
-- Get plugins
local plugins, err = datastore:get("plugins")
if not plugins then
logger:log(ngx.ERR, "can't get plugins from datastore : " .. err)
return false
end
plugins = cjson.decode(plugins)
-- Call log_stream() methods
logger:log(ngx.INFO, "calling log_stream() methods of plugins ...")
for i, plugin in ipairs(plugins) do
-- Require call
local plugin_lua, err = helpers.require_plugin(plugin.id)
if plugin_lua == false then
logger:log(ngx.ERR, err)
elseif plugin_lua == nil then
logger:log(ngx.INFO, err)
else
-- Check if plugin has log method
if plugin_lua.log_stream ~= nil then
-- New call
local ok, plugin_obj = helpers.new_plugin(plugin_lua)
if not ok then
logger.log(ngx.ERR, "LOG", "Error while calling log() on plugin " .. plugin.id .. " : " .. err)
logger:log(ngx.ERR, plugin_obj)
else
logger.log(ngx.INFO, "LOG", "Return value from " .. plugin.id .. ".log() is : " .. err)
local ok, ret = helpers.call_plugin(plugin_obj, "log_stream")
if not ok then
logger:log(ngx.ERR, ret)
elseif not ret.ret then
logger:log(ngx.ERR, plugin.id .. ":log_stream() call failed : " .. ret.msg)
else
logger:log(ngx.INFO, plugin.id .. ":log_stream() call successful : " .. ret.msg)
end
end
else
logger.log(ngx.INFO, "LOG", "log() method not found in " .. plugin.id .. ", skipped execution")
logger:log(ngx.INFO, "skipped execution of " .. plugin.id .. " because method log_stream() is not defined")
end
end
end
logger:log(ngx.INFO, "called log_stream() methods of plugins")
-- Display reason at info level
local reason = utils.get_reason()
if reason then
logger.log(ngx.INFO, "LOG", "Client was denied with reason : " .. reason)
if ngx.ctx.reason then
logger:log(ngx.INFO, "client was denied with reason : " .. ngx.ctx.reason)
end
logger.log(ngx.INFO, "LOG", "Log phase ended")
logger:log(ngx.INFO, "log phase ended")
}

View File

@@ -1,81 +1,100 @@
preread_by_lua_block {
local logger = require "logger"
local datastore = require "datastore"
local plugins = require "plugins"
local utils = require "utils"
local redisutils = require "redisutils"
local class = require "middleclass"
local clogger = require "bunkerweb.logger"
local helpers = require "bunkerweb.helpers"
local utils = require "bunkerweb.utils"
local cdatastore = require "bunkerweb.datastore"
local cclusterstore = require "bunkerweb.clusterstore"
local cjson = require "cjson"
logger.log(ngx.INFO, "PREREAD", "Preread phase started")
-- Start preread phase
local logger = clogger:new("PREREAD")
local datastore = cdatastore:new()
logger:log(ngx.INFO, "preread phase started")
-- Fill ctx
logger:log(ngx.INFO, "filling ngx.ctx ...")
local ok, ret, errors = helpers.fill_ctx()
if not ok then
logger:log(ngx.ERR, "fill_ctx() failed : " .. ret)
elseif errors then
for i, error in ipairs(errors) do
logger:log(ngx.ERR, "fill_ctx() error " .. tostring(i) .. " : " .. error)
end
end
logger:log(ngx.INFO, "ngx.ctx filled (ret = " .. ret .. ")")
-- Process bans as soon as possible
local banned = nil
-- Redis case
local use_redis = utils.get_variable("USE_REDIS")
if use_redis == "yes" then
local redis_banned, reason = redisutils.ban(ngx.var.remote_addr)
if redis_banned == nil then
logger.log(ngx.ERR, "ACCESS", "Error while checking ban from redis, falling back to local : " .. reason)
elseif not redis_banned then
banned = false
else
banned = reason
end
end
-- Local case
local banned, reason, ttl = utils.is_banned(ngx.ctx.bw.remote_addr)
if banned == nil then
local reason, err = datastore:get("bans_ip_" .. ngx.var.remote_addr)
if reason then
banned = reason
logger:log(ngx.ERR, "can't check if IP " .. ngx.ctx.bw.remote_addr .. " is banned : " .. reason)
elseif banned then
logger:log(ngx.WARN, "IP " .. ngx.ctx.bw.remote_addr .. " is banned with reason " .. reason .. " (" .. tostring(ttl) .. "s remaining)")
return ngx.exit(utils.get_deny_status())
else
logger:log(ngx.INFO, "IP " .. ngx.ctx.bw.remote_addr .. " is not banned")
end
-- Get plugins
local plugins, err = datastore:get("plugins")
if not plugins then
logger:log(ngx.ERR, "can't get plugins from datastore : " .. err)
return false
end
plugins = cjson.decode(plugins)
-- Call preread() methods
logger:log(ngx.INFO, "calling preread() methods of plugins ...")
local status = nil
for i, plugin in ipairs(plugins) do
-- Require call
local plugin_lua, err = helpers.require_plugin(plugin.id)
if plugin_lua == false then
logger:log(ngx.ERR, err)
elseif plugin_lua == nil then
logger:log(ngx.INFO, err)
else
banned = false
end
end
-- Deny request
if banned then
logger.log(ngx.WARN, "ACCESS", "IP " .. ngx.var.remote_addr .. " is banned with reason : " .. banned)
ngx.exit(utils.get_deny_status())
end
-- List all plugins
local list, err = plugins:list()
if not list then
logger.log(ngx.ERR, "PREREAD", "Can't list loaded plugins : " .. err)
list = {}
end
-- Call preread method of plugins
for i, plugin in ipairs(list) do
local ret, plugin_lua = pcall(require, plugin.id .. "/" .. plugin.id)
if ret then
local plugin_obj = plugin_lua.new()
if plugin_obj.preread ~= nil then
logger.log(ngx.INFO, "PREREAD", "Executing preread() of " .. plugin.id)
local ok, err, ret, value = plugin_obj:preread()
-- Check if plugin has preread method
if plugin_lua.preread ~= nil then
-- New call
local ok, plugin_obj = helpers.new_plugin(plugin_lua)
if not ok then
logger.log(ngx.ERR, "PREREAD", "Error while calling preread() on plugin " .. plugin.id .. " : " .. err)
logger:log(ngx.ERR, plugin_obj)
else
logger.log(ngx.INFO, "PREREAD", "Return value from " .. plugin.id .. ".preread() is : " .. err)
end
if ret then
if type(value) == "number" then
if value == utils.get_deny_status() then
logger.log(ngx.WARN, "PREREAD", "Denied access from " .. plugin.id .. " : " .. err)
ngx.var.reason = plugin.id
else
logger.log(ngx.NOTICE, "PREREAD", plugin.id .. " returned status " .. tostring(value) .. " : " .. err)
end
return ngx.exit(value)
local ok, ret = helpers.call_plugin(plugin_obj, "preread")
if not ok then
logger:log(ngx.ERR, ret)
elseif not ret.ret then
logger:log(ngx.ERR, plugin.id .. ":preread() call failed : " .. ret.msg)
else
return value
logger:log(ngx.INFO, plugin.id .. ":preread() call successful : " .. ret.msg)
end
if ret.status then
if ret.status == utils.get_deny_status() then
ngx.ctx.reason = plugin.id
logger:log(ngx.WARN, "denied access from " .. plugin.id .. " : " .. ret.msg)
else
logger:log(ngx.NOTICE, plugin.id .. " returned status " .. tostring(ret.status) .. " : " .. ret.msg)
end
status = ret.status
break
end
end
else
logger.log(ngx.INFO, "PREREAD", "preread() method not found in " .. plugin.id .. ", skipped execution")
logger:log(ngx.INFO, "skipped execution of " .. plugin.id .. " because method preread() is not defined")
end
end
end
logger:log(ngx.INFO, "called preread() methods of plugins")
logger.log(ngx.INFO, "PREREAD", "Preread phase ended")
logger:log(ngx.INFO, "preread phase ended")
-- Return status if needed
if status then
return ngx.exit(status)
end
return true
}

View File

@@ -14,8 +14,8 @@ server {
# reason variable
set $reason '';
# stream flag
set $is_stream 'yes';
# server_name variable
set $server_name '{{ SERVER_NAME.split(" ")[0] }}';
# include LUA files
include {{ NGINX_PREFIX }}preread-stream-lua.conf;

View File

@@ -29,10 +29,17 @@ lua_ssl_trusted_certificate "/usr/share/bunkerweb/misc/root-ca.pem";
lua_ssl_verify_depth 2;
{% if has_variable(all, "SERVER_TYPE", "stream") +%}
lua_shared_dict datastore_stream {{ DATASTORE_MEMORY_SIZE }};
lua_shared_dict cachestore_stream {{ CACHESTORE_MEMORY_SIZE }};
lua_shared_dict cachestore_ipc_stream {{ CACHESTORE_IPC_MEMORY_SIZE }};
lua_shared_dict cachestore_miss_stream {{ CACHESTORE_MISS_MEMORY_SIZE }};
lua_shared_dict cachestore_locks_stream {{ CACHESTORE_LOCKS_MEMORY_SIZE }};
# LUA init block
include /etc/nginx/init-stream-lua.conf;
# LUA init worker block
include /etc/nginx/init-worker-stream-lua.conf;
# TODO add default stream server if that makes any sense ?
# server config(s)
@@ -58,9 +65,19 @@ include /etc/nginx/init-stream-lua.conf;
{% endfor %}
{% for first_server in map_servers +%}
include /etc/nginx/{{ first_server }}/server-stream.conf;
{% if all[first_server + "_USE_REVERSE_PROXY"] == "yes" and all[first_server + "_REVERSE_PROXY_HOST"] != "" +%}
upstream {{ first_server }} {
server {{ all[first_server + "_REVERSE_PROXY_HOST"] }};
}
{% endif %}
{% endfor %}
{% elif MULTISITE == "no" and SERVER_NAME != "" and SERVER_TYPE == "stream" +%}
include /etc/nginx/server-stream.conf;
{% if USE_REVERSE_PROXY == "yes" and REVERSE_PROXY_HOST != "" +%}
upstream {{ SERVER_NAME.split(" ")[0] }} {
server {{ REVERSE_PROXY_HOST }};
}
{% endif %}
{% endif %}
{% endif %}

View File

@@ -8,7 +8,10 @@ local base64 = require "base64"
local sha256 = require "resty.sha256"
local str = require "resty.string"
local http = require "resty.http"
local template = require "resty.template"
local template = nil
if ngx.shared.datastore then
template = require "resty.template"
end
local antibot = class("antibot", plugin)

View File

@@ -45,6 +45,10 @@ function badbehavior:log_default()
return self:log()
end
function badbehavior:log_stream()
return self:log()
end
function badbehavior.increase(premature, ip, count_time, ban_time, threshold, use_redis)
-- Instantiate objects
local logger = require "bunkerweb.logger":new("badbehavior")

View File

@@ -220,27 +220,19 @@ function blacklist:is_blacklisted_ip()
end
if check_rdns then
-- Get rDNS
local rdns_list, err = utils.get_rdns(ngx.ctx.bw.remote_addr)
if not rdns_list then
return false, err
end
-- Check if rDNS is in ignore list
local ignore = false
for i, ignore_suffix in ipairs(self.lists["IGNORE_RDNS"]) do
for j, rdns in ipairs(rdns_list) do
local rdns, err = utils.get_rdns(ngx.ctx.bw.remote_addr)
if rdns then
-- Check if rDNS is in ignore list
local ignore = false
for i, ignore_suffix in ipairs(self.lists["IGNORE_RDNS"]) do
if rdns:sub(-#ignore_suffix) == ignore_suffix then
ignore = true
break
end
end
if ignore then
break
end
end
-- Check if rDNS is in blacklist
if not ignore then
for i, suffix in ipairs(self.lists["RDNS"]) do
for j, rdns in ipairs(rdns_list) do
-- Check if rDNS is in blacklist
if not ignore then
for i, suffix in ipairs(self.lists["RDNS"]) do
if rdns:sub(-#suffix) == suffix then
return true, "rDNS " .. suffix
end
@@ -253,7 +245,6 @@ function blacklist:is_blacklisted_ip()
if ngx.ctx.bw.ip_is_global then
local asn, err = utils.get_asn(ngx.ctx.bw.remote_addr)
if not asn then
self.logger:log(ngx.ERR, "7")
return nil, err
end
local ignore = false
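The rDNS checks in this diff reduce to suffix matching against two lists, with the ignore list taking priority over the blacklist. A minimal Python equivalent of that logic (function names are illustrative):

```python
def rdns_suffix_match(rdns, suffixes):
    """Return the first matching suffix, mirroring rdns:sub(-#suffix)."""
    for suffix in suffixes:
        if rdns.endswith(suffix):
            return suffix
    return None

def check_rdns(rdns, ignore_suffixes, blacklist_suffixes):
    """The ignore list wins over the blacklist, as in the Lua logic above."""
    if rdns_suffix_match(rdns, ignore_suffixes):
        return False, "ignored"
    suffix = rdns_suffix_match(rdns, blacklist_suffixes)
    if suffix:
        return True, "rDNS " + suffix
    return False, "clean"
```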

View File

@@ -11,7 +11,7 @@ function bunkernet:initialize()
-- Call parent initialize
plugin.initialize(self, "bunkernet")
-- Get BunkerNet ID
if ngx.get_phase() ~= "init" and self.variables["USE_BUNKERNET"] == "yes" then
if ngx.get_phase() ~= "init" and self.variables["USE_BUNKERNET"] == "yes" and not self.is_loading then
local id, err = self.datastore:get("plugin_bunkernet_id")
if id then
self.bunkernet_id = id
@@ -23,6 +23,9 @@ end
function bunkernet:init()
-- Check if init is needed
if self.is_loading then
return self:ret(true, "bunkerweb is loading")
end
local init_needed, err = utils.has_variable("USE_BUNKERNET", "yes")
if init_needed == nil then
return self:ret(false, "can't check USE_BUNKERNET variable : " .. err)
@@ -73,6 +76,10 @@ function bunkernet:init()
end
function bunkernet:log(bypass_use_bunkernet)
-- Check if not loading is needed
if self.is_loading then
return self:ret(true, "bunkerweb is loading")
end
if not bypass_use_bunkernet then
-- Check if BunkerNet is enabled
if self.variables["USE_BUNKERNET"] ~= "yes" then
@@ -118,6 +125,10 @@ function bunkernet:log(bypass_use_bunkernet)
end
function bunkernet:log_default()
-- Check if not loading is needed
if self.is_loading then
return self:ret(true, "bunkerweb is loading")
end
-- Check if BunkerNet is activated
local check, err = utils.has_variable("USE_BUNKERNET", "yes")
if check == nil then
@@ -138,6 +149,10 @@ function bunkernet:log_default()
return self:log(true)
end
function bunkernet:log_stream()
return self:log()
end
function bunkernet:request(method, url, data)
local httpc, err = http.new()
if not httpc then

View File

@@ -24,8 +24,9 @@ function country:access()
return self:ret(true, "country not activated")
end
-- Check if IP is in cache
local data, err = self:is_in_cache(ngx.ctx.bw.remote_addr)
local ok, data = self:is_in_cache(ngx.ctx.bw.remote_addr)
if data then
data = cjson.decode(data)
if data.result == "ok" then
return self:ret(true, "client IP " .. ngx.ctx.bw.remote_addr .. " is in country cache (not blacklisted, country = " .. data.country .. ")")
end
@@ -95,7 +96,7 @@ function country:is_in_cache(ip)
if not ok then
return false, data
end
return true, cjson.decode(data)
return true, data
end
function country:add_to_cache(ip, country, result)

View File

@@ -11,7 +11,7 @@ ssl_protocols {{ SSL_PROTOCOLS }};
ssl_prefer_server_ciphers on;
ssl_session_tickets off;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m;
ssl_session_cache shared:MozSSLStream:10m;
{% if "TLSv1.2" in SSL_PROTOCOLS +%}
ssl_dhparam /etc/nginx/dhparam;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

View File

@@ -2,7 +2,10 @@ local class = require "middleclass"
local plugin = require "bunkerweb.plugin"
local utils = require "bunkerweb.utils"
local cjson = require "cjson"
local template = require "resty.template"
local template = nil
if ngx.shared.datastore then
template = require "resty.template"
end
local errors = class("errors", plugin)

View File

@@ -180,13 +180,10 @@ function greylist:is_greylisted_ip()
end
if check_rdns then
-- Get rDNS
local rdns_list, err = utils.get_rdns(ngx.ctx.bw.remote_addr)
if not rdns_list then
return nil, err
end
local rdns, err = utils.get_rdns(ngx.ctx.bw.remote_addr)
-- Check if rDNS is in greylist
for i, suffix in ipairs(self.lists["RDNS"]) do
for j, rdns in ipairs(rdns_list) do
if rdns then
for i, suffix in ipairs(self.lists["RDNS"]) do
if rdns:sub(-#suffix) == suffix then
return true, "rDNS " .. suffix
end

View File

@@ -10,7 +10,7 @@ ssl_protocols {{ SSL_PROTOCOLS }};
ssl_prefer_server_ciphers on;
ssl_session_tickets off;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m;
ssl_session_cache shared:MozSSLStream:10m;
{% if "TLSv1.2" in SSL_PROTOCOLS +%}
ssl_dhparam /etc/nginx/dhparam;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

View File

@@ -60,6 +60,15 @@
"label": "Maximum number of HTTP/2 streams",
"regex": "^\\d+$",
"type": "text"
},
"LIMIT_CONN_MAX_STREAM": {
"context": "multisite",
"default": "10",
"help": "Maximum number of connections per IP when using stream.",
"id": "limit-conn-max-stream",
"label": "Maximum number of stream connections",
"regex": "^\\d+$",
"type": "text"
}
}
}

View File

@@ -1,11 +1,11 @@
{% if USE_REVERSE_PROXY == "yes" +%}
{% if USE_REVERSE_PROXY == "yes" and REVERSE_PROXY_HOST != "" +%}
# TODO : more settings specific to stream
{% if REVERSE_PROXY_STREAM_PROXY_PROTOCOL == "yes" +%}
proxy_protocol on;
{% endif +%}
set $backend "{{ host }}";
set $backend "{{ SERVER_NAME.split(" ")[0] }}";
proxy_pass $backend;
{% endif %}

View File

@@ -29,7 +29,7 @@
"help": "Full URL of the proxied resource (proxy_pass).",
"id": "reverse-proxy-host",
"label": "Reverse proxy host",
"regex": "^(https?:\\/\\/[-\\w@:%.+~#=]+[-\\w()!@:%+.~#?&\\/=$]*)?$",
"regex": "^.*$",
"type": "text",
"multiple": "reverse-proxy"
},
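Relaxing `reverse-proxy-host` from a URL pattern to `^.*$` makes sense once stream mode is in the picture: stream backends are plain `host:port` pairs, which the old HTTP-only regex rejected. A quick illustration of both patterns (values are made-up examples):

```python
import re

OLD_REGEX = r"^(https?:\/\/[-\w@:%.+~#=]+[-\w()!@:%+.~#?&\/=$]*)?$"
NEW_REGEX = r"^.*$"

# The old pattern only accepted empty values or http(s) URLs ...
assert re.match(OLD_REGEX, "https://app1.example.com")
# ... so a stream target such as host:port failed validation.
assert not re.match(OLD_REGEX, "app1.example.com:4242")
assert re.match(NEW_REGEX, "app1.example.com:4242")
```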

View File

@@ -49,6 +49,10 @@ function reversescan:access()
return self:ret(true, "no port open for IP " .. ngx.ctx.bw.remote_addr)
end
function reversescan:preread()
return self:access()
end
function reversescan:scan(ip, port, timeout)
local tcpsock = ngx.socket.tcp()
tcpsock:settimeout(timeout)

View File

@@ -10,7 +10,7 @@ ssl_protocols {{ SSL_PROTOCOLS }};
ssl_prefer_server_ciphers on;
ssl_session_tickets off;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m;
ssl_session_cache shared:MozSSLStream:10m;
{% if "TLSv1.2" in SSL_PROTOCOLS +%}
ssl_dhparam /etc/nginx/dhparam;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

View File

@@ -11,7 +11,7 @@ function sessions:initialize()
end
function sessions:init()
if self.is_loading then
if self.is_loading or self.kind ~= "http" then
return self:ret(true, "init not needed")
end
-- Get redis vars
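The extra `self.kind ~= "http"` guard means session storage is only initialized for the HTTP side, never for stream contexts. The resulting control flow, sketched in Python with hypothetical names:

```python
def session_init_needed(is_loading, kind):
    # Skip init while the configuration is still loading, and for
    # non-HTTP (stream) contexts where cookie-based sessions don't apply.
    return not is_loading and kind == "http"
```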

View File

@@ -165,11 +165,10 @@ function whitelist:check_cache()
if ngx.ctx.bw.uri then
checks["URI"] = "uri" .. ngx.ctx.bw.uri
end
local already_cached = {
["IP"] = false,
["URI"] = false,
["UA"] = false
}
local already_cached = {}
for k, _ in pairs(checks) do
already_cached[k] = false
end
for k, v in pairs(checks) do
local ok, cached = self:is_in_cache(v)
if not ok then
@@ -237,13 +236,10 @@ function whitelist:is_whitelisted_ip()
end
if check_rdns then
-- Get rDNS
local rdns_list, err = utils.get_rdns(ngx.ctx.bw.remote_addr)
if not rdns_list then
return nil, err
end
local rdns, err = utils.get_rdns(ngx.ctx.bw.remote_addr)
-- Check if rDNS is in whitelist
for i, suffix in ipairs(self.lists["RDNS"]) do
for j, rdns in ipairs(rdns_list) do
if rdns then
for i, suffix in ipairs(self.lists["RDNS"]) do
if rdns:sub(-#suffix) == suffix then
return true, "rDNS " .. suffix
end

View File

@@ -186,7 +186,11 @@ if __name__ == "__main__":
retries += 1
sleep(5)
proc = run(["nginx", "-s", "reload"], stdin=DEVNULL, stderr=STDOUT)
proc = run(
["sudo", "/usr/sbin/nginx", "-s", "reload"],
stdin=DEVNULL,
stderr=STDOUT,
)
if proc.returncode != 0:
status = 1
logger.error("Error while reloading nginx")

View File

@@ -3,3 +3,4 @@ kubernetes==26.1.0
jinja2==3.1.2
python-dotenv==1.0.0
requests==2.28.2
redis==4.5.4

View File

@@ -4,6 +4,10 @@
#
# pip-compile --allow-unsafe --generate-hashes --resolver=backtracking
#
async-timeout==4.0.2 \
--hash=sha256:2163e1640ddb52b7a8c80d0a67a08587e5d245cc9c553a74a847056bc2976b15 \
--hash=sha256:8ca1e4fcf50d07413d66d1a5e416e42cfdf5851c981d679a09851a6853383b3c
# via redis
cachetools==5.3.0 \
--hash=sha256:13dfddc7b8df938c21a940dfa6557ce6e94a2f1cdfa58eb90c805721d58f2c14 \
--hash=sha256:429e1a1e845c008ea6c85aa35d4b98b65d6a9763eeef3e37e92728a12d1de9d4
@@ -231,6 +235,10 @@ pyyaml==6.0 \
--hash=sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174 \
--hash=sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5
# via kubernetes
redis==4.5.4 \
--hash=sha256:2c19e6767c474f2e85167909061d525ed65bea9301c0770bb151e041b7ac89a2 \
--hash=sha256:73ec35da4da267d6847e47f68730fdd5f62e2ca69e3ef5885c6a78a9374c3893
# via -r requirements.in
requests==2.28.2 \
--hash=sha256:64299f4909223da747622c030b781c0d7811e359c37124b4bd368fb8c6518baa \
--hash=sha256:98b1b2782e3c6c4904938b84c0eb932721069dfdb9134313beff7c83c2df24bf
@@ -269,7 +277,7 @@ websocket-client==1.5.1 \
# kubernetes
# The following packages are considered to be unsafe in a requirements file:
setuptools==67.7.1 \
--hash=sha256:6f0839fbdb7e3cfef1fc38d7954f5c1c26bf4eebb155a55c9bf8faf997b9fb67 \
--hash=sha256:bb16732e8eb928922eabaa022f881ae2b7cdcfaf9993ef1f5e841a96d32b8e0c
setuptools==67.7.2 \
--hash=sha256:23aaf86b85ca52ceb801d32703f12d77517b2556af839621c641fca11287952b \
--hash=sha256:f104fa03692a2602fa0fec6c6a9e63b6c8a968de13e17c026957dd1f53d80990
# via kubernetes

View File

@@ -393,7 +393,7 @@ if __name__ == "__main__":
logger.warning(err)
else:
err = db.add_instance(
"localhost",
"127.0.0.1",
config_files.get("API_HTTP_PORT", 5000),
config_files.get("API_SERVER_NAME", "bwapi"),
)

View File

@@ -1,8 +1,8 @@
from io import BytesIO
from os import environ, getenv
from os import getenv
from sys import path as sys_path
from tarfile import open as taropen
from typing import Optional
from typing import Any, Dict, List, Literal, Optional, Tuple, Union
if "/usr/share/bunkerweb/utils" not in sys_path:
sys_path.append("/usr/share/bunkerweb/utils")
@@ -18,9 +18,9 @@ from docker import DockerClient
class ApiCaller:
def __init__(self, apis=[]):
self.__apis = apis
self.__logger = setup_logger("Api", environ.get("LOG_LEVEL", "INFO"))
def __init__(self, apis: List[API] = None):
self.__apis = apis or []
self.__logger = setup_logger("Api", getenv("LOG_LEVEL", "INFO"))
def auto_setup(self, bw_integration: Optional[str] = None):
if bw_integration is None:
@@ -101,14 +101,22 @@
)
)
def _set_apis(self, apis):
def _set_apis(self, apis: List[API]):
self.__apis = apis
def _get_apis(self):
return self.__apis
def _send_to_apis(self, method, url, files=None, data=None, response=False):
def _send_to_apis(
self,
method: Union[Literal["POST"], Literal["GET"]],
url: str,
files: Optional[Dict[str, BytesIO]] = None,
data: Optional[Dict[str, Any]] = None,
response: bool = False,
) -> Tuple[bool, Tuple[bool, Optional[Dict[str, Any]]]]:
ret = True
responses = {}
for api in self.__apis:
if files is not None:
for buffer in files.values():
@@ -130,16 +138,23 @@ class ApiCaller:
f"Successfully sent API request to {api.get_endpoint()}{url}",
)
if response:
instance = api.get_endpoint().replace("http://", "").split(":")[0]
if isinstance(resp, dict):
responses[instance] = resp
else:
responses[instance] = resp.json()
if response:
if isinstance(resp, dict):
return ret, resp
return ret, resp.json()
return ret, responses
return ret
def _send_files(self, path, url):
def _send_files(self, path: str, url: str) -> bool:
ret = True
with BytesIO() as tgz:
with taropen(mode="w:gz", fileobj=tgz, dereference=True, compresslevel=5) as tf:
with taropen(
mode="w:gz", fileobj=tgz, dereference=True, compresslevel=5
) as tf:
tf.add(path, arcname=".")
tgz.seek(0, 0)
files = {"archive.tar.gz": tgz}
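With `response=True`, `_send_to_apis` now aggregates each instance's JSON reply into a dict keyed by the instance address. The key derivation shown above (`api.get_endpoint().replace("http://", "").split(":")[0]`) can be sketched as (hypothetical helpers, example endpoints):

```python
def instance_key(endpoint):
    """Strip the scheme and port from an API endpoint to get the
    instance identifier, e.g. "http://10.20.1.2:5000" -> "10.20.1.2"."""
    return endpoint.replace("http://", "").split(":")[0]

def aggregate(responses_by_endpoint):
    # Map each endpoint's parsed JSON reply to its instance key,
    # as _send_to_apis does internally when response=True.
    return {instance_key(ep): resp for ep, resp in responses_by_endpoint.items()}
```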

View File

@@ -1,4 +1,3 @@
from datetime import datetime
from logging import (
CRITICAL,
DEBUG,
@@ -31,10 +30,6 @@ class BWLogger(Logger):
stack_info=False,
stacklevel=1,
):
if self.name == "UI":
with open("/var/log/nginx/ui.log", "a") as f:
f.write(f"[{datetime.now().replace(microsecond=0)}] {msg}\n")
return super(BWLogger, self)._log(
level, msg, args, exc_info, extra, stack_info, stacklevel
)

View File

@@ -5,7 +5,7 @@ After=bunkerweb.service
[Service]
Restart=no
User=root
User=nginx
PIDFile=/var/tmp/bunkerweb/ui.pid
ExecStart=/usr/share/bunkerweb/scripts/bunkerweb-ui.sh start
ExecStop=/usr/share/bunkerweb/scripts/bunkerweb-ui.sh stop

View File

@@ -1,13 +1,13 @@
#!/bin/bash
# Set the PYTHONPATH
export PYTHONPATH=/usr/share/bunkerweb/deps/python
export PYTHONPATH=/usr/share/bunkerweb/deps/python:/usr/share/bunkerweb/ui
# Create the ui.env file if it doesn't exist
if [ ! -f /etc/bunkerweb/ui.env ]; then
echo "ADMIN_USERNAME=admin" > /etc/bunkerweb/ui.env
echo "ADMIN_PASSWORD=changeme" >> /etc/bunkerweb/ui.env
echo "ABSOLUTE_URI=http://mydomain.ext/mypath/" >> /etc/bunkerweb/ui.env
echo "ABSOLUTE_URI=http://bwadm.example.com/changeme/" >> /etc/bunkerweb/ui.env
fi
# Function to start the UI
@@ -18,7 +18,7 @@ start() {
fi
source /etc/bunkerweb/ui.env
export $(cat /etc/bunkerweb/ui.env)
python3 -m gunicorn --graceful-timeout=0 --bind=127.0.0.1:7000 --chdir /usr/share/bunkerweb/ui/ --workers=1 --threads=2 main:app &
python3 -m gunicorn main:app --worker-class gevent --bind 127.0.0.1:7000 --graceful-timeout 0 --access-logfile - --error-logfile - &
echo $! > /var/tmp/bunkerweb/ui.pid
}
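The bootstrap logic above (default credentials written once, then the UI started under gunicorn) can be mimicked as follows — a sketch only, using a caller-supplied path instead of /etc/bunkerweb/ui.env:

```python
from pathlib import Path

UI_DEFAULTS = (
    "ADMIN_USERNAME=admin\n"
    "ADMIN_PASSWORD=changeme\n"
    "ABSOLUTE_URI=http://bwadm.example.com/changeme/\n"
)

def ensure_ui_env(path):
    """Write default UI credentials only when the env file is missing,
    so an existing configuration is never overwritten."""
    env_file = Path(path)
    if not env_file.is_file():
        env_file.write_text(UI_DEFAULTS)
    # Parse KEY=VALUE lines into a dict, like `source` + `export` do
    return dict(
        line.split("=", 1) for line in env_file.read_text().splitlines() if line
    )
```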

View File

@@ -80,9 +80,12 @@ function start() {
log "SYSTEMCTL" "" "Starting BunkerWeb service ..."
echo "nginx ALL=(ALL) NOPASSWD: /usr/sbin/nginx" > /etc/sudoers.d/bunkerweb
chown -R nginx:nginx /etc/nginx
# Create dummy variables.env
if [ ! -f /etc/bunkerweb/variables.env ]; then
echo -ne "# remove IS_LOADING=yes when your config is ready\nIS_LOADING=yes\nHTTP_PORT=80\nHTTPS_PORT=443\nAPI_LISTEN_IP=127.0.0.1\nSERVER_NAME=\n" > /etc/bunkerweb/variables.env
sudo -E -u nginx -g nginx /bin/bash -c "echo -ne '\# remove IS_LOADING=yes when your config is ready\nIS_LOADING=yes\nHTTP_PORT=80\nHTTPS_PORT=443\nAPI_LISTEN_IP=127.0.0.1\nSERVER_NAME=\n' > /etc/bunkerweb/variables.env"
log "SYSTEMCTL" "" "Created dummy variables.env file"
fi
@@ -101,8 +104,8 @@ function start() {
if [ "$HTTPS_PORT" = "" ] ; then
HTTPS_PORT="8443"
fi
echo -ne "IS_LOADING=yes\nHTTP_PORT=${HTTP_PORT}\nHTTPS_PORT=${HTTPS_PORT}\nAPI_LISTEN_IP=127.0.0.1\nSERVER_NAME=\n" > /var/tmp/bunkerweb/tmp.env
/usr/share/bunkerweb/gen/main.py --variables /var/tmp/bunkerweb/tmp.env --no-linux-reload
sudo -E -u nginx -g nginx /bin/bash -c "echo -ne 'IS_LOADING=yes\nHTTP_PORT=${HTTP_PORT}\nHTTPS_PORT=${HTTPS_PORT}\nAPI_LISTEN_IP=127.0.0.1\nSERVER_NAME=\n' > /var/tmp/bunkerweb/tmp.env"
sudo -E -u nginx -g nginx /bin/bash -c "/usr/share/bunkerweb/gen/main.py --variables /var/tmp/bunkerweb/tmp.env --no-linux-reload"
if [ $? -ne 0 ] ; then
log "SYSTEMCTL" "❌" "Error while generating config from /var/tmp/bunkerweb/tmp.env"
exit 1
@@ -134,9 +137,9 @@ function start() {
# Update database
log "SYSTEMCTL" "" "Updating database ..."
if [ ! -f /var/lib/bunkerweb/db.sqlite3 ]; then
/usr/share/bunkerweb/gen/save_config.py --variables /etc/bunkerweb/variables.env --init
else
/usr/share/bunkerweb/gen/save_config.py --variables /etc/bunkerweb/variables.env
sudo -E -u nginx -g nginx /bin/bash -c "/usr/share/bunkerweb/gen/save_config.py --variables /etc/bunkerweb/variables.env --init"
else
sudo -E -u nginx -g nginx /bin/bash -c "/usr/share/bunkerweb/gen/save_config.py --variables /etc/bunkerweb/variables.env"
fi
if [ $? -ne 0 ] ; then
log "SYSTEMCTL" "❌" "save_config failed"
@@ -146,7 +149,7 @@ function start() {
# Execute scheduler
log "SYSTEMCTL" " " "Executing scheduler ..."
/usr/share/bunkerweb/scheduler/main.py --variables /etc/bunkerweb/variables.env
sudo -E -u nginx -g nginx /bin/bash -c "/usr/share/bunkerweb/scheduler/main.py --variables /etc/bunkerweb/variables.env"
if [ "$?" -ne 0 ] ; then
log "SYSTEMCTL" "❌" "Scheduler failed"
exit 1

View File

@@ -21,6 +21,7 @@ RUN apk add --no-cache --virtual .build-deps g++ gcc musl-dev jpeg-dev zlib-dev
# Copy files
# can't exclude specific files/dir from . so we are copying everything by hand
COPY src/common/api /usr/share/bunkerweb/api
COPY src/common/cli /usr/share/bunkerweb/cli
COPY src/common/confs /usr/share/bunkerweb/confs
COPY src/common/db /usr/share/bunkerweb/db
COPY src/common/core /usr/share/bunkerweb/core
@@ -31,11 +32,12 @@ COPY src/common/utils /usr/share/bunkerweb/utils
COPY src/scheduler /usr/share/bunkerweb/scheduler
COPY src/VERSION /usr/share/bunkerweb/VERSION
# Add scheduler user, install runtime dependencies, create data folders and set permissions
# Add scheduler user, drop bwcli, install runtime dependencies, create data folders and set permissions
RUN apk add --no-cache bash libgcc libstdc++ openssl && \
ln -s /usr/local/bin/python3 /usr/bin/python3 && \
addgroup -g 101 scheduler && \
adduser -h /var/cache/nginx -g scheduler -s /bin/sh -G scheduler -D -H -u 101 scheduler && \
cp /usr/share/bunkerweb/helpers/bwcli /usr/bin/ && \
echo "Docker" > /usr/share/bunkerweb/INTEGRATION && \
mkdir -p /var/tmp/bunkerweb && \
mkdir -p /var/www && \
@@ -48,12 +50,12 @@ RUN apk add --no-cache bash libgcc libstdc++ openssl && \
for dir in $(echo "configs/http configs/stream configs/server-http configs/server-stream configs/default-server-http configs/default-server-stream configs/modsec configs/modsec-crs") ; do mkdir "/data/${dir}" ; done && \
chown -R root:scheduler /data && \
chmod -R 770 /data && \
chown -R root:scheduler /usr/share/bunkerweb /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb && \
chown -R root:scheduler /usr/share/bunkerweb /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb /usr/bin/bwcli && \
find /usr/share/bunkerweb -type f -exec chmod 0740 {} \; && \
find /usr/share/bunkerweb -type d -exec chmod 0750 {} \; && \
chmod -R 770 /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb && \
find /usr/share/bunkerweb/core/*/jobs/* -type f -exec chmod 750 {} \; && \
chmod 750 /usr/share/bunkerweb/gen/*.py /usr/share/bunkerweb/scheduler/main.py /usr/share/bunkerweb/scheduler/entrypoint.sh /usr/share/bunkerweb/helpers/*.sh /usr/share/bunkerweb/deps/python/bin/* && \
chmod 750 /usr/share/bunkerweb/cli/main.py /usr/share/bunkerweb/gen/*.py /usr/share/bunkerweb/scheduler/main.py /usr/share/bunkerweb/scheduler/entrypoint.sh /usr/share/bunkerweb/helpers/*.sh /usr/share/bunkerweb/deps/python/bin/* /usr/bin/bwcli && \
mkdir -p /etc/nginx && \
chown -R scheduler:scheduler /etc/nginx && \
chmod -R 770 /etc/nginx && \

View File

@@ -79,7 +79,10 @@ class JobScheduler(ApiCaller):
if self.__integration not in ("Autoconf", "Swarm", "Kubernetes", "Docker"):
self.__logger.info("Reloading nginx ...")
proc = run(
["nginx", "-s", "reload"], stdin=DEVNULL, stderr=PIPE, env=self.__env
["sudo", "/usr/sbin/nginx", "-s", "reload"],
stdin=DEVNULL,
stderr=PIPE,
env=self.__env,
)
reload = proc.returncode == 0
if reload:
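Because the scheduler now runs as the unprivileged nginx user on Linux (see the systemd and sudoers changes elsewhere in this set), reloading nginx has to go through sudo. The command selection can be sketched as (hypothetical helper, assuming the same integration names as above):

```python
def nginx_reload_cmd(integration):
    # Containerized integrations are reloaded through the BunkerWeb API,
    # so no local command is needed there.
    if integration in ("Autoconf", "Swarm", "Kubernetes", "Docker"):
        return None
    # On Linux the scheduler runs as nginx and relies on the sudoers rule
    # "nginx ALL=(ALL) NOPASSWD: /usr/sbin/nginx".
    return ["sudo", "/usr/sbin/nginx", "-s", "reload"]
```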

View File

@@ -315,9 +315,7 @@ if __name__ == "__main__":
"Looks like BunkerWeb configuration is already generated, will not generate it again ..."
)
if Path("/var/lib/bunkerweb/db.sqlite3").exists():
chmod("/var/lib/bunkerweb/db.sqlite3", 0o760)
first_run = True
while True:
# Instantiate scheduler
scheduler = JobScheduler(
@@ -346,7 +344,11 @@ if __name__ == "__main__":
"--output",
"/etc/nginx",
]
+ (["--variables", args.variables] if args.variables else []),
+ (
["--variables", args.variables]
if args.variables and first_run
else []
),
stdin=DEVNULL,
stderr=STDOUT,
)
@@ -381,7 +383,7 @@ if __name__ == "__main__":
# Stop temp nginx
logger.info("Stopping temp nginx ...")
proc = subprocess_run(
["/usr/sbin/nginx", "-s", "stop"],
["sudo", "/usr/sbin/nginx", "-s", "stop"],
stdin=DEVNULL,
stderr=STDOUT,
env=deepcopy(env),
@@ -403,7 +405,7 @@ if __name__ == "__main__":
# Start nginx
logger.info("Starting nginx ...")
proc = subprocess_run(
["/usr/sbin/nginx"],
["sudo", "/usr/sbin/nginx"],
stdin=DEVNULL,
stderr=STDOUT,
env=deepcopy(env),
@@ -431,6 +433,7 @@ if __name__ == "__main__":
generate = True
scheduler.setup()
need_reload = False
first_run = False
# infinite schedule for the jobs
logger.info("Executing job scheduler ...")
@@ -439,83 +442,82 @@ if __name__ == "__main__":
scheduler.run_pending()
sleep(1)
if not args.variables:
# check if the custom configs have changed since last time
tmp_custom_configs = db.get_custom_configs()
if custom_configs != tmp_custom_configs:
logger.info("Custom configs changed, generating ...")
logger.debug(f"{tmp_custom_configs=}")
logger.debug(f"{custom_configs=}")
custom_configs = deepcopy(tmp_custom_configs)
# check if the custom configs have changed since last time
tmp_custom_configs = db.get_custom_configs()
if custom_configs != tmp_custom_configs:
logger.info("Custom configs changed, generating ...")
logger.debug(f"{tmp_custom_configs=}")
logger.debug(f"{custom_configs=}")
custom_configs = deepcopy(tmp_custom_configs)
# Remove old custom configs files
logger.info("Removing old custom configs files ...")
for file in glob("/data/configs/*"):
if Path(file).is_symlink() or Path(file).is_file():
Path(file).unlink()
elif Path(file).is_dir():
rmtree(file, ignore_errors=False)
# Remove old custom configs files
logger.info("Removing old custom configs files ...")
for file in glob("/data/configs/*"):
if Path(file).is_symlink() or Path(file).is_file():
Path(file).unlink()
elif Path(file).is_dir():
rmtree(file, ignore_errors=False)
logger.info("Generating new custom configs ...")
generate_custom_configs(custom_configs, integration, api_caller)
logger.info("Generating new custom configs ...")
generate_custom_configs(custom_configs, integration, api_caller)
# reload nginx
logger.info("Reloading nginx ...")
if integration not in (
"Autoconf",
"Swarm",
"Kubernetes",
"Docker",
):
# Reloading the nginx server.
proc = subprocess_run(
# Reload nginx
["/usr/sbin/nginx", "-s", "reload"],
stdin=DEVNULL,
stderr=STDOUT,
env=deepcopy(env),
)
if proc.returncode == 0:
logger.info("Successfully reloaded nginx")
else:
logger.error(
f"Error while reloading nginx - returncode: {proc.returncode} - error: {proc.stderr.decode('utf-8')}",
)
else:
need_reload = True
# check if the plugins have changed since last time
tmp_external_plugins = db.get_plugins(external=True)
if external_plugins != tmp_external_plugins:
logger.info("External plugins changed, generating ...")
logger.debug(f"{tmp_external_plugins=}")
logger.debug(f"{external_plugins=}")
external_plugins = deepcopy(tmp_external_plugins)
# Remove old external plugins files
logger.info("Removing old external plugins files ...")
for file in glob("/data/plugins/*"):
if Path(file).is_symlink() or Path(file).is_file():
Path(file).unlink()
elif Path(file).is_dir():
rmtree(file, ignore_errors=False)
logger.info("Generating new external plugins ...")
generate_external_plugins(
db.get_plugins(external=True, with_data=True),
integration,
api_caller,
# reload nginx
logger.info("Reloading nginx ...")
if integration not in (
"Autoconf",
"Swarm",
"Kubernetes",
"Docker",
):
# Reloading the nginx server.
proc = subprocess_run(
# Reload nginx
["sudo", "/usr/sbin/nginx", "-s", "reload"],
stdin=DEVNULL,
stderr=STDOUT,
env=deepcopy(env),
)
if proc.returncode == 0:
logger.info("Successfully reloaded nginx")
else:
logger.error(
f"Error while reloading nginx - returncode: {proc.returncode} - error: {proc.stderr.decode('utf-8')}",
)
else:
need_reload = True
# check if the config have changed since last time
tmp_env = db.get_config()
if env != tmp_env:
logger.info("Config changed, generating ...")
logger.debug(f"{tmp_env=}")
logger.debug(f"{env=}")
env = deepcopy(tmp_env)
need_reload = True
# check if the plugins have changed since last time
tmp_external_plugins = db.get_plugins(external=True)
if external_plugins != tmp_external_plugins:
logger.info("External plugins changed, generating ...")
logger.debug(f"{tmp_external_plugins=}")
logger.debug(f"{external_plugins=}")
external_plugins = deepcopy(tmp_external_plugins)
# Remove old external plugins files
logger.info("Removing old external plugins files ...")
for file in glob("/data/plugins/*"):
if Path(file).is_symlink() or Path(file).is_file():
Path(file).unlink()
elif Path(file).is_dir():
rmtree(file, ignore_errors=False)
logger.info("Generating new external plugins ...")
generate_external_plugins(
db.get_plugins(external=True, with_data=True),
integration,
api_caller,
)
need_reload = True
# check if the config have changed since last time
tmp_env = db.get_config()
if env != tmp_env:
logger.info("Config changed, generating ...")
logger.debug(f"{tmp_env=}")
logger.debug(f"{env=}")
env = deepcopy(tmp_env)
need_reload = True
except:
logger.error(
f"Exception while executing scheduler : {format_exc()}",

View File

@@ -254,9 +254,9 @@ urllib3==1.26.15 \
# via requests
# The following packages are considered to be unsafe in a requirements file:
setuptools==67.7.1 \
--hash=sha256:6f0839fbdb7e3cfef1fc38d7954f5c1c26bf4eebb155a55c9bf8faf997b9fb67 \
--hash=sha256:bb16732e8eb928922eabaa022f881ae2b7cdcfaf9993ef1f5e841a96d32b8e0c
setuptools==67.7.2 \
--hash=sha256:23aaf86b85ca52ceb801d32703f12d77517b2556af839621c641fca11287952b \
--hash=sha256:f104fa03692a2602fa0fec6c6a9e63b6c8a968de13e17c026957dd1f53d80990
# via
# acme
# certbot

View File

@@ -52,17 +52,16 @@ from json import JSONDecodeError, dumps, load as json_load
from jinja2 import Template
from kubernetes import client as kube_client
from kubernetes.client.exceptions import ApiException as kube_ApiException
from os import _exit, chmod, getenv, getpid, listdir, walk
from os.path import join
from os import _exit, getenv, getpid, listdir
from re import match as re_match
from requests import get
from shutil import move, rmtree, copytree, chown
from shutil import move, rmtree
from signal import SIGINT, signal, SIGTERM
from subprocess import PIPE, Popen, call
from tarfile import CompressionError, HeaderError, ReadError, TarError, open as tar_open
from threading import Thread
from tempfile import NamedTemporaryFile
from time import time
from time import sleep, time
from traceback import format_exc
from typing import Optional
from zipfile import BadZipFile, ZipFile
@@ -81,10 +80,6 @@ from utils import (
from logger import setup_logger
from Database import Database
if not Path("/var/log/nginx/ui.log").exists():
Path("/var/log/nginx").mkdir(parents=True, exist_ok=True)
Path("/var/log/nginx/ui.log").touch()
logger = setup_logger("UI", getenv("LOG_LEVEL", "INFO"))
@@ -114,8 +109,8 @@ def handle_stop(signum, frame):
signal(SIGINT, handle_stop)
signal(SIGTERM, handle_stop)
Path("/var/tmp/bunkerweb/ui.pid").write_text(str(getpid()))
if not Path("/var/tmp/bunkerweb/ui.pid").is_file():
Path("/var/tmp/bunkerweb/ui.pid").write_text(str(getpid()))
# Flask app
app = Flask(
@@ -188,6 +183,24 @@ elif integration == "Kubernetes":
kubernetes_client = kube_client.CoreV1Api()
db = Database(logger)
while not db.is_initialized():
logger.warning(
"Database is not initialized, retrying in 5s ...",
)
sleep(5)
env = db.get_config()
while not db.is_first_config_saved() or not env:
logger.warning(
"Database doesn't have any config saved yet, retrying in 5s ...",
)
sleep(5)
env = db.get_config()
logger.info("Database is ready")
Path("/var/tmp/bunkerweb/ui.healthy").write_text("ok")
with open("/usr/share/bunkerweb/VERSION", "r") as f:
bw_version = f.read().strip()
@@ -197,7 +210,7 @@ try:
SECRET_KEY=vars["FLASK_SECRET"],
ABSOLUTE_URI=vars["ABSOLUTE_URI"],
INSTANCES=Instances(docker_client, kubernetes_client, integration),
CONFIG=Config(logger, db),
CONFIG=Config(db),
CONFIGFILES=ConfigFiles(logger, db),
SESSION_COOKIE_DOMAIN=vars["ABSOLUTE_URI"]
.replace("http://", "")
@@ -250,8 +263,6 @@ def manage_bunkerweb(method: str, operation: str = "reloads", *args):
operation = app.config["INSTANCES"].stop_instance(args[0])
elif operation == "restart":
operation = app.config["INSTANCES"].restart_instance(args[0])
elif Path("/usr/sbin/nginx").is_file():
operation = app.config["INSTANCES"].reload_instances()
else:
operation = "The scheduler will be in charge of reloading the instances."
@@ -452,7 +463,7 @@ def services():
del variables["OLD_SERVER_NAME"]
# Edit check fields and remove already existing ones
config = app.config["CONFIG"].get_config(methods=True)
config = app.config["CONFIG"].get_config(methods=False)
for variable, value in deepcopy(variables).items():
if variable.endswith("SCHEMA"):
del variables[variable]
@@ -463,19 +474,15 @@ def services():
elif value == "off":
value = "no"
config_setting = config.get(
f"{variables['SERVER_NAME'].split(' ')[0]}_{variable}", None
)
if variable in variables and (
request.form["operation"] == "edit"
and variable != "SERVER_NAME"
and config_setting is not None
and value == config_setting["value"]
variable != "SERVER_NAME"
and value == config.get(variable, None)
or not value.strip()
):
del variables[variable]
print(variables, flush=True)
if len(variables) <= 1:
flash(
f"{variables['SERVER_NAME'].split(' ')[0]} was not edited because no values were changed."
@@ -633,6 +640,8 @@ def configs():
operation = app.config["CONFIGFILES"].check_path(variables["path"])
print(variables, flush=True)
if operation:
flash(operation, "error")
return redirect(url_for("loading", next=url_for("configs"))), 500
@@ -728,36 +737,18 @@ def plugins():
flash(f"Can't delete internal plugin {variables['name']}", "error")
return redirect(url_for("loading", next=url_for("plugins"))), 500
if not Path("/usr/sbin/nginx").is_file():
plugins = app.config["CONFIG"].get_plugins()
for plugin in deepcopy(plugins):
if plugin["external"] is False or plugin["id"] == variables["name"]:
del plugins[plugins.index(plugin)]
plugins = app.config["CONFIG"].get_plugins()
for plugin in deepcopy(plugins):
if plugin["external"] is False or plugin["id"] == variables["name"]:
del plugins[plugins.index(plugin)]
err = db.update_external_plugins(plugins)
if err:
flash(
f"Couldn't update external plugins to database: {err}",
"error",
)
else:
variables["path"] = f"/etc/bunkerweb/plugins/{variables['name']}"
operation = app.config["CONFIGFILES"].check_path(
variables["path"], "/etc/bunkerweb/plugins/"
err = db.update_external_plugins(plugins)
if err:
flash(
f"Couldn't update external plugins to database: {err}",
"error",
)
if operation:
flash(operation, "error")
return redirect(url_for("loading", next=url_for("plugins"))), 500
operation, error = app.config["CONFIGFILES"].delete_path(
variables["path"]
)
if error:
flash(operation, "error")
return redirect(url_for("loading", next=url_for("plugins")))
flash(f"Deleted plugin {variables['name']} successfully")
else:
if not Path("/var/tmp/bunkerweb/ui").exists() or not listdir(
"/var/tmp/bunkerweb/ui"
@@ -811,43 +802,32 @@ def plugins():
)
raise Exception
if not Path("/usr/sbin/nginx").is_file():
plugin_content = BytesIO()
with tar_open(
fileobj=plugin_content, mode="w:gz"
) as tar:
tar.add(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}",
arcname=temp_folder_name,
recursive=True,
)
plugin_content.seek(0)
value = plugin_content.getvalue()
new_plugins.append(
plugin_file
| {
"external": True,
"page": "ui"
in listdir(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}"
),
"method": "ui",
"data": value,
"checksum": sha256(value).hexdigest(),
}
)
new_plugins_ids.append(folder_name)
else:
if Path(
f"/etc/bunkerweb/plugins/{folder_name}"
).exists():
raise FileExistsError
copytree(
plugin_content = BytesIO()
with tar_open(
fileobj=plugin_content, mode="w:gz"
) as tar:
tar.add(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}",
f"/etc/bunkerweb/plugins/{folder_name}",
arcname=temp_folder_name,
recursive=True,
)
plugin_content.seek(0)
value = plugin_content.getvalue()
new_plugins.append(
plugin_file
| {
"external": True,
"page": "ui"
in listdir(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}"
),
"method": "ui",
"data": value,
"checksum": sha256(value).hexdigest(),
}
)
new_plugins_ids.append(folder_name)
except KeyError:
zip_file.extractall(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}"
@@ -895,54 +875,43 @@ def plugins():
)
raise Exception
if not Path("/usr/sbin/nginx").is_file():
for file_name in listdir(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}"
):
move(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}/{file_name}",
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{file_name}",
)
rmtree(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}"
for file_name in listdir(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}"
):
move(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}/{file_name}",
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{file_name}",
)
rmtree(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}"
)
plugin_content = BytesIO()
with tar_open(
fileobj=plugin_content, mode="w:gz"
) as tar:
tar.add(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}",
arcname=temp_folder_name,
recursive=True,
)
plugin_content.seek(0)
value = plugin_content.getvalue()
new_plugins.append(
plugin_file
| {
"external": True,
"page": "ui"
in listdir(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}"
),
"method": "ui",
"data": value,
"checksum": sha256(value).hexdigest(),
}
plugin_content = BytesIO()
with tar_open(
fileobj=plugin_content, mode="w:gz"
) as tar:
tar.add(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}",
arcname=temp_folder_name,
recursive=True,
)
new_plugins_ids.append(folder_name)
else:
if Path(
f"/etc/bunkerweb/plugins/{folder_name}"
).exists():
raise FileExistsError
plugin_content.seek(0)
value = plugin_content.getvalue()
copytree(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}",
f"/etc/bunkerweb/plugins/{folder_name}",
)
new_plugins.append(
plugin_file
| {
"external": True,
"page": "ui"
in listdir(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}"
),
"method": "ui",
"data": value,
"checksum": sha256(value).hexdigest(),
}
)
new_plugins_ids.append(folder_name)
except BadZipFile:
errors += 1
error = 1
@@ -985,43 +954,32 @@ def plugins():
)
raise Exception
if not Path("/usr/sbin/nginx").is_file():
plugin_content = BytesIO()
with tar_open(
fileobj=plugin_content, mode="w:gz"
) as tar:
tar.add(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}",
arcname=temp_folder_name,
recursive=True,
)
plugin_content.seek(0)
value = plugin_content.getvalue()
new_plugins.append(
plugin_file
| {
"external": True,
"page": "ui"
in listdir(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}"
),
"method": "ui",
"data": value,
"checksum": sha256(value).hexdigest(),
}
)
new_plugins_ids.append(folder_name)
else:
if Path(
f"/etc/bunkerweb/plugins/{folder_name}"
).exists():
raise FileExistsError
copytree(
plugin_content = BytesIO()
with tar_open(
fileobj=plugin_content, mode="w:gz"
) as tar:
tar.add(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}",
f"/etc/bunkerweb/plugins/{folder_name}",
arcname=temp_folder_name,
recursive=True,
)
plugin_content.seek(0)
value = plugin_content.getvalue()
new_plugins.append(
plugin_file
| {
"external": True,
"page": "ui"
in listdir(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}"
),
"method": "ui",
"data": value,
"checksum": sha256(value).hexdigest(),
}
)
new_plugins_ids.append(folder_name)
except KeyError:
tar_file.extractall(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}",
@@ -1069,54 +1027,43 @@ def plugins():
)
raise Exception
if not Path("/usr/sbin/nginx").is_file():
for file_name in listdir(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}"
):
move(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}/{file_name}",
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{file_name}",
)
rmtree(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}"
for file_name in listdir(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}"
):
move(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}/{file_name}",
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{file_name}",
)
rmtree(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}"
)
plugin_content = BytesIO()
with tar_open(
fileobj=plugin_content, mode="w:gz"
) as tar:
tar.add(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}",
arcname=temp_folder_name,
recursive=True,
)
plugin_content.seek(0)
value = plugin_content.getvalue()
new_plugins.append(
plugin_file
| {
"external": True,
"page": "ui"
in listdir(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}"
),
"method": "ui",
"data": value,
"checksum": sha256(value).hexdigest(),
}
plugin_content = BytesIO()
with tar_open(
fileobj=plugin_content, mode="w:gz"
) as tar:
tar.add(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}",
arcname=temp_folder_name,
recursive=True,
)
new_plugins_ids.append(folder_name)
else:
if Path(
f"/etc/bunkerweb/plugins/{folder_name}"
).exists():
raise FileExistsError
plugin_content.seek(0)
value = plugin_content.getvalue()
copytree(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}/{dirs[0]}",
f"/etc/bunkerweb/plugins/{folder_name}",
)
new_plugins.append(
plugin_file
| {
"external": True,
"page": "ui"
in listdir(
f"/var/tmp/bunkerweb/ui/{temp_folder_name}"
),
"method": "ui",
"data": value,
"checksum": sha256(value).hexdigest(),
}
)
new_plugins_ids.append(folder_name)
except ReadError:
errors += 1
error = 1
@@ -1185,12 +1132,6 @@ def plugins():
if errors >= files_count:
return redirect(url_for("loading", next=url_for("plugins")))
# Fix permissions for plugins folders
for root, dirs, files in walk("/etc/bunkerweb/plugins", topdown=False):
for name in files + dirs:
chown(join(root, name), "root", 101)
chmod(join(root, name), 0o770)
plugins = app.config["CONFIG"].get_plugins(external=True, with_data=True)
for plugin in deepcopy(plugins):
if plugin["id"] in new_plugins_ids:
@ -1232,26 +1173,10 @@ def plugins():
plugin_id = request.args.get("plugin_id")
template = None
if not Path("/usr/sbin/nginx").is_file():
page = db.get_plugin_template(plugin_id)
page = db.get_plugin_template(plugin_id)
if page is not None:
template = Template(page.decode("utf-8"))
else:
page_path = ""
if Path(f"/etc/bunkerweb/plugins/{plugin_id}/ui/template.html").exists():
page_path = f"/etc/bunkerweb/plugins/{plugin_id}/ui/template.html"
elif Path(
f"/usr/share/bunkerweb/core/{plugin_id}/ui/template.html"
).exists():
page_path = f"/usr/share/bunkerweb/core/{plugin_id}/ui/template.html"
else:
flash(f"Plugin {plugin_id} not found", "error")
if page_path:
with open(page_path, "r") as f:
template = Template(f.read())
if page is not None:
template = Template(page.decode("utf-8"))
if template is not None:
return template.render(
@ -1312,67 +1237,29 @@ def custom_plugin(plugin):
)
return redirect(url_for("loading", next=url_for("plugins", plugin_id=plugin)))
if not Path("/usr/sbin/nginx").is_file():
module = db.get_plugin_actions(plugin)
module = db.get_plugin_actions(plugin)
if module is None:
flash(
f"The <i>actions.py</i> file for the plugin <b>{plugin}</b> does not exist",
"error",
)
return redirect(
url_for("loading", next=url_for("plugins", plugin_id=plugin))
)
try:
# Try to import the custom plugin
with NamedTemporaryFile(mode="wb", suffix=".py", delete=True) as temp:
temp.write(module)
temp.flush()
temp.seek(0)
loader = SourceFileLoader("actions", temp.name)
actions = loader.load_module()
except:
flash(
f"An error occurred while importing the plugin <b>{plugin}</b>:<br/>{format_exc()}",
"error",
)
return redirect(
url_for("loading", next=url_for("plugins", plugin_id=plugin))
)
else:
if (
not Path(f"/etc/bunkerweb/plugins/{plugin}/ui/actions.py").exists()
and not Path(f"/usr/share/bunkerweb/core/{plugin}/ui/actions.py").exists()
):
flash(
f"The <i>actions.py</i> file for the plugin <b>{plugin}</b> does not exist",
"error",
)
return redirect(
url_for("loading", next=url_for("plugins", plugin_id=plugin))
)
# Add the custom plugin to sys.path
sys_path.append(
(
"/etc/bunkerweb/plugins"
if Path(f"/etc/bunkerweb/plugins/{plugin}/ui/actions.py").exists()
else "/usr/share/bunkerweb/core"
)
+ f"/{plugin}/ui/"
if module is None:
flash(
f"The <i>actions.py</i> file for the plugin <b>{plugin}</b> does not exist",
"error",
)
try:
# Try to import the custom plugin
import actions
except:
flash(
f"An error occurred while importing the plugin <b>{plugin}</b>:<br/>{format_exc()}",
"error",
)
return redirect(
url_for("loading", next=url_for("plugins", plugin_id=plugin))
)
return redirect(url_for("loading", next=url_for("plugins", plugin_id=plugin)))
try:
# Try to import the custom plugin
with NamedTemporaryFile(mode="wb", suffix=".py", delete=True) as temp:
temp.write(module)
temp.flush()
temp.seek(0)
loader = SourceFileLoader("actions", temp.name)
actions = loader.load_module()
except:
flash(
f"An error occurred while importing the plugin <b>{plugin}</b>:<br/>{format_exc()}",
"error",
)
return redirect(url_for("loading", next=url_for("plugins", plugin_id=plugin)))
error = False
res = None
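In container setups the diff now fetches `actions.py` as bytes from the database and imports it from a temporary file via `SourceFileLoader.load_module`, which is deprecated and was later removed in Python 3.12. A sketch of the same idea using the modern `importlib` equivalent (the helper name is illustrative):

```python
from importlib.util import module_from_spec, spec_from_file_location
from os import unlink
from tempfile import NamedTemporaryFile


def load_module_from_bytes(name: str, source: bytes):
    """Write module source bytes to a temp file, import it, then remove the file."""
    with NamedTemporaryFile(mode="wb", suffix=".py", delete=False) as temp:
        temp.write(source)
        path = temp.name
    try:
        spec = spec_from_file_location(name, path)
        module = module_from_spec(spec)
        spec.loader.exec_module(module)  # runs the module body, populating its namespace
        return module
    finally:
        unlink(path)  # temp file is only needed during import
```

Writing to disk first is required because the import machinery loads from file paths, not raw bytes.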


@ -303,9 +303,9 @@ zope-interface==6.0 \
# via gevent
# The following packages are considered to be unsafe in a requirements file:
setuptools==67.7.1 \
--hash=sha256:6f0839fbdb7e3cfef1fc38d7954f5c1c26bf4eebb155a55c9bf8faf997b9fb67 \
--hash=sha256:bb16732e8eb928922eabaa022f881ae2b7cdcfaf9993ef1f5e841a96d32b8e0c
setuptools==67.7.2 \
--hash=sha256:23aaf86b85ca52ceb801d32703f12d77517b2556af839621c641fca11287952b \
--hash=sha256:f104fa03692a2602fa0fec6c6a9e63b6c8a968de13e17c026957dd1f53d80990
# via
# gevent
# gunicorn


@ -10,37 +10,17 @@ from pathlib import Path
from re import search as re_search
from subprocess import run, DEVNULL, STDOUT
from tarfile import open as tar_open
from time import sleep
from typing import List, Tuple
from uuid import uuid4
class Config:
def __init__(self, logger, db) -> None:
def __init__(self, db) -> None:
with open("/usr/share/bunkerweb/settings.json", "r") as f:
self.__settings: dict = json_load(f)
self.__logger = logger
self.__db = db
if not Path("/usr/sbin/nginx").exists():
while not self.__db.is_initialized():
self.__logger.warning(
"Database is not initialized, retrying in 5s ...",
)
sleep(5)
env = self.__db.get_config()
while not self.__db.is_first_config_saved() or not env:
self.__logger.warning(
"Database doesn't have any config saved yet, retrying in 5s ...",
)
sleep(5)
env = self.__db.get_config()
self.__logger.info("Database is ready")
Path("/var/tmp/bunkerweb/ui.healthy").write_text("ok")
def __env_to_dict(self, filename: str) -> dict:
"""Converts the content of an env file into a dict
@ -144,21 +124,6 @@ class Config:
def get_plugins(
self, *, external: bool = False, with_data: bool = False
) -> List[dict]:
if not Path("/usr/sbin/nginx").exists():
plugins = self.__db.get_plugins(external=external, with_data=with_data)
plugins.sort(key=lambda x: x["name"])
if not external:
general_plugin = None
for x, plugin in enumerate(plugins):
if plugin["name"] == "General":
general_plugin = plugin
del plugins[x]
break
plugins.insert(0, general_plugin)
return plugins
plugins = []
for foldername in list(iglob("/etc/bunkerweb/plugins/*")) + (
@ -231,12 +196,6 @@ class Config:
dict
The nginx variables env file as a dict
"""
if Path("/usr/sbin/nginx").exists():
return {
k: ({"value": v, "method": "ui"} if methods else v)
for k, v in self.__env_to_dict("/etc/nginx/variables.env").items()
}
return self.__db.get_config(methods=methods)
def get_services(self, methods: bool = True) -> list[dict]:
@ -247,22 +206,6 @@ class Config:
list
The services
"""
if Path("/usr/sbin/nginx").exists():
services = []
plugins_settings = self.get_plugins_settings()
for filename in iglob("/etc/nginx/**/variables.env"):
service = filename.split("/")[3]
env = {
k.replace(f"{service}_", ""): (
{"value": v, "method": "ui"} if methods else v
)
for k, v in self.__env_to_dict(filename).items()
if k.startswith(f"{service}_") or k in plugins_settings
}
services.append(env)
return services
return self.__db.get_services_settings(methods=methods)
def check_variables(self, variables: dict, _global: bool = False) -> int:
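With the filesystem fallback removed, `get_plugins` always reads from the database, sorts by name, and moves the `General` plugin to the front of the list. That ordering step in isolation (a hypothetical helper; the diff inlines this logic):

```python
def order_plugins(plugins: list) -> list:
    """Sort plugins by name, then move the 'General' plugin to the front if present."""
    plugins = sorted(plugins, key=lambda x: x["name"])
    for i, plugin in enumerate(plugins):
        if plugin["name"] == "General":
            plugins.insert(0, plugins.pop(i))
            break
    return plugins
```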


@ -1,13 +1,29 @@
from glob import glob
from os import listdir, replace, walk
from os.path import dirname, join
from pathlib import Path
from re import compile as re_compile
from shutil import rmtree, move as shutil_move
from typing import Tuple
from typing import Any, Dict, List, Tuple
from utils import path_to_dict
def generate_custom_configs(
custom_configs: List[Dict[str, Any]],
*,
original_path: str = "/data/configs",
):
Path(original_path).mkdir(parents=True, exist_ok=True)
for custom_config in custom_configs:
tmp_path = f"{original_path}/{custom_config['type'].replace('_', '-')}"
if custom_config["service_id"]:
tmp_path += f"/{custom_config['service_id']}"
tmp_path += f"/{custom_config['name']}.conf"
Path(dirname(tmp_path)).mkdir(parents=True, exist_ok=True)
Path(tmp_path).write_bytes(custom_config["data"])
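The new `generate_custom_configs` helper materializes each database-stored config under `<type-with-dashes>/[<service_id>/]<name>.conf` below the configs root. The per-file logic as a standalone sketch (the function name and sample values are illustrative):

```python
from pathlib import Path


def write_custom_config(root: str, cfg: dict) -> str:
    """Build <type>/<service_id>/<name>.conf under root and write the config bytes."""
    path = Path(root) / cfg["type"].replace("_", "-")
    if cfg.get("service_id"):
        path /= cfg["service_id"]  # service-scoped configs get their own subfolder
    path /= f"{cfg['name']}.conf"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(cfg["data"])
    return str(path)
```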
class ConfigFiles:
def __init__(self, logger, db):
self.__name_regex = re_compile(r"^[a-zA-Z0-9_\-.]{1,64}$")
@ -19,6 +35,21 @@ class ConfigFiles:
self.__logger = logger
self.__db = db
if not Path("/usr/sbin/nginx").is_file():
custom_configs = self.__db.get_custom_configs()
if custom_configs:
self.__logger.info("Refreshing custom configs ...")
# Remove old custom configs files
for file in glob("/data/configs/*"):
if Path(file).is_symlink() or Path(file).is_file():
Path(file).unlink()
elif Path(file).is_dir():
rmtree(file, ignore_errors=False)
generate_custom_configs(custom_configs)
self.__logger.info("Custom configs refreshed successfully")
def save_configs(self) -> str:
custom_configs = []
root_dirs = listdir("/etc/bunkerweb/configs")
@ -109,8 +140,8 @@ class ConfigFiles:
return f"The file {file_path} was successfully created", 0
def edit_folder(self, path: str, name: str, old_name: str) -> Tuple[str, int]:
new_folder_path = dirname(join(path, name))
old_folder_path = dirname(join(path, old_name))
new_folder_path = join(dirname(path), name)
old_folder_path = join(dirname(path), old_name)
if old_folder_path == new_folder_path:
return (
@ -131,8 +162,8 @@ class ConfigFiles:
def edit_file(
self, path: str, name: str, old_name: str, content: str
) -> Tuple[str, int]:
new_path = dirname(join(path, name))
old_path = dirname(join(path, old_name))
new_path = join(dirname(path), name)
old_path = join(dirname(path), old_name)
try:
file_content = Path(old_path).read_text()
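The change in `edit_folder` and `edit_file` fixes a genuine path bug: `dirname(join(path, name))` appends the new name and then immediately strips it again, yielding the original path unchanged, whereas `join(dirname(path), name)` replaces the last path component as intended. A quick demonstration with a hypothetical path:

```python
from os.path import dirname, join

path = "/etc/bunkerweb/configs/http/old.conf"  # hypothetical existing file
name = "new.conf"

# Buggy form: join appends "new.conf", dirname strips it right back off,
# so the "new" path is just the old path again.
assert dirname(join(path, name)) == path

# Fixed form: take the parent directory first, then attach the new name.
assert join(dirname(path), name) == "/etc/bunkerweb/configs/http/new.conf"
```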


@ -1,10 +1,18 @@
from pathlib import Path
from subprocess import run
from subprocess import DEVNULL, STDOUT, run
from sys import path as sys_path
from typing import Any, Optional, Union
from API import API
from ApiCaller import ApiCaller
if "/usr/share/bunkerweb/deps/python" not in sys_path:
sys_path.append("/usr/share/bunkerweb/deps/python")
from dotenv import dotenv_values
from kubernetes import config
class Instance:
_id: str
@ -45,15 +53,55 @@ class Instance:
return self._id
def reload(self) -> bool:
if self._type == "local":
return (
run(
["sudo", "/usr/sbin/nginx", "-s", "reload"],
stdin=DEVNULL,
stderr=STDOUT,
).returncode
== 0
)
return self.apiCaller._send_to_apis("POST", "/reload")
def start(self) -> bool:
if self._type == "local":
return (
run(
["sudo", "/usr/sbin/nginx"],
stdin=DEVNULL,
stderr=STDOUT,
).returncode
== 0
)
return self.apiCaller._send_to_apis("POST", "/start")
def stop(self) -> bool:
if self._type == "local":
return (
run(
["sudo", "/usr/sbin/nginx", "-s", "stop"],
stdin=DEVNULL,
stderr=STDOUT,
).returncode
== 0
)
return self.apiCaller._send_to_apis("POST", "/stop")
def restart(self) -> bool:
if self._type == "local":
return (
run(
["sudo", "/usr/sbin/nginx", "-s", "restart"],
stdin=DEVNULL,
stderr=STDOUT,
).returncode
== 0
)
return self.apiCaller._send_to_apis("POST", "/restart")
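Local control of nginx moves into the `Instance` class itself: each method shells out with `sudo`, silences I/O via `DEVNULL`/`STDOUT`, and treats a zero exit code as success. The shared pattern reduced to a helper (the name is illustrative; the diff repeats the expression inline per method):

```python
from subprocess import DEVNULL, STDOUT, run


def run_ok(cmd: list) -> bool:
    """Run a command with stdin discarded and stderr folded into stdout; True on exit code 0."""
    return run(cmd, stdin=DEVNULL, stderr=STDOUT).returncode == 0
```

For example, `reload` becomes `run_ok(["sudo", "/usr/sbin/nginx", "-s", "reload"])` in this sketch.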
@ -114,6 +162,30 @@ class Instances:
if desired_tasks > 0 and (desired_tasks == running_tasks):
status = "up"
apis = []
for instance in self.__docker_client.services.list(
filters={"label": "bunkerweb.INSTANCE"}
):
api_http_port = None
api_server_name = None
for var in instance.attrs["Spec"]["TaskTemplate"]["ContainerSpec"][
"Env"
]:
if var.startswith("API_HTTP_PORT="):
api_http_port = var.replace("API_HTTP_PORT=", "", 1)
elif var.startswith("API_SERVER_NAME="):
api_server_name = var.replace("API_SERVER_NAME=", "", 1)
for task in instance.tasks():
apis.append(
API(
f"http://{instance.name}.{task['NodeID']}.{task['ID']}:{api_http_port or '5000'}",
host=api_server_name or "bwapi",
)
)
apiCaller = ApiCaller(apis=apis)
instances.append(
Instance(
instance.id,
@ -122,7 +194,7 @@ class Instances:
"service",
status,
instance,
None,
apiCaller,
)
)
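For Swarm services, the diff now builds one API endpoint per task, pulling the port and server name out of the service's `Env` list of `KEY=VALUE` strings, with defaults of `5000` and `bwapi`. The env-parsing step as a standalone sketch (a hypothetical helper; the diff inlines it):

```python
def parse_api_env(env_vars: list) -> tuple:
    """Extract API_HTTP_PORT and API_SERVER_NAME from KEY=VALUE strings, with defaults."""
    api_http_port = None
    api_server_name = None
    for var in env_vars:
        if var.startswith("API_HTTP_PORT="):
            api_http_port = var.replace("API_HTTP_PORT=", "", 1)
        elif var.startswith("API_SERVER_NAME="):
            api_server_name = var.replace("API_SERVER_NAME=", "", 1)
    # Fall back to BunkerWeb's defaults when the variables are absent or empty
    return api_http_port or "5000", api_server_name or "bwapi"
```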
elif self.__integration == "Kubernetes":
@ -137,15 +209,30 @@ class Instances:
e.name: e.value for e in pod.spec.containers[0].env
}
apiCaller = ApiCaller()
apiCaller._set_apis(
[
API(
f"http://{pod.status.pod_ip}:{env_variables.get('API_HTTP_PORT', '5000')}",
env_variables.get("API_SERVER_NAME", "bwapi"),
apis = []
config.load_incluster_config()
corev1 = self.__kubernetes_client.CoreV1Api()
for pod in corev1.list_pod_for_all_namespaces(watch=False).items:
if (
pod.metadata.annotations != None
and "bunkerweb.io/INSTANCE" in pod.metadata.annotations
):
api_http_port = None
api_server_name = None
for pod_env in pod.spec.containers[0].env:
if pod_env.name == "API_HTTP_PORT":
api_http_port = pod_env.value or "5000"
elif pod_env.name == "API_SERVER_NAME":
api_server_name = pod_env.value or "bwapi"
apis.append(
API(
f"http://{pod.status.pod_ip}:{api_http_port or '5000'}",
host=api_server_name or "bwapi",
)
)
]
)
apiCaller = ApiCaller(apis=apis)
status = "up"
if pod.status.conditions is not None:
@ -173,6 +260,17 @@ class Instances:
# Local instance
if Path("/usr/sbin/nginx").exists():
apiCaller = ApiCaller()
env_variables = dotenv_values("/etc/bunkerweb/variables.env")
apiCaller._set_apis(
[
API(
f"http://127.0.0.1:{env_variables.get('API_HTTP_PORT', '5000')}",
env_variables.get("API_SERVER_NAME", "bwapi"),
)
]
)
instances.insert(
0,
Instance(
@ -181,6 +279,8 @@ class Instances:
"127.0.0.1",
"local",
"up" if Path("/var/tmp/bunkerweb/nginx.pid").exists() else "down",
None,
apiCaller,
),
)
@ -204,17 +304,7 @@ class Instances:
if instance is None:
instance = self.__instance_from_id(id)
result = True
if instance._type == "local":
result = (
run(
["sudo", "systemctl", "restart", "bunkerweb"], capture_output=True
).returncode
!= 0
)
elif instance._type == "container":
# result = instance.run_jobs()
result = result & instance.reload()
result = instance.reload()
if result:
return f"Instance {instance.name} has been reloaded."
@ -223,16 +313,8 @@ class Instances:
def start_instance(self, id) -> str:
instance = self.__instance_from_id(id)
result = True
if instance._type == "local":
proc = run(
["sudo", "/usr/share/bunkerweb/ui/linux.sh", "start"],
capture_output=True,
)
result = proc.returncode == 0
elif instance._type == "container":
result = instance.start()
result = instance.start()
if result:
return f"Instance {instance.name} has been started."
@ -241,16 +323,8 @@ class Instances:
def stop_instance(self, id) -> str:
instance = self.__instance_from_id(id)
result = True
if instance._type == "local":
proc = run(
["sudo", "/usr/share/bunkerweb/ui/linux.sh", "stop"],
capture_output=True,
)
result = proc.returncode == 0
elif instance._type == "container":
result = instance.stop()
result = instance.stop()
if result:
return f"Instance {instance.name} has been stopped."
@ -259,16 +333,8 @@ class Instances:
def restart_instance(self, id) -> str:
instance = self.__instance_from_id(id)
result = True
if instance._type == "local":
proc = run(
["sudo", "/usr/share/bunkerweb/ui/linux.sh", "restart"],
capture_output=True,
)
result = proc.returncode == 0
elif instance._type == "container":
result = instance.restart()
result = instance.restart()
if result:
return f"Instance {instance.name} has been restarted."


@ -150,7 +150,9 @@
>
Plugins
</p>
<h5 class="mb-1 font-bold dark:text-gray-400">{{ plugins_number }}</h5>
<h5 class="mb-1 font-bold dark:text-gray-400">
{{ config["CONFIG"].get_plugins()|length }}
</h5>
<p class="mb-0 dark:text-white dark:opacity-60">
<span class="font-bold leading-normal text-sm text-red-500 mx-0.5"
>{{plugins_errors}}</span


@ -43,7 +43,7 @@
async function check_reloading() {
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 500);
const timeoutId = setTimeout(() => controller.abort(), 2000);
const response = await fetch(
`${location.href.replace("/loading", "/check_reloading")}`,
{ signal: controller.signal }