Merge pull request #441 from bunkerity/dev

Merge branch "dev" into branch "ui"
Théophile Diot 2023-04-24 16:59:34 +02:00 committed by GitHub
commit 52806afe73
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
85 changed files with 2463 additions and 1422 deletions

TODO

@@ -1,6 +1,3 @@
-- utils refactoring
-- load inline values for white/black/grey list core
-- check if correct setting is set to yes in new() before loading stuff in self
-- store object in ngx.ctx
 - bwcli with redis
-- move bans to cachestore
+- stream refactoring
+- stream examples


@@ -96,7 +96,7 @@ vagrant ssh
 python3 -m http.server -b 127.0.0.1
 ```
-Configuration of BunkerWeb is done by editing the `/opt/bunkerweb/variables.env` file.
+Configuration of BunkerWeb is done by editing the `/etc/bunkerweb/variables.env` file.
 Connect to your vagrant machine :
 ```shell
@@ -159,7 +159,7 @@ vagrant ssh
 vagrant ssh
 ```
-Configuration of BunkerWeb is done by editing the /opt/bunkerweb/variables.env file :
+Configuration of BunkerWeb is done by editing the /etc/bunkerweb/variables.env file :
 ```conf
 SERVER_NAME=app1.example.com app2.example.com app3.example.com
 HTTP_PORT=80
@@ -190,7 +190,7 @@ vagrant ssh
 === "Vagrant"
-You will need to add the settings to the `/opt/bunkerweb/variables.env` file :
+You will need to add the settings to the `/etc/bunkerweb/variables.env` file :
 ```conf
 ...
@@ -204,7 +204,7 @@ vagrant ssh
 === "Vagrant"
-You will need to add the settings to the `/opt/bunkerweb/variables.env` file :
+You will need to add the settings to the `/etc/bunkerweb/variables.env` file :
 ```conf
 ...
@@ -219,7 +219,7 @@ vagrant ssh
 === "Vagrant"
-When using the [Vagrant integration](/1.4/integrations/#vagrant), custom configurations must be written to the `/opt/bunkerweb/configs` folder.
+When using the [Vagrant integration](/1.4/integrations/#vagrant), custom configurations must be written to the `/etc/bunkerweb/configs` folder.
 Here is an example for server-http/hello-world.conf :
 ```conf
@@ -233,8 +233,8 @@ vagrant ssh
 Because BunkerWeb runs as an unprivileged user (nginx:nginx), you will need to edit the permissions :
 ```shell
-chown -R root:nginx /opt/bunkerweb/configs && \
-chmod -R 770 /opt/bunkerweb/configs
+chown -R root:nginx /etc/bunkerweb/configs && \
+chmod -R 770 /etc/bunkerweb/configs
 ```
 Don't forget to restart the BunkerWeb service once it's done.
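The `chmod -R 770` above gives the owner and the nginx group full access while locking everyone else out; a minimal illustrative check of that mode (not part of BunkerWeb, using a scratch directory standing in for `/etc/bunkerweb/configs`) :

```python
import os
import stat
import tempfile

# Scratch directory standing in for /etc/bunkerweb/configs (illustrative).
scratch = tempfile.mkdtemp()
os.chmod(scratch, 0o770)

mode = stat.S_IMODE(os.stat(scratch).st_mode)
assert mode == 0o770               # owner and group: rwx
assert mode & stat.S_IRWXO == 0    # "others" get no access at all
print(oct(mode))  # -> 0o770
```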
@@ -243,9 +243,9 @@ vagrant ssh
 We will assume that you already have the [Vagrant integration](/1.4/integrations/#vagrant) stack running on your machine.
-By default, BunkerWeb will search for web files inside the `/opt/bunkerweb/www` folder. You can use it to store your PHP application. Please note that you will need to configure your PHP-FPM service to get or set the user/group of the running processes and the UNIX socket file used to communicate with BunkerWeb.
+By default, BunkerWeb will search for web files inside the `/var/www/html` folder. You can use it to store your PHP application. Please note that you will need to configure your PHP-FPM service to get or set the user/group of the running processes and the UNIX socket file used to communicate with BunkerWeb.
-First of all, you will need to make sure that your PHP-FPM instance can access the files inside the `/opt/bunkerweb/www` folder and also that BunkerWeb can access the UNIX socket file in order to communicate with PHP-FPM. We recommend to set a different user like `www-data` for the PHP-FPM service and to give the nginx group access to the UNIX socket file. Here is corresponding PHP-FPM configuration :
+First of all, you will need to make sure that your PHP-FPM instance can access the files inside the `/var/www/html` folder and also that BunkerWeb can access the UNIX socket file in order to communicate with PHP-FPM. We recommend to set a different user like `www-data` for the PHP-FPM service and to give the nginx group access to the UNIX socket file. Here is corresponding PHP-FPM configuration :
 ```ini
 ...
 [www]
@@ -263,14 +263,14 @@ vagrant ssh
 systemctl restart php8.1-fpm
 ```
-Once your application is copied to the `/opt/bunkerweb/www` folder, you will need to fix the permissions so BunkerWeb (user/group nginx) can at least read files and list folders and PHP-FPM (user/group www-data) is the owner of the files and folders :
+Once your application is copied to the `/var/www/html` folder, you will need to fix the permissions so BunkerWeb (user/group nginx) can at least read files and list folders and PHP-FPM (user/group www-data) is the owner of the files and folders :
 ```shell
-chown -R www-data:nginx /opt/bunkerweb/www && \
-find /opt/bunkerweb/www -type f -exec chmod 0640 {} \; && \
-find /opt/bunkerweb/www -type d -exec chmod 0750 {} \;
+chown -R www-data:nginx /var/www/html && \
+find /var/www/html -type f -exec chmod 0640 {} \; && \
+find /var/www/html -type d -exec chmod 0750 {} \;
 ```
-You can now edit the `/opt/bunkerweb/variable.env` file :
+You can now edit the `/etc/bunkerweb/variable.env` file :
 ```env
 HTTP_PORT=80
 HTTPS_PORT=443
@@ -278,7 +278,7 @@ vagrant ssh
 SERVER_NAME=www.example.com
 AUTO_LETS_ENCRYPT=yes
 LOCAL_PHP=/run/php/php-fpm.sock
-LOCAL_PHP_PATH=/opt/bunkerweb/www/
+LOCAL_PHP_PATH=/var/www/html/
 ```
 Let's check the status of BunkerWeb :
@@ -299,9 +299,9 @@ vagrant ssh
 We will assume that you already have the [Vagrant integration](/1.4/integrations/#vagrant) stack running on your machine.
-By default, BunkerWeb will search for web files inside the `/opt/bunkerweb/www` folder. You can use it to store your PHP applications : each application will be in its own subfolder named the same as the primary server name. Please note that you will need to configure your PHP-FPM service to get or set the user/group of the running processes and the UNIX socket file used to communicate with BunkerWeb.
+By default, BunkerWeb will search for web files inside the `/var/www/html` folder. You can use it to store your PHP applications : each application will be in its own subfolder named the same as the primary server name. Please note that you will need to configure your PHP-FPM service to get or set the user/group of the running processes and the UNIX socket file used to communicate with BunkerWeb.
-First of all, you will need to make sure that your PHP-FPM instance can access the files inside the `/opt/bunkerweb/www` folder and also that BunkerWeb can access the UNIX socket file in order to communicate with PHP-FPM. We recommend to set a different user like `www-data` for the PHP-FPM service and to give the nginx group access to the UNIX socket file. Here is corresponding PHP-FPM configuration :
+First of all, you will need to make sure that your PHP-FPM instance can access the files inside the `/var/www/html` folder and also that BunkerWeb can access the UNIX socket file in order to communicate with PHP-FPM. We recommend to set a different user like `www-data` for the PHP-FPM service and to give the nginx group access to the UNIX socket file. Here is corresponding PHP-FPM configuration :
 ```ini
 ...
 [www]
@@ -319,14 +319,14 @@ vagrant ssh
 systemctl restart php8.1-fpm
 ```
-Once your application is copied to the `/opt/bunkerweb/www` folder, you will need to fix the permissions so BunkerWeb (user/group nginx) can at least read files and list folders and PHP-FPM (user/group www-data) is the owner of the files and folders :
+Once your application is copied to the `/var/www/html` folder, you will need to fix the permissions so BunkerWeb (user/group nginx) can at least read files and list folders and PHP-FPM (user/group www-data) is the owner of the files and folders :
 ```shell
-chown -R www-data:nginx /opt/bunkerweb/www && \
-find /opt/bunkerweb/www -type f -exec chmod 0640 {} \; && \
-find /opt/bunkerweb/www -type d -exec chmod 0750 {} \;
+chown -R www-data:nginx /var/www/html && \
+find /var/www/html -type f -exec chmod 0640 {} \; && \
+find /var/www/html -type d -exec chmod 0750 {} \;
 ```
-You can now edit the `/opt/bunkerweb/variable.env` file :
+You can now edit the `/etc/bunkerweb/variable.env` file :
 ```env
 HTTP_PORT=80
 HTTPS_PORT=443
@@ -335,11 +335,11 @@ vagrant ssh
 MULTISITE=yes
 AUTO_LETS_ENCRYPT=yes
 app1.example.com_LOCAL_PHP=/run/php/php-fpm.sock
-app1.example.com_LOCAL_PHP_PATH=/opt/bunkerweb/www/app1.example.com
+app1.example.com_LOCAL_PHP_PATH=/var/www/html/app1.example.com
 app2.example.com_LOCAL_PHP=/run/php/php-fpm.sock
-app2.example.com_LOCAL_PHP_PATH=/opt/bunkerweb/www/app2.example.com
+app2.example.com_LOCAL_PHP_PATH=/var/www/html/app2.example.com
 app3.example.com_LOCAL_PHP=/run/php/php-fpm.sock
-app3.example.com_LOCAL_PHP_PATH=/opt/bunkerweb/www/app3.example.com
+app3.example.com_LOCAL_PHP_PATH=/var/www/html/app3.example.com
 ```
 Let's check the status of BunkerWeb :
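The `app1.example.com_LOCAL_PHP_PATH=...` lines above follow BunkerWeb's multisite convention of prefixing a setting with the primary server name. A small illustrative sketch of how such prefixed keys group by site (hypothetical helper, not taken from BunkerWeb's source) :

```python
# Hypothetical helper illustrating the multisite prefix convention;
# not BunkerWeb's actual settings parser.
def group_by_site(env, servers):
    grouped = {site: {} for site in servers}
    for key, value in env.items():
        for site in servers:
            prefix = site + "_"
            if key.startswith(prefix):
                grouped[site][key[len(prefix):]] = value
    return grouped

env = {
    "app1.example.com_LOCAL_PHP": "/run/php/php-fpm.sock",
    "app1.example.com_LOCAL_PHP_PATH": "/var/www/html/app1.example.com",
    "app2.example.com_LOCAL_PHP": "/run/php/php-fpm.sock",
}
sites = group_by_site(env, ["app1.example.com", "app2.example.com"])
print(sites["app1.example.com"]["LOCAL_PHP_PATH"])  # -> /var/www/html/app1.example.com
```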
@@ -360,7 +360,7 @@ vagrant ssh
 === "Vagrant"
-When using the [Linux integration](/1.4/integrations/#linux), plugins must be written to the `/opt/bunkerweb/plugins` folder :
+When using the [Linux integration](/1.4/integrations/#linux), plugins must be written to the `/etc/bunkerweb/plugins` folder :
 ```shell
 git clone https://github.com/bunkerity/bunkerweb-plugins && \
 cp -rp ./bunkerweb-plugins/* /data/plugins
@@ -372,7 +372,7 @@ vagrant ssh
 The installation of the web UI using the [Vagrant integration](/1.4/integrations/#vagrant) is pretty straightforward because it is installed with BunkerWeb.
-The first thing to do is to edit the BunkerWeb configuration located at **/opt/bunkerweb/variables.env** to add settings related to the web UI :
+The first thing to do is to edit the BunkerWeb configuration located at **/etc/bunkerweb/variables.env** to add settings related to the web UI :
 ```conf
 HTTP_PORT=80
 HTTPS_PORT=443
@@ -401,7 +401,7 @@ vagrant ssh
 systemctl restart bunkerweb
 ```
-You can edit the **/opt/bunkerweb/ui.env** file containing the settings of the web UI :
+You can edit the **/etc/bunkerweb/ui.env** file containing the settings of the web UI :
 ```conf
 ADMIN_USERNAME=admin
 ADMIN_PASSWORD=changeme
@@ -410,7 +410,7 @@ vagrant ssh
 Important things to note :
-* `http(s)://bwadmin.example.com/changeme/` is the full base URL of the web UI (must match the sub(domain) and /changeme URL used in **/opt/bunkerweb/variables.env**)
+* `http(s)://bwadmin.example.com/changeme/` is the full base URL of the web UI (must match the sub(domain) and /changeme URL used in **/etc/bunkerweb/variables.env**)
 * replace the username `admin` and password `changeme` with strong ones
 Restart the BunkerWeb UI service and you are now ready to access it :


@@ -3,7 +3,7 @@
 ## Docker
 <figure markdown>
-![Overwiew](assets/img/integration-docker.svg){ align=center }
+![Overview](assets/img/integration-docker.svg){ align=center }
 <figcaption>Docker integration</figcaption>
 </figure>
@@ -174,7 +174,7 @@ networks:
 ## Docker autoconf
 <figure markdown>
-![Overwiew](assets/img/integration-autoconf.svg){ align=center }
+![Overview](assets/img/integration-autoconf.svg){ align=center }
 <figcaption>Docker autoconf integration</figcaption>
 </figure>
@@ -325,7 +325,7 @@ networks:
 ## Swarm
 <figure markdown>
-![Overwiew](assets/img/integration-swarm.svg){ align=center }
+![Overview](assets/img/integration-swarm.svg){ align=center }
 <figcaption>Docker Swarm integration</figcaption>
 </figure>
@@ -486,7 +486,7 @@ networks:
 ## Kubernetes
 <figure markdown>
-![Overwiew](assets/img/integration-kubernetes.svg){ align=center }
+![Overview](assets/img/integration-kubernetes.svg){ align=center }
 <figcaption>Kubernetes integration</figcaption>
 </figure>
@@ -580,7 +580,7 @@ spec:
 livenessProbe:
 exec:
 command:
-- /opt/bunkerweb/helpers/healthcheck.sh
+- /usr/share/bunkerweb/helpers/healthcheck.sh
 initialDelaySeconds: 30
 periodSeconds: 5
 timeoutSeconds: 1
@@ -588,7 +588,7 @@ spec:
 readinessProbe:
 exec:
 command:
-- /opt/bunkerweb/helpers/healthcheck.sh
+- /usr/share/bunkerweb/helpers/healthcheck.sh
 initialDelaySeconds: 30
 periodSeconds: 1
 timeoutSeconds: 1
@@ -673,7 +673,7 @@ spec:
 ## Linux
 <figure markdown>
-![Overwiew](assets/img/integration-linux.svg){ align=center }
+![Overview](assets/img/integration-linux.svg){ align=center }
 <figcaption>Linux integration</figcaption>
 </figure>
@@ -806,9 +806,9 @@ Repositories of Linux packages for BunkerWeb are available on [PackageCloud](htt
 The first step is to install NGINX 1.20.2 using the repository of your choice or by [compiling it from source](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/#compiling-and-installing-from-source).
-The target installation folder of BunkerWeb is located at `/opt/bunkerweb`, let's create it :
+The target installation folder of BunkerWeb is located at `/usr/share/bunkerweb`, let's create it :
 ```shell
-mkdir /opt/bunkerweb
+mkdir /usr/share/bunkerweb
 ```
 You can now clone the BunkerWeb project to the `/tmp` folder :
@@ -816,40 +816,42 @@ Repositories of Linux packages for BunkerWeb are available on [PackageCloud](htt
 https://github.com/bunkerity/bunkerweb.git /tmp/bunkerweb
 ```
-BunkerWeb needs some dependencies to be compiled and installed to `/opt/bunkerweb/deps`, the easiest way to do it is by executing the [install.sh helper script](https://github.com/bunkerity/bunkerweb/blob/master/deps/install.sh) (please note that you will need to install additional packages which is not covered in this procedure and depends on your own system) :
+BunkerWeb needs some dependencies to be compiled and installed to `/usr/share/bunkerweb/deps`, the easiest way to do it is by executing the [install.sh helper script](https://github.com/bunkerity/bunkerweb/blob/master/deps/install.sh) (please note that you will need to install additional packages which is not covered in this procedure and depends on your own system) :
 ```
-mkdir /opt/bunkerweb/deps && \
+mkdir /usr/share/bunkerweb/deps && \
 /tmp/bunkerweb/deps/install.sh
 ```
-Additional Python dependencies needs to be installed into the `/opt/bunkerweb/deps/python` folder :
+Additional Python dependencies needs to be installed into the `/usr/share/bunkerweb/deps/python` folder :
 ```shell
-mkdir /opt/bunkerweb/deps/python && \
-pip install --no-cache-dir --require-hashes --target /opt/bunkerweb/deps/python -r /tmp/bunkerweb/deps/requirements.txt && \
-pip install --no-cache-dir --target /opt/bunkerweb/deps/python -r /tmp/bunkerweb/ui/requirements.txt
+mkdir /usr/share/bunkerweb/deps/python && \
+pip install --no-cache-dir --require-hashes --target /usr/share/bunkerweb/deps/python -r /tmp/bunkerweb/deps/requirements.txt && \
+pip install --no-cache-dir --target /usr/share/bunkerweb/deps/python -r /tmp/bunkerweb/ui/requirements.txt
 ```
-Once dependencies are installed, you will be able to copy the BunkerWeb sources to the target `/opt/bunkerweb` folder :
+Once dependencies are installed, you will be able to copy the BunkerWeb sources to the target `/usr/share/bunkerweb` folder :
 ```shell
 for src in api cli confs core gen helpers job lua misc utils ui settings.json VERSION linux/variables.env linux/ui.env linux/scripts ; do
-cp -r /tmp/bunkerweb/${src} /opt/bunkerweb
+cp -r /tmp/bunkerweb/${src} /usr/share/bunkerweb
 done
-cp /opt/bunkerweb/helpers/bwcli /usr/local/bin
+cp /usr/share/bunkerweb/helpers/bwcli /usr/local/bin
 ```
 Additional folders also need to be created :
 ```shell
-mkdir /opt/bunkerweb/{configs,cache,plugins,tmp}
+mkdir -p /etc/bunkerweb/{configs,plugins} && \
+mkdir -p /var/cache/bunkerweb && \
+mkdir -p /var/tmp/bunkerweb
 ```
 Permissions needs to be fixed :
 ```shell
-find /opt/bunkerweb -path /opt/bunkerweb/deps -prune -o -type f -exec chmod 0740 {} \; && \
-find /opt/bunkerweb -path /opt/bunkerweb/deps -prune -o -type d -exec chmod 0750 {} \; && \
-find /opt/bunkerweb/core/*/jobs/* -type f -exec chmod 750 {} \; && \
-chmod 770 /opt/bunkerweb/cache /opt/bunkerweb/tmp && \
-chmod 750 /opt/bunkerweb/gen/main.py /opt/bunkerweb/job/main.py /opt/bunkerweb/cli/main.py /opt/bunkerweb/helpers/*.sh /opt/bunkerweb/scripts/*.sh /usr/local/bin/bwcli /opt/bunkerweb/ui/main.py && \
-chown -R root:nginx /opt/bunkerweb
+find /usr/share/bunkerweb -path /usr/share/bunkerweb/deps -prune -o -type f -exec chmod 0740 {} \; && \
+find /usr/share/bunkerweb -path /usr/share/bunkerweb/deps -prune -o -type d -exec chmod 0750 {} \; && \
+find /usr/share/bunkerweb/core/*/jobs/* -type f -exec chmod 750 {} \; && \
+chmod 770 /var/cache/bunkerweb /var/tmp/bunkerweb && \
+chmod 750 /usr/share/bunkerweb/gen/main.py /usr/share/bunkerweb/scheduler/main.py /usr/share/bunkerweb/cli/main.py /usr/share/bunkerweb/helpers/*.sh /usr/share/bunkerweb/scripts/*.sh /usr/bin/bwcli /usr/share/bunkerweb/ui/main.py && \
+chown -R root:nginx /usr/share/bunkerweb
 ```
 Last but not least, you will need to set up systemd unit files :
@@ -862,7 +864,7 @@ Repositories of Linux packages for BunkerWeb are available on [PackageCloud](htt
 systemctl enable bunkerweb-ui
 ```
-The configuration of BunkerWeb is done by editing the `/opt/bunkerweb/variables.env` file :
+The configuration of BunkerWeb is done by editing the `/etc/bunkerweb/variables.env` file :
 ```conf
 MY_SETTING_1=value1
@@ -880,7 +882,7 @@ BunkerWeb is managed using systemctl :
 ## Ansible
 <figure markdown>
-![Overwiew](assets/img/integration-ansible.svg){ align=center }
+![Overview](assets/img/integration-ansible.svg){ align=center }
 <figcaption>Ansible integration</figcaption>
 </figure>
@@ -939,3 +941,87 @@ Configuration of BunkerWeb is done by using specific role variables :
 | `custom_plugins` | string | Path of the plugins directory to upload. | empty value |
 | `custom_www_owner` | string | Default owner for www files and folders. | `nginx` |
 | `custom_www_group` | string | Default group for www files and folders. | `nginx` |
+## Vagrant
+<figure markdown>
+![Overview](assets/img/integration-vagrant.svg){ align=center }
+<figcaption>BunkerWeb integration with Vagrant</figcaption>
+</figure>
+List of supported providers :
+- vmware_desktop
+- virtualbox
+- libvirt
+**_Note on Supported Base Images_**
+Please be aware that the provided Vagrant boxes are based **exclusively on Ubuntu 22.04 "Jammy"**. While BunkerWeb supports other Linux distributions, the Vagrant setup currently only supports Ubuntu 22.04 as the base operating system. This ensures a consistent and reliable environment for users who want to deploy BunkerWeb using Vagrant.
+Similar to other BunkerWeb integrations, the Vagrant setup uses **NGINX version 1.20.2**. This specific version is required to ensure compatibility and smooth functioning with BunkerWeb. Additionally, the Vagrant box includes **PHP** pre-installed, providing a ready-to-use environment for hosting PHP-based applications alongside BunkerWeb.
+By using the provided Vagrant box based on Ubuntu 22.04 "Jammy", you benefit from a well-configured and integrated setup, allowing you to focus on developing and securing your applications with BunkerWeb without worrying about the underlying infrastructure.
+Here are the steps to install BunkerWeb using Vagrant on Ubuntu with the supported virtualization providers (VirtualBox, VMware, and libvirt):
+1. Make sure you have Vagrant and one of the supported virtualization providers (VirtualBox, VMware, or libvirt) installed on your system.
+2. There are two ways to install the Vagrant box with BunkerWeb: either by using a provided Vagrantfile to configure your virtual machine or by creating a new box based on the existing BunkerWeb Vagrant box, offering you flexibility in how you set up your development environment.
+=== "Vagrantfile"
+    ```shell
+    Vagrant.configure("2") do |config|
+      config.vm.box = "bunkerity/bunkerity"
+    end
+    ```
+    Depending on the virtualization provider you choose, you may need to install additional plugins:
+    * For **VMware**, install the `vagrant-vmware-desktop` plugin. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/providers).
+    * For **libvirt**, install the `vagrant-libvirt` plugin. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/providers).
+    * For **VirtualBox**, install the `vagrant-vbguest` plugin. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/providers).
+=== "New Vagrant Box"
+    ```shell
+    vagrant init bunkerity/bunkerity
+    ```
+    Depending on the virtualization provider you choose, you may need to install additional plugins:
+    * For **VMware**, install the `vagrant-vmware-desktop` plugin. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/providers).
+    * For **libvirt**, install the `vagrant-libvirt` plugin. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/providers).
+    * For **VirtualBox**, install the `vagrant-vbguest` plugin. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/providers).
+After installing the necessary plugins for your chosen virtualization provider, run the following command to start the virtual machine and install BunkerWeb:
+```shell
+vagrant up --provider=virtualbox # or --provider=vmware_desktop or --provider=libvirt
+```
+Finally, to access the virtual machine using SSH, execute the following command:
+```shell
+vagrant ssh
+```
+**Example Vagrantfile**
+Here is an example `Vagrantfile` for installing BunkerWeb on Ubuntu 22.04 "Jammy" using the different supported virtualization providers:
+```shell
+Vagrant.configure("2") do |config|
+  # Ubuntu 22.04 "Jammy"
+  config.vm.box = "bunkerity/bunkerity"
+  # Uncomment the desired virtualization provider
+  # For VirtualBox (default)
+  config.vm.provider "virtualbox"
+  # For VMware
+  # config.vm.provider "vmware_desktop" # Windows
+  # config.vm.provider "vmware_workstation" # Linux
+  # For libvirt
+  # config.vm.provider "libvirt"
+end
+```


@@ -1,11 +1,12 @@
 #!/usr/bin/python3
+from io import StringIO
 from json import loads
 from glob import glob
 from pytablewriter import MarkdownTableWriter
-def print_md_table(settings):
+def print_md_table(settings) -> MarkdownTableWriter:
     writer = MarkdownTableWriter(
         headers=["Setting", "Default", "Context", "Multiple", "Description"],
         value_matrix=[
@@ -19,37 +20,52 @@ def print_md_table(settings):
             for setting, data in settings.items()
         ],
     )
-    writer.write_table()
-    print()
+    return writer
-print("# Settings\n")
+doc = StringIO()
+print("# Settings\n", file=doc)
 print(
-    '!!! info "Settings generator tool"\n\n To help you tune BunkerWeb, we have made an easy-to-use settings generator tool available at [config.bunkerweb.io](https://config.bunkerweb.io).\n'
+    '!!! info "Settings generator tool"\n\n To help you tune BunkerWeb, we have made an easy-to-use settings generator tool available at [config.bunkerweb.io](https://config.bunkerweb.io).\n',
+    file=doc,
 )
 print(
-    "This section contains the full list of settings supported by BunkerWeb. If you are not yet familiar with BunkerWeb, you should first read the [concepts](/1.4/concepts) section of the documentation. Please follow the instructions for your own [integration](/1.4/integrations) on how to apply the settings.\n"
+    "This section contains the full list of settings supported by BunkerWeb. If you are not yet familiar with BunkerWeb, you should first read the [concepts](/1.4/concepts) section of the documentation. Please follow the instructions for your own [integration](/1.4/integrations) on how to apply the settings.\n",
+    file=doc,
 )
 print(
-    "As a general rule when multisite mode is enabled, if you want to apply settings with multisite context to a specific server, you will need to add the primary (first) server name as a prefix like `www.example.com_USE_ANTIBOT=captcha` or `myapp.example.com_USE_GZIP=yes` for example.\n"
+    "As a general rule when multisite mode is enabled, if you want to apply settings with multisite context to a specific server, you will need to add the primary (first) server name as a prefix like `www.example.com_USE_ANTIBOT=captcha` or `myapp.example.com_USE_GZIP=yes` for example.\n",
+    file=doc,
 )
 print(
-    'When settings are considered as "multiple", it means that you can have multiple groups of settings for the same feature by adding numbers as suffix like `REVERSE_PROXY_URL_1=/subdir`, `REVERSE_PROXY_HOST_1=http://myhost1`, `REVERSE_PROXY_URL_2=/anotherdir`, `REVERSE_PROXY_HOST_2=http://myhost2`, ... for example.\n'
+    'When settings are considered as "multiple", it means that you can have multiple groups of settings for the same feature by adding numbers as suffix like `REVERSE_PROXY_URL_1=/subdir`, `REVERSE_PROXY_HOST_1=http://myhost1`, `REVERSE_PROXY_URL_2=/anotherdir`, `REVERSE_PROXY_HOST_2=http://myhost2`, ... for example.\n',
+    file=doc,
 )
 # Print global settings
-print("## Global settings\n")
+print("## Global settings\n", file=doc)
 with open("src/common/settings.json", "r") as f:
-    print_md_table(loads(f.read()))
+    print(print_md_table(loads(f.read())), file=doc)
+print(file=doc)
 # Print core settings
-print("## Core settings\n")
+print("## Core settings\n", file=doc)
 core_settings = {}
 for core in glob("src/common/core/*/plugin.json"):
     with open(core, "r") as f:
         core_plugin = loads(f.read())
         if len(core_plugin["settings"]) > 0:
             core_settings[core_plugin["name"]] = core_plugin["settings"]
 for name, settings in dict(sorted(core_settings.items())).items():
-    print(f"### {name}\n")
-    print_md_table(settings)
+    print(f"### {name}\n", file=doc)
+    print(print_md_table(settings), file=doc)
+doc.seek(0)
+content = doc.read()
+doc = StringIO(content.replace("\\|", "|"))
+doc.seek(0)
+with open("docs/settings.md", "w") as f:
+    f.write(doc.read())


@@ -1,5 +1,5 @@
 mkdocs==1.4.2
-mkdocs-material==9.1.6
+mkdocs-material==9.1.7
 pytablewriter==0.64.2
 mike==1.1.2
 jinja2<3.1.0


@@ -118,9 +118,9 @@ If you want to use your own certificates, here is the list of related settings :
 | Setting | Default | Description |
 | :-----------------: | :-----: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `USE_CUSTOM_HTTPS` | `no` | When set to `yes`, HTTPS will be enabled with custom certificates. |
-| `CUSTOM_HTTPS_CERT` | | Full path to the certificate. If you have one or more intermediate certificate(s) in your chain of trust, you will need to provide the bundle (more info [here](https://nginx.org/en/docs/http/configuring_https_servers.html#chains)). |
-| `CUSTOM_HTTPS_KEY` | | Full path to the private key. |
+| `USE_CUSTOM_SSL` | `no` | When set to `yes`, HTTPS will be enabled with custom certificates. |
+| `CUSTOM_SSL_CERT` | | Full path to the certificate. If you have one or more intermediate certificate(s) in your chain of trust, you will need to provide the bundle (more info [here](https://nginx.org/en/docs/http/configuring_https_servers.html#chains)). |
+| `CUSTOM_SSL_KEY` | | Full path to the private key. |
 When `USE_CUSTOM_SSL` is set to `yes`, BunkerWeb will check every day if the custom certificate specified in `CUSTOM_SSL_CERT` is modified and will reload NGINX if that's the case.
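The daily modification check described above can be approximated with a file mtime comparison; a minimal illustrative sketch (not BunkerWeb's actual job code, scratch file stands in for the certificate) :

```python
import os
import tempfile

# Illustrative "did the certificate change?" check based on mtime;
# not BunkerWeb's actual scheduler code.
def cert_changed(path, last_mtime):
    """Return True when the file was modified since the last check."""
    return os.stat(path).st_mtime != last_mtime

fd, cert = tempfile.mkstemp()  # scratch file standing in for the custom cert
os.close(fd)
seen = os.stat(cert).st_mtime
assert not cert_changed(cert, seen)

os.utime(cert, (seen + 10, seen + 10))  # pretend the cert was reissued
assert cert_changed(cert, seen)  # this is where a reload would be triggered
```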
@@ -221,17 +221,28 @@ You can use the following settings to set up blacklisting :

| Setting                            | Default                                                                                                                        | Description                                                                                      |
| :--------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------- |
|`USE_BLACKLIST`                     |`yes`                                                                                                                           |Activate blacklist feature.                                                                       |
|`BLACKLIST_IP`                      |                                                                                                                                |List of IP/network, separated with spaces, to block.                                              |
|`BLACKLIST_IP_URLS`                 |`https://www.dan.me.uk/torlist/?exit`                                                                                           |List of URLs, separated with spaces, containing bad IP/network to block.                          |
|`BLACKLIST_RDNS_GLOBAL`             |`yes`                                                                                                                           |Only perform RDNS blacklist checks on global IP addresses.                                        |
|`BLACKLIST_RDNS`                    |`.shodan.io .censys.io`                                                                                                         |List of reverse DNS suffixes, separated with spaces, to block.                                    |
|`BLACKLIST_RDNS_URLS`               |                                                                                                                                |List of URLs, separated with spaces, containing reverse DNS suffixes to block.                    |
|`BLACKLIST_ASN`                     |                                                                                                                                |List of ASN numbers, separated with spaces, to block.                                             |
|`BLACKLIST_ASN_URLS`                |                                                                                                                                |List of URLs, separated with spaces, containing ASN to block.                                     |
|`BLACKLIST_USER_AGENT`              |                                                                                                                                |List of User-Agent, separated with spaces, to block.                                              |
|`BLACKLIST_USER_AGENT_URLS`         |`https://raw.githubusercontent.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/master/_generator_lists/bad-user-agents.list`  |List of URLs, separated with spaces, containing bad User-Agent to block.                          |
|`BLACKLIST_URI`                     |                                                                                                                                |List of URI, separated with spaces, to block.                                                     |
|`BLACKLIST_URI_URLS`                |                                                                                                                                |List of URLs, separated with spaces, containing bad URI to block.                                 |
|`BLACKLIST_IGNORE_IP`               |                                                                                                                                |List of IP/network, separated with spaces, to ignore in the blacklist.                            |
|`BLACKLIST_IGNORE_IP_URLS`          |                                                                                                                                |List of URLs, separated with spaces, containing IP/network to ignore in the blacklist.            |
|`BLACKLIST_IGNORE_RDNS`             |                                                                                                                                |List of reverse DNS suffixes, separated with spaces, to ignore in the blacklist.                  |
|`BLACKLIST_IGNORE_RDNS_URLS`        |                                                                                                                                |List of URLs, separated with spaces, containing reverse DNS suffixes to ignore in the blacklist.  |
|`BLACKLIST_IGNORE_ASN`              |                                                                                                                                |List of ASN numbers, separated with spaces, to ignore in the blacklist.                           |
|`BLACKLIST_IGNORE_ASN_URLS`         |                                                                                                                                |List of URLs, separated with spaces, containing ASN to ignore in the blacklist.                   |
|`BLACKLIST_IGNORE_USER_AGENT`       |                                                                                                                                |List of User-Agent, separated with spaces, to ignore in the blacklist.                            |
|`BLACKLIST_IGNORE_USER_AGENT_URLS`  |                                                                                                                                |List of URLs, separated with spaces, containing User-Agent to ignore in the blacklist.            |
|`BLACKLIST_IGNORE_URI`              |                                                                                                                                |List of URI, separated with spaces, to ignore in the blacklist.                                   |
|`BLACKLIST_IGNORE_URI_URLS`         |                                                                                                                                |List of URLs, separated with spaces, containing URI to ignore in the blacklist.                   |
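For illustration only (the values below are examples, not defaults), a `variables.env` fragment combining block and ignore lists could look like:

```conf
USE_BLACKLIST=yes
BLACKLIST_IP=1.2.3.0/24
BLACKLIST_USER_AGENT=BadBot EvilScanner
BLACKLIST_IGNORE_IP=10.0.0.0/8
```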
### Greylisting

---

@@ -12,34 +12,39 @@ When settings are considered as "multiple", it means that you can have multiple

## Global settings

| Setting                      | Default                                                                                                                 | Context  | Multiple | Description                                       |
|------------------------------|-------------------------------------------------------------------------------------------------------------------------|----------|----------|---------------------------------------------------|
|`IS_LOADING`                  |`no`                                                                                                                     |global    |no        |Internal use : set to yes when BW is loading.      |
|`NGINX_PREFIX`                |`/etc/nginx/`                                                                                                            |global    |no        |Where nginx will search for configurations.        |
|`HTTP_PORT`                   |`8080`                                                                                                                   |global    |no        |HTTP port number which bunkerweb binds to.         |
|`HTTPS_PORT`                  |`8443`                                                                                                                   |global    |no        |HTTPS port number which bunkerweb binds to.        |
|`MULTISITE`                   |`no`                                                                                                                     |global    |no        |Multi site activation.                             |
|`SERVER_NAME`                 |`www.example.com`                                                                                                        |multisite |no        |List of the virtual hosts served by bunkerweb.     |
|`WORKER_PROCESSES`            |`auto`                                                                                                                   |global    |no        |Number of worker processes.                        |
|`WORKER_RLIMIT_NOFILE`        |`2048`                                                                                                                   |global    |no        |Maximum number of open files for worker processes. |
|`WORKER_CONNECTIONS`          |`1024`                                                                                                                   |global    |no        |Maximum number of connections per worker.          |
|`LOG_FORMAT`                  |`$host $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"` |global    |no        |The format to use for access logs.                 |
|`LOG_LEVEL`                   |`notice`                                                                                                                 |global    |no        |The level to use for error logs.                   |
|`DNS_RESOLVERS`               |`127.0.0.11`                                                                                                             |global    |no        |DNS addresses of resolvers to use.                 |
|`DATASTORE_MEMORY_SIZE`       |`64m`                                                                                                                    |global    |no        |Size of the internal datastore.                    |
|`CACHESTORE_MEMORY_SIZE`      |`64m`                                                                                                                    |global    |no        |Size of the internal cachestore.                   |
|`CACHESTORE_IPC_MEMORY_SIZE`  |`16m`                                                                                                                    |global    |no        |Size of the internal cachestore (ipc).             |
|`CACHESTORE_MISS_MEMORY_SIZE` |`16m`                                                                                                                    |global    |no        |Size of the internal cachestore (miss).            |
|`CACHESTORE_LOCKS_MEMORY_SIZE`|`16m`                                                                                                                    |global    |no        |Size of the internal cachestore (locks).           |
|`USE_API`                     |`yes`                                                                                                                    |global    |no        |Activate the API to control BunkerWeb.             |
|`API_HTTP_PORT`               |`5000`                                                                                                                   |global    |no        |Listen port number for the API.                    |
|`API_LISTEN_IP`               |`0.0.0.0`                                                                                                                |global    |no        |Listen IP address for the API.                     |
|`API_SERVER_NAME`             |`bwapi`                                                                                                                  |global    |no        |Server name (virtual host) for the API.            |
|`API_WHITELIST_IP`            |`127.0.0.0/8`                                                                                                            |global    |no        |List of IP/network allowed to contact the API.     |
|`AUTOCONF_MODE`               |`no`                                                                                                                     |global    |no        |Enable Autoconf Docker integration.                |
|`SWARM_MODE`                  |`no`                                                                                                                     |global    |no        |Enable Docker Swarm integration.                   |
|`KUBERNETES_MODE`             |`no`                                                                                                                     |global    |no        |Enable Kubernetes integration.                     |
|`SERVER_TYPE`                 |`http`                                                                                                                   |multisite |no        |Server type : http or stream.                      |
|`LISTEN_STREAM`               |`yes`                                                                                                                    |multisite |no        |Enable listening for non-ssl (passthrough).        |
|`LISTEN_STREAM_PORT`          |`1337`                                                                                                                   |multisite |no        |Listening port for non-ssl (passthrough).          |
|`LISTEN_STREAM_PORT_SSL`      |`4242`                                                                                                                   |multisite |no        |Listening port for ssl (passthrough).              |
|`USE_UDP`                     |`no`                                                                                                                     |multisite |no        |UDP listen instead of TCP (stream).                |
## Core settings
@@ -135,7 +140,7 @@ When settings are considered as "multiple", it means that you can have multiple

| Setting                  | Default                                                                  | Context  | Multiple | Description                                                          |
|--------------------------|--------------------------------------------------------------------------|----------|----------|----------------------------------------------------------------------|
|`USE_CLIENT_CACHE`        |`no`                                                                      |multisite |no        |Tell client to store locally static files.                            |
|`CLIENT_CACHE_EXTENSIONS` |`jpg\|jpeg\|png\|bmp\|ico\|svg\|tif\|css\|js\|otf\|ttf\|eot\|woff\|woff2` |global    |no        |List of file extensions, separated with pipes, that should be cached. |
|`CLIENT_CACHE_ETAG`       |`yes`                                                                     |multisite |no        |Send the HTTP ETag header for static resources.                       |
|`CLIENT_CACHE_CONTROL`    |`public, max-age=15552000`                                                |multisite |no        |Value of the Cache-Control HTTP header.                               |
@@ -249,7 +254,7 @@ When settings are considered as "multiple", it means that you can have multiple

|`DISABLE_DEFAULT_SERVER`     |`no`              |global    |no |Close connection if the request vhost is unknown.                                         |
|`REDIRECT_HTTP_TO_HTTPS`     |`no`              |multisite |no |Redirect all HTTP requests to HTTPS.                                                      |
|`AUTO_REDIRECT_HTTP_TO_HTTPS`|`yes`             |multisite |no |Try to detect if HTTPS is used and activate HTTP to HTTPS redirection if that's the case. |
|`ALLOWED_METHODS`            |`GET\|POST\|HEAD` |multisite |no |Allowed HTTP and WebDAV methods, separated with pipes, to be sent by clients.             |
|`MAX_CLIENT_SIZE`            |`10m`             |multisite |no |Maximum body size (0 for infinite).                                                       |
|`SERVE_FILES`                |`yes`             |multisite |no |Serve files from the local folder.                                                        |
|`ROOT_FOLDER`                |                  |multisite |no |Root folder containing files to serve (/var/www/html/{server_name} if unset).             |
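As a hedged example (values are illustrative, not recommendations), tightening the settings above in `variables.env` could look like:

```conf
DISABLE_DEFAULT_SERVER=yes
REDIRECT_HTTP_TO_HTTPS=yes
ALLOWED_METHODS=GET|POST|HEAD
MAX_CLIENT_SIZE=5m
```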

---

@@ -69,7 +69,7 @@ Because the web UI is a web application, the recommended installation procedure

  -e bwadm.example.com_REVERSE_PROXY_URL=/changeme/ \
  -e bwadm.example.com_REVERSE_PROXY_HOST=http://bw-ui:7000 \
  -e "bwadm.example.com_REVERSE_PROXY_HEADERS=X-Script-Name /changeme" \
  -e bwadm.example.com_INTERCEPTED_ERROR_CODES="400 401 405 413 429 500 501 502 503 504" \
  -l bunkerweb.INSTANCE \
  bunkerity/bunkerweb:1.5.0-beta && \
docker network connect bw-universe bunkerweb
@@ -294,7 +294,7 @@ Because the web UI is a web application, the recommended installation procedure

  -l "bunkerweb.REVERSE_PROXY_URL=/changeme" \
  -l "bunkerweb.REVERSE_PROXY_HOST=http://bw-ui:7000" \
  -l "bunkerweb.REVERSE_PROXY_HEADERS=X-Script-Name /changeme" \
  -l "bunkerweb.INTERCEPTED_ERROR_CODES=400 401 405 413 429 500 501 502 503 504" \
  bunkerity/bunkerweb-ui:1.5.0-beta && \
docker network connect bw-docker bw-ui
```
@@ -379,7 +379,7 @@ Because the web UI is a web application, the recommended installation procedure

      - "bunkerweb.REVERSE_PROXY_URL=/changeme"
      - "bunkerweb.REVERSE_PROXY_HOST=http://bw-ui:7000"
      - "bunkerweb.REVERSE_PROXY_HEADERS=X-Script-Name /changeme"
      - "bunkerweb.INTERCEPTED_ERROR_CODES=400 401 405 413 429 500 501 502 503 504"

volumes:
  bw-data:
@@ -526,7 +526,7 @@ Because the web UI is a web application, the recommended installation procedure

  -l "bunkerweb.REVERSE_PROXY_URL=/changeme" \
  -l "bunkerweb.REVERSE_PROXY_HOST=http://bw-ui:7000" \
  -l "bunkerweb.REVERSE_PROXY_HEADERS=X-Script-Name /changeme" \
  -l "bunkerweb.INTERCEPTED_ERROR_CODES=400 401 405 413 429 500 501 502 503 504" \
  bunkerity/bunkerweb-ui:1.5.0-beta
```

---

@@ -13,7 +13,7 @@ services:

    # another example for existing folder : chown -R root:101 folder && chmod -R 770 folder
    # more info at https://docs.bunkerweb.io
    volumes:
      - ./www:/var/www/html # contains web files (PHP, assets, ...)
    environment:
      - SERVER_NAME=www.example.com # replace with your domain
      - API_WHITELIST_IP=127.0.0.0/8 10.20.30.0/24
@@ -62,6 +62,10 @@ services:

    networks:
      - bw-services

volumes:
  bw-data:

networks:
  bw-universe:
    ipam:

---

@@ -141,9 +141,10 @@ spec:

      labels:
        app: bunkerweb-scheduler
    spec:
      serviceAccountName: sa-bunkerweb
      containers:
        - name: bunkerweb-scheduler
          image: bunkerity/bunkerweb-scheduler:1.4.6
          imagePullPolicy: Always
          env:
            - name: KUBERNETES_MODE

---

@@ -141,9 +141,10 @@ spec:

      labels:
        app: bunkerweb-scheduler
    spec:
      serviceAccountName: sa-bunkerweb
      containers:
        - name: bunkerweb-scheduler
          image: bunkerity/bunkerweb-scheduler:1.4.6
          imagePullPolicy: Always
          env:
            - name: KUBERNETES_MODE

---

@@ -141,9 +141,10 @@ spec:

      labels:
        app: bunkerweb-scheduler
    spec:
      serviceAccountName: sa-bunkerweb
      containers:
        - name: bunkerweb-scheduler
          image: bunkerity/bunkerweb-scheduler:1.4.6
          imagePullPolicy: Always
          env:
            - name: KUBERNETES_MODE

---

@@ -53,6 +53,9 @@ RUN apk add --no-cache bash && \
	chown root:nginx /var/log/letsencrypt /var/lib/letsencrypt && \
	chmod 770 /var/log/letsencrypt /var/lib/letsencrypt

# Fix CVEs
RUN apk add "libcrypto3>=3.0.8-r4" "libssl3>=3.0.8-r4"

VOLUME /data /etc/nginx

WORKDIR /usr/share/bunkerweb/autoconf

---

@@ -72,6 +72,9 @@ RUN apk add --no-cache pcre bash python3 && \
	ln -s /proc/1/fd/1 /var/log/nginx/access.log && \
	ln -s /proc/1/fd/1 /var/log/nginx/jobs.log

# Fix CVEs
RUN apk add "libcrypto3>=3.0.8-r4" "libssl3>=3.0.8-r4"

VOLUME /data /etc/nginx

EXPOSE 8080/tcp 8443/tcp

---

@@ -188,6 +188,7 @@ function api:do_api_call()
		local status, resp = self:response(ngx.HTTP_INTERNAL_SERVER_ERROR, "error", "can't list loaded plugins : " .. err)
		return false, resp["msg"], ngx.HTTP_INTERNAL_SERVER_ERROR, resp
	end
	list = cjson.decode(list)
	for i, plugin in ipairs(list) do
		if pcall(require, plugin.id .. "/" .. plugin.id) then
			local plugin_lua = require(plugin.id .. "/" .. plugin.id)

---

@@ -47,48 +47,41 @@ function cachestore:initialize(use_redis)
end

function cachestore:get(key)
	local callback = function(key)
		-- Connect to redis
		local clusterstore = require "bunkerweb.clusterstore":new()
		local ok, err = clusterstore:connect()
		if not ok then
			return nil, "can't connect to redis : " .. err, nil
		end
		-- Redis script to get value + ttl
		local redis_script = [[
			local ret_get = redis.pcall("GET", KEYS[1])
			if type(ret_get) == "table" and ret_get["err"] ~= nil then
				redis.log(redis.LOG_WARNING, "BUNKERWEB CACHESTORE GET error : " .. ret_get["err"])
				return ret_get
			end
			local ret_ttl = redis.pcall("TTL", KEYS[1])
			if type(ret_ttl) == "table" and ret_ttl["err"] ~= nil then
				redis.log(redis.LOG_WARNING, "BUNKERWEB CACHESTORE TTL error : " .. ret_ttl["err"])
				return ret_ttl
			end
			return {ret_get, ret_ttl}
		]]
		local ret, err = clusterstore:call("eval", redis_script, 1, key)
		if not ret then
			clusterstore:close()
			return nil, err, nil
		end
		-- Extract values
		clusterstore:close()
		if ret[1] == ngx.null then
			ret[1] = nil
		end
		if ret[2] < 0 then
			ret[2] = ret[2] + 1
		end
		return ret[1], nil, ret[2]
	end
	local value, err, hit_level
	if self.use_redis then
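A note on the `ret[2] + 1` adjustment in the callback above: Redis `TTL` returns `-1` for a key with no expiry and `-2` for a missing key, while the cache callback convention treats `0` as "never expires". A minimal Python sketch of that mapping (the function name is illustrative, not part of the codebase):

```python
def normalize_redis_ttl(ttl: int) -> int:
    """Map a Redis TTL reply to cache-callback TTL semantics.

    Redis: -1 = key exists without expiry, -2 = key missing.
    Callback: 0 = never expires, so negative replies are shifted
    up by one, mirroring the Lua `if ret[2] < 0 then ret[2] = ret[2] + 1 end`.
    """
    if ttl < 0:
        return ttl + 1
    return ttl
```

Positive TTLs pass through unchanged; `-1` (no expiry) becomes `0`.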
@@ -96,7 +89,7 @@ function cachestore:get(key)
	else
		value, err, hit_level = self.cache:get(key)
	end
	if value == nil and err ~= nil then
		return false, err
	end
	self.logger:log(ngx.INFO, "hit level for " .. key .. " = " .. tostring(hit_level))
@@ -124,18 +117,19 @@ end

function cachestore:set_redis(key, value, ex)
	-- Connect to redis
	local clusterstore = require "bunkerweb.clusterstore":new()
	local ok, err = clusterstore:connect()
	if not ok then
		return false, "can't connect to redis : " .. err
	end
	-- Set value with ttl
	local default_ex = ex or 30
	local ok, err = clusterstore:call("set", key, value, "EX", default_ex)
	if err then
		clusterstore:close()
		return false, "SET failed : " .. err
	end
	clusterstore:close()
	return true
end
@@ -155,17 +149,18 @@ end

function cachestore:del_redis(key)
	-- Connect to redis
	local clusterstore = require "bunkerweb.clusterstore":new()
	local ok, err = clusterstore:connect()
	if not ok then
		return false, "can't connect to redis : " .. err
	end
	-- Delete the key
	local ok, err = clusterstore:del(key)
	if err then
		clusterstore:close()
		return false, "DEL failed : " .. err
	end
	clusterstore:close()
	return true
end

---

@@ -62,7 +62,7 @@ function clusterstore:connect()
		return false, err
	end
	if times == 0 then
		local select, err = redis_client:select(tonumber(self.variables["REDIS_DATABASE"]))
		if err then
			self:close()
			return false, err
@@ -74,8 +74,9 @@ end

function clusterstore:close()
	if self.redis_client then
		-- Equivalent to close but keep a pool of connections
		local ok, err = self.redis_client:set_keepalive(tonumber(self.variables["REDIS_KEEPALIVE_IDLE"]), tonumber(self.variables["REDIS_KEEPALIVE_POOL"]))
		self.redis_client = nil
		return ok, err
	end
	return false, "not connected"
end
@@ -102,7 +103,7 @@ function clusterstore:multi(calls)
	-- Loop on calls
	for i, call in ipairs(calls) do
		local method = call[1]
		local args = unpack(call[2])
		local ok, err = self.redis_client[method](self.redis_client, args)
		if not ok then
			return false, method .. "() failed : " .. err

---

@@ -30,7 +30,7 @@ function datastore:keys()
	return self.dict:get_keys(0)
end

function datastore:ttl(key)
	local ttl, err = self.dict:ttl(key)
	if not ttl then
		return false, err

---

@@ -99,12 +99,15 @@ helpers.fill_ctx = function()
	if not ngx.shared.cachestore then
		data.kind = "stream"
	end
	data.remote_addr = ngx.var.remote_addr
	data.uri = ngx.var.uri
	data.request_uri = ngx.var.request_uri
	data.request_method = ngx.var.request_method
	data.http_user_agent = ngx.var.http_user_agent
	data.http_host = ngx.var.http_host
	data.server_name = ngx.var.server_name
	-- IP data : global
	local ip_is_global, err = utils.ip_is_global(data.remote_addr)
	if ip_is_global == nil then
		table.insert(errors, "can't check if IP is global : " .. err)
	else

---

@@ -27,6 +27,12 @@ function plugin:initialize(id)
		end
		self.variables[k] = value
	end
	-- Is loading
	local is_loading, err = utils.get_variable("IS_LOADING", false)
	if is_loading == nil then
		self.logger:log(ngx.ERR, "can't get IS_LOADING variable : " .. err)
	end
	self.is_loading = is_loading == "yes"
end

function plugin:get_id()

---

@@ -338,7 +338,7 @@ utils.get_rdns = function(ip)
	return false, nil
end

utils.get_ips = function(fqdn)
	-- Get resolvers
	local resolvers, err = utils.get_resolvers()
	if not resolvers then
@@ -433,57 +433,153 @@ end

utils.get_session = function()
	-- Session already in context
	if ngx.ctx.bw.session then
		return ngx.ctx.bw.session, ngx.ctx.bw.session_err, ngx.ctx.bw.session_exists, ngx.ctx.bw.session_refreshed
	end
	-- Open session and fill ctx
	local _session, err, exists, refreshed = session.start()
	ngx.ctx.bw.session_err = nil
	if err and err ~= "missing session cookie" and err ~= "no session" then
		logger:log(ngx.WARN, "can't start session : " .. err)
		ngx.ctx.bw.session_err = err
	end
	ngx.ctx.bw.session = _session
	ngx.ctx.bw.session_exists = exists
	ngx.ctx.bw.session_refreshed = refreshed
	ngx.ctx.bw.session_saved = false
	ngx.ctx.bw.session_data = _session:get_data()
	if not ngx.ctx.bw.session_data then
		ngx.ctx.bw.session_data = {}
	end
	return _session, ngx.ctx.bw.session_err, exists, refreshed
end

utils.save_session = function()
	-- Check if save is needed
	if ngx.ctx.bw.session and not ngx.ctx.bw.session_saved then
		ngx.ctx.bw.session:set_data(ngx.ctx.bw.session_data)
		local ok, err = ngx.ctx.bw.session:save()
		if err then
			logger:log(ngx.ERR, "can't save session : " .. err)
			return false, "can't save session : " .. err
		end
		ngx.ctx.bw.session_saved = true
		return true, "session saved"
	elseif ngx.ctx.bw.session_saved then
		return true, "session already saved"
	end
	return true, "no session"
end

utils.set_session_var = function(key, value)
	-- Set new data
	if ngx.ctx.bw.session then
		ngx.ctx.bw.session_data[key] = value
		return true, "value set"
	end
	return false, "no session"
end

utils.get_session_var = function(key)
	-- Get data
	if ngx.ctx.bw.session then
		if ngx.ctx.bw.session_data[key] then
			return true, "data present", ngx.ctx.bw.session_data[key]
		end
		return true, "no data"
	end
	return false, "no session"
end
utils.is_banned = function(ip)
-- Check on local datastore
local reason, err = datastore:get("bans_ip_" .. ip)
if not reason and err ~= "not found" then
return nil, "datastore:get() error : " .. reason
elseif reason and err ~= "not found" then
local ok, ttl = datastore:ttl("bans_ip_" .. ip)
if not ok then
return true, reason, -1
end
return true, reason, ttl
end
-- Redis case
local use_redis, err = utils.get_variable("USE_REDIS", false)
if not use_redis then
return nil, "can't get USE_REDIS variable : " .. err
elseif use_redis ~= "yes" then
return false, "not banned"
end
-- Connect
local clusterstore = require "bunkerweb.clusterstore":new()
local ok, err = clusterstore:connect()
if not ok then
return nil, "can't connect to redis server : " .. err
end
-- Redis atomic script : GET+TTL
local redis_script = [[
local ret_get = redis.pcall("GET", KEYS[1])
if type(ret_get) == "table" and ret_get["err"] ~= nil then
redis.log(redis.LOG_WARNING, "access GET error : " .. ret_get["err"])
return ret_get
end
local ret_ttl = nil
if ret_get ~= nil then
ret_ttl = redis.pcall("TTL", KEYS[1])
if type(ret_ttl) == "table" and ret_ttl["err"] ~= nil then
redis.log(redis.LOG_WARNING, "access TTL error : " .. ret_ttl["err"])
return ret_ttl
end
end
return {ret_get, ret_ttl}
]]
-- Execute redis script
local data, err = clusterstore:call("eval", redis_script, 1, "bans_ip_" .. ip)
if not data then
clusterstore:close()
return nil, "redis call error : " .. err
elseif data.err then
clusterstore:close()
return nil, "redis script error : " .. data.err
elseif data[1] ~= ngx.null then
clusterstore:close()
-- Update local cache
local ok, err = datastore:set("bans_ip_" .. ip, data[1], data[2])
if not ok then
return nil, "datastore:set() error : " .. err
end
return true, data[1], data[2]
end
clusterstore:close()
return false, "not banned"
end
utils.add_ban = function(ip, reason, ttl)
-- Set on local datastore
local ok, err = datastore:set("bans_ip_" .. ip, reason, ttl)
if not ok then
return false, "datastore:set() error : " .. err
end
-- Set on redis
local use_redis, err = utils.get_variable("USE_REDIS", false)
if not use_redis then
return nil, "can't get USE_REDIS variable : " .. err
elseif use_redis ~= "yes" then
return true, "success"
end
-- Connect
local clusterstore = require "bunkerweb.clusterstore":new()
local ok, err = clusterstore:connect()
if not ok then
return false, "can't connect to redis server : " .. err
end
-- SET call
local ok, err = clusterstore:call("set", "bans_ip_" .. ip, reason, "EX", ttl)
if not ok then
clusterstore:close()
return false, "redis SET failed : " .. err
end
clusterstore:close()
return true, "success"
end
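`add_ban` is the write side of the same pattern: the ban always lands in the local datastore, and is mirrored to Redis (via `SET key reason EX ttl`) only when `USE_REDIS` is enabled. A minimal sketch with dicts as stand-ins (names are illustrative):

```python
import time

local_cache = {}   # ip -> (reason, expires_at)
redis_store = {}   # stand-in for Redis

def add_ban(ip, reason, ttl, use_redis=True, now=None):
    now = now or time.time()
    # always write locally first
    local_cache[ip] = (reason, now + ttl)
    if use_redis:
        # equivalent of: SET bans_ip_<ip> <reason> EX <ttl>
        redis_store[ip] = (reason, now + ttl)
    return True, "success"
```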
return utils


@@ -1,68 +0,0 @@
local datastore = require "datastore"
local cjson = require "cjson"
local plugins = {}
plugins.load = function(self, path)
-- Read plugin.json file
local file = io.open(path .. "/plugin.json")
if not file then
return false, "can't read plugin.json file"
end
-- Decode plugin.json
-- TODO : check return values of file:read and cjson.decode
local data = cjson.decode(file:read("*a"))
file:close()
-- Check required fields
local required_fields = {"id", "order", "name", "description", "version", "settings"}
for i, field in ipairs(required_fields) do
if data[field] == nil then
return false, "missing field " .. field .. " in plugin.json"
end
-- TODO : check values and types with regex
end
-- Get existing plugins
local list, err = plugins:list()
if not list then
return false, err
end
-- Add our plugin to existing list and sort it
table.insert(list, data)
table.sort(list, function (a, b)
return a.order < b.order
end)
-- Save new plugin list in datastore
local ok, err = datastore:set("plugins", cjson.encode(list))
if not ok then
return false, "can't save new plugin list"
end
-- Save default settings value
for variable, value in pairs(data.settings) do
ok, err = datastore:set("plugin_" .. data.id .. "_" .. variable, value["default"])
if not ok then
return false, "can't save default variable value of " .. variable .. " into datastore"
end
end
-- Return the plugin
return data, "success"
end
plugins.list = function(self)
-- Get encoded plugins from datastore
local encoded_plugins, err = datastore:get("plugins")
if not encoded_plugins then
return false, "can't get encoded plugins from datastore"
end
-- Decode and return the list
return cjson.decode(encoded_plugins), "success"
end
return plugins


@@ -1,73 +0,0 @@
local clusterstore = require "clusterstore"
local datastore = require "datastore"
local utils = require "utils"
local redisutils = {}
redisutils.ban = function(ip)
-- Connect
local redis_client, err = clusterstore:connect()
if not redis_client then
return nil, "can't connect to redis server : " .. err
end
-- Start transaction
local ok, err = redis_client:multi()
if not ok then
clusterstore:close(redis_client)
return nil, "MULTI failed : " .. err
end
-- Get ban
ok, err = redis_client:get("ban_" .. ip)
if not ok then
clusterstore:close(redis_client)
return nil, "GET failed : " .. err
end
-- Get ttl
ok, err = redis_client:ttl("ban_" .. ip)
if not ok then
clusterstore:close(redis_client)
return nil, "TTL failed : " .. err
end
-- Exec transaction
local exec, err = redis_client:exec()
if err then
clusterstore:close(redis_client)
return nil, "EXEC failed : " .. err
end
if type(exec) ~= "table" then
clusterstore:close(redis_client)
return nil, "EXEC result is not a table"
end
-- Extract ban reason
local reason = exec[1]
if type(reason) == "table" then
clusterstore:close(redis_client)
return nil, "GET failed : " .. reason[2]
end
if reason == ngx.null then
clusterstore:close(redis_client)
datastore:delete("bans_ip_" .. ip)
return false
end
-- Extract ttl
local ttl = exec[2]
if type(ttl) == "table" then
clusterstore:close(redis_client)
return nil, "TTL failed : " .. ttl[2]
end
if ttl <= 0 then
clusterstore:close(redis_client)
return nil, "TTL returned invalid value : " .. tostring(ttl)
end
ok, err = datastore:set("bans_ip_" .. ip, reason, ttl)
if not ok then
clusterstore:close(redis_client)
datastore:delete("bans_ip_" .. ip)
return nil, "can't save ban to local datastore : " .. err
end
-- Return reason
clusterstore:close(redis_client)
return true, reason
end
return redisutils


@@ -15,26 +15,52 @@ server {
	# check IP and do the API call
	access_by_lua_block {
		-- Instantiate objects and import required modules
		local logger = require "bunkerweb.logger":new("API")
		local api = require "bunkerweb.api":new()
		local helpers = require "bunkerweb.helpers"

		-- Start API handler
		logger:log(ngx.INFO, "API handler started")

		-- Fill ctx
		logger:log(ngx.INFO, "filling ngx.ctx ...")
		local ok, ret, errors = helpers.fill_ctx()
		if not ok then
			logger:log(ngx.ERR, "fill_ctx() failed : " .. ret)
		elseif errors then
			for i, error in ipairs(errors) do
				logger:log(ngx.ERR, "fill_ctx() error " .. tostring(i) .. " : " .. error)
			end
		end
		logger:log(ngx.INFO, "ngx.ctx filled (ret = " .. ret .. ")")

		-- Check host header
		if not ngx.ctx.bw.http_host or ngx.ctx.bw.http_host ~= "{{ API_SERVER_NAME }}" then
			logger:log(ngx.WARN, "wrong Host header from IP " .. ngx.ctx.bw.remote_addr)
			return ngx.exit(ngx.HTTP_CLOSE)
		end

		-- Check IP
		local ok, err = api:is_allowed_ip()
		if not ok then
			logger:log(ngx.WARN, "can't validate access from IP " .. ngx.ctx.bw.remote_addr .. " : " .. err)
			return ngx.exit(ngx.HTTP_CLOSE)
		end
		logger:log(ngx.NOTICE, "validated access from IP " .. ngx.ctx.bw.remote_addr)

		-- Do API call
		local ok, err, status, resp = api:do_api_call()
		if not ok then
			logger:log(ngx.WARN, "call from " .. ngx.ctx.bw.remote_addr .. " on " .. ngx.ctx.bw.uri .. " failed : " .. err)
		else
			logger:log(ngx.NOTICE, "successful call from " .. ngx.ctx.bw.remote_addr .. " on " .. ngx.ctx.bw.uri .. " : " .. err)
		end

		-- Start API handler
		logger:log(ngx.INFO, "API handler ended")

		-- Send response
		ngx.status = status
		ngx.say(resp)
		return ngx.exit(status)


@@ -44,6 +44,18 @@ server {
		local datastore = cdatastore:new()
		logger:log(ngx.INFO, "log_default phase started")

		-- Fill ctx
		logger:log(ngx.INFO, "filling ngx.ctx ...")
		local ok, ret, errors = helpers.fill_ctx()
		if not ok then
			logger:log(ngx.ERR, "fill_ctx() failed : " .. ret)
		elseif errors then
			for i, error in ipairs(errors) do
				logger:log(ngx.ERR, "fill_ctx() error " .. tostring(i) .. " : " .. error)
			end
		end
		logger:log(ngx.INFO, "ngx.ctx filled (ret = " .. ret .. ")")

		-- Get plugins
		local plugins, err = datastore:get("plugins")
		if not plugins then


@@ -53,6 +53,9 @@ lua_shared_dict cachestore_locks {{ CACHESTORE_LOCKS_MEMORY_SIZE }};

# LUA init block
include /etc/nginx/init-lua.conf;

# LUA init worker block
include /etc/nginx/init-worker-lua.conf;

# API server
{% if USE_API == "yes" %}include /etc/nginx/api.conf;{% endif +%}


@@ -1,76 +1,76 @@
init_by_lua_block {
	local class = require "middleclass"
	local clogger = require "bunkerweb.logger"
	local helpers = require "bunkerweb.helpers"
	local cdatastore = require "bunkerweb.datastore"
	local cjson = require "cjson"

	-- Start init phase
	local logger = clogger:new("INIT")
	local datastore = cdatastore:new()
	logger:log(ngx.NOTICE, "init phase started")

	-- Remove previous data from the datastore
	logger:log(ngx.NOTICE, "deleting old keys from datastore ...")
	local data_keys = {"^plugin_", "^variable_", "^plugins$", "^api_", "^misc_"}
	for i, key in pairs(data_keys) do
		local ok, err = datastore:delete_all(key)
		if not ok then
			logger:log(ngx.ERR, "can't delete " .. key .. " from datastore : " .. err)
			return false
		end
		logger:log(ngx.INFO, "deleted " .. key .. " from datastore")
	end
	logger:log(ngx.NOTICE, "deleted old keys from datastore")

	-- Load variables into the datastore
	logger:log(ngx.NOTICE, "saving variables into datastore ...")
	local file = io.open("/etc/nginx/variables.env")
	if not file then
		logger:log(ngx.ERR, "can't open /etc/nginx/variables.env file")
		return false
	end
	file:close()
	for line in io.lines("/etc/nginx/variables.env") do
		local variable, value = line:match("(.+)=(.*)")
		local ok, err = datastore:set("variable_" .. variable, value)
		if not ok then
			logger:log(ngx.ERR, "can't save variable " .. variable .. " into datastore : " .. err)
			return false
		end
		logger:log(ngx.INFO, "saved variable " .. variable .. "=" .. value .. " into datastore")
	end
	logger:log(ngx.NOTICE, "saved variables into datastore")

	-- Set API values into the datastore
	logger:log(ngx.NOTICE, "saving API values into datastore ...")
	local value, err = datastore:get("variable_USE_API")
	if not value then
		logger:log(ngx.ERR, "can't get variable USE_API from the datastore : " .. err)
		return false
	end
	if value == "yes" then
		local value, err = datastore:get("variable_API_WHITELIST_IP")
		if not value then
			logger:log(ngx.ERR, "can't get variable API_WHITELIST_IP from the datastore : " .. err)
			return false
		end
		local whitelists = {}
		for whitelist in value:gmatch("%S+") do
			table.insert(whitelists, whitelist)
		end
		local ok, err = datastore:set("api_whitelist_ip", cjson.encode(whitelists))
		if not ok then
			logger:log(ngx.ERR, "can't save API whitelist_ip to datastore : " .. err)
			return false
		end
		logger:log(ngx.INFO, "saved API whitelist_ip into datastore")
	end
	logger:log(ngx.NOTICE, "saved API values into datastore")

	-- Load plugins into the datastore
	logger:log(ngx.NOTICE, "saving plugins into datastore ...")
	local plugins = {}
	local plugin_paths = {"/usr/share/bunkerweb/core", "/etc/bunkerweb/plugins"}
	for i, plugin_path in ipairs(plugin_paths) do
@@ -78,61 +78,61 @@ for i, plugin_path in ipairs(plugin_paths) do
		for path in paths:lines() do
			local ok, plugin = helpers.load_plugin(path .. "/plugin.json")
			if not ok then
				logger:log(ngx.ERR, plugin)
			else
				local ok, err = datastore:set("plugin_" .. plugin.id, cjson.encode(plugin))
				if not ok then
					logger:log(ngx.ERR, "can't save " .. plugin.id .. " into datastore : " .. err)
				else
					table.insert(plugins, plugin)
					table.sort(plugins, function (a, b)
						return a.order < b.order
					end)
					logger:log(ngx.NOTICE, "loaded plugin " .. plugin.id .. " v" .. plugin.version)
				end
			end
		end
	end
	local ok, err = datastore:set("plugins", cjson.encode(plugins))
	if not ok then
		logger:log(ngx.ERR, "can't save plugins into datastore : " .. err)
		return false
	end
	logger:log(ngx.NOTICE, "saved plugins into datastore")

	-- Call init() methods
	logger:log(ngx.NOTICE, "calling init() methods of plugins ...")
	for i, plugin in ipairs(plugins) do
		-- Require call
		local plugin_lua, err = helpers.require_plugin(plugin.id)
		if plugin_lua == false then
			logger:log(ngx.ERR, err)
		elseif plugin_lua == nil then
			logger:log(ngx.NOTICE, err)
		else
			-- Check if plugin has init method
			if plugin_lua.init ~= nil then
				-- New call
				local ok, plugin_obj = helpers.new_plugin(plugin_lua)
				if not ok then
					logger:log(ngx.ERR, plugin_obj)
				else
					local ok, ret = helpers.call_plugin(plugin_obj, "init")
					if not ok then
						logger:log(ngx.ERR, ret)
					elseif not ret.ret then
						logger:log(ngx.ERR, plugin.id .. ":init() call failed : " .. ret.msg)
					else
						logger:log(ngx.NOTICE, plugin.id .. ":init() call successful : " .. ret.msg)
					end
				end
			else
				logger:log(ngx.NOTICE, "skipped execution of " .. plugin.id .. " because method init() is not defined")
			end
		end
	end
	logger:log(ngx.NOTICE, "called init() methods of plugins")
	logger:log(ngx.NOTICE, "init phase ended")
}


@@ -0,0 +1,39 @@
lua_shared_dict ready_lock 16k;
init_worker_by_lua_block {
-- Our timer function
local ready_log = function(premature)
-- Instantiate objects
local logger = require "bunkerweb.logger":new("INIT")
local datastore = require "bunkerweb.datastore":new()
		local lock, err = require "resty.lock":new("ready_lock")
		if not lock then
			logger:log(ngx.ERR, "lock:new() failed : " .. err)
			return
		end
-- Acquire lock
local elapsed, err = lock:lock("ready")
if elapsed == nil then
logger:log(ngx.ERR, "lock:lock() failed : " .. err)
else
-- Display ready log
local ok, err = datastore:get("misc_ready")
if not ok and err ~= "not found" then
logger:log(ngx.ERR, "datastore:get() failed : " .. err)
elseif not ok and err == "not found" then
logger:log(ngx.NOTICE, "BunkerWeb is ready to fool hackers ! 🚀")
local ok, err = datastore:set("misc_ready", "ok")
if not ok then
logger:log(ngx.ERR, "datastore:set() failed : " .. err)
end
end
end
-- Release lock
lock:unlock()
end
-- Start timer
ngx.timer.at(5, ready_log)
}


@@ -5,7 +5,7 @@ local clogger = require "bunkerweb.logger"
local helpers = require "bunkerweb.helpers"
local utils = require "bunkerweb.utils"
local cdatastore = require "bunkerweb.datastore"
local cclusterstore = require "bunkerweb.clusterstore"
local cjson = require "cjson"

-- Don't process internal requests
@@ -17,19 +17,8 @@ end
-- Start access phase
local datastore = cdatastore:new()
logger:log(ngx.INFO, "access phase started")

-- Fill ctx
logger:log(ngx.INFO, "filling ngx.ctx ...")
local ok, ret, errors = helpers.fill_ctx()
@@ -43,13 +32,14 @@ end
logger:log(ngx.INFO, "ngx.ctx filled (ret = " .. ret .. ")")

-- Process bans as soon as possible
local banned, reason, ttl = utils.is_banned(ngx.ctx.bw.remote_addr)
if banned == nil then
	logger:log(ngx.ERR, "can't check if IP " .. ngx.ctx.bw.remote_addr .. " is banned : " .. reason)
elseif banned then
	logger:log(ngx.WARN, "IP " .. ngx.ctx.bw.remote_addr .. " is banned with reason " .. reason .. " (" .. tostring(ttl) .. "s remaining)")
	return ngx.exit(utils.get_deny_status())
else
	logger:log(ngx.INFO, "IP " .. ngx.ctx.bw.remote_addr .. " is not banned")
end

-- Get plugins
@@ -62,6 +52,8 @@ plugins = cjson.decode(plugins)
-- Call access() methods
logger:log(ngx.INFO, "calling access() methods of plugins ...")
local status = nil
local redirect = nil
for i, plugin in ipairs(plugins) do
	-- Require call
	local plugin_lua, err = helpers.require_plugin(plugin.id)
@@ -83,20 +75,20 @@ for i, plugin in ipairs(plugins) do
	elseif not ret.ret then
		logger:log(ngx.ERR, plugin.id .. ":access() call failed : " .. ret.msg)
	else
		logger:log(ngx.INFO, plugin.id .. ":access() call successful : " .. ret.msg)
	end
	if ret.status then
		if ret.status == utils.get_deny_status() then
			ngx.ctx.reason = plugin.id
			logger:log(ngx.WARN, "denied access from " .. plugin.id .. " : " .. ret.msg)
		else
			logger:log(ngx.NOTICE, plugin.id .. " returned status " .. tostring(ret.status) .. " : " .. ret.msg)
		end
		status = ret.status
		break
	elseif ret.redirect then
		logger:log(ngx.NOTICE, plugin.id .. " redirect to " .. ret.redirect .. " : " .. ret.msg)
		redirect = ret.redirect
		break
	end
end
@@ -111,18 +103,20 @@ logger:log(ngx.INFO, "called access() methods of plugins")
local ok, err = utils.save_session()
if not ok then
	logger:log(ngx.ERR, "can't save session : " .. err)
else
	logger:log(ngx.INFO, "session save return : " .. err)
end

logger:log(ngx.INFO, "access phase ended")

-- Return status if needed
if status then
	return ngx.exit(status)
end

-- Redirect if needed
if redirect then
	return ngx.redirect(redirect)
end

return true
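The access phase walks the plugin list and stops at the first plugin that returns a status or a redirect; the result is acted on only after the loop. A compact sketch of that dispatch, with plugins modeled as callables returning dicts (all names hypothetical):

```python
DENY_STATUS = 403  # stand-in for utils.get_deny_status()

def run_access_phase(plugins):
    status, redirect = None, None
    for plugin in plugins:
        ret = plugin()  # e.g. {"status": 403} or {"redirect": "/challenge"} or {}
        if ret.get("status"):
            status = ret["status"]   # first status short-circuits the phase
            break
        if ret.get("redirect"):
            redirect = ret["redirect"]
            break
    return status, redirect
```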


@@ -17,6 +17,18 @@ end
local datastore = cdatastore:new()
logger:log(ngx.INFO, "header phase started")

-- Fill ctx
logger:log(ngx.INFO, "filling ngx.ctx ...")
local ok, ret, errors = helpers.fill_ctx()
if not ok then
	logger:log(ngx.ERR, "fill_ctx() failed : " .. ret)
elseif errors then
	for i, error in ipairs(errors) do
		logger:log(ngx.ERR, "fill_ctx() error " .. tostring(i) .. " : " .. error)
	end
end
logger:log(ngx.INFO, "ngx.ctx filled (ret = " .. ret .. ")")

-- Get plugins
local plugins, err = datastore:get("plugins")
if not plugins then
@@ -48,7 +60,7 @@ for i, plugin in ipairs(plugins) do
		elseif not ret.ret then
			logger:log(ngx.ERR, plugin.id .. ":header() call failed : " .. ret.msg)
		else
			logger:log(ngx.INFO, plugin.id .. ":header() call successful : " .. ret.msg)
		end
	end
else


@@ -11,6 +11,18 @@ local logger = clogger:new("LOG")
local datastore = cdatastore:new()
logger:log(ngx.INFO, "log phase started")

-- Fill ctx
logger:log(ngx.INFO, "filling ngx.ctx ...")
local ok, ret, errors = helpers.fill_ctx()
if not ok then
	logger:log(ngx.ERR, "fill_ctx() failed : " .. ret)
elseif errors then
	for i, error in ipairs(errors) do
		logger:log(ngx.ERR, "fill_ctx() error " .. tostring(i) .. " : " .. error)
	end
end
logger:log(ngx.INFO, "ngx.ctx filled (ret = " .. ret .. ")")

-- Get plugins
local plugins, err = datastore:get("plugins")
if not plugins then
@@ -42,7 +54,7 @@ for i, plugin in ipairs(plugins) do
		elseif not ret.ret then
			logger:log(ngx.ERR, plugin.id .. ":log() call failed : " .. ret.msg)
		else
			logger:log(ngx.INFO, plugin.id .. ":log() call successful : " .. ret.msg)
		end
	end
else


@@ -5,6 +5,7 @@ local class = require "middleclass"
local clogger = require "bunkerweb.logger"
local helpers = require "bunkerweb.helpers"
local cdatastore = require "bunkerweb.datastore"
local ccachestore = require "bunkerweb.cachestore"
local cjson = require "cjson"

-- Don't process internal requests
@@ -18,6 +19,25 @@ end
local datastore = cdatastore:new()
logger:log(ngx.INFO, "set phase started")

-- Update cachestore only once and before any other code
local cachestore = ccachestore:new()
local ok, err = cachestore.cache:update()
if not ok then
	logger:log(ngx.ERR, "can't update cachestore : " .. err)
end

-- Fill ctx
logger:log(ngx.INFO, "filling ngx.ctx ...")
local ok, ret, errors = helpers.fill_ctx()
if not ok then
	logger:log(ngx.ERR, "fill_ctx() failed : " .. ret)
elseif errors then
	for i, error in ipairs(errors) do
		logger:log(ngx.ERR, "fill_ctx() error " .. tostring(i) .. " : " .. error)
	end
end
logger:log(ngx.INFO, "ngx.ctx filled (ret = " .. ret .. ")")

-- Get plugins
local plugins, err = datastore:get("plugins")
if not plugins then
@@ -49,7 +69,7 @@ for i, plugin in ipairs(plugins) do
		elseif not ret.ret then
			logger:log(ngx.ERR, plugin.id .. ":set() call failed : " .. ret.msg)
		else
			logger:log(ngx.INFO, plugin.id .. ":set() call successful : " .. ret.msg)
		end
	end
else


@ -23,46 +23,59 @@ function antibot:access()
return self:ret(true, "antibot not activated") return self:ret(true, "antibot not activated")
end end
-- Prepare challenge
local ok, err = self:prepare_challenge(antibot, challenge_uri)
if not ok then
return self:ret(false, "can't prepare challenge : " .. err, ngx.HTTP_INTERNAL_SERVER_ERROR)
end
-- Don't go further if client resolved the challenge -- Don't go further if client resolved the challenge
local resolved, err, original_uri = self:challenge_resolved(antibot) local resolved, err, original_uri = self:challenge_resolved()
if resolved == nil then if resolved == nil then
return self:ret(false, "can't check if challenge is resolved : " .. err) return self:ret(false, "can't check if challenge is resolved : " .. err)
end end
if resolved then if resolved then
if ngx.var.uri == challenge_uri then if ngx.ctx.bw.uri == self.variables["ANTIBOT_URI"] then
return self:ret(true, "client already resolved the challenge", nil, original_uri) return self:ret(true, "client already resolved the challenge", nil, original_uri)
end end
return self:ret(true, "client already resolved the challenge") return self:ret(true, "client already resolved the challenge")
end end
-- Redirect to challenge page -- Redirect to challenge page
if ngx.var.uri ~= challenge_uri then if ngx.ctx.bw.uri ~= self.variables["ANTIBOT_URI"] then
return self:ret(true, "redirecting client to the challenge uri", nil, challenge_uri) -- Prepare challenge
local ok, err = self:prepare_challenge()
if not ok then
return self:ret(false, "can't prepare challenge : " .. err, ngx.HTTP_INTERNAL_SERVER_ERROR)
end
return self:ret(true, "redirecting client to the challenge uri", nil, self.variables["ANTIBOT_URI"])
end
-- Direct access without session => prepare challenge
if not self:challenge_prepared() then
local ok, err = self:prepare_challenge()
if not ok then
return self:ret(false, "can't prepare challenge : " .. err, ngx.HTTP_INTERNAL_SERVER_ERROR)
end
end end
-- Display challenge needed -- Display challenge needed
if ngx.var.request_method == "GET" then if ngx.ctx.bw.request_method == "GET" then
ngx.ctx.antibot_display_content = true ngx.ctx.bw.antibot_display_content = true
return self:ret(true, "displaying challenge to client", ngx.HTTP_OK) return self:ret(true, "displaying challenge to client", ngx.OK)
end end
-- Check challenge -- Check challenge
if ngx.var.request_method == "POST" then if ngx.ctx.bw.request_method == "POST" then
local ok, err, redirect = self:check_challenge(antibot) local ok, err, redirect = self:check_challenge()
if ok == nil then if ok == nil then
return self:ret(false, "check challenge error : " .. err, ngx.HTTP_INTERNAL_SERVER_ERROR) return self:ret(false, "check challenge error : " .. err, ngx.HTTP_INTERNAL_SERVER_ERROR)
elseif not ok then
self.logger:log(ngx.WARN, "client failed challenge : " .. err)
local ok, err = self:prepare_challenge()
if not ok then
return self:ret(false, "can't prepare challenge : " .. err, ngx.HTTP_INTERNAL_SERVER_ERROR)
end
end end
if redirect then if redirect then
return self:ret(true, "check challenge redirect : " .. redirect, nil, redirect) return self:ret(true, "check challenge redirect : " .. redirect, nil, redirect)
end end
ngx.ctx.antibot_display_content = true ngx.ctx.bw.antibot_display_content = true
return self:ret(true, "displaying challenge to client", ngx.HTTP_OK) return self:ret(true, "displaying challenge to client", ngx.OK)
end end
-- Method is suspicious, let's deny the request -- Method is suspicious, let's deny the request
@ -70,20 +83,16 @@ function antibot:access()
end end
function antibot:content() function antibot:content()
-- Check if access is needed -- Check if content is needed
local antibot, err = utils.get_variable("USE_ANTIBOT") if not self.variables["USE_ANTIBOT"] or self.variables["USE_ANTIBOT"] == "no" then
if antibot == nil then
return self:ret(false, err)
end
if antibot == "no" then
return self:ret(true, "antibot not activated") return self:ret(true, "antibot not activated")
end end
-- Check if display content is needed -- Check if display content is needed
if not ngx.ctx.antibot_display_content then if not ngx.ctx.bw.antibot_display_content then
return self:ret(true, "display content not needed") return self:ret(true, "display content not needed", nil, "/")
end end
-- Display content -- Display content
local ok, err = self:display_challenge(antibot) local ok, err = self:display_challenge()
if not ok then if not ok then
return self:ret(false, "display challenge error : " .. err) return self:ret(false, "display challenge error : " .. err)
end end
@ -91,41 +100,50 @@ function antibot:content()
end end
function antibot:challenge_resolved() function antibot:challenge_resolved()
local session, err, exists = utils.get_session() local session, err, exists, refreshed = utils.get_session()
if err then if not exists then
return false, "session error : " .. err return false, "no session set"
end end
local raw_data = get_session("antibot") local ok, err, raw_data = utils.get_session_var("antibot")
if not raw_data then if not raw_data then
return false, "session is set but no antibot data", nil return false, "session is set but no antibot data"
end end
local data = cjson.decode(raw_data) local data = raw_data
if data.resolved and self.variables["USE_ANTIBOT"] == data.antibot then if data.resolved and self.variables["USE_ANTIBOT"] == data.type then
return true, "challenge resolved", data.original_uri return true, "challenge resolved", data.original_uri
end end
return false, "challenge not resolved", data.original_uri return false, "challenge not resolved", data.original_uri
end end
function antibot:prepare_challenge() function antibot:challenge_prepared()
local session, err, exists = utils.get_session() local session, err, exists, refreshed = utils.get_session()
if err then if not exists then
return false, "session error : " .. err return false
end end
local ok, err, raw_data = utils.get_session_var("antibot")
if not raw_data then
return false
end
return self.variables["USE_ANTIBOT"] == raw_data.type
end
function antibot:prepare_challenge()
local session, err, exists, refreshed = utils.get_session()
local set_needed = false local set_needed = false
local data = nil local data = nil
if exists then if exists then
local raw_data = get_session("antibot") local ok, err, raw_data = utils.get_session_var("antibot")
if raw_data then if raw_data then
data = cjson.decode(raw_data) data = raw_data
end end
end end
if not data or current_data.antibot ~= self.variables["USE_ANTIBOT"] then if not data or data.type ~= self.variables["USE_ANTIBOT"] then
data = { data = {
type = self.variables["USE_ANTIBOT"], type = self.variables["USE_ANTIBOT"],
resolved = self.variables["USE_ANTIBOT"] == "cookie", resolved = self.variables["USE_ANTIBOT"] == "cookie",
original_uri = ngx.var.request_uri original_uri = ngx.ctx.bw.request_uri
} }
if ngx.var.original_uri == challenge_uri then if ngx.ctx.bw.uri == self.variables["ANTIBOT_URI"] then
data.original_uri = "/" data.original_uri = "/"
end end
set_needed = true set_needed = true
@@ -144,24 +162,27 @@ function antibot:prepare_challenge()
end end
end end
if set_needed then if set_needed then
utils.set_session("antibot", cjson.encode(data)) local ok, err = utils.set_session_var("antibot", data)
if not ok then
return false, "error while setting session antibot : " .. err
end
end end
return true, "prepared" return true, "prepared"
end end
function antibot:display_challenge(challenge_uri) function antibot:display_challenge()
-- Open session -- Open session
local session, err, exists = utils.get_session() local session, err, exists, refreshed = utils.get_session()
if err then if not exists then
return false, "can't open session : " .. err return false, "no session set"
end end
-- Get data -- Get data
local raw_data = get_session("antibot") local ok, err, raw_data = utils.get_session_var("antibot")
if not raw_data then if not raw_data then
return false, "session is set but no data" return false, "session is set but no data"
end end
local data = cjson.decode(raw_data) local data = raw_data
-- Check if session type is equal to antibot type -- Check if session type is equal to antibot type
if self.variables["USE_ANTIBOT"] ~= data.type then if self.variables["USE_ANTIBOT"] ~= data.type then
@@ -201,20 +222,20 @@ end
function antibot:check_challenge() function antibot:check_challenge()
-- Open session -- Open session
local session, err, exists = utils.get_session() local session, err, exists, refreshed = utils.get_session()
if err then if not exists then
return nil, "can't open session : " .. err, nil return false, "no session set"
end end
-- Get data -- Get data
local raw_data = get_session("antibot") local ok, err, raw_data = utils.get_session_var("antibot")
if not raw_data then if not raw_data then
return false, "session is set but no data", nil return false, "session is set but no data", nil
end end
local data = cjson.decode(raw_data) local data = raw_data
-- Check if session type is equal to antibot type -- Check if session type is equal to antibot type
if elf.variables["USE_ANTIBOT"] ~= data.type then if self.variables["USE_ANTIBOT"] ~= data.type then
return nil, "session type is different from antibot type", nil return nil, "session type is different from antibot type", nil
end end
@@ -227,7 +248,7 @@ function antibot:check_challenge()
ngx.req.read_body() ngx.req.read_body()
local args, err = ngx.req.get_post_args(1) local args, err = ngx.req.get_post_args(1)
if err == "truncated" or not args or not args["challenge"] then if err == "truncated" or not args or not args["challenge"] then
return false, "missing challenge arg", nil return nil, "missing challenge arg", nil
end end
local hash = sha256:new() local hash = sha256:new()
hash:update(data.random .. args["challenge"]) hash:update(data.random .. args["challenge"])
@ -237,7 +258,10 @@ function antibot:check_challenge()
return false, "wrong value", nil return false, "wrong value", nil
end end
data.resolved = true data.resolved = true
utils.set_session("antibot", cjson.encode(data)) local ok, err = utils.set_session_var("antibot", data)
if not ok then
return nil, "error while setting session antibot : " .. err
end
return true, "resolved", data.original_uri return true, "resolved", data.original_uri
end end
@@ -246,13 +270,16 @@ function antibot:check_challenge()
ngx.req.read_body() ngx.req.read_body()
local args, err = ngx.req.get_post_args(1) local args, err = ngx.req.get_post_args(1)
if err == "truncated" or not args or not args["captcha"] then if err == "truncated" or not args or not args["captcha"] then
return false, "missing challenge arg", nil return nil, "missing challenge arg", nil
end end
if data.text ~= args["captcha"] then if data.text ~= args["captcha"] then
return false, "wrong value", nil return false, "wrong value", nil
end end
data.resolved = true data.resolved = true
utils.set_session("antibot", cjson.encode(data)) local ok, err = utils.set_session_var("antibot", data)
if not ok then
return nil, "error while setting session antibot : " .. err
end
return true, "resolved", data.original_uri return true, "resolved", data.original_uri
end end
@@ -261,15 +288,15 @@ function antibot:check_challenge()
ngx.req.read_body() ngx.req.read_body()
local args, err = ngx.req.get_post_args(1) local args, err = ngx.req.get_post_args(1)
if err == "truncated" or not args or not args["token"] then if err == "truncated" or not args or not args["token"] then
return false, "missing challenge arg", nil return nil, "missing challenge arg", nil
end end
local httpc, err = http.new() local httpc, err = http.new()
if not httpc then if not httpc then
return false, "can't instantiate http object : " .. err, nil, nil return nil, "can't instantiate http object : " .. err, nil, nil
end end
local res, err = httpc:request_uri("https://www.google.com/recaptcha/api/siteverify", { local res, err = httpc:request_uri("https://www.google.com/recaptcha/api/siteverify", {
method = "POST", method = "POST",
body = "secret=" .. self.variables["ANTIBOT_RECAPTCHA_SECRET"] .. "&response=" .. args["token"] .. "&remoteip=" .. ngx.var.remote_addr, body = "secret=" .. self.variables["ANTIBOT_RECAPTCHA_SECRET"] .. "&response=" .. args["token"] .. "&remoteip=" .. ngx.ctx.bw.remote_addr,
headers = { headers = {
["Content-Type"] = "application/x-www-form-urlencoded" ["Content-Type"] = "application/x-www-form-urlencoded"
} }
@@ -286,7 +313,10 @@ function antibot:check_challenge()
return false, "client failed challenge with score " .. tostring(rdata.score), nil return false, "client failed challenge with score " .. tostring(rdata.score), nil
end end
data.resolved = true data.resolved = true
utils.set_session("antibot", cjson.encode(data)) local ok, err = utils.set_session_var("antibot", data)
if not ok then
return nil, "error while setting session antibot : " .. err
end
return true, "resolved", data.original_uri return true, "resolved", data.original_uri
end end
@@ -295,15 +325,15 @@ function antibot:check_challenge()
ngx.req.read_body() ngx.req.read_body()
local args, err = ngx.req.get_post_args(1) local args, err = ngx.req.get_post_args(1)
if err == "truncated" or not args or not args["token"] then if err == "truncated" or not args or not args["token"] then
return false, "missing challenge arg", nil return nil, "missing challenge arg", nil
end end
local httpc, err = http.new() local httpc, err = http.new()
if not httpc then if not httpc then
return false, "can't instantiate http object : " .. err, nil, nil return nil, "can't instantiate http object : " .. err, nil, nil
end end
local res, err = httpc:request_uri("https://hcaptcha.com/siteverify", { local res, err = httpc:request_uri("https://hcaptcha.com/siteverify", {
method = "POST", method = "POST",
body = "secret=" .. self.variables["ANTIBOT_HCAPTCHA_SECRET"] .. "&response=" .. args["token"] .. "&remoteip=" .. ngx.var.remote_addr, body = "secret=" .. self.variables["ANTIBOT_HCAPTCHA_SECRET"] .. "&response=" .. args["token"] .. "&remoteip=" .. ngx.ctx.bw.remote_addr,
headers = { headers = {
["Content-Type"] = "application/x-www-form-urlencoded" ["Content-Type"] = "application/x-www-form-urlencoded"
} }
@@ -320,7 +350,10 @@ function antibot:check_challenge()
return false, "client failed challenge", nil return false, "client failed challenge", nil
end end
data.resolved = true data.resolved = true
utils.set_session("antibot", cjson.encode(data)) local ok, err = utils.set_session_var("antibot", data)
if not ok then
return nil, "error while setting session antibot : " .. err
end
return true, "resolved", data.original_uri return true, "resolved", data.original_uri
end end
View file
@@ -1,16 +1,18 @@
{% if USE_ANTIBOT == "yes" +%} {% if USE_ANTIBOT != "no" +%}
location /{{ ANTIBOT_URI }} { location {{ ANTIBOT_URI }} {
default_type 'text/html';
root /usr/share/bunkerweb/core/antibot/files; root /usr/share/bunkerweb/core/antibot/files;
content_by_lua_block { content_by_lua_block {
local cantibot = require "antibot.antibot" local cantibot = require "antibot.antibot"
local clogger = require "bunkerweb.logger" local clogger = require "bunkerweb.logger"
local antibot = cantibot:new() local antibot = cantibot:new()
local logger = clogger:new("ANTIBOT") local logger = clogger:new("ANTIBOT")
local ok, err = antibot:content() local ret = antibot:content()
if not ok then if not ret.ret then
logger:log(ngx.ERR, "antibot:content() failed : " .. err) logger:log(ngx.ERR, "antibot:content() failed : " .. ret.msg)
else else
logger:log(ngx.INFO, "antibot:content() success : " .. err) logger:log(ngx.INFO, "antibot:content() success : " .. ret.msg)
end end
} }
} }
View file
@@ -1,6 +1,6 @@
{ {
"id": "antibot", "id": "antibot",
"order": 8, "order": 9,
"name": "Antibot", "name": "Antibot",
"description": "Bot detection by using a challenge.", "description": "Bot detection by using a challenge.",
"version": "0.1", "version": "0.1",
View file
@@ -1,8 +1,6 @@
local class = require "middleclass" local class = require "middleclass"
local plugin = require "bunkerweb.plugin" local plugin = require "bunkerweb.plugin"
local utils = require "bunkerweb.utils" local utils = require "bunkerweb.utils"
local datastore = require "bunkerweb.datastore"
local clusterstore = require "bunkerweb.clusterstore"
local badbehavior = class("badbehavior", plugin) local badbehavior = class("badbehavior", plugin)
@@ -19,7 +17,7 @@ end
function badbehavior:log() function badbehavior:log()
-- Check if we are whitelisted -- Check if we are whitelisted
if ngx.var.is_whitelisted == "yes" then if ngx.ctx.bw.is_whitelisted == "yes" then
return self:ret(true, "client is whitelisted") return self:ret(true, "client is whitelisted")
end end
-- Check if bad behavior is activated -- Check if bad behavior is activated
@@ -31,12 +29,12 @@ function badbehavior:log()
return self:ret(true, "not increasing counter") return self:ret(true, "not increasing counter")
end end
-- Check if we are already banned -- Check if we are already banned
local banned, err = self.datastore:get("bans_ip_" .. ngx.var.remote_addr) local banned, err = self.datastore:get("bans_ip_" .. ngx.ctx.bw.remote_addr)
if banned then if banned then
return self:ret(true, "already banned") return self:ret(true, "already banned")
end end
-- Call increase function later and with cosocket enabled -- Call increase function later and with cosocket enabled
local ok, err = ngx.timer.at(0, badbehavior.increase, self, ngx.var.remote_addr) local ok, err = ngx.timer.at(0, badbehavior.increase, ngx.ctx.bw.remote_addr, tonumber(self.variables["BAD_BEHAVIOR_COUNT_TIME"]), tonumber(self.variables["BAD_BEHAVIOR_BAN_TIME"]), tonumber(self.variables["BAD_BEHAVIOR_THRESHOLD"]), self.use_redis)
if not ok then if not ok then
return self:ret(false, "can't create increase timer : " .. err) return self:ret(false, "can't create increase timer : " .. err)
end end
@@ -47,27 +45,26 @@ function badbehavior:log_default()
return self:log() return self:log()
end end
function badbehavior.increase(premature, obj, ip) function badbehavior.increase(premature, ip, count_time, ban_time, threshold, use_redis)
-- Our vars -- Instantiate objects
local count_time = tonumber(obj.variables["BAD_BEHAVIOR_COUNT_TIME"]) local logger = require "bunkerweb.logger":new("badbehavior")
local ban_time = tonumber(obj.variables["BAD_BEHAVIOR_BAN_TIME"]) local datastore = require "bunkerweb.datastore":new()
local threshold = tonumber(obj.variables["BAD_BEHAVIOR_THRESHOLD"])
-- Declare counter -- Declare counter
local counter = false local counter = false
-- Redis case -- Redis case
if obj.use_redis then if use_redis then
local redis_counter, err = obj:redis_increase(ip) local redis_counter, err = badbehavior.redis_increase(ip, count_time, ban_time)
if not redis_counter then if not redis_counter then
obj.logger:log(ngx.ERR, "(increase) redis_increase failed, falling back to local : " .. err) logger:log(ngx.ERR, "(increase) redis_increase failed, falling back to local : " .. err)
else else
counter = redis_counter counter = redis_counter
end end
end end
-- Local case -- Local case
if not counter then if not counter then
local local_counter, err = obj.datastore:get("plugin_badbehavior_count_" .. ip) local local_counter, err = datastore:get("plugin_badbehavior_count_" .. ip)
if not local_counter and err ~= "not found" then if not local_counter and err ~= "not found" then
obj.logger:log(ngx.ERR, "(increase) can't get counts from the datastore : " .. err) logger:log(ngx.ERR, "(increase) can't get counts from the datastore : " .. err)
end end
if local_counter == nil then if local_counter == nil then
local_counter = 0 local_counter = 0
@@ -75,48 +72,48 @@ function badbehavior.increase(premature, obj, ip)
counter = local_counter + 1 counter = local_counter + 1
end end
-- Call decrease later -- Call decrease later
local ok, err = ngx.timer.at(count_time, badbehavior.decrease, obj, ip) local ok, err = ngx.timer.at(count_time, badbehavior.decrease, ip, count_time, threshold, use_redis)
if not ok then if not ok then
obj.logger:log(ngx.ERR, "(increase) can't create decrease timer : " .. err) logger:log(ngx.ERR, "(increase) can't create decrease timer : " .. err)
end end
-- Store local counter -- Store local counter
local ok, err = obj.datastore:set("plugin_badbehavior_count_" .. ip, counter) local ok, err = datastore:set("plugin_badbehavior_count_" .. ip, counter, count_time)
if not ok then if not ok then
obj.logger:log(ngx.ERR, "(increase) can't save counts to the datastore : " .. err) logger:log(ngx.ERR, "(increase) can't save counts to the datastore : " .. err)
return return
end end
-- Store local ban -- Store local ban
if counter > threshold then if counter > threshold then
local ok, err = obj.datastore:set("bans_ip_" .. ip, "bad behavior", ban_time) local ok, err = utils.add_ban(ip, "bad behavior", ban_time)
if not ok then if not ok then
obj.logger:log(ngx.ERR, "(increase) can't save ban to the datastore : " .. err) logger:log(ngx.ERR, "(increase) can't save ban : " .. err)
return return
end end
obj.logger:log(ngx.WARN, "IP " .. ip .. " is banned for " .. ban_time .. "s (" .. tostring(counter) .. "/" .. tostring(threshold) .. ")") logger:log(ngx.WARN, "IP " .. ip .. " is banned for " .. ban_time .. "s (" .. tostring(counter) .. "/" .. tostring(threshold) .. ")")
end end
logger:log(ngx.NOTICE, "increased counter for IP " .. ip .. " (" .. tostring(counter) .. "/" .. tostring(threshold) .. ")")
end end
function badbehavior.decrease(premature, obj, ip) function badbehavior.decrease(premature, ip, count_time, threshold, use_redis)
-- Our vars -- Instantiate objects
local count_time = tonumber(obj.variables["BAD_BEHAVIOR_COUNT_TIME"]) local logger = require "bunkerweb.logger":new("badbehavior")
local ban_time = tonumber(obj.variables["BAD_BEHAVIOR_BAN_TIME"]) local datastore = require "bunkerweb.datastore":new()
local threshold = tonumber(obj.variables["BAD_BEHAVIOR_THRESHOLD"])
-- Declare counter -- Declare counter
local counter = false local counter = false
-- Redis case -- Redis case
if obj.use_redis then if use_redis then
local redis_counter, err = obj:redis_decrease(ip) local redis_counter, err = badbehavior.redis_decrease(ip, count_time)
if not redis_counter then if not redis_counter then
obj.logger:log(ngx.ERR, "(increase) redis_increase failed, falling back to local : " .. err) logger:log(ngx.ERR, "(decrease) redis_decrease failed, falling back to local : " .. err)
else else
counter = redis_counter counter = redis_counter
end end
end end
-- Local case -- Local case
if not counter then if not counter then
local local_counter, err = obj.datastore:get("plugin_badbehavior_count_" .. ip) local local_counter, err = datastore:get("plugin_badbehavior_count_" .. ip)
if not local_counter and err ~= "not found" then if not local_counter and err ~= "not found" then
obj.logger:log(ngx.ERR, "(increase) can't get counts from the datastore : " .. err) logger:log(ngx.ERR, "(decrease) can't get counts from the datastore : " .. err)
end end
if local_counter == nil or local_counter <= 1 then if local_counter == nil or local_counter <= 1 then
counter = 0 counter = 0
@@ -126,92 +123,92 @@ function badbehavior.decrease(premature, obj, ip)
end end
-- Store local counter -- Store local counter
if counter <= 0 then if counter <= 0 then
local ok, err = obj.datastore:delete("plugin_badbehavior_count_" .. ip) counter = 0
local ok, err = datastore:delete("plugin_badbehavior_count_" .. ip)
else else
local ok, err = obj.datastore:delete("plugin_badbehavior_count_" .. ip, counter) local ok, err = datastore:set("plugin_badbehavior_count_" .. ip, counter, count_time)
if not ok then if not ok then
obj.logger:log(ngx.ERR, "(increase) can't save counts to the datastore : " .. err) logger:log(ngx.ERR, "(decrease) can't save counts to the datastore : " .. err)
return return
end end
end end
logger:log(ngx.NOTICE, "decreased counter for IP " .. ip .. " (" .. tostring(counter) .. "/" .. tostring(threshold) .. ")")
end end
function badbehavior:redis_increase(ip) function badbehavior.redis_increase(ip, count_time, ban_time)
-- Our vars -- Instantiate objects
local count_time = tonumber(self.variables["BAD_BEHAVIOR_COUNT_TIME"]) local clusterstore = require "bunkerweb.clusterstore":new()
local ban_time = tonumber(self.variables["BAD_BEHAVIOR_BAN_TIME"]) -- Our LUA script to execute on redis
local redis_script = [[
local ret_incr = redis.pcall("INCR", KEYS[1])
if type(ret_incr) == "table" and ret_incr["err"] ~= nil then
redis.log(redis.LOG_WARNING, "Bad behavior increase INCR error : " .. ret_incr["err"])
return ret_incr
end
local ret_expire = redis.pcall("EXPIRE", KEYS[1], ARGV[1])
if type(ret_expire) == "table" and ret_expire["err"] ~= nil then
redis.log(redis.LOG_WARNING, "Bad behavior increase EXPIRE error : " .. ret_expire["err"])
return ret_expire
end
if ret_incr > tonumber(ARGV[2]) then
local ret_set = redis.pcall("SET", KEYS[2], "bad behavior", "EX", ARGV[2])
if type(ret_set) == "table" and ret_set["err"] ~= nil then
redis.log(redis.LOG_WARNING, "Bad behavior increase SET error : " .. ret_set["err"])
return ret_set
end
end
return ret_incr
]]
-- Connect to server -- Connect to server
local cstore, err = clusterstore:new()
if not cstore then
return false, err
end
local ok, err = clusterstore:connect() local ok, err = clusterstore:connect()
if not ok then if not ok then
return false, err return false, err
end end
-- Exec transaction -- Execute LUA script
local calls = { local counter, err = clusterstore:call("eval", redis_script, 2, "bad_behavior_" .. ip, "bans_ip_" .. ip, count_time, ban_time)
{"incr", {"bad_behavior_" .. ip}}, if not counter then
{"expire", {"bad_behavior_" .. ip, count_time}}
}
local ok, err, exec = clusterstore:multi(calls)
if not ok then
clusterstore:close() clusterstore:close()
return false, err return false, err
end end
-- Extract counter
local counter = exec[1]
if type(counter) == "table" then
clusterstore:close()
return false, counter[2]
end
-- Check expire result
local expire = exec[2]
if type(expire) == "table" then
clusterstore:close()
return false, expire[2]
end
-- Add IP to redis bans if needed
if counter > threshold then
local ok, err = clusterstore:call("set", "ban_" .. ip, "bad behavior", "EX", ban_time)
if err then
clusterstore:close()
return false, err
end
end
-- End connection -- End connection
clusterstore:close() clusterstore:close()
return counter return counter
end end
function badbehavior:redis_decrease(ip) function badbehavior.redis_decrease(ip, count_time)
-- Instantiate objects
local clusterstore = require "bunkerweb.clusterstore":new()
-- Our LUA script to execute on redis
local redis_script = [[
local ret_decr = redis.pcall("DECR", KEYS[1])
if type(ret_decr) == "table" and ret_decr["err"] ~= nil then
redis.log(redis.LOG_WARNING, "Bad behavior decrease DECR error : " .. ret_decr["err"])
return ret_decr
end
local ret_expire = redis.pcall("EXPIRE", KEYS[1], ARGV[1])
if type(ret_expire) == "table" and ret_expire["err"] ~= nil then
redis.log(redis.LOG_WARNING, "Bad behavior decrease EXPIRE error : " .. ret_expire["err"])
return ret_expire
end
if ret_decr <= 0 then
local ret_del = redis.pcall("DEL", KEYS[1])
if type(ret_del) == "table" and ret_del["err"] ~= nil then
redis.log(redis.LOG_WARNING, "Bad behavior decrease DEL error : " .. ret_del["err"])
return ret_del
end
end
return ret_decr
]]
-- Connect to server -- Connect to server
local cstore, err = clusterstore:new()
if not cstore then
return false, err
end
local ok, err = clusterstore:connect() local ok, err = clusterstore:connect()
if not ok then if not ok then
return false, err return false, err
end end
-- Decrement counter local counter, err = clusterstore:call("eval", redis_script, 1, "bad_behavior_" .. ip, count_time)
local counter, err = clusterstore:call("decr", "bad_behavior_" .. ip) if not counter then
if err then
clusterstore:close() clusterstore:close()
return false, err return false, err
end end
-- Delete counter
if counter < 0 then
counter = 0
end
if counter == 0 then
local ok, err = clusterstore:call("del", "bad_behavior_" .. ip)
if err then
clusterstore:close()
return false, err
end
end
-- End connection
clusterstore:close() clusterstore:close()
return counter return counter
end end
View file
@@ -17,21 +17,31 @@ function blacklist:initialize()
self.logger:log(ngx.ERR, err) self.logger:log(ngx.ERR, err)
end end
self.use_redis = use_redis == "yes" self.use_redis = use_redis == "yes"
-- Check if init is needed
if ngx.get_phase() == "init" then
local init_needed, err = utils.has_variable("USE_BLACKLIST", "yes")
if init_needed == nil then
self.logger:log(ngx.ERR, err)
end
self.init_needed = init_needed
-- Decode lists -- Decode lists
else if ngx.get_phase() ~= "init" and self.variables["USE_BLACKLIST"] == "yes" then
local lists, err = self.datastore:get("plugin_blacklist_lists") local lists, err = self.datastore:get("plugin_blacklist_lists")
if not lists then if not lists then
self.logger:log(ngx.ERR, err) self.logger:log(ngx.ERR, err)
else else
self.lists = cjson.decode(lists) self.lists = cjson.decode(lists)
end end
local kinds = {
["IP"] = {},
["RDNS"] = {},
["ASN"] = {},
["USER_AGENT"] = {},
["URI"] = {},
["IGNORE_IP"] = {},
["IGNORE_RDNS"] = {},
["IGNORE_ASN"] = {},
["IGNORE_USER_AGENT"] = {},
["IGNORE_URI"] = {},
}
for kind, _ in pairs(kinds) do
for data in self.variables["BLACKLIST_" .. kind]:gmatch("%S+") do
table.insert(self.lists[kind], data)
end
end
end end
-- Instantiate cachestore -- Instantiate cachestore
self.cachestore = cachestore:new(self.use_redis) self.cachestore = cachestore:new(self.use_redis)
@@ -39,9 +49,14 @@ end
function blacklist:init() function blacklist:init()
-- Check if init is needed -- Check if init is needed
if not self.init_needed then local init_needed, err = utils.has_variable("USE_BLACKLIST", "yes")
if init_needed == nil then
return self:ret(false, "can't check USE_BLACKLIST variable : " .. err)
end
if not init_needed or self.is_loading then
return self:ret(true, "init not needed") return self:ret(true, "init not needed")
end end
-- Read blacklists -- Read blacklists
local blacklists = { local blacklists = {
["IP"] = {}, ["IP"] = {},
@@ -81,13 +96,13 @@ function blacklist:access()
end end
-- Check the caches -- Check the caches
local checks = { local checks = {
["IP"] = "ip" .. ngx.var.remote_addr ["IP"] = "ip" .. ngx.ctx.bw.remote_addr
} }
if ngx.var.http_user_agent then if ngx.ctx.bw.http_user_agent then
checks["UA"] = "ua" .. ngx.var.http_user_agent checks["UA"] = "ua" .. ngx.ctx.bw.http_user_agent
end end
if ngx.var.uri then if ngx.ctx.bw.uri then
checks["URI"] = "uri" .. ngx.var.uri checks["URI"] = "uri" .. ngx.ctx.bw.uri
end end
local already_cached = { local already_cached = {
["IP"] = false, ["IP"] = false,
@@ -99,7 +114,7 @@ function blacklist:access()
if not ok then if not ok then
self.logger:log(ngx.ERR, "error while checking cache : " .. cached) self.logger:log(ngx.ERR, "error while checking cache : " .. cached)
elseif cached and cached ~= "ok" then elseif cached and cached ~= "ok" then
return self:ret(true, k + " is in cached blacklist (info : " .. cached .. ")", utils.get_deny_status()) return self:ret(true, k .. " is in cached blacklist (info : " .. cached .. ")", utils.get_deny_status())
end end
if cached then if cached then
already_cached[k] = true already_cached[k] = true
@@ -121,7 +136,7 @@ function blacklist:access()
self.logger:log(ngx.ERR, "error while adding element to cache : " .. err) self.logger:log(ngx.ERR, "error while adding element to cache : " .. err)
end end
if blacklisted ~= "ok" then if blacklisted ~= "ok" then
return self:ret(true, k + " is blacklisted (info : " .. blacklisted .. ")", utils.get_deny_status()) return self:ret(true, k .. " is blacklisted (info : " .. blacklisted .. ")", utils.get_deny_status())
end end
end end
end end
@@ -138,11 +153,11 @@ end
function blacklist:kind_to_ele(kind) function blacklist:kind_to_ele(kind)
if kind == "IP" then if kind == "IP" then
return "ip" .. ngx.var.remote_addr return "ip" .. ngx.ctx.bw.remote_addr
elseif kind == "UA" then elseif kind == "UA" then
return "ua" .. ngx.var.http_user_agent return "ua" .. ngx.ctx.bw.http_user_agent
elseif kind == "URI" then elseif kind == "URI" then
return "uri" .. ngx.var.uri return "uri" .. ngx.ctx.bw.uri
end end
end end
@@ -179,7 +194,7 @@ function blacklist:is_blacklisted_ip()
if not ipm then if not ipm then
return nil, err return nil, err
end end
local match, err = ipm:match(ngx.var.remote_addr) local match, err = ipm:match(ngx.ctx.bw.remote_addr)
if err then if err then
return nil, err return nil, err
end end
@@ -189,7 +204,7 @@ function blacklist:is_blacklisted_ip()
if not ipm then if not ipm then
return nil, err return nil, err
end end
local match, err = ipm:match(ngx.var.remote_addr) local match, err = ipm:match(ngx.ctx.bw.remote_addr)
if err then if err then
return nil, err return nil, err
end end
@@ -200,18 +215,12 @@ function blacklist:is_blacklisted_ip()
-- Check if rDNS is needed -- Check if rDNS is needed
local check_rdns = true local check_rdns = true
local is_global, err = utils.ip_is_global(ngx.var.remote_addr) if self.variables["BLACKLIST_RDNS_GLOBAL"] == "yes" and not ngx.ctx.bw.ip_is_global then
if self.variables["BLACKLIST_RDNS_GLOBAL"] == "yes" then check_rdns = false
if is_global == nil then
return nil, err
end
if not is_global then
check_rdns = false
end
end end
if check_rdns then if check_rdns then
-- Get rDNS -- Get rDNS
local rdns_list, err = utils.get_rdns(ngx.var.remote_addr) local rdns_list, err = utils.get_rdns(ngx.ctx.bw.remote_addr)
if not rdns_list then if not rdns_list then
return false, err return false, err
end end
@@ -241,8 +250,8 @@ function blacklist:is_blacklisted_ip()
end end
-- Check if ASN is in ignore list -- Check if ASN is in ignore list
if is_global then if ngx.ctx.bw.ip_is_global then
local asn, err = utils.get_asn(ngx.var.remote_addr) local asn, err = utils.get_asn(ngx.ctx.bw.remote_addr)
if not asn then if not asn then
self.logger:log(ngx.ERR, "7") self.logger:log(ngx.ERR, "7")
return nil, err return nil, err
@@ -272,7 +281,7 @@ function blacklist:is_blacklisted_uri()
-- Check if URI is in ignore list -- Check if URI is in ignore list
local ignore = false local ignore = false
for i, ignore_uri in ipairs(self.lists["IGNORE_URI"]) do for i, ignore_uri in ipairs(self.lists["IGNORE_URI"]) do
if ngx.var.uri:match(ignore_uri) then if ngx.ctx.bw.uri:match(ignore_uri) then
ignore = true ignore = true
break break
end end
@@ -280,7 +289,7 @@ function blacklist:is_blacklisted_uri()
-- Check if URI is in blacklist -- Check if URI is in blacklist
if not ignore then if not ignore then
for i, uri in ipairs(self.lists["URI"]) do for i, uri in ipairs(self.lists["URI"]) do
if ngx.var.uri:match(uri) then if ngx.ctx.bw.uri:match(uri) then
return true, "URI " .. uri return true, "URI " .. uri
end end
end end
@@ -293,7 +302,7 @@ function blacklist:is_blacklisted_ua()
-- Check if UA is in ignore list -- Check if UA is in ignore list
local ignore = false local ignore = false
for i, ignore_ua in ipairs(self.lists["IGNORE_USER_AGENT"]) do for i, ignore_ua in ipairs(self.lists["IGNORE_USER_AGENT"]) do
if ngx.var.http_user_agent:match(ignore_ua) then if ngx.ctx.bw.http_user_agent:match(ignore_ua) then
ignore = true ignore = true
break break
end end
@@ -301,7 +310,7 @@ function blacklist:is_blacklisted_ua()
-- Check if UA is in blacklist -- Check if UA is in blacklist
if not ignore then if not ignore then
for i, ua in ipairs(self.lists["USER_AGENT"]) do for i, ua in ipairs(self.lists["USER_AGENT"]) do
if ngx.var.http_user_agent:match(ua) then if ngx.ctx.bw.http_user_agent:match(ua) then
return true, "UA " .. ua return true, "UA " .. ua
end end
end end
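`is_blacklisted_uri()` and `is_blacklisted_ua()` above share the same shape: a match in the ignore list short-circuits the check, otherwise the first blocklist match wins. A sketch of that ordering, using Python regexes as a stand-in for the Lua patterns the plugin actually matches with:

```python
import re

def is_blacklisted(value, ignore_patterns, block_patterns):
    # Ignore list is checked first: an ignored value is never blacklisted.
    for pat in ignore_patterns:
        if re.search(pat, value):
            return None
    # Otherwise return the first matching blocklist entry, like the
    # "URI <uri>" / "UA <ua>" reasons returned above.
    for pat in block_patterns:
        if re.search(pat, value):
            return pat
    return None
```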
View file
@@ -6,7 +6,6 @@ from os import _exit, getenv
from pathlib import Path from pathlib import Path
from re import IGNORECASE, compile as re_compile from re import IGNORECASE, compile as re_compile
from sys import exit as sys_exit, path as sys_path from sys import exit as sys_exit, path as sys_path
from threading import Lock
from traceback import format_exc from traceback import format_exc
from typing import Tuple from typing import Tuple
@@ -84,7 +83,6 @@ try:
logger, logger,
sqlalchemy_string=getenv("DATABASE_URI", None), sqlalchemy_string=getenv("DATABASE_URI", None),
) )
lock = Lock()
# Create directories if they don't exist # Create directories if they don't exist
Path("/var/cache/bunkerweb/blacklist").mkdir(parents=True, exist_ok=True) Path("/var/cache/bunkerweb/blacklist").mkdir(parents=True, exist_ok=True)
@@ -108,7 +106,9 @@ try:
} }
all_fresh = True all_fresh = True
for kind in kinds_fresh: for kind in kinds_fresh:
if not is_cached_file(f"/var/cache/bunkerweb/blacklist/{kind}.list", "hour"): if not is_cached_file(
f"/var/cache/bunkerweb/blacklist/{kind}.list", "hour", db
):
kinds_fresh[kind] = False kinds_fresh[kind] = False
all_fresh = False all_fresh = False
logger.info( logger.info(
@@ -172,7 +172,7 @@ try:
logger.info(f"Downloaded {i} bad {kind}") logger.info(f"Downloaded {i} bad {kind}")
# Check if file has changed # Check if file has changed
new_hash = file_hash(f"/var/tmp/bunkerweb/blacklist/{kind}.list") new_hash = file_hash(f"/var/tmp/bunkerweb/blacklist/{kind}.list")
old_hash = cache_hash(f"/var/cache/bunkerweb/blacklist/{kind}.list") old_hash = cache_hash(f"/var/cache/bunkerweb/blacklist/{kind}.list", db)
if new_hash == old_hash: if new_hash == old_hash:
logger.info( logger.info(
f"New file {kind}.list is identical to cache file, reload is not needed", f"New file {kind}.list is identical to cache file, reload is not needed",
@ -186,25 +186,12 @@ try:
f"/var/tmp/bunkerweb/blacklist/{kind}.list", f"/var/tmp/bunkerweb/blacklist/{kind}.list",
f"/var/cache/bunkerweb/blacklist/{kind}.list", f"/var/cache/bunkerweb/blacklist/{kind}.list",
new_hash, new_hash,
db,
) )
if not cached: if not cached:
logger.error(f"Error while caching blacklist : {err}") logger.error(f"Error while caching blacklist : {err}")
status = 2 status = 2
else:
# Update db
with lock:
err = db.update_job_cache(
"blacklist-download",
None,
f"{kind}.list",
content,
checksum=new_hash,
)
if err:
logger.warning(f"Couldn't update db cache: {err}")
status = 1
except: except:
status = 2 status = 2
logger.error( logger.error(
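A recurring change across these job scripts: `is_cached_file`, `cache_hash` and `cache_file` now take the `db` handle, so the database bookkeeping that previously needed an explicit `Lock` and a manual `update_job_cache` call moves inside the helpers. The reload decision itself stays a plain hash comparison, sketched here (the helper names and the sha512 choice are illustrative, not the actual `jobs.py` API):

```python
from hashlib import sha512
from typing import Optional

def content_hash(data: bytes) -> str:
    """Hex digest standing in for the jobs' file_hash()/cache_hash() helpers."""
    return sha512(data).hexdigest()

def needs_reload(new_data: bytes, cached_hash: Optional[str]) -> bool:
    """Refresh the cache (and trigger a reload) only when the downloaded
    content's hash differs from the hash recorded for the cached copy."""
    if cached_hash is None:
        return True  # nothing cached yet, always refresh
    return content_hash(new_data) != cached_hash
```

When the hashes match, the job logs "reload is not needed" and exits early, which is what keeps these hourly/daily downloads cheap.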

View file

@@ -10,29 +10,27 @@ local bunkernet = class("bunkernet", plugin)
 function bunkernet:initialize()
 	-- Call parent initialize
 	plugin.initialize(self, "bunkernet")
-	-- Check if init is needed
-	if ngx.get_phase() == "init" then
-		local init_needed, err = utils.has_variable("USE_BUNKERNET", "yes")
-		if init_needed == nil then
-			self.logger:log(ngx.ERR, err)
-		end
-		self.init_needed = init_needed
 	-- Get BunkerNet ID
-	else
+	if ngx.get_phase() ~= "init" and self.variables["USE_BUNKERNET"] == "yes" then
 		local id, err = self.datastore:get("plugin_bunkernet_id")
-		if not id then
-			self.bunkernet_id = nil
-		else
+		if id then
 			self.bunkernet_id = id
+		else
+			self.logger:log(ngx.ERR, "can't get BunkerNet ID from datastore : " .. err)
 		end
 	end
 end
 
 function bunkernet:init()
 	-- Check if init is needed
-	if not self.init_needed then
+	local init_needed, err = utils.has_variable("USE_BUNKERNET", "yes")
+	if init_needed == nil then
+		return self:ret(false, "can't check USE_BUNKERNET variable : " .. err)
+	end
+	if not init_needed or self.is_loading then
 		return self:ret(true, "no service uses bunkernet, skipping init")
 	end
 	-- Check if instance ID is present
 	local f, err = io.open("/var/cache/bunkerweb/bunkernet/instance.id", "r")
 	if not f then
@@ -83,7 +81,7 @@ function bunkernet:log(bypass_use_bunkernet)
 	end
 	-- Check if BunkerNet ID is generated
 	if not self.bunkernet_id then
-		return self:ret(true, "bunkernet ID is not generated")
+		return self:ret(false, "bunkernet ID is not generated")
 	end
 	-- Check if IP has been blocked
 	local reason = utils.get_reason()
@@ -94,16 +92,14 @@ function bunkernet:log(bypass_use_bunkernet)
 		return self:ret(true, "skipping report because the reason is bunkernet")
 	end
 	-- Check if IP is global
-	local is_global, err = utils.ip_is_global(ngx.var.remote_addr)
-	if is_global == nil then
-		return self:ret(false, "error while checking if IP is global " .. err)
-	end
-	if not is_global then
+	if not ngx.ctx.bw.ip_is_global then
 		return self:ret(true, "IP is not global")
 	end
 	-- TODO : check if IP has been reported recently
+	self.integration = ngx.ctx.bw.integration
+	self.version = ngx.ctx.bw.version
 	local function report_callback(premature, obj, ip, reason, method, url, headers)
-		local ok, err, status, data = obj:report(ip, reason, method, url, headers)
+		local ok, err, status, data = obj:report(ip, reason, method, url, headers, obj.ctx.integration, obj.ctx.version)
 		if status == 429 then
 			obj.logger:log(ngx.WARN, "bunkernet API is rate limiting us")
 		elseif not ok then
@@ -113,8 +109,8 @@ function bunkernet:log(bypass_use_bunkernet)
 		end
 	end
-	local hdr, err = ngx.timer.at(0, report_callback, self, ngx.var.remote_addr, reason, ngx.var.request_method,
-		ngx.var.request_uri, ngx.req.get_headers())
+	local hdr, err = ngx.timer.at(0, report_callback, self, ngx.ctx.bw.remote_addr, reason, ngx.ctx.bw.request_method,
+		ngx.ctx.bw.request_uri, ngx.req.get_headers())
 	if not hdr then
 		return self:ret(false, "can't create report timer : " .. err)
 	end
@@ -149,8 +145,8 @@ function bunkernet:request(method, url, data)
 	end
 	local all_data = {
 		id = self.id,
-		integration = utils.get_integration(),
-		version = utils.get_version()
+		integration = self.integration,
+		version = self.version
 	}
 	for k, v in pairs(data) do
 		all_data[k] = v
@@ -160,7 +156,7 @@ function bunkernet:request(method, url, data)
 		body = cjson.encode(all_data),
 		headers = {
 			["Content-Type"] = "application/json",
-			["User-Agent"] = "BunkerWeb/" .. utils.get_version()
+			["User-Agent"] = "BunkerWeb/" .. self.version
 		}
 	})
 	httpc:close()

View file

@@ -47,7 +47,6 @@ try:
     logger,
     sqlalchemy_string=getenv("DATABASE_URI", None),
 )
-lock = Lock()
 # Create directory if it doesn't exist
 Path("/var/cache/bunkerweb/bunkernet").mkdir(parents=True, exist_ok=True)
@@ -64,7 +63,7 @@ try:
         _exit(2)
     # Don't go further if the cache is fresh
-    if is_cached_file("/var/cache/bunkerweb/bunkernet/ip.list", "day"):
+    if is_cached_file("/var/cache/bunkerweb/bunkernet/ip.list", "day", db):
         logger.info(
             "BunkerNet list is already in cache, skipping download...",
         )
@@ -111,7 +110,7 @@ try:
     # Check if file has changed
     new_hash = file_hash("/var/tmp/bunkerweb/bunkernet-ip.list")
-    old_hash = cache_hash("/var/cache/bunkerweb/bunkernet/ip.list")
+    old_hash = cache_hash("/var/cache/bunkerweb/bunkernet/ip.list", db)
     if new_hash == old_hash:
         logger.info(
             "New file is identical to cache file, reload is not needed",
@@ -123,24 +122,12 @@ try:
         "/var/tmp/bunkerweb/bunkernet-ip.list",
         "/var/cache/bunkerweb/bunkernet/ip.list",
         new_hash,
+        db,
     )
     if not cached:
         logger.error(f"Error while caching BunkerNet data : {err}")
         _exit(2)
-    # Update db
-    with lock:
-        err = db.update_job_cache(
-            "bunkernet-data",
-            None,
-            "ip.list",
-            content,
-            checksum=new_hash,
-        )
-    if err:
-        logger.warning(f"Couldn't update db ip.list cache: {err}")
 
     logger.info("Successfully saved BunkerNet data")
     status = 1

View file

@@ -1,6 +1,6 @@
 {
   "id": "bunkernet",
-  "order": 6,
+  "order": 7,
   "name": "BunkerNet",
   "description": "Share threat data with other BunkerWeb instances via BunkerNet.",
   "version": "0.1",

View file

@@ -14,7 +14,7 @@ function cors:header()
 	if self.variables["USE_CORS"] ~= "yes" then
 		return self:ret(true, "service doesn't use CORS")
 	end
-	if ngx.var.request_method ~= "OPTIONS" then
+	if ngx.ctx.bw.request_method ~= "OPTIONS" then
 		return self:ret(true, "method is not OPTIONS")
 	end
 	-- Add headers

View file

@@ -2,6 +2,7 @@ local class = require "middleclass"
 local plugin = require "bunkerweb.plugin"
 local utils = require "bunkerweb.utils"
 local cachestore = require "bunkerweb.cachestore"
+local cjson = require "cjson"
 
 local country = class("country", plugin)
@@ -23,69 +24,66 @@ function country:access()
 		return self:ret(true, "country not activated")
 	end
 	-- Check if IP is in cache
-	local data, err = self:is_in_cache(ngx.var.remote_addr)
+	local data, err = self:is_in_cache(ngx.ctx.bw.remote_addr)
 	if data then
 		if data.result == "ok" then
-			return self:ret(true, "client IP " .. ngx.var.remote_addr .. " is in country cache (not blacklisted, country = " .. data.country .. ")")
+			return self:ret(true, "client IP " .. ngx.ctx.bw.remote_addr .. " is in country cache (not blacklisted, country = " .. data.country .. ")")
 		end
-		return self:ret(true, "client IP " .. ngx.var.remote_addr .. " is in country cache (blacklisted, country = " .. data.country .. ")", utils.get_deny_status())
+		return self:ret(true, "client IP " .. ngx.ctx.bw.remote_addr .. " is in country cache (blacklisted, country = " .. data.country .. ")", utils.get_deny_status())
 	end
 	-- Don't go further if IP is not global
-	local is_global, err = utils.ip_is_global(ngx.var.remote_addr)
-	if is_global == nil then
-		return self:ret(false, "error while checking if ip is global : " .. err)
-	elseif not is_global then
-		local ok, err = self:add_to_cache(ngx.var.remote_addr, "unknown", "ok")
+	if not ngx.ctx.bw.ip_is_global then
+		local ok, err = self:add_to_cache(ngx.ctx.bw.remote_addr, "unknown", "ok")
 		if not ok then
 			return self:ret(false, "error while adding ip to cache : " .. err)
 		end
-		return self:ret(true, "client IP " .. ngx.var.remote_addr .. " is not global, skipping check")
+		return self:ret(true, "client IP " .. ngx.ctx.bw.remote_addr .. " is not global, skipping check")
 	end
 	-- Get the country of client
-	local country, err = utils.get_country(ngx.var.remote_addr)
+	local country, err = utils.get_country(ngx.ctx.bw.remote_addr)
 	if not country then
-		return self:ret(false, "can't get country of client IP " .. ngx.var.remote_addr .. " : " .. err)
+		return self:ret(false, "can't get country of client IP " .. ngx.ctx.bw.remote_addr .. " : " .. err)
 	end
 	-- Process whitelist first
 	if self.variables["WHITELIST_COUNTRY"] ~= "" then
 		for wh_country in self.variables["WHITELIST_COUNTRY"]:gmatch("%S+") do
 			if wh_country == country then
-				local ok, err = self:add_to_cache(ngx.var.remote_addr, country, "ok")
+				local ok, err = self:add_to_cache(ngx.ctx.bw.remote_addr, country, "ok")
 				if not ok then
 					return self:ret(false, "error while adding item to cache : " .. err)
 				end
-				return self:ret(true, "client IP " .. ngx.var.remote_addr .. " is whitelisted (country = " .. country .. ")")
+				return self:ret(true, "client IP " .. ngx.ctx.bw.remote_addr .. " is whitelisted (country = " .. country .. ")")
 			end
 		end
-		local ok, err = self:add_to_cache(ngx.var.remote_addr, country, "ko")
+		local ok, err = self:add_to_cache(ngx.ctx.bw.remote_addr, country, "ko")
 		if not ok then
 			return self:ret(false, "error while adding item to cache : " .. err)
 		end
-		return self:ret(true, "client IP " .. ngx.var.remote_addr .. " is not whitelisted (country = " .. country .. ")", utils.get_deny_status())
+		return self:ret(true, "client IP " .. ngx.ctx.bw.remote_addr .. " is not whitelisted (country = " .. country .. ")", utils.get_deny_status())
 	end
 	-- And then blacklist
 	if self.variables["BLACKLIST_COUNTRY"] ~= "" then
 		for bl_country in self.variables["BLACKLIST_COUNTRY"]:gmatch("%S+") do
 			if bl_country == country then
-				local ok, err = self:add_to_cache(ngx.var.remote_addr, country, "ko")
+				local ok, err = self:add_to_cache(ngx.ctx.bw.remote_addr, country, "ko")
 				if not ok then
 					return self:ret(false, "error while adding item to cache : " .. err)
 				end
-				return self:ret(true, "client IP " .. ngx.var.remote_addr .. " is blacklisted (country = " .. country .. ")", true, utils.get_deny_status())
+				return self:ret(true, "client IP " .. ngx.ctx.bw.remote_addr .. " is blacklisted (country = " .. country .. ")", true, utils.get_deny_status())
 			end
 		end
 	end
 	-- Country IP is not in blacklist
-	local ok, err = self:add_to_cache(ngx.var.remote_addr, country, "ok")
+	local ok, err = self:add_to_cache(ngx.ctx.bw.remote_addr, country, "ok")
 	if not ok then
-		return self:ret(false, "error while caching IP " .. ngx.var.remote_addr .. " : " .. err)
+		return self:ret(false, "error while caching IP " .. ngx.ctx.bw.remote_addr .. " : " .. err)
 	end
-	return self:ret(true, "client IP " .. ngx.var.remote_addr .. " is not blacklisted (country = " .. country .. ")")
+	return self:ret(true, "client IP " .. ngx.ctx.bw.remote_addr .. " is not blacklisted (country = " .. country .. ")")
 end
 
 function country:preread()
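The decision order in country:access() is: whitelist first (a non-empty whitelist means only listed countries pass), then blacklist, then allow by default. Stripped of the caching and error handling, that logic condenses to a few lines (the function name and signature here are illustrative, not part of BunkerWeb):

```python
def country_allowed(country: str, whitelist: str, blacklist: str) -> bool:
    """Decide whether a client country is allowed, following the order used
    by country:access(): a non-empty WHITELIST_COUNTRY wins (only listed
    countries pass), otherwise BLACKLIST_COUNTRY rejects listed countries,
    otherwise allow. Lists are space-separated country codes."""
    if whitelist.strip():
        # Whitelist configured: it is authoritative
        return country in whitelist.split()
    if blacklist.strip():
        return country not in blacklist.split()
    return True  # neither list configured
```

Note that with a whitelist configured the blacklist is never consulted, which matches the early `return` in the Lua code.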

View file

@@ -81,30 +81,28 @@ def check_cert(cert_path, key_path, first_server: Optional[str] = None) -> bool:
         if old_hash != key_hash:
             copy(key_path, key_cache_path.replace(".hash", ""))
-            with open(key_path, "r") as f:
-                with lock:
-                    err = db.update_job_cache(
-                        "custom-cert",
-                        first_server,
-                        key_cache_path.replace(".hash", "").split("/")[-1],
-                        f.read().encode("utf-8"),
-                        checksum=key_hash,
-                    )
+            with lock:
+                err = db.update_job_cache(
+                    "custom-cert",
+                    first_server,
+                    key_cache_path.replace(".hash", "").split("/")[-1],
+                    Path(key_path).read_bytes(),
+                    checksum=key_hash,
+                )
             if err:
                 logger.warning(
                     f"Couldn't update db cache for {key_path.replace('/', '_')}.hash: {err}"
                 )
-            with open(cert_path, "r") as f:
-                with lock:
-                    err = db.update_job_cache(
-                        "custom-cert",
-                        first_server,
-                        cert_cache_path.replace(".hash", "").split("/")[-1],
-                        f.read().encode("utf-8"),
-                        checksum=cert_hash,
-                    )
+            with lock:
+                err = db.update_job_cache(
+                    "custom-cert",
+                    first_server,
+                    cert_cache_path.replace(".hash", "").split("/")[-1],
+                    Path(cert_path).read_bytes(),
+                    checksum=cert_hash,
+                )
             if err:
                 logger.warning(

View file

@@ -28,22 +28,18 @@ function dnsbl:access()
 		return self:ret(true, "dnsbl list is empty")
 	end
 	-- Check if IP is in cache
-	local ok, cached = self:is_in_cache(ngx.var.remote_addr)
+	local ok, cached = self:is_in_cache(ngx.ctx.bw.remote_addr)
 	if not ok then
-		return self:ret(false, "error while checking cache : " .. err)
+		return self:ret(false, "error while checking cache : " .. cached)
 	elseif cached then
 		if cached == "ok" then
-			return self:ret(true, "client IP " .. ngx.var.remote_addr .. " is in DNSBL cache (not blacklisted)")
+			return self:ret(true, "client IP " .. ngx.ctx.bw.remote_addr .. " is in DNSBL cache (not blacklisted)")
 		end
-		return self:ret(true, "client IP " .. ngx.var.remote_addr .. " is in DNSBL cache (server = " .. cached .. ")", utils.get_deny_status())
+		return self:ret(true, "client IP " .. ngx.ctx.bw.remote_addr .. " is in DNSBL cache (server = " .. cached .. ")", utils.get_deny_status())
 	end
 	-- Don't go further if IP is not global
-	local is_global, err = utils.ip_is_global(ngx.var.remote_addr)
-	if is_global == nil then
-		return self:ret(false, "can't check if client IP is global : " .. err)
-	end
-	if not is_global then
-		local ok, err = self:add_to_cache(ngx.var.remote_addr, "ok")
+	if not ngx.ctx.bw.ip_is_global then
+		local ok, err = self:add_to_cache(ngx.ctx.bw.remote_addr, "ok")
 		if not ok then
 			return self:ret(false, "error while adding element to cache : " .. err)
 		end
@@ -51,12 +47,12 @@ function dnsbl:access()
 	end
 	-- Loop on DNSBL list
 	for server in self.variables["DNSBL_LIST"]:gmatch("%S+") do
-		local result, err = self:is_in_dnsbl(server)
+		local result, err = self:is_in_dnsbl(ngx.ctx.bw.remote_addr, server)
 		if result == nil then
 			self.logger:log(ngx.ERR, "error while sending DNS request to " .. server .. " : " .. err)
 		end
 		if result then
-			local ok, err = self:add_to_cache(ngx.var.remote_addr, server)
+			local ok, err = self:add_to_cache(ngx.ctx.bw.remote_addr, server)
 			if not ok then
 				return self:ret(false, "error while adding element to cache : " .. err)
 			end
@@ -64,7 +60,7 @@ function dnsbl:access()
 		end
 	end
 	-- IP is not in DNSBL
-	local ok, err = self:add_to_cache(ngx.var.remote_addr, "ok")
+	local ok, err = self:add_to_cache(ngx.ctx.bw.remote_addr, "ok")
 	if not ok then
 		return self:ret(false, "IP is not in DNSBL (error = " .. err .. ")")
 	end
@@ -91,7 +87,7 @@ function dnsbl:add_to_cache(ip, value)
 	return true
 end
 
-function dnsbl:is_in_dnsbl(server)
+function dnsbl:is_in_dnsbl(ip, server)
 	local request = resolver.arpa_str(ip) .. "." .. server
 	local ips, err = utils.get_ips(request)
 	if not ips then
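The renamed `is_in_dnsbl(ip, server)` builds its DNS query from the reversed client address joined to the DNSBL zone, then treats any returned A record as a listing. A sketch of that naming convention (hypothetical helper; the exact string produced by lua-resty-dns's `resolver.arpa_str` may differ from this bare reversed form):

```python
from ipaddress import ip_address

def dnsbl_query_name(ip: str, server: str) -> str:
    """Build the conventional DNSBL lookup name: the client address reversed
    label-by-label (nibble-by-nibble for IPv6), prepended to the DNSBL zone."""
    addr = ip_address(ip)
    if addr.version == 4:
        # IPv4: reverse the four dotted labels
        reversed_ip = ".".join(reversed(ip.split(".")))
    else:
        # IPv6 zones use the fully-expanded, nibble-reversed form
        nibbles = addr.exploded.replace(":", "")
        reversed_ip = ".".join(reversed(nibbles))
    return f"{reversed_ip}.{server}"
```

Passing the IP explicitly (instead of reading `ngx.var.remote_addr` inside the method) is what makes the Lua fix above work in every phase, including preread.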

File diff suppressed because one or more lines are too long

View file

@@ -16,21 +16,26 @@ function greylist:initialize()
 		self.logger:log(ngx.ERR, err)
 	end
 	self.use_redis = use_redis == "yes"
-	-- Check if init is needed
-	if ngx.get_phase() == "init" then
-		local init_needed, err = utils.has_variable("USE_GREYLIST", "yes")
-		if init_needed == nil then
-			self.logger:log(ngx.ERR, err)
-		end
-		self.init_needed = init_needed
 	-- Decode lists
-	elseif self.variables["USE_GREYLIST"] == "yes" then
+	if ngx.get_phase() ~= "init" and self.variables["USE_GREYLIST"] == "yes" then
 		local lists, err = self.datastore:get("plugin_greylist_lists")
 		if not lists then
 			self.logger:log(ngx.ERR, err)
 		else
 			self.lists = cjson.decode(lists)
 		end
+		local kinds = {
+			["IP"] = {},
+			["RDNS"] = {},
+			["ASN"] = {},
+			["USER_AGENT"] = {},
+			["URI"] = {}
+		}
+		for kind, _ in pairs(kinds) do
+			for data in self.variables["GREYLIST_" .. kind]:gmatch("%S+") do
+				table.insert(self.lists[kind], data)
+			end
+		end
 	end
 	-- Instantiate cachestore
 	self.cachestore = cachestore:new(self.use_redis)
@@ -38,10 +43,14 @@ end
 function greylist:init()
 	-- Check if init is needed
-	if not self.init_needed then
+	local init_needed, err = utils.has_variable("USE_GREYLIST", "yes")
+	if init_needed == nil then
+		return self:ret(false, "can't check USE_GREYLIST variable : " .. err)
+	end
+	if not init_needed or self.is_loading then
 		return self:ret(true, "init not needed")
 	end
-	-- Read blacklists
+	-- Read greylists
 	local greylists = {
 		["IP"] = {},
 		["RDNS"] = {},
@@ -75,13 +84,13 @@ function greylist:access()
 	end
 	-- Check the caches
 	local checks = {
-		["IP"] = "ip" .. ngx.var.remote_addr
+		["IP"] = "ip" .. ngx.ctx.bw.remote_addr
 	}
-	if ngx.var.http_user_agent then
-		checks["UA"] = "ua" .. ngx.var.http_user_agent
+	if ngx.ctx.bw.http_user_agent then
+		checks["UA"] = "ua" .. ngx.ctx.bw.http_user_agent
 	end
-	if ngx.var.uri then
-		checks["URI"] = "uri" .. ngx.var.uri
+	if ngx.ctx.bw.uri then
+		checks["URI"] = "uri" .. ngx.ctx.bw.uri
 	end
 	local already_cached = {
 		["IP"] = false,
@@ -93,7 +102,7 @@ function greylist:access()
 		if not cached and err ~= "success" then
 			self.logger:log(ngx.ERR, "error while checking cache : " .. err)
 		elseif cached and cached ~= "ok" then
-			return self:ret(true, k + " is in cached greylist", utils.get_deny_status())
+			return self:ret(true, k .. " is in cached greylist", utils.get_deny_status())
 		end
 		if cached then
 			already_cached[k] = true
@@ -115,7 +124,7 @@ function greylist:access()
 				self.logger:log(ngx.ERR, "error while adding element to cache : " .. err)
 			end
 			if greylisted == "ko" then
-				return self:ret(true, k + " is not in greylist", utils.get_deny_status())
+				return self:ret(true, k .. " is not in greylist", utils.get_deny_status())
 			end
 		end
 	end
@@ -131,11 +140,11 @@ end
 function greylist:kind_to_ele(kind)
 	if kind == "IP" then
-		return "ip" .. ngx.var.remote_addr
+		return "ip" .. ngx.ctx.bw.remote_addr
 	elseif kind == "UA" then
-		return "ua" .. ngx.var.http_user_agent
+		return "ua" .. ngx.ctx.bw.http_user_agent
 	elseif kind == "URI" then
-		return "uri" .. ngx.var.uri
+		return "uri" .. ngx.ctx.bw.uri
 	end
 end
@@ -151,12 +160,12 @@ function greylist:is_greylisted(kind)
 end
 
 function greylist:is_greylisted_ip()
-	-- Check if IP is in blacklist
+	-- Check if IP is in greylist
 	local ipm, err = ipmatcher.new(self.lists["IP"])
 	if not ipm then
 		return nil, err
 	end
-	local match, err = ipm:match(ngx.var.remote_addr)
+	local match, err = ipm:match(ngx.ctx.bw.remote_addr)
 	if err then
 		return nil, err
 	end
@@ -166,18 +175,12 @@ function greylist:is_greylisted_ip()
 	-- Check if rDNS is needed
 	local check_rdns = true
-	local is_global, err = utils.ip_is_global(ngx.var.remote_addr)
-	if self.variables["BLACKLIST_RDNS_GLOBAL"] == "yes" then
-		if is_global == nil then
-			return nil, err
-		end
-		if not is_global then
-			check_rdns = false
-		end
+	if self.variables["GREYLIST_RDNS_GLOBAL"] == "yes" and not ngx.ctx.bw.ip_is_global then
+		check_rdns = false
 	end
 	if check_rdns then
 		-- Get rDNS
-		local rdns_list, err = utils.get_rdns(ngx.var.remote_addr)
+		local rdns_list, err = utils.get_rdns(ngx.ctx.bw.remote_addr)
 		if not rdns_list then
 			return nil, err
 		end
@@ -192,8 +195,8 @@ function greylist:is_greylisted_ip()
 	end
 	-- Check if ASN is in greylist
-	if is_global then
-		local asn, err = utils.get_asn(ngx.var.remote_addr)
+	if ngx.ctx.bw.ip_is_global then
+		local asn, err = utils.get_asn(ngx.ctx.bw.remote_addr)
 		if not asn then
 			return nil, err
 		end
@@ -209,9 +212,9 @@ function greylist:is_greylisted_ip()
 end
 
 function greylist:is_greylisted_uri()
-	-- Check if URI is in blacklist
+	-- Check if URI is in greylist
 	for i, uri in ipairs(self.lists["URI"]) do
-		if ngx.var.uri:match(uri) then
+		if ngx.ctx.bw.uri:match(uri) then
 			return true, "URI " .. uri
 		end
 	end
@@ -222,7 +225,7 @@ end
 function greylist:is_greylisted_ua()
 	-- Check if UA is in greylist
 	for i, ua in ipairs(self.lists["USER_AGENT"]) do
-		if ngx.var.http_user_agent:match(ua) then
+		if ngx.ctx.bw.http_user_agent:match(ua) then
 			return true, "UA " .. ua
 		end
 	end

View file

@@ -6,7 +6,6 @@ from os import _exit, getenv
 from pathlib import Path
 from re import IGNORECASE, compile as re_compile
 from sys import exit as sys_exit, path as sys_path
-from threading import Lock
 from traceback import format_exc
 from typing import Tuple
@@ -84,7 +83,6 @@ try:
         logger,
         sqlalchemy_string=getenv("DATABASE_URI", None),
     )
-    lock = Lock()
     # Create directories if they don't exist
     Path("/var/cache/bunkerweb/greylist").mkdir(parents=True, exist_ok=True)
@@ -103,7 +101,7 @@ try:
     }
    all_fresh = True
     for kind in kinds_fresh:
-        if not is_cached_file(f"/var/cache/bunkerweb/greylist/{kind}.list", "hour"):
+        if not is_cached_file(f"/var/cache/bunkerweb/greylist/{kind}.list", "hour", db):
             kinds_fresh[kind] = False
             all_fresh = False
             logger.info(
@@ -156,7 +154,7 @@ try:
             logger.info(f"Downloaded {i} grey {kind}")
             # Check if file has changed
             new_hash = file_hash(f"/var/tmp/bunkerweb/greylist/{kind}.list")
-            old_hash = cache_hash(f"/var/cache/bunkerweb/greylist/{kind}.list")
+            old_hash = cache_hash(f"/var/cache/bunkerweb/greylist/{kind}.list", db)
             if new_hash == old_hash:
                 logger.info(
                     f"New file {kind}.list is identical to cache file, reload is not needed",
@@ -170,25 +168,12 @@ try:
                 f"/var/tmp/bunkerweb/greylist/{kind}.list",
                 f"/var/cache/bunkerweb/greylist/{kind}.list",
                 new_hash,
+                db,
             )
             if not cached:
                 logger.error(f"Error while caching greylist : {err}")
                 status = 2
-            else:
-                # Update db
-                with lock:
-                    err = db.update_job_cache(
-                        "greylist-download",
-                        None,
-                        f"{kind}.list",
-                        content,
-                        checksum=new_hash,
-                    )
-                if err:
-                    logger.warning(f"Couldn't update db cache: {err}")
-                status = 1
 except:
     status = 2
     logger.error(

View file

@@ -5,7 +5,6 @@ from gzip import decompress
 from os import _exit, getenv
 from pathlib import Path
 from sys import exit as sys_exit, path as sys_path
-from threading import Lock
 from traceback import format_exc
 
 sys_path.extend(
@@ -25,10 +24,14 @@ from jobs import cache_file, cache_hash, file_hash, is_cached_file
 logger = setup_logger("JOBS.mmdb-asn", getenv("LOG_LEVEL", "INFO"))
 status = 0
 
+db = Database(
+    logger,
+    sqlalchemy_string=getenv("DATABASE_URI", None),
+)
+
 try:
     # Don't go further if the cache is fresh
-    if is_cached_file("/var/cache/bunkerweb/asn.mmdb", "month"):
+    if is_cached_file("/var/cache/bunkerweb/asn.mmdb", "month", db):
         logger.info("asn.mmdb is already in cache, skipping download...")
         _exit(0)
@@ -52,8 +55,7 @@ try:
     # Decompress it
     logger.info("Decompressing mmdb file ...")
-    file_content = decompress(file_content)
-    Path(f"/var/tmp/bunkerweb/asn.mmdb").write_bytes(file_content)
+    Path(f"/var/tmp/bunkerweb/asn.mmdb").write_bytes(decompress(file_content))
 
     # Try to load it
     logger.info("Checking if mmdb file is valid ...")
@@ -62,7 +64,7 @@ try:
     # Check if file has changed
     new_hash = file_hash("/var/tmp/bunkerweb/asn.mmdb")
-    old_hash = cache_hash("/var/cache/bunkerweb/asn.mmdb")
+    old_hash = cache_hash("/var/cache/bunkerweb/asn.mmdb", db)
     if new_hash == old_hash:
         logger.info("New file is identical to cache file, reload is not needed")
         _exit(0)
@@ -70,27 +72,12 @@ try:
     # Move it to cache folder
     logger.info("Moving mmdb file to cache ...")
     cached, err = cache_file(
-        "/var/tmp/bunkerweb/asn.mmdb", "/var/cache/bunkerweb/asn.mmdb", new_hash
+        "/var/tmp/bunkerweb/asn.mmdb", "/var/cache/bunkerweb/asn.mmdb", new_hash, db
     )
     if not cached:
         logger.error(f"Error while caching mmdb file : {err}")
         _exit(2)
-    db = Database(
-        logger,
-        sqlalchemy_string=getenv("DATABASE_URI", None),
-    )
-    lock = Lock()
-    # Update db
-    with lock:
-        err = db.update_job_cache(
-            "mmdb-asn", None, "asn.mmdb", file_content, checksum=new_hash
-        )
-    if err:
-        logger.warning(f"Couldn't update db cache: {err}")
-
     # Success
     logger.info(f"Downloaded new mmdb from {mmdb_url}")

View file

@@ -5,7 +5,6 @@ from gzip import decompress
 from os import _exit, getenv
 from pathlib import Path
 from sys import exit as sys_exit, path as sys_path
-from threading import Lock
 from traceback import format_exc
 
 sys_path.extend(
@@ -27,8 +26,35 @@ logger = setup_logger("JOBS.mmdb-country", getenv("LOG_LEVEL", "INFO"))
 status = 0
 
 try:
+    # Only download mmdb if the country blacklist or whitelist is enabled
+    dl_mmdb = False
+    # Multisite case
+    if getenv("MULTISITE", "no") == "yes":
+        for first_server in getenv("SERVER_NAME", "").split(" "):
+            if getenv(
+                f"{first_server}_BLACKLIST_COUNTRY", getenv("BLACKLIST_COUNTRY")
+            ) or getenv(
+                f"{first_server}_WHITELIST_COUNTRY", getenv("WHITELIST_COUNTRY")
+            ):
+                dl_mmdb = True
+                break
+    # Singlesite case
+    elif getenv("BLACKLIST_COUNTRY") or getenv("WHITELIST_COUNTRY"):
+        dl_mmdb = True
+
+    if not dl_mmdb:
+        logger.info(
+            "Country blacklist or whitelist is not enabled, skipping download..."
+        )
+        _exit(0)
+
+    db = Database(
+        logger,
+        sqlalchemy_string=getenv("DATABASE_URI", None),
+    )
+
     # Don't go further if the cache is fresh
-    if is_cached_file("/var/cache/bunkerweb/country.mmdb", "month"):
+    if is_cached_file("/var/cache/bunkerweb/country.mmdb", "month", db):
         logger.info("country.mmdb is already in cache, skipping download...")
         _exit(0)
@@ -52,8 +78,7 @@ try:
     # Decompress it
     logger.info("Decompressing mmdb file ...")
-    file_content = decompress(file_content)
-    Path(f"/var/tmp/bunkerweb/country.mmdb").write_bytes(file_content)
+    Path(f"/var/tmp/bunkerweb/country.mmdb").write_bytes(decompress(file_content))
 
     # Try to load it
     logger.info("Checking if mmdb file is valid ...")
@@ -62,7 +87,7 @@ try:
     # Check if file has changed
     new_hash = file_hash("/var/tmp/bunkerweb/country.mmdb")
-    old_hash = cache_hash("/var/cache/bunkerweb/country.mmdb")
+    old_hash = cache_hash("/var/cache/bunkerweb/country.mmdb", db)
     if new_hash == old_hash:
         logger.info("New file is identical to cache file, reload is not needed")
         _exit(0)
@@ -70,27 +95,15 @@ try:
     # Move it to cache folder
     logger.info("Moving mmdb file to cache ...")
     cached, err = cache_file(
-        "/var/tmp/bunkerweb/country.mmdb", "/var/cache/bunkerweb/country.mmdb", new_hash
+        "/var/tmp/bunkerweb/country.mmdb",
+        "/var/cache/bunkerweb/country.mmdb",
+        new_hash,
+        db,
     )
     if not cached:
         logger.error(f"Error while caching mmdb file : {err}")
         _exit(2)
-    db = Database(
-        logger,
-        sqlalchemy_string=getenv("DATABASE_URI", None),
-    )
-    lock = Lock()
-    # Update db
# Update db
with lock:
err = db.update_job_cache(
"mmdb-country", None, "country.mmdb", file_content, checksum=new_hash
)
if err:
logger.warning(f"Couldn't update db cache: {err}")
# Success # Success
logger.info(f"Downloaded new mmdb from {mmdb_url}") logger.info(f"Downloaded new mmdb from {mmdb_url}")
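The new `dl_mmdb` guard relies on the multisite convention that a per-server variable like `app1.example.com_BLACKLIST_COUNTRY` overrides the global `BLACKLIST_COUNTRY`, falling back to it when unset. A standalone sketch of that check (the function name is illustrative, the environment-variable names come from the diff):

```python
from os import getenv


def country_list_enabled() -> bool:
    """Return True when any server enables a country black/whitelist.

    In multisite mode each server is checked with its prefixed variable,
    falling back to the global one; otherwise only the globals count."""
    if getenv("MULTISITE", "no") == "yes":
        for server in getenv("SERVER_NAME", "").split(" "):
            if getenv(
                f"{server}_BLACKLIST_COUNTRY", getenv("BLACKLIST_COUNTRY")
            ) or getenv(f"{server}_WHITELIST_COUNTRY", getenv("WHITELIST_COUNTRY")):
                return True
        return False
    return bool(getenv("BLACKLIST_COUNTRY") or getenv("WHITELIST_COUNTRY"))
```

Skipping the download entirely when no service uses the feature is the actual optimization here: the mmdb fetch and decompress ran on every schedule before, whether or not anything consumed it.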


@@ -99,17 +99,15 @@ try:
                 )
                 if Path(f"/etc/letsencrypt/live/{first_server}/cert.pem").exists():
-                    cert = Path(
-                        f"/etc/letsencrypt/live/{first_server}/cert.pem"
-                    ).read_bytes()
                     # Update db
                     with lock:
                         err = db.update_job_cache(
                             "certbot-new",
                             first_server,
                             "cert.pem",
-                            cert,
+                            Path(
+                                f"/etc/letsencrypt/live/{first_server}/cert.pem"
+                            ).read_bytes(),
                         )
                     if err:
@@ -139,17 +137,15 @@ try:
                 )
                 if Path(f"/etc/letsencrypt/live/{first_server}/cert.pem").exists():
-                    cert = Path(
-                        f"/etc/letsencrypt/live/{first_server}/cert.pem"
-                    ).read_bytes()
                     # Update db
                     with lock:
                         err = db.update_job_cache(
                             "certbot-new",
                             first_server,
                             "cert.pem",
-                            cert,
+                            Path(
                                f"/etc/letsencrypt/live/{first_server}/cert.pem"
+                            ).read_bytes(),
                         )
                     if err:


@@ -11,7 +11,7 @@ function letsencrypt:initialize()
 end
 function letsencrypt:access()
-	if string.sub(ngx.var.uri, 1, string.len("/.well-known/acme-challenge/")) == "/.well-known/acme-challenge/" then
+	if string.sub(ngx.ctx.bw.uri, 1, string.len("/.well-known/acme-challenge/")) == "/.well-known/acme-challenge/" then
 		self.logger:log(ngx.NOTICE, "got a visit from Let's Encrypt, let's whitelist it")
 		return self:ret(true, "visit from LE", ngx.OK)
 	end
@@ -19,8 +19,8 @@ function letsencrypt:access()
 end
 function letsencrypt:api()
-	if not string.match(ngx.var.uri, "^/lets%-encrypt/challenge$") or
-		(ngx.var.request_method ~= "POST" and ngx.var.request_method ~= "DELETE") then
+	if not string.match(ngx.ctx.bw.uri, "^/lets%-encrypt/challenge$") or
+		(ngx.ctx.bw.request_method ~= "POST" and ngx.ctx.bw.request_method ~= "DELETE") then
 		return false, nil, nil
 	end
 	local acme_folder = "/var/tmp/bunkerweb/lets-encrypt/.well-known/acme-challenge/"
@@ -30,7 +30,7 @@ function letsencrypt:api()
 		return true, ngx.HTTP_BAD_REQUEST, { status = "error", msg = "json body decoding failed" }
 	end
 	os.execute("mkdir -p " .. acme_folder)
-	if ngx.var.request_method == "POST" then
+	if ngx.ctx.bw.request_method == "POST" then
 		local file, err = io.open(acme_folder .. data.token, "w+")
 		if not file then
 			return true, ngx.HTTP_INTERNAL_SERVER_ERROR, { status = "error", msg = "can't write validation token : " .. err }
@@ -38,7 +38,7 @@ function letsencrypt:api()
 		file:write(data.validation)
 		file:close()
 		return true, ngx.HTTP_OK, { status = "success", msg = "validation token written" }
-	elseif ngx.var.request_method == "DELETE" then
+	elseif ngx.ctx.bw.request_method == "DELETE" then
 		local ok, err = os.remove(acme_folder .. data.token)
 		if not ok then
 			return true, ngx.HTTP_INTERNAL_SERVER_ERROR, { status = "error", msg = "can't remove validation token : " .. err }


@@ -16,29 +16,28 @@ function limit:initialize()
 		self.logger:log(ngx.ERR, err)
 	end
 	self.use_redis = use_redis == "yes"
+	self.clusterstore = clusterstore:new()
 	-- Load rules if needed
-	if ngx.get_phase() == "access" then
-		if self.variables["USE_LIMIT_REQ"] == "yes" then
-			-- Get all rules from datastore
-			local limited = false
-			local all_rules, err = self.datastore:get("plugin_limit_rules")
-			if not all_rules then
-				self.logger:log(ngx.ERR, err)
-				return
-			end
-			all_rules = cjson.decode(all_rules)
-			self.rules = {}
-			-- Extract global rules
-			if all_rules.global then
-				for k, v in pairs(all_rules.global) do
-					self.rules[k] = v
-				end
-			end
-			-- Extract and overwrite if needed server rules
-			if all_rules[ngx.var.server_name] then
-				for k, v in pairs(all_rules[ngx.var.server_name]) do
-					self.rules[k] = v
-				end
-			end
+	if ngx.get_phase() ~= "init" and self.variables["USE_LIMIT_REQ"] == "yes" then
+		-- Get all rules from datastore
+		local limited = false
+		local all_rules, err = self.datastore:get("plugin_limit_rules")
+		if not all_rules then
+			self.logger:log(ngx.ERR, err)
+			return
+		end
+		all_rules = cjson.decode(all_rules)
+		self.rules = {}
+		-- Extract global rules
+		if all_rules.global then
+			for k, v in pairs(all_rules.global) do
+				self.rules[k] = v
+			end
+		end
+		-- Extract and overwrite if needed server rules
+		if all_rules[ngx.ctx.bw.server_name] then
+			for k, v in pairs(all_rules[ngx.ctx.bw.server_name]) do
+				self.rules[k] = v
+			end
 		end
 	end
@@ -50,7 +49,7 @@ function limit:init()
 	if init_needed == nil then
 		return self:ret(false, err)
 	end
-	if not init_needed then
+	if not init_needed or self.is_loading then
 		return self:ret(true, "no service uses Limit for requests, skipping init")
 	end
 	-- Get variables
@@ -83,7 +82,7 @@ end
 function limit:access()
 	-- Check if we are whitelisted
-	if ngx.var.is_whitelisted == "yes" then
+	if ngx.ctx.bw.is_whitelisted == "yes" then
 		return self:ret(true, "client is whitelisted")
 	end
 	-- Check if access is needed
@@ -94,7 +93,7 @@ function limit:access()
 	local rate = nil
 	local uri = nil
 	for k, v in pairs(self.rules) do
-		if k ~= "/" and ngx.var.uri:match(k) then
+		if k ~= "/" and ngx.ctx.bw.uri:match(k) then
 			rate = v
 			uri = k
 			break
@@ -105,7 +104,7 @@ function limit:access()
 			rate = self.rules["/"]
 			uri = "/"
 		else
-			return self:ret(true, "no rule for " .. ngx.var.uri)
+			return self:ret(true, "no rule for " .. ngx.ctx.bw.uri)
 		end
 	end
 	-- Check if limit is reached
@@ -116,10 +115,10 @@ function limit:access()
 	end
 	-- Limit reached
 	if limited then
-		return self:ret(true, "client IP " .. ngx.var.remote_addr .. " is limited for URL " .. ngx.var.uri .. " (current rate = " .. current_rate .. "r/" .. rate_time .. " and max rate = " .. rate .. ")", ngx.HTTP_TOO_MANY_REQUESTS)
+		return self:ret(true, "client IP " .. ngx.ctx.bw.remote_addr .. " is limited for URL " .. ngx.ctx.bw.uri .. " (current rate = " .. current_rate .. "r/" .. rate_time .. " and max rate = " .. rate .. ")", ngx.HTTP_TOO_MANY_REQUESTS)
 	end
 	-- Limit not reached
-	return self:ret(true, "client IP " .. ngx.var.remote_addr .. " is not limited for URL " .. ngx.var.uri .. " (current rate = " .. current_rate .. "r/" .. rate_time .. " and max rate = " .. rate .. ")")
+	return self:ret(true, "client IP " .. ngx.ctx.bw.remote_addr .. " is not limited for URL " .. ngx.ctx.bw.uri .. " (current rate = " .. current_rate .. "r/" .. rate_time .. " and max rate = " .. rate .. ")")
 end
 function limit:limit_req(rate_max, rate_time)
@@ -132,7 +131,7 @@ function limit:limit_req(rate_max, rate_time)
 	else
 		timestamps = redis_timestamps
 		-- Save the new timestamps
-		local ok, err = self.datastore:set("plugin_limit_cache_" .. ngx.var.server_name .. ngx.var.remote_addr .. ngx.var.uri, cjson.encode(timestamps), delay)
+		local ok, err = self.datastore:set("plugin_limit_cache_" .. ngx.ctx.bw.server_name .. ngx.ctx.bw.remote_addr .. ngx.ctx.bw.uri, cjson.encode(timestamps), delay)
 		if not ok then
 			return nil, "can't update timestamps : " .. err
 		end
@@ -154,7 +153,7 @@ end
 function limit:limit_req_local(rate_max, rate_time)
 	-- Get timestamps
-	local timestamps, err = self.datastore:get("plugin_limit_cache_" .. ngx.var.server_name .. ngx.var.remote_addr .. ngx.var.uri)
+	local timestamps, err = self.datastore:get("plugin_limit_cache_" .. ngx.ctx.bw.server_name .. ngx.ctx.bw.remote_addr .. ngx.ctx.bw.uri)
 	if not timestamps and err ~= "not found" then
 		return nil, err
 	elseif err == "not found" then
@@ -165,7 +164,7 @@ function limit:limit_req_local(rate_max, rate_time)
 	local updated, new_timestamps, delay = self:limit_req_timestamps(rate_max, rate_time, timestamps)
 	-- Save new timestamps if needed
 	if updated then
-		local ok, err = self.datastore:set("plugin_limit_cache_" .. ngx.var.server_name .. ngx.var.remote_addr .. ngx.var.uri, cjson.encode(timestamps), delay)
+		local ok, err = self.datastore:set("plugin_limit_cache_" .. ngx.ctx.bw.server_name .. ngx.ctx.bw.remote_addr .. ngx.ctx.bw.uri, cjson.encode(new_timestamps), delay)
 		if not ok then
 			return nil, err
 		end
@@ -174,38 +173,69 @@ function limit:limit_req_local(rate_max, rate_time)
 end
 function limit:limit_req_redis(rate_max, rate_time)
-	-- Connect to server
-	local cstore, err = clusterstore:new()
-	if not cstore then
-		return nil, err
-	end
-	local ok, err = clusterstore:connect()
+	-- Redis atomic script
+	local redis_script = [[
+		local ret_get = redis.pcall("GET", KEYS[1])
+		if type(ret_get) == "table" and ret_get["err"] ~= nil then
+			redis.log(redis.LOG_WARNING, "limit GET error : " .. ret_get["err"])
+			return ret_get
+		end
+		local timestamps = {}
+		if ret_get then
+			timestamps = cjson.decode(ret_get)
+		end
+		-- Keep only timestamps within the delay
+		local updated = false
+		local new_timestamps = {}
+		local rate_max = tonumber(ARGV[1])
+		local rate_time = ARGV[2]
+		local current_timestamp = tonumber(ARGV[3])
+		local delay = 0
+		if rate_time == "s" then
+			delay = 1
+		elseif rate_time == "m" then
+			delay = 60
+		elseif rate_time == "h" then
+			delay = 3600
+		elseif rate_time == "d" then
+			delay = 86400
+		end
+		for i, timestamp in ipairs(timestamps) do
+			if current_timestamp - timestamp <= delay then
+				table.insert(new_timestamps, timestamp)
+			else
+				updated = true
+			end
+		end
+		-- Only insert the new timestamp if client is not limited already to avoid infinite insert
+		if #new_timestamps <= rate_max then
+			table.insert(new_timestamps, current_timestamp)
+			updated = true
+		end
+		-- Save new timestamps if needed
+		if updated then
+			local ret_set = redis.pcall("SET", KEYS[1], cjson.encode(new_timestamps), "EX", delay)
+			if type(ret_set) == "table" and ret_set["err"] ~= nil then
+				redis.log(redis.LOG_WARNING, "limit SET error : " .. ret_set["err"])
+				return ret_set
+			end
+		end
+		return new_timestamps
+	]]
+	-- Connect
+	local ok, err = self.clusterstore:connect()
 	if not ok then
 		return nil, err
 	end
-	-- Get timestamps
-	local timestamps, err = clusterstore:call("get", "limit_" .. ngx.var.server_name .. ngx.var.remote_addr .. ngx.var.uri)
-	if err then
-		clusterstore:close()
+	-- Execute script
+	local timestamps, err = self.clusterstore:call("eval", redis_script, 1, "limit_" .. ngx.ctx.bw.server_name .. ngx.ctx.bw.remote_addr .. ngx.ctx.bw.uri, rate_max, rate_time, os.time(os.date("!*t")))
+	if not timestamps then
+		self.clusterstore:close()
 		return nil, err
 	end
-	if timestamps then
-		timestamps = cjson.decode(timestamps)
-	else
-		timestamps = {}
-	end
-	-- Compute new timestamps
-	local updated, new_timestamps, delay = self:limit_req_timestamps(rate_max, rate_time, timestamps)
-	-- Save new timestamps if needed
-	if updated then
-		local ok, err = clusterstore:call("set", "limit_" .. ngx.var.server_name .. ngx.var.remote_addr .. ngx.var.uri, cjson.encode(new_timestamps), "EX", delay)
-		if not ok then
-			clusterstore:close()
-			return nil, err
-		end
-	end
-	clusterstore:close()
-	return new_timestamps, "success"
+	-- Return timestamps
+	self.clusterstore:close()
+	return timestamps, "success"
 end
 function limit:limit_req_timestamps(rate_max, rate_time, timestamps)
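The embedded Redis script performs the same sliding-window bookkeeping as `limit:limit_req_timestamps`, but atomically on the Redis server, replacing the racy GET, compute, SET round trip of the old code. The window logic itself can be sketched in Python (the function name mirrors the Lua one; the returned `limited` flag is an illustrative addition, not the plugin's API):

```python
import time

# Window length in seconds for each supported rate unit
DELAYS = {"s": 1, "m": 60, "h": 3600, "d": 86400}


def limit_req_timestamps(rate_max, rate_time, timestamps, now=None):
    """Drop timestamps outside the window, then record the current request
    unless the client is already over the limit (this avoids growing the
    list forever for a client that keeps hammering)."""
    now = int(time.time()) if now is None else now
    delay = DELAYS[rate_time]
    new_timestamps = [ts for ts in timestamps if now - ts <= delay]
    updated = len(new_timestamps) != len(timestamps)
    if len(new_timestamps) <= rate_max:
        new_timestamps.append(now)
        updated = True
    limited = len(new_timestamps) > rate_max
    return updated, new_timestamps, limited
```

Note the same subtlety the script's comment calls out: the new timestamp is appended even when it makes the count exceed `rate_max`, so the caller can detect "limit just reached", but never beyond that, so the stored list stays bounded.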


@@ -1,6 +1,6 @@
 {
   "id": "limit",
-  "order": 7,
+  "order": 8,
   "name": "Limit",
   "description": "Limit maximum number of requests and connections.",
   "version": "0.1",


@@ -67,8 +67,13 @@ try:
     # Create directory if it doesn't exist
     Path("/var/cache/bunkerweb/realip").mkdir(parents=True, exist_ok=True)
+    db = Database(
+        logger,
+        sqlalchemy_string=getenv("DATABASE_URI", None),
+    )
     # Don't go further if the cache is fresh
-    if is_cached_file("/var/cache/bunkerweb/realip/combined.list", "hour"):
+    if is_cached_file("/var/cache/bunkerweb/realip/combined.list", "hour", db):
         logger.info("RealIP list is already in cache, skipping download...")
         _exit(0)
@@ -106,7 +111,7 @@ try:
     # Check if file has changed
     new_hash = file_hash("/var/tmp/bunkerweb/realip-combined.list")
-    old_hash = cache_hash("/var/cache/bunkerweb/realip/combined.list")
+    old_hash = cache_hash("/var/cache/bunkerweb/realip/combined.list", db)
     if new_hash == old_hash:
         logger.info("New file is identical to cache file, reload is not needed")
         _exit(0)
@@ -116,30 +121,12 @@ try:
         "/var/tmp/bunkerweb/realip-combined.list",
         "/var/cache/bunkerweb/realip/combined.list",
         new_hash,
+        db,
     )
     if not cached:
         logger.error(f"Error while caching list : {err}")
         _exit(2)
-    db = Database(
-        logger,
-        sqlalchemy_string=getenv("DATABASE_URI", None),
-    )
-    lock = Lock()
-    # Update db
-    with lock:
-        err = db.update_job_cache(
-            "realip-download",
-            None,
-            "combined.list",
-            content,
-            checksum=new_hash,
-        )
-    if err:
-        logger.warning(f"Couldn't update db cache: {err}")
     logger.info(f"Downloaded {i} trusted IP/net")
     status = 1


@@ -13,10 +13,10 @@ end
 function redis:init()
 	-- Check if init is needed
-	if self.variables["USE_REDIS"] then
-		return self:ret(true, "redis not used")
+	if self.variables["USE_REDIS"] ~= "yes" or self.is_loading then
+		return self:ret(true, "init not needed")
 	end
-	-- Check redis connection
+	-- Check redis connection ()
 	local ok, err = clusterstore:connect()
 	if not ok then
 		return self:ret(false, "redis connect error : " .. err)


@@ -1,6 +1,6 @@
 {
   "id": "reversescan",
-  "order": 5,
+  "order": 6,
   "name": "Reverse scan",
   "description": "Scan clients ports to detect proxies or servers.",
   "version": "0.1",


@@ -25,28 +25,28 @@ function reversescan:access()
 	-- Loop on ports
 	for port in self.variables["REVERSE_SCAN_PORTS"]:gmatch("%S+") do
 		-- Check if the scan is already cached
-		local cached, err = self:is_in_cache(ngx.var.remote_addr .. ":" .. port)
+		local cached, err = self:is_in_cache(ngx.ctx.bw.remote_addr .. ":" .. port)
 		if cached == nil then
 			return self:ret(false, "error getting cache from datastore : " .. err)
 		end
 		if cached == "open" then
-			return self:ret(true, "port " .. port .. " is opened for IP " .. ngx.var.remote_addr, utils.get_deny_status())
+			return self:ret(true, "port " .. port .. " is opened for IP " .. ngx.ctx.bw.remote_addr, utils.get_deny_status())
 		elseif not cached then
 			-- Do the scan
-			local res, err = self:scan(ngx.var.remote_addr, tonumber(port), tonumber(self.variables["REVERSE_SCAN_TIMEOUT"]))
+			local res, err = self:scan(ngx.ctx.bw.remote_addr, tonumber(port), tonumber(self.variables["REVERSE_SCAN_TIMEOUT"]))
 			-- Cache the result
-			local ok, err = self:add_to_cache(ngx.var.remote_addr .. ":" .. port, res)
+			local ok, err = self:add_to_cache(ngx.ctx.bw.remote_addr .. ":" .. port, res)
 			if not ok then
 				return self:ret(false, "error updating cache from datastore : " .. err)
 			end
 			-- Deny request if port is open
 			if res == "open" then
-				return self:ret(true, "port " .. port .. " is opened for IP " .. ngx.var.remote_addr, utils.get_deny_status())
+				return self:ret(true, "port " .. port .. " is opened for IP " .. ngx.ctx.bw.remote_addr, utils.get_deny_status())
 			end
 		end
 	end
 	-- No port opened
-	return self:ret(true, "no port open for IP " .. ngx.var.remote_addr)
+	return self:ret(true, "no port open for IP " .. ngx.ctx.bw.remote_addr)
 end
 function reversescan:scan(ip, port, timeout)
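`reversescan:scan` boils down to a TCP connect attempt with a timeout against the client's own IP. A hedged Python equivalent of that idea (the actual plugin uses nginx cosockets; this standard-library sketch is only illustrative):

```python
import socket


def scan(ip: str, port: int, timeout: float) -> str:
    """Attempt a plain TCP connect; "open" on success, "closed" otherwise."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        # connect_ex returns 0 when the handshake succeeds
        return "open" if sock.connect_ex((ip, port)) == 0 else "closed"
    finally:
        sock.close()
```

Caching per `ip:port`, as the plugin does, matters here: every uncached scan costs up to `REVERSE_SCAN_TIMEOUT` seconds of request latency per port.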


@@ -42,21 +42,23 @@ def generate_cert(first_server, days, subj):
         return False, 2
     # Update db
-    key_data = Path(f"/var/cache/bunkerweb/selfsigned/{first_server}.key").read_bytes()
     with lock:
         err = db.update_job_cache(
-            "self-signed", first_server, f"{first_server}.key", key_data
+            "self-signed",
+            first_server,
+            f"{first_server}.key",
+            Path(f"/var/cache/bunkerweb/selfsigned/{first_server}.key").read_bytes(),
         )
     if err:
         logger.warning(f"Couldn't update db cache for {first_server}.key file: {err}")
-    pem_data = Path(f"/var/cache/bunkerweb/selfsigned/{first_server}.pem").read_bytes()
     with lock:
         err = db.update_job_cache(
-            "self-signed", first_server, f"{first_server}.pem", pem_data
+            "self-signed",
+            first_server,
+            f"{first_server}.pem",
+            Path(f"/var/cache/bunkerweb/selfsigned/{first_server}.pem").read_bytes(),
         )
     if err:


@@ -11,21 +11,26 @@ function sessions:initialize()
 end
 function sessions:init()
+	if self.is_loading then
+		return self:ret(true, "init not needed")
+	end
 	-- Get redis vars
 	local redis_vars = {
 		["USE_REDIS"] = "",
 		["REDIS_HOST"] = "",
 		["REDIS_PORT"] = "",
+		["REDIS_DATABASE"] = "",
 		["REDIS_SSL"] = "",
 		["REDIS_TIMEOUT"] = "",
 		["REDIS_KEEPALIVE_IDLE"] = "",
 		["REDIS_KEEPALIVE_POOL"] = ""
 	}
 	for k, v in pairs(redis_vars) do
-		local var, err = utils.get_variable(k, false)
-		if var == nil then
+		local value, err = utils.get_variable(k, false)
+		if value == nil then
 			return self:ret(false, "can't get " .. k .. " variable : " .. err)
 		end
+		redis_vars[k] = value
 	end
 	-- Init configuration
 	local config = {
@@ -55,7 +60,7 @@ function sessions:init()
 		pool_size = tonumber(redis_vars["REDIS_KEEPALIVE_POOL"]),
 		ssl = redis_vars["REDIS_SSL"] == "yes",
 		host = redis_vars["REDIS_HOST"],
-		port = tonumber(redis_vars["REDIS_HOST"]),
+		port = tonumber(redis_vars["REDIS_PORT"]),
 		database = tonumber(redis_vars["REDIS_DATABASE"])
 	}
 end


@@ -6,7 +6,6 @@ from os import _exit, getenv
 from pathlib import Path
 from re import IGNORECASE, compile as re_compile
 from sys import exit as sys_exit, path as sys_path
-from threading import Lock
 from traceback import format_exc
 from typing import Tuple
@@ -80,6 +79,11 @@ try:
         logger.info("Whitelist is not activated, skipping downloads...")
         _exit(0)
+    db = Database(
+        logger,
+        sqlalchemy_string=getenv("DATABASE_URI", None),
+    )
     # Create directories if they don't exist
     Path("/var/cache/bunkerweb/whitelist").mkdir(parents=True, exist_ok=True)
     Path("/var/tmp/bunkerweb/whitelist").mkdir(parents=True, exist_ok=True)
@@ -97,7 +101,9 @@ try:
     }
     all_fresh = True
     for kind in kinds_fresh:
-        if not is_cached_file(f"/var/cache/bunkerweb/whitelist/{kind}.list", "hour"):
+        if not is_cached_file(
+            f"/var/cache/bunkerweb/whitelist/{kind}.list", "hour", db
+        ):
             kinds_fresh[kind] = False
             all_fresh = False
             logger.info(
@@ -150,7 +156,7 @@ try:
             logger.info(f"Downloaded {i} good {kind}")
             # Check if file has changed
             new_hash = file_hash(f"/var/tmp/bunkerweb/whitelist/{kind}.list")
-            old_hash = cache_hash(f"/var/cache/bunkerweb/whitelist/{kind}.list")
+            old_hash = cache_hash(f"/var/cache/bunkerweb/whitelist/{kind}.list", db)
             if new_hash == old_hash:
                 logger.info(
                     f"New file {kind}.list is identical to cache file, reload is not needed",
@@ -164,30 +170,12 @@ try:
                 f"/var/tmp/bunkerweb/whitelist/{kind}.list",
                 f"/var/cache/bunkerweb/whitelist/{kind}.list",
                 new_hash,
+                db,
             )
             if not cached:
                 logger.error(f"Error while caching whitelist : {err}")
                 status = 2
-            else:
-                db = Database(
-                    logger,
-                    sqlalchemy_string=getenv("DATABASE_URI", None),
-                )
-                lock = Lock()
-                # Update db
-                with lock:
-                    err = db.update_job_cache(
-                        "whitelist-download",
-                        None,
-                        f"{kind}.list",
-                        content,
-                        checksum=new_hash,
-                    )
-                if err:
-                    logger.warning(f"Couldn't update db cache: {err}")
-                status = 1
 except:
     status = 2
     logger.error(


@ -18,29 +18,38 @@ function whitelist:initialize()
self.logger:log(ngx.ERR, err) self.logger:log(ngx.ERR, err)
end end
self.use_redis = use_redis == "yes" self.use_redis = use_redis == "yes"
-- Check if init is needed
if ngx.get_phase() == "init" then
local init_needed, err = utils.has_variable("USE_WHITELIST", "yes")
if init_needed == nil then
self.logger:log(ngx.ERR, err)
end
self.init_needed = init_needed
-- Decode lists -- Decode lists
else if ngx.get_phase() ~= "init" and self.variables["USE_WHITELIST"] == "yes" then
local lists, err = self.datastore:get("plugin_whitelist_lists") local lists, err = self.datastore:get("plugin_whitelist_lists")
if not lists then if not lists then
self.logger:log(ngx.ERR, err) self.logger:log(ngx.ERR, err)
else else
self.lists = cjson.decode(lists) self.lists = cjson.decode(lists)
end end
local kinds = {
["IP"] = {},
["RDNS"] = {},
["ASN"] = {},
["USER_AGENT"] = {},
["URI"] = {}
}
for kind, _ in pairs(kinds) do
for data in self.variables["WHITELIST_" .. kind]:gmatch("%S+") do
table.insert(self.lists[kind], data)
end
end
end end
-- Instantiate cachestore -- Instantiate cachestore
self.cachestore = cachestore:new(self.use_redis) self.cachestore = cachestore:new(self.use_redis and ngx.get_phase() == "access")
end end
function whitelist:init() function whitelist:init()
-- Check if init is needed -- Check if init is needed
if not self.init_needed then local init_needed, err = utils.has_variable("USE_WHITELIST", "yes")
if init_needed == nil then
return self:ret(false, "can't check USE_WHITELIST variable : " .. err)
end
if not init_needed or self.is_loading then
return self:ret(true, "init not needed") return self:ret(true, "init not needed")
end end
-- Read whitelists -- Read whitelists
@ -73,6 +82,7 @@ end
function whitelist:set() function whitelist:set()
-- Set default value -- Set default value
ngx.var.is_whitelisted = "no" ngx.var.is_whitelisted = "no"
ngx.ctx.bw.is_whitelisted = "no"
env.set("is_whitelisted", "no") env.set("is_whitelisted", "no")
-- Check if set is needed -- Check if set is needed
if self.variables["USE_WHITELIST"] ~= "yes" then if self.variables["USE_WHITELIST"] ~= "yes" then
@ -84,6 +94,7 @@ function whitelist:set()
return self:ret(false, err) return self:ret(false, err)
elseif whitelisted then elseif whitelisted then
ngx.var.is_whitelisted = "yes" ngx.var.is_whitelisted = "yes"
ngx.ctx.bw.is_whitelisted = "yes"
env.set("is_whitelisted", "yes") env.set("is_whitelisted", "yes")
return self:ret(true, err) return self:ret(true, err)
end end
@ -101,6 +112,7 @@ function whitelist:access()
return self:ret(false, err) return self:ret(false, err)
elseif whitelisted then elseif whitelisted then
ngx.var.is_whitelisted = "yes" ngx.var.is_whitelisted = "yes"
ngx.ctx.bw.is_whitelisted = "yes"
env.set("is_whitelisted", "yes") env.set("is_whitelisted", "yes")
return self:ret(true, err, ngx.OK) return self:ret(true, err, ngx.OK)
end end
@ -117,8 +129,9 @@ function whitelist:access()
end end
if whitelisted ~= "ok" then if whitelisted ~= "ok" then
ngx.var.is_whitelisted = "yes" ngx.var.is_whitelisted = "yes"
ngx.ctx.bw.is_whitelisted = "yes"
env.set("is_whitelisted", "yes") env.set("is_whitelisted", "yes")
return self:ret(true, k + " is whitelisted (info : " .. whitelisted .. ")", ngx.OK) return self:ret(true, k .. " is whitelisted (info : " .. whitelisted .. ")", ngx.OK)
end end
end end
end end
@ -133,24 +146,24 @@ end
function whitelist:kind_to_ele(kind) function whitelist:kind_to_ele(kind)
if kind == "IP" then if kind == "IP" then
return "ip" .. ngx.var.remote_addr return "ip" .. ngx.ctx.bw.remote_addr
elseif kind == "UA" then elseif kind == "UA" then
return "ua" .. ngx.var.http_user_agent return "ua" .. ngx.ctx.bw.http_user_agent
elseif kind == "URI" then elseif kind == "URI" then
return "uri" .. ngx.var.uri return "uri" .. ngx.ctx.bw.uri
end end
end end
function whitelist:check_cache() function whitelist:check_cache()
-- Check the caches -- Check the caches
local checks = { local checks = {
["IP"] = "ip" .. ngx.var.remote_addr ["IP"] = "ip" .. ngx.ctx.bw.remote_addr
} }
if ngx.var.http_user_agent then if ngx.ctx.bw.http_user_agent then
checks["UA"] = "ua" .. ngx.var.http_user_agent checks["UA"] = "ua" .. ngx.ctx.bw.http_user_agent
end end
if ngx.var.uri then if ngx.ctx.bw.uri then
checks["URI"] = "uri" .. ngx.var.uri checks["URI"] = "uri" .. ngx.ctx.bw.uri
end end
local already_cached = { local already_cached = {
["IP"] = false, ["IP"] = false,
@ -162,7 +175,7 @@ function whitelist:check_cache()
if not ok then if not ok then
self.logger:log(ngx.ERR, "error while checking cache : " .. cached) self.logger:log(ngx.ERR, "error while checking cache : " .. cached)
elseif cached and cached ~= "ok" then elseif cached and cached ~= "ok" then
return true, k + " is in cached whitelist (info : " .. cached .. ")" return true, k .. " is in cached whitelist (info : " .. cached .. ")"
end end
if cached then if cached then
already_cached[k] = true already_cached[k] = true
@ -209,7 +222,7 @@ function whitelist:is_whitelisted_ip()
if not ipm then if not ipm then
return nil, err return nil, err
end end
local match, err = ipm:match(ngx.var.remote_addr) local match, err = ipm:match(ngx.ctx.bw.remote_addr)
if err then if err then
return nil, err return nil, err
end end
@ -219,18 +232,12 @@ function whitelist:is_whitelisted_ip()
-- Check if rDNS is needed -- Check if rDNS is needed
local check_rdns = true local check_rdns = true
local is_global, err = utils.ip_is_global(ngx.var.remote_addr) if self.variables["WHITELIST_RDNS_GLOBAL"] == "yes" and not ngx.ctx.bw.ip_is_global then
if self.variables["WHITELIST_RDNS_GLOBAL"] == "yes" then check_rdns = false
if is_global == nil then
return nil, err
end
if not is_global then
check_rdns = false
end
end end
if check_rdns then if check_rdns then
-- Get rDNS -- Get rDNS
local rdns_list, err = utils.get_rdns(ngx.var.remote_addr) local rdns_list, err = utils.get_rdns(ngx.ctx.bw.remote_addr)
if not rdns_list then if not rdns_list then
return nil, err return nil, err
end end
@@ -245,8 +252,8 @@ function whitelist:is_whitelisted_ip()
end end
-- Check if ASN is in whitelist -- Check if ASN is in whitelist
if is_global then if ngx.ctx.bw.ip_is_global then
local asn, err = utils.get_asn(ngx.var.remote_addr) local asn, err = utils.get_asn(ngx.ctx.bw.remote_addr)
if not asn then if not asn then
return nil, err return nil, err
end end
@@ -264,7 +271,7 @@ end
function whitelist:is_whitelisted_uri() function whitelist:is_whitelisted_uri()
-- Check if URI is in whitelist -- Check if URI is in whitelist
for i, uri in ipairs(self.lists["URI"]) do for i, uri in ipairs(self.lists["URI"]) do
if ngx.var.uri:match(uri) then if ngx.ctx.bw.uri:match(uri) then
return true, "URI " .. uri return true, "URI " .. uri
end end
end end
@@ -275,7 +282,7 @@ end
function whitelist:is_whitelisted_ua() function whitelist:is_whitelisted_ua()
-- Check if UA is in whitelist -- Check if UA is in whitelist
for i, ua in ipairs(self.lists["USER_AGENT"]) do for i, ua in ipairs(self.lists["USER_AGENT"]) do
if ngx.var.http_user_agent:match(ua) then if ngx.ctx.bw.http_user_agent:match(ua) then
return true, "UA " .. ua return true, "UA " .. ua
end end
end end
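The hunks above swap direct `ngx.var` reads for the request context cached in `ngx.ctx.bw`, building one cache key per request attribute (IP always, User-Agent and URI when present). A minimal Python sketch of that key-building step, with a plain dict standing in for the Lua context (names are illustrative):

```python
# Illustrative sketch of the per-attribute cache keys the whitelist plugin
# builds; a dict stands in for ngx.ctx.bw from the Lua version.
def build_checks(ctx: dict) -> dict:
    checks = {"IP": "ip" + ctx["remote_addr"]}  # IP is always checked
    if ctx.get("http_user_agent"):
        checks["UA"] = "ua" + ctx["http_user_agent"]
    if ctx.get("uri"):
        checks["URI"] = "uri" + ctx["uri"]
    return checks
```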

View file

@@ -1501,12 +1501,25 @@ class Database:
) )
} }
def get_job_cache_file(self, job_name: str, file_name: str) -> Optional[Any]: def get_job_cache_file(
self,
job_name: str,
file_name: str,
*,
with_info: bool = False,
with_data: bool = True,
) -> Optional[Any]:
"""Get job cache file.""" """Get job cache file."""
entities = []
if with_info:
entities.extend([Jobs_cache.last_update, Jobs_cache.checksum])
if with_data:
entities.append(Jobs_cache.data)
with self.__db_session() as session: with self.__db_session() as session:
return ( return (
session.query(Jobs_cache) session.query(Jobs_cache)
.with_entities(Jobs_cache.data) .with_entities(*entities)
.filter_by(job_name=job_name, file_name=file_name) .filter_by(job_name=job_name, file_name=file_name)
.first() .first()
) )

View file

@@ -1,4 +1,4 @@
sqlalchemy==2.0.9 sqlalchemy==2.0.10
psycopg2-binary==2.9.6 psycopg2-binary==2.9.6
PyMySQL==1.0.3 PyMySQL==1.0.3
cryptography==40.0.2 cryptography==40.0.2

View file

@@ -225,48 +225,48 @@ pymysql==1.0.3 \
--hash=sha256:3dda943ef3694068a75d69d071755dbecacee1adf9a1fc5b206830d2b67d25e8 \ --hash=sha256:3dda943ef3694068a75d69d071755dbecacee1adf9a1fc5b206830d2b67d25e8 \
--hash=sha256:89fc6ae41c0aeb6e1f7710cdd623702ea2c54d040565767a78b00a5ebb12f4e5 --hash=sha256:89fc6ae41c0aeb6e1f7710cdd623702ea2c54d040565767a78b00a5ebb12f4e5
# via -r requirements.in # via -r requirements.in
sqlalchemy==2.0.9 \ sqlalchemy==2.0.10 \
--hash=sha256:07950fc82f844a2de67ddb4e535f29b65652b4d95e8b847823ce66a6d540a41d \ --hash=sha256:04020aba2c0266ec521095ddd5cb760fc0067b0088828ccbf6b323c900a62e59 \
--hash=sha256:0a865b5ec4ba24f57c33b633b728e43fde77b968911a6046443f581b25d29dd9 \ --hash=sha256:06401013dad015e6f6f72c946f66d750fe4c5ef852ed2f15537d572cb92d7a75 \
--hash=sha256:0b49f1f71d7a44329a43d3edd38cc5ee4c058dfef4487498393d16172007954b \ --hash=sha256:096d9f72882035b4c6906172bf5c5afe4caefbfe0e028ab0c83dfdaa670cc193 \
--hash=sha256:13f984a190d249769a050634b248aef8991acc035e849d02b634ea006c028fa8 \ --hash=sha256:1f5638aac94c8f3fe04ca030e2b3e84d52d70f15d67f35f794fd2057284abced \
--hash=sha256:1b69666e25cc03c602d9d3d460e1281810109e6546739187044fc256c67941ef \ --hash=sha256:1fa90ed075ebc5fefc504c0e35b84fde1880d7c095473c5aa0c01f63eb37beae \
--hash=sha256:1d06e119cf79a3d80ab069f064a07152eb9ba541d084bdaee728d8a6f03fd03d \ --hash=sha256:207c2cc9b946f832fd45fbdd6276c28e3e80b206909a028cd163e87f4080a333 \
--hash=sha256:246712af9fc761d6c13f4f065470982e175d902e77aa4218c9cb9fc9ff565a0c \ --hash=sha256:23e3e1cc3634a70bba2ab10c144d4f11cf0ddeca239bbdaf646770873030c600 \
--hash=sha256:34eb96c1de91d8f31e988302243357bef3f7785e1b728c7d4b98bd0c117dafeb \ --hash=sha256:28c79289b4bf21cf09fb770b124cfae2432bbafb2ffd6758ac280bc1cacabfac \
--hash=sha256:4c3020afb144572c7bfcba9d7cce57ad42bff6e6115dffcfe2d4ae6d444a214f \ --hash=sha256:2bd944dc701be15a91ec965c6634ab90998ca2d14e4f1f568545547a3a3adc16 \
--hash=sha256:4f759eccb66e6d495fb622eb7f4ac146ae674d829942ec18b7f5a35ddf029597 \ --hash=sha256:2fdccadc9359784ae12ae9199849b724c7165220ae93c6066e841b66c6823742 \
--hash=sha256:68ed381bc340b4a3d373dbfec1a8b971f6350139590c4ca3cb722fdb50035777 \ --hash=sha256:300e8165bc78a0a917b39617730caf2c08c399302137c562e5ce7a37780ad10f \
--hash=sha256:6b72dccc5864ea95c93e0a9c4e397708917fb450f96737b4a8395d009f90b868 \ --hash=sha256:39869cf2cfe73c8ad9a6f15712a2ed8c13c1f87646611882efb6a8ec80d180e8 \
--hash=sha256:6e84ab63d25d8564d7a8c05dc080659931a459ee27f6ed1cf4c91f292d184038 \ --hash=sha256:3e77ed2e6d911aafc931c92033262d2979a44317294328b071a53aa10e2a9614 \
--hash=sha256:734805708632e3965c2c40081f9a59263c29ffa27cba9b02d4d92dfd57ba869f \ --hash=sha256:4a1ec8fcbe7e6a6ec28e161c6030d8cf5077e31efc3d08708d8de5aa8314b345 \
--hash=sha256:78612edf4ba50d407d0eb3a64e9ec76e6efc2b5d9a5c63415d53e540266a230a \ --hash=sha256:5892afc393ecd5f20910ff5a6b90d56620ec2ef3e36e3358eaedbae2aa36816d \
--hash=sha256:7e472e9627882f2d75b87ff91c5a2bc45b31a226efc7cc0a054a94fffef85862 \ --hash=sha256:5e8abd2ce0745a2819f3e41a17570c9d74b634a5b5ab5a04de5919e55d5d8601 \
--hash=sha256:865392a50a721445156809c1a6d6ab6437be70c1c2599f591a8849ed95d3c693 \ --hash=sha256:61ea1af2d01e709dcd4edc0d994db42bac6b2673c093cc35df3875e54cad9cef \
--hash=sha256:8d118e233f416d713aac715e2c1101e17f91e696ff315fc9efbc75b70d11e740 \ --hash=sha256:631ea4d1a8d78b43126773fa2de5472d97eb54dc4b9fbae4d8bd910f72f31f25 \
--hash=sha256:8d3ece5960b3e821e43a4927cc851b6e84a431976d3ffe02aadb96519044807e \ --hash=sha256:6b15cadba33d77e6fcee4f4f7706913d143d20e48ce26e9b6578b5cd07d4a353 \
--hash=sha256:93c78d42c14aa9a9e0866eacd5b48df40a50d0e2790ee377af7910d224afddcf \ --hash=sha256:70aed8f508f6c2f4da63ee6fa853534bb97d47bc82e28d56442f62a0b6ad2660 \
--hash=sha256:95719215e3ec7337b9f57c3c2eda0e6a7619be194a5166c07c1e599f6afc20fa \ --hash=sha256:736e92fa4d6e020fc780b915bcdd69749ad32c79bc6b031e85dcd2b8069f8de1 \
--hash=sha256:9838bd247ee42eb74193d865e48dd62eb50e45e3fdceb0fdef3351133ee53dcf \ --hash=sha256:7a8ca39fbc2dfe357f03e398bf5c1421b9b6614a8cf69ccada9ab3ef7e036073 \
--hash=sha256:aa5c270ece17c0c0e0a38f2530c16b20ea05d8b794e46c79171a86b93b758891 \ --hash=sha256:7da5bf86746ddbf8d68f1a3f9d1efee1d95e07d5ad63f47b839f4db799e12566 \
--hash=sha256:ac6a0311fb21a99855953f84c43fcff4bdca27a2ffcc4f4d806b26b54b5cddc9 \ --hash=sha256:88df3327c32468716a52c10e7991268afb552a0a7ef36130925864f28873d2e0 \
--hash=sha256:ad5363a1c65fde7b7466769d4261126d07d872fc2e816487ae6cec93da604b6b \ --hash=sha256:89e7a05639b3ae4fd17062a37b0ee336ea50ac9751e98e3330a6ed95daa4880c \
--hash=sha256:b3e5864eba71a3718236a120547e52c8da2ccb57cc96cecd0480106a0c799c92 \ --hash=sha256:8a3e3f34468a512b3886ac5584384aed8bef388297c710509a842fb1468476f3 \
--hash=sha256:bbda1da8d541904ba262825a833c9f619e93cb3fd1156be0a5e43cd54d588dcd \ --hash=sha256:8c3366be42bca5c066703af54b856e00f23b8fbef9ab0346a58d34245af695a5 \
--hash=sha256:c6e27189ff9aebfb2c02fd252c629ea58657e7a5ff1a321b7fc9c2bf6dc0b5f3 \ --hash=sha256:9a77e29a96779f373eb144040e5fae1e3944916c13360715e74f73b186f0d8d2 \
--hash=sha256:c8239ce63a90007bce479adf5460d48c1adae4b933d8e39a4eafecfc084e503c \ --hash=sha256:a4cdac392547dec07d69c5e8b05374b0357359ebc58ab2bbcb9fa0370ecb715f \
--hash=sha256:d209594e68bec103ad5243ecac1b40bf5770c9ebf482df7abf175748a34f4853 \ --hash=sha256:a9aa445201754a49b7ddb0b99fbe5ccf98f6900548fc60a0a07dde2253dd541e \
--hash=sha256:d5327f54a9c39e7871fc532639616f3777304364a0bb9b89d6033ad34ef6c5f8 \ --hash=sha256:af525e9fbcf7da7404fc4b91ca4ce6172457d3f4390b93941fb97bfe29afb7dc \
--hash=sha256:db4bd1c4792da753f914ff0b688086b9a8fd78bb9bc5ae8b6d2e65f176b81eb9 \ --hash=sha256:b608ad640ac70e2901d111a69ad975e6b0ca39947e08cc28691b0de00831a787 \
--hash=sha256:e4780be0f19e5894c17f75fc8de2fe1ae233ab37827125239ceb593c6f6bd1e2 \ --hash=sha256:d46edd508123413595a17bb64655db7c4bfefa83e721a3064f66e046e9a6a103 \
--hash=sha256:e4a019f723b6c1e6b3781be00fb9e0844bc6156f9951c836ff60787cc3938d76 \ --hash=sha256:d975ac2bc513f530fa2574eb58e0ca731357d4686de2fb644af3036fca4f3fd6 \
--hash=sha256:e62c4e762d6fd2901692a093f208a6a6575b930e9458ad58c2a7f080dd6132da \ --hash=sha256:dcd5793b98eb043703895443cc399fb8e2ce21c9b09757e954e425c8415c541b \
--hash=sha256:e730603cae5747bc6d6dece98b45a57d647ed553c8d5ecef602697b1c1501cf2 \ --hash=sha256:dd40fbf4f916a41b4afe50665e2d029a1c9f74967fd3b7422475529641d31ef5 \
--hash=sha256:ebc4eeb1737a5a9bdb0c24f4c982319fa6edd23cdee27180978c29cbb026f2bd \ --hash=sha256:dddbe2c012d712873fb9f203512db57d3cbdd20803f0792aa01bc513da8a2380 \
--hash=sha256:ee2946042cc7851842d7a086a92b9b7b494cbe8c3e7e4627e27bc912d3a7655e \ --hash=sha256:e9d7e65c2c4f313524399f6b8ec14bfa8f4e9fccd999ff585e10e073cfd21429 \
--hash=sha256:f005245e1cb9b8ca53df73ee85e029ac43155e062405015e49ec6187a2e3fb44 \ --hash=sha256:ec910449c70b0359dbe08a5e8c63678c7ef0113ab61cd0bb2e80ed09ea8ce6ab \
--hash=sha256:f49c5d3c070a72ecb96df703966c9678dda0d4cb2e2736f88d15f5e1203b4159 \ --hash=sha256:ed368ee7b1c119d5f6321cc9a3ea806adacf522bb4c2e9e398cbfc2e2cc68a2a \
--hash=sha256:f61ab84956dc628c8dfe9d105b6aec38afb96adae3e5e7da6085b583ff6ea789 --hash=sha256:faa6d2e6d6d46d2d58c5a4713148300b44fcfc911341ec82d8731488d0757f96
# via -r requirements.in # via -r requirements.in
typing-extensions==4.5.0 \ typing-extensions==4.5.0 \
--hash=sha256:5cb5f4a79139d699607b3ef622a1dedafa84e115ab0024e0d9c044a9479ca7cb \ --hash=sha256:5cb5f4a79139d699607b3ef622a1dedafa84e115ab0024e0d9c044a9479ca7cb \

View file

@@ -146,14 +146,21 @@ class Configurator:
ret, err = self.__check_var(variable) ret, err = self.__check_var(variable)
if ret: if ret:
config[variable] = value config[variable] = value
elif not variable.startswith("PYTHON") and variable not in ( elif (
"GPG_KEY", not variable.startswith("PYTHON")
"LANG", and not variable.startswith("KUBERNETES_SERVICE_")
"PATH", and not variable.startswith("KUBERNETES_PORT_")
"NGINX_VERSION", and not variable.startswith("SVC_")
"NJS_VERSION", and variable
"PKG_RELEASE", not in (
"DOCKER_HOST", "GPG_KEY",
"LANG",
"PATH",
"NGINX_VERSION",
"NJS_VERSION",
"PKG_RELEASE",
"DOCKER_HOST",
)
): ):
self.__logger.warning(f"Ignoring variable {variable} : {err}") self.__logger.warning(f"Ignoring variable {variable} : {err}")
# Expand variables to each sites if MULTISITE=yes and if not present # Expand variables to each sites if MULTISITE=yes and if not present
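The rewritten condition skips variables by prefix (`PYTHON`, `KUBERNETES_SERVICE_`, `KUBERNETES_PORT_`, `SVC_`) as well as by exact name before emitting the "Ignoring variable" warning. A hedged, stdlib-only sketch of just that matching logic (the helper name is illustrative, not the class's API):

```python
# Sketch of the prefix/exact-name filter added to Configurator: variables
# matching either set are skipped silently instead of triggering a warning.
IGNORED_PREFIXES = ("PYTHON", "KUBERNETES_SERVICE_", "KUBERNETES_PORT_", "SVC_")
IGNORED_NAMES = {
    "GPG_KEY", "LANG", "PATH", "NGINX_VERSION",
    "NJS_VERSION", "PKG_RELEASE", "DOCKER_HOST",
}

def should_ignore(variable: str) -> bool:
    # str.startswith accepts a tuple, so one call covers all prefixes
    return variable.startswith(IGNORED_PREFIXES) or variable in IGNORED_NAMES
```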

View file

@@ -171,15 +171,15 @@ packaging==23.1 \
--hash=sha256:994793af429502c4ea2ebf6bf664629d07c1a9fe974af92966e4b8d2df7edc61 \ --hash=sha256:994793af429502c4ea2ebf6bf664629d07c1a9fe974af92966e4b8d2df7edc61 \
--hash=sha256:a392980d2b6cffa644431898be54b0045151319d1e7ec34f0cfed48767dd334f --hash=sha256:a392980d2b6cffa644431898be54b0045151319d1e7ec34f0cfed48767dd334f
# via docker # via docker
pyasn1==0.4.8 \ pyasn1==0.5.0 \
--hash=sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d \ --hash=sha256:87a2121042a1ac9358cabcaf1d07680ff97ee6404333bacca15f76aa8ad01a57 \
--hash=sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba --hash=sha256:97b7290ca68e62a832558ec3976f15cbf911bf5d7c7039d8b861c2a0ece69fde
# via # via
# pyasn1-modules # pyasn1-modules
# rsa # rsa
pyasn1-modules==0.2.8 \ pyasn1-modules==0.3.0 \
--hash=sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e \ --hash=sha256:5bd01446b736eb9d31512a30d46c1ac3395d676c6f3cafa4c03eb54b9925631c \
--hash=sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74 --hash=sha256:d3ccd6ed470d9ffbc716be08bd90efbd44d0734bc9303818f7336070984a162d
# via google-auth # via google-auth
python-dateutil==2.8.2 \ python-dateutil==2.8.2 \
--hash=sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86 \ --hash=sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86 \
@@ -269,7 +269,7 @@ websocket-client==1.5.1 \
# kubernetes # kubernetes
# The following packages are considered to be unsafe in a requirements file: # The following packages are considered to be unsafe in a requirements file:
setuptools==67.6.1 \ setuptools==67.7.1 \
--hash=sha256:257de92a9d50a60b8e22abfcbb771571fde0dbf3ec234463212027a4eeecbe9a \ --hash=sha256:6f0839fbdb7e3cfef1fc38d7954f5c1c26bf4eebb155a55c9bf8faf997b9fb67 \
--hash=sha256:e728ca814a823bf7bf60162daf9db95b93d532948c4c0bea762ce62f60189078 --hash=sha256:bb16732e8eb928922eabaa022f881ae2b7cdcfaf9993ef1f5e841a96d32b8e0c
# via kubernetes # via kubernetes

View file

@@ -1,11 +1,16 @@
from contextlib import suppress from contextlib import suppress
from datetime import datetime from datetime import datetime
from hashlib import sha512 from hashlib import sha512
from inspect import getsourcefile
from json import dumps, loads from json import dumps, loads
from os.path import basename
from pathlib import Path from pathlib import Path
from shutil import copy from sys import _getframe
from threading import Lock
from traceback import format_exc from traceback import format_exc
from typing import Optional, Tuple
lock = Lock()
""" """
{ {
@@ -15,29 +15,46 @@ from traceback import format_exc
""" """
def is_cached_file(file, expire): def is_cached_file(file: str, expire: str, db=None) -> bool:
is_cached = False is_cached = False
cached_file = None
try: try:
if not Path(f"{file}.md").is_file(): if not Path(f"{file}.md").is_file():
return False if not db:
return False
cached_file = db.get_job_cache_file(
basename(getsourcefile(_getframe(1))).replace(".py", ""),
basename(file),
with_info=True,
)
if not cached_file:
return False
cached_time = cached_file.last_update.timestamp()
else:
cached_time = loads(Path(f"{file}.md").read_text())["date"]
cached_time = loads(Path(f"{file}.md").read_text())["date"]
current_time = datetime.now().timestamp() current_time = datetime.now().timestamp()
if current_time < cached_time: if current_time < cached_time:
return False is_cached = False
diff_time = current_time - cached_time else:
if expire == "hour": diff_time = current_time - cached_time
is_cached = diff_time < 3600 if expire == "hour":
elif expire == "day": is_cached = diff_time < 3600
is_cached = diff_time < 86400 elif expire == "day":
elif expire == "month": is_cached = diff_time < 86400
is_cached = diff_time < 2592000 elif expire == "month":
is_cached = diff_time < 2592000
except: except:
is_cached = False is_cached = False
if is_cached and cached_file:
Path(file).write_bytes(cached_file.data)
return is_cached return is_cached
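The reworked `is_cached_file()` keeps the same freshness windows as before: an entry is fresh when its age is under the window named by `expire`, and a timestamp from the future or an unknown keyword marks it stale. A self-contained sketch of just that window check (the helper name is illustrative):

```python
from datetime import datetime
from typing import Optional

# Freshness windows matching the expire values used in is_cached_file()
EXPIRE_WINDOWS = {"hour": 3600, "day": 86400, "month": 2592000}

def is_fresh(cached_time: float, expire: str, now: Optional[float] = None) -> bool:
    now = datetime.now().timestamp() if now is None else now
    if now < cached_time:
        # Timestamp from the future: treat the entry as stale
        return False
    window = EXPIRE_WINDOWS.get(expire)
    return window is not None and (now - cached_time) < window
```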
def file_hash(file): def file_hash(file: str) -> str:
_sha512 = sha512() _sha512 = sha512()
with open(file, "rb") as f: with open(file, "rb") as f:
while True: while True:
@@ -48,19 +70,47 @@ def file_hash(file):
return _sha512.hexdigest() return _sha512.hexdigest()
def cache_hash(cache): def cache_hash(cache: str, db=None) -> Optional[str]:
with suppress(BaseException): with suppress(BaseException):
return loads(Path(f"{cache}.md").read_text())["checksum"] return loads(Path(f"{cache}.md").read_text()).get("checksum", None)
if db:
cached_file = db.get_job_cache_file(
basename(getsourcefile(_getframe(1))).replace(".py", ""),
basename(cache),
with_info=True,
with_data=False,
)
if cached_file:
return cached_file.checksum
return None return None
def cache_file(file, cache, _hash): def cache_file(
file: str, cache: str, _hash: str, db=None, *, service_id: Optional[str] = None
) -> Tuple[bool, str]:
ret, err = True, "success" ret, err = True, "success"
try: try:
copy(file, cache) content = Path(file).read_bytes()
Path(cache).write_bytes(content)
Path(file).unlink() Path(file).unlink()
md = {"date": datetime.timestamp(datetime.now()), "checksum": _hash}
Path(f"{cache}.md").write_text(dumps(md)) if db:
with lock:
err = db.update_job_cache(
basename(getsourcefile(_getframe(1))).replace(".py", ""),
service_id,
basename(cache),
content,
checksum=_hash,
)
if err:
ret = False
else:
Path(f"{cache}.md").write_text(
dumps(dict(date=datetime.now().timestamp(), checksum=_hash))
)
except: except:
return False, f"exception :\n{format_exc()}" return False, f"exception :\n{format_exc()}"
return ret, err return ret, err
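When no database handle is passed, the new `cache_file()` falls back to the filesystem path: move the payload into the cache location and write a JSON `.md` sidecar holding the timestamp and checksum. A self-contained sketch of that branch under those assumptions (the function name is illustrative):

```python
from datetime import datetime
from hashlib import sha512
from json import dumps, loads
from pathlib import Path
from tempfile import TemporaryDirectory

def cache_to_disk(src: Path, cache: Path) -> str:
    # Move the payload into the cache location and record a `<cache>.md`
    # sidecar with the timestamp and sha512 checksum, as the non-DB branch does.
    content = src.read_bytes()
    checksum = sha512(content).hexdigest()
    cache.write_bytes(content)
    src.unlink()
    Path(f"{cache}.md").write_text(
        dumps({"date": datetime.now().timestamp(), "checksum": checksum})
    )
    return checksum

with TemporaryDirectory() as tmp:
    src, cache = Path(tmp) / "list.tmp", Path(tmp) / "list.cache"
    src.write_bytes(b"1.2.3.4\n")
    digest = cache_to_disk(src, cache)
    assert not src.exists() and cache.read_bytes() == b"1.2.3.4\n"
    assert loads(Path(f"{cache}.md").read_text())["checksum"] == digest
```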

View file

@@ -276,6 +276,10 @@ if [ "$dopatch" == "yes" ] ; then
do_and_check_cmd rm -r deps/src/lua-resty-openssl/t do_and_check_cmd rm -r deps/src/lua-resty-openssl/t
fi fi
# lua-ffi-zlib v0.5.0
echo " Downloading lua-ffi-zlib"
git_secure_clone "https://github.com/hamishforbes/lua-ffi-zlib.git" "1fb69ca505444097c82d2b72e87904f3ed923ae9"
# ModSecurity v3.0.9 # ModSecurity v3.0.9
echo " Downloading ModSecurity" echo " Downloading ModSecurity"
dopatch="no" dopatch="no"

View file

@@ -142,6 +142,11 @@ CHANGE_DIR="/tmp/bunkerweb/deps/src/lua-pack" do_and_check_cmd make INST_LIBDIR=
# Installing lua-resty-openssl # Installing lua-resty-openssl
echo " Installing lua-resty-openssl" echo " Installing lua-resty-openssl"
CHANGE_DIR="/tmp/bunkerweb/deps/src/lua-resty-openssl" do_and_check_cmd make LUA_LIB_DIR=/usr/share/bunkerweb/deps/lib/lua install CHANGE_DIR="/tmp/bunkerweb/deps/src/lua-resty-openssl" do_and_check_cmd make LUA_LIB_DIR=/usr/share/bunkerweb/deps/lib/lua install
do_and_check_cmd cp /tmp/bunkerweb/deps/src/lua-resty-openssl/lib/resty/openssl.lua /usr/share/bunkerweb/deps/lib/lua/resty
# Installing lua-ffi-zlib
echo " Installing lua-ffi-zlib"
do_and_check_cmd cp /tmp/bunkerweb/deps/src/lua-ffi-zlib/lib/ffi-zlib.lua /usr/share/bunkerweb/deps/lib/lua
# Compile dynamic modules # Compile dynamic modules
echo " Compiling and installing dynamic modules" echo " Compiling and installing dynamic modules"

View file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2016 Hamish Forbes
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View file

@@ -0,0 +1,133 @@
# lua-ffi-zlib
A [Lua](http://www.lua.org) module using LuaJIT's [FFI](http://luajit.org/ext_ffi.html) feature to access zlib.
Intended primarily for use within [OpenResty](http://openresty.org) to allow manipulation of gzip encoded HTTP responses.
# Methods
Basic methods allowing for simple compression or decompression of gzip data
## inflateGzip
`Syntax: ok, err = inflateGzip(input, output, chunk?, windowBits?)`
* `input` should be a function that accepts a chunksize as its only argument and returns that many bytes of the gzip stream
* `output` will receive a string of decompressed data as its only argument, do with it as you will!
* `chunk` is the size of the input and output buffers, optional and defaults to 16KB
* `windowBits` is passed to `inflateInit2()`; it should be left at the default for most cases.
See [zlib manual](http://zlib.net/manual.html) for details
On error returns `false` and the error message, otherwise `true` and the last status message
## deflateGzip
`Syntax: ok, err = deflateGzip(input, output, chunk?, options?)`
* `input` should be a function that accepts a chunksize as its only argument and returns that many bytes of uncompressed data.
* `output` will receive a string of compressed data as its only argument, do with it as you will!
* `chunk` is the size of the input and output buffers, optional and defaults to 16KB
* `options` is a table of options to pass to `deflateInit2()`
Valid options are level, memLevel, strategy and windowBits, see [zlib manual](http://zlib.net/manual.html) for details
On error returns `false` and the error message, otherwise `true` and the last status message
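For comparison only: the same callback-driven chunking can be mirrored with Python's stdlib `zlib` (this is an analogue, not the Lua API; `wbits=15 + 16` selects the gzip wrapper exactly as the module's `initDeflate` does, and `15 + 32` enables automatic header detection as in `initInflate`):

```python
import io
import zlib

def gzip_compress_chunks(read, write, chunk=16384):
    # Pull chunk-sized pieces from `read`, push compressed pieces to `write`
    co = zlib.compressobj(wbits=15 + 16)  # +16 -> gzip wrapper, not raw zlib
    while data := read(chunk):
        out = co.compress(data)
        if out:
            write(out)
    write(co.flush())

def gzip_decompress_chunks(read, write, chunk=16384):
    do = zlib.decompressobj(wbits=15 + 32)  # +32 -> auto-detect gzip/zlib header
    while data := read(chunk):
        write(do.decompress(data))
    write(do.flush())
```

A round trip through `io.BytesIO` objects plays the role of the file handles in the README's example below.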
# Example
Reads a file and outputs the decompressed version.
Roughly equivalent to running `gzip -dc file.gz | tee out_file`
```lua
local table_insert = table.insert
local table_concat = table.concat
local zlib = require('lib.ffi-zlib')
local f = io.open(arg[1], "rb")
local out_f = io.open(arg[2], "w")
local input = function(bufsize)
-- Read the next chunk
local d = f:read(bufsize)
if d == nil then
return nil
end
return d
end
local output_table = {}
local output = function(data)
table_insert(output_table, data)
local ok, err = out_f:write(data)
if not ok then
-- abort decompression when error occurs
return nil, err
end
end
-- Decompress the data
local ok, err = zlib.inflateGzip(input, output)
if not ok then
print(err)
return
end
local decompressed = table_concat(output_table,'')
print(decompressed)
```
# Advanced Usage
Several other methods are available for advanced usage.
Some of these map directly to functions in the zlib library itself, see the [manual](http://zlib.net/manual.html) for full details.
Others are lower level utility functions.
## createStream
`Syntax: stream, inbuf, outbuf = createStream(bufsize)`
Returns a z_stream struct, input buffer and output buffer of length `bufsize`
## initInflate
`Syntax: ok = initInflate(stream, windowBits?)`
Calls zlib's inflateInit2 with the given stream; `windowBits` defaults to automatic header detection.
## initDeflate
`Syntax: ok = initDeflate(stream, options?)`
Calls zlib's deflateInit2 with the given stream.
`options` is an optional table that can set level, memLevel, strategy and windowBits
## deflate
`Syntax: ok, err = deflate(input, output, bufsize, stream, inbuf, outbuf)`
* `input` is a function that takes a chunk size argument and returns at most that many input bytes
* `output` is a function that takes a string argument of output data
* `bufsize` is the length of the output buffer
* `inbuf` is the cdata input buffer
* `outbuf` is the cdata output buffer
This function will loop until all input data is consumed (`input` returns nil) or an error occurs.
It will then clean up the stream and return an error code.
## inflate
`Syntax: ok, err = inflate(input, output, bufsize, stream, inbuf, outbuf)`
* `input` is a function that takes a chunk size argument and returns at most that many input bytes
* `output` is a function that takes a string argument of output data
* `bufsize` is the length of the output buffer
* `inbuf` is the cdata input buffer
* `outbuf` is the cdata output buffer
This function will loop until all input data is consumed (`input` returns nil) or an error occurs.
It will then clean up the stream and return an error code.
## adler
`Syntax: chksum = adler(str, chksum?)`
Computes an adler32 checksum for a string, updates an existing checksum if provided
## crc
`Syntax: chksum = crc(str, chksum?)`
Computes a crc32 checksum for a string, updates an existing checksum if provided
## zlib_err
`Syntax: err = zlib_err(code)`
Returns the string representation of a zlib error code
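The `chksum?` update pattern above (feed the previous value back in to continue a running checksum) is the same one exposed by Python's stdlib `zlib`, shown here purely as an illustration:

```python
import zlib

def crc_of_chunks(chunks, value=0):
    # Feeding the previous result back in continues the running crc32,
    # mirroring `chksum = crc(str, chksum?)` from the Lua module.
    for chunk in chunks:
        value = zlib.crc32(chunk, value)
    return value
```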

View file

@@ -0,0 +1,9 @@
name=lua-ffi-zlib
abstract=Luajit FFI binding for zlib
author=Hamish Forbes
is_original=yes
license=mit
lib_dir=lib
repo_link=https://github.com/hamishforbes/lua-ffi-zlib
main_module=lib/ffi-zlib.lua
requires = luajit

View file

@@ -0,0 +1,330 @@
local ffi = require "ffi"
local ffi_new = ffi.new
local ffi_str = ffi.string
local ffi_sizeof = ffi.sizeof
local ffi_copy = ffi.copy
local tonumber = tonumber
local _M = {
_VERSION = '0.5.0',
}
local mt = { __index = _M }
ffi.cdef([[
enum {
Z_NO_FLUSH = 0,
Z_PARTIAL_FLUSH = 1,
Z_SYNC_FLUSH = 2,
Z_FULL_FLUSH = 3,
Z_FINISH = 4,
Z_BLOCK = 5,
Z_TREES = 6,
/* Allowed flush values; see deflate() and inflate() below for details */
Z_OK = 0,
Z_STREAM_END = 1,
Z_NEED_DICT = 2,
Z_ERRNO = -1,
Z_STREAM_ERROR = -2,
Z_DATA_ERROR = -3,
Z_MEM_ERROR = -4,
Z_BUF_ERROR = -5,
Z_VERSION_ERROR = -6,
/* Return codes for the compression/decompression functions. Negative values
* are errors, positive values are used for special but normal events.
*/
Z_NO_COMPRESSION = 0,
Z_BEST_SPEED = 1,
Z_BEST_COMPRESSION = 9,
Z_DEFAULT_COMPRESSION = -1,
/* compression levels */
Z_FILTERED = 1,
Z_HUFFMAN_ONLY = 2,
Z_RLE = 3,
Z_FIXED = 4,
Z_DEFAULT_STRATEGY = 0,
/* compression strategy; see deflateInit2() below for details */
Z_BINARY = 0,
Z_TEXT = 1,
Z_ASCII = Z_TEXT, /* for compatibility with 1.2.2 and earlier */
Z_UNKNOWN = 2,
/* Possible values of the data_type field (though see inflate()) */
Z_DEFLATED = 8,
/* The deflate compression method (the only one supported in this version) */
Z_NULL = 0, /* for initializing zalloc, zfree, opaque */
};
typedef void* (* z_alloc_func)( void* opaque, unsigned items, unsigned size );
typedef void (* z_free_func) ( void* opaque, void* address );
typedef struct z_stream_s {
char* next_in;
unsigned avail_in;
unsigned long total_in;
char* next_out;
unsigned avail_out;
unsigned long total_out;
char* msg;
void* state;
z_alloc_func zalloc;
z_free_func zfree;
void* opaque;
int data_type;
unsigned long adler;
unsigned long reserved;
} z_stream;
const char* zlibVersion();
const char* zError(int);
int inflate(z_stream*, int flush);
int inflateEnd(z_stream*);
int inflateInit2_(z_stream*, int windowBits, const char* version, int stream_size);
int deflate(z_stream*, int flush);
int deflateEnd(z_stream* );
int deflateInit2_(z_stream*, int level, int method, int windowBits, int memLevel,int strategy, const char *version, int stream_size);
unsigned long adler32(unsigned long adler, const char *buf, unsigned len);
unsigned long crc32(unsigned long crc, const char *buf, unsigned len);
unsigned long adler32_combine(unsigned long, unsigned long, long);
unsigned long crc32_combine(unsigned long, unsigned long, long);
]])
local zlib = ffi.load(ffi.os == "Windows" and "zlib1" or "z")
_M.zlib = zlib
-- Default to 16k output buffer
local DEFAULT_CHUNK = 16384
local Z_OK = zlib.Z_OK
local Z_NO_FLUSH = zlib.Z_NO_FLUSH
local Z_STREAM_END = zlib.Z_STREAM_END
local Z_FINISH = zlib.Z_FINISH
local Z_NEED_DICT = zlib.Z_NEED_DICT
local Z_BUF_ERROR = zlib.Z_BUF_ERROR
local Z_STREAM_ERROR = zlib.Z_STREAM_ERROR
local function zlib_err(err)
return ffi_str(zlib.zError(err))
end
_M.zlib_err = zlib_err
local function createStream(bufsize)
-- Setup Stream
local stream = ffi_new("z_stream")
-- Create input buffer var
local inbuf = ffi_new('char[?]', bufsize+1)
stream.next_in, stream.avail_in = inbuf, 0
-- create the output buffer
local outbuf = ffi_new('char[?]', bufsize)
stream.next_out, stream.avail_out = outbuf, 0
return stream, inbuf, outbuf
end
_M.createStream = createStream
local function initInflate(stream, windowBits)
-- Setup inflate process
local windowBits = windowBits or (15 + 32) -- +32 sets automatic header detection
local version = ffi_str(zlib.zlibVersion())
return zlib.inflateInit2_(stream, windowBits, version, ffi_sizeof(stream))
end
_M.initInflate = initInflate
local function initDeflate(stream, options)
-- Setup deflate process
local method = zlib.Z_DEFLATED
local level = options.level or zlib.Z_DEFAULT_COMPRESSION
local memLevel = options.memLevel or 8
local strategy = options.strategy or zlib.Z_DEFAULT_STRATEGY
local windowBits = options.windowBits or (15 + 16) -- +16 sets gzip wrapper not zlib
local version = ffi_str(zlib.zlibVersion())
return zlib.deflateInit2_(stream, level, method, windowBits, memLevel, strategy, version, ffi_sizeof(stream))
end
_M.initDeflate = initDeflate
local function flushOutput(stream, bufsize, output, outbuf)
-- Calculate available output bytes
local out_sz = bufsize - stream.avail_out
if out_sz == 0 then
return
end
-- Read bytes from output buffer and pass to output function
local ok, err = output(ffi_str(outbuf, out_sz))
if not ok then
return err
end
end
local function inflate(input, output, bufsize, stream, inbuf, outbuf)
local zlib_flate = zlib.inflate
local zlib_flateEnd = zlib.inflateEnd
-- Inflate a stream
local err = 0
repeat
-- Read some input
local data = input(bufsize)
if data ~= nil then
ffi_copy(inbuf, data)
stream.next_in, stream.avail_in = inbuf, #data
else
-- no more input data
stream.avail_in = 0
end
if stream.avail_in == 0 then
-- When decompressing we *must* have input bytes
zlib_flateEnd(stream)
return false, "INFLATE: Data error, no input bytes"
end
-- While the output buffer is being filled completely just keep going
repeat
stream.next_out = outbuf
stream.avail_out = bufsize
-- Process the stream, always Z_NO_FLUSH in inflate mode
err = zlib_flate(stream, Z_NO_FLUSH)
-- Buffer errors are OK here
if err == Z_BUF_ERROR then
err = Z_OK
end
if err < Z_OK or err == Z_NEED_DICT then
-- Error, clean up and return
zlib_flateEnd(stream)
return false, "INFLATE: "..zlib_err(err), stream
end
-- Write the data out
local err = flushOutput(stream, bufsize, output, outbuf)
if err then
zlib_flateEnd(stream)
return false, "INFLATE: "..err
end
until stream.avail_out ~= 0
until err == Z_STREAM_END
-- Stream finished, clean up and return
zlib_flateEnd(stream)
return true, zlib_err(err)
end
_M.inflate = inflate
local function deflate(input, output, bufsize, stream, inbuf, outbuf)
local zlib_flate = zlib.deflate
local zlib_flateEnd = zlib.deflateEnd
-- Deflate a stream
local err = 0
local mode = Z_NO_FLUSH
repeat
-- Read some input
local data = input(bufsize)
if data ~= nil then
ffi_copy(inbuf, data)
stream.next_in, stream.avail_in = inbuf, #data
else
-- EOF, try and finish up
mode = Z_FINISH
stream.avail_in = 0
end
-- While the output buffer is being filled completely just keep going
repeat
stream.next_out = outbuf
stream.avail_out = bufsize
-- Process the stream
err = zlib_flate(stream, mode)
-- Only possible *bad* return value here
if err == Z_STREAM_ERROR then
-- Error, clean up and return
zlib_flateEnd(stream)
return false, "DEFLATE: "..zlib_err(err), stream
end
-- Write the data out
local err = flushOutput(stream, bufsize, output, outbuf)
if err then
zlib_flateEnd(stream)
return false, "DEFLATE: "..err
end
until stream.avail_out ~= 0
-- In deflate mode all input must be used by this point
if stream.avail_in ~= 0 then
zlib_flateEnd(stream)
return false, "DEFLATE: Input not used"
end
until err == Z_STREAM_END
-- Stream finished, clean up and return
zlib_flateEnd(stream)
return true, zlib_err(err)
end
_M.deflate = deflate
local function adler(str, chksum)
local chksum = chksum or 0
local str = str or ""
return zlib.adler32(chksum, str, #str)
end
_M.adler = adler
local function crc(str, chksum)
local chksum = chksum or 0
local str = str or ""
return zlib.crc32(chksum, str, #str)
end
_M.crc = crc
function _M.inflateGzip(input, output, bufsize, windowBits)
local bufsize = bufsize or DEFAULT_CHUNK
-- Takes 2 functions that provide input data from a gzip stream and receives output data
-- Returns uncompressed string
local stream, inbuf, outbuf = createStream(bufsize)
local init = initInflate(stream, windowBits)
if init == Z_OK then
return inflate(input, output, bufsize, stream, inbuf, outbuf)
else
-- Init error
zlib.inflateEnd(stream)
return false, "INIT: "..zlib_err(init)
end
end
function _M.deflateGzip(input, output, bufsize, options)
local bufsize = bufsize or DEFAULT_CHUNK
options = options or {}
-- Takes 2 functions that provide plain input data and receives output data
-- Returns gzip compressed string
local stream, inbuf, outbuf = createStream(bufsize)
local init = initDeflate(stream, options)
if init == Z_OK then
return deflate(input, output, bufsize, stream, inbuf, outbuf)
else
-- Init error
zlib.deflateEnd(stream)
return false, "INIT: "..zlib_err(init)
end
end
function _M.version()
return ffi_str(zlib.zlibVersion())
end
return _M

View file

@@ -0,0 +1,20 @@
package = "lua-ffi-zlib"
version = "0.4-0"
source = {
url = "git://github.com/hamishforbes/lua-ffi-zlib",
tag = "v0.4"
}
description = {
summary = "A Lua module using LuaJIT's FFI feature to access zlib.",
homepage = "https://github.com/hamishforbes/lua-ffi-zlib",
maintainer = "Hamish Forbes"
}
dependencies = {
"lua >= 5.1",
}
build = {
type = "builtin",
modules = {
["ffi-zlib"] = "lib/ffi-zlib.lua",
}
}


@@ -0,0 +1,20 @@
package = "lua-ffi-zlib"
version = "0.5-0"
source = {
url = "git://github.com/hamishforbes/lua-ffi-zlib",
tag = "v0.5"
}
description = {
summary = "A Lua module using LuaJIT's FFI feature to access zlib.",
homepage = "https://github.com/hamishforbes/lua-ffi-zlib",
maintainer = "Hamish Forbes"
}
dependencies = {
"lua >= 5.1",
}
build = {
type = "builtin",
modules = {
["ffi-zlib"] = "lib/ffi-zlib.lua",
}
}


@@ -0,0 +1,145 @@
local table_insert = table.insert
local table_concat = table.concat
local zlib = require('lib.ffi-zlib')
local chunk = tonumber(arg[2]) or 16384
local uncompressed = ''
local input
local f
local passing = true
local in_adler
local out_adler
local in_crc
local out_crc
if arg[1] == nil then
print("No file provided")
return
else
f = io.open(arg[1], "rb")
input = function(bufsize)
local d = f:read(bufsize)
if d == nil then
return nil
end
in_crc = zlib.crc(d, in_crc)
in_adler = zlib.adler(d, in_adler)
uncompressed = uncompressed..d
return d
end
end
print('zlib version: '..zlib.version())
print()
local output_table = {}
local output = function(data)
out_crc = zlib.crc(data, out_crc)
out_adler = zlib.adler(data, out_adler)
table_insert(output_table, data)
end
-- Compress the data
print('Compressing')
local ok, err = zlib.deflateGzip(input, output, chunk)
if not ok then
-- Err message
print(err)
end
local compressed = table_concat(output_table,'')
local orig_in_crc = in_crc
local orig_in_adler = in_adler
print('Input crc32: ', in_crc)
print('Output crc32: ', out_crc)
print('Input adler32: ', in_adler)
print('Output adler32: ', out_adler)
-- Decompress it again
print()
print('Decompressing')
-- Reset vars
in_adler = nil
out_adler = nil
in_crc = nil
out_crc = nil
output_table = {}
local count = 0
local input = function(bufsize)
local start = count > 0 and bufsize*count or 1
local finish = (bufsize*(count+1)-1)
count = count + 1
if bufsize == 1 then
start = count
finish = count
end
local data = compressed:sub(start, finish)
in_crc = zlib.crc(data, in_crc)
in_adler = zlib.adler(data, in_adler)
return data
end
local ok, err = zlib.inflateGzip(input, output, chunk)
if not ok then
-- Err message
print(err)
end
local output_data = table_concat(output_table,'')
print('Input crc32: ', in_crc)
print('Output crc32: ', out_crc)
print('Input adler32: ', in_adler)
print('Output adler32: ', out_adler)
print()
if output_data ~= uncompressed then
passing = false
print("inflateGzip / deflateGzip failed")
end
if orig_in_adler ~= out_adler then
passing = false
print("Adler checksum failed")
end
if orig_in_crc ~= out_crc then
passing = false
print("CRC checksum failed")
end
local bad_output = function(data)
return nil, "bad output"
end
if not passing then
print(":(")
else
print(":)")
end
local dump_input = function(bufsize)
return compressed
end
local ok, err = zlib.deflateGzip(dump_input, bad_output, chunk)
if not ok then
if err ~= "DEFLATE: bad output" then
print(err)
else
print("abort deflation: ok")
end
end
local ok, err = zlib.inflateGzip(dump_input, bad_output, chunk)
if not ok then
if err ~= "INFLATE: bad output" then
print(err)
else
print("abort inflation: ok")
end
end


@@ -63,6 +63,9 @@ RUN apk add --no-cache bash libgcc libstdc++ openssl && \
     ln -s /proc/1/fd/1 /var/log/letsencrypt/letsencrypt.log && \
     chmod 660 /usr/share/bunkerweb/INTEGRATION
+
+# Fix CVEs
+RUN apk add "libcrypto3>=3.0.8-r4" "libssl3>=3.0.8-r4"
 VOLUME /data /etc/nginx
 WORKDIR /usr/share/bunkerweb/scheduler


@@ -106,7 +106,7 @@ def generate_custom_configs(
             Path(dirname(tmp_path)).mkdir(parents=True, exist_ok=True)
             Path(tmp_path).write_bytes(custom_config["data"])
-    if integration not in ("Autoconf", "Swarm", "Kubernetes", "Docker"):
+    if integration in ("Autoconf", "Swarm", "Kubernetes", "Docker"):
         logger.info("Sending custom configs to BunkerWeb")
         ret = api_caller._send_files("/data/configs", "/custom_configs")
@@ -137,7 +137,7 @@ def generate_external_plugins(
             st = stat(job_file)
             chmod(job_file, st.st_mode | S_IEXEC)
-    if integration not in ("Autoconf", "Swarm", "Kubernetes", "Docker"):
+    if integration in ("Autoconf", "Swarm", "Kubernetes", "Docker"):
         logger.info("Sending plugins to BunkerWeb")
         ret = api_caller._send_files("/data/plugins", "/plugins")
@@ -461,7 +461,12 @@ if __name__ == "__main__":
                 # reload nginx
                 logger.info("Reloading nginx ...")
-                if integration not in ("Autoconf", "Swarm", "Kubernetes", "Docker"):
+                if integration not in (
+                    "Autoconf",
+                    "Swarm",
+                    "Kubernetes",
+                    "Docker",
+                ):
                     # Reloading the nginx server.
                     proc = subprocess_run(
                         # Reload nginx


@@ -254,9 +254,9 @@ urllib3==1.26.15 \
     # via requests

 # The following packages are considered to be unsafe in a requirements file:
-setuptools==67.6.1 \
-    --hash=sha256:257de92a9d50a60b8e22abfcbb771571fde0dbf3ec234463212027a4eeecbe9a \
-    --hash=sha256:e728ca814a823bf7bf60162daf9db95b93d532948c4c0bea762ce62f60189078
+setuptools==67.7.1 \
+    --hash=sha256:6f0839fbdb7e3cfef1fc38d7954f5c1c26bf4eebb155a55c9bf8faf997b9fb67 \
+    --hash=sha256:bb16732e8eb928922eabaa022f881ae2b7cdcfaf9993ef1f5e841a96d32b8e0c
     # via
     #   acme
     #   certbot


@@ -49,6 +49,9 @@ RUN apk add --no-cache bash && \
     chmod 750 /usr/share/bunkerweb/gen/*.py /usr/share/bunkerweb/ui/*.py /usr/share/bunkerweb/ui/src/*.py /usr/share/bunkerweb/deps/python/bin/* && \
     chmod 660 /usr/share/bunkerweb/INTEGRATION
+
+# Fix CVEs
+RUN apk add "libcrypto3>=3.0.8-r4" "libssl3>=3.0.8-r4"
 VOLUME /data /etc/nginx
 EXPOSE 7000


@@ -1,6 +1,6 @@
 #
-# This file is autogenerated by pip-compile with python 3.10
-# To update, run:
+# This file is autogenerated by pip-compile with Python 3.9
+# by the following command:
 #
 #    pip-compile --allow-unsafe --generate-hashes --resolver=backtracking
 #
@@ -170,6 +170,10 @@ gunicorn==20.1.0 \
     --hash=sha256:9dcc4547dbb1cb284accfb15ab5667a0e5d1881cc443e0677b4882a4067a807e \
     --hash=sha256:e0a968b5ba15f8a328fdfd7ab1fcb5af4470c28aaf7e55df02a99bc13138e6e8
     # via -r requirements.in
+importlib-metadata==6.6.0 \
+    --hash=sha256:43dd286a2cd8995d5eaef7fee2066340423b818ed3fd70adf0bad5f1fac53fed \
+    --hash=sha256:92501cdf9cc66ebd3e612f1b4f0c0765dfa42f0fa38ffb319b6bd84dd675d705
+    # via flask
 itsdangerous==2.1.2 \
     --hash=sha256:2c2349112351b88699d8d4b6b075022c0808887cb7ad10069318a8b0bc88db44 \
     --hash=sha256:5dbbc68b317e5e42f327f9021763545dc3fc3bfe22e6deb96aaf1fc38874156a
@@ -243,9 +247,9 @@ six==1.16.0 \
     --hash=sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926 \
     --hash=sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254
     # via python-dateutil
-soupsieve==2.4 \
-    --hash=sha256:49e5368c2cda80ee7e84da9dbe3e110b70a4575f196efb74e51b94549d921955 \
-    --hash=sha256:e28dba9ca6c7c00173e34e4ba57448f0688bb681b7c5e8bf4971daafc093d69a
+soupsieve==2.4.1 \
+    --hash=sha256:1c1bfee6819544a3447586c889157365a27e10d88cde3ad3da0cf0ddf646feb8 \
+    --hash=sha256:89d12b2d5dfcd2c9e8c22326da9d9aa9cb3dfab0a83a024f05704076ee8d35ea
     # via beautifulsoup4
 werkzeug==2.2.3 \
     --hash=sha256:2e1ccc9417d4da358b9de6f174e3ac094391ea1d4fbef2d667865d819dfd0afe \
@@ -257,6 +261,10 @@ wtforms==3.0.1 \
     --hash=sha256:6b351bbb12dd58af57ffef05bc78425d08d1914e0fd68ee14143b7ade023c5bc \
     --hash=sha256:837f2f0e0ca79481b92884962b914eba4e72b7a2daaf1f939c890ed0124b834b
     # via flask-wtf
+zipp==3.15.0 \
+    --hash=sha256:112929ad649da941c23de50f356a2b5570c954b65150642bccdd66bf194d224b \
+    --hash=sha256:48904fc76a60e542af151aded95726c1a5c34ed43ab4134b597665c86d7ad556
+    # via importlib-metadata
 zope-event==4.6 \
     --hash=sha256:73d9e3ef750cca14816a9c322c7250b0d7c9dbc337df5d1b807ff8d3d0b9e97c \
     --hash=sha256:81d98813046fc86cc4136e3698fee628a3282f9c320db18658c21749235fce80
@@ -295,9 +303,9 @@ zope-interface==6.0 \
     # via gevent

 # The following packages are considered to be unsafe in a requirements file:
-setuptools==67.6.1 \
-    --hash=sha256:257de92a9d50a60b8e22abfcbb771571fde0dbf3ec234463212027a4eeecbe9a \
-    --hash=sha256:e728ca814a823bf7bf60162daf9db95b93d532948c4c0bea762ce62f60189078
+setuptools==67.7.1 \
+    --hash=sha256:6f0839fbdb7e3cfef1fc38d7954f5c1c26bf4eebb155a55c9bf8faf997b9fb67 \
+    --hash=sha256:bb16732e8eb928922eabaa022f881ae2b7cdcfaf9993ef1f5e841a96d32b8e0c
     # via
     #   gevent
     #   gunicorn


@@ -28,14 +28,14 @@ class Config:
             self.__logger.warning(
                 "Database is not initialized, retrying in 5s ...",
             )
-            sleep(3)
+            sleep(5)
             env = self.__db.get_config()
         while not self.__db.is_first_config_saved() or not env:
             self.__logger.warning(
                 "Database doesn't have any config saved yet, retrying in 5s ...",
             )
-            sleep(3)
+            sleep(5)
             env = self.__db.get_config()
         self.__logger.info("Database is ready")


@ -1,2 +1,2 @@
selenium==4.8.3 selenium==4.9.0
requests==2.28.2 requests==2.28.2