Merge pull request #509 from bunkerity/dev

Merge branch "dev" into branch "staging"
Théophile Diot 2023-06-02 09:51:03 -04:00 committed by GitHub
commit 4bbddf7975
104 changed files with 1606 additions and 860 deletions

View File

@ -1,5 +1,40 @@
# Changelog
## v1.5.1 -
- [BUGFIX] New version checker in logs displays "404 not found"
- [BUGFIX] New version checker in UI
- [BUGFIX] Only get the right keys from plugin.json files when importing plugins
- [BUGFIX] Remove external resources for Google fonts in UI
- [BUGFIX] Support multiple plugin uploads in one zip when using the UI
- [BUGFIX] Variable being ignored instead of saved in the database when value is empty
- [BUGFIX] ALLOWED_METHODS regex working with LOCK/UNLOCK methods
- [BUGFIX] Custom certificate bug after the refactoring
- [BUGFIX] Fix wrong variables in header phase (fix CORS feature too)
- [PERFORMANCE] Reduce CPU usage of scheduler
- [FEATURE] Add Turnstile antibot mode
- [MISC] Add LOG_LEVEL=warning for docker socket proxy in docs, examples and boilerplates
- [MISC] Temporarily remove VMware provider for Vagrant integration
## v1.5.0 - 2023/05/23
- Refactoring of almost all the components of the project
- Dedicated scheduler service to manage jobs and configuration
- Store configuration in a database backend
- Improved web UI and made it work with all integrations
- Improved internal LUA code
- Improved internal cache of BW
- Add Redis support when using clustered integrations
- Add RHEL integration
- Add Vagrant integration
- Initial support for generic TCP/UDP (stream)
- Initial support for IPv6
- Improved CI/CD: UI tests, core tests and release automation
- Reduced Docker image size
- Fixed and improved core plugins: antibot, cors, dnsbl, ...
- Use PCRE regex instead of LUA patterns
- Connectivity tests at startup/reload with logging
## v1.5.0-beta - 2023/05/02
- Refactoring of almost all the components of the project

View File

@ -247,8 +247,7 @@ You will find more information in the [Ansible section](https://docs.bunkerweb.i
We maintain ready-to-use Vagrant boxes hosted on Vagrant Cloud for the following providers:
- vmware_desktop
- virtualbox
- libvirt
You will find more information in the [Vagrant section](https://docs.bunkerweb.io/1.5.0/integrations/#vagrant) of the documentation.
@ -304,13 +303,14 @@ BunkerWeb comes with a plugin system to make it possible to easily add new featu
Here is the list of "official" plugins that we maintain (see the [bunkerweb-plugins](https://github.com/bunkerity/bunkerweb-plugins) repository for more information):
| Name | Version | Description | Link |
| :------------: | :-----: | :------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------: |
| **ClamAV** | 0.1 | Automatically scans uploaded files with the ClamAV antivirus engine and denies the request when a file is detected as malicious. | [bunkerweb-plugins/clamav](https://github.com/bunkerity/bunkerweb-plugins/tree/main/clamav) |
| **CrowdSec** | 0.1 | CrowdSec bouncer for BunkerWeb. | [bunkerweb-plugins/crowdsec](https://github.com/bunkerity/bunkerweb-plugins/tree/main/crowdsec) |
| **Discord** | 0.1 | Send security notifications to a Discord channel using a Webhook. | [bunkerweb-plugins/discord](https://github.com/bunkerity/bunkerweb-plugins/tree/main/discord) |
| **Slack** | 0.1 | Send security notifications to a Slack channel using a Webhook. | [bunkerweb-plugins/slack](https://github.com/bunkerity/bunkerweb-plugins/tree/main/slack) |
| **VirusTotal** | 0.1 | Automatically scans uploaded files with the VirusTotal API and denies the request when a file is detected as malicious. | [bunkerweb-plugins/virustotal](https://github.com/bunkerity/bunkerweb-plugins/tree/main/virustotal) |
| Name | Version | Description | Link |
| :------------: | :-----: | :------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------: |
| **ClamAV** | 1.0 | Automatically scans uploaded files with the ClamAV antivirus engine and denies the request when a file is detected as malicious. | [bunkerweb-plugins/clamav](https://github.com/bunkerity/bunkerweb-plugins/tree/main/clamav) |
| **CrowdSec** | 1.0 | CrowdSec bouncer for BunkerWeb. | [bunkerweb-plugins/crowdsec](https://github.com/bunkerity/bunkerweb-plugins/tree/main/crowdsec) |
| **Discord** | 1.0 | Send security notifications to a Discord channel using a Webhook. | [bunkerweb-plugins/discord](https://github.com/bunkerity/bunkerweb-plugins/tree/main/discord) |
| **Slack** | 1.0 | Send security notifications to a Slack channel using a Webhook. | [bunkerweb-plugins/slack](https://github.com/bunkerity/bunkerweb-plugins/tree/main/slack) |
| **VirusTotal** | 1.0 | Automatically scans uploaded files with the VirusTotal API and denies the request when a file is detected as malicious. | [bunkerweb-plugins/virustotal](https://github.com/bunkerity/bunkerweb-plugins/tree/main/virustotal) |
| **Coraza**     | 0.1     | Inspect requests using the Coraza WAF (alternative to ModSecurity).                                                               | [bunkerweb-plugins/coraza](https://github.com/bunkerity/bunkerweb-plugins/tree/main/coraza)            |
You will find more information in the [plugins section](https://docs.bunkerweb.io/1.5.0/plugins) of the documentation.

View File

@ -1231,7 +1231,6 @@ Configuration of BunkerWeb is done by using specific role variables :
-->
List of supported providers:
- vmware_desktop
- virtualbox
- libvirt
@ -1243,10 +1242,10 @@ Similar to other BunkerWeb integrations, the Vagrant setup uses **NGINX version
By using the provided Vagrant box based on Ubuntu 22.04 "Jammy", you benefit from a well-configured and integrated setup, allowing you to focus on developing and securing your applications with BunkerWeb without worrying about the underlying infrastructure.
Here are the steps to install BunkerWeb using Vagrant on Ubuntu with the supported virtualization providers (VirtualBox, VMware, and libvirt):
Here are the steps to install BunkerWeb using Vagrant on Ubuntu with the supported virtualization providers (VirtualBox and libvirt):
1. Make sure you have Vagrant and one of the supported virtualization providers (VirtualBox, VMware, or libvirt) installed on your system.
1. Make sure you have Vagrant and one of the supported virtualization providers (VirtualBox or libvirt) installed on your system.
2. There are two ways to install the Vagrant box with BunkerWeb: either by using a provided Vagrantfile to configure your virtual machine or by creating a new box based on the existing BunkerWeb Vagrant box, offering you flexibility in how you set up your development environment.
=== "Vagrantfile"
@ -1259,7 +1258,6 @@ Here are the steps to install BunkerWeb using Vagrant on Ubuntu with the support
Depending on the virtualization provider you choose, you may need to install additional plugins:
* For **VMware**, install the `vagrant-vmware-desktop` plugin. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/providers).
* For **libvirt**, install the `vagrant-libvirt` plugin. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/providers).
* For **VirtualBox**, install the `vagrant-vbguest` plugin. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/providers).
@ -1271,14 +1269,13 @@ Here are the steps to install BunkerWeb using Vagrant on Ubuntu with the support
Depending on the virtualization provider you choose, you may need to install additional plugins:
* For **VMware**, install the `vagrant-vmware-desktop` plugin. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/providers).
* For **libvirt**, install the `vagrant-libvirt` plugin. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/providers).
* For **VirtualBox**, install the `vagrant-vbguest` plugin. For more information, see the [Vagrant documentation](https://www.vagrantup.com/docs/providers).
After installing the necessary plugins for your chosen virtualization provider, run the following command to start the virtual machine and install BunkerWeb:
```shell
vagrant up --provider=virtualbox # or --provider=vmware_desktop or --provider=libvirt
vagrant up --provider=virtualbox # or --provider=libvirt
```
Finally, to access the virtual machine using SSH, execute the following command:
@ -1298,9 +1295,6 @@ Vagrant.configure("2") do |config|
# Uncomment the desired virtualization provider
# For VirtualBox (default)
config.vm.provider "virtualbox"
# For VMware
# config.vm.provider "vmware_desktop" # Windows
# config.vm.provider "vmware_workstation" # Linux
# For libvirt
# config.vm.provider "libvirt"
end

View File

@ -13,7 +13,7 @@ Here is the list of "official" plugins that we maintain (see the [bunkerweb-plug
| **Discord** | 1.0 | Send security notifications to a Discord channel using a Webhook. | [bunkerweb-plugins/discord](https://github.com/bunkerity/bunkerweb-plugins/tree/main/discord) |
| **Slack** | 1.0 | Send security notifications to a Slack channel using a Webhook. | [bunkerweb-plugins/slack](https://github.com/bunkerity/bunkerweb-plugins/tree/main/slack) |
| **VirusTotal** | 1.0 | Automatically scans uploaded files with the VirusTotal API and denies the request when a file is detected as malicious. | [bunkerweb-plugins/virustotal](https://github.com/bunkerity/bunkerweb-plugins/tree/main/virustotal) |
| **Coraza** | 1.0 | Inspect requests using a Core Rule Set and deny malicious ones. | [bunkerweb-plugins/coraza](https://github.com/bunkerity/bunkerweb-plugins/tree/main/coraza) |
| **Coraza** | 0.1 | Inspect requests using a Core Rule Set and deny malicious ones. | [bunkerweb-plugins/coraza](https://github.com/bunkerity/bunkerweb-plugins/tree/main/coraza) |
## How to use a plugin

View File

@ -1,5 +1,5 @@
mkdocs==1.4.3
mkdocs-material==9.1.13
mkdocs-material==9.1.15
pytablewriter==0.64.2
mike==1.1.2
jinja2<3.1.0

View File

@ -3,7 +3,7 @@
from os import getenv
from threading import Lock
from time import sleep
from typing import Literal, Optional, Union
from typing import Optional
from ConfigCaller import ConfigCaller # type: ignore
from Database import Database # type: ignore
@ -11,13 +11,8 @@ from logger import setup_logger # type: ignore
class Config(ConfigCaller):
def __init__(
self,
ctrl_type: Union[Literal["docker"], Literal["swarm"], Literal["kubernetes"]],
lock: Optional[Lock] = None,
):
def __init__(self, lock: Optional[Lock] = None):
super().__init__()
self.__ctrl_type = ctrl_type
self.__lock = lock
self.__logger = setup_logger("Config", getenv("LOG_LEVEL", "INFO"))
self.__instances = []

View File

@ -11,12 +11,13 @@ from Config import Config
from logger import setup_logger # type: ignore
class Controller(ABC):
class Controller(Config):
def __init__(
self,
ctrl_type: Union[Literal["docker"], Literal["swarm"], Literal["kubernetes"]],
lock: Optional[Lock] = None,
):
super().__init__(lock)
self._type = ctrl_type
self._instances = []
self._services = []
@ -32,15 +33,16 @@ class Controller(ABC):
self._configs = {
config_type: {} for config_type in self._supported_config_types
}
self._config = Config(ctrl_type, lock)
self.__logger = setup_logger("Controller", getenv("LOG_LEVEL", "INFO"))
self._logger = setup_logger(
f"{self._type}-controller", getenv("LOG_LEVEL", "INFO")
)
def wait(self, wait_time: int) -> list:
all_ready = False
while not all_ready:
self._instances = self.get_instances()
if not self._instances:
self.__logger.warning(
self._logger.warning(
f"No instance found, waiting {wait_time}s ...",
)
sleep(wait_time)
@ -48,7 +50,7 @@ class Controller(ABC):
all_ready = True
for instance in self._instances:
if not instance["health"]:
self.__logger.warning(
self._logger.warning(
f"Instance {instance['name']} is not ready, waiting {wait_time}s ...",
)
sleep(wait_time)
@ -83,10 +85,10 @@ class Controller(ABC):
pass
def _set_autoconf_load_db(self):
if not self._config._db.is_autoconf_loaded():
ret = self._config._db.set_autoconf_load(True)
if not self._db.is_autoconf_loaded():
ret = self._db.set_autoconf_load(True)
if ret:
self.__logger.warning(
self._logger.warning(
f"Can't set autoconf loaded metadata to true in database: {ret}",
)
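
With this refactor the autoconf classes move from composition (each controller holding a `self._config = Config(...)`) to inheritance: every concrete controller is a `Controller`, which is now itself a `Config`, and all of them share one protected `_logger`. A condensed, hypothetical sketch of the new shape (the real classes carry far more state):

```python
import logging
from threading import Lock
from typing import Optional


class Config:  # stand-in for the real ConfigCaller/Config base classes
    def __init__(self, lock: Optional[Lock] = None):
        self.__lock = lock  # kept for parity with the diff above


class Controller(Config):
    def __init__(self, ctrl_type: str, lock: Optional[Lock] = None):
        super().__init__(lock)  # Config no longer takes a controller type
        self._type = ctrl_type
        # a single protected logger replaces the private __logger copies
        # that were previously duplicated in every controller
        self._logger = logging.getLogger(f"{self._type}-controller")


class ExampleController(Controller):  # hypothetical; cf. DockerController
    def __init__(self):
        super().__init__("example")  # mirrors super().__init__("docker")


ExampleController()._logger.warning("No instance found, waiting 5s ...")
```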

View File

@ -1,6 +1,5 @@
#!/usr/bin/python3
from os import getenv
from typing import Any, Dict, List
from docker import DockerClient
from re import compile as re_compile
@ -8,16 +7,12 @@ from traceback import format_exc
from docker.models.containers import Container
from Controller import Controller
from ConfigCaller import ConfigCaller # type: ignore
from logger import setup_logger # type: ignore
class DockerController(Controller, ConfigCaller):
class DockerController(Controller):
def __init__(self, docker_host):
Controller.__init__(self, "docker")
ConfigCaller.__init__(self)
super().__init__("docker")
self.__client = DockerClient(base_url=docker_host)
self.__logger = setup_logger("docker-controller", getenv("LOG_LEVEL", "INFO"))
self.__custom_confs_rx = re_compile(
r"^bunkerweb.CUSTOM_CONF_(SERVER_HTTP|MODSEC_CRS|MODSEC)_(.+)$"
)
@ -111,9 +106,7 @@ class DockerController(Controller, ConfigCaller):
return configs
def apply_config(self) -> bool:
return self._config.apply(
self._instances, self._services, configs=self._configs
)
return self.apply(self._instances, self._services, configs=self._configs)
def process_events(self):
self._set_autoconf_load_db()
@ -122,27 +115,22 @@ class DockerController(Controller, ConfigCaller):
self._instances = self.get_instances()
self._services = self.get_services()
self._configs = self.get_configs()
if not self._config.update_needed(
if not self.update_needed(
self._instances, self._services, configs=self._configs
):
continue
self.__logger.info(
self._logger.info(
"Caught Docker event, deploying new configuration ..."
)
if not self.apply_config():
self.__logger.error("Error while deploying new configuration")
self._logger.error("Error while deploying new configuration")
else:
self.__logger.info(
self._logger.info(
"Successfully deployed new configuration 🚀",
)
if not self._config._db.is_autoconf_loaded():
ret = self._config._db.set_autoconf_load(True)
if ret:
self.__logger.warning(
f"Can't set autoconf loaded metadata to true in database: {ret}",
)
self._set_autoconf_load_db()
except:
self.__logger.error(
self._logger.error(
f"Exception while processing events :\n{format_exc()}"
)

View File

@ -8,6 +8,9 @@ RUN mkdir -p /usr/share/bunkerweb/deps && \
cat /tmp/req/requirements.txt /tmp/req/requirements.txt.1 > /usr/share/bunkerweb/deps/requirements.txt && \
rm -rf /tmp/req
# Update apk
RUN apk update
# Install python dependencies
RUN apk add --no-cache --virtual .build-deps g++ gcc musl-dev jpeg-dev zlib-dev libffi-dev cairo-dev pango-dev gdk-pixbuf-dev openssl-dev cargo postgresql-dev
@ -60,7 +63,7 @@ RUN apk add --no-cache bash && \
chmod 750 /usr/share/bunkerweb/cli/main.py /usr/share/bunkerweb/helpers/*.sh /usr/bin/bwcli /usr/share/bunkerweb/autoconf/main.py /usr/share/bunkerweb/deps/python/bin/*
# Fix CVEs
# There are no CVEs for python:3.11.3-alpine at the moment
RUN apk add --no-cache "libcrypto3>=3.1.1-r0" "libssl3>=3.1.1-r0"
VOLUME /data /etc/nginx

View File

@ -1,6 +1,5 @@
#!/usr/bin/python3
from os import getenv
from time import sleep
from traceback import format_exc
from typing import List
@ -9,19 +8,15 @@ from kubernetes.client.exceptions import ApiException
from threading import Thread, Lock
from Controller import Controller
from ConfigCaller import ConfigCaller # type: ignore
from logger import setup_logger # type: ignore
class IngressController(Controller, ConfigCaller):
class IngressController(Controller):
def __init__(self):
Controller.__init__(self, "kubernetes")
ConfigCaller.__init__(self)
self.__internal_lock = Lock()
super().__init__("kubernetes", self.__internal_lock)
config.load_incluster_config()
self.__corev1 = client.CoreV1Api()
self.__networkingv1 = client.NetworkingV1Api()
self.__internal_lock = Lock()
self.__logger = setup_logger("Ingress-controller", getenv("LOG_LEVEL", "INFO"))
def _get_controller_instances(self) -> list:
return [
@ -51,7 +46,7 @@ class IngressController(Controller, ConfigCaller):
pod = container
break
if not pod:
self.__logger.warning(
self._logger.warning(
f"Missing container bunkerweb in pod {controller_instance.metadata.name}"
)
else:
@ -81,7 +76,7 @@ class IngressController(Controller, ConfigCaller):
# parse rules
for rule in controller_service.spec.rules:
if not rule.host:
self.__logger.warning(
self._logger.warning(
"Ignoring unsupported ingress rule without host.",
)
continue
@ -93,22 +88,22 @@ class IngressController(Controller, ConfigCaller):
location = 1
for path in rule.http.paths:
if not path.path:
self.__logger.warning(
self._logger.warning(
"Ignoring unsupported ingress rule without path.",
)
continue
elif not path.backend.service:
self.__logger.warning(
self._logger.warning(
"Ignoring unsupported ingress rule without backend service.",
)
continue
elif not path.backend.service.port:
self.__logger.warning(
self._logger.warning(
"Ignoring unsupported ingress rule without backend service port.",
)
continue
elif not path.backend.service.port.number:
self.__logger.warning(
self._logger.warning(
"Ignoring unsupported ingress rule without backend service port number.",
)
continue
@ -119,7 +114,7 @@ class IngressController(Controller, ConfigCaller):
).items
if not service_list:
self.__logger.warning(
self._logger.warning(
f"Ignoring ingress rule with service {path.backend.service.name} : service not found.",
)
continue
@ -137,7 +132,7 @@ class IngressController(Controller, ConfigCaller):
# parse tls
if controller_service.spec.tls: # TODO: support tls
self.__logger.warning("Ignoring unsupported tls.")
self._logger.warning("Ignoring unsupported tls.")
# parse annotations
if controller_service.metadata.annotations:
@ -204,12 +199,12 @@ class IngressController(Controller, ConfigCaller):
config_type = configmap.metadata.annotations["bunkerweb.io/CONFIG_TYPE"]
if config_type not in self._supported_config_types:
self.__logger.warning(
self._logger.warning(
f"Ignoring unsupported CONFIG_TYPE {config_type} for ConfigMap {configmap.metadata.name}",
)
continue
elif not configmap.data:
self.__logger.warning(
self._logger.warning(
f"Ignoring blank ConfigMap {configmap.metadata.name}",
)
continue
@ -218,7 +213,7 @@ class IngressController(Controller, ConfigCaller):
if not self._is_service_present(
configmap.metadata.annotations["bunkerweb.io/CONFIG_SITE"]
):
self.__logger.warning(
self._logger.warning(
f"Ignoring config {configmap.metadata.name} because {configmap.metadata.annotations['bunkerweb.io/CONFIG_SITE']} doesn't exist",
)
continue
@ -253,46 +248,41 @@ class IngressController(Controller, ConfigCaller):
self._instances = self.get_instances()
self._services = self.get_services()
self._configs = self.get_configs()
if not self._config.update_needed(
if not self.update_needed(
self._instances, self._services, configs=self._configs
):
self.__internal_lock.release()
locked = False
continue
self.__logger.info(
self._logger.info(
f"Catched kubernetes event ({watch_type}), deploying new configuration ...",
)
try:
ret = self.apply_config()
if not ret:
self.__logger.error(
self._logger.error(
"Error while deploying new configuration ...",
)
else:
self.__logger.info(
self._logger.info(
"Successfully deployed new configuration 🚀",
)
if not self._config._db.is_autoconf_loaded():
ret = self._config._db.set_autoconf_load(True)
if ret:
self.__logger.warning(
f"Can't set autoconf loaded metadata to true in database: {ret}",
)
self._set_autoconf_load_db()
except:
self.__logger.error(
self._logger.error(
f"Exception while deploying new configuration :\n{format_exc()}",
)
self.__internal_lock.release()
locked = False
except ApiException as e:
if e.status != 410:
self.__logger.error(
self._logger.error(
f"API exception while reading k8s event (type = {watch_type}) :\n{format_exc()}",
)
error = True
except:
self.__logger.error(
self._logger.error(
f"Unknown exception while reading k8s event (type = {watch_type}) :\n{format_exc()}",
)
error = True
@ -302,13 +292,11 @@ class IngressController(Controller, ConfigCaller):
locked = False
if error is True:
self.__logger.warning("Got exception, retrying in 10 seconds ...")
self._logger.warning("Got exception, retrying in 10 seconds ...")
sleep(10)
def apply_config(self) -> bool:
return self._config.apply(
self._instances, self._services, configs=self._configs
)
return self.apply(self._instances, self._services, configs=self._configs)
def process_events(self):
self._set_autoconf_load_db()

View File

@ -1,6 +1,5 @@
#!/usr/bin/python3
from os import getenv
from time import sleep
from traceback import format_exc
from threading import Thread, Lock
@ -10,17 +9,13 @@ from base64 import b64decode
from docker.models.services import Service
from Controller import Controller
from ConfigCaller import ConfigCaller # type: ignore
from logger import setup_logger # type: ignore
class SwarmController(Controller, ConfigCaller):
class SwarmController(Controller):
def __init__(self, docker_host):
Controller.__init__(self, "swarm")
ConfigCaller.__init__(self)
super().__init__("swarm")
self.__client = DockerClient(base_url=docker_host)
self.__internal_lock = Lock()
self.__logger = setup_logger("Swarm-controller", getenv("LOG_LEVEL", "INFO"))
def _get_controller_instances(self) -> List[Service]:
return self.__client.services.list(filters={"label": "bunkerweb.INSTANCE"})
@ -110,7 +105,7 @@ class SwarmController(Controller, ConfigCaller):
config_type = config.attrs["Spec"]["Labels"]["bunkerweb.CONFIG_TYPE"]
config_name = config.name
if config_type not in self._supported_config_types:
self.__logger.warning(
self._logger.warning(
f"Ignoring unsupported CONFIG_TYPE {config_type} for Config {config_name}",
)
continue
@ -119,7 +114,7 @@ class SwarmController(Controller, ConfigCaller):
if not self._is_service_present(
config.attrs["Spec"]["Labels"]["bunkerweb.CONFIG_SITE"]
):
self.__logger.warning(
self._logger.warning(
f"Ignoring config {config_name} because {config.attrs['Spec']['Labels']['bunkerweb.CONFIG_SITE']} doesn't exist",
)
continue
@ -132,9 +127,7 @@ class SwarmController(Controller, ConfigCaller):
return configs
def apply_config(self) -> bool:
return self._config.apply(
self._instances, self._services, configs=self._configs
)
return self.apply(self._instances, self._services, configs=self._configs)
def __event(self, event_type):
while True:
@ -150,31 +143,31 @@ class SwarmController(Controller, ConfigCaller):
self._instances = self.get_instances()
self._services = self.get_services()
self._configs = self.get_configs()
if not self._config.update_needed(
if not self.update_needed(
self._instances, self._services, configs=self._configs
):
self.__internal_lock.release()
locked = False
continue
self.__logger.info(
self._logger.info(
f"Catched Swarm event ({event_type}), deploying new configuration ..."
)
if not self.apply_config():
self.__logger.error(
self._logger.error(
"Error while deploying new configuration"
)
else:
self.__logger.info(
self._logger.info(
"Successfully deployed new configuration 🚀",
)
except:
self.__logger.error(
self._logger.error(
f"Exception while processing Swarm event ({event_type}) :\n{format_exc()}"
)
self.__internal_lock.release()
locked = False
except:
self.__logger.error(
self._logger.error(
f"Exception while reading Swarm event ({event_type}) :\n{format_exc()}",
)
error = True
@ -183,7 +176,7 @@ class SwarmController(Controller, ConfigCaller):
self.__internal_lock.release()
locked = False
if error is True:
self.__logger.warning("Got exception, retrying in 10 seconds ...")
self._logger.warning("Got exception, retrying in 10 seconds ...")
sleep(10)
def process_events(self):

View File

@ -3,6 +3,9 @@ FROM nginx:1.24.0-alpine AS builder
# Copy dependencies sources folder
COPY src/deps /tmp/bunkerweb/deps
# Update apk
RUN apk update
# Compile and install dependencies
RUN apk add --no-cache --virtual .build-deps bash autoconf libtool automake geoip-dev g++ gcc curl-dev libxml2-dev pcre-dev make linux-headers musl-dev gd-dev gnupg brotli-dev openssl-dev patch readline-dev && \
mkdir -p /usr/share/bunkerweb/deps && \
@ -51,6 +54,7 @@ COPY --from=builder --chown=0:101 /usr/share/bunkerweb /usr/share/bunkerweb
RUN apk add --no-cache pcre bash python3 && \
cp /usr/share/bunkerweb/helpers/bwcli /usr/bin/ && \
mkdir -p /var/tmp/bunkerweb && \
mkdir -p /var/run/bunkerweb && \
mkdir -p /var/www/html && \
mkdir -p /etc/bunkerweb && \
mkdir -p /data/cache && ln -s /data/cache /var/cache/bunkerweb && \
@ -58,8 +62,8 @@ RUN apk add --no-cache pcre bash python3 && \
for dir in $(echo "configs/http configs/stream configs/server-http configs/server-stream configs/default-server-http configs/default-server-stream configs/modsec configs/modsec-crs") ; do mkdir "/data/${dir}" ; done && \
chown -R root:nginx /data && \
chmod -R 770 /data && \
chown -R root:nginx /var/cache/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb /usr/bin/bwcli && \
chmod 770 /var/cache/bunkerweb /var/tmp/bunkerweb && \
chown -R root:nginx /var/cache/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /usr/bin/bwcli && \
chmod 770 /var/cache/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb && \
chmod 750 /usr/share/bunkerweb/cli/main.py /usr/share/bunkerweb/gen/main.py /usr/share/bunkerweb/helpers/*.sh /usr/share/bunkerweb/entrypoint.sh /usr/bin/bwcli /usr/share/bunkerweb/deps/python/bin/* && \
chown -R root:nginx /etc/nginx && \
chmod -R 770 /etc/nginx && \

View File

@ -21,7 +21,7 @@ trap "trap_exit" TERM INT QUIT
# trap SIGHUP
function trap_reload() {
log "ENTRYPOINT" "" "Catched reload operation"
if [ -f /var/tmp/bunkerweb/nginx.pid ] ; then
if [ -f /var/run/bunkerweb/nginx.pid ] ; then
log "ENTRYPOINT" "" "Reloading nginx ..."
nginx -s reload
if [ $? -eq 0 ] ; then
@ -50,7 +50,7 @@ pid="$!"
# wait while nginx is running
wait "$pid"
while [ -f "/var/tmp/bunkerweb/nginx.pid" ] ; do
while [ -f "/var/run/bunkerweb/nginx.pid" ] ; do
wait "$pid"
done

View File

@ -7,12 +7,16 @@ from requests import request
class API:
def __init__(self, endpoint: str, host: str = "bwapi"):
self.__endpoint = endpoint
if not self.__endpoint.endswith("/"):
self.__endpoint += "/"
self.__host = host
def get_endpoint(self) -> str:
@property
def endpoint(self) -> str:
return self.__endpoint
def get_host(self) -> str:
@property
def host(self) -> str:
return self.__host
def request(
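
The getter methods become read-only properties, and the constructor now guarantees a trailing slash on the endpoint. A condensed, runnable sketch of the resulting behavior (illustration only, not the full class):

```python
class API:
    """Condensed illustration of the class above."""

    def __init__(self, endpoint: str, host: str = "bwapi"):
        # normalize once so every caller can rely on the trailing slash
        self.__endpoint = endpoint if endpoint.endswith("/") else f"{endpoint}/"
        self.__host = host

    @property
    def endpoint(self) -> str:
        return self.__endpoint

    @property
    def host(self) -> str:
        return self.__host


api = API("http://127.0.0.1:5000")
assert api.endpoint == "http://127.0.0.1:5000/"
assert api.host == "bwapi"
```

Callers such as the certbot jobs further down switch from `api.get_endpoint()` to `api.endpoint` accordingly.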

View File

@ -121,7 +121,7 @@ class CLI(ApiCaller):
):
# Docker & Linux case
super().__init__(
apis=[
[
API(
f"http://127.0.0.1:{self.__variables.get('API_HTTP_PORT', '5000')}",
host=self.__variables.get("API_SERVER_NAME", "bwapi"),
@ -142,8 +142,10 @@ class CLI(ApiCaller):
elif self.__variables.get("AUTOCONF_MODE", "no").lower() == "yes":
return "autoconf"
elif integration_path.is_file():
return integration_path.read_text().strip().lower()
elif os_release_path.is_file() and "Alpine" in os_release_path.read_text():
return integration_path.read_text(encoding="utf-8").strip().lower()
elif os_release_path.is_file() and "Alpine" in os_release_path.read_text(
encoding="utf-8"
):
return "docker"
return "linux"
@ -154,7 +156,7 @@ class CLI(ApiCaller):
if not ok:
self.__logger.error(f"Failed to delete ban for {ip} from redis")
if self._send_to_apis("POST", "/unban", data={"ip": ip}):
if self.send_to_apis("POST", "/unban", data={"ip": ip}):
return True, f"IP {ip} has been unbanned"
return False, "error"
@ -168,7 +170,7 @@ class CLI(ApiCaller):
if not ok:
self.__logger.error(f"Failed to ban {ip} in redis")
if self._send_to_apis("POST", "/ban", data={"ip": ip, "exp": exp}):
if self.send_to_apis("POST", "/ban", data={"ip": ip, "exp": exp}):
return (
True,
f"IP {ip} has been banned for {format_remaining_time(exp)}",
@ -178,7 +180,7 @@ class CLI(ApiCaller):
def bans(self) -> Tuple[bool, str]:
servers = {}
ret, resp = self._send_to_apis("GET", "/bans", response=True)
ret, resp = self.send_to_apis("GET", "/bans", response=True)
if not ret:
return False, "error"
@ -206,7 +208,6 @@ class CLI(ApiCaller):
for ban in bans:
cli_str += f"- {ban['ip']} for {format_remaining_time(ban['exp'])} : {ban.get('reason', 'no reason given')}\n"
else:
cli_str += "\n"
cli_str += "\n"
return True, cli_str

View File

@ -5,7 +5,7 @@ server {
listen {{ API_LISTEN_IP }}:{{ API_HTTP_PORT }};
{% if API_LISTEN_IP != "127.0.0.1" +%}
listen 127.0.0.1:{{ API_HTTP_PORT }};
{% endif +%}
{% endif %}
# maximum body size for API
client_max_body_size 1G;

View File

@ -53,6 +53,7 @@ lua_shared_dict cachestore_locks {{ CACHESTORE_LOCKS_MEMORY_SIZE }};
{% if LOG_LEVEL != "info" and LOG_LEVEL != "debug" %}
lua_socket_log_errors off;
{% endif %}
access_by_lua_no_postpone on;
# LUA init block
include /etc/nginx/init-lua.conf;

View File

@ -15,7 +15,7 @@ load_module /usr/share/bunkerweb/modules/ngx_http_brotli_static_module.so;
load_module /usr/share/bunkerweb/modules/ngx_stream_lua_module.so;
# PID file
pid /var/tmp/bunkerweb/nginx.pid;
pid /var/run/bunkerweb/nginx.pid;
# worker number (default = auto)
worker_processes {{ WORKER_PROCESSES }};

View File

@ -145,7 +145,7 @@ try:
for url in urls_list:
try:
logger.info(f"Downloading blacklist data from {url} ...")
resp = get(url, stream=True)
resp = get(url, stream=True, timeout=10)
if resp.status_code != 200:
continue

View File

@ -53,8 +53,7 @@ try:
bunkernet_tmp_path.mkdir(parents=True, exist_ok=True)
# Create empty file in case it doesn't exist
if not bunkernet_path.joinpath("ip.list").is_file():
bunkernet_path.joinpath("ip.list").write_text("")
bunkernet_path.joinpath("ip.list").touch(exist_ok=True)
# Get ID from cache
bunkernet_id = None

View File

@ -32,7 +32,7 @@ try:
bunkernet_activated = False
# Multisite case
if getenv("MULTISITE", "no") == "yes":
servers = getenv("SERVER_NAME", [])
servers = getenv("SERVER_NAME") or []
if isinstance(servers, str):
servers = servers.split(" ")
@ -110,7 +110,7 @@ try:
)
_exit(2)
bunkernet_id = data["data"]
instance_id_path.write_text(bunkernet_id)
instance_id_path.write_text(bunkernet_id, encoding="utf-8")
registered = True
exit_status = 1
logger.info(

View File

@ -53,13 +53,17 @@ def data() -> Tuple[bool, Optional[int], Union[str, dict]]:
def get_id() -> str:
return (
Path(sep, "var", "cache", "bunkerweb", "bunkernet", "instance.id")
.read_text()
.read_text(encoding="utf-8")
.strip()
)
def get_version() -> str:
return Path(sep, "usr", "share", "bunkerweb", "VERSION").read_text().strip()
return (
Path(sep, "usr", "share", "bunkerweb", "VERSION")
.read_text(encoding="utf-8")
.strip()
)
def get_integration() -> str:
@ -73,8 +77,10 @@ def get_integration() -> str:
elif getenv("AUTOCONF_MODE", "no").lower() == "yes":
return "autoconf"
elif integration_path.is_file():
return integration_path.read_text().strip().lower()
elif os_release_path.is_file() and "Alpine" in os_release_path.read_text():
return integration_path.read_text(encoding="utf-8").strip().lower()
elif os_release_path.is_file() and "Alpine" in os_release_path.read_text(
encoding="utf-8"
):
return "docker"
return "linux"

View File

@ -1,6 +1,7 @@
{% set os_path = import("os.path") %}
{% if USE_CUSTOM_SSL == "yes" and os_path.isfile("/var/cache/bunkerweb/customcert/cert.pem") and os_path.isfile("/var/cache/bunkerweb/customcert/cert.key") +%}
{% if USE_CUSTOM_SSL == "yes" %}
{% if os_path.isfile("/var/cache/bunkerweb/customcert/cert.pem") and os_path.isfile("/var/cache/bunkerweb/customcert/key.pem") or os_path.isfile("/var/cache/bunkerweb/customcert/" + SERVER_NAME + "/cert.pem") and os_path.isfile("/var/cache/bunkerweb/customcert/" + SERVER_NAME + "/key.pem") +%}
# listen on HTTPS PORT
listen 0.0.0.0:{{ HTTPS_PORT }} ssl {% if HTTP2 == "yes" %}http2{% endif %} {% if USE_PROXY_PROTOCOL == "yes" %}proxy_protocol{% endif %};
@ -9,8 +10,16 @@ listen [::]:{{ HTTPS_PORT }} ssl {% if HTTP2 == "yes" %}http2{% endif %} {% if U
{% endif %}
# TLS config
{% if os_path.isfile("/var/cache/bunkerweb/customcert/" + SERVER_NAME + "/cert.pem") %}
ssl_certificate /var/cache/bunkerweb/customcert/{{ SERVER_NAME }}/cert.pem;
{% else %}
ssl_certificate /var/cache/bunkerweb/customcert/cert.pem;
ssl_certificate_key /var/cache/bunkerweb/customcert/cert.key;
{% endif %}
{% if os_path.isfile("/var/cache/bunkerweb/customcert/" + SERVER_NAME + "/key.pem") %}
ssl_certificate_key /var/cache/bunkerweb/customcert/{{ SERVER_NAME }}/key.pem;
{% else %}
ssl_certificate_key /var/cache/bunkerweb/customcert/key.pem;
{% endif %}
ssl_protocols {{ SSL_PROTOCOLS }};
ssl_prefer_server_ciphers on;
ssl_session_tickets off;
@ -21,4 +30,5 @@ ssl_dhparam /etc/nginx/dhparam;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
{% endif %}
{% endif %}
{% endif %}
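
The template now prefers a per-server certificate directory (`customcert/<SERVER_NAME>/`) and falls back to the shared `cert.pem`/`key.pem`. A small Python sketch of the same resolution logic, assuming the cache layout written by the custom-cert job further down:

```python
from pathlib import Path

CACHE = Path("/var/cache/bunkerweb/customcert")  # cache root used above


def resolve_cert(server_name: str) -> tuple[Path, Path]:
    """Mirror the template's fallback: per-server files win, else shared ones."""
    cert = CACHE / server_name / "cert.pem"
    key = CACHE / server_name / "key.pem"
    return (
        cert if cert.is_file() else CACHE / "cert.pem",
        key if key.is_file() else CACHE / "key.pem",
    )
```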

View File

@ -1,6 +1,7 @@
{% set os_path = import("os.path") %}
{% if USE_CUSTOM_SSL == "yes" and os_path.isfile("/var/cache/bunkerweb/customcert/cert.pem") and os_path.isfile("/var/cache/bunkerweb/customcert/cert.key") +%}
{% if USE_CUSTOM_SSL == "yes" %}
{% if os_path.isfile("/var/cache/bunkerweb/customcert/cert.pem") and os_path.isfile("/var/cache/bunkerweb/customcert/key.pem") or os_path.isfile("/var/cache/bunkerweb/customcert/" + SERVER_NAME + "/cert.pem") and os_path.isfile("/var/cache/bunkerweb/customcert/" + SERVER_NAME + "/key.pem") +%}
# listen
listen 0.0.0.0:{{ LISTEN_STREAM_PORT_SSL }} ssl {% if USE_UDP == "yes" %} udp {% endif %}{% if USE_PROXY_PROTOCOL == "yes" %} proxy_protocol {% endif %};
@ -9,8 +10,16 @@ listen [::]:{{ LISTEN_STREAM_PORT_SSL }} ssl {% if USE_UDP == "yes" %} udp {% en
{% endif %}
# TLS config
{% if os_path.isfile("/var/cache/bunkerweb/customcert/" + SERVER_NAME + "/cert.pem") %}
ssl_certificate /var/cache/bunkerweb/customcert/{{ SERVER_NAME }}/cert.pem;
{% else %}
ssl_certificate /var/cache/bunkerweb/customcert/cert.pem;
ssl_certificate_key /var/cache/bunkerweb/customcert/cert.key;
{% endif %}
{% if os_path.isfile("/var/cache/bunkerweb/customcert/" + SERVER_NAME + "/key.pem") %}
ssl_certificate_key /var/cache/bunkerweb/customcert/{{ SERVER_NAME }}/key.pem;
{% else %}
ssl_certificate_key /var/cache/bunkerweb/customcert/key.pem;
{% endif %}
ssl_protocols {{ SSL_PROTOCOLS }};
ssl_prefer_server_ciphers on;
ssl_session_tickets off;
@ -21,4 +30,5 @@ ssl_dhparam /etc/nginx/dhparam;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
{% endif %}
{% endif %}
{% endif %}

View File

@ -36,8 +36,8 @@ def check_cert(
)
return False
cert_path = Path(normpath(cert_path))
key_path = Path(normpath(key_path))
cert_path: Path = Path(normpath(cert_path))
key_path: Path = Path(normpath(key_path))
if not cert_path.is_file():
logger.warning(
@ -51,8 +51,15 @@ def check_cert(
return False
cert_cache_path = Path(
sep, "var", "cache", "bunkerweb", "customcert", "cert.pem"
sep,
"var",
"cache",
"bunkerweb",
"customcert",
first_server or "",
"cert.pem",
)
cert_cache_path.parent.mkdir(parents=True, exist_ok=True)
cert_hash = file_hash(cert_path)
old_hash = cache_hash(cert_cache_path, db)
@ -66,8 +73,15 @@ def check_cert(
logger.error(f"Error while caching custom-cert cert.pem file : {err}")
key_cache_path = Path(
sep, "var", "cache", "bunkerweb", "customcert", "cert.key"
sep,
"var",
"cache",
"bunkerweb",
"customcert",
first_server or "",
"key.pem",
)
key_cache_path.parent.mkdir(parents=True, exist_ok=True)
key_hash = file_hash(key_path)
old_hash = cache_hash(key_cache_path, db)
@ -76,7 +90,7 @@ def check_cert(
key_path, key_cache_path, key_hash, db, delete_file=False
)
if not cached:
logger.error(f"Error while caching custom-cert cert.key file : {err}")
logger.error(f"Error while caching custom-cert key.pem file : {err}")
return True
except:
@ -93,9 +107,26 @@ try:
parents=True, exist_ok=True
)
# Multisite case
if getenv("MULTISITE") == "yes":
servers = getenv("SERVER_NAME", [])
if getenv("USE_CUSTOM_SSL", "no") == "yes" and getenv("SERVER_NAME", "") != "":
db = Database(
logger,
sqlalchemy_string=getenv("DATABASE_URI", None),
)
cert_path = getenv("CUSTOM_SSL_CERT", "")
key_path = getenv("CUSTOM_SSL_KEY", "")
if cert_path and key_path:
logger.info(f"Checking certificate {cert_path} ...")
need_reload = check_cert(cert_path, key_path)
if need_reload:
logger.info(f"Detected change for certificate {cert_path}")
status = 1
else:
logger.info(f"No change for certificate {cert_path}")
if getenv("MULTISITE", "no") == "yes":
servers = getenv("SERVER_NAME") or []
if isinstance(servers, str):
servers = servers.split(" ")
@ -113,43 +144,23 @@ try:
sqlalchemy_string=getenv("DATABASE_URI", None),
)
cert_path = getenv(
f"{first_server}_CUSTOM_SSL_CERT", getenv("CUSTOM_SSL_CERT", "")
)
key_path = getenv(
f"{first_server}_CUSTOM_SSL_KEY", getenv("CUSTOM_SSL_KEY", "")
)
cert_path = getenv(f"{first_server}_CUSTOM_SSL_CERT", "")
key_path = getenv(f"{first_server}_CUSTOM_SSL_KEY", "")
logger.info(
f"Checking certificate {cert_path} ...",
)
need_reload = check_cert(cert_path, key_path, first_server)
if need_reload:
if cert_path and key_path:
logger.info(
f"Detected change for certificate {cert_path}",
f"Checking certificate {cert_path} ...",
)
status = 1
else:
logger.info(
f"No change for certificate {cert_path}",
)
# Singlesite case
elif getenv("USE_CUSTOM_SSL") == "yes" and getenv("SERVER_NAME") != "":
db = Database(
logger,
sqlalchemy_string=getenv("DATABASE_URI", None),
)
cert_path = getenv("CUSTOM_SSL_CERT", "")
key_path = getenv("CUSTOM_SSL_KEY", "")
logger.info(f"Checking certificate {cert_path} ...")
need_reload = check_cert(cert_path, key_path)
if need_reload:
logger.info(f"Detected change for certificate {cert_path}")
status = 1
else:
logger.info(f"No change for certificate {cert_path}")
need_reload = check_cert(cert_path, key_path, first_server)
if need_reload:
logger.info(
f"Detected change for certificate {cert_path}",
)
status = 1
else:
logger.info(
f"No change for certificate {cert_path}",
)
except:
status = 2
logger.error(f"Exception while running custom-cert.py :\n{format_exc()}")

View File

@ -129,7 +129,7 @@ try:
for url in urls_list:
try:
logger.info(f"Downloading greylist data from {url} ...")
resp = get(url, stream=True)
resp = get(url, stream=True, timeout=10)
if resp.status_code != 200:
continue

View File

@ -40,7 +40,7 @@ status = 0
def install_plugin(plugin_dir) -> bool:
# Load plugin.json
metadata = loads(Path(plugin_dir, "plugin.json").read_text())
metadata = loads(Path(plugin_dir, "plugin.json").read_text(encoding="utf-8"))
# Don't go further if plugin is already installed
if Path("etc", "bunkerweb", "plugins", metadata["id"], "plugin.json").is_file():
logger.warning(
@ -71,7 +71,7 @@ try:
for plugin_url in plugin_urls.split(" "):
# Download ZIP file
try:
req = get(plugin_url)
req = get(plugin_url, timeout=10)
except:
logger.error(
f"Exception while downloading plugin(s) from {plugin_url} :\n{format_exc()}",
@ -122,7 +122,7 @@ try:
rmtree(path, ignore_errors=True)
continue
plugin_file = loads(Path(path, "plugin.json").read_text())
plugin_file = loads(Path(path, "plugin.json").read_text(encoding="utf-8"))
plugin_content = BytesIO()
with tar_open(fileobj=plugin_content, mode="w:gz", compresslevel=9) as tar:

View File

@ -22,7 +22,7 @@ for deps_path in [
sys_path.append(deps_path)
from maxminddb import open_database
from requests import get
from requests import RequestException, get
from Database import Database # type: ignore
from logger import setup_logger # type: ignore
@ -41,9 +41,15 @@ try:
# Don't go further if the cache match the latest version
if tmp_path.exists():
with lock:
response = get("https://db-ip.com/db/download/ip-to-asn-lite")
response = None
try:
response = get(
"https://db-ip.com/db/download/ip-to-asn-lite", timeout=5
)
except RequestException:
logger.warning("Unable to check if asn.mmdb is the latest version")
if response.status_code == 200:
if response and response.status_code == 200:
_sha1 = sha1()
with open(str(tmp_path), "rb") as f:
while True:
@ -79,11 +85,15 @@ try:
# Download the mmdb file and save it to tmp
logger.info(f"Downloading mmdb file from url {mmdb_url} ...")
file_content = b""
with get(mmdb_url, stream=True) as resp:
resp.raise_for_status()
for chunk in resp.iter_content(chunk_size=4 * 1024):
if chunk:
file_content += chunk
try:
with get(mmdb_url, stream=True, timeout=5) as resp:
resp.raise_for_status()
for chunk in resp.iter_content(chunk_size=4 * 1024):
if chunk:
file_content += chunk
except RequestException:
logger.error(f"Error while downloading mmdb file from {mmdb_url}")
_exit(2)
try:
assert file_content

View File

@ -22,7 +22,7 @@ for deps_path in [
sys_path.append(deps_path)
from maxminddb import open_database
from requests import get
from requests import RequestException, get
from Database import Database # type: ignore
from logger import setup_logger # type: ignore
@ -41,9 +41,15 @@ try:
# Don't go further if the cache match the latest version
if tmp_path.exists():
with lock:
response = get("https://db-ip.com/db/download/ip-to-country-lite")
response = None
try:
response = get(
"https://db-ip.com/db/download/ip-to-country-lite", timeout=5
)
except RequestException:
logger.warning("Unable to check if country.mmdb is the latest version")
if response.status_code == 200:
if response and response.status_code == 200:
_sha1 = sha1()
with open(str(tmp_path), "rb") as f:
while True:
@ -79,11 +85,15 @@ try:
# Download the mmdb file and save it to tmp
logger.info(f"Downloading mmdb file from url {mmdb_url} ...")
file_content = b""
with get(mmdb_url, stream=True) as resp:
resp.raise_for_status()
for chunk in resp.iter_content(chunk_size=4 * 1024):
if chunk:
file_content += chunk
try:
with get(mmdb_url, stream=True, timeout=5) as resp:
resp.raise_for_status()
for chunk in resp.iter_content(chunk_size=4 * 1024):
if chunk:
file_content += chunk
except RequestException:
logger.error(f"Error while downloading mmdb file from {mmdb_url}")
_exit(2)
try:
assert file_content

View File

@ -37,7 +37,7 @@ try:
elif getenv("AUTOCONF_MODE") == "yes":
bw_integration = "Autoconf"
elif integration_path.is_file():
integration = integration_path.read_text().strip()
integration = integration_path.read_text(encoding="utf-8").strip()
token = getenv("CERTBOT_TOKEN", "")
validation = getenv("CERTBOT_VALIDATION", "")
@ -65,16 +65,16 @@ try:
if not sent:
status = 1
logger.error(
f"Can't send API request to {api.get_endpoint()}/lets-encrypt/challenge : {err}"
f"Can't send API request to {api.endpoint}/lets-encrypt/challenge : {err}"
)
elif status != 200:
status = 1
logger.error(
f"Error while sending API request to {api.get_endpoint()}/lets-encrypt/challenge : status = {resp['status']}, msg = {resp['msg']}",
f"Error while sending API request to {api.endpoint}/lets-encrypt/challenge : status = {resp['status']}, msg = {resp['msg']}",
)
else:
logger.info(
f"Successfully sent API request to {api.get_endpoint()}/lets-encrypt/challenge",
f"Successfully sent API request to {api.endpoint}/lets-encrypt/challenge",
)
# Linux case
@ -89,7 +89,7 @@ try:
"acme-challenge",
)
root_dir.mkdir(parents=True, exist_ok=True)
root_dir.joinpath(token).write_text(validation)
root_dir.joinpath(token).write_text(validation, encoding="utf-8")
except:
status = 1
logger.error(f"Exception while running certbot-auth.py :\n{format_exc()}")

View File

@ -37,7 +37,7 @@ try:
elif getenv("AUTOCONF_MODE") == "yes":
bw_integration = "Autoconf"
elif integration_path.is_file():
integration = integration_path.read_text().strip()
integration = integration_path.read_text(encoding="utf-8").strip()
token = getenv("CERTBOT_TOKEN", "")
# Cluster case
@ -61,16 +61,16 @@ try:
if not sent:
status = 1
logger.error(
f"Can't send API request to {api.get_endpoint()}/lets-encrypt/challenge : {err}"
f"Can't send API request to {api.endpoint}/lets-encrypt/challenge : {err}"
)
elif status != 200:
status = 1
logger.error(
f"Error while sending API request to {api.get_endpoint()}/lets-encrypt/challenge : status = {resp['status']}, msg = {resp['msg']}",
f"Error while sending API request to {api.endpoint}/lets-encrypt/challenge : status = {resp['status']}, msg = {resp['msg']}",
)
else:
logger.info(
f"Successfully sent API request to {api.get_endpoint()}/lets-encrypt/challenge",
f"Successfully sent API request to {api.endpoint}/lets-encrypt/challenge",
)
# Linux case
else:

View File

@ -40,7 +40,7 @@ try:
elif getenv("AUTOCONF_MODE") == "yes":
bw_integration = "Autoconf"
elif integration_path.is_file():
integration = integration_path.read_text().strip()
integration = integration_path.read_text(encoding="utf-8").strip()
token = getenv("CERTBOT_TOKEN", "")
logger.info(f"Certificates renewal for {getenv('RENEWED_DOMAINS')} successful")
@ -78,31 +78,31 @@ try:
if not sent:
status = 1
logger.error(
f"Can't send API request to {api.get_endpoint()}/lets-encrypt/certificates : {err}"
f"Can't send API request to {api.endpoint}/lets-encrypt/certificates : {err}"
)
elif status != 200:
status = 1
logger.error(
f"Error while sending API request to {api.get_endpoint()}/lets-encrypt/certificates : status = {resp['status']}, msg = {resp['msg']}"
f"Error while sending API request to {api.endpoint}/lets-encrypt/certificates : status = {resp['status']}, msg = {resp['msg']}"
)
else:
logger.info(
f"Successfully sent API request to {api.get_endpoint()}/lets-encrypt/certificates",
f"Successfully sent API request to {api.endpoint}/lets-encrypt/certificates",
)
sent, err, status, resp = api.request("POST", "/reload")
if not sent:
status = 1
logger.error(
f"Can't send API request to {api.get_endpoint()}/reload : {err}"
f"Can't send API request to {api.endpoint}/reload : {err}"
)
elif status != 200:
status = 1
logger.error(
f"Error while sending API request to {api.get_endpoint()}/reload : status = {resp['status']}, msg = {resp['msg']}"
f"Error while sending API request to {api.endpoint}/reload : status = {resp['status']}, msg = {resp['msg']}"
)
else:
logger.info(
f"Successfully sent API request to {api.get_endpoint()}/reload"
f"Successfully sent API request to {api.endpoint}/reload"
)
# Linux case
else:
@ -111,6 +111,7 @@ try:
["sudo", join(sep, "usr", "sbin", "nginx"), "-s", "reload"],
stdin=DEVNULL,
stderr=STDOUT,
check=False,
).returncode
!= 0
):

View File

@ -60,6 +60,7 @@ def certbot_new(
stderr=STDOUT,
env=environ.copy()
| {"PYTHONPATH": join(sep, "usr", "share", "bunkerweb", "deps", "python")},
check=True,
).returncode
@ -190,7 +191,7 @@ try:
bio.seek(0, 0)
# Put tgz in cache
cached, err = set_file_in_db(f"folder.tgz", bio.read(), db)
cached, err = set_file_in_db("folder.tgz", bio.read(), db)
if not cached:
logger.error(f"Error while saving Let's Encrypt data to db cache : {err}")

View File

@ -54,6 +54,7 @@ def renew(domain: str, letsencrypt_path: Path) -> int:
stdin=DEVNULL,
stderr=STDOUT,
env=environ,
check=False,
).returncode
@ -101,8 +102,8 @@ try:
else:
logger.info("No Let's Encrypt data found in db cache")
if getenv("MULTISITE") == "yes":
servers = getenv("SERVER_NAME", [])
if getenv("MULTISITE", "no") == "yes":
servers = getenv("SERVER_NAME") or []
if isinstance(servers, str):
servers = servers.split(" ")

View File

@ -85,6 +85,7 @@ try:
],
stdin=DEVNULL,
stderr=DEVNULL,
check=False,
).returncode
!= 0
):

View File

@ -23,11 +23,14 @@ logger = setup_logger("UPDATE-CHECK", getenv("LOG_LEVEL", "INFO"))
status = 0
try:
current_version = f"v{Path('/usr/share/bunkerweb/VERSION').read_text().strip()}"
current_version = (
f"v{Path('/usr/share/bunkerweb/VERSION').read_text(encoding='utf-8').strip()}"
)
response = get(
"https://github.com/bunkerity/bunkerweb/releases/latest",
allow_redirects=True,
timeout=5,
)
response.raise_for_status()

View File

@ -92,7 +92,7 @@ try:
for url in urls:
try:
logger.info(f"Downloading RealIP list from {url} ...")
resp = get(url, stream=True)
resp = get(url, stream=True, timeout=10)
if resp.status_code != 200:
continue

View File

@ -47,6 +47,7 @@ def generate_cert(
],
stdin=DEVNULL,
stderr=STDOUT,
check=False,
).returncode
== 0
):
@ -74,6 +75,7 @@ def generate_cert(
],
stdin=DEVNULL,
stderr=DEVNULL,
check=False,
).returncode
!= 0
):
@ -111,7 +113,7 @@ try:
# Multisite case
if getenv("MULTISITE") == "yes":
servers = getenv("SERVER_NAME", [])
servers = getenv("SERVER_NAME") or []
if isinstance(servers, str):
servers = servers.split(" ")

View File

@ -129,7 +129,7 @@ try:
for url in urls_list:
try:
logger.info(f"Downloading whitelist data from {url} ...")
resp = get(url, stream=True)
resp = get(url, stream=True, timeout=10)
if resp.status_code != 200:
continue

View File

@ -16,7 +16,7 @@
},
"WHITELIST_IP": {
"context": "multisite",
"default": "20.191.45.212 40.88.21.235 40.76.173.151 40.76.163.7 20.185.79.47 52.142.26.175 20.185.79.15 52.142.24.149 40.76.162.208 40.76.163.23 40.76.162.191 40.76.162.247 54.208.102.37 107.21.1.8",
"default": "20.191.45.212 40.88.21.235 40.76.173.151 40.76.163.7 20.185.79.47 52.142.26.175 20.185.79.15 52.142.24.149 40.76.162.208 40.76.163.23 40.76.162.191 40.76.162.247",
"help": "List of IP/network, separated with spaces, to put into the whitelist.",
"id": "whitelist-ip",
"label": "Whitelist IP/network",

View File

@ -11,7 +11,7 @@ from os.path import basename, dirname, join
from pathlib import Path
from re import compile as re_compile
from sys import _getframe, path as sys_path
from typing import Any, Dict, List, Optional, Tuple
from typing import Any, Dict, List, Literal, Optional, Tuple, Union
from time import sleep
from traceback import format_exc
@ -55,7 +55,13 @@ install_as_MySQLdb()
class Database:
def __init__(self, logger: Logger, sqlalchemy_string: str = None) -> None:
def __init__(
self,
logger: Logger,
sqlalchemy_string: Optional[str] = None,
*,
ui: bool = False,
) -> None:
"""Initialize the database"""
self.__logger = logger
self.__sql_session = None
@ -67,10 +73,14 @@ class Database:
)
if sqlalchemy_string.startswith("sqlite"):
with suppress(FileExistsError):
Path(dirname(sqlalchemy_string.split("///")[1])).mkdir(
parents=True, exist_ok=True
)
if ui:
while not Path(sep, "var", "lib", "bunkerweb", "db.sqlite3").is_file():
sleep(1)
else:
with suppress(FileExistsError):
Path(dirname(sqlalchemy_string.split("///")[1])).mkdir(
parents=True, exist_ok=True
)
elif "+" in sqlalchemy_string and "+pymysql" not in sqlalchemy_string:
splitted = sqlalchemy_string.split("+")
sqlalchemy_string = f"{splitted[0]}:{':'.join(splitted[1].split(':')[1:])}"
@ -151,9 +161,6 @@ class Database:
)
self.suffix_rx = re_compile(r"_\d+$")
def get_database_uri(self) -> str:
return self.database_uri
def __del__(self) -> None:
"""Close the database"""
if self.__sql_session:
@ -257,31 +264,44 @@ class Database:
return ""
def check_changes(self) -> Dict[str, bool]:
def check_changes(
self, _type: Union[Literal["scheduler"], Literal["ui"]] = "scheduler"
) -> Union[Dict[str, bool], bool, str]:
"""Check if either the config, the custom configs or plugins have changed inside the database"""
with self.__db_session() as session:
try:
metadata = (
session.query(Metadata)
.with_entities(
if _type == "scheduler":
entities = (
Metadata.custom_configs_changed,
Metadata.external_plugins_changed,
Metadata.config_changed,
)
else:
entities = (Metadata.ui_config_changed,)
metadata = (
session.query(Metadata)
.with_entities(*entities)
.filter_by(id=1)
.first()
)
return dict(
custom_configs_changed=metadata is not None
and metadata.custom_configs_changed,
external_plugins_changed=metadata is not None
and metadata.external_plugins_changed,
config_changed=metadata is not None and metadata.config_changed,
)
if _type == "scheduler":
return dict(
custom_configs_changed=metadata is not None
and metadata.custom_configs_changed,
external_plugins_changed=metadata is not None
and metadata.external_plugins_changed,
config_changed=metadata is not None and metadata.config_changed,
)
else:
return metadata is not None and metadata.ui_config_changed
except BaseException:
return format_exc()
def checked_changes(self) -> str:
def checked_changes(
self, _type: Union[Literal["scheduler"], Literal["ui"]] = "scheduler"
) -> str:
"""Set that the config, the custom configs and the plugins didn't change"""
with self.__db_session() as session:
try:
@ -290,9 +310,12 @@ class Database:
if not metadata:
return "The metadata are not set yet, try again"
metadata.config_changed = False
metadata.custom_configs_changed = False
metadata.external_plugins_changed = False
if _type == "scheduler":
metadata.config_changed = False
metadata.custom_configs_changed = False
metadata.external_plugins_changed = False
else:
metadata.ui_config_changed = False
session.commit()
except BaseException:
return format_exc()
@ -661,6 +684,7 @@ class Database:
if not metadata.first_config_saved:
metadata.first_config_saved = True
metadata.config_changed = bool(to_put)
metadata.ui_config_changed = bool(to_put)
try:
session.add_all(to_put)

View File

@ -282,5 +282,6 @@ class Metadata(Base):
custom_configs_changed = Column(Boolean, default=False, nullable=True)
external_plugins_changed = Column(Boolean, default=False, nullable=True)
config_changed = Column(Boolean, default=False, nullable=True)
ui_config_changed = Column(Boolean, default=False, nullable=True)
integration = Column(INTEGRATIONS_ENUM, default="Unknown", nullable=False)
version = Column(String(32), default="1.5.0", nullable=False)

View File

@ -1,4 +1,4 @@
sqlalchemy==2.0.15
psycopg2-binary==2.9.6
PyMySQL==1.0.3
cryptography==40.0.2
cryptography==41.0.0

View File

@ -70,26 +70,26 @@ cffi==1.15.1 \
--hash=sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01 \
--hash=sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0
# via cryptography
cryptography==40.0.2 \
--hash=sha256:05dc219433b14046c476f6f09d7636b92a1c3e5808b9a6536adf4932b3b2c440 \
--hash=sha256:0dcca15d3a19a66e63662dc8d30f8036b07be851a8680eda92d079868f106288 \
--hash=sha256:142bae539ef28a1c76794cca7f49729e7c54423f615cfd9b0b1fa90ebe53244b \
--hash=sha256:3daf9b114213f8ba460b829a02896789751626a2a4e7a43a28ee77c04b5e4958 \
--hash=sha256:48f388d0d153350f378c7f7b41497a54ff1513c816bcbbcafe5b829e59b9ce5b \
--hash=sha256:4df2af28d7bedc84fe45bd49bc35d710aede676e2a4cb7fc6d103a2adc8afe4d \
--hash=sha256:4f01c9863da784558165f5d4d916093737a75203a5c5286fde60e503e4276c7a \
--hash=sha256:7a38250f433cd41df7fcb763caa3ee9362777fdb4dc642b9a349721d2bf47404 \
--hash=sha256:8f79b5ff5ad9d3218afb1e7e20ea74da5f76943ee5edb7f76e56ec5161ec782b \
--hash=sha256:956ba8701b4ffe91ba59665ed170a2ebbdc6fc0e40de5f6059195d9f2b33ca0e \
--hash=sha256:a04386fb7bc85fab9cd51b6308633a3c271e3d0d3eae917eebab2fac6219b6d2 \
--hash=sha256:a95f4802d49faa6a674242e25bfeea6fc2acd915b5e5e29ac90a32b1139cae1c \
--hash=sha256:adc0d980fd2760c9e5de537c28935cc32b9353baaf28e0814df417619c6c8c3b \
--hash=sha256:aecbb1592b0188e030cb01f82d12556cf72e218280f621deed7d806afd2113f9 \
--hash=sha256:b12794f01d4cacfbd3177b9042198f3af1c856eedd0a98f10f141385c809a14b \
--hash=sha256:c0764e72b36a3dc065c155e5b22f93df465da9c39af65516fe04ed3c68c92636 \
--hash=sha256:c33c0d32b8594fa647d2e01dbccc303478e16fdd7cf98652d5b3ed11aa5e5c99 \
--hash=sha256:cbaba590180cba88cb99a5f76f90808a624f18b169b90a4abb40c1fd8c19420e \
--hash=sha256:d5a1bd0e9e2031465761dfa920c16b0065ad77321d8a8c1f5ee331021fda65e9
cryptography==41.0.0 \
--hash=sha256:0ddaee209d1cf1f180f1efa338a68c4621154de0afaef92b89486f5f96047c55 \
--hash=sha256:14754bcdae909d66ff24b7b5f166d69340ccc6cb15731670435efd5719294895 \
--hash=sha256:344c6de9f8bda3c425b3a41b319522ba3208551b70c2ae00099c205f0d9fd3be \
--hash=sha256:34d405ea69a8b34566ba3dfb0521379b210ea5d560fafedf9f800a9a94a41928 \
--hash=sha256:3680248309d340fda9611498a5319b0193a8dbdb73586a1acf8109d06f25b92d \
--hash=sha256:3c5ef25d060c80d6d9f7f9892e1d41bb1c79b78ce74805b8cb4aa373cb7d5ec8 \
--hash=sha256:4ab14d567f7bbe7f1cdff1c53d5324ed4d3fc8bd17c481b395db224fb405c237 \
--hash=sha256:5c1f7293c31ebc72163a9a0df246f890d65f66b4a40d9ec80081969ba8c78cc9 \
--hash=sha256:6b71f64beeea341c9b4f963b48ee3b62d62d57ba93eb120e1196b31dc1025e78 \
--hash=sha256:7d92f0248d38faa411d17f4107fc0bce0c42cae0b0ba5415505df72d751bf62d \
--hash=sha256:8362565b3835ceacf4dc8f3b56471a2289cf51ac80946f9087e66dc283a810e0 \
--hash=sha256:84a165379cb9d411d58ed739e4af3396e544eac190805a54ba2e0322feb55c46 \
--hash=sha256:88ff107f211ea696455ea8d911389f6d2b276aabf3231bf72c8853d22db755c5 \
--hash=sha256:9f65e842cb02550fac96536edb1d17f24c0a338fd84eaf582be25926e993dde4 \
--hash=sha256:a4fc68d1c5b951cfb72dfd54702afdbbf0fb7acdc9b7dc4301bbf2225a27714d \
--hash=sha256:b7f2f5c525a642cecad24ee8670443ba27ac1fab81bba4cc24c7b6b41f2d0c75 \
--hash=sha256:b846d59a8d5a9ba87e2c3d757ca019fa576793e8758174d3868aecb88d6fc8eb \
--hash=sha256:bf8fc66012ca857d62f6a347007e166ed59c0bc150cefa49f28376ebe7d992a2 \
--hash=sha256:f5d0bf9b252f30a31664b6f64432b4730bb7038339bd18b1fafe129cfc2be9be
# via -r requirements.in
greenlet==2.0.2 \
--hash=sha256:03a8f4f3430c3b3ff8d10a2a86028c660355ab637cee9333d63d66b56f09d52a \
@ -268,7 +268,7 @@ sqlalchemy==2.0.15 \
--hash=sha256:f6fd3c88ea4b170d13527e93be1945e69facd917661d3725a63470eb683fbffe \
--hash=sha256:f7f994a53c0e6b44a2966fd6bfc53e37d34b7dca34e75b6be295de6db598255e
# via -r requirements.in
typing-extensions==4.6.0 \
--hash=sha256:6ad00b63f849b7dcc313b70b6b304ed67b2b2963b3098a33efe18056b1a9a223 \
--hash=sha256:ff6b238610c747e44c268aa4bb23c8c735d665a63726df3f9431ce707f2aa768
typing-extensions==4.6.2 \
--hash=sha256:06006244c70ac8ee83fa8282cb188f697b8db25bc8b4df07be1873c43897060c \
--hash=sha256:3a8b36f13dd5fdc5d1b16fe317f5668545de77fa0b8e02006381fd49d731ab98
# via sqlalchemy

View File

@ -298,7 +298,7 @@ class Configurator:
elif not self.__plugin_version_rx.match(plugin["version"]):
return (
False,
f"Invalid version for plugin {plugin['id']} (Must be in format \d+\.\d+(\.\d+)?)",
f"Invalid version for plugin {plugin['id']} (Must be in format \\d+\\.\\d+(\\.\\d+)?)",
)
elif plugin["stream"] not in ["yes", "no", "partial"]:
return (
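
Context for this fix: inside an ordinary (non-raw) f-string, `\d` is an invalid escape sequence, so Python emits a DeprecationWarning (SyntaxWarning on 3.12+) at compile time. Doubling the backslashes, as the hunk does, or using a raw f-string both yield a literal backslash:

```python
plugin_id = "myplugin"  # hypothetical plugin id

# Escaped form, as committed above -- message contains literal \d+\.\d+(\.\d+)?
escaped = f"Invalid version for plugin {plugin_id} (Must be in format \\d+\\.\\d+(\\.\\d+)?)"

# Equivalent raw f-string, often the more readable fix:
raw = rf"Invalid version for plugin {plugin_id} (Must be in format \d+\.\d+(\.\d+)?)"

assert escaped == raw
```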

View File

@ -188,7 +188,7 @@ if __name__ == "__main__":
and not args.no_linux_reload
):
retries = 0
while not Path(sep, "var", "tmp", "bunkerweb", "nginx.pid").exists():
while not Path(sep, "var", "run", "bunkerweb", "nginx.pid").exists():
if retries == 5:
logger.error(
"BunkerWeb's nginx didn't start in time.",

View File

@ -8,9 +8,9 @@ async-timeout==4.0.2 \
--hash=sha256:2163e1640ddb52b7a8c80d0a67a08587e5d245cc9c553a74a847056bc2976b15 \
--hash=sha256:8ca1e4fcf50d07413d66d1a5e416e42cfdf5851c981d679a09851a6853383b3c
# via redis
cachetools==5.3.0 \
--hash=sha256:13dfddc7b8df938c21a940dfa6557ce6e94a2f1cdfa58eb90c805721d58f2c14 \
--hash=sha256:429e1a1e845c008ea6c85aa35d4b98b65d6a9763eeef3e37e92728a12d1de9d4
cachetools==5.3.1 \
--hash=sha256:95ef631eeaea14ba2e36f06437f36463aac3a096799e876ee55e5cdccb102590 \
--hash=sha256:dce83f2d9b4e1f732a8cd44af8e8fab2dbe46201467fc98b3ef8f269092bf62b
# via google-auth
certifi==2023.5.7 \
--hash=sha256:0f0d56dc5a6ad56fd4ba36484d6cc34451e1c6548c61daad8c320169f91eddc7 \

View File

@ -371,10 +371,8 @@ if __name__ == "__main__":
if args.method != "ui":
if apis:
for api in apis:
endpoint_data = api.get_endpoint().replace("http://", "").split(":")
err = db.add_instance(
endpoint_data[0], endpoint_data[1], api.get_host()
)
endpoint_data = api.endpoint.replace("http://", "").split(":")
err = db.add_instance(endpoint_data[0], endpoint_data[1], api.host)
if err:
logger.warning(err)
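
The instance registration above derives (host, port) by stripping the scheme and splitting on `:`. A small sketch of that parsing next to `urllib.parse.urlsplit`, which does the same job without assuming an `http://` prefix (the endpoint value is hypothetical):

```python
from urllib.parse import urlsplit

endpoint = "http://bwinstance:5000"  # hypothetical instance endpoint

# String-based parsing, as in the hunk above:
endpoint_data = endpoint.replace("http://", "").split(":")
host, port = endpoint_data[0], endpoint_data[1]

# urlsplit extracts the same components regardless of scheme:
parts = urlsplit(endpoint)
assert (parts.hostname, str(parts.port)) == (host, port)
```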

View File

@ -1,6 +1,6 @@
#!/bin/bash
if [ ! -f /var/tmp/bunkerweb/scheduler.pid ] ; then
if [ ! -f /var/run/bunkerweb/scheduler.pid ] ; then
exit 1
fi

View File

@ -1,6 +1,6 @@
#!/bin/bash
if [ ! -f /var/tmp/bunkerweb/ui.pid ] ; then
if [ ! -f /var/run/bunkerweb/ui.pid ] ; then
exit 1
fi

View File

@ -1,6 +1,6 @@
#!/bin/bash
if [ ! -f /var/tmp/bunkerweb/nginx.pid ] ; then
if [ ! -f /var/run/bunkerweb/nginx.pid ] ; then
exit 1
fi

View File

@ -26,6 +26,14 @@ class ApiCaller:
self.__apis = apis or []
self.__logger = setup_logger("Api", getenv("LOG_LEVEL", "INFO"))
@property
def apis(self) -> List[API]:
return self.__apis
@apis.setter
def apis(self, apis: List[API]):
self.__apis = apis
def auto_setup(self, bw_integration: Optional[str] = None):
if bw_integration is None:
if getenv("KUBERNETES_MODE", "no") == "yes":
@ -105,13 +113,7 @@ class ApiCaller:
)
)
def _set_apis(self, apis: List[API]):
self.__apis = apis
def _get_apis(self):
return self.__apis
def _send_to_apis(
def send_to_apis(
self,
method: Union[Literal["POST"], Literal["GET"]],
url: str,
@ -129,23 +131,21 @@ class ApiCaller:
if not sent:
ret = False
self.__logger.error(
f"Can't send API request to {api.get_endpoint()}{url} : {err}",
f"Can't send API request to {api.endpoint}{url} : {err}",
)
else:
if status != 200:
ret = False
self.__logger.error(
f"Error while sending API request to {api.get_endpoint()}{url} : status = {resp['status']}, msg = {resp['msg']}",
f"Error while sending API request to {api.endpoint}{url} : status = {resp['status']}, msg = {resp['msg']}",
)
else:
self.__logger.info(
f"Successfully sent API request to {api.get_endpoint()}{url}",
f"Successfully sent API request to {api.endpoint}{url}",
)
if response:
instance = (
api.get_endpoint().replace("http://", "").split(":")[0]
)
instance = api.endpoint.replace("http://", "").split(":")[0]
if isinstance(resp, dict):
responses[instance] = resp
else:
@ -155,7 +155,7 @@ class ApiCaller:
return ret, responses
return ret
def _send_files(self, path: str, url: str) -> bool:
def send_files(self, path: str, url: str) -> bool:
ret = True
with BytesIO() as tgz:
with tar_open(
@ -164,6 +164,6 @@ class ApiCaller:
tf.add(path, arcname=".")
tgz.seek(0, 0)
files = {"archive.tar.gz": tgz}
if not self._send_to_apis("POST", url, files=files):
if not self.send_to_apis("POST", url, files=files):
ret = False
return ret
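
This hunk is a getter/setter-to-property refactor: `_get_apis()`/`_set_apis()` become an `apis` property, and the protected `_send_to_apis`/`_send_files` become part of the public surface. A minimal sketch of the property side of the pattern:

```python
from typing import List, Optional


class ApiCaller:
    def __init__(self, apis: Optional[List[str]] = None):
        self.__apis = apis or []

    @property
    def apis(self) -> List[str]:
        return self.__apis

    @apis.setter
    def apis(self, apis: List[str]):
        self.__apis = apis


caller = ApiCaller()
caller.apis = ["http://bwinstance:5000"]  # replaces caller._set_apis(...)
print(caller.apis)                        # replaces caller._get_apis()
```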

View File

@ -16,13 +16,17 @@ class ConfigCaller:
def __init__(self):
self.__logger = setup_logger("Config", "INFO")
self._settings = loads(
Path(sep, "usr", "share", "bunkerweb", "settings.json").read_text()
Path(sep, "usr", "share", "bunkerweb", "settings.json").read_text(
encoding="utf-8"
)
)
for plugin in glob(
join(sep, "usr", "share", "bunkerweb", "core", "*", "plugin.json")
) + glob(join(sep, "etc", "bunkerweb", "plugins", "*", "plugin.json")):
try:
self._settings.update(loads(Path(plugin).read_text())["settings"])
self._settings.update(
loads(Path(plugin).read_text(encoding="utf-8"))["settings"]
)
except KeyError:
self.__logger.error(
f'Error while loading plugin metadata file at {plugin} : missing "settings" key',
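
The recurring `encoding="utf-8"` additions in this changeset guard against `Path.read_text()` falling back to the platform's locale encoding (for example ASCII in a slim container with no locale configured), which can fail on UTF-8 plugin metadata. A small illustration with a hypothetical plugin.json:

```python
from json import loads
from pathlib import Path

settings_file = Path("plugin.json")  # hypothetical plugin metadata file
settings_file.write_text('{"settings": {"NAME": "café"}}', encoding="utf-8")

# Without encoding="utf-8", this read depends on the host locale and can
# raise UnicodeDecodeError on non-ASCII content.
data = loads(settings_file.read_text(encoding="utf-8"))
print(data["settings"])
```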

View File

@ -70,12 +70,12 @@ def is_cached_file(
return is_cached and cached_file
def get_file_in_db(file: Union[str, Path], db) -> bytes:
def get_file_in_db(file: Union[str, Path], db) -> Optional[bytes]:
cached_file = db.get_job_cache_file(
basename(getsourcefile(_getframe(1))).replace(".py", ""), normpath(file)
)
if not cached_file:
return False
return None
return cached_file.data
@ -142,7 +142,9 @@ def bytes_hash(bio: BufferedReader) -> str:
def cache_hash(cache: Union[str, Path], db=None) -> Optional[str]:
with suppress(BaseException):
return loads(Path(normpath(f"{cache}.md")).read_text()).get("checksum", None)
return loads(Path(normpath(f"{cache}.md")).read_text(encoding="utf-8")).get(
"checksum", None
)
if db:
cached_file = db.get_job_cache_file(
basename(getsourcefile(_getframe(1))).replace(".py", ""),
@ -192,7 +194,8 @@ def cache_file(
)
else:
Path(f"{cache}.md").write_text(
dumps(dict(date=datetime.now().timestamp(), checksum=_hash))
dumps(dict(date=datetime.now().timestamp(), checksum=_hash)),
encoding="utf-8",
)
except:
return False, f"exception :\n{format_exc()}"

View File

@ -18,21 +18,7 @@ from typing import Optional, Union
class BWLogger(Logger):
def __init__(self, name, level=INFO):
self.name = name
return super(BWLogger, self).__init__(name, level)
def _log(
self,
level,
msg,
args,
exc_info=None,
extra=None,
stack_info=False,
stacklevel=1,
):
return super(BWLogger, self)._log(
level, msg, args, exc_info, extra, stack_info, stacklevel
)
super(BWLogger, self).__init__(name, level)
setLoggerClass(BWLogger)
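
The removed `_log()` override only forwarded its arguments to `Logger._log()`, so dropping it changes nothing; likewise `__init__` should not `return` the result of `super().__init__()`. What remains is the standard `setLoggerClass` pattern:

```python
from logging import INFO, Logger, getLogger, setLoggerClass


class BWLogger(Logger):
    def __init__(self, name, level=INFO):
        self.name = name
        super().__init__(name, level)


# Every logger created via getLogger() from now on is a BWLogger instance,
# with no need to touch the _log() plumbing.
setLoggerClass(BWLogger)

logger = getLogger("bunkerweb")
print(type(logger).__name__)  # BWLogger
```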

View File

@ -317,6 +317,10 @@ fi
git_secure_clone "https://github.com/SpiderLabs/ModSecurity-nginx.git" "d59e4ad121df702751940fd66bcc0b3ecb51a079"
if [ "$dopatch" = "yes" ] ; then
do_and_check_cmd patch deps/src/ModSecurity-nginx/src/ngx_http_modsecurity_log.c deps/misc/modsecurity-nginx.patch
do_and_check_cmd patch deps/src/ModSecurity-nginx/config deps/misc/config.patch
do_and_check_cmd patch deps/src/ModSecurity-nginx/src/ngx_http_modsecurity_common.h deps/misc/ngx_http_modsecurity_common.h.patch
do_and_check_cmd patch deps/src/ModSecurity-nginx/src/ngx_http_modsecurity_module.c deps/misc/ngx_http_modsecurity_module.c.patch
do_and_check_cmd cp deps/misc/ngx_http_modsecurity_access.c deps/src/ModSecurity-nginx/src
fi
# libmaxminddb v1.7.1

View File

@ -169,7 +169,7 @@ if [ "$OS" = "fedora" ] ; then
CONFARGS="$(echo -n "$CONFARGS" | sed "s/--with-ld-opt='.*'/--with-ld-opt=-lpcre/" | sed "s/--with-cc-opt='.*'//")"
fi
echo '#!/bin/bash' > "/tmp/bunkerweb/deps/src/nginx-${NGINX_VERSION}/configure-fix.sh"
echo "./configure $CONFARGS --add-dynamic-module=/tmp/bunkerweb/deps/src/ModSecurity-nginx --add-dynamic-module=/tmp/bunkerweb/deps/src/headers-more-nginx-module --add-dynamic-module=/tmp/bunkerweb/deps/src/nginx_cookie_flag_module --add-dynamic-module=/tmp/bunkerweb/deps/src/lua-nginx-module --add-dynamic-module=/tmp/bunkerweb/deps/src/ngx_brotli --add-dynamic-module=/tmp/bunkerweb/deps/src/ngx_devel_kit --add-dynamic-module=/tmp/bunkerweb/deps/src/stream-lua-nginx-module" >> "/tmp/bunkerweb/deps/src/nginx-${NGINX_VERSION}/configure-fix.sh"
echo "./configure $CONFARGS --add-dynamic-module=/tmp/bunkerweb/deps/src/headers-more-nginx-module --add-dynamic-module=/tmp/bunkerweb/deps/src/nginx_cookie_flag_module --add-dynamic-module=/tmp/bunkerweb/deps/src/lua-nginx-module --add-dynamic-module=/tmp/bunkerweb/deps/src/ngx_brotli --add-dynamic-module=/tmp/bunkerweb/deps/src/ngx_devel_kit --add-dynamic-module=/tmp/bunkerweb/deps/src/stream-lua-nginx-module" --add-dynamic-module=/tmp/bunkerweb/deps/src/ModSecurity-nginx >> "/tmp/bunkerweb/deps/src/nginx-${NGINX_VERSION}/configure-fix.sh"
do_and_check_cmd chmod +x "/tmp/bunkerweb/deps/src/nginx-${NGINX_VERSION}/configure-fix.sh"
CHANGE_DIR="/tmp/bunkerweb/deps/src/nginx-${NGINX_VERSION}" LUAJIT_LIB="/usr/share/bunkerweb/deps/lib -Wl,-rpath,/usr/share/bunkerweb/deps/lib" LUAJIT_INC="/usr/share/bunkerweb/deps/include/luajit-2.1" MODSECURITY_LIB="/usr/share/bunkerweb/deps/lib" MODSECURITY_INC="/usr/share/bunkerweb/deps/include" do_and_check_cmd ./configure-fix.sh
CHANGE_DIR="/tmp/bunkerweb/deps/src/nginx-${NGINX_VERSION}" do_and_check_cmd make -j $NTASK modules

View File

@ -0,0 +1,18 @@
@@ -110,7 +110,7 @@
ngx_module_type=HTTP_FILTER
ngx_module_name="$ngx_addon_name"
ngx_module_srcs="$ngx_addon_dir/src/ngx_http_modsecurity_module.c \
- $ngx_addon_dir/src/ngx_http_modsecurity_pre_access.c \
+ $ngx_addon_dir/src/ngx_http_modsecurity_access.c \
$ngx_addon_dir/src/ngx_http_modsecurity_header_filter.c \
$ngx_addon_dir/src/ngx_http_modsecurity_body_filter.c \
$ngx_addon_dir/src/ngx_http_modsecurity_log.c \
@@ -141,7 +141,7 @@
NGX_ADDON_SRCS="\
$NGX_ADDON_SRCS \
$ngx_addon_dir/src/ngx_http_modsecurity_module.c \
- $ngx_addon_dir/src/ngx_http_modsecurity_pre_access.c \
+ $ngx_addon_dir/src/ngx_http_modsecurity_access.c \
$ngx_addon_dir/src/ngx_http_modsecurity_header_filter.c \
$ngx_addon_dir/src/ngx_http_modsecurity_body_filter.c \
$ngx_addon_dir/src/ngx_http_modsecurity_log.c \

View File

@ -0,0 +1,228 @@
/*
* ModSecurity connector for nginx, http://www.modsecurity.org/
* Copyright (c) 2015 Trustwave Holdings, Inc. (http://www.trustwave.com/)
*
* You may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* If any of the files related to licensing are missing or if you have any
* other questions related to licensing please contact Trustwave Holdings, Inc.
* directly using the email address security@modsecurity.org.
*
*/
#ifndef MODSECURITY_DDEBUG
#define MODSECURITY_DDEBUG 0
#endif
#include "ddebug.h"
#include "ngx_http_modsecurity_common.h"
void
ngx_http_modsecurity_request_read(ngx_http_request_t *r)
{
ngx_http_modsecurity_ctx_t *ctx;
ctx = ngx_http_get_module_ctx(r, ngx_http_modsecurity_module);
#if defined(nginx_version) && nginx_version >= 8011
r->main->count--;
#endif
if (ctx->waiting_more_body)
{
ctx->waiting_more_body = 0;
r->write_event_handler = ngx_http_core_run_phases;
ngx_http_core_run_phases(r);
}
}
ngx_int_t
ngx_http_modsecurity_access_handler(ngx_http_request_t *r)
{
#if 1
ngx_pool_t *old_pool;
ngx_http_modsecurity_ctx_t *ctx;
ngx_http_modsecurity_conf_t *mcf;
dd("catching a new _access_ phase handler");
mcf = ngx_http_get_module_loc_conf(r, ngx_http_modsecurity_module);
if (mcf == NULL || mcf->enable != 1)
{
dd("ModSecurity not enabled... returning");
return NGX_DECLINED;
}
/*
* FIXME:
* In order to perform some tests, let's accept everything.
*
if (r->method != NGX_HTTP_GET &&
r->method != NGX_HTTP_POST && r->method != NGX_HTTP_HEAD) {
dd("ModSecurity is not ready to deal with anything different from " \
"POST, GET or HEAD");
return NGX_DECLINED;
}
*/
ctx = ngx_http_get_module_ctx(r, ngx_http_modsecurity_module);
dd("recovering ctx: %p", ctx);
if (ctx == NULL)
{
dd("ctx is null; Nothing we can do, returning an error.");
return NGX_HTTP_INTERNAL_SERVER_ERROR;
}
if (ctx->intervention_triggered) {
return NGX_DECLINED;
}
if (ctx->waiting_more_body == 1)
{
dd("waiting for more data before proceed. / count: %d",
r->main->count);
return NGX_DONE;
}
if (ctx->body_requested == 0)
{
ngx_int_t rc = NGX_OK;
ctx->body_requested = 1;
dd("asking for the request body, if any. Count: %d",
r->main->count);
/**
* TODO: Check if there is any benefit to use request_body_in_single_buf set to 1.
*
* saw some module using this request_body_in_single_buf
* but not sure what exactly it does, same for the others options below.
*
* r->request_body_in_single_buf = 1;
*/
r->request_body_in_single_buf = 1;
r->request_body_in_persistent_file = 1;
if (!r->request_body_in_file_only) {
// If the above condition fails, then the flag below will have been
// set correctly elsewhere. We need to set the flag here for other
// conditions (client_body_in_file_only not used but
// client_body_buffer_size is)
r->request_body_in_clean_file = 1;
}
rc = ngx_http_read_client_request_body(r,
ngx_http_modsecurity_request_read);
if (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE) {
#if (nginx_version < 1002006) || \
(nginx_version >= 1003000 && nginx_version < 1003009)
r->main->count--;
#endif
return rc;
}
if (rc == NGX_AGAIN)
{
dd("nginx is asking us to wait for more data.");
ctx->waiting_more_body = 1;
return NGX_DONE;
}
}
if (ctx->waiting_more_body == 0)
{
int ret = 0;
int already_inspected = 0;
dd("request body is ready to be processed");
r->write_event_handler = ngx_http_core_run_phases;
ngx_chain_t *chain = r->request_body->bufs;
/**
* TODO: Speed up the analysis by sending chunk while they arrive.
*
* Notice that we are waiting for the full request body to
* start to process it, it may not be necessary. We may send
* the chunks to ModSecurity while nginx keep calling this
* function.
*/
if (r->request_body->temp_file != NULL) {
ngx_str_t file_path = r->request_body->temp_file->file.name;
const char *file_name = ngx_str_to_char(file_path, r->pool);
if (file_name == (char*)-1) {
return NGX_HTTP_INTERNAL_SERVER_ERROR;
}
/*
* Request body was saved to a file, probably we don't have a
* copy of it in memory.
*/
dd("request body inspection: file -- %s", file_name);
msc_request_body_from_file(ctx->modsec_transaction, file_name);
already_inspected = 1;
} else {
dd("inspection request body in memory.");
}
while (chain && !already_inspected)
{
u_char *data = chain->buf->pos;
msc_append_request_body(ctx->modsec_transaction, data,
chain->buf->last - data);
if (chain->buf->last_buf) {
break;
}
chain = chain->next;
/* XXX: chains are processed one-by-one, maybe worth to pass all chains and then call intervention() ? */
/**
* ModSecurity may perform stream inspection on this buffer,
* it may ask for a intervention in consequence of that.
*
*/
ret = ngx_http_modsecurity_process_intervention(ctx->modsec_transaction, r, 0);
if (ret > 0) {
return ret;
}
}
/**
* At this point, all the request body was sent to ModSecurity
* and we want to make sure that all the request body inspection
* happened; consequently we have to check if ModSecurity have
* returned any kind of intervention.
*/
/* XXX: once more -- is body can be modified ? content-length need to be adjusted ? */
old_pool = ngx_http_modsecurity_pcre_malloc_init(r->pool);
msc_process_request_body(ctx->modsec_transaction);
ngx_http_modsecurity_pcre_malloc_done(old_pool);
ret = ngx_http_modsecurity_process_intervention(ctx->modsec_transaction, r, 0);
if (r->error_page) {
return NGX_DECLINED;
}
if (ret > 0) {
return ret;
}
}
dd("Nothing to add on the body inspection, reclaiming a NGX_DECLINED");
#endif
return NGX_DECLINED;
}

View File

@ -0,0 +1,11 @@
@@ -163,8 +163,8 @@
void ngx_http_modsecurity_log(void *log, const void* data);
ngx_int_t ngx_http_modsecurity_log_handler(ngx_http_request_t *r);
-/* ngx_http_modsecurity_pre_access.c */
-ngx_int_t ngx_http_modsecurity_pre_access_handler(ngx_http_request_t *r);
+/* ngx_http_modsecurity_access.c */
+ngx_int_t ngx_http_modsecurity_access_handler(ngx_http_request_t *r);
/* ngx_http_modsecurity_rewrite.c */
ngx_int_t ngx_http_modsecurity_rewrite_handler(ngx_http_request_t *r);

View File

@ -0,0 +1,33 @@
@@ -526,7 +526,7 @@
ngx_http_modsecurity_init(ngx_conf_t *cf)
{
ngx_http_handler_pt *h_rewrite;
- ngx_http_handler_pt *h_preaccess;
+ ngx_http_handler_pt *h_access;
ngx_http_handler_pt *h_log;
ngx_http_core_main_conf_t *cmcf;
int rc = 0;
@@ -556,18 +556,18 @@
/**
*
- * Processing the request body on the preaccess phase.
+ * Processing the request body on the access phase.
*
* TODO: check if hook into separated phases is the best thing to do.
*
*/
- h_preaccess = ngx_array_push(&cmcf->phases[NGX_HTTP_PREACCESS_PHASE].handlers);
- if (h_preaccess == NULL)
+ h_access = ngx_array_push(&cmcf->phases[NGX_HTTP_ACCESS_PHASE].handlers);
+ if (h_access == NULL)
{
- dd("Not able to create a new NGX_HTTP_PREACCESS_PHASE handle");
+ dd("Not able to create a new NGX_HTTP_ACCESS_PHASE handle");
return NGX_ERROR;
}
- *h_preaccess = ngx_http_modsecurity_pre_access_handler;
+ *h_access = ngx_http_modsecurity_access_handler;
/**
* Process the log phase.

View File

@ -110,7 +110,7 @@ if test -n "$ngx_module_link"; then
ngx_module_type=HTTP_FILTER
ngx_module_name="$ngx_addon_name"
ngx_module_srcs="$ngx_addon_dir/src/ngx_http_modsecurity_module.c \
$ngx_addon_dir/src/ngx_http_modsecurity_pre_access.c \
$ngx_addon_dir/src/ngx_http_modsecurity_access.c \
$ngx_addon_dir/src/ngx_http_modsecurity_header_filter.c \
$ngx_addon_dir/src/ngx_http_modsecurity_body_filter.c \
$ngx_addon_dir/src/ngx_http_modsecurity_log.c \
@ -141,7 +141,7 @@ else
NGX_ADDON_SRCS="\
$NGX_ADDON_SRCS \
$ngx_addon_dir/src/ngx_http_modsecurity_module.c \
$ngx_addon_dir/src/ngx_http_modsecurity_pre_access.c \
$ngx_addon_dir/src/ngx_http_modsecurity_access.c \
$ngx_addon_dir/src/ngx_http_modsecurity_header_filter.c \
$ngx_addon_dir/src/ngx_http_modsecurity_body_filter.c \
$ngx_addon_dir/src/ngx_http_modsecurity_log.c \

View File

@ -0,0 +1,228 @@
[Byte-identical copy of the new 228-line ngx_http_modsecurity_access.c shown above, added at a second location in the source tree.]

View File

@ -163,8 +163,8 @@ int ngx_http_modsecurity_store_ctx_header(ngx_http_request_t *r, ngx_str_t *name
void ngx_http_modsecurity_log(void *log, const void* data);
ngx_int_t ngx_http_modsecurity_log_handler(ngx_http_request_t *r);
/* ngx_http_modsecurity_pre_access.c */
ngx_int_t ngx_http_modsecurity_pre_access_handler(ngx_http_request_t *r);
/* ngx_http_modsecurity_access.c */
ngx_int_t ngx_http_modsecurity_access_handler(ngx_http_request_t *r);
/* ngx_http_modsecurity_rewrite.c */
ngx_int_t ngx_http_modsecurity_rewrite_handler(ngx_http_request_t *r);

View File

@ -526,7 +526,7 @@ static ngx_int_t
ngx_http_modsecurity_init(ngx_conf_t *cf)
{
ngx_http_handler_pt *h_rewrite;
ngx_http_handler_pt *h_preaccess;
ngx_http_handler_pt *h_access;
ngx_http_handler_pt *h_log;
ngx_http_core_main_conf_t *cmcf;
int rc = 0;
@ -556,18 +556,18 @@ ngx_http_modsecurity_init(ngx_conf_t *cf)
/**
*
* Processing the request body on the preaccess phase.
* Processing the request body on the access phase.
*
* TODO: check if hook into separated phases is the best thing to do.
*
*/
h_preaccess = ngx_array_push(&cmcf->phases[NGX_HTTP_PREACCESS_PHASE].handlers);
if (h_preaccess == NULL)
h_access = ngx_array_push(&cmcf->phases[NGX_HTTP_ACCESS_PHASE].handlers);
if (h_access == NULL)
{
dd("Not able to create a new NGX_HTTP_PREACCESS_PHASE handle");
dd("Not able to create a new NGX_HTTP_ACCESS_PHASE handle");
return NGX_ERROR;
}
*h_preaccess = ngx_http_modsecurity_pre_access_handler;
*h_access = ngx_http_modsecurity_access_handler;
/**
* Process the log phase.

View File

@ -63,6 +63,7 @@ RUN cp /usr/share/bunkerweb/helpers/bwcli /usr/bin/ && \
mkdir -p /var/cache/bunkerweb/ && \
mkdir -p /etc/bunkerweb/plugins && \
mkdir -p /var/tmp/bunkerweb/ && \
mkdir -p /var/run/bunkerweb/ && \
mkdir -p /var/www/html && \
mkdir -p /var/lib/bunkerweb && \
#mkdir /var/www/html && \
@ -71,7 +72,7 @@ RUN cp /usr/share/bunkerweb/helpers/bwcli /usr/bin/ && \
find /usr/share/bunkerweb -path /usr/share/bunkerweb/ui/deps -prune -o -type f -exec chmod 0740 {} \; && \
#It's a find command that will find all files in the bunkerweb directory, excluding the ui/deps directory, and then chmod them to 0740.
find /usr/share/bunkerweb -path /usr/share/bunkerweb/ui/deps -prune -o -type d -exec chmod 0750 {} \; && \
chmod 770 /var/cache/bunkerweb/ /var/tmp/bunkerweb/ && \
chmod 770 /var/cache/bunkerweb/ /var/tmp/bunkerweb/ /var/run/bunkerweb/ && \
chmod 750 /usr/share/bunkerweb/gen/main.py /usr/share/bunkerweb/scheduler/main.py /usr/share/bunkerweb/cli/main.py /usr/share/bunkerweb/helpers/*.sh /usr/share/bunkerweb/ui/main.py /var/www && \
# Don't forget to add /var/www/html on the above line
find /usr/share/bunkerweb/core/*/jobs/* -type f -exec chmod 750 {} \; && \

View File

@ -20,8 +20,7 @@ RUN mkdir -p /usr/share/bunkerweb/deps && \
rm -rf /tmp/req
# Nginx
RUN apt update && \
apt-get install gnupg2 ca-certificates wget -y && \
RUN apt-get install gnupg2 ca-certificates wget -y && \
echo "deb https://nginx.org/packages/debian/ bullseye nginx" > /etc/apt/sources.list.d/nginx.list && \
echo "deb-src https://nginx.org/packages/debian/ bullseye nginx" >> /etc/apt/sources.list.d/nginx.list && \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys ABF5BD827BD9BF62 && \
@ -29,8 +28,7 @@ RUN apt update && \
apt-get install -y --no-install-recommends nginx=${NGINX_VERSION}-1~bullseye
# Compile and install dependencies
RUN apt update && \
apt install --no-install-recommends bash python3-pip libssl-dev git libpcre++-dev zlib1g-dev libxml2-dev libyajl-dev pkgconf libcurl4-openssl-dev libgeoip-dev liblmdb-dev apt-utils bash build-essential autoconf libtool automake g++ gcc libxml2-dev make musl-dev gnupg patch libreadline-dev libpcre3-dev libgd-dev -y && \
RUN apt install --no-install-recommends bash python3-pip libssl-dev git libpcre++-dev zlib1g-dev libxml2-dev libyajl-dev pkgconf libcurl4-openssl-dev libgeoip-dev liblmdb-dev apt-utils bash build-essential autoconf libtool automake g++ gcc libxml2-dev make musl-dev gnupg patch libreadline-dev libpcre3-dev libgd-dev -y && \
pip install --no-cache-dir --upgrade pip && \
pip install wheel && \
#mkdir -p /usr/share/bunkerweb/deps && \
@ -68,6 +66,7 @@ RUN cp /usr/share/bunkerweb/helpers/bwcli /usr/bin/ && \
mkdir -p /var/cache/bunkerweb/ && \
mkdir -p /etc/bunkerweb/plugins && \
mkdir -p /var/tmp/bunkerweb/ && \
mkdir -p /var/run/bunkerweb/ && \
mkdir -p /var/www/ && \
mkdir -p /var/lib/bunkerweb && \
mkdir /var/www/html && \
@ -76,7 +75,7 @@ RUN cp /usr/share/bunkerweb/helpers/bwcli /usr/bin/ && \
find /usr/share/bunkerweb -path /usr/share/bunkerweb/ui/deps -prune -o -type f -exec chmod 0740 {} \; && \
#It's a find command that will find all files in the bunkerweb directory, excluding the ui/deps directory, and then chmod them to 0740.
find /usr/share/bunkerweb -path /usr/share/bunkerweb/ui/deps -prune -o -type d -exec chmod 0750 {} \; && \
chmod 770 /var/cache/bunkerweb/ /var/tmp/bunkerweb/ && \
chmod 770 /var/cache/bunkerweb/ /var/tmp/bunkerweb/ /var/run/bunkerweb/ && \
chmod 750 /usr/share/bunkerweb/gen/main.py /usr/share/bunkerweb/scheduler/main.py /usr/share/bunkerweb/cli/main.py /usr/share/bunkerweb/helpers/*.sh /usr/share/bunkerweb/ui/main.py /var/www/ && \
# Don't forget to add /var/www/html on the above line
find /usr/share/bunkerweb/core/*/jobs/* -type f -exec chmod 750 {} \; && \

View File

@ -4,12 +4,12 @@ ENV OS=fedora
ENV NGINX_VERSION 1.24.0
# Install fpm
RUN dnf install -y ruby ruby-devel make gcc redhat-rpm-config rpm-build && \
RUN dnf update -y && \
dnf install -y ruby ruby-devel make gcc redhat-rpm-config rpm-build && \
gem install fpm
# Nginx
RUN dnf update -y && \
dnf install -y curl gnupg2 ca-certificates redhat-lsb-core && \
RUN dnf install -y curl gnupg2 ca-certificates redhat-lsb-core && \
dnf install nginx-${NGINX_VERSION} -y
# Copy dependencies sources folder
@ -62,12 +62,13 @@ RUN cp /usr/share/bunkerweb/helpers/bwcli /usr/bin/ && \
mkdir -p /var/cache/bunkerweb/ && \
mkdir -p /etc/bunkerweb/plugins && \
mkdir -p /var/tmp/bunkerweb/ && \
mkdir -p /var/run/bunkerweb/ && \
mkdir -p /var/www/html && \
mkdir -p /var/lib/bunkerweb && \
echo "Linux" > /usr/share/bunkerweb/INTEGRATION && \
find /usr/share/bunkerweb -path /usr/share/bunkerweb/ui/deps -prune -o -type f -exec chmod 0740 {} \; && \
find /usr/share/bunkerweb -path /usr/share/bunkerweb/ui/deps -prune -o -type d -exec chmod 0750 {} \; && \
chmod 770 /var/cache/bunkerweb/ /var/tmp/bunkerweb/ && \
chmod 770 /var/cache/bunkerweb/ /var/tmp/bunkerweb/ /var/run/bunkerweb/ && \
chmod 750 /usr/share/bunkerweb/gen/main.py /usr/share/bunkerweb/scheduler/main.py /usr/share/bunkerweb/cli/main.py /usr/share/bunkerweb/helpers/*.sh /usr/share/bunkerweb/ui/main.py /var/www/ && \
find /usr/share/bunkerweb/core/*/jobs/* -type f -exec chmod 750 {} \; && \
chmod 755 /usr/share/bunkerweb

View File

@ -76,12 +76,13 @@ RUN cp /usr/share/bunkerweb/helpers/bwcli /usr/bin/ && \
mkdir -p /var/cache/bunkerweb/ && \
mkdir -p /etc/bunkerweb/plugins && \
mkdir -p /var/tmp/bunkerweb/ && \
mkdir -p /var/run/bunkerweb/ && \
mkdir -p /var/www/html && \
mkdir -p /var/lib/bunkerweb && \
echo "Linux" > /usr/share/bunkerweb/INTEGRATION && \
find /usr/share/bunkerweb -path /usr/share/bunkerweb/ui/deps -prune -o -type f -exec chmod 0740 {} \; && \
find /usr/share/bunkerweb -path /usr/share/bunkerweb/ui/deps -prune -o -type d -exec chmod 0750 {} \; && \
chmod 770 /var/cache/bunkerweb/ /var/tmp/bunkerweb/ && \
chmod 770 /var/cache/bunkerweb/ /var/tmp/bunkerweb/ /var/run/bunkerweb/ && \
chmod 750 /usr/share/bunkerweb/gen/main.py /usr/share/bunkerweb/scheduler/main.py /usr/share/bunkerweb/cli/main.py /usr/share/bunkerweb/helpers/*.sh /usr/share/bunkerweb/ui/main.py /var/www/ && \
find /usr/share/bunkerweb/core/*/jobs/* -type f -exec chmod 750 {} \; && \
chmod 755 /usr/share/bunkerweb

View File

@ -29,8 +29,7 @@ RUN apt update && \
apt-get install -y --no-install-recommends nginx=${NGINX_VERSION}-1~jammy
# Compile and install dependencies
RUN apt update && \
apt install --no-install-recommends bash python3-pip libssl-dev git libpcre++-dev zlib1g-dev libxml2-dev libyajl-dev pkgconf libcurl4-openssl-dev libgeoip-dev liblmdb-dev apt-utils bash build-essential autoconf libtool automake g++ gcc libxml2-dev make musl-dev gnupg patch libreadline-dev libpcre3-dev libgd-dev -y && \
RUN apt install --no-install-recommends bash python3-pip libssl-dev git libpcre++-dev zlib1g-dev libxml2-dev libyajl-dev pkgconf libcurl4-openssl-dev libgeoip-dev liblmdb-dev apt-utils bash build-essential autoconf libtool automake g++ gcc libxml2-dev make musl-dev gnupg patch libreadline-dev libpcre3-dev libgd-dev -y && \
pip install --no-cache-dir --upgrade pip && \
pip install wheel && \
#mkdir -p /usr/share/bunkerweb/deps && \
@ -65,12 +64,13 @@ RUN cp /usr/share/bunkerweb/helpers/bwcli /usr/bin/ && \
mkdir -p /var/cache/bunkerweb/ && \
mkdir -p /etc/bunkerweb/plugins && \
mkdir -p /var/tmp/bunkerweb/ && \
mkdir -p /var/run/bunkerweb/ && \
mkdir -p /var/www/html && \
mkdir -p /var/lib/bunkerweb && \
echo "Linux" > /usr/share/bunkerweb/INTEGRATION && \
find /usr/share/bunkerweb -path /usr/share/bunkerweb/ui/deps -prune -o -type f -exec chmod 0740 {} \; && \
find /usr/share/bunkerweb -path /usr/share/bunkerweb/ui/deps -prune -o -type d -exec chmod 0750 {} \; && \
chmod 770 /var/cache/bunkerweb/ /var/tmp/bunkerweb/ && \
chmod 770 /var/cache/bunkerweb/ /var/tmp/bunkerweb/ /var/run/bunkerweb/ && \
chmod 750 /usr/share/bunkerweb/gen/main.py /usr/share/bunkerweb/scheduler/main.py /usr/share/bunkerweb/cli/main.py /usr/share/bunkerweb/helpers/*.sh /usr/share/bunkerweb/ui/main.py /var/www/ && \
find /usr/share/bunkerweb/core/*/jobs/* -type f -exec chmod 750 {} \; && \
chmod 755 /usr/share/bunkerweb

View File

@ -6,7 +6,7 @@ After=bunkerweb.service
[Service]
Restart=no
User=nginx
PIDFile=/var/tmp/bunkerweb/ui.pid
PIDFile=/var/run/bunkerweb/ui.pid
ExecStart=/usr/share/bunkerweb/scripts/bunkerweb-ui.sh start
ExecStop=/usr/share/bunkerweb/scripts/bunkerweb-ui.sh stop
ExecReload=/usr/share/bunkerweb/scripts/bunkerweb-ui.sh reload

View File

@ -6,7 +6,7 @@ After=network.target
[Service]
Restart=no
User=root
PIDFile=/var/tmp/bunkerweb/scheduler.pid
PIDFile=/var/run/bunkerweb/scheduler.pid
ExecStart=/usr/share/bunkerweb/scripts/start.sh start
ExecStop=/usr/share/bunkerweb/scripts/start.sh stop
ExecReload=/usr/share/bunkerweb/scripts/start.sh reload

View File

@ -10,4 +10,4 @@
--before-install /usr/share/bunkerweb/scripts/beforeInstall.sh
--after-install /usr/share/bunkerweb/scripts/postinstall.sh
--after-remove /usr/share/bunkerweb/scripts/afterRemoveRPM.sh
/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /var/lib/bunkerweb=/var/lib/bunkerweb
/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /var/lib/bunkerweb=/var/lib/bunkerweb

View File

@ -10,4 +10,4 @@
--before-install /usr/share/bunkerweb/scripts/beforeInstall.sh
--after-install /usr/share/bunkerweb/scripts/postinstall.sh
--after-remove /usr/share/bunkerweb/scripts/afterRemoveDEB.sh
/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /var/lib/bunkerweb=/var/lib/bunkerweb
/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /var/lib/bunkerweb=/var/lib/bunkerweb

View File

@ -10,4 +10,4 @@
--before-install /usr/share/bunkerweb/scripts/beforeInstall.sh
--after-install /usr/share/bunkerweb/scripts/postinstall.sh
--after-remove /usr/share/bunkerweb/scripts/afterRemoveRPM.sh
/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /var/lib/bunkerweb=/var/lib/bunkerweb
/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /var/lib/bunkerweb=/var/lib/bunkerweb

View File

@ -10,4 +10,4 @@
--before-install /usr/share/bunkerweb/scripts/beforeInstall.sh
--after-install /usr/share/bunkerweb/scripts/postinstall.sh
--after-remove /usr/share/bunkerweb/scripts/afterRemoveRPM.sh
/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /var/lib/bunkerweb=/var/lib/bunkerweb
/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /var/lib/bunkerweb=/var/lib/bunkerweb

View File

@ -11,4 +11,4 @@
--after-install /usr/share/bunkerweb/scripts/postinstall.sh
--after-remove /usr/share/bunkerweb/scripts/afterRemoveDEB.sh
--deb-no-default-config-files
/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /var/lib/bunkerweb=/var/lib/bunkerweb
/usr/share/bunkerweb/=/usr/share/bunkerweb/ /usr/bin/bwcli=/usr/bin/bwcli /etc/bunkerweb/=/etc/bunkerweb /var/tmp/bunkerweb/=/var/tmp/bunkerweb /var/run/bunkerweb/=/var/run/bunkerweb /var/cache/bunkerweb/=/var/cache/bunkerweb /lib/systemd/system/bunkerweb.service=/lib/systemd/system/bunkerweb.service /lib/systemd/system/bunkerweb-ui.service=/lib/systemd/system/bunkerweb-ui.service /var/lib/bunkerweb=/var/lib/bunkerweb

View File

@ -54,12 +54,18 @@ function remove {
do_and_check_cmd rm -rf /usr/share/bunkerweb
fi
# Remove /etc/bunkerweb
# Remove /var/tmp/bunkerweb
if test -e "/var/tmp/bunkerweb"; then
echo " Remove /var/tmp/bunkerweb"
do_and_check_cmd rm -rf /var/tmp/bunkerweb
fi
# Remove /var/run/bunkerweb
if test -e "/var/run/bunkerweb"; then
echo " Remove /var/run/bunkerweb"
do_and_check_cmd rm -rf /var/run/bunkerweb
fi
# Remove /var/lib/bunkerweb
if test -e "/var/cache/bunkerweb"; then
echo " Remove /var/cache/bunkerweb"

View File

@ -54,12 +54,18 @@ function remove {
do_and_check_cmd rm -rf /usr/share/bunkerweb
fi
# Remove /etc/bunkerweb
# Remove /var/tmp/bunkerweb
if test -e "/var/tmp/bunkerweb"; then
echo " Remove /var/tmp/bunkerweb"
do_and_check_cmd rm -rf /var/tmp/bunkerweb
fi
# Remove /var/run/bunkerweb
if test -e "/var/run/bunkerweb"; then
echo " Remove /var/run/bunkerweb"
do_and_check_cmd rm -rf /var/run/bunkerweb
fi
# Remove /var/lib/bunkerweb
if test -e "/var/cache/bunkerweb"; then
echo " Remove /var/cache/bunkerweb"

View File

@ -13,20 +13,17 @@ fi
# Function to start the UI
start() {
echo "Starting UI"
if [ ! -f /var/tmp/bunkerweb/ui.pid ]; then
touch /var/tmp/bunkerweb/ui.pid
fi
source /etc/bunkerweb/ui.env
export $(cat /etc/bunkerweb/ui.env)
python3 -m gunicorn main:app --worker-class gevent --bind 127.0.0.1:7000 --graceful-timeout 0 --access-logfile - --error-logfile - &
echo $! > /var/tmp/bunkerweb/ui.pid
python3 -m gunicorn --config /usr/share/bunkerweb/ui/gunicorn.conf.py --user nginx --group nginx --bind 127.0.0.1:7000 &
echo $! > /var/run/bunkerweb/ui.pid
}
# Function to stop the UI
stop() {
echo "Stopping UI service..."
if [ -f "/var/tmp/bunkerweb/ui.pid" ]; then
pid=$(cat /var/tmp/bunkerweb/ui.pid)
if [ -f "/var/run/bunkerweb/ui.pid" ]; then
pid=$(cat /var/run/bunkerweb/ui.pid)
kill -s TERM $pid
else
echo "UI service is not running or the pid file doesn't exist."

View File

@ -23,7 +23,7 @@ function do_and_check_cmd() {
# Give all the permissions to the nginx user
echo "Setting ownership for all necessary directories to nginx user and group..."
do_and_check_cmd chown -R nginx:nginx /usr/share/bunkerweb /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb
do_and_check_cmd chown -R nginx:nginx /usr/share/bunkerweb /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb
# Stop and disable nginx on boot
echo "Stop and disable nginx on boot..."

View File

@ -45,8 +45,8 @@ function stop_nginx() {
}
function stop_scheduler() {
if [ -f "/var/tmp/bunkerweb/scheduler.pid" ] ; then
scheduler_pid=$(cat "/var/tmp/bunkerweb/scheduler.pid")
if [ -f "/var/run/bunkerweb/scheduler.pid" ] ; then
scheduler_pid=$(cat "/var/run/bunkerweb/scheduler.pid")
log "SYSTEMCTL" " " "Stopping scheduler..."
kill -SIGINT "$scheduler_pid"
if [ $? -ne 0 ] ; then
@ -58,7 +58,7 @@ function stop_scheduler() {
return 0
fi
count=0
while [ -f "/var/tmp/bunkerweb/scheduler.pid" ] ; do
while [ -f "/var/run/bunkerweb/scheduler.pid" ] ; do
sleep 1
count=$(($count + 1))
if [ $count -ge 10 ] ; then
@ -85,7 +85,7 @@ function start() {
# Create dummy variables.env
if [ ! -f /etc/bunkerweb/variables.env ]; then
sudo -E -u nginx -g nginx /bin/bash -c "echo -ne '# remove IS_LOADING=yes when your config is ready\nIS_LOADING=yes\nHTTP_PORT=80\nHTTPS_PORT=443\nAPI_LISTEN_IP=127.0.0.1\nSERVER_NAME=\n' > /etc/bunkerweb/variables.env"
sudo -E -u nginx -g nginx /bin/bash -c "echo -ne '# remove IS_LOADING=yes when your config is ready\nIS_LOADING=yes\nUSE_BUNKERNET=no\nHTTP_PORT=80\nHTTPS_PORT=443\nAPI_LISTEN_IP=127.0.0.1\nSERVER_NAME=\n' > /etc/bunkerweb/variables.env"
log "SYSTEMCTL" "" "Created dummy variables.env file"
fi
@ -104,7 +104,7 @@ function start() {
if [ "$HTTPS_PORT" = "" ] ; then
HTTPS_PORT="8443"
fi
sudo -E -u nginx -g nginx /bin/bash -c "echo -ne 'IS_LOADING=yes\nHTTP_PORT=${HTTP_PORT}\nHTTPS_PORT=${HTTPS_PORT}\nAPI_LISTEN_IP=127.0.0.1\nSERVER_NAME=\n' > /var/tmp/bunkerweb/tmp.env"
sudo -E -u nginx -g nginx /bin/bash -c "echo -ne 'IS_LOADING=yes\nUSE_BUNKERNET=no\nHTTP_PORT=${HTTP_PORT}\nHTTPS_PORT=${HTTPS_PORT}\nAPI_LISTEN_IP=127.0.0.1\nSERVER_NAME=\n' > /var/tmp/bunkerweb/tmp.env"
sudo -E -u nginx -g nginx /bin/bash -c "PYTHONPATH=/usr/share/bunkerweb/deps/python/ /usr/share/bunkerweb/gen/main.py --variables /var/tmp/bunkerweb/tmp.env --no-linux-reload"
if [ $? -ne 0 ] ; then
log "SYSTEMCTL" "❌" "Error while generating config from /var/tmp/bunkerweb/tmp.env"
@ -134,19 +134,6 @@ function start() {
fi
log "SYSTEMCTL" "" "nginx started ..."
# Update database
log "SYSTEMCTL" "" "Updating database ..."
if [ ! -f /var/lib/bunkerweb/db.sqlite3 ]; then
sudo -E -u nginx -g nginx /bin/bash -c "PYTHONPATH=/usr/share/bunkerweb/deps/python/ /usr/share/bunkerweb/gen/save_config.py --variables /etc/bunkerweb/variables.env --init"
else
sudo -E -u nginx -g nginx /bin/bash -c "PYTHONPATH=/usr/share/bunkerweb/deps/python/ /usr/share/bunkerweb/gen/save_config.py --variables /etc/bunkerweb/variables.env"
fi
if [ $? -ne 0 ] ; then
log "SYSTEMCTL" "❌" "save_config failed"
exit 1
fi
log "SYSTEMCTL" "" "Database updated ..."
# Execute scheduler
log "SYSTEMCTL" " " "Executing scheduler ..."
sudo -E -u nginx -g nginx /bin/bash -c "PYTHONPATH=/usr/share/bunkerweb/deps/python/ /usr/share/bunkerweb/scheduler/main.py --variables /etc/bunkerweb/variables.env"
@ -171,7 +158,7 @@ function reload()
log "SYSTEMCTL" "" "Reloading BunkerWeb service ..."
PID_FILE_PATH="/var/tmp/bunkerweb/scheduler.pid"
PID_FILE_PATH="/var/run/bunkerweb/scheduler.pid"
if [ -f "$PID_FILE_PATH" ];
then
var=$(cat "$PID_FILE_PATH")

View File

@ -9,6 +9,9 @@ RUN mkdir -p /usr/share/bunkerweb/deps && \
cat /tmp/req/requirements.txt /tmp/req/requirements.txt.1 /tmp/req/requirements.txt.2 > /usr/share/bunkerweb/deps/requirements.txt && \
rm -rf /tmp/req
# Update apk
RUN apk update
# Install python dependencies
RUN apk add --no-cache --virtual .build-deps g++ gcc musl-dev jpeg-dev zlib-dev libffi-dev cairo-dev pango-dev gdk-pixbuf-dev openssl-dev cargo postgresql-dev
@ -52,6 +55,7 @@ RUN apk add --no-cache bash libgcc libstdc++ openssl && \
cp /usr/share/bunkerweb/helpers/bwcli /usr/bin/ && \
echo "Docker" > /usr/share/bunkerweb/INTEGRATION && \
mkdir -p /var/tmp/bunkerweb && \
mkdir -p /var/run/bunkerweb && \
mkdir -p /var/www && \
mkdir -p /etc/bunkerweb && \
mkdir -p /data/cache && ln -s /data/cache /var/cache/bunkerweb && \
@ -61,8 +65,8 @@ RUN apk add --no-cache bash libgcc libstdc++ openssl && \
for dir in $(echo "configs/http configs/stream configs/server-http configs/server-stream configs/default-server-http configs/default-server-stream configs/modsec configs/modsec-crs") ; do mkdir "/data/${dir}" ; done && \
chown -R root:scheduler /data && \
chmod -R 770 /data && \
chown -R root:scheduler /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb /usr/bin/bwcli && \
chmod -R 770 /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb && \
chown -R root:scheduler /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /usr/bin/bwcli && \
chmod -R 770 /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb && \
find /usr/share/bunkerweb/core/*/jobs/* -type f -exec chmod 750 {} \; && \
chmod 750 /usr/share/bunkerweb/cli/main.py /usr/share/bunkerweb/gen/*.py /usr/share/bunkerweb/scheduler/main.py /usr/share/bunkerweb/scheduler/entrypoint.sh /usr/share/bunkerweb/helpers/*.sh /usr/share/bunkerweb/deps/python/bin/* /usr/bin/bwcli && \
mkdir -p /etc/nginx && \
@ -77,7 +81,7 @@ COPY --chown=root:scheduler src/bw/misc/country.mmdb /var/tmp/bunkerweb/country.
RUN chmod 770 /var/tmp/bunkerweb/asn.mmdb /var/tmp/bunkerweb/country.mmdb
# Fix CVEs
# There are no CVEs for python:3.11.3-alpine at the moment
RUN apk add --no-cache "libcrypto3>=3.1.1-r0" "libssl3>=3.1.1-r0"
VOLUME /data /etc/nginx

View File

@ -36,10 +36,11 @@ class JobScheduler(ApiCaller):
def __init__(
self,
env: Optional[Dict[str, Any]] = None,
lock: Optional[Lock] = None,
apis: Optional[list] = None,
logger: Optional[Logger] = None,
integration: str = "Linux",
*,
lock: Optional[Lock] = None,
apis: Optional[list] = None,
):
super().__init__(apis or [])
self.__logger = logger or setup_logger("Scheduler", getenv("LOG_LEVEL", "INFO"))
@ -53,6 +54,20 @@ class JobScheduler(ApiCaller):
self.__job_success = True
self.__semaphore = Semaphore(cpu_count() or 1)
@property
def env(self) -> Dict[str, Any]:
return self.__env
@env.setter
def env(self, env: Dict[str, Any]):
self.__env = env
def set_integration(self, integration: str):
self.__integration = integration
def auto_setup(self):
super().auto_setup(bw_integration=self.__integration)
def __get_jobs(self):
jobs = {}
for plugin_file in glob(
@ -63,7 +78,7 @@ class JobScheduler(ApiCaller):
plugin_name = basename(dirname(plugin_file))
jobs[plugin_name] = []
try:
plugin_data = loads(Path(plugin_file).read_text())
plugin_data = loads(Path(plugin_file).read_text(encoding="utf-8"))
if not "jobs" in plugin_data:
continue
@ -130,7 +145,7 @@ class JobScheduler(ApiCaller):
return schedule_every().day
elif every == "week":
return schedule_every().week
raise Exception(f"can't convert string {every} to schedule")
raise ValueError(f"can't convert string {every} to schedule")
def __reload(self) -> bool:
reload = True
@ -141,6 +156,7 @@ class JobScheduler(ApiCaller):
stdin=DEVNULL,
stderr=PIPE,
env=self.__env,
check=False,
)
reload = proc.returncode == 0
if reload:
@ -151,7 +167,7 @@ class JobScheduler(ApiCaller):
)
else:
self.__logger.info("Reloading nginx ...")
reload = self._send_to_apis("POST", "/reload")
reload = self.send_to_apis("POST", "/reload")
if reload:
self.__logger.info("Successfully reloaded nginx")
else:
@ -166,7 +182,11 @@ class JobScheduler(ApiCaller):
ret = -1
try:
proc = run(
join(path, "jobs", file), stdin=DEVNULL, stderr=STDOUT, env=self.__env
join(path, "jobs", file),
stdin=DEVNULL,
stderr=STDOUT,
env=self.__env,
check=False,
)
ret = proc.returncode
except BaseException:
@ -235,10 +255,10 @@ class JobScheduler(ApiCaller):
if reload:
try:
if self._get_apis():
if self.apis:
cache_path = join(sep, "var", "cache", "bunkerweb")
self.__logger.info(f"Sending {cache_path} folder ...")
if not self._send_files(cache_path, "/cache"):
if not self.send_files(cache_path, "/cache"):
success = False
self.__logger.error(f"Error while sending {cache_path} folder")
else:
@ -283,7 +303,7 @@ class JobScheduler(ApiCaller):
return ret
def __run_in_thread(self, jobs: list):
self.__semaphore.acquire()
self.__semaphore.acquire(timeout=60)
for job in jobs:
job()
self.__semaphore.release()
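
The bare `*` in the new `JobScheduler.__init__` signature makes `lock` and `apis` keyword-only, so a positional caller can no longer bind them to the wrong slot by accident. A standalone sketch mirroring the signature:

```python
from threading import Lock
from typing import Optional


def make_scheduler(
    env: Optional[dict] = None,
    logger=None,
    integration: str = "Linux",
    *,  # everything after this must be passed by keyword
    lock: Optional[Lock] = None,
    apis: Optional[list] = None,
):
    return (env, logger, integration, lock, apis)


make_scheduler({}, None, "Docker", apis=[])     # OK
# make_scheduler({}, None, "Docker", None, [])  # TypeError: too many positional
```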

View File

@ -5,15 +5,15 @@
# trap SIGTERM and SIGINT
function trap_exit() {
log "ENTRYPOINT" " " "Catched stop operation"
if [ -f "/var/tmp/bunkerweb/scheduler.pid" ] ; then
if [ -f "/var/run/bunkerweb/scheduler.pid" ] ; then
log "ENTRYPOINT" " " "Stopping job scheduler ..."
kill -s TERM "$(cat /var/tmp/bunkerweb/scheduler.pid)"
kill -s TERM "$(cat /var/run/bunkerweb/scheduler.pid)"
fi
}
trap "trap_exit" TERM INT QUIT
if [ -f /var/tmp/bunkerweb/scheduler.pid ] ; then
rm -f /var/tmp/bunkerweb/scheduler.pid
if [ -f /var/run/bunkerweb/scheduler.pid ] ; then
rm -f /var/run/bunkerweb/scheduler.pid
fi
log "ENTRYPOINT" "" "Starting the job scheduler v$(cat /usr/share/bunkerweb/VERSION) ..."
@ -44,7 +44,7 @@ log "ENTRYPOINT" " " "Executing scheduler ..."
/usr/share/bunkerweb/scheduler/main.py &
pid="$!"
wait "$pid"
while [ -f /var/tmp/bunkerweb/scheduler.pid ] ; do
while [ -f /var/run/bunkerweb/scheduler.pid ] ; do
wait "$pid"
done

View File

@ -25,7 +25,7 @@ from sys import path as sys_path
from tarfile import open as tar_open
from time import sleep
from traceback import format_exc
from typing import Any, Dict, List
from typing import Any, Dict, List, Optional, Union
for deps_path in [
join(sep, "usr", "share", "bunkerweb", *paths)
@ -41,17 +41,17 @@ from Database import Database # type: ignore
from JobScheduler import JobScheduler
from ApiCaller import ApiCaller # type: ignore
run = True
scheduler = None
reloading = False
RUN = True
SCHEDULER: Optional[JobScheduler] = None
GENERATE = False
INTEGRATION = "Linux"
CACHE_PATH = join(sep, "var", "cache", "bunkerweb")
logger = setup_logger("Scheduler", getenv("LOG_LEVEL", "INFO"))
def handle_stop(signum, frame):
global run, scheduler
run = False
if scheduler is not None:
scheduler.clear()
if SCHEDULER is not None:
SCHEDULER.clear()
stop(0)
@ -61,13 +61,11 @@ signal(SIGTERM, handle_stop)
# Function to catch SIGHUP and reload the scheduler
def handle_reload(signum, frame):
global reloading, run, scheduler
reloading = True
try:
if scheduler is not None and run:
if SCHEDULER is not None and RUN:
# Get the env by reading the .env file
env = dotenv_values(join(sep, "etc", "bunkerweb", "variables.env"))
if scheduler.reload(env):
tmp_env = dotenv_values(join(sep, "etc", "bunkerweb", "variables.env"))
if SCHEDULER.reload(tmp_env):
logger.info("Reload successful")
else:
logger.error("Reload failed")
@ -85,73 +83,96 @@ signal(SIGHUP, handle_reload)
def stop(status):
Path(sep, "var", "tmp", "bunkerweb", "scheduler.pid").unlink(missing_ok=True)
Path(sep, "var", "run", "bunkerweb", "scheduler.pid").unlink(missing_ok=True)
Path(sep, "var", "tmp", "bunkerweb", "scheduler.healthy").unlink(missing_ok=True)
_exit(status)
def generate_custom_configs(
custom_configs: List[Dict[str, Any]],
integration: str,
api_caller: ApiCaller,
configs: List[Dict[str, Any]],
*,
original_path: str = join(sep, "etc", "bunkerweb", "configs"),
original_path: Union[Path, str] = join(sep, "etc", "bunkerweb", "configs"),
):
logger.info("Generating new custom configs ...")
Path(original_path).mkdir(parents=True, exist_ok=True)
for custom_config in custom_configs:
tmp_path = join(original_path, custom_config["type"].replace("_", "-"))
if custom_config["service_id"]:
tmp_path = join(tmp_path, custom_config["service_id"])
tmp_path = Path(tmp_path, f"{custom_config['name']}.conf")
tmp_path.parent.mkdir(parents=True, exist_ok=True)
tmp_path.write_bytes(custom_config["data"])
if not isinstance(original_path, Path):
original_path = Path(original_path)
if integration in ("Autoconf", "Swarm", "Kubernetes", "Docker"):
logger.info("Sending custom configs to BunkerWeb")
ret = api_caller._send_files(original_path, "/custom_configs")
# Remove old custom configs files
logger.info("Removing old custom configs files ...")
for file in glob(str(original_path.joinpath("*", "*"))):
file = Path(file)
if file.is_symlink() or file.is_file():
file.unlink()
elif file.is_dir():
rmtree(str(file), ignore_errors=True)
if not ret:
logger.error(
"Sending custom configs failed, configuration will not work as expected...",
if configs:
logger.info("Generating new custom configs ...")
original_path.mkdir(parents=True, exist_ok=True)
for custom_config in configs:
tmp_path = original_path.joinpath(
custom_config["type"].replace("_", "-"),
custom_config["service_id"] or "",
f"{custom_config['name']}.conf",
)
tmp_path.parent.mkdir(parents=True, exist_ok=True)
tmp_path.write_bytes(custom_config["data"])
if SCHEDULER.apis:
logger.info("Sending custom configs to BunkerWeb")
ret = SCHEDULER.send_files(original_path, "/custom_configs")
if not ret:
logger.error(
"Sending custom configs failed, configuration will not work as expected...",
)
def generate_external_plugins(
plugins: List[Dict[str, Any]],
integration: str,
api_caller: ApiCaller,
*,
original_path: str = join(sep, "etc", "bunkerweb", "plugins"),
original_path: Union[Path, str] = join(sep, "etc", "bunkerweb", "plugins"),
):
logger.info("Generating new external plugins ...")
Path(original_path).mkdir(parents=True, exist_ok=True)
for plugin in plugins:
tmp_path = Path(original_path, plugin["id"], f"{plugin['name']}.tar.gz")
tmp_path.parent.mkdir(parents=True, exist_ok=True)
tmp_path.write_bytes(plugin["data"])
with tar_open(str(tmp_path), "r:gz") as tar:
tar.extractall(original_path)
tmp_path.unlink()
if not isinstance(original_path, Path):
original_path = Path(original_path)
for job_file in glob(join(str(tmp_path.parent), "jobs", "*")):
st = Path(job_file).stat()
chmod(job_file, st.st_mode | S_IEXEC)
# Remove old external plugins files
logger.info("Removing old external plugins files ...")
for file in glob(str(original_path.joinpath("*"))):
file = Path(file)
if file.is_symlink() or file.is_file():
file.unlink()
elif file.is_dir():
rmtree(str(file), ignore_errors=True)
if integration in ("Autoconf", "Swarm", "Kubernetes", "Docker"):
logger.info("Sending plugins to BunkerWeb")
ret = api_caller._send_files(original_path, "/plugins")
if plugins:
logger.info("Generating new external plugins ...")
original_path.mkdir(parents=True, exist_ok=True)
for plugin in plugins:
tmp_path = original_path.joinpath(plugin["id"], f"{plugin['name']}.tar.gz")
tmp_path.parent.mkdir(parents=True, exist_ok=True)
tmp_path.write_bytes(plugin["data"])
with tar_open(str(tmp_path), "r:gz") as tar:
tar.extractall(original_path)
tmp_path.unlink()
if not ret:
logger.error(
"Sending plugins failed, configuration will not work as expected...",
)
for job_file in glob(join(str(tmp_path.parent), "jobs", "*")):
st = Path(job_file).stat()
chmod(job_file, st.st_mode | S_IEXEC)
if SCHEDULER.apis:
logger.info("Sending plugins to BunkerWeb")
ret = SCHEDULER.send_files(original_path, "/plugins")
if not ret:
logger.error(
"Sending plugins failed, configuration will not work as expected...",
)
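# Illustration (not part of the diff): plugin job scripts are made executable
# by OR-ing the owner-execute bit into the existing mode, leaving every other
# permission bit untouched. Self-contained sketch with a hypothetical path:
from os import chmod
from pathlib import Path
from stat import S_IEXEC

demo_job = Path("/tmp/demo-plugin/jobs/fetch-data.py")
demo_job.parent.mkdir(parents=True, exist_ok=True)
demo_job.write_text("#!/usr/bin/env python3\nprint('job ran')\n", encoding="utf-8")
st = demo_job.stat()
chmod(demo_job, st.st_mode | S_IEXEC)  # add u+x, keep the rest of the mode
print(oct(demo_job.stat().st_mode & 0o777))  # e.g. 0o744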
if __name__ == "__main__":
try:
# Don't execute if pid file exists
pid_path = Path(sep, "var", "tmp", "bunkerweb", "scheduler.pid")
pid_path = Path(sep, "var", "run", "bunkerweb", "scheduler.pid")
if pid_path.is_file():
logger.error(
"Scheduler is already running, skipping execution ...",
@ -159,7 +180,7 @@ if __name__ == "__main__":
_exit(1)
# Write pid to file
pid_path.write_text(str(getpid()))
pid_path.write_text(str(getpid()), encoding="utf-8")
del pid_path
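# Illustration (not part of the diff): the guard above is the classic
# pid-file pattern, now rooted in /var/run/bunkerweb instead of
# /var/tmp/bunkerweb. Minimal sketch, assuming a writable location:
from os import getpid
from pathlib import Path
from sys import exit as sys_exit

demo_pid = Path("/tmp/demo-scheduler.pid")
if demo_pid.is_file():
    sys_exit("scheduler is already running, skipping execution")
demo_pid.write_text(str(getpid()), encoding="utf-8")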
@ -171,114 +192,113 @@ if __name__ == "__main__":
help="path to the file containing environment variables",
)
args = parser.parse_args()
generate = False
integration = "Linux"
api_caller = ApiCaller()
db_configs = None
tmp_variables_path = Path(
normpath(args.variables) if args.variables else sep,
"var",
"tmp",
"bunkerweb",
"variables.env",
integration_path = Path(sep, "usr", "share", "bunkerweb", "INTEGRATION")
os_release_path = Path(sep, "etc", "os-release")
if getenv("KUBERNETES_MODE", "no").lower() == "yes":
INTEGRATION = "Kubernetes"
elif getenv("SWARM_MODE", "no").lower() == "yes":
INTEGRATION = "Swarm"
elif getenv("AUTOCONF_MODE", "no").lower() == "yes":
INTEGRATION = "Autoconf"
elif integration_path.is_file():
INTEGRATION = integration_path.read_text(encoding="utf-8").strip()
elif os_release_path.is_file() and "Alpine" in os_release_path.read_text(
encoding="utf-8"
):
INTEGRATION = "Docker"
del integration_path, os_release_path
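# Illustration (not part of the diff): the detection above is strictly
# ordered. Explicit *_MODE variables win over the INTEGRATION marker file,
# which wins over the Alpine /etc/os-release heuristic; anything else stays
# "Linux". Restated as a hypothetical helper:
from os import getenv
from pathlib import Path

def detect_integration() -> str:
    if getenv("KUBERNETES_MODE", "no").lower() == "yes":
        return "Kubernetes"
    if getenv("SWARM_MODE", "no").lower() == "yes":
        return "Swarm"
    if getenv("AUTOCONF_MODE", "no").lower() == "yes":
        return "Autoconf"
    marker = Path("/usr/share/bunkerweb/INTEGRATION")
    if marker.is_file():
        return marker.read_text(encoding="utf-8").strip()
    os_release = Path("/etc/os-release")
    if os_release.is_file() and "Alpine" in os_release.read_text(encoding="utf-8"):
        return "Docker"
    return "Linux"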
tmp_variables_path = (
normpath(args.variables)
if args.variables
else join(sep, "var", "tmp", "bunkerweb", "variables.env")
)
tmp_variables_path = Path(tmp_variables_path)
nginx_variables_path = Path(sep, "etc", "nginx", "variables.env")
dotenv_env = dotenv_values(str(tmp_variables_path))
db = Database(
logger,
sqlalchemy_string=dotenv_env.get(
"DATABASE_URI", getenv("DATABASE_URI", None)
),
)
if INTEGRATION in (
"Swarm",
"Kubernetes",
"Autoconf",
):
while not db.is_autoconf_loaded():
logger.warning(
"Autoconf is not loaded yet in the database, retrying in 5s ...",
)
sleep(5)
elif (
not tmp_variables_path.exists()
or not nginx_variables_path.exists()
or (
tmp_variables_path.read_text(encoding="utf-8")
!= nginx_variables_path.read_text(encoding="utf-8")
)
or db.is_initialized()
and db.get_config() != dotenv_env
):
# run the config saver
proc = subprocess_run(
[
"python3",
join(sep, "usr", "share", "bunkerweb", "gen", "save_config.py"),
"--settings",
join(sep, "usr", "share", "bunkerweb", "settings.json"),
]
+ (["--variables", str(tmp_variables_path)] if args.variables else []),
stdin=DEVNULL,
stderr=STDOUT,
check=False,
)
if proc.returncode != 0:
logger.error(
"Config saver failed, configuration will not work as expected...",
)
while not db.is_initialized():
logger.warning(
"Database is not initialized, retrying in 5s ...",
)
sleep(5)
env = db.get_config()
while not db.is_first_config_saved() or not env:
logger.warning(
"Database doesn't have any config saved yet, retrying in 5s ...",
)
sleep(5)
env = db.get_config()
env["DATABASE_URI"] = db.database_uri
# Instantiate scheduler
SCHEDULER = JobScheduler(env.copy() | environ.copy(), logger, INTEGRATION)
if INTEGRATION in ("Swarm", "Kubernetes", "Autoconf", "Docker"):
# Automatically setup the scheduler apis
SCHEDULER.auto_setup()
logger.info("Scheduler started ...")
# Check if the "variables" argument was provided.
if args.variables:
logger.info(f"Variables : {tmp_variables_path}")
# Read env file
env = dotenv_values(str(tmp_variables_path))
db = Database(
logger,
sqlalchemy_string=env.get("DATABASE_URI", getenv("DATABASE_URI", None)),
)
while not db.is_initialized():
logger.warning(
"Database is not initialized, retrying in 5s ...",
)
sleep(5)
db_configs = db.get_custom_configs()
else:
# Read from database
integration = "Docker"
integration_path = Path(sep, "usr", "share", "bunkerweb", "INTEGRATION")
if integration_path.is_file():
integration = integration_path.read_text().strip()
del integration_path
api_caller.auto_setup(bw_integration=integration)
db = Database(
logger,
sqlalchemy_string=getenv("DATABASE_URI", None),
)
if db.is_initialized():
db_configs = db.get_custom_configs()
if integration in (
"Swarm",
"Kubernetes",
"Autoconf",
):
while not db.is_autoconf_loaded():
logger.warning(
"Autoconf is not loaded yet in the database, retrying in 5s ...",
)
sleep(5)
elif not tmp_variables_path.is_file() or db.get_config() != dotenv_values(
str(tmp_variables_path)
):
# run the config saver
proc = subprocess_run(
[
"python",
join(sep, "usr", "share", "bunkerweb", "gen", "save_config.py"),
"--settings",
join(sep, "usr", "share", "bunkerweb", "settings.json"),
],
stdin=DEVNULL,
stderr=STDOUT,
)
if proc.returncode != 0:
logger.error(
"Config saver failed, configuration will not work as expected...",
)
while not db.is_initialized():
logger.warning(
"Database is not initialized, retrying in 5s ...",
)
sleep(5)
if not db_configs:
db_configs = db.get_custom_configs()
env = db.get_config()
while not db.is_first_config_saved() or not env:
logger.warning(
"Database doesn't have any config saved yet, retrying in 5s ...",
)
sleep(5)
env = db.get_config()
env["DATABASE_URI"] = db.get_database_uri()
# Checking if any custom config has been created by the user
custom_configs = []
configs_path = join(sep, "etc", "bunkerweb", "configs")
root_dirs = listdir(configs_path)
for root, dirs, files in walk(configs_path):
db_configs = db.get_custom_configs()
configs_path = Path(sep, "etc", "bunkerweb", "configs")
root_dirs = listdir(str(configs_path))
for root, dirs, files in walk(str(configs_path)):
if files or (dirs and basename(root) not in root_dirs):
path_exploded = root.split("/")
for file in files:
with open(join(root, file), "r") as f:
with open(join(root, file), "r", encoding="utf-8") as f:
custom_conf = {
"value": f.read(),
"exploded": (
@ -309,26 +329,13 @@ if __name__ == "__main__":
f"Couldn't save some manually created custom configs to database: {err}",
)
# Remove old custom configs files
logger.info("Removing old custom configs files ...")
for file in glob(join(configs_path, "*", "*")):
file = Path(file)
if file.is_symlink() or file.is_file():
file.unlink()
elif file.is_dir():
rmtree(str(file), ignore_errors=True)
db_configs = db.get_custom_configs()
if db_configs:
logger.info("Generating new custom configs ...")
generate_custom_configs(db_configs, integration, api_caller)
generate_custom_configs(db.get_custom_configs(), original_path=configs_path)
# Check if any external plugin has been added by the user
external_plugins = []
plugins_dir = join(sep, "etc", "bunkerweb", "plugins")
for filename in glob(join(plugins_dir, "*", "plugin.json")):
with open(filename, "r") as f:
plugins_dir = Path(sep, "etc", "bunkerweb", "plugins")
for filename in glob(str(plugins_dir.joinpath("*", "plugin.json"))):
with open(filename, "r", encoding="utf-8") as f:
_dir = dirname(filename)
plugin_content = BytesIO()
with tar_open(
@ -356,60 +363,91 @@ if __name__ == "__main__":
f"Couldn't save some manually added plugins to database: {err}",
)
external_plugins = db.get_plugins(external=True)
if external_plugins:
# Remove old external plugins files
logger.info("Removing old external plugins files ...")
for file in glob(join(plugins_dir, "*")):
file = Path(file)
if file.is_symlink() or file.is_file():
file.unlink()
elif file.is_dir():
rmtree(str(file), ignore_errors=True)
generate_external_plugins(
db.get_plugins(external=True, with_data=True),
integration,
api_caller,
original_path=plugins_dir,
)
generate_external_plugins(
db.get_plugins(external=True, with_data=True),
original_path=plugins_dir,
)
logger.info("Executing scheduler ...")
generate = not tmp_variables_path.exists() or env != dotenv_values(
str(tmp_variables_path)
GENERATE = (
env != dotenv_env
or not tmp_variables_path.exists()
or not nginx_variables_path.exists()
or (
tmp_variables_path.read_text(encoding="utf-8")
!= nginx_variables_path.read_text(encoding="utf-8")
)
)
if not generate:
del dotenv_env
if not GENERATE:
logger.warning(
"Looks like BunkerWeb configuration is already generated, will not generate it again ..."
)
first_run = True
FIRST_RUN = True
while True:
ret = db.checked_changes()
if ret:
logger.error(
f"An error occurred when setting the changes to checked in the database : {changes}"
f"An error occurred when setting the changes to checked in the database : {ret}"
)
stop(1)
# Instantiate scheduler
scheduler = JobScheduler(
env=env.copy() | environ.copy(),
apis=api_caller._get_apis(),
logger=logger,
integration=integration,
)
# Update the environment variables of the scheduler
SCHEDULER.env = env.copy() | environ.copy()
# Only run jobs once
if not scheduler.run_once():
if not SCHEDULER.run_once():
logger.error("At least one job in run_once() failed")
else:
logger.info("All jobs in run_once() were successful")
if generate:
changes = db.check_changes()
if isinstance(changes, str):
logger.error(
f"An error occurred when checking for changes in the database : {changes}"
)
stop(1)
# check if the plugins have changed since last time
if changes["external_plugins_changed"]:
logger.info("External plugins changed, generating ...")
generate_external_plugins(
db.get_plugins(external=True, with_data=True),
original_path=plugins_dir,
)
# run the config saver to save potential plugins settings
proc = subprocess_run(
[
"python",
join(sep, "usr", "share", "bunkerweb", "gen", "save_config.py"),
"--settings",
join(sep, "usr", "share", "bunkerweb", "settings.json"),
],
stdin=DEVNULL,
stderr=STDOUT,
check=False,
)
if proc.returncode != 0:
logger.error(
"Config saver failed, configuration will not work as expected...",
)
ret = db.checked_changes()
if ret:
logger.error(
f"An error occurred when setting the changes to checked in the database : {ret}"
)
stop(1)
if GENERATE:
# run the generator
proc = subprocess_run(
[
@ -424,11 +462,12 @@ if __name__ == "__main__":
]
+ (
["--variables", str(tmp_variables_path)]
if args.variables and first_run
if args.variables and FIRST_RUN
else []
),
stdin=DEVNULL,
stderr=STDOUT,
check=False,
)
if proc.returncode != 0:
@ -436,34 +475,31 @@ if __name__ == "__main__":
"Config generator failed, configuration will not work as expected...",
)
else:
copy(
join(sep, "etc", "nginx", "variables.env"),
str(tmp_variables_path),
)
copy(str(nginx_variables_path), str(tmp_variables_path))
if api_caller._get_apis():
if SCHEDULER.apis:
# send nginx configs
logger.info(f"Sending {join(sep, 'etc', 'nginx')} folder ...")
ret = api_caller._send_files(
join(sep, "etc", "nginx"), "/confs"
)
ret = SCHEDULER.send_files(join(sep, "etc", "nginx"), "/confs")
if not ret:
logger.error(
"Sending nginx configs failed, configuration will not work as expected...",
)
try:
if api_caller._get_apis():
cache_path = join(sep, "var", "cache", "bunkerweb")
if SCHEDULER.apis:
# send cache
logger.info(f"Sending {cache_path} folder ...")
if not api_caller._send_files(cache_path, "/cache"):
logger.error(f"Error while sending {cache_path} folder")
logger.info(f"Sending {CACHE_PATH} folder ...")
if not SCHEDULER.send_files(CACHE_PATH, "/cache"):
logger.error(f"Error while sending {CACHE_PATH} folder")
else:
logger.info(f"Successfully sent {cache_path} folder")
logger.info(f"Successfully sent {CACHE_PATH} folder")
# restart nginx
if integration not in ("Autoconf", "Swarm", "Kubernetes", "Docker"):
if SCHEDULER.send_to_apis("POST", "/reload"):
logger.info("Successfully reloaded nginx")
else:
logger.error("Error while reloading nginx")
else:
# Stop temp nginx
logger.info("Stopping temp nginx ...")
proc = subprocess_run(
@ -471,13 +507,14 @@ if __name__ == "__main__":
stdin=DEVNULL,
stderr=STDOUT,
env=env.copy(),
check=False,
)
if proc.returncode == 0:
logger.info("Successfully sent stop signal to temp nginx")
i = 0
while i < 20:
if not Path(
sep, "var", "tmp", "bunkerweb", "nginx.pid"
sep, "var", "run", "bunkerweb", "nginx.pid"
).is_file():
break
logger.warning("Waiting for temp nginx to stop ...")
@ -495,6 +532,7 @@ if __name__ == "__main__":
stdin=DEVNULL,
stderr=STDOUT,
env=env.copy(),
check=False,
)
if proc.returncode == 0:
logger.info("Successfully started nginx")
@ -506,28 +544,25 @@ if __name__ == "__main__":
logger.error(
f"Error while sending stop signal to temp nginx - returncode: {proc.returncode} - error: {proc.stderr.decode('utf-8') if proc.stderr else 'Missing stderr'}",
)
else:
if api_caller._send_to_apis("POST", "/reload"):
logger.info("Successfully reloaded nginx")
else:
logger.error("Error while reloading nginx")
except:
logger.error(
f"Exception while reloading after running jobs once scheduling : {format_exc()}",
)
generate = True
scheduler.setup()
need_reload = False
configs_need_generation = False
plugins_need_generation = False
first_run = False
GENERATE = True
SCHEDULER.setup()
NEED_RELOAD = False
CONFIGS_NEED_GENERATION = False
PLUGINS_NEED_GENERATION = False
FIRST_RUN = False
# infinite schedule for the jobs
logger.info("Executing job scheduler ...")
Path(sep, "var", "tmp", "bunkerweb", "scheduler.healthy").write_text("ok")
while run and not need_reload:
scheduler.run_pending()
Path(sep, "var", "tmp", "bunkerweb", "scheduler.healthy").write_text(
"ok", encoding="utf-8"
)
while RUN and not NEED_RELOAD:
SCHEDULER.run_pending()
sleep(1)
changes = db.check_changes()
@ -541,58 +576,29 @@ if __name__ == "__main__":
# check if the custom configs have changed since last time
if changes["custom_configs_changed"]:
logger.info("Custom configs changed, generating ...")
configs_need_generation = True
need_reload = True
CONFIGS_NEED_GENERATION = True
NEED_RELOAD = True
# check if the plugins have changed since last time
if changes["external_plugins_changed"]:
logger.info("External plugins changed, generating ...")
plugins_need_generation = True
need_reload = True
PLUGINS_NEED_GENERATION = True
NEED_RELOAD = True
# check if the config have changed since last time
if changes["config_changed"]:
logger.info("Config changed, generating ...")
need_reload = True
if need_reload:
if configs_need_generation:
db_configs = db.get_custom_configs()
# Remove old custom configs files
logger.info("Removing old custom configs files ...")
for file in glob(join(configs_path, "*", "*")):
file = Path(file)
if file.is_symlink() or file.is_file():
file.unlink()
elif file.is_dir():
rmtree(str(file), ignore_errors=True)
NEED_RELOAD = True
if NEED_RELOAD:
if CONFIGS_NEED_GENERATION:
generate_custom_configs(
db_configs,
integration,
api_caller,
original_path=configs_path,
db.get_custom_configs(), original_path=configs_path
)
if plugins_need_generation:
external_plugins: List[Dict[str, Any]] = db.get_plugins(
external=True, with_data=True
)
# Remove old external plugins files
logger.info("Removing old external plugins files ...")
for file in glob(join(plugins_dir, "*")):
file = Path(file)
if file.is_symlink() or file.is_file():
file.unlink()
elif file.is_dir():
rmtree(str(file), ignore_errors=True)
if PLUGINS_NEED_GENERATION:
generate_external_plugins(
external_plugins,
integration,
api_caller,
db.get_plugins(external=True, with_data=True),
original_path=plugins_dir,
)


@ -167,26 +167,26 @@ configobj==5.0.8 \
--hash=sha256:6f704434a07dc4f4dc7c9a745172c1cad449feb548febd9f7fe362629c627a97 \
--hash=sha256:a7a8c6ab7daade85c3f329931a807c8aee750a2494363934f8ea84d8a54c87ea
# via certbot
cryptography==40.0.2 \
--hash=sha256:05dc219433b14046c476f6f09d7636b92a1c3e5808b9a6536adf4932b3b2c440 \
--hash=sha256:0dcca15d3a19a66e63662dc8d30f8036b07be851a8680eda92d079868f106288 \
--hash=sha256:142bae539ef28a1c76794cca7f49729e7c54423f615cfd9b0b1fa90ebe53244b \
--hash=sha256:3daf9b114213f8ba460b829a02896789751626a2a4e7a43a28ee77c04b5e4958 \
--hash=sha256:48f388d0d153350f378c7f7b41497a54ff1513c816bcbbcafe5b829e59b9ce5b \
--hash=sha256:4df2af28d7bedc84fe45bd49bc35d710aede676e2a4cb7fc6d103a2adc8afe4d \
--hash=sha256:4f01c9863da784558165f5d4d916093737a75203a5c5286fde60e503e4276c7a \
--hash=sha256:7a38250f433cd41df7fcb763caa3ee9362777fdb4dc642b9a349721d2bf47404 \
--hash=sha256:8f79b5ff5ad9d3218afb1e7e20ea74da5f76943ee5edb7f76e56ec5161ec782b \
--hash=sha256:956ba8701b4ffe91ba59665ed170a2ebbdc6fc0e40de5f6059195d9f2b33ca0e \
--hash=sha256:a04386fb7bc85fab9cd51b6308633a3c271e3d0d3eae917eebab2fac6219b6d2 \
--hash=sha256:a95f4802d49faa6a674242e25bfeea6fc2acd915b5e5e29ac90a32b1139cae1c \
--hash=sha256:adc0d980fd2760c9e5de537c28935cc32b9353baaf28e0814df417619c6c8c3b \
--hash=sha256:aecbb1592b0188e030cb01f82d12556cf72e218280f621deed7d806afd2113f9 \
--hash=sha256:b12794f01d4cacfbd3177b9042198f3af1c856eedd0a98f10f141385c809a14b \
--hash=sha256:c0764e72b36a3dc065c155e5b22f93df465da9c39af65516fe04ed3c68c92636 \
--hash=sha256:c33c0d32b8594fa647d2e01dbccc303478e16fdd7cf98652d5b3ed11aa5e5c99 \
--hash=sha256:cbaba590180cba88cb99a5f76f90808a624f18b169b90a4abb40c1fd8c19420e \
--hash=sha256:d5a1bd0e9e2031465761dfa920c16b0065ad77321d8a8c1f5ee331021fda65e9
cryptography==41.0.0 \
--hash=sha256:0ddaee209d1cf1f180f1efa338a68c4621154de0afaef92b89486f5f96047c55 \
--hash=sha256:14754bcdae909d66ff24b7b5f166d69340ccc6cb15731670435efd5719294895 \
--hash=sha256:344c6de9f8bda3c425b3a41b319522ba3208551b70c2ae00099c205f0d9fd3be \
--hash=sha256:34d405ea69a8b34566ba3dfb0521379b210ea5d560fafedf9f800a9a94a41928 \
--hash=sha256:3680248309d340fda9611498a5319b0193a8dbdb73586a1acf8109d06f25b92d \
--hash=sha256:3c5ef25d060c80d6d9f7f9892e1d41bb1c79b78ce74805b8cb4aa373cb7d5ec8 \
--hash=sha256:4ab14d567f7bbe7f1cdff1c53d5324ed4d3fc8bd17c481b395db224fb405c237 \
--hash=sha256:5c1f7293c31ebc72163a9a0df246f890d65f66b4a40d9ec80081969ba8c78cc9 \
--hash=sha256:6b71f64beeea341c9b4f963b48ee3b62d62d57ba93eb120e1196b31dc1025e78 \
--hash=sha256:7d92f0248d38faa411d17f4107fc0bce0c42cae0b0ba5415505df72d751bf62d \
--hash=sha256:8362565b3835ceacf4dc8f3b56471a2289cf51ac80946f9087e66dc283a810e0 \
--hash=sha256:84a165379cb9d411d58ed739e4af3396e544eac190805a54ba2e0322feb55c46 \
--hash=sha256:88ff107f211ea696455ea8d911389f6d2b276aabf3231bf72c8853d22db755c5 \
--hash=sha256:9f65e842cb02550fac96536edb1d17f24c0a338fd84eaf582be25926e993dde4 \
--hash=sha256:a4fc68d1c5b951cfb72dfd54702afdbbf0fb7acdc9b7dc4301bbf2225a27714d \
--hash=sha256:b7f2f5c525a642cecad24ee8670443ba27ac1fab81bba4cc24c7b6b41f2d0c75 \
--hash=sha256:b846d59a8d5a9ba87e2c3d757ca019fa576793e8758174d3868aecb88d6fc8eb \
--hash=sha256:bf8fc66012ca857d62f6a347007e166ed59c0bc150cefa49f28376ebe7d992a2 \
--hash=sha256:f5d0bf9b252f30a31664b6f64432b4730bb7038339bd18b1fafe129cfc2be9be
# via
# acme
# certbot
@ -217,9 +217,9 @@ pycparser==2.21 \
--hash=sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9 \
--hash=sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206
# via cffi
pyopenssl==23.1.1 \
--hash=sha256:841498b9bec61623b1b6c47ebbc02367c07d60e0e195f19790817f10cc8db0b7 \
--hash=sha256:9e0c526404a210df9d2b18cd33364beadb0dc858a739b885677bc65e105d4a4c
pyopenssl==23.2.0 \
--hash=sha256:24f0dc5227396b3e831f4c7f602b950a5e9833d292c8e4a2e06b709292806ae2 \
--hash=sha256:276f931f55a452e7dea69c7173e984eb2a4407ce413c918aa34b55f82f9b8bac
# via
# acme
# josepy


@ -9,6 +9,9 @@ RUN mkdir -p /usr/share/bunkerweb/deps && \
cat /tmp/req/requirements.txt /tmp/req/requirements.txt.1 /tmp/req/requirements.txt.2 > /usr/share/bunkerweb/deps/requirements.txt && \
rm -rf /tmp/req
# Update apk
RUN apk update
# Install python dependencies
RUN apk add --no-cache --virtual .build-deps g++ gcc musl-dev jpeg-dev zlib-dev libffi-dev cairo-dev pango-dev gdk-pixbuf-dev openssl-dev cargo postgresql-dev file make
@ -48,6 +51,7 @@ RUN apk add --no-cache bash && \
adduser -h /var/cache/nginx -g ui -s /bin/sh -G ui -D -H -u 101 ui && \
echo "Docker" > /usr/share/bunkerweb/INTEGRATION && \
mkdir -p /var/tmp/bunkerweb && \
mkdir -p /var/run/bunkerweb && \
mkdir -p /etc/bunkerweb && \
mkdir -p /data/cache && ln -s /data/cache /var/cache/bunkerweb && \
mkdir -p /data/lib && ln -s /data/lib /var/lib/bunkerweb && \
@ -56,14 +60,14 @@ RUN apk add --no-cache bash && \
for dir in $(echo "configs/http configs/stream configs/server-http configs/server-stream configs/default-server-http configs/default-server-stream configs/modsec configs/modsec-crs") ; do mkdir "/data/${dir}" ; done && \
chown -R root:ui /data && \
chmod -R 770 /data && \
chown -R root:ui /usr/share/bunkerweb/INTEGRATION /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb /var/log/nginx && \
chmod 770 /var/cache/bunkerweb /var/lib/bunkerweb /var/tmp/bunkerweb /var/log/nginx/ui.log && \
chown -R root:ui /usr/share/bunkerweb/INTEGRATION /var/cache/bunkerweb /var/lib/bunkerweb /etc/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/nginx && \
chmod 770 /var/cache/bunkerweb /var/lib/bunkerweb /var/tmp/bunkerweb /var/run/bunkerweb /var/log/nginx/ui.log && \
chmod 750 /usr/share/bunkerweb/gen/*.py /usr/share/bunkerweb/ui/*.py /usr/share/bunkerweb/ui/src/*.py /usr/share/bunkerweb/deps/python/bin/* /usr/share/bunkerweb/helpers/*.sh && \
chmod 660 /usr/share/bunkerweb/INTEGRATION && \
chown root:ui /usr/share/bunkerweb/INTEGRATION
# Fix CVEs
# There are no CVEs for python:3.11.3-alpine at the moment
RUN apk add --no-cache "libcrypto3>=3.1.1-r0" "libssl3>=3.1.1-r0"
VOLUME /data /etc/nginx
@ -76,4 +80,4 @@ USER ui:ui
HEALTHCHECK --interval=10s --timeout=10s --start-period=30s --retries=6 CMD /usr/share/bunkerweb/helpers/healthcheck-ui.sh
ENV PYTHONPATH /usr/share/bunkerweb/deps/python
CMD ["python3", "-m", "gunicorn", "--user", "ui", "--group", "ui", "main:app", "--worker-class", "gevent", "--bind", "0.0.0.0:7000", "--graceful-timeout", "0", "--access-logfile", "-", "--error-logfile", "-"]
CMD ["python3", "-m", "gunicorn", "--config", "/usr/share/bunkerweb/ui/gunicorn.conf.py", "--user", "ui", "--group", "ui", "--bind", "0.0.0.0:7000"]

src/ui/gunicorn.conf.py Normal file

@ -0,0 +1,21 @@
from os import sep
from os.path import join
wsgi_app = "main:app"
proc_name = "bunkerweb-ui"
accesslog = "-"
access_log_format = (
'%({x-forwarded-for}i)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
)
errorlog = "-"
preload_app = True
pidfile = join(sep, "var", "run", "bunkerweb", "ui.pid")
secure_scheme_headers = {
"X-FORWARDED-PROTOCOL": "https",
"X-FORWARDED-PROTO": "https",
"X-FORWARDED-SSL": "on",
}
forwarded_allow_ips = "*"
proxy_allow_ips = "*"
worker_class = "gevent"
graceful_timeout = 0
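# Illustration (not part of the diff): gunicorn executes this file and reads
# its module-level names, so "--config /usr/share/bunkerweb/ui/gunicorn.conf.py"
# (see the Dockerfile CMD above) picks up worker_class, pidfile, etc., while
# CLI flags such as --bind still take precedence. A rough sketch of that
# loading mechanism, assuming the file sits in the current directory:
from pathlib import Path

namespace: dict = {}
exec(compile(Path("gunicorn.conf.py").read_text(encoding="utf-8"),
             "gunicorn.conf.py", "exec"), namespace)
print(namespace["worker_class"], namespace["pidfile"])  # gevent /var/run/bunkerweb/ui.pid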


@ -1,12 +1,14 @@
#!/usr/bin/python3
from os import _exit, getenv, getpid, listdir, sep
from os import _exit, environ, getenv, listdir, sep
from os.path import basename, dirname, join
from sys import path as sys_path, modules as sys_modules
from pathlib import Path
os_release_path = Path(sep, "etc", "os-release")
if os_release_path.is_file() and "Alpine" not in os_release_path.read_text():
if os_release_path.is_file() and "Alpine" not in os_release_path.read_text(
encoding="utf-8"
):
sys_path.append(join(sep, "usr", "share", "bunkerweb", "deps", "python"))
del os_release_path
@ -18,6 +20,10 @@ for deps_path in [
if deps_path not in sys_path:
sys_path.append(deps_path)
from gevent import monkey, spawn
monkey.patch_all()
from bs4 import BeautifulSoup
from copy import deepcopy
from datetime import datetime, timedelta, timezone
@ -92,10 +98,10 @@ def stop_gunicorn():
call(["kill", "-SIGTERM", pid])
def stop(status, stop=True):
Path(sep, "var", "tmp", "bunkerweb", "ui.pid").unlink(exist_ok=True)
Path(sep, "var", "tmp", "bunkerweb", "ui.healthy").unlink(exist_ok=True)
if stop is True:
def stop(status, _stop=True):
Path(sep, "var", "run", "bunkerweb", "ui.pid").unlink(missing_ok=True)
Path(sep, "var", "tmp", "bunkerweb", "ui.healthy").unlink(missing_ok=True)
if _stop is True:
stop_gunicorn()
_exit(status)
@ -110,11 +116,6 @@ signal(SIGINT, handle_stop)
signal(SIGTERM, handle_stop)
sbin_nginx_path = Path(sep, "usr", "sbin", "nginx")
pid_file = Path(sep, "var", "tmp", "bunkerweb", "ui.pid")
if not pid_file.is_file():
pid_file.write_text(str(getpid()))
del pid_file
# Flask app
app = Flask(
@ -128,10 +129,7 @@ app.wsgi_app = ReverseProxied(app.wsgi_app)
# Set variables and instantiate objects
vars = get_variables()
if "ABSOLUTE_URI" not in vars:
logger.error("ABSOLUTE_URI is not set")
stop(1)
elif "ADMIN_USERNAME" not in vars:
if "ADMIN_USERNAME" not in vars:
logger.error("ADMIN_USERNAME is not set")
stop(1)
elif "ADMIN_PASSWORD" not in vars:
@ -147,14 +145,6 @@ if not vars.get("FLASK_DEBUG", False) and not regex_match(
)
stop(1)
if not vars["ABSOLUTE_URI"].endswith("/"):
vars["ABSOLUTE_URI"] += "/"
if not vars.get("FLASK_DEBUG", False) and vars["ABSOLUTE_URI"].endswith("/changeme/"):
logger.error("Please change the default URL.")
stop(1)
login_manager = LoginManager()
login_manager.init_app(app)
login_manager.login_view = "login"
@ -168,33 +158,44 @@ PLUGIN_KEYS = [
"settings",
]
integration = "Linux"
INTEGRATION = "Linux"
integration_path = Path(sep, "usr", "share", "bunkerweb", "INTEGRATION")
if getenv("KUBERNETES_MODE", "no").lower() == "yes":
integration = "Kubernetes"
INTEGRATION = "Kubernetes"
elif getenv("SWARM_MODE", "no").lower() == "yes":
integration = "Swarm"
INTEGRATION = "Swarm"
elif getenv("AUTOCONF_MODE", "no").lower() == "yes":
integration = "Autoconf"
INTEGRATION = "Autoconf"
elif integration_path.is_file():
integration = integration_path.read_text().strip()
INTEGRATION = integration_path.read_text(encoding="utf-8").strip()
del integration_path
docker_client = None
kubernetes_client = None
if integration in ("Docker", "Swarm", "Autoconf"):
if INTEGRATION in ("Docker", "Swarm", "Autoconf"):
try:
docker_client: DockerClient = DockerClient(
base_url=vars.get("DOCKER_HOST", "unix:///var/run/docker.sock")
)
except (docker_APIError, DockerException):
logger.warning("No docker host found")
elif integration == "Kubernetes":
elif INTEGRATION == "Kubernetes":
kube_config.load_incluster_config()
kubernetes_client = kube_client.CoreV1Api()
db = Database(logger)
db = Database(logger, ui=True)
if INTEGRATION in (
"Swarm",
"Kubernetes",
"Autoconf",
):
while not db.is_autoconf_loaded():
logger.warning(
"Autoconf is not loaded yet in the database, retrying in 5s ...",
)
sleep(5)
while not db.is_initialized():
logger.warning(
@ -210,20 +211,111 @@ while not db.is_first_config_saved() or not env:
sleep(5)
env = db.get_config()
del env
logger.info("Database is ready")
Path(sep, "var", "tmp", "bunkerweb", "ui.healthy").write_text("ok")
bw_version = Path(sep, "usr", "share", "bunkerweb", "VERSION").read_text().strip()
Path(sep, "var", "tmp", "bunkerweb", "ui.healthy").write_text("ok", encoding="utf-8")
bw_version = (
Path(sep, "usr", "share", "bunkerweb", "VERSION")
.read_text(encoding="utf-8")
.strip()
)
ABSOLUTE_URI = vars.get("ABSOLUTE_URI")
CONFIG = Config(db)
def update_config():
global ABSOLUTE_URI
ret = db.checked_changes("ui")
if ret:
logger.error(
f"An error occurred when setting the changes to checked in the database : {ret}"
)
stop(1)
ssl = False
server_name = None
endpoint = None
for service in CONFIG.get_services():
if service.get("USE_UI", "no") == "no":
continue
server_name = service.get("SERVER_NAME", {"value": None})["value"]
endpoint = service.get("REVERSE_PROXY_URL", {"value": "/"})["value"]
logger.warning(service.get("AUTO_LETS_ENCRYPT", {"value": "no"}))
logger.warning(service.get("GENERATE_SELF_SIGNED_SSL", {"value": "no"}))
logger.warning(service.get("USE_CUSTOM_SSL", {"value": "no"}))
if any(
[
service.get("AUTO_LETS_ENCRYPT", {"value": "no"})["value"] == "yes",
service.get("GENERATE_SELF_SIGNED_SSL", {"value": "no"})["value"]
== "yes",
service.get("USE_CUSTOM_SSL", {"value": "no"})["value"] == "yes",
]
):
ssl = True
break
if not server_name:
logger.error("No service found with USE_UI=yes")
stop(1)
ABSOLUTE_URI = f"http{'s' if ssl else ''}://{server_name}{endpoint}"
SCRIPT_NAME = f"/{basename(ABSOLUTE_URI[:-1] if ABSOLUTE_URI.endswith('/') and ABSOLUTE_URI != '/' else ABSOLUTE_URI)}"
if not ABSOLUTE_URI.endswith("/"):
ABSOLUTE_URI += "/"
if ABSOLUTE_URI != app.config.get("ABSOLUTE_URI"):
app.config["ABSOLUTE_URI"] = ABSOLUTE_URI
app.config["SESSION_COOKIE_DOMAIN"] = server_name
logger.info(f"The ABSOLUTE_URI is now {ABSOLUTE_URI}")
else:
logger.info(f"The ABSOLUTE_URI is still {ABSOLUTE_URI}")
if SCRIPT_NAME != getenv("SCRIPT_NAME"):
environ["SCRIPT_NAME"] = f"/{basename(ABSOLUTE_URI[:-1])}"
logger.info(f"The script name is now {environ['SCRIPT_NAME']}")
else:
logger.info(f"The script name is still {environ['SCRIPT_NAME']}")
def check_config_changes():
while True:
changes = db.check_changes("ui")
if isinstance(changes, str):
continue
if changes:
logger.info(
"Config changed in the database, updating ABSOLUTE_URI and SCRIPT_NAME ..."
)
update_config()
sleep(1)
update_config()
spawn(check_config_changes)
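# Illustration (not part of the diff): check_config_changes runs as a gevent
# greenlet; since monkey.patch_all() was applied at import time, the sleep(1)
# in its loop cooperatively yields to the Flask request handlers. The same
# pattern in isolation:
from gevent import sleep as gevent_sleep, spawn

def demo_poller():
    for _ in range(3):
        # ... poll some state here ...
        gevent_sleep(1)  # yields control to other greenlets

worker = spawn(demo_poller)
worker.join()  # in the UI the greenlet is simply left running instead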
try:
app.config.update(
DEBUG=True,
SECRET_KEY=vars["FLASK_SECRET"],
ABSOLUTE_URI=vars["ABSOLUTE_URI"],
INSTANCES=Instances(docker_client, kubernetes_client, integration),
INSTANCES=Instances(docker_client, kubernetes_client, INTEGRATION),
CONFIG=Config(db),
CONFIGFILES=ConfigFiles(logger, db),
SESSION_COOKIE_DOMAIN=vars["ABSOLUTE_URI"]
.replace("http://", "")
SESSION_COOKIE_DOMAIN=ABSOLUTE_URI.replace("http://", "")
.replace("https://", "")
.split("/")[0],
WTF_CSRF_SSL_STRICT=False,
@ -244,8 +336,12 @@ plugin_id_rx = re_compile(r"^[\w_-]{1,64}$")
# Declare functions for jinja2
app.jinja_env.globals.update(check_settings=check_settings)
# CSRF protection
csrf = CSRFProtect()
csrf.init_app(app)
def manage_bunkerweb(method: str, operation: str = "reloads", *args):
def manage_bunkerweb(method: str, *args, operation: str = "reloads"):
# Do the operation
if method == "services":
error = False
@ -296,11 +392,6 @@ def load_user(user_id):
return User(user_id, vars["ADMIN_PASSWORD"])
# CSRF protection
csrf = CSRFProtect()
csrf.init_app(app)
@app.errorhandler(CSRFError)
def handle_csrf_error(_):
"""
@ -349,6 +440,7 @@ def home():
r = get(
"https://github.com/bunkerity/bunkerweb/releases/latest",
allow_redirects=True,
timeout=5,
)
r.raise_for_status()
except BaseException:
@ -419,7 +511,8 @@ def instances():
Thread(
target=manage_bunkerweb,
name="Reloading instances",
args=("instances", request.form["operation"], request.form["INSTANCE_ID"]),
args=("instances", request.form["INSTANCE_ID"]),
kwargs={"operation": request.form["operation"]},
).start()
return redirect(
@ -523,11 +616,11 @@ def services():
name="Reloading instances",
args=(
"services",
request.form["operation"],
variables,
request.form.get("OLD_SERVER_NAME", "").split(" ")[0],
variables.get("SERVER_NAME", "").split(" ")[0],
),
kwargs={"operation": request.form["operation"]},
).start()
message = ""
@ -590,7 +683,7 @@ def global_config():
if not variables:
flash(
f"The global configuration was not edited because no values were changed."
"The global configuration was not edited because no values were changed."
)
return redirect(url_for("loading", next=url_for("global_config")))
@ -607,7 +700,6 @@ def global_config():
name="Reloading instances",
args=(
"global_config",
"reloads",
variables,
),
).start()
@ -670,6 +762,8 @@ def configs():
variables["content"], "html.parser"
).get_text()
error = False
if request.form["operation"] == "new":
if variables["type"] == "folder":
operation, error = app.config["CONFIGFILES"].create_folder(
@ -853,7 +947,9 @@ def plugins():
)
plugin_file = json_loads(
temp_folder_path.joinpath("plugin.json").read_text()
temp_folder_path.joinpath("plugin.json").read_text(
encoding="utf-8"
)
)
if not all(key in plugin_file.keys() for key in PLUGIN_KEYS):
@ -1201,13 +1297,13 @@ def logs_linux():
nginx_error_file = Path(sep, "var", "log", "nginx", "error.log")
if nginx_error_file.is_file():
raw_logs_access = nginx_error_file.read_text().splitlines()[
raw_logs_access = nginx_error_file.read_text(encoding="utf-8").splitlines()[
int(last_update.split(".")[0]) if last_update else 0 :
]
nginx_access_file = Path(sep, "var", "log", "nginx", "access.log")
if nginx_access_file.is_file():
raw_logs_error = nginx_access_file.read_text().splitlines()[
raw_logs_error = nginx_access_file.read_text(encoding="utf-8").splitlines()[
int(last_update.split(".")[1]) if last_update else 0 :
]
@ -1339,7 +1435,7 @@ def logs_container(container_id):
tmp_logs = []
if docker_client:
try:
if integration != "Swarm":
if INTEGRATION != "Swarm":
docker_logs = docker_client.containers.get(container_id).logs(
stdout=True,
stderr=True,


@ -4,6 +4,5 @@ Flask_WTF==1.1.1
beautifulsoup4==4.12.2
python_dateutil==2.8.2
bcrypt==4.0.1
gunicorn==20.1.0
gevent==22.10.2
gunicorn[gevent]==20.1.0
regex==2023.5.5


@ -107,7 +107,7 @@ gevent==22.10.2 \
--hash=sha256:f23d0997149a816a2a9045af29c66f67f405a221745b34cefeac5769ed451db8 \
--hash=sha256:f3329bedbba4d3146ae58c667e0f9ac1e6f1e1e6340c7593976cdc60aa7d1a47 \
--hash=sha256:f7ed2346eb9dc4344f9cb0d7963ce5b74fe16fdd031a2809bb6c2b6eba7ebcd5
# via -r requirements.in
# via gunicorn
greenlet==2.0.2 \
--hash=sha256:03a8f4f3430c3b3ff8d10a2a86028c660355ab637cee9333d63d66b56f09d52a \
--hash=sha256:0bf60faf0bc2468089bdc5edd10555bab6e85152191df713e2ab1fcc86382b5a \
@ -170,7 +170,7 @@ greenlet==2.0.2 \
--hash=sha256:f82d4d717d8ef19188687aa32b8363e96062911e63ba22a0cff7802a8e58e5f1 \
--hash=sha256:fc3a569657468b6f3fb60587e48356fe512c1754ca05a564f11366ac9e306526
# via gevent
gunicorn==20.1.0 \
gunicorn[gevent]==20.1.0 \
--hash=sha256:9dcc4547dbb1cb284accfb15ab5667a0e5d1881cc443e0677b4882a4067a807e \
--hash=sha256:e0a968b5ba15f8a328fdfd7ab1fcb5af4470c28aaf7e55df02a99bc13138e6e8
# via -r requirements.in


@ -15,24 +15,12 @@ from uuid import uuid4
class Config:
def __init__(self, db) -> None:
self.__settings = json_loads(
Path(sep, "usr", "share", "bunkerweb", "settings.json").read_text()
Path(sep, "usr", "share", "bunkerweb", "settings.json").read_text(
encoding="utf-8"
)
)
self.__db = db
def __dict_to_env(self, filename: str, variables: dict) -> None:
"""Converts the content of a dict into an env file
Parameters
----------
filename : str
The path to save the env file
variables : dict
The dict to convert to env file
"""
Path(filename).write_text(
"\n".join(f"{k}={variables[k]}" for k in sorted(variables))
)
def __gen_conf(self, global_conf: dict, services_conf: list[dict]) -> None:
"""Generates the nginx configuration file from the given configuration
@ -43,7 +31,7 @@ class Config:
Raises
------
Exception
ConfigGenerationError
If an error occurred during the generation of the configuration file, raises this exception
"""
conf = deepcopy(global_conf)
@ -68,7 +56,11 @@ class Config:
conf["SERVER_NAME"] = " ".join(servers)
env_file = Path(sep, "tmp", f"{uuid4()}.env")
self.__dict_to_env(env_file, conf)
env_file.write_text(
"\n".join(f"{k}={conf[k]}" for k in sorted(conf)),
encoding="utf-8",
)
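# Illustration (not part of the diff): the deleted __dict_to_env helper is
# now inlined as a single write_text of sorted "key=value" lines. Sketch
# with hypothetical settings:
from pathlib import Path

demo_conf = {"USE_UI": "yes", "SERVER_NAME": "www.example.com"}
demo_env = Path("/tmp/demo.env")
demo_env.write_text(
    "\n".join(f"{k}={demo_conf[k]}" for k in sorted(demo_conf)),
    encoding="utf-8",
)
print(demo_env.read_text(encoding="utf-8"))
# SERVER_NAME=www.example.com
# USE_UI=yes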
proc = run(
[
"python3",
@ -80,6 +72,7 @@ class Config:
],
stdin=DEVNULL,
stderr=STDOUT,
check=False,
)
if proc.returncode != 0:
@ -270,7 +263,7 @@ class Config:
self.__gen_conf(
self.get_config(methods=False) | variables, self.get_services(methods=False)
)
return f"The global configuration has been edited."
return "The global configuration has been edited."
def delete_service(self, service_name: str) -> Tuple[str, int]:
"""Deletes a service


@ -14,9 +14,8 @@ from utils import path_to_dict
def generate_custom_configs(
custom_configs: List[Dict[str, Any]],
*,
original_path: str = join(sep, "etc", "bunkerweb", "configs"),
original_path: Path = Path(sep, "etc", "bunkerweb", "configs"),
):
original_path: Path = Path(original_path)
original_path.mkdir(parents=True, exist_ok=True)
for custom_config in custom_configs:
tmp_path = original_path.joinpath(custom_config["type"].replace("_", "-"))
@ -64,7 +63,7 @@ class ConfigFiles:
if files or (dirs and basename(root) not in root_dirs):
path_exploded = root.split("/")
for file in files:
with open(join(root, file), "r") as f:
with open(join(root, file), "r", encoding="utf-8") as f:
custom_configs.append(
{
"value": f.read(),
@ -148,7 +147,7 @@ class ConfigFiles:
def create_file(self, path: str, name: str, content: str) -> Tuple[str, int]:
file_path = Path(path, name)
file_path.parent.mkdir(exist_ok=True)
file_path.write_text(content)
file_path.write_text(content, encoding="utf-8")
return f"The file {file_path} was successfully created", 0
def edit_folder(self, path: str, name: str, old_name: str) -> Tuple[str, int]:
@ -178,7 +177,7 @@ class ConfigFiles:
old_path = join(dirname(path), old_name)
try:
file_content = Path(old_path).read_text()
file_content = Path(old_path).read_text(encoding="utf-8")
except FileNotFoundError:
return f"Could not find {old_path}", 1
@ -201,6 +200,6 @@ class ConfigFiles:
except OSError:
return f"Could not remove {old_path}", 1
Path(new_path).write_text(content)
Path(new_path).write_text(content, encoding="utf-8")
return f"The file {old_path} was successfully edited", 0


@ -4,7 +4,6 @@ from os import sep
from os.path import join
from pathlib import Path
from subprocess import DEVNULL, STDOUT, run
from sys import path as sys_path
from typing import Any, Optional, Union
from API import API # type: ignore
@ -47,7 +46,8 @@ class Instance:
self.env = data
self.apiCaller = apiCaller or ApiCaller()
def get_id(self) -> str:
@property
def id(self) -> str:
return self._id
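# Illustration (not part of the diff): the accessor change replaces a
# getter method with a read-only property, so call sites move from
# instance.get_id() to instance.id. In miniature:
class DemoInstance:
    def __init__(self, _id: str) -> None:
        self._id = _id

    @property
    def id(self) -> str:  # read-only: no setter is defined
        return self._id

print(DemoInstance("bw-1").id)  # bw-1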
def reload(self) -> bool:
@ -57,11 +57,12 @@ class Instance:
["sudo", join(sep, "usr", "sbin", "nginx"), "-s", "reload"],
stdin=DEVNULL,
stderr=STDOUT,
check=False,
).returncode
== 0
)
return self.apiCaller._send_to_apis("POST", "/reload")
return self.apiCaller.send_to_apis("POST", "/reload")
def start(self) -> bool:
if self._type == "local":
@ -70,11 +71,12 @@ class Instance:
["sudo", join(sep, "usr", "sbin", "nginx")],
stdin=DEVNULL,
stderr=STDOUT,
check=False,
).returncode
== 0
)
return self.apiCaller._send_to_apis("POST", "/start")
return self.apiCaller.send_to_apis("POST", "/start")
def stop(self) -> bool:
if self._type == "local":
@ -83,11 +85,12 @@ class Instance:
["sudo", join(sep, "usr", "sbin", "nginx"), "-s", "stop"],
stdin=DEVNULL,
stderr=STDOUT,
check=False,
).returncode
== 0
)
return self.apiCaller._send_to_apis("POST", "/stop")
return self.apiCaller.send_to_apis("POST", "/stop")
def restart(self) -> bool:
if self._type == "local":
@ -96,11 +99,12 @@ class Instance:
["sudo", join(sep, "usr", "sbin", "nginx"), "-s", "restart"],
stdin=DEVNULL,
stderr=STDOUT,
check=False,
).returncode
== 0
)
return self.apiCaller._send_to_apis("POST", "/restart")
return self.apiCaller.send_to_apis("POST", "/restart")
class Instances:
@ -112,10 +116,10 @@ class Instances:
def __instance_from_id(self, _id) -> Instance:
instances: list[Instance] = self.get_instances()
for instance in instances:
if instance._id == _id:
if instance.id == _id:
return instance
raise Exception(f"Can't find instance with id {_id}")
raise ValueError(f"Can't find instance with _id {_id}")
def get_instances(self) -> list[Instance]:
instances = []
@ -129,16 +133,6 @@ class Instances:
for x in [env.split("=") for env in instance.attrs["Config"]["Env"]]
}
apiCaller = ApiCaller()
apiCaller._set_apis(
[
API(
f"http://{instance.name}:{env_variables.get('API_HTTP_PORT', '5000')}",
env_variables.get("API_SERVER_NAME", "bwapi"),
)
]
)
instances.append(
Instance(
instance.id,
@ -147,7 +141,14 @@ class Instances:
"container",
"up" if instance.status == "running" else "down",
instance,
apiCaller,
ApiCaller(
[
API(
f"http://{instance.name}:{env_variables.get('API_HTTP_PORT', '5000')}",
env_variables.get("API_SERVER_NAME", "bwapi"),
)
]
),
)
)
elif self.__integration == "Swarm":
@ -160,7 +161,7 @@ class Instances:
if desired_tasks > 0 and (desired_tasks == running_tasks):
status = "up"
apis = []
apiCaller = ApiCaller()
api_http_port = None
api_server_name = None
@ -173,13 +174,12 @@ class Instances:
api_server_name = var.replace("API_SERVER_NAME=", "", 1)
for task in instance.tasks():
apis.append(
apiCaller.append(
API(
f"http://{instance.name}.{task['NodeID']}.{task['ID']}:{api_http_port or '5000'}",
host=api_server_name or "bwapi",
)
)
apiCaller = ApiCaller(apis=apis)
instances.append(
Instance(
@ -204,15 +204,6 @@ class Instances:
env.name: env.value or "" for env in pod.spec.containers[0].env
}
apiCaller = ApiCaller(
apis=[
API(
f"http://{pod.status.pod_ip}:{env_variables.get('API_HTTP_PORT', '5000')}",
host=env_variables.get("API_SERVER_NAME", "bwapi"),
)
]
)
status = "up"
if pod.status.conditions is not None:
for condition in pod.status.conditions:
@ -228,7 +219,16 @@ class Instances:
"pod",
status,
pod,
apiCaller,
ApiCaller(
[
API(
f"http://{pod.status.pod_ip}:{env_variables.get('API_HTTP_PORT', '5000')}",
host=env_variables.get(
"API_SERVER_NAME", "bwapi"
),
)
]
),
)
)
@ -239,18 +239,9 @@ class Instances:
# Local instance
if Path(sep, "usr", "sbin", "nginx").exists():
apiCaller = ApiCaller()
env_variables = dotenv_values(
join(sep, "etc", "bunkerweb", "variables.env")
)
apiCaller._set_apis(
[
API(
f"http://127.0.0.1:{env_variables.get('API_HTTP_PORT', '5000')}",
env_variables.get("API_SERVER_NAME", "bwapi"),
)
]
)
instances.insert(
0,
@ -260,10 +251,17 @@ class Instances:
"127.0.0.1",
"local",
"up"
if Path(sep, "var", "tmp", "bunkerweb", "nginx.pid").exists()
if Path(sep, "var", "run", "bunkerweb", "nginx.pid").exists()
else "down",
None,
apiCaller,
ApiCaller(
[
API(
f"http://127.0.0.1:{env_variables.get('API_HTTP_PORT', '5000')}",
env_variables.get("API_SERVER_NAME", "bwapi"),
)
]
),
),
)
@ -282,10 +280,10 @@ class Instances:
return not_reloaded or "Successfully reloaded instances"
def reload_instance(
self, id: Optional[int] = None, instance: Optional[Instance] = None
self, _id: Optional[int] = None, instance: Optional[Instance] = None
) -> str:
if instance is None:
instance = self.__instance_from_id(id)
instance = self.__instance_from_id(_id)
result = instance.reload()
@ -294,8 +292,8 @@ class Instances:
return f"Can't reload {instance.name}"
def start_instance(self, id) -> str:
instance = self.__instance_from_id(id)
def start_instance(self, _id) -> str:
instance = self.__instance_from_id(_id)
result = instance.start()
@ -304,8 +302,8 @@ class Instances:
return f"Can't start {instance.name}"
def stop_instance(self, id) -> str:
instance = self.__instance_from_id(id)
def stop_instance(self, _id) -> str:
instance = self.__instance_from_id(_id)
result = instance.stop()
@ -314,8 +312,8 @@ class Instances:
return f"Can't stop {instance.name}"
def restart_instance(self, id) -> str:
instance = self.__instance_from_id(id)
def restart_instance(self, _id) -> str:
instance = self.__instance_from_id(_id)
result = instance.restart()


@ -5,8 +5,8 @@ from bcrypt import checkpw, hashpw, gensalt
class User(UserMixin):
def __init__(self, id, password):
self.__id = id
def __init__(self, _id, password):
self.__id = _id
self.__password = hashpw(password.encode("utf-8"), gensalt())
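# Illustration (not part of the diff): hashpw with a fresh gensalt() embeds
# the salt inside the resulting hash, so later verification only needs
# checkpw against the stored value:
from bcrypt import checkpw, gensalt, hashpw

hashed = hashpw(b"changeme", gensalt())
print(checkpw(b"changeme", hashed))  # True
print(checkpw(b"wrong", hashed))     # False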
def get_id(self):


@ -40,6 +40,7 @@
logoEl.classList.toggle("scale-105");
}, 300);
const reloading = setInterval(check_reloading, 2000);
check_reloading();
async function check_reloading() {
const controller = new AbortController();


@ -2,7 +2,7 @@
from os import environ, urandom
from os.path import join
from typing import List
from typing import List, Optional
def get_variables():
@ -22,12 +22,15 @@ def get_variables():
def path_to_dict(
path,
path: str,
*,
is_cache: bool = False,
db_data: List[dict] = [],
services: List[str] = [],
db_data: Optional[List[dict]] = None,
services: Optional[List[dict]] = None,
) -> dict:
db_data = db_data or []
services = services or []
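# Illustration (not part of the diff): the Optional[...] = None rewrite
# avoids Python's shared-mutable-default pitfall, where a default list is
# created once at function definition and reused across calls:
from typing import Optional

def buggy(acc: list = []) -> list:
    acc.append(1)
    return acc

print(buggy())  # [1]
print(buggy())  # [1, 1]  <- state leaked from the previous call

def fixed(acc: Optional[list] = None) -> list:
    acc = acc or []
    acc.append(1)
    return acc

print(fixed())  # [1]
print(fixed())  # [1]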
if not is_cache:
config_types = [
"http",


@ -51,6 +51,8 @@ try:
print(" Navigating to http://www.example.com ...", flush=True)
driver.get("http://www.example.com")
sleep(2)
try:
driver.find_element(By.XPATH, "//img[@alt='NGINX Logo']")
except NoSuchElementException:
@ -85,6 +87,8 @@ try:
print("❌ The page is accessible without auth-basic ...", flush=True)
exit(1)
sleep(2)
print(
f" Trying to access http://{auth_basic_username}:{auth_basic_password}@www.example.com ...",
flush=True,


@ -50,6 +50,7 @@ try:
"http://www.example.com/?id=/etc/passwd",
headers={"Host": "www.example.com"},
)
sleep(1)
sleep(1)

Some files were not shown because too many files have changed in this diff.