36 Commits

Author SHA1 Message Date
023cee3447 🤖 ci(vault): declare dance-lessons-coach JWT role + ops policy (#1)
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 11s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
Co-authored-by: Gabriel Radureau <arcodange@gmail.com>
Co-committed-by: Gabriel Radureau <arcodange@gmail.com>
2026-05-06 12:55:26 +02:00
2367bd6cd7 fix(crowdsec): use Recreate strategy for lapi to avoid RWO volume multi-attach
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 49s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
RollingUpdate with maxSurge>0 creates a new pod before terminating the old one,
causing a Multi-Attach error on the RWO PVCs (crowdsec-db-pvc, crowdsec-config-pvc).
Recreate terminates the old pod first, then starts the new one.
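
The fix described in this commit body boils down to a small values override. A minimal sketch, assuming the upstream crowdsec chart forwards `lapi.strategy` into the lapi Deployment spec (the key shown in the diff further down this page):

```yaml
# Sketch, not the complete chart values: force pod-replacement order for the
# crowdsec lapi Deployment so its ReadWriteOnce PVCs are detached before reattach.
lapi:
  strategy:
    type: Recreate  # terminate the old pod first, then start the new one
```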
2026-04-16 10:25:10 +02:00
82a6eb0d85 configure grafana with prometheus
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 5m2s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2026-03-18 17:07:35 +01:00
a762c8f90f deploy prometheus
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 25s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2026-03-18 16:21:31 +01:00
1b2c325023 no clickhouse pod on pi2
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 25s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2026-01-03 19:17:04 +01:00
9f0adfe14d use self signed cert
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 1m2s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2026-01-02 19:07:46 +01:00
02322e9a24 use internal .lab instead of failing duckdns.org
Some checks failed
Helm Charts / Detect changed charts (push) Successful in 22s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Failing after 34s
2025-12-31 17:54:36 +01:00
2c8de3468a set IP_GEOLOCATION_DB to geoip lite city db
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 3m29s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-12 15:32:58 +01:00
87aac41959 plausible:geoip include free city database
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 3m38s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-11 13:09:56 +01:00
2f0bc7ab4a expose plausible event api
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 3m28s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-11 08:29:17 +01:00
3225c17b4a fix ingressroute service
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 3m44s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-11 07:51:06 +01:00
d91f8e2900 fix yaml indentation
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 3m38s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-11 07:39:55 +01:00
ec0f42676a fix: specify namespace in kustomization.yaml
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 3m31s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-11 07:28:58 +01:00
6822b53775 clickhouse: grant read access on system db
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 12s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-10 16:58:07 +01:00
5b13d0f679 plausible: fix clickhouse url
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 3m58s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-10 15:54:41 +01:00
dee0fed059 plausible: add reconnect config for orm
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 14s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-10 15:46:58 +01:00
09d7aa9b9e pgbouncer: set server_idle_timeout to 2h
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 15s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-10 15:35:28 +01:00
0b74c97a85 plausible: don't run db create
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 3m35s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-10 15:15:53 +01:00
0942171673 set crowdsec lapi deployment strategy
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 3m50s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-10 15:00:57 +01:00
a8c497a5da try plausible CE for web analytics 2025-12-10 15:00:47 +01:00
d7130b1635 create plausible clickhouse database
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 3m9s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-10 13:19:56 +01:00
a5338ac6f7 TODO: 1 vault_database_secret_backend_connection per database 2025-12-09 12:58:59 +01:00
2903f70e9f fixes
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 16s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-09 12:14:57 +01:00
4f578b1164 set securityContext for redis pod to run as redis user
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 16s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-09 11:06:41 +01:00
bcb868944d fix grafana clickhouse datasource
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 2m55s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
change promql based dashboard for sql
2025-12-08 16:30:25 +01:00
269f09b7a8 try kustomize overlays
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 15s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-08 15:15:47 +01:00
3be78a836a try kustomize patch with argocd
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 17s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-06 14:57:42 +01:00
2b6fc7937b configure clickhouse user
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 19s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-06 14:29:21 +01:00
7df5d01de8 try grafana and clickhouse dashboard
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 15s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-06 01:24:37 +01:00
e0641c4c42 document redis insights usage - redis gui
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 16s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-05 21:39:57 +01:00
4cbc28bbdb try clickhouse - pascaliske chart
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 17s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-04 17:26:48 +01:00
7676196b8a use pascaliske chart for redis
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 12s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-04 13:01:08 +01:00
c46c479dc5 try chatgpt provided chart for redis/keyDB
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 18s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-04 12:31:32 +01:00
5e6c2acb21 try out keyDB - redis alternative
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 12s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-04 11:29:54 +01:00
07e2c6d171 fix attempt for crowdsec sql error
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 15s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-03 18:29:05 +01:00
859057be66 configure postgresql for crowdsec
All checks were successful
Helm Charts / Detect changed charts (push) Successful in 16s
Helm Charts / Library charts tool (push) Has been skipped
Helm Charts / Application charts pgcat (push) Has been skipped
2025-12-03 18:08:53 +01:00
59 changed files with 4353 additions and 29 deletions


@@ -0,0 +1,63 @@
---
# template source: https://github.com/bretfisher/docker-build-workflow/blob/main/templates/call-docker-build.yaml
name: Crowdsec
on: #[push,pull_request]
  workflow_dispatch: {}
  push: &crowdsecPaths
    paths:
      - 'crowdsec/**/*.tf'
  pull_request: *crowdsecPaths
# cancel any previously-started, yet still active runs of this workflow on the same branch
concurrency:
  group: ${{ github.ref }}-${{ github.workflow }}
  cancel-in-progress: true
.vault_step: &vault_step
  name: read vault secret
  uses: https://gitea.arcodange.lab/arcodange-org/vault-action.git@main
  id: vault-secrets
  with:
    url: https://vault.arcodange.lab
    caCertificate: ${{ secrets.HOMELAB_CA_CERT }}
    jwtGiteaOIDC: ${{ needs.gitea_vault_auth.outputs.gitea_vault_jwt }}
    role: gitea_cicd_crowdsec
    method: jwt
    path: gitea_jwt
    secrets: |
      kvv1/google/credentials credentials | GOOGLE_BACKEND_CREDENTIALS ;
      kvv1/gitea/tofu_module_reader ssh_private_key | TERRAFORM_SSH_KEY ;
jobs:
  gitea_vault_auth:
    name: Auth with gitea for vault
    runs-on: ubuntu-latest
    outputs:
      gitea_vault_jwt: ${{steps.gitea_vault_jwt.outputs.id_token}}
    steps:
      - name: Auth with gitea for vault
        id: gitea_vault_jwt
        run: |
          echo -n "${{ secrets.vault_oauth__sh_b64 }}" | base64 -d | bash
  tofu:
    name: Tofu - Crowdsec IAC
    needs:
      - gitea_vault_auth
    runs-on: ubuntu-latest
    env:
      OPENTOFU_VERSION: 1.8.2
      TERRAFORM_VAULT_AUTH_JWT: ${{ needs.gitea_vault_auth.outputs.gitea_vault_jwt }}
      VAULT_CACERT: "${{ github.workspace }}/homelab.pem"
    steps:
      - *vault_step
      - uses: actions/checkout@v4
      - name: prepare vault self signed cert
        run: echo -n "${{ secrets.HOMELAB_CA_CERT }}" | base64 -d > $VAULT_CACERT
      - name: terraform apply
        uses: dflook/terraform-apply@v1
        with:
          path: crowdsec/iac
          auto_approve: true


@@ -165,7 +165,7 @@ jobs:
           chart_package=${chart}-${chart_version}.tgz
           # helm package ${chart}
           tar -X ${chart}/.helmignore -czf ${chart_package} ${chart}
-          curl --user ${{ github.actor }}:${{ secrets.PACKAGES_TOKEN }} -X POST --upload-file ./${chart_package} https://gitea.arcodange.duckdns.org/api/packages/${{ github.repository_owner }}/helm/api/charts
+          curl --user ${{ github.actor }}:${{ secrets.PACKAGES_TOKEN }} -X POST --upload-file ./${chart_package} https://gitea.arcodange.lab/api/packages/${{ github.repository_owner }}/helm/api/charts
   application-charts:
     <<: *charts-matrix-job


@@ -0,0 +1,63 @@
---
# template source: https://github.com/bretfisher/docker-build-workflow/blob/main/templates/call-docker-build.yaml
name: Plausible
on: #[push,pull_request]
  workflow_dispatch: {}
  push: &plausiblePaths
    paths:
      - 'plausible/**/*.tf'
  pull_request: *plausiblePaths
# cancel any previously-started, yet still active runs of this workflow on the same branch
concurrency:
  group: ${{ github.ref }}-${{ github.workflow }}
  cancel-in-progress: true
.vault_step: &vault_step
  name: read vault secret
  uses: https://gitea.arcodange.lab/arcodange-org/vault-action.git@main
  id: vault-secrets
  with:
    url: https://vault.arcodange.lab
    caCertificate: ${{ secrets.HOMELAB_CA_CERT }}
    jwtGiteaOIDC: ${{ needs.gitea_vault_auth.outputs.gitea_vault_jwt }}
    role: gitea_cicd_plausible
    method: jwt
    path: gitea_jwt
    secrets: |
      kvv1/google/credentials credentials | GOOGLE_BACKEND_CREDENTIALS ;
      kvv1/gitea/tofu_module_reader ssh_private_key | TERRAFORM_SSH_KEY ;
jobs:
  gitea_vault_auth:
    name: Auth with gitea for vault
    runs-on: ubuntu-latest
    outputs:
      gitea_vault_jwt: ${{steps.gitea_vault_jwt.outputs.id_token}}
    steps:
      - name: Auth with gitea for vault
        id: gitea_vault_jwt
        run: |
          echo -n "${{ secrets.vault_oauth__sh_b64 }}" | base64 -d | bash
  tofu:
    name: Tofu - plausible IAC
    needs:
      - gitea_vault_auth
    runs-on: ubuntu-latest
    env:
      OPENTOFU_VERSION: 1.8.2
      TERRAFORM_VAULT_AUTH_JWT: ${{ needs.gitea_vault_auth.outputs.gitea_vault_jwt }}
      VAULT_CACERT: "${{ github.workspace }}/homelab.pem"
    steps:
      - *vault_step
      - uses: actions/checkout@v4
      - name: prepare vault self signed cert
        run: echo -n "${{ secrets.HOMELAB_CA_CERT }}" | base64 -d > $VAULT_CACERT
      - name: terraform apply
        uses: dflook/terraform-apply@v1
        with:
          path: plausible/iac
          auto_approve: true


@@ -16,10 +16,11 @@ concurrency:
 .vault_step: &vault_step
   name: read vault secret
-  uses: https://gitea.arcodange.duckdns.org/arcodange-org/vault-action.git@main
+  uses: https://gitea.arcodange.lab/arcodange-org/vault-action.git@main
   id: vault-secrets
   with:
-    url: https://vault.arcodange.duckdns.org
+    url: https://vault.arcodange.lab
+    caCertificate: ${{ secrets.HOMELAB_CA_CERT }}
     jwtGiteaOIDC: ${{ needs.gitea_vault_auth.outputs.gitea_vault_jwt }}
     role: gitea_cicd
     method: jwt
@@ -50,12 +51,12 @@ jobs:
     env:
       OPENTOFU_VERSION: 1.8.2
       TERRAFORM_VAULT_AUTH_JWT: ${{ needs.gitea_vault_auth.outputs.gitea_vault_jwt }}
+      VAULT_CACERT: "${{ github.workspace }}/homelab.pem"
     steps:
       - *vault_step
       - uses: actions/checkout@v4
-      # - uses: dflook/terraform-plan@v1
-      #   with:
-      #     path: hashicorp-vault/iac
+      - name: prepare vault self signed cert
+        run: echo -n "${{ secrets.HOMELAB_CA_CERT }}" | base64 -d > $VAULT_CACERT
       - name: terraform apply
         uses: dflook/terraform-apply@v1
         with:

2
.gitignore vendored

@@ -1,5 +1,5 @@
 .DS_Store
 Chart.lock
-*/charts/
+**/charts/
 .terraform
 .terraform.lock.hcl


@@ -1,4 +1,4 @@
-{{- range $app_name := .Values.tools -}}
+{{- range $app_name, $app := .Values.tools }}
 ---
 apiVersion: argoproj.io/v1alpha1
 kind: Application
@@ -10,7 +10,7 @@ metadata:
 spec:
   project: tools
   source:
-    repoURL: https://gitea.arcodange.duckdns.org/arcodange-org/tools
+    repoURL: https://gitea.arcodange.lab/arcodange-org/tools
     targetRevision: HEAD
     path: {{ $app_name }}
   destination:
@@ -22,4 +22,4 @@ spec:
       selfHeal: true
     syncOptions:
     - CreateNamespace=true
-{{ end }}
+{{- end }}


@@ -10,7 +10,7 @@ metadata:
 spec:
   description: Arcodange tools (monitoring, cache, connection pool, secret management...)
   sourceRepos:
-  - 'https://gitea.arcodange.duckdns.org/arcodange-org/tools'
+  - 'https://gitea.arcodange.lab/arcodange-org/tools'
   # Only permit applications to deploy to the tools namespace in the same cluster
   destinations:
   - namespace: tools


@@ -1,6 +1,11 @@
 tools:
-- pgbouncer
-#- pgcat # too restrictive: must list every database/user, and only auth_type md5 is supported
-# - prometheus
-- hashicorp-vault
-- crowdsec
+  pgbouncer: {}
+  #pgcat: {} # too restrictive: must list every database/user, and only auth_type md5 is supported
+  # prometheus: {}
+  hashicorp-vault: {}
+  crowdsec: {}
+  redis: {}
+  clickhouse: {}
+  grafana: {}
+  plausible: {}
+  prometheus: {}

2
clickhouse/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
charts/
!charts/databases/


@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -0,0 +1,24 @@
apiVersion: v2
name: databases
description: declare clickhouse databases
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "24.12.6.70-alpine"


@@ -0,0 +1,85 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: clickhouse-db-init
  labels:
    app.kubernetes.io/name: clickhouse-db-init
    app.kubernetes.io/instance: {{ .Release.Name }}
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/init-sql-configmap.yaml") . | sha256sum }}
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: clickhouse-init
          image: clickhouse/clickhouse-server:{{ .Chart.AppVersion }}
          command: ["bash", "-c"]
          args:
            - |
              echo "⏳ Waiting for ClickHouse..."
              until clickhouse-client \
                --host {{ .Values.clickhouse.host }} \
                --port {{ .Values.clickhouse.port }} \
                --user {{ .Values.clickhouse.adminUser }} \
                --password "{{ .Values.clickhouse.adminPassword }}" \
                -q "SELECT 1" >/dev/null 2>&1; do
                sleep 2
              done
              echo "✅ ClickHouse ready"
              {{- if .Values.databases }}
              echo "➡️ Creating declared databases & users..."
              clickhouse-client \
                --host {{ .Values.clickhouse.host }} \
                --port {{ .Values.clickhouse.port }} \
                --user {{ .Values.clickhouse.adminUser }} \
                --password "{{ .Values.clickhouse.adminPassword }}" \
                --multiquery < /config/init.sql
              {{- end }}
              echo "➡️ Generating list of databases to drop..."
              clickhouse-client \
                --host {{ .Values.clickhouse.host }} \
                --port {{ .Values.clickhouse.port }} \
                --user {{ .Values.clickhouse.adminUser }} \
                --password "{{ .Values.clickhouse.adminPassword }}" \
                -q "
                  SELECT concat('DROP DATABASE IF EXISTS ', name, ';')
                  FROM system.databases
                  WHERE name NOT IN (
                    'system',
                    'information_schema',
                    'INFORMATION_SCHEMA',
                    'default'
                    {{- if .Values.databases }}
                    {{- range $db := .Values.databases }}
                    , '{{ $db }}'
                    {{- end }}
                    {{- end }}
                  );
                " > /tmp/to_drop.sql
              if [ -s /tmp/to_drop.sql ]; then
                echo "➡️ Dropping leftover databases:"
                cat /tmp/to_drop.sql
                clickhouse-client \
                  --host {{ .Values.clickhouse.host }} \
                  --port {{ .Values.clickhouse.port }} \
                  --user {{ .Values.clickhouse.adminUser }} \
                  --password "{{ .Values.clickhouse.adminPassword }}" \
                  --multiquery < /tmp/to_drop.sql
              else
                echo "✔️ No databases to drop."
              fi
              echo "🎉 Initialization completed"
          volumeMounts:
            - name: init-sql
              mountPath: /config
      volumes:
        - name: init-sql
          configMap:
            name: clickhouse-init-sql


@@ -0,0 +1,26 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: clickhouse-init-sql
data:
  init.sql: |
    -- This file is auto-generated by Helm
    -- Databases and users initialization
    {{- range $db := .Values.databases }}
    -- Database: {{ $db }}
    CREATE DATABASE IF NOT EXISTS {{ $db }};
    -- User: {{ $db }}
    CREATE USER IF NOT EXISTS {{ $db }}
      IDENTIFIED BY '{{ $db }}arcodange';
    -- Privileges
    GRANT CREATE, SELECT, INSERT, ALTER, DROP
      ON {{ $db }}.*
      TO {{ $db }};
    GRANT SELECT ON system.* TO {{ $db }};
    {{- end }}


@@ -0,0 +1,8 @@
clickhouse:
  host: clickhouse.tools
  port: 9000
  adminUser: arcodange
  adminPassword: clickhousearcodange
databases:
  - plausible
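
Given the values above, the init.sql produced by the configmap template shown earlier on this page would render roughly as follows (illustrative output, one block per declared database):

```sql
-- Rendered for databases: [plausible] (illustrative)
CREATE DATABASE IF NOT EXISTS plausible;
CREATE USER IF NOT EXISTS plausible
  IDENTIFIED BY 'plausiblearcodange';
GRANT CREATE, SELECT, INSERT, ALTER, DROP
  ON plausible.*
  TO plausible;
GRANT SELECT ON system.* TO plausible;
```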


@@ -0,0 +1,176 @@
global: {}
image:
  # -- The registry to pull the image from.
  registry: docker.io
  # -- The repository to pull the image from.
  repository: clickhouse/clickhouse-server
  # -- The docker tag, if left empty chart's appVersion will be used.
  # @default -- `.Chart.AppVersion`
  tag: ''
  # -- The pull policy for the controller.
  pullPolicy: IfNotPresent
nameOverride: ''
fullnameOverride: ''
controller:
  # -- Create a workload for this chart.
  enabled: true
  # -- Type of the workload object.
  kind: StatefulSet
  # -- The number of replicas.
  replicas: 1
  # -- The controller update strategy. Currently only applies to controllers of kind `Deployment`.
  updateStrategy: {}
  # -- Additional annotations for the controller object.
  annotations: {}
  # -- Additional labels for the controller object.
  labels: {}
service:
  # -- Create a service for exposing this chart.
  enabled: true
  # -- The service type used.
  type: ClusterIP
  # -- ClusterIP used if service type is `ClusterIP`.
  clusterIP: ''
  # -- LoadBalancerIP if service type is `LoadBalancer`.
  loadBalancerIP: ''
  # -- Allowed addresses when service type is `LoadBalancer`.
  loadBalancerSourceRanges: []
  # -- Additional annotations for the service object.
  annotations: {}
  # -- Additional labels for the service object.
  labels: {}
env:
  # -- Timezone for the container.
  - name: TZ
    value: Europe/Paris
# -- List of extra arguments for the container.
extraArgs: []
# - --loglevel warning
ports:
  rest:
    # -- Enable the port inside the `Controller` and `Service` objects.
    enabled: true
    # -- The port used as internal port and cluster-wide port if `.service.type` == `ClusterIP`.
    port: 8123
    # -- The external port used if `.service.type` == `NodePort`.
    nodePort: null
    # -- The protocol used for the service.
    protocol: TCP
  rpc:
    # -- Enable the port inside the `Controller` and `Service` objects.
    enabled: true
    # -- The port used as internal port and cluster-wide port if `.service.type` == `ClusterIP`.
    port: 9000
    # -- The external port used if `.service.type` == `NodePort`.
    nodePort: null
    # -- The protocol used for the service.
    protocol: TCP
configMap:
  # -- Create a new config map object.
  create: true
  # -- Mount path of the config map object.
  mountPath: /etc/config
  # -- Use an existing config map object.
  existingConfigMap: ''
  # -- Map of configuration files as strings.
  files:
    custom-users.xml: |
      <clickhouse>
        <users>
          <default>
            <networks>
              <ip>::1</ip>
              <ip>127.0.0.1</ip>
            </networks>
          </default>
          <arcodange>
            <password>clickhousearcodange</password>
            <networks>
              <ip>::/0</ip>
              <ip>0.0.0.0/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
            <access_management>1</access_management>
          </arcodange>
        </users>
      </clickhouse>
  # file1.yml: |
  #   # contents
  # file2.yml: |
  #   # contents
  # -- Additional annotations for the config map object.
  annotations: {}
  # -- Additional labels for the config map object.
  labels: {}
persistentVolumeClaim:
  # -- Create a new persistent volume claim object.
  create: true
  # -- Mount path of the persistent volume claim object.
  mountPath: /var/lib/clickhouse
  # -- Access mode of the persistent volume claim object.
  accessMode: ReadWriteOnce
  # -- Volume mode of the persistent volume claim object.
  volumeMode: Filesystem
  # -- Storage request size for the persistent volume claim object.
  size: 16Gi
  # -- Storage class name for the persistent volume claim object.
  storageClassName: ''
  # -- Use an existing persistent volume claim object.
  existingPersistentVolumeClaim: ''
  # -- Additional annotations for the persistent volume claim object.
  annotations: {}
  # -- Additional labels for the persistent volume claim object.
  labels: {}
serviceAccount:
  # -- Create a `ServiceAccount` object.
  create: true
  # -- Specify the service account used for the controller.
  name: ''
  # -- Additional annotations for the role and role binding objects.
  annotations: {}
  # -- Additional labels for the role and role binding objects.
  labels: {}
# -- Pod-level security attributes. More info [here](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context).
securityContext:
  fsGroup: 101
  runAsNonRoot: true
  runAsGroup: 101
  runAsUser: 101
# -- Compute resources used by the container. More info [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
resources: {}
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi
# -- Pod-level affinity. More info [here](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling).
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: NotIn
              values:
                - pi2
# -- Pod-level tolerations. More info [here](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling).
tolerations: []
# - key: node-role.kubernetes.io/control-plane
#   operator: Exists
#   effect: NoSchedule


@@ -0,0 +1,28 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: tools
helmGlobals:
  chartHome: charts
helmCharts:
  - name: clickhouse
    repo: https://charts.pascaliske.dev
    version: 0.4.0
    releaseName: clickhouse
    valuesFile: clickhouseValues.yaml
  - name: databases
    releaseName: clickhouse-databases
patches:
  - target:
      kind: StatefulSet
      name: clickhouse
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/volumeMounts/-
        value:
          name: config-volume
          mountPath: /etc/clickhouse-server/users.d/custom-users.xml
          subPath: custom-users.xml
          readOnly: true


@@ -5,7 +5,7 @@ description: A Helm chart for Kubernetes
 dependencies:
 - name: tool
   version: 0.1.0
-  repository: https://gitea.arcodange.duckdns.org/api/packages/arcodange-org/helm
+  repository: https://gitea.arcodange.lab/api/packages/arcodange-org/helm
 - name: crowdsec
   version: 0.20.1
   repository: https://crowdsecurity.github.io/helm-charts

6
crowdsec/iac/backend.tf Normal file

@@ -0,0 +1,6 @@
terraform {
  backend "gcs" {
    bucket = "arcodange-tf"
    prefix = "tools/crowdsec/main"
  }
}

5
crowdsec/iac/main.tf Normal file

@@ -0,0 +1,5 @@
module "app_roles" {
  source                     = "git::ssh://git@192.168.1.202:2222/arcodange-org/tools.git//hashicorp-vault/iac/modules/app_roles?depth=1&ref=main"
  name                       = "crowdsec"
  service_account_namespaces = ["tools"]
}

16
crowdsec/iac/providers.tf Normal file

@@ -0,0 +1,16 @@
terraform {
  required_providers {
    vault = {
      source  = "vault"
      version = "4.4.0"
    }
  }
}

provider "vault" {
  address = "https://vault.arcodange.lab"
  auth_login_jwt { # TERRAFORM_VAULT_AUTH_JWT environment variable
    mount = "gitea_jwt"
    role  = "gitea_cicd_crowdsec"
  }
}


@@ -0,0 +1,6 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: crowdsec
  namespace: {{ .Release.Namespace }}
automountServiceAccountToken: true


@@ -0,0 +1,14 @@
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: crowdsec
  namespace: {{ .Release.Namespace }}
spec:
  vaultConnectionRef: default
  method: kubernetes
  mount: kubernetes
  kubernetes:
    role: crowdsec
    serviceAccount: crowdsec
    audiences:
      - vault


@@ -0,0 +1,25 @@
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  name: crowdsec-db-credentials
  namespace: {{ .Release.Namespace }}
spec:
  # Mount path of the secrets backend
  mount: postgres
  # Path to the secret
  path: creds/crowdsec
  # Where to store the secrets, VSO will create the secret
  destination:
    create: true
    name: crowdsec-db-credentials
  # Restart these pods when secrets rotated
  rolloutRestartTargets:
    - kind: Deployment
      name: crowdsec-lapi
  # Name of the CRD to authenticate to Vault
  vaultAuthRef: crowdsec


@@ -20,8 +20,14 @@ crowdsec: &crowdsec_config
     env:
     - name: COLLECTIONS
       value: "crowdsecurity/traefik crowdsecurity/http-cve"
+    - name: TZ
+      value: Europe/Paris
   lapi:
+    strategy:
+      type: Recreate
     env:
+    - name: TZ
+      value: Europe/Paris
     # To enroll the Security Engine to the console
     - name: ENROLL_KEY
       value: "cmieq72i3000802jr1wx8kply"
@@ -29,6 +35,16 @@ crowdsec: &crowdsec_config
       value: "homelab"
     - name: ENROLL_TAGS
       value: "k3s rpi test"
+    - name: DB_USER
+      valueFrom:
+        secretKeyRef:
+          name: crowdsec-db-credentials
+          key: username
+    - name: DB_PASSWORD
+      valueFrom:
+        secretKeyRef:
+          name: crowdsec-db-credentials
+          key: password
   appsec:
     enabled: true
     acquisitions:
@@ -39,6 +55,8 @@ crowdsec: &crowdsec_config
       path: /
       source: appsec
     env:
+    - name: TZ
+      value: Europe/Paris
     - name: COLLECTIONS
       value: "crowdsecurity/appsec-virtual-patching crowdsecurity/appsec-generic-rules"
     resources:
@@ -48,6 +66,25 @@ crowdsec: &crowdsec_config
     requests:
       cpu: "100m"
       memory: "200Mi"
+  config:
+    config.yaml.local: |
+      db_config:
+        type: postgresql
+        user: ${DB_USER}
+        password: ${DB_PASSWORD}
+        db_name: crowdsec
+        host: pgbouncer.tools
+        port: 5432
+      api:
+        server:
+          auto_registration: # Activate if not using TLS for authentication
+            enabled: true
+            token: "${REGISTRATION_TOKEN}" # /!\ do not change
+            allowed_ranges: # /!\ adapt to the pod IP ranges used by your cluster
+            - "127.0.0.1/32"
+            - "192.168.0.0/16"
+            - "10.42.0.0/16"
+            - "172.16.0.0/12"
 tool:
   # kind: 'SubChart' or 'HelmChart', if subchart then uncomment Chart.yaml dependency, else comment and use tool library with helm chart template

grafana/Chart.yaml (new file)

@@ -0,0 +1,34 @@
# Chart: keydb-custom
# Helm chart tailored for KeyDB (EqAlpha) on 2 Raspberry Pi 5 nodes
# - Mode: master (statefulset index 0) + replica (index 1)
# - Replica runs as replicaof master at startup
# - server-threads = 4
# - Config mounted via ConfigMap
# - Liveness / readiness probes included
# - Persistence via PersistentVolumeClaim (storageClass configurable)
# -----------------------------------------------------------------------------
# Chart.yaml
# -----------------------------------------------------------------------------
apiVersion: v2
name: grafana
description: A Helm chart for Kubernetes
dependencies:
- name: tool
version: 0.1.0
repository: https://gitea.arcodange.lab/api/packages/arcodange-org/helm
- name: grafana
version: 10.3.0
repository: https://grafana.github.io/helm-charts
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
version: 0.1.0
appVersion: "latest"


@@ -0,0 +1,3 @@
{{- if eq .Values.tool.kind "HelmChart" -}}
{{- include "tool.helm-chart-config.tpl" . -}}
{{- end -}}


@@ -0,0 +1,3 @@
{{- if eq .Values.tool.kind "HelmChart" -}}
{{- include "tool.helm-chart.tpl" . -}}
{{- end -}}

grafana/values.yaml (new file)

File diff suppressed because it is too large.


@@ -5,7 +5,7 @@ description: A Helm chart for Kubernetes
dependencies:
- name: tool
version: 0.1.0
-repository: https://gitea.arcodange.duckdns.org/api/packages/arcodange-org/helm
+repository: https://gitea.arcodange.lab/api/packages/arcodange-org/helm
- name: vault
version: 0.28.1
repository: https://helm.releases.hashicorp.com


@@ -1,8 +1,8 @@
# Vault
-1. The [ansible playbooks](https://gitea.arcodange.duckdns.org/arcodange-org/factory/src/branch/main/ansible/arcodange/factory/playbooks) configure the postgres database and the minimum required to let the "tools" repository apply [a vault configuration via tofu](./iac/) through a gitea actions workflow.
+1. The [ansible playbooks](https://gitea.arcodange.lab/arcodange-org/factory/src/branch/main/ansible/arcodange/factory/playbooks) configure the postgres database and the minimum required to let the "tools" repository apply [a vault configuration via tofu](./iac/) through a gitea actions workflow.
2. Configuration of the authentication backends and roles for postgres and kubernetes. Definition of "${app}-ops" roles so that an application's repository can define its own dependencies in vault. Rotation of postgres credentials for the applications.
-3. [The webapp application repository](https://gitea.arcodange.duckdns.org/arcodange-org/webapp) handles obtaining its own credentials for postgres.
+3. [The webapp application repository](https://gitea.arcodange.lab/arcodange-org/webapp) handles obtaining its own credentials for postgres.
```mermaid
flowchart LR

View File

@@ -116,6 +116,22 @@ The objective is to avoid storing static credentials, by delegating
## 🛠️ Deployed resources
### `VaultConnection`
```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultConnection
metadata:
finalizers:
- vaultconnection.secrets.hashicorp.com/finalizer
labels:
name: default
namespace: {{ .Release.Namespace }}
spec:
address: http://hashicorp-vault.tools.svc.cluster.local:8200
skipTLSVerify: false
```
### `VaultAuth`
```yaml
@@ -125,6 +141,7 @@ metadata:
name: auth
namespace: {{ .Release.Namespace }}
spec:
vaultConnectionRef: default
method: kubernetes
mount: kubernetes
kubernetes:


@@ -27,8 +27,8 @@ resource "vault_database_secret_backend_role" "role" {
"GRANT ${local.name}_role TO \"{{name}}\";",
]
revocation_statements = [
-"REASSIGN OWNED BY \"{{name}}\" TO ${local.name}_role;",
+"REASSIGN OWNED BY \"{{name}}\" TO ${local.name}_role;", # reassign must be executed in the database where the reassigned objects are - TODO (one connection per database/app)
-"REVOKE ALL ON DATABASE ${local.database} FROM \"{{name}}\";", # should we drop the role ?
+"REVOKE ALL ON DATABASE ${local.database} FROM \"{{name}}\";", # should we drop the role ? -> YES after fixing reassign
]
renew_statements = []
rollback_statements = []
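One possible shape for the TODO above (a hypothetical, untested sketch): give Vault one database connection per application database, so the `REASSIGN` runs where the owned objects actually live, after which the dynamic role can be dropped safely:

```hcl
# Hypothetical sketch only - assumes a dedicated Vault connection per app
# database, so revocation runs in the database that owns the objects.
revocation_statements = [
  "REASSIGN OWNED BY \"{{name}}\" TO ${local.name}_role;",
  "DROP OWNED BY \"{{name}}\";",
  "REVOKE ALL ON DATABASE ${local.database} FROM \"{{name}}\";",
  "DROP ROLE IF EXISTS \"{{name}}\";",
]
```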


@@ -8,7 +8,7 @@ terraform {
}
provider "vault" {
-address = "https://vault.arcodange.duckdns.org"
+address = "https://vault.arcodange.lab"
auth_login_jwt { # TERRAFORM_VAULT_AUTH_JWT environment variable
mount = "gitea_jwt"
role = "gitea_cicd"


@@ -1,9 +1,18 @@
applications = [
{ name = "webapp" },
{ name = "erp" },
{ name = "dance-lessons-coach" },
{
name = "cms"
ops_policies = ["factory__cf_r2_arcodange_tf"]
service_account_names = ["cloudflared"]
},
{
name = "crowdsec"
service_account_namespaces = ["tools"]
},
{
name = "plausible"
service_account_namespaces = ["tools"]
},
]


@@ -15,11 +15,11 @@ vault: &vault_config
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt
-traefik.ingress.kubernetes.io/router.tls.domains.0.main: arcodange.duckdns.org
+traefik.ingress.kubernetes.io/router.tls.domains.0.main: arcodange.lab
-traefik.ingress.kubernetes.io/router.tls.domains.0.sans: vault.arcodange.duckdns.org
+traefik.ingress.kubernetes.io/router.tls.domains.0.sans: vault.arcodange.lab
traefik.ingress.kubernetes.io/router.middlewares: localIp@file
hosts:
-- host: vault.arcodange.duckdns.org
+- host: vault.arcodange.lab
paths: []
postStart: [] # https://github.com/hashicorp/vault-helm/blob/main/values.yaml


@@ -5,7 +5,7 @@ description: A Helm chart for Kubernetes
dependencies:
- name: tool
version: 0.1.0
-repository: https://gitea.arcodange.duckdns.org/api/packages/arcodange-org/helm
+repository: https://gitea.arcodange.lab/api/packages/arcodange-org/helm
- name: pgbouncer
version: 2.3.1
repository: https://icoretech.github.io/helm


@@ -14,6 +14,8 @@ pgbouncer: &pgbouncer_config
auth_type: scram-sha-256
auth_query: SELECT uname, phash FROM user_lookup($1)
ignore_startup_parameters: extra_float_digits # unsupported jdbc extra_float_digits=2 argument
server_reset_query: DEALLOCATE ALL # fix prepared statement already exists (crowdsec)
server_idle_timeout: 7200
pgbouncerExporter:
enabled: false
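The `DEALLOCATE ALL` reset query matters because, assuming transaction pooling, named prepared statements live on the shared server connection rather than on a client's session, so one client's statement name collides with the next client's. A toy simulation of that failure mode (no real PgBouncer involved):

```python
# Toy simulation (no real PgBouncer) of why transaction pooling breaks named
# prepared statements: clients share one server connection, so a name PREPAREd
# by one transaction still exists when the next transaction tries to PREPARE it.

server_prepared = {}  # statements living on the shared server connection

def prepare(name, sql):
    if name in server_prepared:
        raise RuntimeError(f'prepared statement "{name}" already exists')
    server_prepared[name] = sql

def server_reset_query():
    # What `server_reset_query: DEALLOCATE ALL` achieves between transactions.
    server_prepared.clear()

prepare("stmt1", "SELECT 1")   # transaction from client A (e.g. crowdsec)
server_reset_query()           # pooler resets the connection between clients
prepare("stmt1", "SELECT 2")   # client B can reuse the name without an error
```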


@@ -5,7 +5,7 @@ description: A Helm chart for Kubernetes
dependencies:
- name: tool
version: 0.1.0
-repository: https://gitea.arcodange.duckdns.org/api/packages/arcodange-org/helm
+repository: https://gitea.arcodange.lab/api/packages/arcodange-org/helm
- name: pgcat
version: 0.1.0
repository: https://improwised.github.io/charts/


@@ -0,0 +1,36 @@
- op: add
path: /spec/template/spec/containers/0/volumeMounts/-
value:
name: generated-secrets
mountPath: /run/secrets
- op: add
path: /spec/template/spec/initContainers/0/volumeMounts
value:
- name: generated-secrets
mountPath: /run/secrets
- op: add
path: /spec/template/spec/initContainers/0
value:
name: build-database-url
image: alpine:3.19
command: ["/bin/sh", "-c"]
args:
- |
echo "postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}" > /run/secrets/DATABASE_URL
volumeMounts:
- name: generated-secrets
mountPath: /run/secrets
env:
- name: DB_USER
valueFrom:
secretKeyRef:
name: plausible-db-credentials
key: username
- name: DB_PASS
valueFrom:
secretKeyRef:
name: plausible-db-credentials
key: password
envFrom:
- configMapRef:
name: plausible-config
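One caveat with the shell interpolation in the init container above: a rotated password containing `@`, `/` or `:` would corrupt the URL unless the userinfo part is percent-encoded. A sketch of the encoding that would be needed in that case (hypothetical values only):

```python
from urllib.parse import quote

# Hypothetical example values; in-cluster they come from plausible-db-credentials.
user, password = "plausible", "p@ss/word"
host, port, db = "pgbouncer.tools", 5432, "plausible"

# Percent-encode the userinfo part so reserved characters survive URL parsing.
url = f"postgres://{quote(user)}:{quote(password, safe='')}@{host}:{port}/{db}"
print(url)
```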

plausible/iac/backend.tf (new file)

@@ -0,0 +1,6 @@
terraform {
backend "gcs" {
bucket = "arcodange-tf"
prefix = "tools/plausible/main"
}
}

plausible/iac/main.tf (new file)

@@ -0,0 +1,30 @@
module "app_roles" {
source = "git::ssh://git@192.168.1.202:2222/arcodange-org/tools.git//hashicorp-vault/iac/modules/app_roles?depth=1&ref=main"
name = "plausible"
service_account_namespaces = ["tools"]
}
# https://github.com/plausible/community-edition/wiki/configuration#database
#SECRET_KEY_BASE (openssl rand -base64 48)
#
resource "random_password" "secret" {
for_each = toset(["48","32"])
length = tonumber(each.value)
special = false
}
locals {
config = {
SECRET_KEY_BASE = base64encode(random_password.secret["48"].result)
TOTP_VAULT_KEY = base64encode(random_password.secret["32"].result)
}
}
resource "vault_kv_secret_v2" "config" {
mount = "kvv2"
name = "plausible/config"
cas = 1
# delete_all_versions = true
data_json = jsonencode(local.config)
}
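Note that `base64encode(random_password...)` is not byte-for-byte what the wiki's `openssl rand -base64 48` produces: openssl base64-encodes 48 random bytes (384 bits of entropy), while the Terraform above encodes a 48-character alphanumeric string (roughly 286 bits). Both outputs happen to be 64 base64 characters and both are ample for a session key; a sketch of the two constructions:

```python
import base64
import secrets
import string

# `openssl rand -base64 48`: 48 random bytes, then base64 (384 bits of entropy).
openssl_style = base64.b64encode(secrets.token_bytes(48)).decode()

# The Terraform above: a 48-char alphanumeric password (special = false),
# then base64 (~5.95 bits per char, so roughly 286 bits of entropy).
alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(48))
tf_style = base64.b64encode(password.encode()).decode()

print(len(openssl_style), len(tf_style))  # both 64 characters
```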


@@ -0,0 +1,16 @@
terraform {
required_providers {
vault = {
source = "vault"
version = "4.4.0"
}
}
}
provider "vault" {
address = "https://vault.arcodange.lab"
auth_login_jwt { # TERRAFORM_VAULT_AUTH_JWT environment variable
mount = "gitea_jwt"
role = "gitea_cicd_plausible"
}
}


@@ -0,0 +1,85 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: tools
# https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_helmchartinflationgenerator_
helmCharts:
- name: plausible
repo: https://charts.pascaliske.dev
version: 2.0.0
releaseName: plausible
valuesFile: plausibleValues.yaml
namespace: tools
patches:
- target:
kind: IngressRoute
name: plausible-route
patch: |-
- op: add
path: /spec/tls
value:
certResolver: letsencrypt
domains:
- main: arcodange.lab
sans:
- analytics.arcodange.lab
resources:
- resources/vaultauth.yaml
- resources/vaultdynamicsecret.yaml
- resources/vaultsecret.yaml
- resources/configmap.yaml
- resources/geoipsecret.yaml
- resources/ingressroute.yaml
patchesJson6902:
- target:
version: v1
kind: Deployment
name: plausible
patch: |-
- op: replace
path: /spec/template/spec/containers/1/env/2
value:
name: GEOIPUPDATE_LICENSE_KEY
valueFrom:
secretKeyRef:
name: plausible-geoip
key: LICENSE_KEY
- op: replace
path: /spec/template/spec/containers/1/env/4
value:
name: GEOIPUPDATE_EDITION_IDS
value: "GeoLite2-Country GeoLite2-City"
- op: add
path: /spec/template/spec/containers/0/env/2
value:
name: IP_GEOLOCATION_DB
value: /geoip/GeoLite2-City.mmdb
- op: add
path: /spec/template/spec/volumes/-
value:
name: generated-secrets
emptyDir:
medium: Memory
- op: add
path: /spec/template/spec/containers/0/envFrom
value:
- configMapRef:
name: plausible-config
- op: add
path: /spec/template/spec/initContainers/0/envFrom
value:
- configMapRef:
name: plausible-config
- op: replace
path: /spec/template/spec/initContainers/0/args
value:
- >-
sleep 10 && /entrypoint.sh db migrate
- target:
version: v1
kind: Deployment
name: plausible
path: add-initcontainer.yaml
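The JSON6902 patches above address env entries by numeric index (e.g. `/spec/template/spec/containers/1/env/2`), which is brittle: if the upstream chart reorders its env list, the patch silently rewrites the wrong entry. A toy resolver (not kustomize's real implementation) showing how those index paths are walked:

```python
# Toy RFC 6902 path resolver: /spec/template/spec/containers/1/env/2 means
# "third env entry of the second container", so the patch depends entirely
# on the upstream chart's list ordering.

def resolve(doc, path):
    """Walk to the parent of the addressed node; return (parent, last_key)."""
    parts = path.strip("/").split("/")
    node = doc
    for key in parts[:-1]:
        node = node[int(key)] if isinstance(node, list) else node[key]
    return node, parts[-1]

def apply_op(doc, op, path, value):
    parent, last = resolve(doc, path)
    if op == "replace":
        if isinstance(parent, list):
            parent[int(last)] = value
        else:
            parent[last] = value
    elif op == "add":
        if isinstance(parent, list):
            if last == "-":              # "-" appends, as in the volume patches
                parent.append(value)
            else:
                parent.insert(int(last), value)
        else:
            parent[last] = value

# Skeleton of the plausible Deployment: second container is the geoip sidecar.
deploy = {"spec": {"template": {"spec": {"containers": [
    {"env": []},
    {"env": [{"name": "A"}, {"name": "B"}, {"name": "OLD_LICENSE_KEY"}]},
]}}}}
apply_op(deploy, "replace", "/spec/template/spec/containers/1/env/2",
         {"name": "GEOIPUPDATE_LICENSE_KEY"})
print(deploy["spec"]["template"]["spec"]["containers"][1]["env"][2]["name"])
```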


@@ -0,0 +1,180 @@
image:
# -- The registry to pull the image from.
registry: ghcr.io
# -- The repository to pull the image from.
repository: plausible/community-edition
# -- The docker tag, if left empty chart's appVersion will be used.
# @default -- `.Chart.AppVersion`
tag: ''
# -- The pull policy for the controller.
pullPolicy: IfNotPresent
nameOverride: ''
fullnameOverride: ''
controller:
# -- Create a workload for this chart.
enabled: true
# -- Type of the workload object.
kind: Deployment
# -- The number of replicas.
replicas: 1
# -- Additional annotations for the controller object.
annotations: {}
# -- Additional labels for the controller object.
labels: {}
service:
# -- Create a service for exposing this chart.
enabled: true
# -- The service type used.
type: ClusterIP
# -- ClusterIP used if service type is `ClusterIP`.
clusterIP: ''
# -- LoadBalancerIP if service type is `LoadBalancer`.
loadBalancerIP: ''
# -- Allowed addresses when service type is `LoadBalancer`.
loadBalancerSourceRanges: []
# -- Additional annotations for the service object.
annotations: {}
# -- Additional labels for the service object.
labels: {}
serviceMonitor:
# -- Create a service monitor for prometheus operator.
enabled: false
# -- How frequently the exporter should be scraped.
interval: 30s
# -- Timeout value for individual scrapes.
timeout: 10s
# -- Additional annotations for the service monitor object.
annotations: {}
# -- Additional labels for the service monitor object.
labels: {}
ingressRoute:
# -- Create an IngressRoute object for exposing this chart.
create: true
# -- List of [entry points](https://doc.traefik.io/traefik/routing/routers/#entrypoints) on which the ingress route will be available.
entryPoints: [websecure]
# -- [Matching rule](https://doc.traefik.io/traefik/routing/routers/#rule) for the underlying router.
rule: Host(`analytics.arcodange.lab`)
# -- List of [middleware objects](https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/#kind-middleware) for the ingress route.
middlewares:
- name: localIp@file
# -- Use an existing secret containing the TLS certificate.
tlsSecretName: ''
# -- Additional annotations for the ingress route object.
annotations: {}
# -- Additional labels for the ingress route object.
labels: {}
certificate:
# -- Create an Certificate object for the exposed chart.
create: false
# -- List of subject alternative names for the certificate.
dnsNames: []
# -- Name of the secret in which the certificate will be stored. Defaults to the first item in dnsNames.
secretName: ''
issuerRef:
# -- Type of the referenced certificate issuer. Can be "Issuer" or "ClusterIssuer".
kind: ClusterIssuer
# -- Name of the referenced certificate issuer.
name: ''
# -- Additional annotations for the certificate object.
annotations: {}
# -- Additional labels for the certificate object.
labels: {}
env:
# -- Timezone for the container.
- name: TZ
value: Europe/Paris
ports:
http:
# -- Enable the port inside the `Deployment` and `Service` objects.
enabled: true
# -- The port used as internal port and cluster-wide port if `.service.type` == `ClusterIP`.
port: 8000
# -- The external port used if `.service.type` == `NodePort`.
nodePort: null
# -- The protocol used for the service.
protocol: TCP
secret:
# -- Create a new secret object.
create: false
# -- Use an existing secret object.
existingSecret: 'plausible-config'
# -- Secret values used when not using an existing secret. Helm templates are supported for values.
values:
# -- Secret key for session tokens.
SECRET_KEY_BASE: '{{ randAlphaNum 42 | b64enc }}'
# -- Encryption token for TOTP secrets.
TOTP_VAULT_KEY: '{{ randAlphaNum 32 | b64enc }}'
# -- Additional annotations for the secret object.
annotations: {}
# -- Additional labels for the secret object.
labels: {}
geoip:
# -- Enable support for MaxMind's GeoLite2 database.
enabled: true
image:
# -- The repository for the geoip image.
repository: ghcr.io/maxmind/geoipupdate
# -- The docker tag for the geoip image.
tag: v7.1.1
# -- Required. MaxMind account ID.
accountId: '1266329'
# -- Required. Case-sensitive MaxMind license key.
# licenseKey: 'kvv2/data/plausible/geoip LICENSE_KEY'
# -- Optional. Database update frequency. Defaults to "168" which equals 7 days.
frequency: 168
# -- Optional. Specify the database mount path inside the containers.
mountPath: /geoip
serviceAccount:
# -- Create a `ServiceAccount` object.
create: true
# -- Specify the service account used for the controller.
name: ''
# -- Additional annotations for the service account object.
annotations: {}
# -- Additional labels for the service account object.
labels: {}
# -- Pod-level security attributes. More info [here](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context).
securityContext: {}
# fsGroup: 1000
# runAsNonRoot: true
# runAsGroup: 1000
# runAsUser: 1000
# -- Compute resources used by the container. More info [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# -- Pod-level affinity. More info [here](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling).
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/hostname
# operator: In
# values:
# - my-node-xyz
# -- Pod-level tolerations. More info [here](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling).
tolerations: []
# - key: node-role.kubernetes.io/control-plane
# operator: Exists
# effect: NoSchedule


@@ -0,0 +1,21 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: plausible-config
namespace: tools
# Doc: https://github.com/plausible/community-edition/wiki/Configuration
data:
DB_HOST: pgbouncer.tools
DB_PORT: !!str 5432
DB_NAME: plausible
BASE_URL: https://analytics.arcodange.lab
CLICKHOUSE_DATABASE_URL: http://plausible:plausiblearcodange@clickhouse.tools:8123/plausible
DB_POOL_SIZE: "30"
DB_QUEUE_TARGET: "10000" # 10 seconds
DB_CONNECT_TIMEOUT: "30000" # 30 seconds
DB_RECONNECT_ATTEMPTS: "5"
DB_RECONNECT_DELAY: "5000"


@@ -0,0 +1,24 @@
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: plausible-geoip
namespace: tools
spec:
type: kv-v2
# mount path
mount: kvv2
# path of the secret
path: plausible/geoip
# dest k8s secret
destination:
name: plausible-geoip
create: true
# static secret refresh interval
refreshAfter: 30s
# Name of the CRD to authenticate to Vault
vaultAuthRef: plausible


@@ -0,0 +1,20 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: plausible-external
labels:
app.kubernetes.io/instance: plausible
app.kubernetes.io/name: plausible
spec:
entryPoints:
- web
routes:
- kind: Rule
match: Host(`analytics.arcodange.fr`) && (PathPrefix(`/api/event`) || PathPrefix(`/js/`))
middlewares:
- name: kube-system-crowdsec@kubernetescrd
services:
- kind: Service
name: plausible-web
namespace: tools
port: 8000


@@ -0,0 +1,14 @@
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: plausible
namespace: tools
spec:
vaultConnectionRef: default
method: kubernetes
mount: kubernetes
kubernetes:
role: plausible
serviceAccount: plausible
audiences:
- vault


@@ -0,0 +1,25 @@
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
name: plausible-db-credentials
namespace: tools
spec:
# Mount path of the secrets backend
mount: postgres
# Path to the secret
path: creds/plausible
# Where to store the secrets, VSO will create the secret
destination:
create: true
name: plausible-db-credentials
# Restart these pods when secrets rotated
rolloutRestartTargets:
- kind: Deployment
name: plausible
# Name of the CRD to authenticate to Vault
vaultAuthRef: plausible


@@ -0,0 +1,24 @@
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: plausible
namespace: tools
spec:
type: kv-v2
# mount path
mount: kvv2
# path of the secret
path: plausible/config
# dest k8s secret
destination:
name: plausible-config
create: true
# static secret refresh interval
refreshAfter: 30s
# Name of the CRD to authenticate to Vault
vaultAuthRef: plausible

prometheus/Chart.yaml (new file)

@@ -0,0 +1,23 @@
apiVersion: v2
name: prometheus
description: A Helm chart for Kubernetes
dependencies:
- name: tool
version: 0.1.0
repository: https://gitea.arcodange.lab/api/packages/arcodange-org/helm
- name: prometheus
version: 28.13.0
repository: https://prometheus-community.github.io/helm-charts
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
version: 0.1.0
appVersion: "v3.10.0"


@@ -0,0 +1,3 @@
{{- if eq .Values.tool.kind "HelmChart" -}}
{{- include "tool.helm-chart-config.tpl" . -}}
{{- end -}}


@@ -0,0 +1,3 @@
{{- if eq .Values.tool.kind "HelmChart" -}}
{{- include "tool.helm-chart.tpl" . -}}
{{- end -}}

prometheus/values.yaml (new file)

File diff suppressed because it is too large.

redis/Chart.yaml (new file)

@@ -0,0 +1,34 @@
# Chart: keydb-custom
# Helm chart tailored for KeyDB (EqAlpha) on 2 Raspberry Pi 5 nodes
# - Mode: master (statefulset index 0) + replica (index 1)
# - Replica runs as replicaof master at startup
# - server-threads = 4
# - Config mounted via ConfigMap
# - Liveness / readiness probes included
# - Persistence via PersistentVolumeClaim (storageClass configurable)
# -----------------------------------------------------------------------------
# Chart.yaml
# -----------------------------------------------------------------------------
apiVersion: v2
name: redis
description: A Helm chart for Kubernetes
dependencies:
- name: tool
version: 0.1.0
repository: https://gitea.arcodange.lab/api/packages/arcodange-org/helm
- name: redis
version: 2.1.0
repository: https://charts.pascaliske.dev
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
version: 0.1.0
appVersion: "latest"

redis/README.md (new file)

@@ -0,0 +1,2 @@
Run `kubectl port-forward -n tools svc/redis 6379:6379` and launch `RedisInsight`.


@@ -0,0 +1,3 @@
{{- if eq .Values.tool.kind "HelmChart" -}}
{{- include "tool.helm-chart-config.tpl" . -}}
{{- end -}}


@@ -0,0 +1,3 @@
{{- if eq .Values.tool.kind "HelmChart" -}}
{{- include "tool.helm-chart.tpl" . -}}
{{- end -}}

redis/values.yaml (new file)

@@ -0,0 +1,197 @@
redis: &redis_config
image:
# -- The repository to pull the image from.
repository: redis
# -- The docker tag, if left empty chart's appVersion will be used.
# @default -- `.Chart.AppVersion`
tag: ''
# -- The pull policy for the controller.
pullPolicy: IfNotPresent
# -- Optionally supply image pull secrets.
imagePullSecrets: []
nameOverride: ''
fullnameOverride: ''
controller:
# -- Create a workload for this chart.
enabled: true
# -- Type of the workload object.
kind: StatefulSet
# -- The number of replicas.
replicas: 1
# -- Additional annotations for the controller object.
annotations: {}
# -- Additional labels for the controller object.
labels: {}
service:
# -- Create a service for exposing this chart.
enabled: true
# -- The service type used.
type: ClusterIP
# -- ClusterIP used if service type is `ClusterIP`.
clusterIP: ''
# -- LoadBalancerIP if service type is `LoadBalancer`.
loadBalancerIP: ''
# -- Allowed addresses when service type is `LoadBalancer`.
loadBalancerSourceRanges: []
# -- Additional annotations for the service object.
annotations: {}
# -- Additional labels for the service object.
labels: {}
serviceMonitor:
# -- Create a service monitor for prometheus operator.
enabled: false
# -- How frequently the exporter should be scraped.
interval: 30s
# -- Timeout value for individual scrapes.
timeout: 10s
# -- Additional annotations for the service monitor object.
annotations: {}
# -- Additional labels for the service monitor object.
labels: {}
redisExporter:
# -- Enable optional redis exporter instance as sidecar container.
enabled: false
# -- Image for the metric exporter
image:
# -- The repository to pull the image from.
repository: oliver006/redis_exporter
# -- The docker tag, if left empty latest will be used.
# @default -- `latest`
tag: 'latest'
# -- The pull policy for the exporter.
pullPolicy: IfNotPresent
# -- Pod-level security attributes. More info [here](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context).
securityContext:
runAsUser: 59000
runAsGroup: 59000
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
# -- Compute resources used by the container. More info [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
resources:
requests:
cpu: 10m
memory: 50Mi
limits:
cpu: 100m
memory: 100Mi
env:
# -- Timezone for the container.
- name: TZ
value: Europe/Paris
# -- List of extra arguments for the container.
extraArgs: []
# - --loglevel warning
ports:
redis:
# -- Enable the port inside the `Controller` and `Service` objects.
enabled: true
# -- The port used as internal port and cluster-wide port if `.service.type` == `ClusterIP`.
port: 6379
# -- The external port used if `.service.type` == `NodePort`.
nodePort: null
# -- The protocol used for the service.
protocol: TCP
# -- The application protocol for this port. Used as hint for implementations to offer richer behavior.
appProtocol: redis
persistentVolumeClaim:
# -- Create a new persistent volume claim object.
create: true
# -- Mount path of the persistent volume claim object.
mountPath: /data
# -- Access mode of the persistent volume claim object.
accessMode: ReadWriteOnce
# -- Volume mode of the persistent volume claim object.
volumeMode: Filesystem
# -- Storage request size for the persistent volume claim object.
size: 1Gi
# -- Storage class name for the persistent volume claim object.
storageClassName: ''
# -- Use an existing persistent volume claim object.
existingPersistentVolumeClaim: ''
# -- Additional annotations for the persistent volume claim object.
annotations: {}
# -- Additional labels for the persistent volume claim object.
labels: {}
serviceAccount:
# -- Specify the service account used for the controller.
name: ''
# -- Optional priority class name to be used for pods.
priorityClassName: ''
# -- Pod-level security attributes. More info [here](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context).
securityContext:
fsGroup: 999
runAsNonRoot: true
runAsGroup: 999
runAsUser: 999
# -- Compute resources used by the container. More info [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# -- Pod-level affinity. More info [here](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling).
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/hostname
# operator: In
# values:
# - my-node-xyz
# -- Pod-level tolerations. More info [here](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling).
tolerations: []
# - key: node-role.kubernetes.io/control-plane
# operator: Exists
# effect: NoSchedule
# -- Pod-level node selector. More info [here](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling).
nodeSelector: {}
# label: value
# -- Specify any extra containers here as dictionary items - each should have its own key.
extraContainers: {}
# container:
# name: my-container
# image: my-org/my-image
# -- Specify extra volume mounts for the default containers.
extraVolumeMounts: []
# - name: my-volume
# mountPath: /path/to/volume
# readOnly: false
# -- Specify extra volumes for the workload.
extraVolumes: []
# - name: my-volume
# secret:
# secretName: my-secret
tool:
# kind: 'SubChart' or 'HelmChart', if subchart then uncomment Chart.yaml dependency, else comment and use tool library with helm chart template
kind: 'SubChart'
repo: https://charts.pascaliske.dev
chart: redis
version: 2.1.0
values: *redis_config