created testing environment

Christopher Bisset 2022-08-06 17:19:26 +10:00
parent ccb57bb8d0
commit d8594f2982
10 changed files with 485 additions and 34 deletions

docker/test/Caddyfile Normal file
@@ -0,0 +1,25 @@
{
	http_port 80
	https_port 443
}

https://headscale-test.local {
	tls internal
	reverse_proxy /web* https://headscale-test-frontend {
		transport http {
			tls_insecure_skip_verify
		}
	}
	reverse_proxy * http://headscale-test-backend:8080
}

:80 {
	reverse_proxy /web* https://headscale-test-frontend {
		transport http {
			tls_insecure_skip_verify
		}
	}
	reverse_proxy * http://headscale-test-backend:8080
}

@@ -0,0 +1,257 @@
---
# headscale will look for a configuration file named `config.yaml` (or `config.json`) in the following order:
#
# - `/etc/headscale`
# - `~/.headscale`
# - current working directory
# The URL clients will connect to.
# Typically this will be a domain like:
#
# https://myheadscale.example.com:443
#
server_url: https://headscale-test.local
# Address to listen to / bind to on the server
#
listen_addr: 0.0.0.0:8080
# Address to listen on for /metrics; you may want
# to keep this endpoint private to your internal
# network
#
metrics_listen_addr: 127.0.0.1:9090
# Address to listen on for gRPC.
# gRPC is used for controlling a headscale server
# remotely with the CLI
# Note: Remote access _only_ works if you have
# valid certificates.
grpc_listen_addr: 0.0.0.0:50443
# Allow the gRPC admin interface to run in INSECURE
# mode. This is not recommended as the traffic will
# be unencrypted. Only enable if you know what you
# are doing.
grpc_allow_insecure: false
# Private key used to encrypt the traffic between headscale
# and Tailscale clients.
# The private key file will be autogenerated if it's missing.
private_key_path: /var/lib/headscale/private.key
# List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address,
# and the associated prefix length, delimited by a slash.
ip_prefixes:
  - fd7a:115c:a1e0::/48
  - 100.64.0.0/10
# DERP is a relay system that Tailscale uses when a direct
# connection cannot be established.
# https://tailscale.com/blog/how-tailscale-works/#encrypted-tcp-relays-derp
#
# headscale needs a list of DERP servers that can be presented
# to the clients.
derp:
  server:
    # If enabled, runs the embedded DERP server and merges it into the rest of the DERP config
    # The Headscale server_url defined above MUST be using https, DERP requires TLS to be in place
    enabled: false

    # Region ID to use for the embedded DERP server.
    # The local DERP prevails if the region ID collides with other region IDs coming from
    # the regular DERP config.
    region_id: 999

    # Region code and name are displayed in the Tailscale UI to identify a DERP region
    region_code: "headscale"
    region_name: "Headscale Embedded DERP"

    # Listens in UDP at the configured address for STUN connections to help on NAT traversal.
    # When the embedded DERP server is enabled, stun_listen_addr MUST be defined.
    #
    # For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/
    stun_listen_addr: "0.0.0.0:3478"

  # List of externally available DERP maps encoded in JSON
  urls:
    - https://controlplane.tailscale.com/derpmap/default

  # Locally available DERP map files encoded in YAML
  #
  # This option is mostly interesting for people hosting
  # their own DERP servers:
  # https://tailscale.com/kb/1118/custom-derp-servers/
  #
  # paths:
  #   - /etc/headscale/derp-example.yaml
  paths: []

  # If enabled, a worker will be set up to periodically
  # refresh the given sources and update the derpmap.
  auto_update_enabled: true

  # How often should we check for DERP updates?
  update_frequency: 24h
# Disables the automatic check for headscale updates on startup
disable_check_updates: false
# Time before an inactive ephemeral node is deleted
ephemeral_node_inactivity_timeout: 30m

# Period to check for node updates in the tailnet. A value too low will severely affect
# CPU consumption of Headscale. A value too high (over 60s) will cause problems
# for the nodes, as they won't get updates or keep-alive messages in time.
# When in doubt, do not touch the default 10s.
node_update_check_interval: 10s
# SQLite config
db_type: sqlite3
db_path: /var/lib/headscale/db.sqlite
# # Postgres config
# db_type: postgres
# db_host: localhost
# db_port: 5432
# db_name: headscale
# db_user: foo
# db_pass: bar
### TLS configuration
#
## Let's encrypt / ACME
#
# headscale supports automatically requesting and setting up
# TLS for a domain with Let's Encrypt.
#
# URL to ACME directory
acme_url: https://acme-v02.api.letsencrypt.org/directory
# Email to register with ACME provider
acme_email: ""
# Domain name to request a TLS certificate for:
tls_letsencrypt_hostname: ""
# Client (Tailscale/Browser) authentication mode (mTLS)
# Acceptable values:
# - disabled: client authentication disabled
# - relaxed: client certificate is required but not verified
# - enforced: client certificate is required and verified
tls_client_auth_mode: relaxed
# Path to store certificates and metadata needed by
# letsencrypt
tls_letsencrypt_cache_dir: /var/lib/headscale/cache
# Type of ACME challenge to use, currently supported types:
# HTTP-01 or TLS-ALPN-01
# See [docs/tls.md](docs/tls.md) for more information
tls_letsencrypt_challenge_type: HTTP-01
# When HTTP-01 challenge is chosen, letsencrypt must set up a
# verification endpoint, and it will be listening on:
# :http = port 80
tls_letsencrypt_listen: ":http"
## Use already defined certificates:
tls_cert_path: ""
tls_key_path: ""
log_level: info
# Path to a file containing ACL policies.
# ACLs can be defined as YAML or HUJSON.
# https://tailscale.com/kb/1018/acls/
acl_policy_path: ""
## DNS
#
# headscale supports Tailscale's DNS configuration and MagicDNS.
# Please have a look at their KB to better understand the concepts:
#
# - https://tailscale.com/kb/1054/dns/
# - https://tailscale.com/kb/1081/magicdns/
# - https://tailscale.com/blog/2021-09-private-dns-with-magicdns/
#
dns_config:
  # List of DNS servers to expose to clients.
  nameservers:
    - 1.1.1.1

  # Split DNS (see https://tailscale.com/kb/1054/dns/),
  # list of search domains and the DNS to query for each one.
  #
  # restricted_nameservers:
  #   foo.bar.com:
  #     - 1.1.1.1
  #   darp.headscale.net:
  #     - 1.1.1.1
  #     - 8.8.8.8

  # Search domains to inject.
  domains: []

  # Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
  # Only works if there is at least one nameserver defined.
  magic_dns: true

  # Defines the base domain to create the hostnames for MagicDNS.
  # `base_domain` must be an FQDN, without the trailing dot.
  # The FQDN of the hosts will be
  # `hostname.namespace.base_domain` (e.g., _myhost.mynamespace.example.com_).
  base_domain: example.com
# Unix socket used for the CLI to connect without authentication
# Note: for local development, you probably want to change this to:
# unix_socket: ./headscale.sock
unix_socket: /var/run/headscale.sock
unix_socket_permission: "0770"
#
# headscale has experimental OpenID Connect support;
# it is still being tested and might have some bugs, please
# help us test it.
# OpenID Connect
# oidc:
# issuer: "https://your-oidc.issuer.com/path"
# client_id: "your-oidc-client-id"
# client_secret: "your-oidc-client-secret"
#
# Customize the scopes used in the OIDC flow and add custom query
# parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
#
# scope: ["openid", "profile", "email", "custom"]
# extra_params:
# domain_hint: example.com
#
# List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
# authentication request will be rejected.
#
# allowed_domains:
# - example.com
# allowed_users:
# - alice@example.com
#
# If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
# This will transform `first-name.last-name@example.com` to the namespace `first-name.last-name`
# If `strip_email_domain` is set to `false`, the domain part will NOT be removed, resulting in the following
# namespace: `first-name.last-name.example.com`
#
# strip_email_domain: true
# Logtail configuration
# Logtail is Tailscale's logging and auditing infrastructure; it allows the control panel
# to instruct tailscale nodes to log their activity to a remote server.
logtail:
  # Enable logtail for this headscale's clients.
  # As there is currently no support for overriding the log server in headscale, this is
  # disabled by default. Enabling this will make your clients send logs to Tailscale Inc.
  enabled: false
# Enabling this option makes devices prefer a random port for WireGuard traffic over the
# default static port 41641. This option is intended as a workaround for some buggy
# firewall devices. See https://tailscale.com/kb/1181/firewalls/ for more information.
randomize_client_port: false

@@ -0,0 +1,38 @@
services:
  headscale-worker-1:
    image: headscale-test-proxy:latest
    container_name: headscale-worker-1
    restart: unless-stopped
    networks:
      headscale-ui-test-network:
    entrypoint: |
      sh -c "tailscaled --tun=userspace-networking --socks5-server=localhost:1055 --outbound-http-proxy-listen=localhost:1055 &
      tailscale up --authkey=$PREAUTH_KEY --login-server=https://headscale-test.local;
      /etc/init.d/tailscale start
      while true; do sleep 1; done"
  headscale-worker-2:
    image: headscale-test-proxy:latest
    container_name: headscale-worker-2
    restart: unless-stopped
    networks:
      headscale-ui-test-network:
    entrypoint: |
      sh -c "tailscaled --tun=userspace-networking --socks5-server=localhost:1055 --outbound-http-proxy-listen=localhost:1055 &
      tailscale up --authkey=$PREAUTH_KEY --login-server=https://headscale-test.local;
      /etc/init.d/tailscale start
      while true; do sleep 1; done"
  headscale-worker-3:
    image: headscale-test-proxy:latest
    container_name: headscale-worker-3
    restart: unless-stopped
    networks:
      headscale-ui-test-network:
    entrypoint: |
      sh -c "tailscaled --tun=userspace-networking --socks5-server=localhost:1055 --outbound-http-proxy-listen=localhost:1055 &
      tailscale up --authkey=$PREAUTH_KEY --login-server=https://headscale-test.local --advertise-routes=10.30.10.1/32,10.30.10.2/32,10.30.10.3/32;
      /etc/init.d/tailscale start
      while true; do sleep 1; done"

networks:
  headscale-ui-test-network:
    external: true
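Once both compose stacks are up, a quick way to confirm the three workers actually registered is to query the backend container. A hedged sketch — `headscale nodes list` matches the CLI style used elsewhere in this commit (`headscale apikeys create`), but verify the subcommand against your headscale version:

```sh
# The three workers should appear here once `tailscale up` has completed in each
docker exec headscale-test-backend headscale nodes list
```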

@@ -0,0 +1,41 @@
services:
  headscale-test-backend:
    image: headscale/headscale:latest-alpine
    container_name: headscale-test-backend
    security_opt:
      - label:disable
    volumes:
      - ./container-config:/etc/headscale
      # - ./container-data/data:/var/lib/headscale
    entrypoint: |
      sh -c "mkdir -p /var/lib/headscale;
      touch /var/lib/headscale/db.sqlite;
      wget -O /etc/headscale/config.yaml https://raw.githubusercontent.com/juanfont/headscale/main/config-example.yaml
      sed -i 's|http://127.0.0.1:8080|https://headscale-test.local|g' /etc/headscale/config.yaml;
      headscale serve"
    restart: unless-stopped
    networks:
      headscale-ui-test-network:
  headscale-test-frontend:
    image: ghcr.io/gurucomputing/headscale-ui:latest
    container_name: headscale-test-frontend
    restart: unless-stopped
    networks:
      headscale-ui-test-network:
  headscale-test-proxy:
    image: headscale-test-proxy:latest
    build: .
    container_name: headscale-test-proxy
    ports:
      - 8080:80
    restart: unless-stopped
    networks:
      headscale-ui-test-network:
        aliases:
          - headscale-test.local

networks:
  headscale-ui-test-network:
    external: true

docker/test/dockerfile Normal file

@@ -0,0 +1,27 @@
FROM alpine:latest
# environment variables
ENV XDG_DATA_HOME=/data/
# Set the staging environment
WORKDIR /staging/scripts
WORKDIR /staging
# Copy across the scripts folder
COPY scripts/* ./scripts/
# Copy default caddy config from project root
COPY ./Caddyfile /staging/Caddyfile
# Set permissions for all scripts. We do not want normal users to have write
# access to the scripts
RUN chown -R 0:0 scripts
RUN chmod -R 755 scripts
# Build the image. This build runs as root
RUN /staging/scripts/1-image-build.sh
# Tell docker that all future commands should run as the appuser user
# USER appuser
WORKDIR /data
ENTRYPOINT /bin/sh /staging/scripts/2-initialise.sh
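The compose file builds this image via `build: .`, but it can also be built by hand. Note the file is named `dockerfile` (lowercase), so on a case-sensitive filesystem `docker build` needs an explicit `-f` flag (a hedged sketch, run from `docker/test`; the tag matches the `image:` used in the compose files):

```sh
docker build -f dockerfile -t headscale-test-proxy:latest .
```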

@@ -0,0 +1,25 @@
#!/bin/sh
set -x
# temporarily set the caddy home to staging
export XDG_DATA_HOME=/staging
# create the group and user
addgroup -S appgroup && adduser -D appuser -G appgroup
# install caddy plus dependencies
apk add --no-cache caddy nss-tools
# install tailscale
echo http://dl-2.alpinelinux.org/alpine/edge/community/ >> /etc/apk/repositories
apk add -U --no-cache tailscale
rc-update add tailscale
# do a dry run of caddy to install the certificates
caddy start
caddy trust -adapter caddyfile -config /staging/Caddyfile
caddy stop
# set the caddy directory to the non-root user
# commented out for now as we need root anyway for tailscale
# chown -R 1000:1000 /staging/caddy

@@ -0,0 +1,23 @@
#!/bin/sh
#----#
# placeholder for testing
# while true; do sleep 1; done
#----#
# copy everything from staging
if [ ! -f /data/Caddyfile ]; then
    echo "no Caddyfile detected, copying across default config"
    cp /staging/Caddyfile /data/Caddyfile
fi
# -d, not -f: /staging/caddy is a directory, not a regular file
if [ ! -d /data/caddy ]; then
    echo "no caddy directory detected, copying across default config"
    cp -r /staging/caddy /data/caddy
fi
# start caddy
echo "Starting Caddy"
/usr/sbin/caddy run --adapter caddyfile --config /data/Caddyfile

@@ -4,7 +4,7 @@ Development can be done either by using the official development docker image, o
## Testing
-All branches should undergo manual testing as specified in the [System Integration Testing](./system-integration-testing.md) document. If someone is well versed in unit automation tests for browser front ends, please educate me! For now do it manually before making a pull request.
+All branches should undergo manual testing as specified in the [Testing](./testing.md) document. If someone is well versed in unit automation tests for browser front ends, please educate me! For now do it manually before making a pull request.
### Quick Start (Docker)
* `docker run -p 443:443 -p 3000:3000 -v "$(pwd)"/data:/data ghcr.io/gurucomputing/headscale-ui-dev:latest`

@@ -1,33 +0,0 @@
## Tests Before Release
Eventually it would be nice to automate this, but I've found front ends are difficult to test automatically. Prove me wrong, other users!
## User Testing
* Create a User
* Delete a User
* Rename a User
* Create a PreAuth Key
* Try all Sort Categories
## Device Testing
* Add a Device with a Preauth Key
* Add a Device with a machine key
* Add a Device with OIDC (if set up to do so)
* Rename a Device
* Try all sort categories
* Create a Tag
* Delete a Tag
* Delete a Device
* Add and approve a route (if set up to do so)
* Change the assigned user for a device
## Failure Test
* Test messages (both console and alerts) with failed apikey
* Test recovery once apikey is back
* Test messages (both console and alerts) with failed URL
* Test recovery once URL is back
## Settings Test
* Verify version comes across once released
## Docker Test
* Verify setting a custom PORT environment variable does not break the image

documentation/testing.md Normal file

@@ -0,0 +1,48 @@
## Using the Test Dockerfiles
The `/docker/test` folder contains a number of local test containers for testing before release. Specifically, `docker-compose.yaml` creates a local environment with self-signed keys, and `docker-compose-workers.yaml` sets up a group of clients that trust those keys.
To use this environment, do the following (a consolidated shell sketch follows this list):
* Navigate to the `/docker/test` directory
* Create a test network: `docker network create headscale-ui-test-network`
* Stand up the main stack with `docker-compose up -d`. This will expose an HTTP (not HTTPS) portal on port `8080`
* Generate an API key with `docker exec headscale-test-backend headscale apikeys create`
* Paste the API key into the UI at `http://<your-ip>:8080/web`
* Generate a pre-auth key that's reusable and ephemeral. Save it into `.env` in the test folder as the following:
  * `PREAUTH_KEY=<Your Preauth Key>`
* Stand up the test bench with `docker-compose -f docker-compose-workers.yaml up -d`; all of the workers should automatically join headscale
* Run your tests in the UI
* Bring down the environment with `docker-compose -f docker-compose-workers.yaml down` and `docker-compose down`.
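The same flow as a hedged shell sketch. Assumptions: you are in `/docker/test`, the API-key paste into the UI stays manual, and the `preauthkeys create` flags and the namespace name `default` are illustrative — match them to your headscale version and the namespace you created in the UI:

```sh
#!/bin/sh
set -e

# One-time: the external network shared by both compose files
docker network create headscale-ui-test-network || true

# Backend, frontend, and proxy; the UI is then at http://<your-ip>:8080/web
docker-compose up -d

# API key to paste into the UI (manual step)
docker exec headscale-test-backend headscale apikeys create

# Reusable, ephemeral pre-auth key for the workers ("default" namespace assumed)
echo "PREAUTH_KEY=$(docker exec headscale-test-backend \
    headscale preauthkeys create --reusable --ephemeral --namespace default)" > .env

# Stand up the worker test bench; tear everything down when finished
docker-compose -f docker-compose-workers.yaml up -d
# ... run the UI tests ...
docker-compose -f docker-compose-workers.yaml down
docker-compose down
```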
## Tests Before Release
Eventually it would be nice to automate this, but I've found front ends are difficult to test automatically. Prove me wrong, other users!
## User Testing
* Create a User
* Delete a User
* Rename a User
* Create a PreAuth Key
* Try all Sort Categories
## Device Testing
* Add a Device with a Preauth Key
* Add a Device with a machine key
* Add a Device with OIDC (if set up to do so)
* Rename a Device
* Try all sort categories
* Create a Tag
* Delete a Tag
* Delete a Device
* Add and approve a route (if set up to do so)
* Change the assigned user for a device
## Failure Test
* Test messages (both console and alerts) with failed apikey
* Test recovery once apikey is back
* Test messages (both console and alerts) with failed URL
* Test recovery once URL is back
## Settings Test
* Verify version comes across once released
## Docker Test
* Verify setting a custom PORT environment variable does not break the image
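A hedged sketch of that Docker check — the image and `/web` path come from this repo's docs, while the assumption that the container serves plain HTTP on `$PORT` should be verified against the image's entrypoint:

```sh
# Start the UI with a non-default port and confirm it still responds
docker run -d --name ui-port-test -e PORT=9999 -p 9999:9999 ghcr.io/gurucomputing/headscale-ui:latest
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9999/web/
docker rm -f ui-port-test
```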