diff --git a/.claude/agents/headscale-integration-tester.md b/.claude/agents/headscale-integration-tester.md index 2b25977d..0ce60eed 100644 --- a/.claude/agents/headscale-integration-tester.md +++ b/.claude/agents/headscale-integration-tester.md @@ -52,7 +52,7 @@ go test ./integration -timeout 45m **Timeout Guidelines by Test Type**: - **Basic functionality tests**: `--timeout=900s` (15 minutes minimum) - **Route/ACL tests**: `--timeout=1200s` (20 minutes) -- **HA/failover tests**: `--timeout=1800s` (30 minutes) +- **HA/failover tests**: `--timeout=1800s` (30 minutes) - **Long-running tests**: `--timeout=2100s` (35 minutes) - **Full test suite**: `-timeout 45m` (45 minutes) @@ -71,7 +71,7 @@ go run ./cmd/hi run "TestName" --timeout=60s - **Slow tests** (5+ min): Node expiration, HA failover - **Long-running tests** (10+ min): `TestNodeOnlineStatus` runs for 12 minutes -**CRITICAL**: Only ONE test can run at a time due to Docker port conflicts and resource constraints. +**CONCURRENT EXECUTION**: Multiple tests CAN run simultaneously. Each test run gets a unique Run ID for isolation. See "Concurrent Execution and Run ID Isolation" section below. ## Test Artifacts and Log Analysis @@ -98,6 +98,97 @@ When tests fail, examine artifacts in this specific order: 4. **Client status dumps** (`*_status.json`): Network state and peer connectivity information 5. **Database snapshots** (`.db` files): For data consistency and state persistence issues +## Concurrent Execution and Run ID Isolation + +### Overview + +The integration test system supports running multiple tests concurrently on the same Docker daemon. Each test run is isolated through a unique Run ID that ensures containers, networks, and cleanup operations don't interfere with each other. + +### Run ID Format and Usage + +Each test run generates a unique Run ID in the format: `YYYYMMDD-HHMMSS-{6-char-hash}` +- Example: `20260109-104215-mdjtzx` + +The Run ID is used for: +- **Container naming**: `ts-{runIDShort}-{version}-{hash}` (e.g., `ts-mdjtzx-1-74-fgdyls`) +- **Docker labels**: All containers get `hi.run-id={runID}` label +- **Log directories**: `control_logs/{runID}/` +- **Cleanup isolation**: Only containers with matching run ID are cleaned up + +### Container Isolation Mechanisms + +1. **Unique Container Names**: Each container includes the run ID for identification +2. **Docker Labels**: `hi.run-id` and `hi.test-type` labels on all containers +3. **Dynamic Port Allocation**: All ports use `{HostPort: "0"}` to let kernel assign free ports +4. **Per-Run Networks**: Network names include scenario hash for isolation +5. **Isolated Cleanup**: `killTestContainersByRunID()` only removes containers matching the run ID + +### ⚠️ CRITICAL: Never Interfere with Other Test Runs + +**FORBIDDEN OPERATIONS** when other tests may be running: + +```bash +# ❌ NEVER do global container cleanup while tests are running +docker rm -f $(docker ps -q --filter "name=hs-") +docker rm -f $(docker ps -q --filter "name=ts-") + +# ❌ NEVER kill all test containers +# This will destroy other agents' test sessions! 
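+# e.g. (hypothetical illustration only - do NOT run): docker kill $(docker ps -q --filter "name=ts-")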
+ +# ❌ NEVER prune all Docker resources during active tests +docker system prune -f # Only safe when NO tests are running +``` + +**SAFE OPERATIONS**: + +```bash +# ✅ Clean up only YOUR test run's containers (by run ID) +# The test runner does this automatically via cleanup functions + +# ✅ Clean stale (stopped/exited) containers only +# Pre-test cleanup only removes stopped containers, not running ones + +# ✅ Check what's running before cleanup +docker ps --filter "name=headscale-test-suite" --format "{{.Names}}" +``` + +### Running Concurrent Tests + +```bash +# Start multiple tests in parallel - each gets unique run ID +go run ./cmd/hi run "TestPingAllByIP" & +go run ./cmd/hi run "TestACLAllowUserDst" & +go run ./cmd/hi run "TestOIDCAuthenticationPingAll" & + +# Monitor running test suites +docker ps --filter "name=headscale-test-suite" --format "table {{.Names}}\t{{.Status}}" +``` + +### Agent Session Isolation Rules + +When working as an agent: + +1. **Your run ID is unique**: Each test you start gets its own run ID +2. **Never clean up globally**: Only use run ID-specific cleanup +3. **Check before cleanup**: Verify no other tests are running if you need to prune resources +4. **Respect other sessions**: Other agents may have tests running concurrently +5. **Log directories are isolated**: Your artifacts are in `control_logs/{your-run-id}/` + +### Identifying Your Containers + +Your test containers can be identified by: +- The run ID in the container name +- The `hi.run-id` Docker label +- The test suite container: `headscale-test-suite-{your-run-id}` + +```bash +# List containers for a specific run ID +docker ps --filter "label=hi.run-id=20260109-104215-mdjtzx" + +# Get your run ID from the test output +# Look for: "Run ID: 20260109-104215-mdjtzx" +``` + ## Common Failure Patterns and Root Cause Analysis ### CRITICAL MINDSET: Code Issues vs Infrastructure Issues @@ -250,10 +341,10 @@ require.NotNil(t, targetNode, "should find expected node") - **Detection**: No progress in logs for >2 minutes during initialization - **Solution**: `docker system prune -f` and retry -3. **Docker Port Conflicts**: Multiple tests trying to use same ports - - **Pattern**: "bind: address already in use" errors - - **Detection**: Port binding failures in Docker logs - - **Solution**: Only run ONE test at a time +3. **Docker Resource Exhaustion**: Too many concurrent tests overwhelming system + - **Pattern**: Container creation timeouts, OOM kills, slow test execution + - **Detection**: System load high, Docker daemon slow to respond + - **Solution**: Reduce number of concurrent tests, wait for completion before starting more **CODE ISSUES (99% of failures)**: 1. 
**Route Approval Process Failures**: Routes not getting approved when they should be @@ -273,12 +364,22 @@ require.NotNil(t, targetNode, "should find expected node") ### Critical Test Environment Setup -**Pre-Test Cleanup (MANDATORY)**: +**Pre-Test Cleanup**: + +The test runner automatically handles cleanup: +- **Before test**: Removes only stale (stopped/exited) containers - does NOT affect running tests +- **After test**: Removes only containers belonging to the specific run ID + ```bash -# ALWAYS run this before each test +# Only clean old log directories if disk space is low rm -rf control_logs/202507* -docker system prune -f df -h # Verify sufficient disk space + +# SAFE: Clean only stale/stopped containers (does not affect running tests) +# The test runner does this automatically via cleanupStaleTestContainers() + +# ⚠️ DANGEROUS: Only use when NO tests are running +docker system prune -f ``` **Environment Verification**: @@ -286,8 +387,8 @@ df -h # Verify sufficient disk space # Verify system readiness go run ./cmd/hi doctor -# Check for running containers that might conflict -docker ps +# Check what tests are currently running (ALWAYS check before global cleanup) +docker ps --filter "name=headscale-test-suite" --format "{{.Names}}" ``` ### Specific Test Categories and Known Issues @@ -433,7 +534,7 @@ When you understand a test's purpose through debugging, always add comprehensive // // The test verifies: // - Route announcements are received and tracked -// - ACL policies control route approval correctly +// - ACL policies control route approval correctly // - Only approved routes appear in peer network maps // - Route state persists correctly in the database func TestSubnetRoutes(t *testing.T) { @@ -535,7 +636,7 @@ var nodeKey key.NodePublic assert.EventuallyWithT(t, func(c *assert.CollectT) { nodes, err := headscale.ListNodes() assert.NoError(c, err) - + for _, node := range nodes { if node.GetName() == "router" { routeNode = node @@ -550,7 +651,7 @@ assert.EventuallyWithT(t, func(c *assert.CollectT) { assert.EventuallyWithT(t, func(c *assert.CollectT) { status, err := client.Status() assert.NoError(c, err) - + peerStatus, ok := status.Peer[nodeKey] assert.True(c, ok, "peer should exist in status") requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes) @@ -566,7 +667,7 @@ assert.EventuallyWithT(t, func(c *assert.CollectT) { nodes, err := headscale.ListNodes() assert.NoError(c, err) assert.Len(c, nodes, 2) - + // Second unrelated external call - WRONG! status, err := client.Status() assert.NoError(c, err) @@ -577,7 +678,7 @@ assert.EventuallyWithT(t, func(c *assert.CollectT) { assert.EventuallyWithT(t, func(c *assert.CollectT) { nodes, err := headscale.ListNodes() assert.NoError(c, err) - + // NEVER do this! 
assert.EventuallyWithT(t, func(c2 *assert.CollectT) { status, _ := client.Status() @@ -666,11 +767,11 @@ When working within EventuallyWithT blocks where you need to prevent panics: assert.EventuallyWithT(t, func(c *assert.CollectT) { nodes, err := headscale.ListNodes() assert.NoError(c, err) - + // For array bounds - use require with t to prevent panic assert.Len(c, nodes, 6) // Test expectation require.GreaterOrEqual(t, len(nodes), 3, "need at least 3 nodes to avoid panic") - + // For nil pointer access - use require with t before dereferencing assert.NotNil(c, srs1PeerStatus.PrimaryRoutes) // Test expectation require.NotNil(t, srs1PeerStatus.PrimaryRoutes, "primary routes must be set to avoid panic") @@ -681,7 +782,7 @@ assert.EventuallyWithT(t, func(c *assert.CollectT) { }, 5*time.Second, 200*time.Millisecond, "checking route state") ``` -**Key Principle**: +**Key Principle**: - Use `assert` with `c` (*assert.CollectT) for test expectations that can be retried - Use `require` with `t` (*testing.T) for MUST conditions that prevent panics - Within EventuallyWithT, both are available - choose based on whether failure would cause a panic @@ -704,7 +805,7 @@ assert.EventuallyWithT(t, func(c *assert.CollectT) { assert.EventuallyWithT(t, func(c *assert.CollectT) { status, err := client.Status() assert.NoError(c, err) - + // Check all peers have expected routes for _, peerKey := range status.Peers() { peerStatus := status.Peer[peerKey] @@ -756,8 +857,14 @@ assert.EventuallyWithT(t, func(c *assert.CollectT) { - **Why security focus**: Integration tests are the last line of defense against security regressions - **EventuallyWithT Usage**: Proper use prevents race conditions without weakening security assertions +6. **Concurrent Execution Awareness**: Respect run ID isolation and never interfere with other agents' test sessions. Each test run has a unique run ID - only clean up YOUR containers (by run ID label), never perform global cleanup while tests may be running. + - **Why this matters**: Multiple agents/users may run tests concurrently on the same Docker daemon + - **Key Rule**: NEVER use global container cleanup commands - the test runner handles cleanup automatically per run ID + **CRITICAL PRINCIPLE**: Test expectations are sacred contracts that define correct system behavior. When tests fail, fix the code to match the test, never change the test to match broken code. Only timing and observability improvements are allowed - business logic expectations are immutable. +**ISOLATION PRINCIPLE**: Each test run is isolated by its unique Run ID. Never interfere with other test sessions. The system handles cleanup automatically - manual global cleanup commands are forbidden when other tests may be running. + **EventuallyWithT PRINCIPLE**: Every external call to headscale server or tailscale client must be wrapped in EventuallyWithT. Follow the five key rules strictly: one external call per block, proper variable scoping, no nesting, use CollectT for assertions, and provide descriptive messages. **Remember**: Test failures are usually code issues in Headscale that need to be fixed, not infrastructure problems to be ignored. Use the specific debugging workflows and failure patterns documented above to efficiently identify root causes. Infrastructure issues have very specific signatures - everything else is code-related. 
diff --git a/.editorconfig b/.editorconfig new file mode 100644 index 00000000..d91a81d8 --- /dev/null +++ b/.editorconfig @@ -0,0 +1,16 @@ +root = true + +[*] +charset = utf-8 +end_of_line = lf +indent_size = 2 +indent_style = space +insert_final_newline = true +trim_trailing_whitespace = true +max_line_length = 120 + +[*.go] +indent_style = tab + +[Makefile] +indent_style = tab diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 2830aa22..594829f9 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -5,8 +5,6 @@ on: branches: - main pull_request: - branches: - - main concurrency: group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }} @@ -17,7 +15,7 @@ jobs: runs-on: ubuntu-latest permissions: write-all steps: - - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 + - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 with: fetch-depth: 2 - name: Get changed files @@ -31,13 +29,12 @@ jobs: - '**/*.go' - 'integration_test/' - 'config-example.yaml' - - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31 + - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34 if: steps.changed-files.outputs.files == 'true' - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3 if: steps.changed-files.outputs.files == 'true' with: - primary-key: - nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', + primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }} restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }} @@ -57,7 +54,7 @@ jobs: exit $BUILD_STATUS - name: Nix gosum diverging - uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1 + uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0 if: failure() && steps.build.outcome == 'failure' with: github-token: ${{secrets.GITHUB_TOKEN}} @@ -69,7 +66,7 @@ jobs: body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}' }) - - uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 + - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0 if: steps.changed-files.outputs.files == 'true' with: name: headscale-linux @@ -84,22 +81,20 @@ jobs: - "GOARCH=arm64 GOOS=darwin" - "GOARCH=amd64 GOOS=darwin" steps: - - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 - - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31 + - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 + - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34 - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3 with: - primary-key: - nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', + primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }} restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }} - name: Run go cross compile env: CGO_ENABLED: 0 - run: - env ${{ matrix.env }} nix develop --command -- go build -o "headscale" + run: env ${{ matrix.env }} nix develop --command -- go build -o "headscale" ./cmd/headscale - - uses: 
actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 + - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0 with: name: "headscale-${{ matrix.env }}" path: "headscale" diff --git a/.github/workflows/check-generated.yml b/.github/workflows/check-generated.yml index 17073a35..43f1d62d 100644 --- a/.github/workflows/check-generated.yml +++ b/.github/workflows/check-generated.yml @@ -16,7 +16,7 @@ jobs: check-generated: runs-on: ubuntu-latest steps: - - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 + - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 with: fetch-depth: 2 - name: Get changed files @@ -31,7 +31,7 @@ jobs: - '**/*.proto' - 'buf.gen.yaml' - 'tools/**' - - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31 + - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34 if: steps.changed-files.outputs.files == 'true' - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3 if: steps.changed-files.outputs.files == 'true' diff --git a/.github/workflows/check-tests.yaml b/.github/workflows/check-tests.yaml index f75a2297..63a18141 100644 --- a/.github/workflows/check-tests.yaml +++ b/.github/workflows/check-tests.yaml @@ -10,7 +10,7 @@ jobs: check-tests: runs-on: ubuntu-latest steps: - - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 + - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 with: fetch-depth: 2 - name: Get changed files @@ -24,13 +24,12 @@ jobs: - '**/*.go' - 'integration_test/' - 'config-example.yaml' - - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31 + - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34 if: steps.changed-files.outputs.files == 'true' - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3 if: steps.changed-files.outputs.files == 'true' with: - primary-key: - nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', + primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }} restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }} diff --git a/.github/workflows/docs-deploy.yml b/.github/workflows/docs-deploy.yml index 7d06b6a6..0a8be5c1 100644 --- a/.github/workflows/docs-deploy.yml +++ b/.github/workflows/docs-deploy.yml @@ -21,15 +21,15 @@ jobs: runs-on: ubuntu-latest steps: - name: Checkout repository - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 + uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 with: fetch-depth: 0 - name: Install python - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0 + uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0 with: python-version: 3.x - name: Setup cache - uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3 + uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0 with: key: ${{ github.ref }} path: .cache diff --git a/.github/workflows/docs-test.yml b/.github/workflows/docs-test.yml index 63c547c8..cab8f95c 100644 --- a/.github/workflows/docs-test.yml +++ b/.github/workflows/docs-test.yml @@ -11,13 +11,13 @@ jobs: runs-on: ubuntu-latest steps: - name: Checkout repository - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 + uses: 
actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 - name: Install python - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0 + uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0 with: python-version: 3.x - name: Setup cache - uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3 + uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0 with: key: ${{ github.ref }} path: .cache diff --git a/.github/workflows/gh-action-integration-generator.go b/.github/workflows/gh-action-integration-generator.go index 048ad768..c0a3d6aa 100644 --- a/.github/workflows/gh-action-integration-generator.go +++ b/.github/workflows/gh-action-integration-generator.go @@ -10,6 +10,55 @@ import ( "strings" ) +// testsToSplit defines tests that should be split into multiple CI jobs. +// Key is the test function name, value is a list of subtest prefixes. +// Each prefix becomes a separate CI job as "TestName/prefix". +// +// Example: TestAutoApproveMultiNetwork has subtests like: +// - TestAutoApproveMultiNetwork/authkey-tag-advertiseduringup-false-pol-database +// - TestAutoApproveMultiNetwork/webauth-user-advertiseduringup-true-pol-file +// +// Splitting by approver type (tag, user, group) creates 6 CI jobs with 4 tests each: +// - TestAutoApproveMultiNetwork/authkey-tag.* (4 tests) +// - TestAutoApproveMultiNetwork/authkey-user.* (4 tests) +// - TestAutoApproveMultiNetwork/authkey-group.* (4 tests) +// - TestAutoApproveMultiNetwork/webauth-tag.* (4 tests) +// - TestAutoApproveMultiNetwork/webauth-user.* (4 tests) +// - TestAutoApproveMultiNetwork/webauth-group.* (4 tests) +// +// This reduces load per CI job (4 tests instead of 12) to avoid infrastructure +// flakiness when running many sequential Docker-based integration tests. +var testsToSplit = map[string][]string{ + "TestAutoApproveMultiNetwork": { + "authkey-tag", + "authkey-user", + "authkey-group", + "webauth-tag", + "webauth-user", + "webauth-group", + }, +} + +// expandTests takes a list of test names and expands any that need splitting +// into multiple subtest patterns. +func expandTests(tests []string) []string { + var expanded []string + for _, test := range tests { + if prefixes, ok := testsToSplit[test]; ok { + // This test should be split into multiple jobs. + // We append ".*" to each prefix because the CI runner wraps patterns + // with ^...$ anchors. Without ".*", a pattern like "authkey$" wouldn't + // match "authkey-tag-advertiseduringup-false-pol-database". 
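+			// e.g. prefix "authkey-tag" yields the pattern "TestAutoApproveMultiNetwork/authkey-tag.*".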
+ for _, prefix := range prefixes { + expanded = append(expanded, fmt.Sprintf("%s/%s.*", test, prefix)) + } + } else { + expanded = append(expanded, test) + } + } + return expanded +} + func findTests() []string { rgBin, err := exec.LookPath("rg") if err != nil { @@ -66,8 +115,11 @@ func updateYAML(tests []string, jobName string, testPath string) { func main() { tests := findTests() - quotedTests := make([]string, len(tests)) - for i, test := range tests { + // Expand tests that should be split into multiple jobs + expandedTests := expandTests(tests) + + quotedTests := make([]string, len(expandedTests)) + for i, test := range expandedTests { quotedTests[i] = fmt.Sprintf("\"%s\"", test) } diff --git a/.github/workflows/gh-actions-updater.yaml b/.github/workflows/gh-actions-updater.yaml index 6bda3440..647e27dc 100644 --- a/.github/workflows/gh-actions-updater.yaml +++ b/.github/workflows/gh-actions-updater.yaml @@ -11,13 +11,13 @@ jobs: runs-on: ubuntu-latest steps: - - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 + - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 with: # [Required] Access token with `workflow` scope. token: ${{ secrets.WORKFLOW_SECRET }} - name: Run GitHub Actions Version Updater - uses: saadmk11/github-actions-version-updater@64be81ba69383f81f2be476703ea6570c4c8686e # v0.8.1 + uses: saadmk11/github-actions-version-updater@d8781caf11d11168579c8e5e94f62b068038f442 # v0.9.0 with: # [Required] Access token with `workflow` scope. token: ${{ secrets.WORKFLOW_SECRET }} diff --git a/.github/workflows/integration-test-template.yml b/.github/workflows/integration-test-template.yml index 3307262f..0a884814 100644 --- a/.github/workflows/integration-test-template.yml +++ b/.github/workflows/integration-test-template.yml @@ -28,23 +28,12 @@ jobs: # that triggered the build. 
HAS_TAILSCALE_SECRET: ${{ secrets.TS_OAUTH_CLIENT_ID }} steps: - - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 + - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 with: fetch-depth: 2 - - name: Get changed files - id: changed-files - uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2 - with: - filters: | - files: - - '*.nix' - - 'go.*' - - '**/*.go' - - 'integration_test/' - - 'config-example.yaml' - name: Tailscale if: ${{ env.HAS_TAILSCALE_SECRET }} - uses: tailscale/github-action@6986d2c82a91fbac2949fe01f5bab95cf21b5102 # v3.2.2 + uses: tailscale/github-action@a392da0a182bba0e9613b6243ebd69529b1878aa # v4.1.0 with: oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }} oauth-secret: ${{ secrets.TS_OAUTH_SECRET }} @@ -52,31 +41,72 @@ jobs: - name: Setup SSH server for Actor if: ${{ env.HAS_TAILSCALE_SECRET }} uses: alexellis/setup-sshd-actor@master - - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31 - if: steps.changed-files.outputs.files == 'true' - - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3 - if: steps.changed-files.outputs.files == 'true' + - name: Download headscale image + uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0 with: - primary-key: - nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', - '**/flake.lock') }} - restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }} + name: headscale-image + path: /tmp/artifacts + - name: Download tailscale HEAD image + uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0 + with: + name: tailscale-head-image + path: /tmp/artifacts + - name: Download hi binary + uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0 + with: + name: hi-binary + path: /tmp/artifacts + - name: Download Go cache + uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0 + with: + name: go-cache + path: /tmp/artifacts + - name: Download postgres image + if: ${{ inputs.postgres_flag == '--postgres=1' }} + uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0 + with: + name: postgres-image + path: /tmp/artifacts + - name: Load Docker images, Go cache, and prepare binary + run: | + gunzip -c /tmp/artifacts/headscale-image.tar.gz | docker load + gunzip -c /tmp/artifacts/tailscale-head-image.tar.gz | docker load + if [ -f /tmp/artifacts/postgres-image.tar.gz ]; then + gunzip -c /tmp/artifacts/postgres-image.tar.gz | docker load + fi + chmod +x /tmp/artifacts/hi + docker images + # Extract Go cache to host directories for bind mounting + mkdir -p /tmp/go-cache + tar -xzf /tmp/artifacts/go-cache.tar.gz -C /tmp/go-cache + ls -la /tmp/go-cache/ /tmp/go-cache/.cache/ - name: Run Integration Test - if: always() && steps.changed-files.outputs.files == 'true' - run: - nix develop --command -- hi run --stats --ts-memory-limit=300 --hs-memory-limit=1500 "^${{ inputs.test }}$" \ + env: + HEADSCALE_INTEGRATION_HEADSCALE_IMAGE: headscale:${{ github.sha }} + HEADSCALE_INTEGRATION_TAILSCALE_IMAGE: tailscale-head:${{ github.sha }} + HEADSCALE_INTEGRATION_POSTGRES_IMAGE: ${{ inputs.postgres_flag == '--postgres=1' && format('postgres:{0}', github.sha) || '' }} + HEADSCALE_INTEGRATION_GO_CACHE: /tmp/go-cache/go + HEADSCALE_INTEGRATION_GO_BUILD_CACHE: /tmp/go-cache/.cache/go-build + run: /tmp/artifacts/hi run --stats --ts-memory-limit=300 --hs-memory-limit=1500 "^${{ 
inputs.test }}$" \ --timeout=120m \ ${{ inputs.postgres_flag }} - - uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 - if: always() && steps.changed-files.outputs.files == 'true' + # Sanitize test name for artifact upload (replace invalid characters: " : < > | * ? \ / with -) + - name: Sanitize test name for artifacts + if: always() + id: sanitize + run: echo "name=${TEST_NAME//[\":<>|*?\\\/]/-}" >> $GITHUB_OUTPUT + env: + TEST_NAME: ${{ inputs.test }} + - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0 + if: always() with: - name: ${{ inputs.database_name }}-${{ inputs.test }}-logs + name: ${{ inputs.database_name }}-${{ steps.sanitize.outputs.name }}-logs path: "control_logs/*/*.log" - - uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2 - if: always() && steps.changed-files.outputs.files == 'true' + - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0 + if: always() with: - name: ${{ inputs.database_name }}-${{ inputs.test }}-archives - path: "control_logs/*/*.tar" + name: ${{ inputs.database_name }}-${{ steps.sanitize.outputs.name }}-artifacts + path: control_logs/ - name: Setup a blocking tmux session if: ${{ env.HAS_TAILSCALE_SECRET }} uses: alexellis/block-with-tmux-action@master diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml index 2959c18a..75088b38 100644 --- a/.github/workflows/lint.yml +++ b/.github/workflows/lint.yml @@ -10,7 +10,7 @@ jobs: golangci-lint: runs-on: ubuntu-latest steps: - - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 + - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 with: fetch-depth: 2 - name: Get changed files @@ -24,13 +24,12 @@ jobs: - '**/*.go' - 'integration_test/' - 'config-example.yaml' - - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31 + - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34 if: steps.changed-files.outputs.files == 'true' - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3 if: steps.changed-files.outputs.files == 'true' with: - primary-key: - nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', + primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }} restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }} @@ -46,7 +45,7 @@ jobs: prettier-lint: runs-on: ubuntu-latest steps: - - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 + - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 with: fetch-depth: 2 - name: Get changed files @@ -65,13 +64,12 @@ jobs: - '**/*.css' - '**/*.scss' - '**/*.html' - - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31 + - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34 if: steps.changed-files.outputs.files == 'true' - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3 if: steps.changed-files.outputs.files == 'true' with: - primary-key: - nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', + primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }} restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }} @@ -83,12 +81,11 @@ jobs: proto-lint: runs-on: ubuntu-latest steps: - - uses: 
actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 - - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31 + - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 + - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34 - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3 with: - primary-key: - nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', + primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }} restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }} diff --git a/.github/workflows/nix-module-test.yml b/.github/workflows/nix-module-test.yml new file mode 100644 index 00000000..68ad9545 --- /dev/null +++ b/.github/workflows/nix-module-test.yml @@ -0,0 +1,55 @@ +name: NixOS Module Tests + +on: + push: + branches: + - main + pull_request: + branches: + - main + +concurrency: + group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }} + cancel-in-progress: true + +jobs: + nix-module-check: + runs-on: ubuntu-latest + permissions: + contents: read + + steps: + - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 + with: + fetch-depth: 2 + + - name: Get changed files + id: changed-files + uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2 + with: + filters: | + nix: + - 'nix/**' + - 'flake.nix' + - 'flake.lock' + go: + - 'go.*' + - '**/*.go' + - 'cmd/**' + - 'hscontrol/**' + + - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34 + if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true' + + - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3 + if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true' + with: + primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', + '**/flake.lock') }} + restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }} + + - name: Run NixOS module tests + if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true' + run: | + echo "Running NixOS module integration test..." 
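+          # -L (--print-build-logs) streams the full build log so failures are visible in the CI output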
+ nix build .#checks.x86_64-linux.headscale -L diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 5b6fd18d..4835e255 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -13,28 +13,27 @@ jobs: runs-on: ubuntu-latest steps: - name: Checkout - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 + uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 with: fetch-depth: 0 - name: Login to DockerHub - uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0 + uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0 with: username: ${{ secrets.DOCKERHUB_USERNAME }} password: ${{ secrets.DOCKERHUB_TOKEN }} - name: Login to GHCR - uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0 + uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0 with: registry: ghcr.io username: ${{ github.repository_owner }} password: ${{ secrets.GITHUB_TOKEN }} - - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31 + - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34 - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3 with: - primary-key: - nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', + primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }} restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }} diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml index 1041f1af..0915ec2c 100644 --- a/.github/workflows/stale.yml +++ b/.github/workflows/stale.yml @@ -12,16 +12,14 @@ jobs: issues: write pull-requests: write steps: - - uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0 + - uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1 with: days-before-issue-stale: 90 days-before-issue-close: 7 stale-issue-label: "stale" - stale-issue-message: - "This issue is stale because it has been open for 90 days with no + stale-issue-message: "This issue is stale because it has been open for 90 days with no activity." - close-issue-message: - "This issue was closed because it has been inactive for 14 days + close-issue-message: "This issue was closed because it has been inactive for 14 days since being marked as stale." days-before-pr-stale: -1 days-before-pr-close: -1 diff --git a/.github/workflows/test-integration.yaml b/.github/workflows/test-integration.yaml index f5ec43a1..82b40044 100644 --- a/.github/workflows/test-integration.yaml +++ b/.github/workflows/test-integration.yaml @@ -7,7 +7,117 @@ concurrency: group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }} cancel-in-progress: true jobs: + # build: Builds binaries and Docker images once, uploads as artifacts for reuse. + # build-postgres: Pulls postgres image separately to avoid Docker Hub rate limits. + # sqlite: Runs all integration tests with SQLite backend. + # postgres: Runs a subset of tests with PostgreSQL to verify database compatibility. 
+ build: + runs-on: ubuntu-latest + outputs: + files-changed: ${{ steps.changed-files.outputs.files }} + steps: + - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 + with: + fetch-depth: 2 + - name: Get changed files + id: changed-files + uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2 + with: + filters: | + files: + - '*.nix' + - 'go.*' + - '**/*.go' + - 'integration/**' + - 'config-example.yaml' + - '.github/workflows/test-integration.yaml' + - '.github/workflows/integration-test-template.yml' + - 'Dockerfile.*' + - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34 + if: steps.changed-files.outputs.files == 'true' + - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3 + if: steps.changed-files.outputs.files == 'true' + with: + primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }} + restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }} + - name: Build binaries and warm Go cache + if: steps.changed-files.outputs.files == 'true' + run: | + # Build all Go binaries in one nix shell to maximize cache reuse + nix develop --command -- bash -c ' + go build -o hi ./cmd/hi + CGO_ENABLED=0 GOOS=linux go build -o headscale ./cmd/headscale + # Build integration test binary to warm the cache with all dependencies + go test -c ./integration -o /dev/null 2>/dev/null || true + ' + - name: Upload hi binary + if: steps.changed-files.outputs.files == 'true' + uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0 + with: + name: hi-binary + path: hi + retention-days: 10 + - name: Package Go cache + if: steps.changed-files.outputs.files == 'true' + run: | + # Package Go module cache and build cache + tar -czf go-cache.tar.gz -C ~ go .cache/go-build + - name: Upload Go cache + if: steps.changed-files.outputs.files == 'true' + uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0 + with: + name: go-cache + path: go-cache.tar.gz + retention-days: 10 + - name: Build headscale image + if: steps.changed-files.outputs.files == 'true' + run: | + docker build \ + --file Dockerfile.integration-ci \ + --tag headscale:${{ github.sha }} \ + . + docker save headscale:${{ github.sha }} | gzip > headscale-image.tar.gz + - name: Build tailscale HEAD image + if: steps.changed-files.outputs.files == 'true' + run: | + docker build \ + --file Dockerfile.tailscale-HEAD \ + --tag tailscale-head:${{ github.sha }} \ + . 
+ docker save tailscale-head:${{ github.sha }} | gzip > tailscale-head-image.tar.gz + - name: Upload headscale image + if: steps.changed-files.outputs.files == 'true' + uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0 + with: + name: headscale-image + path: headscale-image.tar.gz + retention-days: 10 + - name: Upload tailscale HEAD image + if: steps.changed-files.outputs.files == 'true' + uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0 + with: + name: tailscale-head-image + path: tailscale-head-image.tar.gz + retention-days: 10 + build-postgres: + runs-on: ubuntu-latest + needs: build + if: needs.build.outputs.files-changed == 'true' + steps: + - name: Pull and save postgres image + run: | + docker pull postgres:latest + docker tag postgres:latest postgres:${{ github.sha }} + docker save postgres:${{ github.sha }} | gzip > postgres-image.tar.gz + - name: Upload postgres image + uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0 + with: + name: postgres-image + path: postgres-image.tar.gz + retention-days: 10 sqlite: + needs: build + if: needs.build.outputs.files-changed == 'true' strategy: fail-fast: false matrix: @@ -25,6 +135,13 @@ jobs: - TestACLAutogroupTagged - TestACLAutogroupSelf - TestACLPolicyPropagationOverTime + - TestACLTagPropagation + - TestACLTagPropagationPortSpecific + - TestACLGroupWithUnknownUser + - TestACLGroupAfterUserDeletion + - TestACLGroupDeletionExactReproduction + - TestACLDynamicUnknownUserAddition + - TestACLDynamicUnknownUserRemoval - TestAPIAuthenticationBypass - TestAPIAuthenticationBypassCurl - TestGRPCAuthenticationBypass @@ -32,6 +149,8 @@ jobs: - TestAuthKeyLogoutAndReloginSameUser - TestAuthKeyLogoutAndReloginNewUser - TestAuthKeyLogoutAndReloginSameUserExpiredKey + - TestAuthKeyDeleteKey + - TestAuthKeyLogoutAndReloginRoutesPreserved - TestOIDCAuthenticationPingAll - TestOIDCExpireNodesBasedOnTokenExpiry - TestOIDC024UserCreation @@ -41,6 +160,8 @@ jobs: - TestOIDCMultipleOpenedLoginUrls - TestOIDCReloginSameNodeSameUser - TestOIDCExpiryAfterRestart + - TestOIDCACLPolicyOnJoin + - TestOIDCReloginSameUserRoutesPreserved - TestAuthWebFlowAuthenticationPingAll - TestAuthWebFlowLogoutAndReloginSameUser - TestAuthWebFlowLogoutAndReloginNewUser @@ -49,13 +170,11 @@ jobs: - TestPreAuthKeyCommandWithoutExpiry - TestPreAuthKeyCommandReusableEphemeral - TestPreAuthKeyCorrectUserLoggedInCommand + - TestTaggedNodesCLIOutput - TestApiKeyCommand - - TestNodeTagCommand - - TestNodeAdvertiseTagCommand - TestNodeCommand - TestNodeExpireCommand - TestNodeRenameCommand - - TestNodeMoveCommand - TestPolicyCommand - TestPolicyBrokenConfigCommand - TestDERPVerifyEndpoint @@ -82,7 +201,12 @@ jobs: - TestEnablingExitRoutes - TestSubnetRouterMultiNetwork - TestSubnetRouterMultiNetworkExitNode - - TestAutoApproveMultiNetwork + - TestAutoApproveMultiNetwork/authkey-tag.* + - TestAutoApproveMultiNetwork/authkey-user.* + - TestAutoApproveMultiNetwork/authkey-group.* + - TestAutoApproveMultiNetwork/webauth-tag.* + - TestAutoApproveMultiNetwork/webauth-user.* + - TestAutoApproveMultiNetwork/webauth-group.* - TestSubnetRouteACLFiltering - TestHeadscale - TestTailscaleNodesJoiningHeadcale @@ -92,12 +216,46 @@ jobs: - TestSSHIsBlockedInACL - TestSSHUserOnlyIsolation - TestSSHAutogroupSelf + - TestTagsAuthKeyWithTagRequestDifferentTag + - TestTagsAuthKeyWithTagNoAdvertiseFlag + - TestTagsAuthKeyWithTagCannotAddViaCLI + - TestTagsAuthKeyWithTagCannotChangeViaCLI + - 
TestTagsAuthKeyWithTagAdminOverrideReauthPreserves + - TestTagsAuthKeyWithTagCLICannotModifyAdminTags + - TestTagsAuthKeyWithoutTagCannotRequestTags + - TestTagsAuthKeyWithoutTagRegisterNoTags + - TestTagsAuthKeyWithoutTagCannotAddViaCLI + - TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithReset + - TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithEmptyAdvertise + - TestTagsAuthKeyWithoutTagCLICannotReduceAdminMultiTag + - TestTagsUserLoginOwnedTagAtRegistration + - TestTagsUserLoginNonExistentTagAtRegistration + - TestTagsUserLoginUnownedTagAtRegistration + - TestTagsUserLoginAddTagViaCLIReauth + - TestTagsUserLoginRemoveTagViaCLIReauth + - TestTagsUserLoginCLINoOpAfterAdminAssignment + - TestTagsUserLoginCLICannotRemoveAdminTags + - TestTagsAuthKeyWithTagRequestNonExistentTag + - TestTagsAuthKeyWithTagRequestUnownedTag + - TestTagsAuthKeyWithoutTagRequestNonExistentTag + - TestTagsAuthKeyWithoutTagRequestUnownedTag + - TestTagsAdminAPICannotSetNonExistentTag + - TestTagsAdminAPICanSetUnownedTag + - TestTagsAdminAPICannotRemoveAllTags + - TestTagsIssue2978ReproTagReplacement + - TestTagsAdminAPICannotSetInvalidFormat + - TestTagsUserLoginReauthWithEmptyTagsRemovesAllTags + - TestTagsAuthKeyWithoutUserInheritsTags + - TestTagsAuthKeyWithoutUserRejectsAdvertisedTags uses: ./.github/workflows/integration-test-template.yml + secrets: inherit with: test: ${{ matrix.test }} postgres_flag: "--postgres=0" database_name: "sqlite" postgres: + needs: [build, build-postgres] + if: needs.build.outputs.files-changed == 'true' strategy: fail-fast: false matrix: @@ -108,6 +266,7 @@ jobs: - TestPingAllByIPManyUpDown - TestSubnetRouterMultiNetwork uses: ./.github/workflows/integration-test-template.yml + secrets: inherit with: test: ${{ matrix.test }} postgres_flag: "--postgres=1" diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml index d43f8e83..31eb431b 100644 --- a/.github/workflows/test.yml +++ b/.github/workflows/test.yml @@ -11,7 +11,7 @@ jobs: runs-on: ubuntu-latest steps: - - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 + - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1 with: fetch-depth: 2 @@ -27,13 +27,12 @@ jobs: - 'integration_test/' - 'config-example.yaml' - - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31 + - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34 if: steps.changed-files.outputs.files == 'true' - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3 if: steps.changed-files.outputs.files == 'true' with: - primary-key: - nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', + primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }} restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }} diff --git a/.gitignore b/.gitignore index 28d23c09..4fec4f53 100644 --- a/.gitignore +++ b/.gitignore @@ -2,6 +2,7 @@ ignored/ tailscale/ .vscode/ .claude/ +logs/ *.prof diff --git a/.golangci.yaml b/.golangci.yaml index 79b042a0..eda3bed4 100644 --- a/.golangci.yaml +++ b/.golangci.yaml @@ -7,6 +7,7 @@ linters: - depguard - dupl - exhaustruct + - funcorder - funlen - gochecknoglobals - gochecknoinits @@ -28,6 +29,15 @@ linters: - wrapcheck - wsl settings: + forbidigo: + forbid: + # Forbid time.Sleep everywhere with context-appropriate alternatives + - pattern: 'time\.Sleep' + msg: >- + time.Sleep is forbidden. 
+ In tests: use assert.EventuallyWithT for polling/waiting patterns. + In production code: use a backoff strategy (e.g., cenkalti/backoff) or proper synchronization primitives. + analyze-types: true gocritic: disabled-checks: - appendAssign diff --git a/.goreleaser.yml b/.goreleaser.yml index eea46d39..f77dfe38 100644 --- a/.goreleaser.yml +++ b/.goreleaser.yml @@ -125,7 +125,7 @@ kos: # bare tells KO to only use the repository # for tagging and naming the container. bare: true - base_image: gcr.io/distroless/base-debian12 + base_image: gcr.io/distroless/base-debian13 build: headscale main: ./cmd/headscale env: @@ -154,7 +154,7 @@ kos: - headscale/headscale bare: true - base_image: gcr.io/distroless/base-debian12:debug + base_image: gcr.io/distroless/base-debian13:debug build: headscale main: ./cmd/headscale env: diff --git a/.mcp.json b/.mcp.json index 1303afda..71554002 100644 --- a/.mcp.json +++ b/.mcp.json @@ -3,45 +3,31 @@ "claude-code-mcp": { "type": "stdio", "command": "npx", - "args": [ - "-y", - "@steipete/claude-code-mcp@latest" - ], + "args": ["-y", "@steipete/claude-code-mcp@latest"], "env": {} }, "sequential-thinking": { "type": "stdio", "command": "npx", - "args": [ - "-y", - "@modelcontextprotocol/server-sequential-thinking" - ], + "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"], "env": {} }, "nixos": { "type": "stdio", "command": "uvx", - "args": [ - "mcp-nixos" - ], + "args": ["mcp-nixos"], "env": {} }, "context7": { "type": "stdio", "command": "npx", - "args": [ - "-y", - "@upstash/context7-mcp" - ], + "args": ["-y", "@upstash/context7-mcp"], "env": {} }, "git": { "type": "stdio", "command": "npx", - "args": [ - "-y", - "@cyanheads/git-mcp-server" - ], + "args": ["-y", "@cyanheads/git-mcp-server"], "env": {} } } diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml new file mode 100644 index 00000000..ed869775 --- /dev/null +++ b/.pre-commit-config.yaml @@ -0,0 +1,68 @@ +# prek/pre-commit configuration for headscale +# See: https://prek.j178.dev/quickstart/ +# See: https://prek.j178.dev/builtin/ + +# Global exclusions - ignore generated code +exclude: ^gen/ + +repos: + # Built-in hooks from pre-commit/pre-commit-hooks + # prek will use fast-path optimized versions automatically + # See: https://prek.j178.dev/builtin/ + - repo: https://github.com/pre-commit/pre-commit-hooks + rev: v6.0.0 + hooks: + - id: check-added-large-files + - id: check-case-conflict + - id: check-executables-have-shebangs + - id: check-json + - id: check-merge-conflict + - id: check-symlinks + - id: check-toml + - id: check-xml + - id: check-yaml + - id: detect-private-key + - id: end-of-file-fixer + - id: fix-byte-order-marker + - id: mixed-line-ending + - id: trailing-whitespace + + # Local hooks for project-specific tooling + - repo: local + hooks: + # nixpkgs-fmt for Nix files + - id: nixpkgs-fmt + name: nixpkgs-fmt + entry: nixpkgs-fmt + language: system + files: \.nix$ + + # Prettier for formatting + - id: prettier + name: prettier + entry: prettier --write --list-different + language: system + exclude: ^docs/ + types_or: + [ + javascript, + jsx, + ts, + tsx, + yaml, + json, + toml, + html, + css, + scss, + sass, + markdown, + ] + + # golangci-lint for Go code quality + - id: golangci-lint + name: golangci-lint + entry: nix develop --command golangci-lint run --new-from-rev=HEAD~1 --timeout=5m --fix + language: system + types: [go] + pass_filenames: false diff --git a/.prettierignore b/.prettierignore index 4452a8a6..ebb727cc 100644 --- a/.prettierignore +++ 
b/.prettierignore @@ -1,5 +1,5 @@ .github/workflows/test-integration-v2* docs/about/features.md +docs/ref/api.md docs/ref/configuration.md docs/ref/oidc.md -docs/ref/remote-cli.md diff --git a/AGENTS.md b/AGENTS.md new file mode 100644 index 00000000..2432ea28 --- /dev/null +++ b/AGENTS.md @@ -0,0 +1,1051 @@ +# AGENTS.md + +This file provides guidance to AI agents when working with code in this repository. + +## Overview + +Headscale is an open-source implementation of the Tailscale control server written in Go. It provides self-hosted coordination for Tailscale networks (tailnets), managing node registration, IP allocation, policy enforcement, and DERP routing. + +## Development Commands + +### Quick Setup + +```bash +# Recommended: Use Nix for dependency management +nix develop + +# Full development workflow +make dev # runs fmt + lint + test + build +``` + +### Essential Commands + +```bash +# Build headscale binary +make build + +# Run tests +make test +go test ./... # All unit tests +go test -race ./... # With race detection + +# Run specific integration test +go run ./cmd/hi run "TestName" --postgres + +# Code formatting and linting +make fmt # Format all code (Go, docs, proto) +make lint # Lint all code (Go, proto) +make fmt-go # Format Go code only +make lint-go # Lint Go code only + +# Protocol buffer generation (after modifying proto/) +make generate + +# Clean build artifacts +make clean +``` + +### Integration Testing + +```bash +# Use the hi (Headscale Integration) test runner +go run ./cmd/hi doctor # Check system requirements +go run ./cmd/hi run "TestPattern" # Run specific test +go run ./cmd/hi run "TestPattern" --postgres # With PostgreSQL backend + +# Test artifacts are saved to control_logs/ with logs and debug data +``` + +## Pre-Commit Quality Checks + +### **MANDATORY: Automated Pre-Commit Hooks with prek** + +**CRITICAL REQUIREMENT**: This repository uses [prek](https://prek.j178.dev/) for automated pre-commit hooks. All commits are automatically validated for code quality, formatting, and common issues. + +### Initial Setup + +When you first clone the repository or enter the nix shell, install the git hooks: + +```bash +# Enter nix development environment +nix develop + +# Install prek git hooks (one-time setup) +prek install +``` + +This installs the pre-commit hook at `.git/hooks/pre-commit` which automatically runs all configured checks before each commit. 
+ +### Configured Hooks + +The repository uses `.pre-commit-config.yaml` with the following hooks: + +**Built-in Checks** (optimized fast-path execution): + +- `check-added-large-files` - Prevents accidentally committing large files +- `check-case-conflict` - Checks for files that would conflict in case-insensitive filesystems +- `check-executables-have-shebangs` - Ensures executables have proper shebangs +- `check-json` - Validates JSON syntax +- `check-merge-conflict` - Prevents committing files with merge conflict markers +- `check-symlinks` - Checks for broken symlinks +- `check-toml` - Validates TOML syntax +- `check-xml` - Validates XML syntax +- `check-yaml` - Validates YAML syntax +- `detect-private-key` - Detects accidentally committed private keys +- `end-of-file-fixer` - Ensures files end with a newline +- `fix-byte-order-marker` - Removes UTF-8 byte order markers +- `mixed-line-ending` - Prevents mixed line endings +- `trailing-whitespace` - Removes trailing whitespace + +**Project-Specific Hooks**: + +- `nixpkgs-fmt` - Formats Nix files +- `prettier` - Formats markdown, YAML, JSON, and TOML files +- `golangci-lint` - Runs Go linter with auto-fix on changed files only + +### Manual Hook Execution + +Run hooks manually without making a commit: + +```bash +# Run hooks on staged files only +prek run + +# Run hooks on all files in the repository +prek run --all-files + +# Run a specific hook +prek run golangci-lint + +# Run hooks on specific files +prek run --files path/to/file1.go path/to/file2.go +``` + +### Workflow Pattern + +With prek installed, your normal workflow becomes: + +```bash +# 1. Make your code changes +vim hscontrol/state/state.go + +# 2. Stage your changes +git add . + +# 3. Commit - hooks run automatically +git commit -m "feat: add new feature" + +# If hooks fail, they will show which checks failed +# Fix the issues and try committing again +``` + +### Manual golangci-lint + +While golangci-lint runs automatically via prek, you can also run it manually: + +```bash +# If you have upstream remote configured (recommended) +golangci-lint run --new-from-rev=upstream/main --timeout=5m --fix + +# If you only have origin remote +golangci-lint run --new-from-rev=main --timeout=5m --fix +``` + +**Important**: Always use `--new-from-rev` to only lint changed files. This prevents formatting the entire repository and keeps changes focused on your actual modifications. + +### Skipping Hooks (Not Recommended) + +In rare cases where you need to skip hooks (e.g., work-in-progress commits), use: + +```bash +git commit --no-verify -m "WIP: work in progress" +``` + +**WARNING**: Only use `--no-verify` for temporary WIP commits on feature branches. All commits to main must pass all hooks. 
+ +### Troubleshooting + +**Hook installation issues**: + +```bash +# Check if hooks are installed +ls -la .git/hooks/pre-commit + +# Reinstall hooks +prek install +``` + +**Hooks running slow**: + +```bash +# prek uses optimized fast-path for built-in hooks +# If running slow, check which hook is taking time with verbose output +prek run -v +``` + +**Update hook configuration**: + +```bash +# After modifying .pre-commit-config.yaml, hooks will automatically use new config +# No reinstallation needed +``` + +## Project Structure & Architecture + +### Top-Level Organization + +``` +headscale/ +├── cmd/ # Command-line applications +│ ├── headscale/ # Main headscale server binary +│ └── hi/ # Headscale Integration test runner +├── hscontrol/ # Core control plane logic +├── integration/ # End-to-end Docker-based tests +├── proto/ # Protocol buffer definitions +├── gen/ # Generated code (protobuf) +├── docs/ # Documentation +└── packaging/ # Distribution packaging +``` + +### Core Packages (`hscontrol/`) + +**Main Server (`hscontrol/`)** + +- `app.go`: Application setup, dependency injection, server lifecycle +- `handlers.go`: HTTP/gRPC API endpoints for management operations +- `grpcv1.go`: gRPC service implementation for headscale API +- `poll.go`: **Critical** - Handles Tailscale MapRequest/MapResponse protocol +- `noise.go`: Noise protocol implementation for secure client communication +- `auth.go`: Authentication flows (web, OIDC, command-line) +- `oidc.go`: OpenID Connect integration for user authentication + +**State Management (`hscontrol/state/`)** + +- `state.go`: Central coordinator for all subsystems (database, policy, IP allocation, DERP) +- `node_store.go`: **Performance-critical** - In-memory cache with copy-on-write semantics +- Thread-safe operations with deadlock detection +- Coordinates between database persistence and real-time operations + +**Database Layer (`hscontrol/db/`)** + +- `db.go`: Database abstraction, GORM setup, migration management +- `node.go`: Node lifecycle, registration, expiration, IP assignment +- `users.go`: User management, namespace isolation +- `api_key.go`: API authentication tokens +- `preauth_keys.go`: Pre-authentication keys for automated node registration +- `ip.go`: IP address allocation and management +- `policy.go`: Policy storage and retrieval +- Schema migrations in `schema.sql` with extensive test data coverage + +**CRITICAL DATABASE MIGRATION RULES**: + +1. **NEVER reorder existing migrations** - Migration order is immutable once committed +2. **ONLY add new migrations to the END** of the migrations array +3. **NEVER disable foreign keys** in new migrations - no new migrations should be added to `migrationsRequiringFKDisabled` +4. **Migration ID format**: `YYYYMMDDHHSS-short-description` (timestamp + descriptive suffix) + - Example: `202511131500-add-user-roles` + - The timestamp must be chronologically ordered +5. **New migrations go after the comment** "As of 2025-07-02, no new IDs should be added here" +6. 
If you need to rename a column that other migrations depend on: + - Accept that the old column name will exist in intermediate migration states + - Update code to work with the new column name + - Let AutoMigrate create the new column if needed + - Do NOT try to rename columns that later migrations reference + +**Policy Engine (`hscontrol/policy/`)** + +- `policy.go`: Core ACL evaluation logic, HuJSON parsing +- `v2/`: Next-generation policy system with improved filtering +- `matcher/`: ACL rule matching and evaluation engine +- Determines peer visibility, route approval, and network access rules +- Supports both file-based and database-stored policies + +**Network Management (`hscontrol/`)** + +- `derp/`: DERP (Designated Encrypted Relay for Packets) server implementation + - NAT traversal when direct connections fail + - Fallback relay for firewall-restricted environments +- `mapper/`: Converts internal Headscale state to Tailscale's wire protocol format + - `tail.go`: Tailscale-specific data structure generation +- `routes/`: Subnet route management and primary route selection +- `dns/`: DNS record management and MagicDNS implementation + +**Utilities & Support (`hscontrol/`)** + +- `types/`: Core data structures, configuration, validation +- `util/`: Helper functions for networking, DNS, key management +- `templates/`: Client configuration templates (Apple, Windows, etc.) +- `notifier/`: Event notification system for real-time updates +- `metrics.go`: Prometheus metrics collection +- `capver/`: Tailscale capability version management + +### Key Subsystem Interactions + +**Node Registration Flow** + +1. **Client Connection**: `noise.go` handles secure protocol handshake +2. **Authentication**: `auth.go` validates credentials (web/OIDC/preauth) +3. **State Creation**: `state.go` coordinates IP allocation via `db/ip.go` +4. **Storage**: `db/node.go` persists node, `NodeStore` caches in memory +5. **Network Setup**: `mapper/` generates initial Tailscale network map + +**Ongoing Operations** + +1. **Poll Requests**: `poll.go` receives periodic client updates +2. **State Updates**: `NodeStore` maintains real-time node information +3. **Policy Application**: `policy/` evaluates ACL rules for peer relationships +4. **Map Distribution**: `mapper/` sends network topology to all affected clients + +**Route Management** + +1. **Advertisement**: Clients announce routes via `poll.go` Hostinfo updates +2. **Storage**: `db/` persists routes, `NodeStore` caches for performance +3. **Approval**: `policy/` auto-approves routes based on ACL rules +4. 
**Distribution**: `routes/` selects primary routes, `mapper/` distributes to peers
+
+### Command-Line Tools (`cmd/`)
+
+**Main Server (`cmd/headscale/`)**
+
+- `headscale.go`: CLI parsing, configuration loading, server startup
+- Supports daemon mode, CLI operations (user/node management), database operations
+
+**Integration Test Runner (`cmd/hi/`)**
+
+- `main.go`: Test execution framework with Docker orchestration
+- `run.go`: Individual test execution with artifact collection
+- `doctor.go`: System requirements validation
+- `docker.go`: Container lifecycle management
+- Essential for validating changes against real Tailscale clients
+
+### Generated & External Code
+
+**Protocol Buffers (`proto/` → `gen/`)**
+
+- Defines gRPC API for headscale management operations
+- Client libraries can be generated from these definitions
+- Run `make generate` after modifying `.proto` files
+
+**Integration Testing (`integration/`)**
+
+- `scenario.go`: Docker test environment setup
+- `tailscale.go`: Tailscale client container management
+- Individual test files for specific functionality areas
+- Real end-to-end validation with network isolation
+
+### Critical Performance Paths
+
+**High-Frequency Operations**
+
+1. **MapRequest Processing** (`poll.go`): Every 15-60 seconds per client
+2. **NodeStore Reads** (`node_store.go`): Every operation requiring node data
+3. **Policy Evaluation** (`policy/`): On every peer relationship calculation
+4. **Route Lookups** (`routes/`): During network map generation
+
+**Database Write Patterns**
+
+- **Frequent**: Node heartbeats, endpoint updates, route changes
+- **Moderate**: User operations, policy updates, API key management
+- **Rare**: Schema migrations, bulk operations
+
+### Configuration & Deployment
+
+**Configuration (`hscontrol/types/config.go`)**
+
+- Database connection settings (SQLite/PostgreSQL)
+- Network configuration (IP ranges, DNS settings)
+- Policy mode (file vs database)
+- DERP relay configuration
+- OIDC provider settings
+
+**Key Dependencies**
+
+- **GORM**: Database ORM with migration support
+- **Tailscale Libraries**: Core networking and protocol code
+- **Zerolog**: Structured logging throughout the application
+- **Buf**: Protocol buffer toolchain for code generation
+
+### Development Workflow Integration
+
+The architecture supports incremental development:
+
+- **Unit Tests**: Focus on individual packages (`*_test.go` files)
+- **Integration Tests**: Validate cross-component interactions
+- **Database Tests**: Extensive migration and data integrity validation
+- **Policy Tests**: ACL rule evaluation and edge cases
+- **Performance Tests**: NodeStore and high-frequency operation validation
+
+## Integration Testing System
+
+### Overview
+
+Headscale uses Docker-based integration tests with real Tailscale clients to validate end-to-end functionality. The integration test system is complex and requires specialized knowledge for effective execution and debugging.
+
+### **MANDATORY: Use the headscale-integration-tester Agent**
+
+**CRITICAL REQUIREMENT**: For ANY integration test execution, analysis, troubleshooting, or validation, you MUST use the `headscale-integration-tester` agent. 
This agent contains specialized knowledge about: + +- Test execution strategies and timing requirements +- Infrastructure vs code issue distinction (99% vs 1% failure patterns) +- Security-critical debugging rules and forbidden practices +- Comprehensive artifact analysis workflows +- Real-world failure patterns from HA debugging experiences + +### Quick Reference Commands + +```bash +# Check system requirements (always run first) +go run ./cmd/hi doctor + +# Run single test (recommended for development) +go run ./cmd/hi run "TestName" + +# Use PostgreSQL for database-heavy tests +go run ./cmd/hi run "TestName" --postgres + +# Pattern matching for related tests +go run ./cmd/hi run "TestPattern*" + +# Run multiple tests concurrently (each gets isolated run ID) +go run ./cmd/hi run "TestPingAllByIP" & +go run ./cmd/hi run "TestACLAllowUserDst" & +go run ./cmd/hi run "TestOIDCAuthenticationPingAll" & +``` + +**Concurrent Execution Support**: + +The test runner supports running multiple tests concurrently on the same Docker daemon: + +- Each test run gets a **unique Run ID** (format: `YYYYMMDD-HHMMSS-{6-char-hash}`) +- All containers are labeled with `hi.run-id` for isolation +- Container names include the run ID for easy identification (e.g., `ts-{runID}-1-74-{hash}`) +- Dynamic port allocation prevents port conflicts between concurrent runs +- Cleanup only affects containers belonging to the specific run ID +- Log directories are isolated per run: `control_logs/{runID}/` + +**Critical Notes**: + +- Tests generate ~100MB of logs per run in `control_logs/` +- Running many tests concurrently may cause resource contention (CPU/memory) +- Clean stale containers periodically: `docker system prune -f` + +### Test Artifacts Location + +All test runs save comprehensive debugging artifacts to `control_logs/TIMESTAMP-ID/` including server logs, client logs, database dumps, MapResponse protocol data, and Prometheus metrics. + +**For all integration test work, use the headscale-integration-tester agent - it contains the complete knowledge needed for effective testing and debugging.** + +## NodeStore Implementation Details + +**Key Insight from Recent Work**: The NodeStore is a critical performance optimization that caches node data in memory while ensuring consistency with the database. When working with route advertisements or node state changes: + +1. **Timing Considerations**: Route advertisements need time to propagate from clients to server. Use `require.EventuallyWithT()` patterns in tests instead of immediate assertions. + +2. **Synchronization Points**: NodeStore updates happen at specific points like `poll.go:420` after Hostinfo changes. Ensure these are maintained when modifying the polling logic. + +3. **Peer Visibility**: The NodeStore's `peersFunc` determines which nodes are visible to each other. Policy-based filtering is separate from monitoring visibility - expired nodes should remain visible for debugging but marked as expired. + +## Testing Guidelines + +### Integration Test Patterns + +#### **CRITICAL: EventuallyWithT Pattern for External Calls** + +**All external calls in integration tests MUST be wrapped in EventuallyWithT blocks** to handle eventual consistency in distributed systems. 
External calls include: + +- `client.Status()` - Getting Tailscale client status +- `client.Curl()` - Making HTTP requests through clients +- `client.Traceroute()` - Running network diagnostics +- `headscale.ListNodes()` - Querying headscale server state +- Any other calls that interact with external systems or network operations + +**Key Rules**: + +1. **Never use bare `require.NoError(t, err)` with external calls** - Always wrap in EventuallyWithT +2. **Keep related assertions together** - If multiple assertions depend on the same external call, keep them in the same EventuallyWithT block +3. **Split unrelated external calls** - Different external calls should be in separate EventuallyWithT blocks +4. **Never nest EventuallyWithT calls** - Each EventuallyWithT should be at the same level +5. **Declare shared variables at function scope** - Variables used across multiple EventuallyWithT blocks must be declared before first use + +**Examples**: + +```go +// CORRECT: External call wrapped in EventuallyWithT +assert.EventuallyWithT(t, func(c *assert.CollectT) { + status, err := client.Status() + assert.NoError(c, err) + + // Related assertions using the same status call + for _, peerKey := range status.Peers() { + peerStatus := status.Peer[peerKey] + assert.NotNil(c, peerStatus.PrimaryRoutes) + requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedRoutes) + } +}, 5*time.Second, 200*time.Millisecond, "Verifying client status and routes") + +// INCORRECT: Bare external call without EventuallyWithT +status, err := client.Status() // ❌ Will fail intermittently +require.NoError(t, err) + +// CORRECT: Separate EventuallyWithT for different external calls +// First external call - headscale.ListNodes() +assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes() + assert.NoError(c, err) + assert.Len(c, nodes, 2) + requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2) +}, 10*time.Second, 500*time.Millisecond, "route state changes should propagate to nodes") + +// Second external call - client.Status() +assert.EventuallyWithT(t, func(c *assert.CollectT) { + status, err := client.Status() + assert.NoError(c, err) + + for _, peerKey := range status.Peers() { + peerStatus := status.Peer[peerKey] + requirePeerSubnetRoutesWithCollect(c, peerStatus, []netip.Prefix{tsaddr.AllIPv4(), tsaddr.AllIPv6()}) + } +}, 10*time.Second, 500*time.Millisecond, "routes should be visible to client") + +// INCORRECT: Multiple unrelated external calls in same EventuallyWithT +assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes() // ❌ First external call + assert.NoError(c, err) + + status, err := client.Status() // ❌ Different external call - should be separate + assert.NoError(c, err) +}, 10*time.Second, 500*time.Millisecond, "mixed calls") + +// CORRECT: Variable scoping for shared data +var ( + srs1, srs2, srs3 *ipnstate.Status + clientStatus *ipnstate.Status + srs1PeerStatus *ipnstate.PeerStatus +) + +assert.EventuallyWithT(t, func(c *assert.CollectT) { + srs1 = subRouter1.MustStatus() // = not := + srs2 = subRouter2.MustStatus() + clientStatus = client.MustStatus() + + srs1PeerStatus = clientStatus.Peer[srs1.Self.PublicKey] + // assertions... 
+}, 5*time.Second, 200*time.Millisecond, "checking router status") + +// CORRECT: Wrapping client operations +assert.EventuallyWithT(t, func(c *assert.CollectT) { + result, err := client.Curl(weburl) + assert.NoError(c, err) + assert.Len(c, result, 13) +}, 5*time.Second, 200*time.Millisecond, "Verifying HTTP connectivity") + +assert.EventuallyWithT(t, func(c *assert.CollectT) { + tr, err := client.Traceroute(webip) + assert.NoError(c, err) + assertTracerouteViaIPWithCollect(c, tr, expectedRouter.MustIPv4()) +}, 5*time.Second, 200*time.Millisecond, "Verifying network path") +``` + +**Helper Functions**: + +- Use `requirePeerSubnetRoutesWithCollect` instead of `requirePeerSubnetRoutes` inside EventuallyWithT +- Use `requireNodeRouteCountWithCollect` instead of `requireNodeRouteCount` inside EventuallyWithT +- Use `assertTracerouteViaIPWithCollect` instead of `assertTracerouteViaIP` inside EventuallyWithT + +```go +// Node route checking by actual node properties, not array position +var routeNode *v1.Node +for _, node := range nodes { + if nodeIDStr := fmt.Sprintf("%d", node.GetId()); expectedRoutes[nodeIDStr] != "" { + routeNode = node + break + } +} +``` + +### Running Problematic Tests + +- Some tests require significant time (e.g., `TestNodeOnlineStatus` runs for 12 minutes) +- Infrastructure issues like disk space can cause test failures unrelated to code changes +- Use `--postgres` flag when testing database-heavy scenarios + +## Quality Assurance and Testing Requirements + +### **MANDATORY: Always Use Specialized Testing Agents** + +**CRITICAL REQUIREMENT**: For ANY task involving testing, quality assurance, review, or validation, you MUST use the appropriate specialized agent at the END of your task list. This ensures comprehensive quality validation and prevents regressions. + +**Required Agents for Different Task Types**: + +1. **Integration Testing**: Use `headscale-integration-tester` agent for: + - Running integration tests with `cmd/hi` + - Analyzing test failures and artifacts + - Troubleshooting Docker-based test infrastructure + - Validating end-to-end functionality changes + +2. **Quality Control**: Use `quality-control-enforcer` agent for: + - Code review and validation + - Ensuring best practices compliance + - Preventing common pitfalls and anti-patterns + - Validating architectural decisions + +**Agent Usage Pattern**: Always add the appropriate agent as the FINAL step in any task list to ensure quality validation occurs after all work is complete. + +### Integration Test Debugging Reference + +Test artifacts are preserved in `control_logs/TIMESTAMP-ID/` including: + +- Headscale server logs (stderr/stdout) +- Tailscale client logs and status +- Database dumps and network captures +- MapResponse JSON files for protocol debugging + +**For integration test issues, ALWAYS use the headscale-integration-tester agent - do not attempt manual debugging.** + +## EventuallyWithT Pattern for Integration Tests + +### Overview + +EventuallyWithT is a testing pattern used to handle eventual consistency in distributed systems. In Headscale integration tests, many operations are asynchronous - clients advertise routes, the server processes them, updates propagate through the network. EventuallyWithT allows tests to wait for these operations to complete while making assertions. 
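+
+As a quick orientation, the sketch below shows the bare shape of the pattern using testify's `assert.EventuallyWithT`: the condition closure is retried on every tick until all assertions recorded on the `*assert.CollectT` succeed within a single attempt, or the overall timeout expires. The `countNodes` stand-in and the `requireNodeCountWithCollect` helper are illustrative assumptions for this sketch, not part of the Headscale test suite.
+
+```go
+package example
+
+import (
+	"testing"
+	"time"
+
+	"github.com/stretchr/testify/assert"
+)
+
+// countNodes is a stand-in for an external call such as listing nodes on the
+// server; it exists only to keep this sketch self-contained.
+func countNodes() (int, error) { return 2, nil }
+
+// requireNodeCountWithCollect shows the shape of a helper that is safe to call
+// inside EventuallyWithT: it takes *assert.CollectT instead of *testing.T, so a
+// failed assertion marks this attempt as failed and triggers a retry rather
+// than aborting the whole test.
+func requireNodeCountWithCollect(c *assert.CollectT, want int) {
+	got, err := countNodes()
+	assert.NoError(c, err)
+	assert.Equal(c, want, got)
+}
+
+func TestEventualNodeCount(t *testing.T) {
+	// The closure is re-evaluated every 500ms until every assertion passes in
+	// one attempt, or the 10s deadline is reached (which fails the test).
+	assert.EventuallyWithT(t, func(c *assert.CollectT) {
+		requireNodeCountWithCollect(c, 2)
+	}, 10*time.Second, 500*time.Millisecond, "expected node count to converge")
+}
+```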
+ +### External Calls That Must Be Wrapped + +The following operations are **external calls** that interact with the headscale server or tailscale clients and MUST be wrapped in EventuallyWithT: + +- `headscale.ListNodes()` - Queries server state +- `client.Status()` - Gets client network status +- `client.Curl()` - Makes HTTP requests through the network +- `client.Traceroute()` - Performs network diagnostics +- `client.Execute()` when running commands that query state +- Any operation that reads from the headscale server or tailscale client + +### Operations That Must NOT Be Wrapped + +The following are **blocking operations** that modify state and should NOT be wrapped in EventuallyWithT: + +- `tailscale set` commands (e.g., `--advertise-routes`, `--exit-node`) +- Any command that changes configuration or state +- Use `client.MustStatus()` instead of `client.Status()` when you just need the ID for a blocking operation + +### Five Key Rules for EventuallyWithT + +1. **One External Call Per EventuallyWithT Block** + - Each EventuallyWithT should make ONE external call (e.g., ListNodes OR Status) + - Related assertions based on that single call can be grouped together + - Unrelated external calls must be in separate EventuallyWithT blocks + +2. **Variable Scoping** + - Declare variables that need to be shared across EventuallyWithT blocks at function scope + - Use `=` for assignment inside EventuallyWithT, not `:=` (unless the variable is only used within that block) + - Variables declared with `:=` inside EventuallyWithT are not accessible outside + +3. **No Nested EventuallyWithT** + - NEVER put an EventuallyWithT inside another EventuallyWithT + - This is a critical anti-pattern that must be avoided + +4. **Use CollectT for Assertions** + - Inside EventuallyWithT, use `assert` methods with the CollectT parameter + - Helper functions called within EventuallyWithT must accept `*assert.CollectT` + +5. **Descriptive Messages** + - Always provide a descriptive message as the last parameter + - Message should explain what condition is being waited for + +### Correct Pattern Examples + +```go +// CORRECT: Blocking operation NOT wrapped +for _, client := range allClients { + status := client.MustStatus() + command := []string{ + "tailscale", + "set", + "--advertise-routes=" + expectedRoutes[string(status.Self.ID)], + } + _, _, err = client.Execute(command) + require.NoErrorf(t, err, "failed to advertise route: %s", err) +} + +// CORRECT: Single external call with related assertions +var nodes []*v1.Node +assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err = headscale.ListNodes() + assert.NoError(c, err) + assert.Len(c, nodes, 2) + requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2) +}, 10*time.Second, 500*time.Millisecond, "nodes should have expected route counts") + +// CORRECT: Separate EventuallyWithT for different external call +assert.EventuallyWithT(t, func(c *assert.CollectT) { + status, err := client.Status() + assert.NoError(c, err) + for _, peerKey := range status.Peers() { + peerStatus := status.Peer[peerKey] + requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes) + } +}, 10*time.Second, 500*time.Millisecond, "client should see expected routes") +``` + +### Incorrect Patterns to Avoid + +```go +// INCORRECT: Blocking operation wrapped in EventuallyWithT +assert.EventuallyWithT(t, func(c *assert.CollectT) { + status, err := client.Status() + assert.NoError(c, err) + + // This is a blocking operation - should NOT be in EventuallyWithT! 
+ command := []string{ + "tailscale", + "set", + "--advertise-routes=" + expectedRoutes[string(status.Self.ID)], + } + _, _, err = client.Execute(command) + assert.NoError(c, err) +}, 5*time.Second, 200*time.Millisecond, "wrong pattern") + +// INCORRECT: Multiple unrelated external calls in same EventuallyWithT +assert.EventuallyWithT(t, func(c *assert.CollectT) { + // First external call + nodes, err := headscale.ListNodes() + assert.NoError(c, err) + assert.Len(c, nodes, 2) + + // Second unrelated external call - WRONG! + status, err := client.Status() + assert.NoError(c, err) + assert.NotNil(c, status) +}, 10*time.Second, 500*time.Millisecond, "mixed operations") +``` + +## Tags-as-Identity Architecture + +### Overview + +Headscale implements a **tags-as-identity** model where tags and user ownership are mutually exclusive ways to identify nodes. This is a fundamental architectural principle that affects node registration, ownership, ACL evaluation, and API behavior. + +### Core Principle: Tags XOR User Ownership + +Every node in Headscale is **either** tagged **or** user-owned, never both: + +- **Tagged Nodes**: Ownership is defined by tags (e.g., `tag:server`, `tag:database`) + - Tags are set during registration via tagged PreAuthKey + - Tags are immutable after registration (cannot be changed via API) + - May have `UserID` set for "created by" tracking, but ownership is via tags + - Identified by: `node.IsTagged()` returns `true` + +- **User-Owned Nodes**: Ownership is defined by user assignment + - Registered via OIDC, web auth, or untagged PreAuthKey + - Node belongs to a specific user's namespace + - No tags (empty tags array) + - Identified by: `node.UserID().Valid() && !node.IsTagged()` + +### Critical Implementation Details + +#### Node Identification Methods + +```go +// Primary methods for determining node ownership +node.IsTagged() // Returns true if node has tags OR AuthKey.Tags +node.HasTag(tag) // Returns true if node has specific tag +node.IsUserOwned() // Returns true if UserID set AND not tagged + +// IMPORTANT: UserID can be set on tagged nodes for tracking! +// Always use IsTagged() to determine actual ownership, not just UserID.Valid() +``` + +#### UserID Field Semantics + +**Critical distinction**: `UserID` has different meanings depending on node type: + +- **Tagged nodes**: `UserID` is optional "created by" tracking + - Indicates which user created the tagged PreAuthKey + - Does NOT define ownership (tags define ownership) + - Example: User "alice" creates tagged PreAuthKey with `tag:server`, node gets `UserID=alice.ID` + `Tags=["tag:server"]` + +- **User-owned nodes**: `UserID` defines ownership + - Required field for non-tagged nodes + - Defines which user namespace the node belongs to + - Example: User "bob" registers via OIDC, node gets `UserID=bob.ID` + `Tags=[]` + +#### Mapper Behavior (mapper/tail.go) + +The mapper converts internal nodes to Tailscale protocol format, handling the TaggedDevices special user: + +```go +// From mapper/tail.go:102-116 +User: func() tailcfg.UserID { + // IMPORTANT: Tags-as-identity model + // Tagged nodes ALWAYS use TaggedDevices user, even if UserID is set + if node.IsTagged() { + return tailcfg.UserID(int64(types.TaggedDevices.ID)) + } + // User-owned nodes: use the actual user ID + return tailcfg.UserID(int64(node.UserID().Get())) +}() +``` + +**TaggedDevices constant** (`types.TaggedDevices.ID = 2147455555`): Special user ID for all tagged nodes in MapResponse protocol. 
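+
+To make the decision logic above concrete, here is a small hypothetical helper (not part of the codebase) that applies the tags-XOR-user rules when labelling a node for logs or debugging output. It uses only the accessors described in this section (`IsTagged()`, `UserID().Valid()`, `UserID().Get()`); treat it as a sketch of the rules, not as existing API surface.
+
+```go
+package hscontrol
+
+import (
+	"fmt"
+
+	"github.com/juanfont/headscale/hscontrol/types"
+)
+
+// ownershipLabel returns a human-readable description of how a node is owned.
+// This is an illustrative helper: tagged nodes are always reported as tagged,
+// even when UserID is set, because UserID on a tagged node only records who
+// created the tagged PreAuthKey ("created by"), not ownership.
+func ownershipLabel(node types.NodeView) string {
+	switch {
+	case node.IsTagged():
+		if node.UserID().Valid() {
+			return fmt.Sprintf("tagged (created by user %d)", int64(node.UserID().Get()))
+		}
+		return "tagged"
+	case node.UserID().Valid():
+		// Not tagged and UserID set: the node belongs to that user's namespace.
+		return fmt.Sprintf("user-owned (user %d)", int64(node.UserID().Get()))
+	default:
+		// Should be unreachable under the tags-XOR-user model; guard anyway.
+		return "unowned (invalid state)"
+	}
+}
+```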
+ +#### Registration Flow + +**Tagged Node Registration** (via tagged PreAuthKey): + +1. User creates PreAuthKey with tags: `pak.Tags = ["tag:server"]` +2. Node registers with PreAuthKey +3. Node gets: `Tags = ["tag:server"]`, `UserID = user.ID` (optional tracking), `AuthKeyID = pak.ID` +4. `IsTagged()` returns `true` (ownership via tags) +5. MapResponse sends `User = TaggedDevices.ID` + +**User-Owned Node Registration** (via OIDC/web/untagged PreAuthKey): + +1. User authenticates or uses untagged PreAuthKey +2. Node registers +3. Node gets: `Tags = []`, `UserID = user.ID` (required) +4. `IsTagged()` returns `false` (ownership via user) +5. MapResponse sends `User = user.ID` + +#### API Validation (SetTags) + +The SetTags gRPC API enforces tags-as-identity rules: + +```go +// From grpcv1.go:340-347 +// User-owned nodes are nodes with UserID that are NOT tagged +isUserOwned := nodeView.UserID().Valid() && !nodeView.IsTagged() +if isUserOwned && len(request.GetTags()) > 0 { + return error("cannot set tags on user-owned nodes") +} +``` + +**Key validation rules**: + +- ✅ Can call SetTags on tagged nodes (tags already define ownership) +- ❌ Cannot set tags on user-owned nodes (would violate XOR rule) +- ❌ Cannot remove all tags from tagged nodes (would orphan the node) + +#### Database Layer (db/node.go) + +**Tag storage**: Tags are stored in PostgreSQL ARRAY column and SQLite JSON column: + +```sql +-- From schema.sql +tags TEXT[] DEFAULT '{}' NOT NULL, -- PostgreSQL +tags TEXT DEFAULT '[]' NOT NULL, -- SQLite (JSON array) +``` + +**Validation** (`state/tags.go`): + +- `validateNodeOwnership()`: Enforces tags XOR user rule +- `validateAndNormalizeTags()`: Validates tag format (`tag:name`) and uniqueness + +#### Policy Layer + +**Tag Ownership** (policy/v2/policy.go): + +```go +func NodeCanHaveTag(node types.NodeView, tag string) bool { + // Checks if node's IP is in the tagOwnerMap IP set + // This is IP-based authorization, not UserID-based + if ips, ok := pm.tagOwnerMap[Tag(tag)]; ok { + if slices.ContainsFunc(node.IPs(), ips.Contains) { + return true + } + } + return false +} +``` + +**Important**: Tag authorization is based on IP ranges in ACL, not UserID. Tags define identity, ACL authorizes that identity. + +### Testing Tags-as-Identity + +**Unit Tests** (`hscontrol/types/node_tags_test.go`): + +- `TestNodeIsTagged`: Validates IsTagged() for various scenarios +- `TestNodeOwnershipModel`: Tests tags XOR user ownership +- `TestUserTypedID`: Helper method validation + +**API Tests** (`hscontrol/grpcv1_test.go`): + +- `TestSetTags_UserXORTags`: Validates rejection of setting tags on user-owned nodes +- `TestSetTags_TaggedNode`: Validates that tagged nodes (even with UserID) are not rejected + +**Auth Tests** (`hscontrol/auth_test.go:890-928`): + +- Tests node registration with tagged PreAuthKey +- Validates tags are applied during registration + +### Common Pitfalls + +1. **Don't check only `UserID.Valid()` to determine user ownership** + - ❌ Wrong: `if node.UserID().Valid() { /* user-owned */ }` + - ✅ Correct: `if node.UserID().Valid() && !node.IsTagged() { /* user-owned */ }` + +2. **Don't assume tagged nodes never have UserID set** + - Tagged nodes MAY have UserID for "created by" tracking + - Always use `IsTagged()` to determine ownership type + +3. **Don't allow setting tags on user-owned nodes** + - This violates the tags XOR user principle + - Use API validation to prevent this + +4. 
**Don't forget TaggedDevices in mapper** + - All tagged nodes MUST use `TaggedDevices.ID` in MapResponse + - User ID is only for actual user-owned nodes + +### Migration Considerations + +When nodes transition between ownership models: + +- **No automatic migration**: Tags-as-identity is set at registration and immutable +- **Re-registration required**: To change from user-owned to tagged (or vice versa), node must be deleted and re-registered +- **UserID persistence**: UserID on tagged nodes is informational and not cleared + +### Architecture Benefits + +The tags-as-identity model provides: + +1. **Clear ownership semantics**: No ambiguity about who/what owns a node +2. **ACL simplicity**: Tag-based access control without user conflicts +3. **API safety**: Validation prevents invalid ownership states +4. **Protocol compatibility**: TaggedDevices special user aligns with Tailscale's model + +## Logging Patterns + +### Incremental Log Event Building + +When building log statements with multiple fields, especially with conditional fields, use the **incremental log event pattern** instead of long single-line chains. This improves readability and allows conditional field addition. + +**Pattern:** + +```go +// GOOD: Incremental building with conditional fields +logEvent := log.Debug(). + Str("node", node.Hostname). + Str("machine_key", node.MachineKey.ShortString()). + Str("node_key", node.NodeKey.ShortString()) + +if node.User != nil { + logEvent = logEvent.Str("user", node.User.Username()) +} else if node.UserID != nil { + logEvent = logEvent.Uint("user_id", *node.UserID) +} else { + logEvent = logEvent.Str("user", "none") +} + +logEvent.Msg("Registering node") +``` + +**Key rules:** + +1. **Assign chained calls back to the variable**: `logEvent = logEvent.Str(...)` - zerolog methods return a new event, so you must capture the return value +2. **Use for conditional fields**: When fields depend on runtime conditions, build incrementally +3. **Use for long log lines**: When a log line exceeds ~100 characters, split it for readability +4. **Call `.Msg()` at the end**: The final `.Msg()` or `.Msgf()` sends the log event + +**Anti-pattern to avoid:** + +```go +// BAD: Long single-line chains are hard to read and can't have conditional fields +log.Debug().Caller().Str("node", node.Hostname).Str("machine_key", node.MachineKey.ShortString()).Str("node_key", node.NodeKey.ShortString()).Str("user", node.User.Username()).Msg("Registering node") + +// BAD: Forgetting to assign the return value (field is lost!) +logEvent := log.Debug().Str("node", node.Hostname) +logEvent.Str("user", username) // This field is LOST - not assigned back +logEvent.Msg("message") // Only has "node" field +``` + +**When to use this pattern:** + +- Log statements with 4+ fields +- Any log with conditional fields +- Complex logging in loops or error handling +- When you need to add context incrementally + +**Example from codebase** (`hscontrol/db/node.go`): + +```go +logEvent := log.Debug(). + Str("node", node.Hostname). + Str("machine_key", node.MachineKey.ShortString()). + Str("node_key", node.NodeKey.ShortString()) + +if node.User != nil { + logEvent = logEvent.Str("user", node.User.Username()) +} else if node.UserID != nil { + logEvent = logEvent.Uint("user_id", *node.UserID) +} else { + logEvent = logEvent.Str("user", "none") +} + +logEvent.Msg("Registering test node") +``` + +### Avoiding Log Helper Functions + +Prefer the incremental log event pattern over creating helper functions that return multiple logging closures. 
Helper functions like `logPollFunc` create unnecessary indirection and allocate closures. + +**Instead of:** + +```go +// AVOID: Helper function returning closures +func logPollFunc(req tailcfg.MapRequest, node *types.Node) ( + func(string, ...any), // warnf + func(string, ...any), // infof + func(string, ...any), // tracef + func(error, string, ...any), // errf +) { + return func(msg string, a ...any) { + log.Warn(). + Caller(). + Bool("omitPeers", req.OmitPeers). + Bool("stream", req.Stream). + Uint64("node.id", node.ID.Uint64()). + Str("node.name", node.Hostname). + Msgf(msg, a...) + }, + // ... more closures +} +``` + +**Prefer:** + +```go +// BETTER: Build log events inline with shared context +func (m *mapSession) logTrace(msg string) { + log.Trace(). + Caller(). + Bool("omitPeers", m.req.OmitPeers). + Bool("stream", m.req.Stream). + Uint64("node.id", m.node.ID.Uint64()). + Str("node.name", m.node.Hostname). + Msg(msg) +} + +// Or use incremental building for complex cases +logEvent := log.Trace(). + Caller(). + Bool("omitPeers", m.req.OmitPeers). + Bool("stream", m.req.Stream). + Uint64("node.id", m.node.ID.Uint64()). + Str("node.name", m.node.Hostname) + +if additionalContext { + logEvent = logEvent.Str("extra", value) +} + +logEvent.Msg("Operation completed") +``` + +## Important Notes + +- **Dependencies**: Use `nix develop` for consistent toolchain (Go, buf, protobuf tools, linting) +- **Protocol Buffers**: Changes to `proto/` require `make generate` and should be committed separately +- **Code Style**: Enforced via golangci-lint with golines (width 88) and gofumpt formatting +- **Linting**: ALL code must pass `golangci-lint run --new-from-rev=upstream/main --timeout=5m --fix` before commit +- **Database**: Supports both SQLite (development) and PostgreSQL (production/testing) +- **Integration Tests**: Require Docker and can consume significant disk space - use headscale-integration-tester agent +- **Performance**: NodeStore optimizations are critical for scale - be careful with changes to state management +- **Quality Assurance**: Always use appropriate specialized agents for testing and validation tasks +- **Tags-as-Identity**: Tags and user ownership are mutually exclusive - always use `IsTagged()` to determine ownership diff --git a/CHANGELOG.md b/CHANGELOG.md index 7669dfcb..13a4e321 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,40 +1,189 @@ # CHANGELOG -## Next +## 0.28.0 (202x-xx-xx) + +**Minimum supported Tailscale client version: v1.74.0** + +### Tags as identity + +Tags are now implemented following the Tailscale model where tags and user ownership are mutually exclusive. Devices can be either +user-owned (authenticated via web/OIDC) or tagged (authenticated via tagged PreAuthKeys). Tagged devices receive their identity from +tags rather than users, making them suitable for servers and infrastructure. Applying a tag to a device removes user-based +ownership. See the [Tailscale tags documentation](https://tailscale.com/kb/1068/tags) for details on how tags work. + +User-owned nodes can now request tags during registration using `--advertise-tags`. Tags are validated against the `tagOwners` policy +and applied at registration time. Tags can be managed via the CLI or API after registration. Tagged nodes can return to user-owned +by re-authenticating with `tailscale up --advertise-tags= --force-reauth`. + +A one-time migration will validate and migrate any `RequestTags` (stored in hostinfo) to the tags column. 
Tags are validated against +your policy's `tagOwners` rules during migration. [#3011](https://github.com/juanfont/headscale/pull/3011) + +### Smarter map updates + +The map update system has been rewritten to send smaller, partial updates instead of full network maps whenever possible. This reduces bandwidth usage and improves performance, especially for large networks. The system now properly tracks peer +changes and can send removal notifications when nodes are removed due to policy changes. +[#2856](https://github.com/juanfont/headscale/pull/2856) [#2961](https://github.com/juanfont/headscale/pull/2961) + +### Pre-authentication key security improvements + +Pre-authentication keys now use bcrypt hashing for improved security [#2853](https://github.com/juanfont/headscale/pull/2853). Keys +are stored as a prefix and bcrypt hash instead of plaintext. The full key is only displayed once at creation time. When listing keys, +only the prefix is shown (e.g., `hskey-auth-{prefix}-***`). All new keys use the format `hskey-auth-{prefix}-{secret}`. Legacy plaintext keys in the format `{secret}` will continue to work for backwards compatibility. + +### Web registration templates redesign + +The OIDC callback and device registration web pages have been updated to use the Material for MkDocs design system from the official +documentation. The templates now use consistent typography, spacing, and colours across all registration flows. + +### Database migration support removed for pre-0.25.0 databases + +Headscale no longer supports direct upgrades from databases created before version 0.25.0. Users on older versions must upgrade +sequentially through each stable release, selecting the latest patch version available for each minor release. + +### BREAKING + +- **API**: The Node message in the gRPC/REST API has been simplified - the `ForcedTags`, `InvalidTags`, and `ValidTags` fields have been removed and replaced with a single `Tags` field that contains the node's applied tags [#2993](https://github.com/juanfont/headscale/pull/2993) + - API clients should use the `Tags` field instead of `ValidTags` + - The `headscale nodes list` CLI command now always shows a Tags column and the `--tags` flag has been removed +- **PreAuthKey CLI**: Commands now use ID-based operations instead of user+key combinations [#2992](https://github.com/juanfont/headscale/pull/2992) + - `headscale preauthkeys create` no longer requires `--user` flag (optional for tracking creation) + - `headscale preauthkeys list` lists all keys (no longer filtered by user) + - `headscale preauthkeys expire --id ` replaces `--user ` + - `headscale preauthkeys delete --id ` replaces `--user ` + + **Before:** + + ```bash + headscale preauthkeys create --user 1 --reusable --tags tag:server + headscale preauthkeys list --user 1 + headscale preauthkeys expire --user 1 + headscale preauthkeys delete --user 1 + ``` + + **After:** + + ```bash + headscale preauthkeys create --reusable --tags tag:server + headscale preauthkeys list + headscale preauthkeys expire --id 123 + headscale preauthkeys delete --id 123 + ``` + +- **Tags**: The gRPC `SetTags` endpoint now allows converting user-owned nodes to tagged nodes by setting tags. 
[#2885](https://github.com/juanfont/headscale/pull/2885) +- **Tags**: Tags are now resolved from the node's stored Tags field only [#2931](https://github.com/juanfont/headscale/pull/2931) + - `--advertise-tags` is processed during registration, not on every policy evaluation + - PreAuthKey tagged devices ignore `--advertise-tags` from clients + - User-owned nodes can use `--advertise-tags` if authorized by `tagOwners` policy + - Tags can be managed via CLI (`headscale nodes tag`) or the SetTags API after registration +- Database migration support removed for pre-0.25.0 databases [#2883](https://github.com/juanfont/headscale/pull/2883) + - If you are running a version older than 0.25.0, you must upgrade to 0.25.1 first, then upgrade to this release + - See the [upgrade path documentation](https://headscale.net/stable/about/faq/#what-is-the-recommended-update-path-can-i-skip-multiple-versions-while-updating) for detailed guidance + - In version 0.29, all migrations before 0.28.0 will also be removed +- Remove ability to move nodes between users [#2922](https://github.com/juanfont/headscale/pull/2922) + - The `headscale nodes move` CLI command has been removed + - The `MoveNode` API endpoint has been removed + - Nodes are permanently associated with their user or tag at registration time +- Add `oidc.email_verified_required` config option to control email verification requirement [#2860](https://github.com/juanfont/headscale/pull/2860) + - When `true` (default), only verified emails can authenticate via OIDC in conjunction with `oidc.allowed_domains` or + `oidc.allowed_users`. Previous versions allowed to authenticate with an unverified email but did not store the email + address in the user profile. This is now rejected during authentication with an `unverified email` error. + - When `false`, unverified emails are allowed for OIDC authentication and the email address is stored in the user + profile regardless of its verification state. +- **SSH Policy**: Wildcard (`*`) is no longer supported as an SSH destination [#3009](https://github.com/juanfont/headscale/issues/3009) + - Use `autogroup:member` for user-owned devices + - Use `autogroup:tagged` for tagged devices + - Use specific tags (e.g., `tag:server`) for targeted access + + **Before:** + + ```json + { "action": "accept", "src": ["group:admins"], "dst": ["*"], "users": ["root"] } + ``` + + **After:** + + ```json + { "action": "accept", "src": ["group:admins"], "dst": ["autogroup:member", "autogroup:tagged"], "users": ["root"] } + ``` + +- **SSH Policy**: SSH source/destination validation now enforces Tailscale's security model [#3010](https://github.com/juanfont/headscale/issues/3010) + + Per [Tailscale SSH documentation](https://tailscale.com/kb/1193/tailscale-ssh), the following rules are now enforced: + 1. **Tags cannot SSH to user-owned devices**: SSH rules with `tag:*` or `autogroup:tagged` as source cannot have username destinations (e.g., `alice@`) or `autogroup:member`/`autogroup:self` as destination + 2. **Username destinations require same-user source**: If destination is a specific username (e.g., `alice@`), the source must be that exact same user only. 
Use `autogroup:self` for same-user SSH access instead + + **Invalid policies now rejected at load time:** + + ```json + // INVALID: tag source to user destination + {"src": ["tag:server"], "dst": ["alice@"], ...} + + // INVALID: autogroup:tagged to autogroup:member + {"src": ["autogroup:tagged"], "dst": ["autogroup:member"], ...} + + // INVALID: group to specific user (use autogroup:self instead) + {"src": ["group:admins"], "dst": ["alice@"], ...} + ``` + + **Valid patterns:** + + ```json + // Users/groups can SSH to their own devices via autogroup:self + {"src": ["group:admins"], "dst": ["autogroup:self"], ...} + + // Users/groups can SSH to tagged devices + {"src": ["group:admins"], "dst": ["autogroup:tagged"], ...} + + // Tagged devices can SSH to other tagged devices + {"src": ["autogroup:tagged"], "dst": ["autogroup:tagged"], ...} + + // Same user can SSH to their own devices + {"src": ["alice@"], "dst": ["alice@"], ...} + ``` ### Changes +- Smarter change notifications send partial map updates and node removals instead of full maps [#2961](https://github.com/juanfont/headscale/pull/2961) + - Send lightweight endpoint and DERP region updates instead of full maps [#2856](https://github.com/juanfont/headscale/pull/2856) +- Add NixOS module in repository for faster iteration [#2857](https://github.com/juanfont/headscale/pull/2857) +- Add favicon to webpages [#2858](https://github.com/juanfont/headscale/pull/2858) +- Redesign OIDC callback and registration web templates [#2832](https://github.com/juanfont/headscale/pull/2832) +- Reclaim IPs from the IP allocator when nodes are deleted [#2831](https://github.com/juanfont/headscale/pull/2831) +- Add bcrypt hashing for pre-authentication keys [#2853](https://github.com/juanfont/headscale/pull/2853) +- Add prefix to API keys (`hskey-api-{prefix}-{secret}`) [#2853](https://github.com/juanfont/headscale/pull/2853) +- Add prefix to registration keys for web authentication tracking (`hskey-reg-{random}`) [#2853](https://github.com/juanfont/headscale/pull/2853) +- Tags can now be tagOwner of other tags [#2930](https://github.com/juanfont/headscale/pull/2930) +- Add `taildrop.enabled` configuration option to enable/disable Taildrop file sharing [#2955](https://github.com/juanfont/headscale/pull/2955) +- Allow disabling the metrics server by setting empty `metrics_listen_addr` [#2914](https://github.com/juanfont/headscale/pull/2914) +- Log ACME/autocert errors for easier debugging [#2933](https://github.com/juanfont/headscale/pull/2933) +- Improve CLI list output formatting [#2951](https://github.com/juanfont/headscale/pull/2951) +- Use Debian 13 distroless base images for containers [#2944](https://github.com/juanfont/headscale/pull/2944) +- Fix ACL policy not applied to new OIDC nodes until client restart [#2890](https://github.com/juanfont/headscale/pull/2890) +- Fix autogroup:self preventing visibility of nodes matched by other ACL rules [#2882](https://github.com/juanfont/headscale/pull/2882) +- Fix nodes being rejected after pre-authentication key expiration [#2917](https://github.com/juanfont/headscale/pull/2917) +- Fix list-routes command respecting identifier filter with JSON output [#2927](https://github.com/juanfont/headscale/pull/2927) +- **API Key CLI**: Add `--id` flag to expire/delete commands as alternative to `--prefix` [#3016](https://github.com/juanfont/headscale/pull/3016) + - `headscale apikeys expire --id ` or `--prefix ` + - `headscale apikeys delete --id ` or `--prefix ` + ## 0.27.1 (2025-11-11) **Minimum supported Tailscale 
client version: v1.64.0** ### Changes -- Expire nodes with a custom timestamp - [#2828](https://github.com/juanfont/headscale/pull/2828) -- Fix issue where node expiry was reset when tailscaled restarts - [#2875](https://github.com/juanfont/headscale/pull/2875) -- Fix OIDC authentication when multiple login URLs are opened - [#2861](https://github.com/juanfont/headscale/pull/2861) -- Fix node re-registration failing with expired auth keys - [#2859](https://github.com/juanfont/headscale/pull/2859) -- Remove old unused database tables and indices - [#2844](https://github.com/juanfont/headscale/pull/2844) - [#2872](https://github.com/juanfont/headscale/pull/2872) -- Ignore litestream tables during database validation - [#2843](https://github.com/juanfont/headscale/pull/2843) -- Fix exit node visibility to respect ACL rules - [#2855](https://github.com/juanfont/headscale/pull/2855) -- Fix SSH policy becoming empty when unknown user is referenced - [#2874](https://github.com/juanfont/headscale/pull/2874) -- Fix policy validation when using bypass-grpc mode - [#2854](https://github.com/juanfont/headscale/pull/2854) -- Fix autogroup:self interaction with other ACL rules - [#2842](https://github.com/juanfont/headscale/pull/2842) -- Fix flaky DERP map shuffle test - [#2848](https://github.com/juanfont/headscale/pull/2848) -- Use current stable base images for Debian and Alpine containers - [#2827](https://github.com/juanfont/headscale/pull/2827) +- Expire nodes with a custom timestamp [#2828](https://github.com/juanfont/headscale/pull/2828) +- Fix issue where node expiry was reset when tailscaled restarts [#2875](https://github.com/juanfont/headscale/pull/2875) +- Fix OIDC authentication when multiple login URLs are opened [#2861](https://github.com/juanfont/headscale/pull/2861) +- Fix node re-registration failing with expired auth keys [#2859](https://github.com/juanfont/headscale/pull/2859) +- Remove old unused database tables and indices [#2844](https://github.com/juanfont/headscale/pull/2844) [#2872](https://github.com/juanfont/headscale/pull/2872) +- Ignore litestream tables during database validation [#2843](https://github.com/juanfont/headscale/pull/2843) +- Fix exit node visibility to respect ACL rules [#2855](https://github.com/juanfont/headscale/pull/2855) +- Fix SSH policy becoming empty when unknown user is referenced [#2874](https://github.com/juanfont/headscale/pull/2874) +- Fix policy validation when using bypass-grpc mode [#2854](https://github.com/juanfont/headscale/pull/2854) +- Fix autogroup:self interaction with other ACL rules [#2842](https://github.com/juanfont/headscale/pull/2842) +- Fix flaky DERP map shuffle test [#2848](https://github.com/juanfont/headscale/pull/2848) +- Use current stable base images for Debian and Alpine containers [#2827](https://github.com/juanfont/headscale/pull/2827) ## 0.27.0 (2025-10-27) @@ -114,12 +263,9 @@ the code base over time and make it more correct and efficient. 
### BREAKING -- Remove support for 32-bit binaries - [#2692](https://github.com/juanfont/headscale/pull/2692) -- Policy: Zero or empty destination port is no longer allowed - [#2606](https://github.com/juanfont/headscale/pull/2606) -- Stricter hostname validation - [#2383](https://github.com/juanfont/headscale/pull/2383) +- Remove support for 32-bit binaries [#2692](https://github.com/juanfont/headscale/pull/2692) +- Policy: Zero or empty destination port is no longer allowed [#2606](https://github.com/juanfont/headscale/pull/2606) +- Stricter hostname validation [#2383](https://github.com/juanfont/headscale/pull/2383) - Hostnames must be valid DNS labels (2-63 characters, alphanumeric and hyphens only, cannot start/end with hyphen) - **Client Registration (New Nodes)**: Invalid hostnames are automatically @@ -136,53 +282,38 @@ the code base over time and make it more correct and efficient. ### Changes -- **Database schema migration improvements for SQLite** - [#2617](https://github.com/juanfont/headscale/pull/2617) +- **Database schema migration improvements for SQLite** [#2617](https://github.com/juanfont/headscale/pull/2617) - **IMPORTANT: Backup your SQLite database before upgrading** - Introduces safer table renaming migration strategy - Addresses longstanding database integrity issues -- Add flag to directly manipulate the policy in the database - [#2765](https://github.com/juanfont/headscale/pull/2765) -- DERPmap update frequency default changed from 24h to 3h - [#2741](https://github.com/juanfont/headscale/pull/2741) +- Add flag to directly manipulate the policy in the database [#2765](https://github.com/juanfont/headscale/pull/2765) +- DERPmap update frequency default changed from 24h to 3h [#2741](https://github.com/juanfont/headscale/pull/2741) - DERPmap update mechanism has been improved with retry, and is now failing conservatively, preserving the old map upon failure. [#2741](https://github.com/juanfont/headscale/pull/2741) -- Add support for `autogroup:member`, `autogroup:tagged` - [#2572](https://github.com/juanfont/headscale/pull/2572) -- Fix bug where return routes were being removed by policy - [#2767](https://github.com/juanfont/headscale/pull/2767) +- Add support for `autogroup:member`, `autogroup:tagged` [#2572](https://github.com/juanfont/headscale/pull/2572) +- Fix bug where return routes were being removed by policy [#2767](https://github.com/juanfont/headscale/pull/2767) - Remove policy v1 code [#2600](https://github.com/juanfont/headscale/pull/2600) -- Refactor Debian/Ubuntu packaging and drop support for Ubuntu 20.04. - [#2614](https://github.com/juanfont/headscale/pull/2614) -- Remove redundant check regarding `noise` config - [#2658](https://github.com/juanfont/headscale/pull/2658) -- Refactor OpenID Connect documentation - [#2625](https://github.com/juanfont/headscale/pull/2625) -- Don't crash if config file is missing - [#2656](https://github.com/juanfont/headscale/pull/2656) -- Adds `/robots.txt` endpoint to avoid crawlers - [#2643](https://github.com/juanfont/headscale/pull/2643) -- OIDC: Use group claim from UserInfo - [#2663](https://github.com/juanfont/headscale/pull/2663) +- Refactor Debian/Ubuntu packaging and drop support for Ubuntu 20.04. 
[#2614](https://github.com/juanfont/headscale/pull/2614) +- Remove redundant check regarding `noise` config [#2658](https://github.com/juanfont/headscale/pull/2658) +- Refactor OpenID Connect documentation [#2625](https://github.com/juanfont/headscale/pull/2625) +- Don't crash if config file is missing [#2656](https://github.com/juanfont/headscale/pull/2656) +- Adds `/robots.txt` endpoint to avoid crawlers [#2643](https://github.com/juanfont/headscale/pull/2643) +- OIDC: Use group claim from UserInfo [#2663](https://github.com/juanfont/headscale/pull/2663) - OIDC: Update user with claims from UserInfo _before_ comparing with allowed groups, email and domain [#2663](https://github.com/juanfont/headscale/pull/2663) - Policy will now reject invalid fields, making it easier to spot spelling errors [#2764](https://github.com/juanfont/headscale/pull/2764) -- Add FAQ entry on how to recover from an invalid policy in the database - [#2776](https://github.com/juanfont/headscale/pull/2776) -- EXPERIMENTAL: Add support for `autogroup:self` - [#2789](https://github.com/juanfont/headscale/pull/2789) -- Add healthcheck command - [#2659](https://github.com/juanfont/headscale/pull/2659) +- Add FAQ entry on how to recover from an invalid policy in the database [#2776](https://github.com/juanfont/headscale/pull/2776) +- EXPERIMENTAL: Add support for `autogroup:self` [#2789](https://github.com/juanfont/headscale/pull/2789) +- Add healthcheck command [#2659](https://github.com/juanfont/headscale/pull/2659) ## 0.26.1 (2025-06-06) ### Changes -- Ensure nodes are matching both node key and machine key when connecting. - [#2642](https://github.com/juanfont/headscale/pull/2642) +- Ensure nodes are matching both node key and machine key when connecting. [#2642](https://github.com/juanfont/headscale/pull/2642) ## 0.26.0 (2025-05-14) @@ -216,12 +347,9 @@ ID | Hostname | Approved | Available | Serving (Primary) Note that if an exit route is approved (0.0.0.0/0 or ::/0), both IPv4 and IPv6 will be approved. -- Route API and CLI has been removed - [#2422](https://github.com/juanfont/headscale/pull/2422) -- Routes are now managed via the Node API - [#2422](https://github.com/juanfont/headscale/pull/2422) -- Only routes accessible to the node will be sent to the node - [#2561](https://github.com/juanfont/headscale/pull/2561) +- Route API and CLI has been removed [#2422](https://github.com/juanfont/headscale/pull/2422) +- Routes are now managed via the Node API [#2422](https://github.com/juanfont/headscale/pull/2422) +- Only routes accessible to the node will be sent to the node [#2561](https://github.com/juanfont/headscale/pull/2561) #### Policy v2 @@ -293,12 +421,9 @@ working in v1 and not tested might be broken in v2 (and vice versa). 
#### Other breaking changes -- Disallow `server_url` and `base_domain` to be equal - [#2544](https://github.com/juanfont/headscale/pull/2544) -- Return full user in API for pre auth keys instead of string - [#2542](https://github.com/juanfont/headscale/pull/2542) -- Pre auth key API/CLI now uses ID over username - [#2542](https://github.com/juanfont/headscale/pull/2542) +- Disallow `server_url` and `base_domain` to be equal [#2544](https://github.com/juanfont/headscale/pull/2544) +- Return full user in API for pre auth keys instead of string [#2542](https://github.com/juanfont/headscale/pull/2542) +- Pre auth key API/CLI now uses ID over username [#2542](https://github.com/juanfont/headscale/pull/2542) - A non-empty list of global nameservers needs to be specified via `dns.nameservers.global` if the configuration option `dns.override_local_dns` is enabled or is not specified in the configuration file. This aligns with @@ -308,48 +433,37 @@ working in v1 and not tested might be broken in v2 (and vice versa). ### Changes - Use Go 1.24 [#2427](https://github.com/juanfont/headscale/pull/2427) -- Add `headscale policy check` command to check policy - [#2553](https://github.com/juanfont/headscale/pull/2553) -- `oidc.map_legacy_users` and `oidc.strip_email_domain` has been removed - [#2411](https://github.com/juanfont/headscale/pull/2411) -- Add more information to `/debug` endpoint - [#2420](https://github.com/juanfont/headscale/pull/2420) +- Add `headscale policy check` command to check policy [#2553](https://github.com/juanfont/headscale/pull/2553) +- `oidc.map_legacy_users` and `oidc.strip_email_domain` has been removed [#2411](https://github.com/juanfont/headscale/pull/2411) +- Add more information to `/debug` endpoint [#2420](https://github.com/juanfont/headscale/pull/2420) - It is now possible to inspect running goroutines and take profiles - View of config, policy, filter, ssh policy per node, connected nodes and DERPmap -- OIDC: Fetch UserInfo to get EmailVerified if necessary - [#2493](https://github.com/juanfont/headscale/pull/2493) +- OIDC: Fetch UserInfo to get EmailVerified if necessary [#2493](https://github.com/juanfont/headscale/pull/2493) - If a OIDC provider doesn't include the `email_verified` claim in its ID tokens, Headscale will attempt to get it from the UserInfo endpoint. -- OIDC: Try to populate name, email and username from UserInfo - [#2545](https://github.com/juanfont/headscale/pull/2545) +- OIDC: Try to populate name, email and username from UserInfo [#2545](https://github.com/juanfont/headscale/pull/2545) - Improve performance by only querying relevant nodes from the database for node updates [#2509](https://github.com/juanfont/headscale/pull/2509) - node FQDNs in the netmap will now contain a dot (".") at the end. 
This aligns with behaviour of tailscale.com [#2503](https://github.com/juanfont/headscale/pull/2503) -- Restore support for "Override local DNS" - [#2438](https://github.com/juanfont/headscale/pull/2438) -- Add documentation for routes - [#2496](https://github.com/juanfont/headscale/pull/2496) +- Restore support for "Override local DNS" [#2438](https://github.com/juanfont/headscale/pull/2438) +- Add documentation for routes [#2496](https://github.com/juanfont/headscale/pull/2496) ## 0.25.1 (2025-02-25) ### Changes -- Fix issue where registration errors are sent correctly - [#2435](https://github.com/juanfont/headscale/pull/2435) -- Fix issue where routes passed on registration were not saved - [#2444](https://github.com/juanfont/headscale/pull/2444) -- Fix issue where registration page was displayed twice - [#2445](https://github.com/juanfont/headscale/pull/2445) +- Fix issue where registration errors are sent correctly [#2435](https://github.com/juanfont/headscale/pull/2435) +- Fix issue where routes passed on registration were not saved [#2444](https://github.com/juanfont/headscale/pull/2444) +- Fix issue where registration page was displayed twice [#2445](https://github.com/juanfont/headscale/pull/2445) ## 0.25.0 (2025-02-11) ### BREAKING -- Authentication flow has been rewritten - [#2374](https://github.com/juanfont/headscale/pull/2374) This change should be +- Authentication flow has been rewritten [#2374](https://github.com/juanfont/headscale/pull/2374) This change should be transparent to users with the exception of some buxfixes that has been discovered and was fixed as part of the rewrite. - When a node is registered with _a new user_, it will be registered as a new @@ -357,62 +471,44 @@ working in v1 and not tested might be broken in v2 (and vice versa). [#1310](https://github.com/juanfont/headscale/issues/1310)). - A logged out node logging in with the same user will replace the existing node. 
-- Remove support for Tailscale clients older than 1.62 (Capability version 87) - [#2405](https://github.com/juanfont/headscale/pull/2405) +- Remove support for Tailscale clients older than 1.62 (Capability version 87) [#2405](https://github.com/juanfont/headscale/pull/2405) ### Changes -- `oidc.map_legacy_users` is now `false` by default - [#2350](https://github.com/juanfont/headscale/pull/2350) -- Print Tailscale version instead of capability versions for outdated nodes - [#2391](https://github.com/juanfont/headscale/pull/2391) -- Do not allow renaming of users from OIDC - [#2393](https://github.com/juanfont/headscale/pull/2393) -- Change minimum hostname length to 2 - [#2393](https://github.com/juanfont/headscale/pull/2393) -- Fix migration error caused by nodes having invalid auth keys - [#2412](https://github.com/juanfont/headscale/pull/2412) -- Pre auth keys belonging to a user are no longer deleted with the user - [#2396](https://github.com/juanfont/headscale/pull/2396) -- Pre auth keys that are used by a node can no longer be deleted - [#2396](https://github.com/juanfont/headscale/pull/2396) -- Rehaul HTTP errors, return better status code and errors to users - [#2398](https://github.com/juanfont/headscale/pull/2398) -- Print headscale version and commit on server startup - [#2415](https://github.com/juanfont/headscale/pull/2415) +- `oidc.map_legacy_users` is now `false` by default [#2350](https://github.com/juanfont/headscale/pull/2350) +- Print Tailscale version instead of capability versions for outdated nodes [#2391](https://github.com/juanfont/headscale/pull/2391) +- Do not allow renaming of users from OIDC [#2393](https://github.com/juanfont/headscale/pull/2393) +- Change minimum hostname length to 2 [#2393](https://github.com/juanfont/headscale/pull/2393) +- Fix migration error caused by nodes having invalid auth keys [#2412](https://github.com/juanfont/headscale/pull/2412) +- Pre auth keys belonging to a user are no longer deleted with the user [#2396](https://github.com/juanfont/headscale/pull/2396) +- Pre auth keys that are used by a node can no longer be deleted [#2396](https://github.com/juanfont/headscale/pull/2396) +- Rehaul HTTP errors, return better status code and errors to users [#2398](https://github.com/juanfont/headscale/pull/2398) +- Print headscale version and commit on server startup [#2415](https://github.com/juanfont/headscale/pull/2415) ## 0.24.3 (2025-02-07) ### Changes -- Fix migration error caused by nodes having invalid auth keys - [#2412](https://github.com/juanfont/headscale/pull/2412) -- Pre auth keys belonging to a user are no longer deleted with the user - [#2396](https://github.com/juanfont/headscale/pull/2396) -- Pre auth keys that are used by a node can no longer be deleted - [#2396](https://github.com/juanfont/headscale/pull/2396) +- Fix migration error caused by nodes having invalid auth keys [#2412](https://github.com/juanfont/headscale/pull/2412) +- Pre auth keys belonging to a user are no longer deleted with the user [#2396](https://github.com/juanfont/headscale/pull/2396) +- Pre auth keys that are used by a node can no longer be deleted [#2396](https://github.com/juanfont/headscale/pull/2396) ## 0.24.2 (2025-01-30) ### Changes -- Fix issue where email and username being equal fails to match in Policy - [#2388](https://github.com/juanfont/headscale/pull/2388) -- Delete invalid routes before adding a NOT NULL constraint on node_id - [#2386](https://github.com/juanfont/headscale/pull/2386) +- Fix issue where email and username being equal 
fails to match in Policy [#2388](https://github.com/juanfont/headscale/pull/2388) +- Delete invalid routes before adding a NOT NULL constraint on node_id [#2386](https://github.com/juanfont/headscale/pull/2386) ## 0.24.1 (2025-01-23) ### Changes -- Fix migration issue with user table for PostgreSQL - [#2367](https://github.com/juanfont/headscale/pull/2367) -- Relax username validation to allow emails - [#2364](https://github.com/juanfont/headscale/pull/2364) +- Fix migration issue with user table for PostgreSQL [#2367](https://github.com/juanfont/headscale/pull/2367) +- Relax username validation to allow emails [#2364](https://github.com/juanfont/headscale/pull/2364) - Remove invalid routes and add stronger constraints for routes to avoid API panic [#2371](https://github.com/juanfont/headscale/pull/2371) -- Fix panic when `derp.update_frequency` is 0 - [#2368](https://github.com/juanfont/headscale/pull/2368) +- Fix panic when `derp.update_frequency` is 0 [#2368](https://github.com/juanfont/headscale/pull/2368) ## 0.24.0 (2025-01-17) @@ -549,12 +645,10 @@ This will also affect the way you ### BREAKING -- Remove `dns.use_username_in_magic_dns` configuration option - [#2020](https://github.com/juanfont/headscale/pull/2020), +- Remove `dns.use_username_in_magic_dns` configuration option [#2020](https://github.com/juanfont/headscale/pull/2020), [#2279](https://github.com/juanfont/headscale/pull/2279) - Having usernames in magic DNS is no longer possible. -- Remove versions older than 1.56 - [#2149](https://github.com/juanfont/headscale/pull/2149) +- Remove versions older than 1.56 [#2149](https://github.com/juanfont/headscale/pull/2149) - Clean up old code required by old versions - User gRPC/API [#2261](https://github.com/juanfont/headscale/pull/2261): - If you depend on a Headscale Web UI, you should wait with this update until @@ -567,27 +661,20 @@ This will also affect the way you - Improved compatibility of built-in DERP server with clients connecting over WebSocket [#2132](https://github.com/juanfont/headscale/pull/2132) -- Allow nodes to use SSH agent forwarding - [#2145](https://github.com/juanfont/headscale/pull/2145) -- Fixed processing of fields in post request in MoveNode rpc - [#2179](https://github.com/juanfont/headscale/pull/2179) +- Allow nodes to use SSH agent forwarding [#2145](https://github.com/juanfont/headscale/pull/2145) +- Fixed processing of fields in post request in MoveNode rpc [#2179](https://github.com/juanfont/headscale/pull/2179) - Added conversion of 'Hostname' to 'givenName' in a node with FQDN rules applied [#2198](https://github.com/juanfont/headscale/pull/2198) -- Fixed updating of hostname and givenName when it is updated in HostInfo - [#2199](https://github.com/juanfont/headscale/pull/2199) -- Fixed missing `stable-debug` container tag - [#2232](https://github.com/juanfont/headscale/pull/2232) +- Fixed updating of hostname and givenName when it is updated in HostInfo [#2199](https://github.com/juanfont/headscale/pull/2199) +- Fixed missing `stable-debug` container tag [#2232](https://github.com/juanfont/headscale/pull/2232) - Loosened up `server_url` and `base_domain` check. It was overly strict in some cases. 
[#2248](https://github.com/juanfont/headscale/pull/2248) - CLI for managing users now accepts `--identifier` in addition to `--name`, usage of `--identifier` is recommended [#2261](https://github.com/juanfont/headscale/pull/2261) -- Add `dns.extra_records_path` configuration option - [#2262](https://github.com/juanfont/headscale/issues/2262) -- Support client verify for DERP - [#2046](https://github.com/juanfont/headscale/pull/2046) -- Add PKCE Verifier for OIDC - [#2314](https://github.com/juanfont/headscale/pull/2314) +- Add `dns.extra_records_path` configuration option [#2262](https://github.com/juanfont/headscale/issues/2262) +- Support client verify for DERP [#2046](https://github.com/juanfont/headscale/pull/2046) +- Add PKCE Verifier for OIDC [#2314](https://github.com/juanfont/headscale/pull/2314) ## 0.23.0 (2024-09-18) @@ -651,28 +738,22 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). - Old structure has been remove and the configuration _must_ be converted. - Adds additional configuration for PostgreSQL for setting max open, idle connection and idle connection lifetime. -- API: Machine is now Node - [#1553](https://github.com/juanfont/headscale/pull/1553) -- Remove support for older Tailscale clients - [#1611](https://github.com/juanfont/headscale/pull/1611) +- API: Machine is now Node [#1553](https://github.com/juanfont/headscale/pull/1553) +- Remove support for older Tailscale clients [#1611](https://github.com/juanfont/headscale/pull/1611) - The oldest supported client is 1.42 -- Headscale checks that _at least_ one DERP is defined at start - [#1564](https://github.com/juanfont/headscale/pull/1564) +- Headscale checks that _at least_ one DERP is defined at start [#1564](https://github.com/juanfont/headscale/pull/1564) - If no DERP is configured, the server will fail to start, this can be because it cannot load the DERPMap from file or url. -- Embedded DERP server requires a private key - [#1611](https://github.com/juanfont/headscale/pull/1611) +- Embedded DERP server requires a private key [#1611](https://github.com/juanfont/headscale/pull/1611) - Add a filepath entry to [`derp.server.private_key_path`](https://github.com/juanfont/headscale/blob/b35993981297e18393706b2c963d6db882bba6aa/config-example.yaml#L95) -- Docker images are now built with goreleaser (ko) - [#1716](https://github.com/juanfont/headscale/pull/1716) +- Docker images are now built with goreleaser (ko) [#1716](https://github.com/juanfont/headscale/pull/1716) [#1763](https://github.com/juanfont/headscale/pull/1763) - Entrypoint of container image has changed from shell to headscale, require change from `headscale serve` to `serve` - `/var/lib/headscale` and `/var/run/headscale` is no longer created automatically, see [container docs](./docs/setup/install/container.md) -- Prefixes are now defined per v4 and v6 range. - [#1756](https://github.com/juanfont/headscale/pull/1756) +- Prefixes are now defined per v4 and v6 range. [#1756](https://github.com/juanfont/headscale/pull/1756) - `ip_prefixes` option is now `prefixes.v4` and `prefixes.v6` - `prefixes.allocation` can be set to assign IPs at `sequential` or `random`. [#1869](https://github.com/juanfont/headscale/pull/1869) @@ -687,30 +768,23 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). note that this option _will be removed_ when tags are fixed. - dns.base_domain can no longer be the same as (or part of) server_url. - This option brings Headscales behaviour in line with Tailscale. 
-- YAML files are no longer supported for headscale policy. - [#1792](https://github.com/juanfont/headscale/pull/1792) +- YAML files are no longer supported for headscale policy. [#1792](https://github.com/juanfont/headscale/pull/1792) - HuJSON is now the only supported format for policy. -- DNS configuration has been restructured - [#2034](https://github.com/juanfont/headscale/pull/2034) +- DNS configuration has been restructured [#2034](https://github.com/juanfont/headscale/pull/2034) - Please review the new [config-example.yaml](./config-example.yaml) for the new structure. ### Changes -- Use versioned migrations - [#1644](https://github.com/juanfont/headscale/pull/1644) -- Make the OIDC callback page better - [#1484](https://github.com/juanfont/headscale/pull/1484) +- Use versioned migrations [#1644](https://github.com/juanfont/headscale/pull/1644) +- Make the OIDC callback page better [#1484](https://github.com/juanfont/headscale/pull/1484) - SSH support [#1487](https://github.com/juanfont/headscale/pull/1487) -- State management has been improved - [#1492](https://github.com/juanfont/headscale/pull/1492) -- Use error group handling to ensure tests actually pass - [#1535](https://github.com/juanfont/headscale/pull/1535) based on +- State management has been improved [#1492](https://github.com/juanfont/headscale/pull/1492) +- Use error group handling to ensure tests actually pass [#1535](https://github.com/juanfont/headscale/pull/1535) based on [#1460](https://github.com/juanfont/headscale/pull/1460) - Fix hang on SIGTERM [#1492](https://github.com/juanfont/headscale/pull/1492) taken from [#1480](https://github.com/juanfont/headscale/pull/1480) -- Send logs to stderr by default - [#1524](https://github.com/juanfont/headscale/pull/1524) +- Send logs to stderr by default [#1524](https://github.com/juanfont/headscale/pull/1524) - Fix [TS-2023-006](https://tailscale.com/security-bulletins/#ts-2023-006) security UPnP issue [#1563](https://github.com/juanfont/headscale/pull/1563) - Turn off gRPC logging [#1640](https://github.com/juanfont/headscale/pull/1640) @@ -718,21 +792,15 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). - Added the possibility to manually create a DERP-map entry which can be customized, instead of automatically creating it. [#1565](https://github.com/juanfont/headscale/pull/1565) -- Add support for deleting api keys - [#1702](https://github.com/juanfont/headscale/pull/1702) +- Add support for deleting api keys [#1702](https://github.com/juanfont/headscale/pull/1702) - Add command to backfill IP addresses for nodes missing IPs from configured prefixes. [#1869](https://github.com/juanfont/headscale/pull/1869) -- Log available update as warning - [#1877](https://github.com/juanfont/headscale/pull/1877) -- Add `autogroup:internet` to Policy - [#1917](https://github.com/juanfont/headscale/pull/1917) -- Restore foreign keys and add constraints - [#1562](https://github.com/juanfont/headscale/pull/1562) +- Log available update as warning [#1877](https://github.com/juanfont/headscale/pull/1877) +- Add `autogroup:internet` to Policy [#1917](https://github.com/juanfont/headscale/pull/1917) +- Restore foreign keys and add constraints [#1562](https://github.com/juanfont/headscale/pull/1562) - Make registration page easier to use on mobile devices -- Make write-ahead-log default on and configurable for SQLite - [#1985](https://github.com/juanfont/headscale/pull/1985) -- Add APIs for managing headscale policy. 
- [#1792](https://github.com/juanfont/headscale/pull/1792) +- Make write-ahead-log default on and configurable for SQLite [#1985](https://github.com/juanfont/headscale/pull/1985) +- Add APIs for managing headscale policy. [#1792](https://github.com/juanfont/headscale/pull/1792) - Fix for registering nodes using preauthkeys when running on a postgres database in a non-UTC timezone. [#764](https://github.com/juanfont/headscale/issues/764) @@ -740,33 +808,25 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). - CLI commands (all except `serve`) only requires minimal configuration, no more errors or warnings from unset settings [#2109](https://github.com/juanfont/headscale/pull/2109) -- CLI results are now concistently sent to stdout and errors to stderr - [#2109](https://github.com/juanfont/headscale/pull/2109) -- Fix issue where shutting down headscale would hang - [#2113](https://github.com/juanfont/headscale/pull/2113) +- CLI results are now consistently sent to stdout and errors to stderr [#2109](https://github.com/juanfont/headscale/pull/2109) +- Fix issue where shutting down headscale would hang [#2113](https://github.com/juanfont/headscale/pull/2113) ## 0.22.3 (2023-05-12) ### Changes -- Added missing ca-certificates in Docker image - [#1463](https://github.com/juanfont/headscale/pull/1463) +- Added missing ca-certificates in Docker image [#1463](https://github.com/juanfont/headscale/pull/1463) ## 0.22.2 (2023-05-10) ### Changes -- Add environment flags to enable pprof (profiling) - [#1382](https://github.com/juanfont/headscale/pull/1382) +- Add environment flags to enable pprof (profiling) [#1382](https://github.com/juanfont/headscale/pull/1382) - Profiles are continuously generated in our integration tests. -- Fix systemd service file location in `.deb` packages - [#1391](https://github.com/juanfont/headscale/pull/1391) -- Improvements on Noise implementation - [#1379](https://github.com/juanfont/headscale/pull/1379) -- Replace node filter logic, ensuring nodes with access can see each other - [#1381](https://github.com/juanfont/headscale/pull/1381) -- Disable (or delete) both exit routes at the same time - [#1428](https://github.com/juanfont/headscale/pull/1428) +- Fix systemd service file location in `.deb` packages [#1391](https://github.com/juanfont/headscale/pull/1391) +- Improvements on Noise implementation [#1379](https://github.com/juanfont/headscale/pull/1379) +- Replace node filter logic, ensuring nodes with access can see each other [#1381](https://github.com/juanfont/headscale/pull/1381) +- Disable (or delete) both exit routes at the same time [#1428](https://github.com/juanfont/headscale/pull/1428) - Ditch distroless for Docker image, create default socket dir in `/var/run/headscale` [#1450](https://github.com/juanfont/headscale/pull/1450) @@ -774,65 +834,49 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). 
### Changes -- Fix issue where systemd could not bind to port 80 - [#1365](https://github.com/juanfont/headscale/pull/1365) +- Fix issue where systemd could not bind to port 80 [#1365](https://github.com/juanfont/headscale/pull/1365) ## 0.22.0 (2023-04-20) ### Changes -- Add `.deb` packages to release process - [#1297](https://github.com/juanfont/headscale/pull/1297) -- Update and simplify the documentation to use new `.deb` packages - [#1349](https://github.com/juanfont/headscale/pull/1349) -- Add 32-bit Arm platforms to release process - [#1297](https://github.com/juanfont/headscale/pull/1297) +- Add `.deb` packages to release process [#1297](https://github.com/juanfont/headscale/pull/1297) +- Update and simplify the documentation to use new `.deb` packages [#1349](https://github.com/juanfont/headscale/pull/1349) +- Add 32-bit Arm platforms to release process [#1297](https://github.com/juanfont/headscale/pull/1297) - Fix longstanding bug that would prevent "\*" from working properly in ACLs (issue [#699](https://github.com/juanfont/headscale/issues/699)) [#1279](https://github.com/juanfont/headscale/pull/1279) -- Fix issue where IPv6 could not be used in, or while using ACLs (part of - [#809](https://github.com/juanfont/headscale/issues/809)) +- Fix issue where IPv6 could not be used in, or while using ACLs (part of [#809](https://github.com/juanfont/headscale/issues/809)) [#1339](https://github.com/juanfont/headscale/pull/1339) -- Target Go 1.20 and Tailscale 1.38 for Headscale - [#1323](https://github.com/juanfont/headscale/pull/1323) +- Target Go 1.20 and Tailscale 1.38 for Headscale [#1323](https://github.com/juanfont/headscale/pull/1323) ## 0.21.0 (2023-03-20) ### Changes -- Adding "configtest" CLI command. - [#1230](https://github.com/juanfont/headscale/pull/1230) -- Add documentation on connecting with iOS to `/apple` - [#1261](https://github.com/juanfont/headscale/pull/1261) -- Update iOS compatibility and added documentation for iOS - [#1264](https://github.com/juanfont/headscale/pull/1264) -- Allow to delete routes - [#1244](https://github.com/juanfont/headscale/pull/1244) +- Adding "configtest" CLI command. 
[#1230](https://github.com/juanfont/headscale/pull/1230) +- Add documentation on connecting with iOS to `/apple` [#1261](https://github.com/juanfont/headscale/pull/1261) +- Update iOS compatibility and added documentation for iOS [#1264](https://github.com/juanfont/headscale/pull/1264) +- Allow to delete routes [#1244](https://github.com/juanfont/headscale/pull/1244) ## 0.20.0 (2023-02-03) ### Changes -- Fix wrong behaviour in exit nodes - [#1159](https://github.com/juanfont/headscale/pull/1159) -- Align behaviour of `dns_config.restricted_nameservers` to tailscale - [#1162](https://github.com/juanfont/headscale/pull/1162) -- Make OpenID Connect authenticated client expiry time configurable - [#1191](https://github.com/juanfont/headscale/pull/1191) +- Fix wrong behaviour in exit nodes [#1159](https://github.com/juanfont/headscale/pull/1159) +- Align behaviour of `dns_config.restricted_nameservers` to tailscale [#1162](https://github.com/juanfont/headscale/pull/1162) +- Make OpenID Connect authenticated client expiry time configurable [#1191](https://github.com/juanfont/headscale/pull/1191) - defaults to 180 days like Tailscale SaaS - adds option to use the expiry time from the OpenID token for the node (see config-example.yaml) -- Set ControlTime in Map info sent to nodes - [#1195](https://github.com/juanfont/headscale/pull/1195) -- Populate Tags field on Node updates sent - [#1195](https://github.com/juanfont/headscale/pull/1195) +- Set ControlTime in Map info sent to nodes [#1195](https://github.com/juanfont/headscale/pull/1195) +- Populate Tags field on Node updates sent [#1195](https://github.com/juanfont/headscale/pull/1195) ## 0.19.0 (2023-01-29) ### BREAKING -- Rename Namespace to User - [#1144](https://github.com/juanfont/headscale/pull/1144) +- Rename Namespace to User [#1144](https://github.com/juanfont/headscale/pull/1144) - **BACKUP your database before upgrading** - Command line flags previously taking `--namespace` or `-n` will now require `--user` or `-u` @@ -841,35 +885,23 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). 
### Changes -- Reworked routing and added support for subnet router failover - [#1024](https://github.com/juanfont/headscale/pull/1024) -- Added an OIDC AllowGroups Configuration options and authorization check - [#1041](https://github.com/juanfont/headscale/pull/1041) -- Set `db_ssl` to false by default - [#1052](https://github.com/juanfont/headscale/pull/1052) -- Fix duplicate nodes due to incorrect implementation of the protocol - [#1058](https://github.com/juanfont/headscale/pull/1058) -- Report if a machine is online in CLI more accurately - [#1062](https://github.com/juanfont/headscale/pull/1062) -- Added config option for custom DNS records - [#1035](https://github.com/juanfont/headscale/pull/1035) -- Expire nodes based on OIDC token expiry - [#1067](https://github.com/juanfont/headscale/pull/1067) -- Remove ephemeral nodes on logout - [#1098](https://github.com/juanfont/headscale/pull/1098) -- Performance improvements in ACLs - [#1129](https://github.com/juanfont/headscale/pull/1129) -- OIDC client secret can be passed via a file - [#1127](https://github.com/juanfont/headscale/pull/1127) +- Reworked routing and added support for subnet router failover [#1024](https://github.com/juanfont/headscale/pull/1024) +- Added an OIDC AllowGroups Configuration options and authorization check [#1041](https://github.com/juanfont/headscale/pull/1041) +- Set `db_ssl` to false by default [#1052](https://github.com/juanfont/headscale/pull/1052) +- Fix duplicate nodes due to incorrect implementation of the protocol [#1058](https://github.com/juanfont/headscale/pull/1058) +- Report if a machine is online in CLI more accurately [#1062](https://github.com/juanfont/headscale/pull/1062) +- Added config option for custom DNS records [#1035](https://github.com/juanfont/headscale/pull/1035) +- Expire nodes based on OIDC token expiry [#1067](https://github.com/juanfont/headscale/pull/1067) +- Remove ephemeral nodes on logout [#1098](https://github.com/juanfont/headscale/pull/1098) +- Performance improvements in ACLs [#1129](https://github.com/juanfont/headscale/pull/1129) +- OIDC client secret can be passed via a file [#1127](https://github.com/juanfont/headscale/pull/1127) ## 0.17.1 (2022-12-05) ### Changes -- Correct typo on macOS standalone profile link - [#1028](https://github.com/juanfont/headscale/pull/1028) -- Update platform docs with Fast User Switching - [#1016](https://github.com/juanfont/headscale/pull/1016) +- Correct typo on macOS standalone profile link [#1028](https://github.com/juanfont/headscale/pull/1028) +- Update platform docs with Fast User Switching [#1016](https://github.com/juanfont/headscale/pull/1016) ## 0.17.0 (2022-11-26) @@ -879,13 +911,11 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). protocol. 
- Log level option `log_level` was moved to a distinct `log` config section and renamed to `level` [#768](https://github.com/juanfont/headscale/pull/768) -- Removed Alpine Linux container image - [#962](https://github.com/juanfont/headscale/pull/962) +- Removed Alpine Linux container image [#962](https://github.com/juanfont/headscale/pull/962) ### Important Changes -- Added support for Tailscale TS2021 protocol - [#738](https://github.com/juanfont/headscale/pull/738) +- Added support for Tailscale TS2021 protocol [#738](https://github.com/juanfont/headscale/pull/738) - Add experimental support for [SSH ACL](https://tailscale.com/kb/1018/acls/#tailscale-ssh) (see docs for limitations) [#847](https://github.com/juanfont/headscale/pull/847) @@ -905,81 +935,57 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). ### Changes -- Add ability to specify config location via env var `HEADSCALE_CONFIG` - [#674](https://github.com/juanfont/headscale/issues/674) -- Target Go 1.19 for Headscale - [#778](https://github.com/juanfont/headscale/pull/778) -- Target Tailscale v1.30.0 to build Headscale - [#780](https://github.com/juanfont/headscale/pull/780) +- Add ability to specify config location via env var `HEADSCALE_CONFIG` [#674](https://github.com/juanfont/headscale/issues/674) +- Target Go 1.19 for Headscale [#778](https://github.com/juanfont/headscale/pull/778) +- Target Tailscale v1.30.0 to build Headscale [#780](https://github.com/juanfont/headscale/pull/780) - Give a warning when running Headscale with reverse proxy improperly configured for WebSockets [#788](https://github.com/juanfont/headscale/pull/788) -- Fix subnet routers with Primary Routes - [#811](https://github.com/juanfont/headscale/pull/811) -- Added support for JSON logs - [#653](https://github.com/juanfont/headscale/issues/653) -- Sanitise the node key passed to registration url - [#823](https://github.com/juanfont/headscale/pull/823) -- Add support for generating pre-auth keys with tags - [#767](https://github.com/juanfont/headscale/pull/767) +- Fix subnet routers with Primary Routes [#811](https://github.com/juanfont/headscale/pull/811) +- Added support for JSON logs [#653](https://github.com/juanfont/headscale/issues/653) +- Sanitise the node key passed to registration url [#823](https://github.com/juanfont/headscale/pull/823) +- Add support for generating pre-auth keys with tags [#767](https://github.com/juanfont/headscale/pull/767) - Add support for evaluating `autoApprovers` ACL entries when a machine is registered [#763](https://github.com/juanfont/headscale/pull/763) -- Add config flag to allow Headscale to start if OIDC provider is down - [#829](https://github.com/juanfont/headscale/pull/829) -- Fix prefix length comparison bug in AutoApprovers route evaluation - [#862](https://github.com/juanfont/headscale/pull/862) -- Random node DNS suffix only applied if names collide in namespace. 
- [#766](https://github.com/juanfont/headscale/issues/766) -- Remove `ip_prefix` configuration option and warning - [#899](https://github.com/juanfont/headscale/pull/899) -- Add `dns_config.override_local_dns` option - [#905](https://github.com/juanfont/headscale/pull/905) -- Fix some DNS config issues - [#660](https://github.com/juanfont/headscale/issues/660) -- Make it possible to disable TS2019 with build flag - [#928](https://github.com/juanfont/headscale/pull/928) -- Fix OIDC registration issues - [#960](https://github.com/juanfont/headscale/pull/960) and +- Add config flag to allow Headscale to start if OIDC provider is down [#829](https://github.com/juanfont/headscale/pull/829) +- Fix prefix length comparison bug in AutoApprovers route evaluation [#862](https://github.com/juanfont/headscale/pull/862) +- Random node DNS suffix only applied if names collide in namespace. [#766](https://github.com/juanfont/headscale/issues/766) +- Remove `ip_prefix` configuration option and warning [#899](https://github.com/juanfont/headscale/pull/899) +- Add `dns_config.override_local_dns` option [#905](https://github.com/juanfont/headscale/pull/905) +- Fix some DNS config issues [#660](https://github.com/juanfont/headscale/issues/660) +- Make it possible to disable TS2019 with build flag [#928](https://github.com/juanfont/headscale/pull/928) +- Fix OIDC registration issues [#960](https://github.com/juanfont/headscale/pull/960) and [#971](https://github.com/juanfont/headscale/pull/971) -- Add support for specifying NextDNS DNS-over-HTTPS resolver - [#940](https://github.com/juanfont/headscale/pull/940) -- Make more sslmode available for postgresql connection - [#927](https://github.com/juanfont/headscale/pull/927) +- Add support for specifying NextDNS DNS-over-HTTPS resolver [#940](https://github.com/juanfont/headscale/pull/940) +- Make more sslmode available for postgresql connection [#927](https://github.com/juanfont/headscale/pull/927) ## 0.16.4 (2022-08-21) ### Changes -- Add ability to connect to PostgreSQL over TLS/SSL - [#745](https://github.com/juanfont/headscale/pull/745) -- Fix CLI registration of expired machines - [#754](https://github.com/juanfont/headscale/pull/754) +- Add ability to connect to PostgreSQL over TLS/SSL [#745](https://github.com/juanfont/headscale/pull/745) +- Fix CLI registration of expired machines [#754](https://github.com/juanfont/headscale/pull/754) ## 0.16.3 (2022-08-17) ### Changes -- Fix issue with OIDC authentication - [#747](https://github.com/juanfont/headscale/pull/747) +- Fix issue with OIDC authentication [#747](https://github.com/juanfont/headscale/pull/747) ## 0.16.2 (2022-08-14) ### Changes -- Fixed bugs in the client registration process after migration to NodeKey - [#735](https://github.com/juanfont/headscale/pull/735) +- Fixed bugs in the client registration process after migration to NodeKey [#735](https://github.com/juanfont/headscale/pull/735) ## 0.16.1 (2022-08-12) ### Changes -- Updated dependencies (including the library that lacked armhf support) - [#722](https://github.com/juanfont/headscale/pull/722) -- Fix missing group expansion in function `excludeCorrectlyTaggedNodes` - [#563](https://github.com/juanfont/headscale/issues/563) +- Updated dependencies (including the library that lacked armhf support) [#722](https://github.com/juanfont/headscale/pull/722) +- Fix missing group expansion in function `excludeCorrectlyTaggedNodes` [#563](https://github.com/juanfont/headscale/issues/563) - Improve registration protocol implementation and switch to 
NodeKey as main identifier [#725](https://github.com/juanfont/headscale/pull/725) -- Add ability to connect to PostgreSQL via unix socket - [#734](https://github.com/juanfont/headscale/pull/734) +- Add ability to connect to PostgreSQL via unix socket [#734](https://github.com/juanfont/headscale/pull/734) ## 0.16.0 (2022-07-25) @@ -992,44 +998,30 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). ### Changes -- **Drop** armhf (32-bit ARM) support. - [#609](https://github.com/juanfont/headscale/pull/609) -- Headscale fails to serve if the ACL policy file cannot be parsed - [#537](https://github.com/juanfont/headscale/pull/537) -- Fix labels cardinality error when registering unknown pre-auth key - [#519](https://github.com/juanfont/headscale/pull/519) -- Fix send on closed channel crash in polling - [#542](https://github.com/juanfont/headscale/pull/542) -- Fixed spurious calls to setLastStateChangeToNow from ephemeral nodes - [#566](https://github.com/juanfont/headscale/pull/566) -- Add command for moving nodes between namespaces - [#362](https://github.com/juanfont/headscale/issues/362) +- **Drop** armhf (32-bit ARM) support. [#609](https://github.com/juanfont/headscale/pull/609) +- Headscale fails to serve if the ACL policy file cannot be parsed [#537](https://github.com/juanfont/headscale/pull/537) +- Fix labels cardinality error when registering unknown pre-auth key [#519](https://github.com/juanfont/headscale/pull/519) +- Fix send on closed channel crash in polling [#542](https://github.com/juanfont/headscale/pull/542) +- Fixed spurious calls to setLastStateChangeToNow from ephemeral nodes [#566](https://github.com/juanfont/headscale/pull/566) +- Add command for moving nodes between namespaces [#362](https://github.com/juanfont/headscale/issues/362) - Added more configuration parameters for OpenID Connect (scopes, free-form parameters, domain and user allowlist) -- Add command to set tags on a node - [#525](https://github.com/juanfont/headscale/issues/525) -- Add command to view tags of nodes - [#356](https://github.com/juanfont/headscale/issues/356) -- Add --all (-a) flag to enable routes command - [#360](https://github.com/juanfont/headscale/issues/360) -- Fix issue where nodes was not updated across namespaces - [#560](https://github.com/juanfont/headscale/pull/560) -- Add the ability to rename a nodes name - [#560](https://github.com/juanfont/headscale/pull/560) +- Add command to set tags on a node [#525](https://github.com/juanfont/headscale/issues/525) +- Add command to view tags of nodes [#356](https://github.com/juanfont/headscale/issues/356) +- Add --all (-a) flag to enable routes command [#360](https://github.com/juanfont/headscale/issues/360) +- Fix issue where nodes were not updated across namespaces [#560](https://github.com/juanfont/headscale/pull/560) +- Add the ability to rename a node's name [#560](https://github.com/juanfont/headscale/pull/560) - Node DNS names are now unique, a random suffix will be added when a node joins - This change contains database changes, remember to **backup** your database before upgrading -- Add option to enable/disable logtail (Tailscale's logging infrastructure) - [#596](https://github.com/juanfont/headscale/pull/596) +- Add option to enable/disable logtail (Tailscale's logging infrastructure) [#596](https://github.com/juanfont/headscale/pull/596) - This change disables the logs by default - Use [Prometheus]'s duration parser, supporting days (`d`), weeks (`w`) and years (`y`) 
[#598](https://github.com/juanfont/headscale/pull/598) -- Add support for reloading ACLs with SIGHUP - [#601](https://github.com/juanfont/headscale/pull/601) +- Add support for reloading ACLs with SIGHUP [#601](https://github.com/juanfont/headscale/pull/601) - Use new ACL syntax [#618](https://github.com/juanfont/headscale/pull/618) -- Add -c option to specify config file from command line - [#285](https://github.com/juanfont/headscale/issues/285) +- Add -c option to specify config file from command line [#285](https://github.com/juanfont/headscale/issues/285) [#612](https://github.com/juanfont/headscale/pull/601) - Add configuration option to allow Tailscale clients to use a random WireGuard port. [kb/1181/firewalls](https://tailscale.com/kb/1181/firewalls) @@ -1037,19 +1029,14 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). - Improve obtuse UX regarding missing configuration (`ephemeral_node_inactivity_timeout` not set) [#639](https://github.com/juanfont/headscale/pull/639) -- Fix nodes being shown as 'offline' in `tailscale status` - [#648](https://github.com/juanfont/headscale/pull/648) -- Improve shutdown behaviour - [#651](https://github.com/juanfont/headscale/pull/651) +- Fix nodes being shown as 'offline' in `tailscale status` [#648](https://github.com/juanfont/headscale/pull/648) +- Improve shutdown behaviour [#651](https://github.com/juanfont/headscale/pull/651) - Drop Gin as web framework in Headscale [648](https://github.com/juanfont/headscale/pull/648) [677](https://github.com/juanfont/headscale/pull/677) -- Make tailnet node updates check interval configurable - [#675](https://github.com/juanfont/headscale/pull/675) -- Fix regression with HTTP API - [#684](https://github.com/juanfont/headscale/pull/684) -- nodes ls now print both Hostname and Name(Issue - [#647](https://github.com/juanfont/headscale/issues/647) PR +- Make tailnet node updates check interval configurable [#675](https://github.com/juanfont/headscale/pull/675) +- Fix regression with HTTP API [#684](https://github.com/juanfont/headscale/pull/684) +- nodes ls now print both Hostname and Name(Issue [#647](https://github.com/juanfont/headscale/issues/647) PR [#687](https://github.com/juanfont/headscale/pull/687)) ## 0.15.0 (2022-03-20) @@ -1061,8 +1048,7 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). - Boundaries between Namespaces has been removed and all nodes can communicate by default [#357](https://github.com/juanfont/headscale/pull/357) - To limit access between nodes, use [ACLs](./docs/ref/acls.md). -- `/metrics` is now a configurable host:port endpoint: - [#344](https://github.com/juanfont/headscale/pull/344). You must update your +- `/metrics` is now a configurable host:port endpoint: [#344](https://github.com/juanfont/headscale/pull/344). You must update your `config.yaml` file to include: ```yaml metrics_listen_addr: 127.0.0.1:9090 @@ -1070,23 +1056,18 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). 
### Features -- Add support for writing ACL files with YAML - [#359](https://github.com/juanfont/headscale/pull/359) -- Users can now use emails in ACL's groups - [#372](https://github.com/juanfont/headscale/issues/372) -- Add shorthand aliases for commands and subcommands - [#376](https://github.com/juanfont/headscale/pull/376) +- Add support for writing ACL files with YAML [#359](https://github.com/juanfont/headscale/pull/359) +- Users can now use emails in ACL's groups [#372](https://github.com/juanfont/headscale/issues/372) +- Add shorthand aliases for commands and subcommands [#376](https://github.com/juanfont/headscale/pull/376) - Add `/windows` endpoint for Windows configuration instructions + registry file download [#392](https://github.com/juanfont/headscale/pull/392) -- Added embedded DERP (and STUN) server into Headscale - [#388](https://github.com/juanfont/headscale/pull/388) +- Added embedded DERP (and STUN) server into Headscale [#388](https://github.com/juanfont/headscale/pull/388) ### Changes - Fix a bug were the same IP could be assigned to multiple hosts if joined in quick succession [#346](https://github.com/juanfont/headscale/pull/346) -- Simplify the code behind registration of machines - [#366](https://github.com/juanfont/headscale/pull/366) +- Simplify the code behind registration of machines [#366](https://github.com/juanfont/headscale/pull/366) - Nodes are now only written to database if they are registered successfully - Fix a limitation in the ACLs that prevented users to write rules with `*` as source [#374](https://github.com/juanfont/headscale/issues/374) @@ -1095,8 +1076,7 @@ part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460). [#371](https://github.com/juanfont/headscale/pull/371) - Apply normalization function to FQDN on hostnames when hosts registers and retrieve information [#363](https://github.com/juanfont/headscale/issues/363) -- Fix a bug that prevented the use of `tailscale logout` with OIDC - [#508](https://github.com/juanfont/headscale/issues/508) +- Fix a bug that prevented the use of `tailscale logout` with OIDC [#508](https://github.com/juanfont/headscale/issues/508) - Added Tailscale repo HEAD and unstable releases channel to the integration tests targets [#513](https://github.com/juanfont/headscale/pull/513) @@ -1123,13 +1103,11 @@ behaviour. ### Features -- Add support for configurable mTLS [docs](./docs/ref/tls.md) - [#297](https://github.com/juanfont/headscale/pull/297) +- Add support for configurable mTLS [docs](./docs/ref/tls.md) [#297](https://github.com/juanfont/headscale/pull/297) ### Changes -- Remove dependency on CGO (switch from CGO SQLite to pure Go) - [#346](https://github.com/juanfont/headscale/pull/346) +- Remove dependency on CGO (switch from CGO SQLite to pure Go) [#346](https://github.com/juanfont/headscale/pull/346) **0.13.0 (2022-02-18):** @@ -1138,7 +1116,7 @@ behaviour. - Add IPv6 support to the prefix assigned to namespaces - Add API Key support - Enable remote control of `headscale` via CLI - [docs](./docs/ref/remote-cli.md) + [docs](./docs/ref/api.md#grpc) - Enable HTTP API (beta, subject to change) - OpenID Connect users will be mapped per namespaces - Each user will get its own namespace, created if it does not exist @@ -1148,25 +1126,18 @@ behaviour. 
### Changes -- `ip_prefix` is now superseded by `ip_prefixes` in the configuration - [#208](https://github.com/juanfont/headscale/pull/208) -- Upgrade `tailscale` (1.20.4) and other dependencies to latest - [#314](https://github.com/juanfont/headscale/pull/314) -- fix swapped machine<->namespace labels in `/metrics` - [#312](https://github.com/juanfont/headscale/pull/312) -- remove key-value based update mechanism for namespace changes - [#316](https://github.com/juanfont/headscale/pull/316) +- `ip_prefix` is now superseded by `ip_prefixes` in the configuration [#208](https://github.com/juanfont/headscale/pull/208) +- Upgrade `tailscale` (1.20.4) and other dependencies to latest [#314](https://github.com/juanfont/headscale/pull/314) +- fix swapped machine<->namespace labels in `/metrics` [#312](https://github.com/juanfont/headscale/pull/312) +- remove key-value based update mechanism for namespace changes [#316](https://github.com/juanfont/headscale/pull/316) **0.12.4 (2022-01-29):** ### Changes -- Make gRPC Unix Socket permissions configurable - [#292](https://github.com/juanfont/headscale/pull/292) -- Trim whitespace before reading Private Key from file - [#289](https://github.com/juanfont/headscale/pull/289) -- Add new command to generate a private key for `headscale` - [#290](https://github.com/juanfont/headscale/pull/290) +- Make gRPC Unix Socket permissions configurable [#292](https://github.com/juanfont/headscale/pull/292) +- Trim whitespace before reading Private Key from file [#289](https://github.com/juanfont/headscale/pull/289) +- Add new command to generate a private key for `headscale` [#290](https://github.com/juanfont/headscale/pull/290) - Fixed issue where hosts deleted from control server may be written back to the database, as long as they are connected to the control server [#278](https://github.com/juanfont/headscale/pull/278) @@ -1176,8 +1147,7 @@ behaviour. ### Changes - Added Alpine container [#270](https://github.com/juanfont/headscale/pull/270) -- Minor updates in dependencies - [#271](https://github.com/juanfont/headscale/pull/271) +- Minor updates in dependencies [#271](https://github.com/juanfont/headscale/pull/271) ## 0.12.2 (2022-01-11) @@ -1196,8 +1166,7 @@ tagging) ### BREAKING -- Upgrade to Tailscale 1.18 - [#229](https://github.com/juanfont/headscale/pull/229) +- Upgrade to Tailscale 1.18 [#229](https://github.com/juanfont/headscale/pull/229) - This change requires a new format for private key, private keys are now generated automatically: 1. 
Delete your current key @@ -1206,25 +1175,19 @@ tagging) ### Changes -- Unify configuration example - [#197](https://github.com/juanfont/headscale/pull/197) -- Add stricter linting and formatting - [#223](https://github.com/juanfont/headscale/pull/223) +- Unify configuration example [#197](https://github.com/juanfont/headscale/pull/197) +- Add stricter linting and formatting [#223](https://github.com/juanfont/headscale/pull/223) ### Features -- Add gRPC and HTTP API (HTTP API is currently disabled) - [#204](https://github.com/juanfont/headscale/pull/204) -- Use gRPC between the CLI and the server - [#206](https://github.com/juanfont/headscale/pull/206), +- Add gRPC and HTTP API (HTTP API is currently disabled) [#204](https://github.com/juanfont/headscale/pull/204) +- Use gRPC between the CLI and the server [#206](https://github.com/juanfont/headscale/pull/206), [#212](https://github.com/juanfont/headscale/pull/212) -- Beta OpenID Connect support - [#126](https://github.com/juanfont/headscale/pull/126), +- Beta OpenID Connect support [#126](https://github.com/juanfont/headscale/pull/126), [#227](https://github.com/juanfont/headscale/pull/227) ## 0.11.0 (2021-10-25) ### BREAKING -- Make headscale fetch DERP map from URL and file - [#196](https://github.com/juanfont/headscale/pull/196) +- Make headscale fetch DERP map from URL and file [#196](https://github.com/juanfont/headscale/pull/196) diff --git a/CLAUDE.md b/CLAUDE.md index d4034367..43c994c2 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -1,531 +1 @@ -# CLAUDE.md - -This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. - -## Overview - -Headscale is an open-source implementation of the Tailscale control server written in Go. It provides self-hosted coordination for Tailscale networks (tailnets), managing node registration, IP allocation, policy enforcement, and DERP routing. - -## Development Commands - -### Quick Setup -```bash -# Recommended: Use Nix for dependency management -nix develop - -# Full development workflow -make dev # runs fmt + lint + test + build -``` - -### Essential Commands -```bash -# Build headscale binary -make build - -# Run tests -make test -go test ./... # All unit tests -go test -race ./... 
# With race detection - -# Run specific integration test -go run ./cmd/hi run "TestName" --postgres - -# Code formatting and linting -make fmt # Format all code (Go, docs, proto) -make lint # Lint all code (Go, proto) -make fmt-go # Format Go code only -make lint-go # Lint Go code only - -# Protocol buffer generation (after modifying proto/) -make generate - -# Clean build artifacts -make clean -``` - -### Integration Testing -```bash -# Use the hi (Headscale Integration) test runner -go run ./cmd/hi doctor # Check system requirements -go run ./cmd/hi run "TestPattern" # Run specific test -go run ./cmd/hi run "TestPattern" --postgres # With PostgreSQL backend - -# Test artifacts are saved to control_logs/ with logs and debug data -``` - -## Project Structure & Architecture - -### Top-Level Organization - -``` -headscale/ -├── cmd/ # Command-line applications -│ ├── headscale/ # Main headscale server binary -│ └── hi/ # Headscale Integration test runner -├── hscontrol/ # Core control plane logic -├── integration/ # End-to-end Docker-based tests -├── proto/ # Protocol buffer definitions -├── gen/ # Generated code (protobuf) -├── docs/ # Documentation -└── packaging/ # Distribution packaging -``` - -### Core Packages (`hscontrol/`) - -**Main Server (`hscontrol/`)** -- `app.go`: Application setup, dependency injection, server lifecycle -- `handlers.go`: HTTP/gRPC API endpoints for management operations -- `grpcv1.go`: gRPC service implementation for headscale API -- `poll.go`: **Critical** - Handles Tailscale MapRequest/MapResponse protocol -- `noise.go`: Noise protocol implementation for secure client communication -- `auth.go`: Authentication flows (web, OIDC, command-line) -- `oidc.go`: OpenID Connect integration for user authentication - -**State Management (`hscontrol/state/`)** -- `state.go`: Central coordinator for all subsystems (database, policy, IP allocation, DERP) -- `node_store.go`: **Performance-critical** - In-memory cache with copy-on-write semantics -- Thread-safe operations with deadlock detection -- Coordinates between database persistence and real-time operations - -**Database Layer (`hscontrol/db/`)** -- `db.go`: Database abstraction, GORM setup, migration management -- `node.go`: Node lifecycle, registration, expiration, IP assignment -- `users.go`: User management, namespace isolation -- `api_key.go`: API authentication tokens -- `preauth_keys.go`: Pre-authentication keys for automated node registration -- `ip.go`: IP address allocation and management -- `policy.go`: Policy storage and retrieval -- Schema migrations in `schema.sql` with extensive test data coverage - -**Policy Engine (`hscontrol/policy/`)** -- `policy.go`: Core ACL evaluation logic, HuJSON parsing -- `v2/`: Next-generation policy system with improved filtering -- `matcher/`: ACL rule matching and evaluation engine -- Determines peer visibility, route approval, and network access rules -- Supports both file-based and database-stored policies - -**Network Management (`hscontrol/`)** -- `derp/`: DERP (Designated Encrypted Relay for Packets) server implementation - - NAT traversal when direct connections fail - - Fallback relay for firewall-restricted environments -- `mapper/`: Converts internal Headscale state to Tailscale's wire protocol format - - `tail.go`: Tailscale-specific data structure generation -- `routes/`: Subnet route management and primary route selection -- `dns/`: DNS record management and MagicDNS implementation - -**Utilities & Support (`hscontrol/`)** -- `types/`: Core data structures, 
configuration, validation -- `util/`: Helper functions for networking, DNS, key management -- `templates/`: Client configuration templates (Apple, Windows, etc.) -- `notifier/`: Event notification system for real-time updates -- `metrics.go`: Prometheus metrics collection -- `capver/`: Tailscale capability version management - -### Key Subsystem Interactions - -**Node Registration Flow** -1. **Client Connection**: `noise.go` handles secure protocol handshake -2. **Authentication**: `auth.go` validates credentials (web/OIDC/preauth) -3. **State Creation**: `state.go` coordinates IP allocation via `db/ip.go` -4. **Storage**: `db/node.go` persists node, `NodeStore` caches in memory -5. **Network Setup**: `mapper/` generates initial Tailscale network map - -**Ongoing Operations** -1. **Poll Requests**: `poll.go` receives periodic client updates -2. **State Updates**: `NodeStore` maintains real-time node information -3. **Policy Application**: `policy/` evaluates ACL rules for peer relationships -4. **Map Distribution**: `mapper/` sends network topology to all affected clients - -**Route Management** -1. **Advertisement**: Clients announce routes via `poll.go` Hostinfo updates -2. **Storage**: `db/` persists routes, `NodeStore` caches for performance -3. **Approval**: `policy/` auto-approves routes based on ACL rules -4. **Distribution**: `routes/` selects primary routes, `mapper/` distributes to peers - -### Command-Line Tools (`cmd/`) - -**Main Server (`cmd/headscale/`)** -- `headscale.go`: CLI parsing, configuration loading, server startup -- Supports daemon mode, CLI operations (user/node management), database operations - -**Integration Test Runner (`cmd/hi/`)** -- `main.go`: Test execution framework with Docker orchestration -- `run.go`: Individual test execution with artifact collection -- `doctor.go`: System requirements validation -- `docker.go`: Container lifecycle management -- Essential for validating changes against real Tailscale clients - -### Generated & External Code - -**Protocol Buffers (`proto/` → `gen/`)** -- Defines gRPC API for headscale management operations -- Client libraries can generate from these definitions -- Run `make generate` after modifying `.proto` files - -**Integration Testing (`integration/`)** -- `scenario.go`: Docker test environment setup -- `tailscale.go`: Tailscale client container management -- Individual test files for specific functionality areas -- Real end-to-end validation with network isolation - -### Critical Performance Paths - -**High-Frequency Operations** -1. **MapRequest Processing** (`poll.go`): Every 15-60 seconds per client -2. **NodeStore Reads** (`node_store.go`): Every operation requiring node data -3. **Policy Evaluation** (`policy/`): On every peer relationship calculation -4. 
**Route Lookups** (`routes/`): During network map generation - -**Database Write Patterns** -- **Frequent**: Node heartbeats, endpoint updates, route changes -- **Moderate**: User operations, policy updates, API key management -- **Rare**: Schema migrations, bulk operations - -### Configuration & Deployment - -**Configuration** (`hscontrol/types/config.go`)** -- Database connection settings (SQLite/PostgreSQL) -- Network configuration (IP ranges, DNS settings) -- Policy mode (file vs database) -- DERP relay configuration -- OIDC provider settings - -**Key Dependencies** -- **GORM**: Database ORM with migration support -- **Tailscale Libraries**: Core networking and protocol code -- **Zerolog**: Structured logging throughout the application -- **Buf**: Protocol buffer toolchain for code generation - -### Development Workflow Integration - -The architecture supports incremental development: -- **Unit Tests**: Focus on individual packages (`*_test.go` files) -- **Integration Tests**: Validate cross-component interactions -- **Database Tests**: Extensive migration and data integrity validation -- **Policy Tests**: ACL rule evaluation and edge cases -- **Performance Tests**: NodeStore and high-frequency operation validation - -## Integration Testing System - -### Overview -Headscale uses Docker-based integration tests with real Tailscale clients to validate end-to-end functionality. The integration test system is complex and requires specialized knowledge for effective execution and debugging. - -### **MANDATORY: Use the headscale-integration-tester Agent** - -**CRITICAL REQUIREMENT**: For ANY integration test execution, analysis, troubleshooting, or validation, you MUST use the `headscale-integration-tester` agent. This agent contains specialized knowledge about: - -- Test execution strategies and timing requirements -- Infrastructure vs code issue distinction (99% vs 1% failure patterns) -- Security-critical debugging rules and forbidden practices -- Comprehensive artifact analysis workflows -- Real-world failure patterns from HA debugging experiences - -### Quick Reference Commands - -```bash -# Check system requirements (always run first) -go run ./cmd/hi doctor - -# Run single test (recommended for development) -go run ./cmd/hi run "TestName" - -# Use PostgreSQL for database-heavy tests -go run ./cmd/hi run "TestName" --postgres - -# Pattern matching for related tests -go run ./cmd/hi run "TestPattern*" -``` - -**Critical Notes**: -- Only ONE test can run at a time (Docker port conflicts) -- Tests generate ~100MB of logs per run in `control_logs/` -- Clean environment before each test: `rm -rf control_logs/202507* && docker system prune -f` - -### Test Artifacts Location -All test runs save comprehensive debugging artifacts to `control_logs/TIMESTAMP-ID/` including server logs, client logs, database dumps, MapResponse protocol data, and Prometheus metrics. - -**For all integration test work, use the headscale-integration-tester agent - it contains the complete knowledge needed for effective testing and debugging.** - -## NodeStore Implementation Details - -**Key Insight from Recent Work**: The NodeStore is a critical performance optimization that caches node data in memory while ensuring consistency with the database. When working with route advertisements or node state changes: - -1. **Timing Considerations**: Route advertisements need time to propagate from clients to server. Use `require.EventuallyWithT()` patterns in tests instead of immediate assertions. - -2. 
**Synchronization Points**: NodeStore updates happen at specific points like `poll.go:420` after Hostinfo changes. Ensure these are maintained when modifying the polling logic. - -3. **Peer Visibility**: The NodeStore's `peersFunc` determines which nodes are visible to each other. Policy-based filtering is separate from monitoring visibility - expired nodes should remain visible for debugging but marked as expired. - -## Testing Guidelines - -### Integration Test Patterns - -#### **CRITICAL: EventuallyWithT Pattern for External Calls** - -**All external calls in integration tests MUST be wrapped in EventuallyWithT blocks** to handle eventual consistency in distributed systems. External calls include: -- `client.Status()` - Getting Tailscale client status -- `client.Curl()` - Making HTTP requests through clients -- `client.Traceroute()` - Running network diagnostics -- `headscale.ListNodes()` - Querying headscale server state -- Any other calls that interact with external systems or network operations - -**Key Rules**: -1. **Never use bare `require.NoError(t, err)` with external calls** - Always wrap in EventuallyWithT -2. **Keep related assertions together** - If multiple assertions depend on the same external call, keep them in the same EventuallyWithT block -3. **Split unrelated external calls** - Different external calls should be in separate EventuallyWithT blocks -4. **Never nest EventuallyWithT calls** - Each EventuallyWithT should be at the same level -5. **Declare shared variables at function scope** - Variables used across multiple EventuallyWithT blocks must be declared before first use - -**Examples**: - -```go -// CORRECT: External call wrapped in EventuallyWithT -assert.EventuallyWithT(t, func(c *assert.CollectT) { - status, err := client.Status() - assert.NoError(c, err) - - // Related assertions using the same status call - for _, peerKey := range status.Peers() { - peerStatus := status.Peer[peerKey] - assert.NotNil(c, peerStatus.PrimaryRoutes) - requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedRoutes) - } -}, 5*time.Second, 200*time.Millisecond, "Verifying client status and routes") - -// INCORRECT: Bare external call without EventuallyWithT -status, err := client.Status() // ❌ Will fail intermittently -require.NoError(t, err) - -// CORRECT: Separate EventuallyWithT for different external calls -// First external call - headscale.ListNodes() -assert.EventuallyWithT(t, func(c *assert.CollectT) { - nodes, err := headscale.ListNodes() - assert.NoError(c, err) - assert.Len(c, nodes, 2) - requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2) -}, 10*time.Second, 500*time.Millisecond, "route state changes should propagate to nodes") - -// Second external call - client.Status() -assert.EventuallyWithT(t, func(c *assert.CollectT) { - status, err := client.Status() - assert.NoError(c, err) - - for _, peerKey := range status.Peers() { - peerStatus := status.Peer[peerKey] - requirePeerSubnetRoutesWithCollect(c, peerStatus, []netip.Prefix{tsaddr.AllIPv4(), tsaddr.AllIPv6()}) - } -}, 10*time.Second, 500*time.Millisecond, "routes should be visible to client") - -// INCORRECT: Multiple unrelated external calls in same EventuallyWithT -assert.EventuallyWithT(t, func(c *assert.CollectT) { - nodes, err := headscale.ListNodes() // ❌ First external call - assert.NoError(c, err) - - status, err := client.Status() // ❌ Different external call - should be separate - assert.NoError(c, err) -}, 10*time.Second, 500*time.Millisecond, "mixed calls") - -// CORRECT: Variable scoping for shared 
data -var ( - srs1, srs2, srs3 *ipnstate.Status - clientStatus *ipnstate.Status - srs1PeerStatus *ipnstate.PeerStatus -) - -assert.EventuallyWithT(t, func(c *assert.CollectT) { - srs1 = subRouter1.MustStatus() // = not := - srs2 = subRouter2.MustStatus() - clientStatus = client.MustStatus() - - srs1PeerStatus = clientStatus.Peer[srs1.Self.PublicKey] - // assertions... -}, 5*time.Second, 200*time.Millisecond, "checking router status") - -// CORRECT: Wrapping client operations -assert.EventuallyWithT(t, func(c *assert.CollectT) { - result, err := client.Curl(weburl) - assert.NoError(c, err) - assert.Len(c, result, 13) -}, 5*time.Second, 200*time.Millisecond, "Verifying HTTP connectivity") - -assert.EventuallyWithT(t, func(c *assert.CollectT) { - tr, err := client.Traceroute(webip) - assert.NoError(c, err) - assertTracerouteViaIPWithCollect(c, tr, expectedRouter.MustIPv4()) -}, 5*time.Second, 200*time.Millisecond, "Verifying network path") -``` - -**Helper Functions**: -- Use `requirePeerSubnetRoutesWithCollect` instead of `requirePeerSubnetRoutes` inside EventuallyWithT -- Use `requireNodeRouteCountWithCollect` instead of `requireNodeRouteCount` inside EventuallyWithT -- Use `assertTracerouteViaIPWithCollect` instead of `assertTracerouteViaIP` inside EventuallyWithT - -```go -// Node route checking by actual node properties, not array position -var routeNode *v1.Node -for _, node := range nodes { - if nodeIDStr := fmt.Sprintf("%d", node.GetId()); expectedRoutes[nodeIDStr] != "" { - routeNode = node - break - } -} -``` - -### Running Problematic Tests -- Some tests require significant time (e.g., `TestNodeOnlineStatus` runs for 12 minutes) -- Infrastructure issues like disk space can cause test failures unrelated to code changes -- Use `--postgres` flag when testing database-heavy scenarios - -## Quality Assurance and Testing Requirements - -### **MANDATORY: Always Use Specialized Testing Agents** - -**CRITICAL REQUIREMENT**: For ANY task involving testing, quality assurance, review, or validation, you MUST use the appropriate specialized agent at the END of your task list. This ensures comprehensive quality validation and prevents regressions. - -**Required Agents for Different Task Types**: - -1. **Integration Testing**: Use `headscale-integration-tester` agent for: - - Running integration tests with `cmd/hi` - - Analyzing test failures and artifacts - - Troubleshooting Docker-based test infrastructure - - Validating end-to-end functionality changes - -2. **Quality Control**: Use `quality-control-enforcer` agent for: - - Code review and validation - - Ensuring best practices compliance - - Preventing common pitfalls and anti-patterns - - Validating architectural decisions - -**Agent Usage Pattern**: Always add the appropriate agent as the FINAL step in any task list to ensure quality validation occurs after all work is complete. - -### Integration Test Debugging Reference - -Test artifacts are preserved in `control_logs/TIMESTAMP-ID/` including: -- Headscale server logs (stderr/stdout) -- Tailscale client logs and status -- Database dumps and network captures -- MapResponse JSON files for protocol debugging - -**For integration test issues, ALWAYS use the headscale-integration-tester agent - do not attempt manual debugging.** - -## EventuallyWithT Pattern for Integration Tests - -### Overview -EventuallyWithT is a testing pattern used to handle eventual consistency in distributed systems. 
In Headscale integration tests, many operations are asynchronous - clients advertise routes, the server processes them, updates propagate through the network. EventuallyWithT allows tests to wait for these operations to complete while making assertions. - -### External Calls That Must Be Wrapped -The following operations are **external calls** that interact with the headscale server or tailscale clients and MUST be wrapped in EventuallyWithT: -- `headscale.ListNodes()` - Queries server state -- `client.Status()` - Gets client network status -- `client.Curl()` - Makes HTTP requests through the network -- `client.Traceroute()` - Performs network diagnostics -- `client.Execute()` when running commands that query state -- Any operation that reads from the headscale server or tailscale client - -### Operations That Must NOT Be Wrapped -The following are **blocking operations** that modify state and should NOT be wrapped in EventuallyWithT: -- `tailscale set` commands (e.g., `--advertise-routes`, `--exit-node`) -- Any command that changes configuration or state -- Use `client.MustStatus()` instead of `client.Status()` when you just need the ID for a blocking operation - -### Five Key Rules for EventuallyWithT - -1. **One External Call Per EventuallyWithT Block** - - Each EventuallyWithT should make ONE external call (e.g., ListNodes OR Status) - - Related assertions based on that single call can be grouped together - - Unrelated external calls must be in separate EventuallyWithT blocks - -2. **Variable Scoping** - - Declare variables that need to be shared across EventuallyWithT blocks at function scope - - Use `=` for assignment inside EventuallyWithT, not `:=` (unless the variable is only used within that block) - - Variables declared with `:=` inside EventuallyWithT are not accessible outside - -3. **No Nested EventuallyWithT** - - NEVER put an EventuallyWithT inside another EventuallyWithT - - This is a critical anti-pattern that must be avoided - -4. **Use CollectT for Assertions** - - Inside EventuallyWithT, use `assert` methods with the CollectT parameter - - Helper functions called within EventuallyWithT must accept `*assert.CollectT` - -5. 
**Descriptive Messages** - - Always provide a descriptive message as the last parameter - - Message should explain what condition is being waited for - -### Correct Pattern Examples - -```go -// CORRECT: Blocking operation NOT wrapped -for _, client := range allClients { - status := client.MustStatus() - command := []string{ - "tailscale", - "set", - "--advertise-routes=" + expectedRoutes[string(status.Self.ID)], - } - _, _, err = client.Execute(command) - require.NoErrorf(t, err, "failed to advertise route: %s", err) -} - -// CORRECT: Single external call with related assertions -var nodes []*v1.Node -assert.EventuallyWithT(t, func(c *assert.CollectT) { - nodes, err = headscale.ListNodes() - assert.NoError(c, err) - assert.Len(c, nodes, 2) - requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2) -}, 10*time.Second, 500*time.Millisecond, "nodes should have expected route counts") - -// CORRECT: Separate EventuallyWithT for different external call -assert.EventuallyWithT(t, func(c *assert.CollectT) { - status, err := client.Status() - assert.NoError(c, err) - for _, peerKey := range status.Peers() { - peerStatus := status.Peer[peerKey] - requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes) - } -}, 10*time.Second, 500*time.Millisecond, "client should see expected routes") -``` - -### Incorrect Patterns to Avoid - -```go -// INCORRECT: Blocking operation wrapped in EventuallyWithT -assert.EventuallyWithT(t, func(c *assert.CollectT) { - status, err := client.Status() - assert.NoError(c, err) - - // This is a blocking operation - should NOT be in EventuallyWithT! - command := []string{ - "tailscale", - "set", - "--advertise-routes=" + expectedRoutes[string(status.Self.ID)], - } - _, _, err = client.Execute(command) - assert.NoError(c, err) -}, 5*time.Second, 200*time.Millisecond, "wrong pattern") - -// INCORRECT: Multiple unrelated external calls in same EventuallyWithT -assert.EventuallyWithT(t, func(c *assert.CollectT) { - // First external call - nodes, err := headscale.ListNodes() - assert.NoError(c, err) - assert.Len(c, nodes, 2) - - // Second unrelated external call - WRONG! - status, err := client.Status() - assert.NoError(c, err) - assert.NotNil(c, status) -}, 10*time.Second, 500*time.Millisecond, "mixed operations") -``` - -## Important Notes - -- **Dependencies**: Use `nix develop` for consistent toolchain (Go, buf, protobuf tools, linting) -- **Protocol Buffers**: Changes to `proto/` require `make generate` and should be committed separately -- **Code Style**: Enforced via golangci-lint with golines (width 88) and gofumpt formatting -- **Database**: Supports both SQLite (development) and PostgreSQL (production/testing) -- **Integration Tests**: Require Docker and can consume significant disk space - use headscale-integration-tester agent -- **Performance**: NodeStore optimizations are critical for scale - be careful with changes to state management -- **Quality Assurance**: Always use appropriate specialized agents for testing and validation tasks -- **NEVER create gists in the user's name**: Do not use the `create_gist` tool - present information directly in the response instead +@AGENTS.md diff --git a/Dockerfile.integration b/Dockerfile.integration index 72becdf9..341067e5 100644 --- a/Dockerfile.integration +++ b/Dockerfile.integration @@ -2,28 +2,43 @@ # and are in no way endorsed by Headscale's maintainers as an # official nor supported release or distribution. 
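+# The image now builds in two stages: a Go builder stage that compiles headscale
+# with debug symbols and installs delve, and a slim Debian runtime stage that
+# copies in the binaries and source tree needed for source-level debugging.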
-FROM docker.io/golang:1.25-trixie +FROM docker.io/golang:1.25-trixie AS builder ARG VERSION=dev ENV GOPATH /go WORKDIR /go/src/headscale -RUN apt-get --update install --no-install-recommends --yes less jq sqlite3 dnsutils \ - && rm -rf /var/lib/apt/lists/* \ - && apt-get clean -RUN mkdir -p /var/run/headscale - -# Install delve debugger +# Install delve debugger first - rarely changes, good cache candidate RUN go install github.com/go-delve/delve/cmd/dlv@latest +# Download dependencies - only invalidated when go.mod/go.sum change COPY go.mod go.sum /go/src/headscale/ RUN go mod download +# Copy source and build - invalidated on any source change COPY . . # Build debug binary with debug symbols for delve RUN CGO_ENABLED=0 GOOS=linux go build -gcflags="all=-N -l" -o /go/bin/headscale ./cmd/headscale +# Runtime stage +FROM debian:trixie-slim + +RUN apt-get --update install --no-install-recommends --yes \ + bash ca-certificates curl dnsutils findutils iproute2 jq less procps python3 sqlite3 \ + && apt-get dist-clean + +RUN mkdir -p /var/run/headscale + +# Copy binaries from builder +COPY --from=builder /go/bin/headscale /usr/local/bin/headscale +COPY --from=builder /go/bin/dlv /usr/local/bin/dlv + +# Copy source code for delve source-level debugging +COPY --from=builder /go/src/headscale /go/src/headscale + +WORKDIR /go/src/headscale + # Need to reset the entrypoint or everything will run as a busybox script ENTRYPOINT [] EXPOSE 8080/tcp 40000/tcp -CMD ["/go/bin/dlv", "--listen=0.0.0.0:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/go/bin/headscale", "--"] +CMD ["dlv", "--listen=0.0.0.0:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/usr/local/bin/headscale", "--"] diff --git a/Dockerfile.integration-ci b/Dockerfile.integration-ci new file mode 100644 index 00000000..e55ab7b9 --- /dev/null +++ b/Dockerfile.integration-ci @@ -0,0 +1,17 @@ +# Minimal CI image - expects pre-built headscale binary in build context +# For local development with delve debugging, use Dockerfile.integration instead + +FROM debian:trixie-slim + +RUN apt-get --update install --no-install-recommends --yes \ + bash ca-certificates curl dnsutils findutils iproute2 jq less procps python3 sqlite3 \ + && apt-get dist-clean + +RUN mkdir -p /var/run/headscale + +# Copy pre-built headscale binary from build context +COPY headscale /usr/local/bin/headscale + +ENTRYPOINT [] +EXPOSE 8080/tcp +CMD ["/usr/local/bin/headscale"] diff --git a/Dockerfile.tailscale-HEAD b/Dockerfile.tailscale-HEAD index 240d528b..96edf72c 100644 --- a/Dockerfile.tailscale-HEAD +++ b/Dockerfile.tailscale-HEAD @@ -37,7 +37,9 @@ RUN GOARCH=$TARGETARCH go install -tags="${BUILD_TAGS}" -ldflags="\ -v ./cmd/tailscale ./cmd/tailscaled ./cmd/containerboot FROM alpine:3.22 -RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl +# Upstream: ca-certificates ip6tables iptables iproute2 +# Tests: curl python3 (traceroute via BusyBox) +RUN apk add --no-cache ca-certificates curl ip6tables iptables iproute2 python3 COPY --from=build-env /go/bin/* /usr/local/bin/ # For compat with the previous run.sh, although ideally you should be diff --git a/Makefile b/Makefile index d9b2c76b..1e08cda9 100644 --- a/Makefile +++ b/Makefile @@ -64,7 +64,6 @@ fmt-go: check-deps $(GO_SOURCES) fmt-prettier: check-deps $(DOC_SOURCES) @echo "Formatting documentation and config files..." 
prettier --write '**/*.{ts,js,md,yaml,yml,sass,css,scss,html}' - prettier --write --print-width 80 --prose-wrap always CHANGELOG.md .PHONY: fmt-proto fmt-proto: check-deps $(PROTO_SOURCES) @@ -117,7 +116,7 @@ help: @echo "" @echo "Specific targets:" @echo " fmt-go - Format Go code only" - @echo " fmt-prettier - Format documentation only" + @echo " fmt-prettier - Format documentation only" @echo " fmt-proto - Format Protocol Buffer files only" @echo " lint-go - Lint Go code only" @echo " lint-proto - Lint Protocol Buffer files only" @@ -126,4 +125,4 @@ help: @echo " check-deps - Verify required tools are available" @echo "" @echo "Note: If not running in a nix shell, ensure dependencies are available:" - @echo " nix develop" \ No newline at end of file + @echo " nix develop" diff --git a/README.md b/README.md index 61a2c92c..61eb68c5 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -![headscale logo](./docs/logo/headscale3_header_stacked_left.png) +![headscale logo](./docs/assets/logo/headscale3_header_stacked_left.png) ![ci](https://github.com/juanfont/headscale/actions/workflows/test.yml/badge.svg) @@ -63,6 +63,8 @@ and container to run Headscale.** Please have a look at the [`documentation`](https://headscale.net/stable/). +For NixOS users, a module is available in [`nix/`](./nix/). + ## Talks - Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/) @@ -147,6 +149,7 @@ make build We recommend using Nix for dependency management to ensure you have all required tools. If you prefer to manage dependencies yourself, you can use Make directly: **With Nix (recommended):** + ```shell nix develop make test @@ -154,6 +157,7 @@ make build ``` **With your own dependencies:** + ```shell make test make build diff --git a/cmd/headscale/cli/api_key.go b/cmd/headscale/cli/api_key.go index bd839b7b..d821b290 100644 --- a/cmd/headscale/cli/api_key.go +++ b/cmd/headscale/cli/api_key.go @@ -9,7 +9,6 @@ import ( "github.com/juanfont/headscale/hscontrol/util" "github.com/prometheus/common/model" "github.com/pterm/pterm" - "github.com/rs/zerolog/log" "github.com/spf13/cobra" "google.golang.org/protobuf/types/known/timestamppb" ) @@ -29,15 +28,11 @@ func init() { apiKeysCmd.AddCommand(createAPIKeyCmd) expireAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix") - if err := expireAPIKeyCmd.MarkFlagRequired("prefix"); err != nil { - log.Fatal().Err(err).Msg("") - } + expireAPIKeyCmd.Flags().Uint64P("id", "i", 0, "ApiKey ID") apiKeysCmd.AddCommand(expireAPIKeyCmd) deleteAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix") - if err := deleteAPIKeyCmd.MarkFlagRequired("prefix"); err != nil { - log.Fatal().Err(err).Msg("") - } + deleteAPIKeyCmd.Flags().Uint64P("id", "i", 0, "ApiKey ID") apiKeysCmd.AddCommand(deleteAPIKeyCmd) } @@ -154,11 +149,20 @@ var expireAPIKeyCmd = &cobra.Command{ Run: func(cmd *cobra.Command, args []string) { output, _ := cmd.Flags().GetString("output") - prefix, err := cmd.Flags().GetString("prefix") - if err != nil { + id, _ := cmd.Flags().GetUint64("id") + prefix, _ := cmd.Flags().GetString("prefix") + + switch { + case id == 0 && prefix == "": ErrorOutput( - err, - fmt.Sprintf("Error getting prefix from CLI flag: %s", err), + errMissingParameter, + "Either --id or --prefix must be provided", + output, + ) + case id != 0 && prefix != "": + ErrorOutput( + errMissingParameter, + "Only one of --id or --prefix can be provided", output, ) } @@ -167,8 +171,11 @@ var expireAPIKeyCmd = 
&cobra.Command{ defer cancel() defer conn.Close() - request := &v1.ExpireApiKeyRequest{ - Prefix: prefix, + request := &v1.ExpireApiKeyRequest{} + if id != 0 { + request.Id = id + } else { + request.Prefix = prefix } response, err := client.ExpireApiKey(ctx, request) @@ -191,11 +198,20 @@ var deleteAPIKeyCmd = &cobra.Command{ Run: func(cmd *cobra.Command, args []string) { output, _ := cmd.Flags().GetString("output") - prefix, err := cmd.Flags().GetString("prefix") - if err != nil { + id, _ := cmd.Flags().GetUint64("id") + prefix, _ := cmd.Flags().GetString("prefix") + + switch { + case id == 0 && prefix == "": ErrorOutput( - err, - fmt.Sprintf("Error getting prefix from CLI flag: %s", err), + errMissingParameter, + "Either --id or --prefix must be provided", + output, + ) + case id != 0 && prefix != "": + ErrorOutput( + errMissingParameter, + "Only one of --id or --prefix can be provided", output, ) } @@ -204,8 +220,11 @@ var deleteAPIKeyCmd = &cobra.Command{ defer cancel() defer conn.Close() - request := &v1.DeleteApiKeyRequest{ - Prefix: prefix, + request := &v1.DeleteApiKeyRequest{} + if id != 0 { + request.Id = id + } else { + request.Prefix = prefix } response, err := client.DeleteApiKey(ctx, request) diff --git a/cmd/headscale/cli/debug.go b/cmd/headscale/cli/debug.go index 8ce5f237..75187ddd 100644 --- a/cmd/headscale/cli/debug.go +++ b/cmd/headscale/cli/debug.go @@ -10,10 +10,6 @@ import ( "google.golang.org/grpc/status" ) -const ( - errPreAuthKeyMalformed = Error("key is malformed. expected 64 hex characters with `nodekey` prefix") -) - // Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors type Error string diff --git a/cmd/headscale/cli/nodes.go b/cmd/headscale/cli/nodes.go index e1b040f0..882460dd 100644 --- a/cmd/headscale/cli/nodes.go +++ b/cmd/headscale/cli/nodes.go @@ -4,7 +4,6 @@ import ( "fmt" "log" "net/netip" - "slices" "strconv" "strings" "time" @@ -22,7 +21,6 @@ import ( func init() { rootCmd.AddCommand(nodeCmd) listNodesCmd.Flags().StringP("user", "u", "", "Filter by user") - listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags") listNodesCmd.Flags().StringP("namespace", "n", "", "User") listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace") @@ -73,26 +71,6 @@ func init() { } nodeCmd.AddCommand(deleteNodeCmd) - moveNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)") - - err = moveNodeCmd.MarkFlagRequired("identifier") - if err != nil { - log.Fatal(err.Error()) - } - - moveNodeCmd.Flags().Uint64P("user", "u", 0, "New user") - - moveNodeCmd.Flags().StringP("namespace", "n", "", "User") - moveNodeNamespaceFlag := moveNodeCmd.Flags().Lookup("namespace") - moveNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage - moveNodeNamespaceFlag.Hidden = true - - err = moveNodeCmd.MarkFlagRequired("user") - if err != nil { - log.Fatal(err.Error()) - } - nodeCmd.AddCommand(moveNodeCmd) - tagCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)") tagCmd.MarkFlagRequired("identifier") tagCmd.Flags().StringSliceP("tags", "t", []string{}, "List of tags to add to the node") @@ -168,10 +146,6 @@ var listNodesCmd = &cobra.Command{ if err != nil { ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) } - showTags, err := cmd.Flags().GetBool("tags") - if err != nil { - ErrorOutput(err, fmt.Sprintf("Error getting tags flag: %s", err), output) - } ctx, client, conn, cancel := newHeadscaleCLIWithConfig() defer cancel() @@ -194,7 +168,7 @@ var listNodesCmd = &cobra.Command{ 
SuccessOutput(response.GetNodes(), "", output) } - tableData, err := nodesToPtables(user, showTags, response.GetNodes()) + tableData, err := nodesToPtables(user, response.GetNodes()) if err != nil { ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output) } @@ -240,10 +214,6 @@ var listNodeRoutesCmd = &cobra.Command{ ) } - if output != "" { - SuccessOutput(response.GetNodes(), "", output) - } - nodes := response.GetNodes() if identifier != 0 { for _, node := range response.GetNodes() { @@ -258,6 +228,11 @@ var listNodeRoutesCmd = &cobra.Command{ return (n.GetSubnetRoutes() != nil && len(n.GetSubnetRoutes()) > 0) || (n.GetApprovedRoutes() != nil && len(n.GetApprovedRoutes()) > 0) || (n.GetAvailableRoutes() != nil && len(n.GetAvailableRoutes()) > 0) }) + if output != "" { + SuccessOutput(nodes, "", output) + return + } + tableData, err := nodeRoutesToPtables(nodes) if err != nil { ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output) @@ -455,66 +430,6 @@ var deleteNodeCmd = &cobra.Command{ }, } -var moveNodeCmd = &cobra.Command{ - Use: "move", - Short: "Move node to another user", - Aliases: []string{"mv"}, - Run: func(cmd *cobra.Command, args []string) { - output, _ := cmd.Flags().GetString("output") - - identifier, err := cmd.Flags().GetUint64("identifier") - if err != nil { - ErrorOutput( - err, - fmt.Sprintf("Error converting ID to integer: %s", err), - output, - ) - } - - user, err := cmd.Flags().GetUint64("user") - if err != nil { - ErrorOutput( - err, - fmt.Sprintf("Error getting user: %s", err), - output, - ) - } - - ctx, client, conn, cancel := newHeadscaleCLIWithConfig() - defer cancel() - defer conn.Close() - - getRequest := &v1.GetNodeRequest{ - NodeId: identifier, - } - - _, err = client.GetNode(ctx, getRequest) - if err != nil { - ErrorOutput( - err, - "Error getting node: "+status.Convert(err).Message(), - output, - ) - } - - moveRequest := &v1.MoveNodeRequest{ - NodeId: identifier, - User: user, - } - - moveResponse, err := client.MoveNode(ctx, moveRequest) - if err != nil { - ErrorOutput( - err, - "Error moving node: "+status.Convert(err).Message(), - output, - ) - } - - SuccessOutput(moveResponse.GetNode(), "Node moved to another user", output) - }, -} - var backfillNodeIPsCmd = &cobra.Command{ Use: "backfillips", Short: "Backfill IPs missing from nodes", @@ -561,7 +476,6 @@ be assigned to nodes.`, func nodesToPtables( currentUser string, - showTags bool, nodes []*v1.Node, ) (pterm.TableData, error) { tableHeader := []string{ @@ -571,6 +485,7 @@ func nodesToPtables( "MachineKey", "NodeKey", "User", + "Tags", "IP addresses", "Ephemeral", "Last seen", @@ -578,13 +493,6 @@ func nodesToPtables( "Connected", "Expired", } - if showTags { - tableHeader = append(tableHeader, []string{ - "ForcedTags", - "InvalidTags", - "ValidTags", - }...) 
- } tableData := pterm.TableData{tableHeader} for _, node := range nodes { @@ -639,25 +547,17 @@ func nodesToPtables( expired = pterm.LightRed("yes") } - var forcedTags string - for _, tag := range node.GetForcedTags() { - forcedTags += "," + tag + // TODO(kradalby): as part of CLI rework, we should add the posibility to show "unusable" tags as mentioned in + // https://github.com/juanfont/headscale/issues/2981 + var tagsBuilder strings.Builder + + for _, tag := range node.GetTags() { + tagsBuilder.WriteString("\n" + tag) } - forcedTags = strings.TrimLeft(forcedTags, ",") - var invalidTags string - for _, tag := range node.GetInvalidTags() { - if !slices.Contains(node.GetForcedTags(), tag) { - invalidTags += "," + pterm.LightRed(tag) - } - } - invalidTags = strings.TrimLeft(invalidTags, ",") - var validTags string - for _, tag := range node.GetValidTags() { - if !slices.Contains(node.GetForcedTags(), tag) { - validTags += "," + pterm.LightGreen(tag) - } - } - validTags = strings.TrimLeft(validTags, ",") + + tags := tagsBuilder.String() + + tags = strings.TrimLeft(tags, "\n") var user string if currentUser == "" || (currentUser == node.GetUser().GetName()) { @@ -684,6 +584,7 @@ func nodesToPtables( machineKey.ShortString(), nodeKey.ShortString(), user, + tags, strings.Join([]string{IPV4Address, IPV6Address}, ", "), strconv.FormatBool(ephemeral), lastSeenTime, @@ -691,9 +592,6 @@ func nodesToPtables( online, expired, } - if showTags { - nodeData = append(nodeData, []string{forcedTags, invalidTags, validTags}...) - } tableData = append( tableData, nodeData, @@ -719,9 +617,9 @@ func nodeRoutesToPtables( nodeData := []string{ strconv.FormatUint(node.GetId(), util.Base10), node.GetGivenName(), - strings.Join(node.GetApprovedRoutes(), ", "), - strings.Join(node.GetAvailableRoutes(), ", "), - strings.Join(node.GetSubnetRoutes(), ", "), + strings.Join(node.GetApprovedRoutes(), "\n"), + strings.Join(node.GetAvailableRoutes(), "\n"), + strings.Join(node.GetSubnetRoutes(), "\n"), } tableData = append( tableData, diff --git a/cmd/headscale/cli/policy.go b/cmd/headscale/cli/policy.go index f99d5390..2aaebcfa 100644 --- a/cmd/headscale/cli/policy.go +++ b/cmd/headscale/cli/policy.go @@ -69,8 +69,7 @@ var getPolicy = &cobra.Command{ } d, err := db.NewHeadscaleDatabase( - cfg.Database, - cfg.BaseDomain, + cfg, nil, ) if err != nil { @@ -145,8 +144,7 @@ var setPolicy = &cobra.Command{ } d, err := db.NewHeadscaleDatabase( - cfg.Database, - cfg.BaseDomain, + cfg, nil, ) if err != nil { diff --git a/cmd/headscale/cli/preauthkeys.go b/cmd/headscale/cli/preauthkeys.go index c0c08831..51133200 100644 --- a/cmd/headscale/cli/preauthkeys.go +++ b/cmd/headscale/cli/preauthkeys.go @@ -20,20 +20,10 @@ const ( func init() { rootCmd.AddCommand(preauthkeysCmd) - preauthkeysCmd.PersistentFlags().Uint64P("user", "u", 0, "User identifier (ID)") - - preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "User") - pakNamespaceFlag := preauthkeysCmd.PersistentFlags().Lookup("namespace") - pakNamespaceFlag.Deprecated = deprecateNamespaceMessage - pakNamespaceFlag.Hidden = true - - err := preauthkeysCmd.MarkPersistentFlagRequired("user") - if err != nil { - log.Fatal().Err(err).Msg("") - } preauthkeysCmd.AddCommand(listPreAuthKeys) preauthkeysCmd.AddCommand(createPreAuthKeyCmd) preauthkeysCmd.AddCommand(expirePreAuthKeyCmd) + preauthkeysCmd.AddCommand(deletePreAuthKeyCmd) createPreAuthKeyCmd.PersistentFlags(). Bool("reusable", false, "Make the preauthkey reusable") createPreAuthKeyCmd.PersistentFlags(). 
@@ -42,6 +32,9 @@ func init() { StringP("expiration", "e", DefaultPreAuthKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)") createPreAuthKeyCmd.Flags(). StringSlice("tags", []string{}, "Tags to automatically assign to node") + createPreAuthKeyCmd.PersistentFlags().Uint64P("user", "u", 0, "User identifier (ID)") + expirePreAuthKeyCmd.PersistentFlags().Uint64P("id", "i", 0, "Authkey ID") + deletePreAuthKeyCmd.PersistentFlags().Uint64P("id", "i", 0, "Authkey ID") } var preauthkeysCmd = &cobra.Command{ @@ -52,25 +45,16 @@ var preauthkeysCmd = &cobra.Command{ var listPreAuthKeys = &cobra.Command{ Use: "list", - Short: "List the preauthkeys for this user", + Short: "List all preauthkeys", Aliases: []string{"ls", "show"}, Run: func(cmd *cobra.Command, args []string) { output, _ := cmd.Flags().GetString("output") - user, err := cmd.Flags().GetUint64("user") - if err != nil { - ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) - } - ctx, client, conn, cancel := newHeadscaleCLIWithConfig() defer cancel() defer conn.Close() - request := &v1.ListPreAuthKeysRequest{ - User: user, - } - - response, err := client.ListPreAuthKeys(ctx, request) + response, err := client.ListPreAuthKeys(ctx, &v1.ListPreAuthKeysRequest{}) if err != nil { ErrorOutput( err, @@ -88,13 +72,13 @@ var listPreAuthKeys = &cobra.Command{ tableData := pterm.TableData{ { "ID", - "Key", + "Key/Prefix", "Reusable", "Ephemeral", "Used", "Expiration", "Created", - "Tags", + "Owner", }, } for _, key := range response.GetPreAuthKeys() { @@ -103,14 +87,15 @@ var listPreAuthKeys = &cobra.Command{ expiration = ColourTime(key.GetExpiration().AsTime()) } - aclTags := "" - - for _, tag := range key.GetAclTags() { - aclTags += "," + tag + var owner string + if len(key.GetAclTags()) > 0 { + owner = strings.Join(key.GetAclTags(), "\n") + } else if key.GetUser() != nil { + owner = key.GetUser().GetName() + } else { + owner = "-" } - aclTags = strings.TrimLeft(aclTags, ",") - tableData = append(tableData, []string{ strconv.FormatUint(key.GetId(), 10), key.GetKey(), @@ -119,7 +104,7 @@ var listPreAuthKeys = &cobra.Command{ strconv.FormatBool(key.GetUsed()), expiration, key.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"), - aclTags, + owner, }) } @@ -136,16 +121,12 @@ var listPreAuthKeys = &cobra.Command{ var createPreAuthKeyCmd = &cobra.Command{ Use: "create", - Short: "Creates a new preauthkey in the specified user", + Short: "Creates a new preauthkey", Aliases: []string{"c", "new"}, Run: func(cmd *cobra.Command, args []string) { output, _ := cmd.Flags().GetString("output") - user, err := cmd.Flags().GetUint64("user") - if err != nil { - ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) - } - + user, _ := cmd.Flags().GetUint64("user") reusable, _ := cmd.Flags().GetBool("reusable") ephemeral, _ := cmd.Flags().GetBool("ephemeral") tags, _ := cmd.Flags().GetStringSlice("tags") @@ -194,21 +175,21 @@ var createPreAuthKeyCmd = &cobra.Command{ } var expirePreAuthKeyCmd = &cobra.Command{ - Use: "expire KEY", + Use: "expire", Short: "Expire a preauthkey", Aliases: []string{"revoke", "exp", "e"}, - Args: func(cmd *cobra.Command, args []string) error { - if len(args) < 1 { - return errMissingParameter - } - - return nil - }, Run: func(cmd *cobra.Command, args []string) { output, _ := cmd.Flags().GetString("output") - user, err := cmd.Flags().GetUint64("user") - if err != nil { - ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) + id, _ := cmd.Flags().GetUint64("id") + + if id == 0 { + 
ErrorOutput( + errMissingParameter, + "Error: missing --id parameter", + output, + ) + + return } ctx, client, conn, cancel := newHeadscaleCLIWithConfig() @@ -216,8 +197,7 @@ var expirePreAuthKeyCmd = &cobra.Command{ defer conn.Close() request := &v1.ExpirePreAuthKeyRequest{ - User: user, - Key: args[0], + Id: id, } response, err := client.ExpirePreAuthKey(ctx, request) @@ -232,3 +212,42 @@ var expirePreAuthKeyCmd = &cobra.Command{ SuccessOutput(response, "Key expired", output) }, } + +var deletePreAuthKeyCmd = &cobra.Command{ + Use: "delete", + Short: "Delete a preauthkey", + Aliases: []string{"del", "rm", "d"}, + Run: func(cmd *cobra.Command, args []string) { + output, _ := cmd.Flags().GetString("output") + id, _ := cmd.Flags().GetUint64("id") + + if id == 0 { + ErrorOutput( + errMissingParameter, + "Error: missing --id parameter", + output, + ) + + return + } + + ctx, client, conn, cancel := newHeadscaleCLIWithConfig() + defer cancel() + defer conn.Close() + + request := &v1.DeletePreAuthKeyRequest{ + Id: id, + } + + response, err := client.DeletePreAuthKey(ctx, request) + if err != nil { + ErrorOutput( + err, + fmt.Sprintf("Cannot delete Pre Auth Key: %s\n", err), + output, + ) + } + + SuccessOutput(response, "Key deleted", output) + }, +} diff --git a/cmd/headscale/cli/utils.go b/cmd/headscale/cli/utils.go index f6b5f71a..0d0025d3 100644 --- a/cmd/headscale/cli/utils.go +++ b/cmd/headscale/cli/utils.go @@ -130,7 +130,7 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g return ctx, client, conn, cancel } -func output(result interface{}, override string, outputFormat string) string { +func output(result any, override string, outputFormat string) string { var jsonBytes []byte var err error switch outputFormat { @@ -158,7 +158,7 @@ func output(result interface{}, override string, outputFormat string) string { } // SuccessOutput prints the result to stdout and exits with status code 0. 
-func SuccessOutput(result interface{}, override string, outputFormat string) { +func SuccessOutput(result any, override string, outputFormat string) { fmt.Println(output(result, override, outputFormat)) os.Exit(0) } diff --git a/cmd/headscale/headscale_test.go b/cmd/headscale/headscale_test.go index 00c4a276..2a9fbce6 100644 --- a/cmd/headscale/headscale_test.go +++ b/cmd/headscale/headscale_test.go @@ -9,34 +9,17 @@ import ( "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/util" "github.com/spf13/viper" - "gopkg.in/check.v1" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" ) -func Test(t *testing.T) { - check.TestingT(t) -} - -var _ = check.Suite(&Suite{}) - -type Suite struct{} - -func (s *Suite) SetUpSuite(c *check.C) { -} - -func (s *Suite) TearDownSuite(c *check.C) { -} - -func (*Suite) TestConfigFileLoading(c *check.C) { +func TestConfigFileLoading(t *testing.T) { tmpDir, err := os.MkdirTemp("", "headscale") - if err != nil { - c.Fatal(err) - } + require.NoError(t, err) defer os.RemoveAll(tmpDir) path, err := os.Getwd() - if err != nil { - c.Fatal(err) - } + require.NoError(t, err) cfgFile := filepath.Join(tmpDir, "config.yaml") @@ -45,70 +28,54 @@ func (*Suite) TestConfigFileLoading(c *check.C) { filepath.Clean(path+"/../../config-example.yaml"), cfgFile, ) - if err != nil { - c.Fatal(err) - } + require.NoError(t, err) // Load example config, it should load without validation errors err = types.LoadConfig(cfgFile, true) - c.Assert(err, check.IsNil) + require.NoError(t, err) // Test that config file was interpreted correctly - c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080") - c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080") - c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090") - c.Assert(viper.GetString("database.type"), check.Equals, "sqlite") - c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite") - c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "") - c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http") - c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01") - c.Assert( - util.GetFileMode("unix_socket_permission"), - check.Equals, - fs.FileMode(0o770), - ) - c.Assert(viper.GetBool("logtail.enabled"), check.Equals, false) + assert.Equal(t, "http://127.0.0.1:8080", viper.GetString("server_url")) + assert.Equal(t, "127.0.0.1:8080", viper.GetString("listen_addr")) + assert.Equal(t, "127.0.0.1:9090", viper.GetString("metrics_listen_addr")) + assert.Equal(t, "sqlite", viper.GetString("database.type")) + assert.Equal(t, "/var/lib/headscale/db.sqlite", viper.GetString("database.sqlite.path")) + assert.Empty(t, viper.GetString("tls_letsencrypt_hostname")) + assert.Equal(t, ":http", viper.GetString("tls_letsencrypt_listen")) + assert.Equal(t, "HTTP-01", viper.GetString("tls_letsencrypt_challenge_type")) + assert.Equal(t, fs.FileMode(0o770), util.GetFileMode("unix_socket_permission")) + assert.False(t, viper.GetBool("logtail.enabled")) } -func (*Suite) TestConfigLoading(c *check.C) { +func TestConfigLoading(t *testing.T) { tmpDir, err := os.MkdirTemp("", "headscale") - if err != nil { - c.Fatal(err) - } + require.NoError(t, err) defer os.RemoveAll(tmpDir) path, err := os.Getwd() - if err != nil { - c.Fatal(err) - } + require.NoError(t, err) // Symlink the example config file err = os.Symlink( 
filepath.Clean(path+"/../../config-example.yaml"), filepath.Join(tmpDir, "config.yaml"), ) - if err != nil { - c.Fatal(err) - } + require.NoError(t, err) // Load example config, it should load without validation errors err = types.LoadConfig(tmpDir, false) - c.Assert(err, check.IsNil) + require.NoError(t, err) // Test that config file was interpreted correctly - c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080") - c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080") - c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090") - c.Assert(viper.GetString("database.type"), check.Equals, "sqlite") - c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite") - c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "") - c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http") - c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01") - c.Assert( - util.GetFileMode("unix_socket_permission"), - check.Equals, - fs.FileMode(0o770), - ) - c.Assert(viper.GetBool("logtail.enabled"), check.Equals, false) - c.Assert(viper.GetBool("randomize_client_port"), check.Equals, false) + assert.Equal(t, "http://127.0.0.1:8080", viper.GetString("server_url")) + assert.Equal(t, "127.0.0.1:8080", viper.GetString("listen_addr")) + assert.Equal(t, "127.0.0.1:9090", viper.GetString("metrics_listen_addr")) + assert.Equal(t, "sqlite", viper.GetString("database.type")) + assert.Equal(t, "/var/lib/headscale/db.sqlite", viper.GetString("database.sqlite.path")) + assert.Empty(t, viper.GetString("tls_letsencrypt_hostname")) + assert.Equal(t, ":http", viper.GetString("tls_letsencrypt_listen")) + assert.Equal(t, "HTTP-01", viper.GetString("tls_letsencrypt_challenge_type")) + assert.Equal(t, fs.FileMode(0o770), util.GetFileMode("unix_socket_permission")) + assert.False(t, viper.GetBool("logtail.enabled")) + assert.False(t, viper.GetBool("randomize_client_port")) } diff --git a/cmd/hi/README.md b/cmd/hi/README.md new file mode 100644 index 00000000..17324219 --- /dev/null +++ b/cmd/hi/README.md @@ -0,0 +1,6 @@ +# hi + +hi (headscale integration runner) is an entirely "vibe coded" wrapper around our +[integration test suite](../integration). It essentially runs the docker +commands for you with some added benefits of extracting resources like logs and +databases. diff --git a/cmd/hi/cleanup.go b/cmd/hi/cleanup.go index fd78c66f..7c5b5214 100644 --- a/cmd/hi/cleanup.go +++ b/cmd/hi/cleanup.go @@ -3,9 +3,13 @@ package main import ( "context" "fmt" + "log" + "os" + "path/filepath" "strings" "time" + "github.com/cenkalti/backoff/v5" "github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/image" @@ -14,9 +18,11 @@ import ( ) // cleanupBeforeTest performs cleanup operations before running tests. +// Only removes stale (stopped/exited) test containers to avoid interfering with concurrent test runs. func cleanupBeforeTest(ctx context.Context) error { - if err := killTestContainers(ctx); err != nil { - return fmt.Errorf("failed to kill test containers: %w", err) + err := cleanupStaleTestContainers(ctx) + if err != nil { + return fmt.Errorf("failed to clean stale test containers: %w", err) } if err := pruneDockerNetworks(ctx); err != nil { @@ -26,11 +32,25 @@ func cleanupBeforeTest(ctx context.Context) error { return nil } -// cleanupAfterTest removes the test container after completion. 
-func cleanupAfterTest(ctx context.Context, cli *client.Client, containerID string) error { - return cli.ContainerRemove(ctx, containerID, container.RemoveOptions{ +// cleanupAfterTest removes the test container and all associated integration test containers for the run. +func cleanupAfterTest(ctx context.Context, cli *client.Client, containerID, runID string) error { + // Remove the main test container + err := cli.ContainerRemove(ctx, containerID, container.RemoveOptions{ Force: true, }) + if err != nil { + return fmt.Errorf("failed to remove test container: %w", err) + } + + // Clean up integration test containers for this run only + if runID != "" { + err := killTestContainersByRunID(ctx, runID) + if err != nil { + return fmt.Errorf("failed to clean up containers for run %s: %w", runID, err) + } + } + + return nil } // killTestContainers terminates and removes all test containers. @@ -83,30 +103,122 @@ func killTestContainers(ctx context.Context) error { return nil } +// killTestContainersByRunID terminates and removes all test containers for a specific run ID. +// This function filters containers by the hi.run-id label to only affect containers +// belonging to the specified test run, leaving other concurrent test runs untouched. +func killTestContainersByRunID(ctx context.Context, runID string) error { + cli, err := createDockerClient() + if err != nil { + return fmt.Errorf("failed to create Docker client: %w", err) + } + defer cli.Close() + + // Filter containers by hi.run-id label + containers, err := cli.ContainerList(ctx, container.ListOptions{ + All: true, + Filters: filters.NewArgs( + filters.Arg("label", "hi.run-id="+runID), + ), + }) + if err != nil { + return fmt.Errorf("failed to list containers for run %s: %w", runID, err) + } + + removed := 0 + + for _, cont := range containers { + // Kill the container if it's running + if cont.State == "running" { + _ = cli.ContainerKill(ctx, cont.ID, "KILL") + } + + // Remove the container with retry logic + if removeContainerWithRetry(ctx, cli, cont.ID) { + removed++ + } + } + + if removed > 0 { + fmt.Printf("Removed %d containers for run ID %s\n", removed, runID) + } + + return nil +} + +// cleanupStaleTestContainers removes stopped/exited test containers without affecting running tests. +// This is useful for cleaning up leftover containers from previous crashed or interrupted test runs +// without interfering with currently running concurrent tests. 
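+// Only containers whose state is "exited" or "dead" and whose name matches the
+// integration test naming patterns (headscale-test-suite, hs-, ts-, derp-) are
+// removed; running containers from concurrent test runs are left untouched.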
+func cleanupStaleTestContainers(ctx context.Context) error { + cli, err := createDockerClient() + if err != nil { + return fmt.Errorf("failed to create Docker client: %w", err) + } + defer cli.Close() + + // Only get stopped/exited containers + containers, err := cli.ContainerList(ctx, container.ListOptions{ + All: true, + Filters: filters.NewArgs( + filters.Arg("status", "exited"), + filters.Arg("status", "dead"), + ), + }) + if err != nil { + return fmt.Errorf("failed to list stopped containers: %w", err) + } + + removed := 0 + + for _, cont := range containers { + // Only remove containers that look like test containers + shouldRemove := false + + for _, name := range cont.Names { + if strings.Contains(name, "headscale-test-suite") || + strings.Contains(name, "hs-") || + strings.Contains(name, "ts-") || + strings.Contains(name, "derp-") { + shouldRemove = true + break + } + } + + if shouldRemove { + if removeContainerWithRetry(ctx, cli, cont.ID) { + removed++ + } + } + } + + if removed > 0 { + fmt.Printf("Removed %d stale test containers\n", removed) + } + + return nil +} + +const ( + containerRemoveInitialInterval = 100 * time.Millisecond + containerRemoveMaxElapsedTime = 2 * time.Second +) + // removeContainerWithRetry attempts to remove a container with exponential backoff retry logic. func removeContainerWithRetry(ctx context.Context, cli *client.Client, containerID string) bool { - maxRetries := 3 - baseDelay := 100 * time.Millisecond + expBackoff := backoff.NewExponentialBackOff() + expBackoff.InitialInterval = containerRemoveInitialInterval - for attempt := range maxRetries { + _, err := backoff.Retry(ctx, func() (struct{}, error) { err := cli.ContainerRemove(ctx, containerID, container.RemoveOptions{ Force: true, }) - if err == nil { - return true + if err != nil { + return struct{}{}, err } - // If this is the last attempt, don't wait - if attempt == maxRetries-1 { - break - } + return struct{}{}, nil + }, backoff.WithBackOff(expBackoff), backoff.WithMaxElapsedTime(containerRemoveMaxElapsedTime)) - // Wait with exponential backoff - delay := baseDelay * time.Duration(1< 0 || removedDirs > 0 { + const bytesPerMB = 1024 * 1024 + log.Printf("Cleaned up %d files and %d directories (freed ~%.2f MB)", + removedFiles, removedDirs, float64(totalSize)/bytesPerMB) + } + + return nil +} + +// getDirSize calculates the total size of a directory. 
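+// It walks the tree and sums the sizes of all non-directory entries.
+// Illustrative use (the path shown is hypothetical):
+//
+//	if size, err := getDirSize("control_logs/<run-id>"); err == nil {
+//		log.Printf("artifacts use %d bytes", size)
+//	}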
+func getDirSize(path string) (int64, error) { + var size int64 + + err := filepath.Walk(path, func(_ string, info os.FileInfo, err error) error { + if err != nil { + return err + } + + if !info.IsDir() { + size += info.Size() + } + + return nil + }) + + return size, err +} diff --git a/cmd/hi/docker.go b/cmd/hi/docker.go index 1143bf77..a6b94b25 100644 --- a/cmd/hi/docker.go +++ b/cmd/hi/docker.go @@ -89,6 +89,9 @@ func runTestContainer(ctx context.Context, config *RunConfig) error { } log.Printf("Starting test: %s", config.TestPattern) + log.Printf("Run ID: %s", runID) + log.Printf("Monitor with: docker logs -f %s", containerName) + log.Printf("Logs directory: %s", logsDir) // Start stats collection for container resource monitoring (if enabled) var statsCollector *StatsCollector @@ -149,11 +152,27 @@ func runTestContainer(ctx context.Context, config *RunConfig) error { shouldCleanup := config.CleanAfter && (!config.KeepOnFailure || exitCode == 0) if shouldCleanup { if config.Verbose { - log.Printf("Running post-test cleanup...") + log.Printf("Running post-test cleanup for run %s...", runID) } - if cleanErr := cleanupAfterTest(ctx, cli, resp.ID); cleanErr != nil && config.Verbose { + + cleanErr := cleanupAfterTest(ctx, cli, resp.ID, runID) + + if cleanErr != nil && config.Verbose { log.Printf("Warning: post-test cleanup failed: %v", cleanErr) } + + // Clean up artifacts from successful tests to save disk space in CI + if exitCode == 0 { + if config.Verbose { + log.Printf("Test succeeded, cleaning up artifacts to save disk space...") + } + + cleanErr := cleanupSuccessfulTestArtifacts(logsDir, config.Verbose) + + if cleanErr != nil && config.Verbose { + log.Printf("Warning: artifact cleanup failed: %v", cleanErr) + } + } } if err != nil { @@ -202,6 +221,28 @@ func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunC fmt.Sprintf("HEADSCALE_INTEGRATION_POSTGRES=%d", boolToInt(config.UsePostgres)), "HEADSCALE_INTEGRATION_RUN_ID=" + runID, } + + // Pass through CI environment variable for CI detection + if ci := os.Getenv("CI"); ci != "" { + env = append(env, "CI="+ci) + } + + // Pass through all HEADSCALE_INTEGRATION_* environment variables + for _, e := range os.Environ() { + if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_") { + // Skip the ones we already set explicitly + if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_POSTGRES=") || + strings.HasPrefix(e, "HEADSCALE_INTEGRATION_RUN_ID=") { + continue + } + + env = append(env, e) + } + } + + // Set GOCACHE to a known location (used by both bind mount and volume cases) + env = append(env, "GOCACHE=/cache/go-build") + containerConfig := &container.Config{ Image: "golang:" + config.GoVersion, Cmd: goTestCmd, @@ -221,20 +262,43 @@ func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunC log.Printf("Using Docker socket: %s", dockerSocketPath) } + binds := []string{ + fmt.Sprintf("%s:%s", projectRoot, projectRoot), + dockerSocketPath + ":/var/run/docker.sock", + logsDir + ":/tmp/control", + } + + // Use bind mounts for Go cache if provided via environment variables, + // otherwise fall back to Docker volumes for local development + var mounts []mount.Mount + + goCache := os.Getenv("HEADSCALE_INTEGRATION_GO_CACHE") + goBuildCache := os.Getenv("HEADSCALE_INTEGRATION_GO_BUILD_CACHE") + + if goCache != "" { + binds = append(binds, goCache+":/go") + } else { + mounts = append(mounts, mount.Mount{ + Type: mount.TypeVolume, + Source: "hs-integration-go-cache", + Target: "/go", + }) + } + + if goBuildCache != 
"" { + binds = append(binds, goBuildCache+":/cache/go-build") + } else { + mounts = append(mounts, mount.Mount{ + Type: mount.TypeVolume, + Source: "hs-integration-go-build-cache", + Target: "/cache/go-build", + }) + } + hostConfig := &container.HostConfig{ AutoRemove: false, // We'll remove manually for better control - Binds: []string{ - fmt.Sprintf("%s:%s", projectRoot, projectRoot), - dockerSocketPath + ":/var/run/docker.sock", - logsDir + ":/tmp/control", - }, - Mounts: []mount.Mount{ - { - Type: mount.TypeVolume, - Source: "hs-integration-go-cache", - Target: "/go", - }, - }, + Binds: binds, + Mounts: mounts, } return cli.ContainerCreate(ctx, containerConfig, hostConfig, nil, nil, containerName) @@ -357,10 +421,10 @@ func boolToInt(b bool) int { // DockerContext represents Docker context information. type DockerContext struct { - Name string `json:"Name"` - Metadata map[string]interface{} `json:"Metadata"` - Endpoints map[string]interface{} `json:"Endpoints"` - Current bool `json:"Current"` + Name string `json:"Name"` + Metadata map[string]any `json:"Metadata"` + Endpoints map[string]any `json:"Endpoints"` + Current bool `json:"Current"` } // createDockerClient creates a Docker client with context detection. @@ -375,7 +439,7 @@ func createDockerClient() (*client.Client, error) { if contextInfo != nil { if endpoints, ok := contextInfo.Endpoints["docker"]; ok { - if endpointMap, ok := endpoints.(map[string]interface{}); ok { + if endpointMap, ok := endpoints.(map[string]any); ok { if host, ok := endpointMap["Host"].(string); ok { if runConfig.Verbose { log.Printf("Using Docker host from context '%s': %s", contextInfo.Name, host) @@ -701,63 +765,3 @@ func extractContainerFiles(ctx context.Context, cli *client.Client, containerID, // This function is kept for potential future use or other file types return nil } - -// logExtractionError logs extraction errors with appropriate level based on error type. -func logExtractionError(artifactType, containerName string, err error, verbose bool) { - if errors.Is(err, ErrFileNotFoundInTar) { - // File not found is expected and only logged in verbose mode - if verbose { - log.Printf("No %s found in container %s", artifactType, containerName) - } - } else { - // Other errors are actual failures and should be logged as warnings - log.Printf("Warning: failed to extract %s from %s: %v", artifactType, containerName, err) - } -} - -// extractSingleFile copies a single file from a container. -func extractSingleFile(ctx context.Context, cli *client.Client, containerID, sourcePath, fileName, logsDir string, verbose bool) error { - tarReader, _, err := cli.CopyFromContainer(ctx, containerID, sourcePath) - if err != nil { - return fmt.Errorf("failed to copy %s from container: %w", sourcePath, err) - } - defer tarReader.Close() - - // Extract the single file from the tar - filePath := filepath.Join(logsDir, fileName) - if err := extractFileFromTar(tarReader, filepath.Base(sourcePath), filePath); err != nil { - return fmt.Errorf("failed to extract file from tar: %w", err) - } - - if verbose { - log.Printf("Extracted %s from %s", fileName, containerID[:12]) - } - - return nil -} - -// extractDirectory copies a directory from a container and extracts its contents. 
-func extractDirectory(ctx context.Context, cli *client.Client, containerID, sourcePath, dirName, logsDir string, verbose bool) error { - tarReader, _, err := cli.CopyFromContainer(ctx, containerID, sourcePath) - if err != nil { - return fmt.Errorf("failed to copy %s from container: %w", sourcePath, err) - } - defer tarReader.Close() - - // Create target directory - targetDir := filepath.Join(logsDir, dirName) - if err := os.MkdirAll(targetDir, 0o755); err != nil { - return fmt.Errorf("failed to create directory %s: %w", targetDir, err) - } - - // Extract the directory from the tar - if err := extractDirectoryFromTar(tarReader, targetDir); err != nil { - return fmt.Errorf("failed to extract directory from tar: %w", err) - } - - if verbose { - log.Printf("Extracted %s/ from %s", dirName, containerID[:12]) - } - - return nil -} diff --git a/cmd/hi/run.go b/cmd/hi/run.go index ea43490c..1694399d 100644 --- a/cmd/hi/run.go +++ b/cmd/hi/run.go @@ -19,7 +19,7 @@ type RunConfig struct { FailFast bool `flag:"failfast,default=true,Stop on first test failure"` UsePostgres bool `flag:"postgres,default=false,Use PostgreSQL instead of SQLite"` GoVersion string `flag:"go-version,Go version to use (auto-detected from go.mod)"` - CleanBefore bool `flag:"clean-before,default=true,Clean resources before test"` + CleanBefore bool `flag:"clean-before,default=true,Clean stale resources before test"` CleanAfter bool `flag:"clean-after,default=true,Clean resources after test"` KeepOnFailure bool `flag:"keep-on-failure,default=false,Keep containers on test failure"` LogsDir string `flag:"logs-dir,default=control_logs,Control logs directory"` diff --git a/cmd/hi/tar_utils.go b/cmd/hi/tar_utils.go deleted file mode 100644 index cfeeef5e..00000000 --- a/cmd/hi/tar_utils.go +++ /dev/null @@ -1,105 +0,0 @@ -package main - -import ( - "archive/tar" - "errors" - "fmt" - "io" - "os" - "path/filepath" - "strings" -) - -// ErrFileNotFoundInTar indicates a file was not found in the tar archive. -var ErrFileNotFoundInTar = errors.New("file not found in tar") - -// extractFileFromTar extracts a single file from a tar reader. -func extractFileFromTar(tarReader io.Reader, fileName, outputPath string) error { - tr := tar.NewReader(tarReader) - - for { - header, err := tr.Next() - if err == io.EOF { - break - } - if err != nil { - return fmt.Errorf("failed to read tar header: %w", err) - } - - // Check if this is the file we're looking for - if filepath.Base(header.Name) == fileName { - if header.Typeflag == tar.TypeReg { - // Create the output file - outFile, err := os.Create(outputPath) - if err != nil { - return fmt.Errorf("failed to create output file: %w", err) - } - defer outFile.Close() - - // Copy file contents - if _, err := io.Copy(outFile, tr); err != nil { - return fmt.Errorf("failed to copy file contents: %w", err) - } - - return nil - } - } - } - - return fmt.Errorf("%w: %s", ErrFileNotFoundInTar, fileName) -} - -// extractDirectoryFromTar extracts all files from a tar reader to a target directory. 
-func extractDirectoryFromTar(tarReader io.Reader, targetDir string) error { - tr := tar.NewReader(tarReader) - - for { - header, err := tr.Next() - if err == io.EOF { - break - } - if err != nil { - return fmt.Errorf("failed to read tar header: %w", err) - } - - // Clean the path to prevent directory traversal - cleanName := filepath.Clean(header.Name) - if strings.Contains(cleanName, "..") { - continue // Skip potentially dangerous paths - } - - targetPath := filepath.Join(targetDir, cleanName) - - switch header.Typeflag { - case tar.TypeDir: - // Create directory - if err := os.MkdirAll(targetPath, os.FileMode(header.Mode)); err != nil { - return fmt.Errorf("failed to create directory %s: %w", targetPath, err) - } - case tar.TypeReg: - // Ensure parent directories exist - if err := os.MkdirAll(filepath.Dir(targetPath), 0o755); err != nil { - return fmt.Errorf("failed to create parent directories for %s: %w", targetPath, err) - } - - // Create file - outFile, err := os.Create(targetPath) - if err != nil { - return fmt.Errorf("failed to create file %s: %w", targetPath, err) - } - - if _, err := io.Copy(outFile, tr); err != nil { - outFile.Close() - return fmt.Errorf("failed to copy file contents: %w", err) - } - outFile.Close() - - // Set file permissions - if err := os.Chmod(targetPath, os.FileMode(header.Mode)); err != nil { - return fmt.Errorf("failed to set file permissions: %w", err) - } - } - } - - return nil -} diff --git a/config-example.yaml b/config-example.yaml index ec14dc03..dbb08202 100644 --- a/config-example.yaml +++ b/config-example.yaml @@ -20,6 +20,7 @@ listen_addr: 127.0.0.1:8080 # Address to listen to /metrics and /debug, you may want # to keep this endpoint private to your internal network +# Use an empty value to disable the metrics listener. metrics_listen_addr: 127.0.0.1:9090 # Address to listen for gRPC. @@ -361,6 +362,12 @@ unix_socket_permission: "0770" # # required "openid" scope. # scope: ["openid", "profile", "email"] # +# # Only verified email addresses are synchronized to the user profile by +# # default. Unverified emails may be allowed in case an identity provider +# # does not send the "email_verified: true" claim or email verification is +# # not required. +# email_verified_required: true +# # # Provide custom key/value pairs which get sent to the identity provider's # # authorization endpoint. # extra_params: @@ -407,3 +414,23 @@ logtail: # default static port 41641. This option is intended as a workaround for some buggy # firewall devices. See https://tailscale.com/kb/1181/firewalls/ for more information. randomize_client_port: false + +# Taildrop configuration +# Taildrop is the file sharing feature of Tailscale, allowing nodes to send files to each other. +# https://tailscale.com/kb/1106/taildrop/ +taildrop: + # Enable or disable Taildrop for all nodes. + # When enabled, nodes can send files to other nodes owned by the same user. + # Tagged devices and cross-user transfers are not permitted by Tailscale clients. + enabled: true +# Advanced performance tuning parameters. +# The defaults are carefully chosen and should rarely need adjustment. +# Only modify these if you have identified a specific performance issue. +# +# tuning: +# # NodeStore write batching configuration. +# # The NodeStore batches write operations before rebuilding peer relationships, +# # which is computationally expensive. Batching reduces rebuild frequency. 
+# # +# # node_store_batch_size: 100 +# # node_store_batch_timeout: 500ms diff --git a/derp-example.yaml b/derp-example.yaml index 532475ef..ea93427c 100644 --- a/derp-example.yaml +++ b/derp-example.yaml @@ -1,6 +1,6 @@ # If you plan to somehow use headscale, please deploy your own DERP infra: https://tailscale.com/kb/1118/custom-derp-servers/ regions: - 1: null # Disable DERP region with ID 1 + 1: null # Disable DERP region with ID 1 900: regionid: 900 regioncode: custom diff --git a/docs/about/faq.md b/docs/about/faq.md index ecedf198..f1361590 100644 --- a/docs/about/faq.md +++ b/docs/about/faq.md @@ -157,7 +157,7 @@ indicates which part of the policy is invalid. Follow these steps to fix your po !!! warning "Full server configuration required" The above commands to get/set the policy require a complete server configuration file including database settings. A - minimal config to [control Headscale via remote CLI](../ref/remote-cli.md) is not sufficient. You may use `headscale + minimal config to [control Headscale via remote CLI](../ref/api.md#grpc) is not sufficient. You may use `headscale -c /path/to/config.yaml` to specify the path to an alternative configuration file. ## How can I avoid to send logs to Tailscale Inc? diff --git a/docs/about/features.md b/docs/about/features.md index 81862b70..83197d64 100644 --- a/docs/about/features.md +++ b/docs/about/features.md @@ -14,6 +14,7 @@ provides on overview of Headscale's feature and compatibility with the Tailscale - [x] [search domains](https://tailscale.com/kb/1054/dns#search-domains) - [x] [Extra DNS records (Headscale only)](../ref/dns.md#setting-extra-dns-records) - [x] [Taildrop (File Sharing)](https://tailscale.com/kb/1106/taildrop) +- [x] [Tags](https://tailscale.com/kb/1068/tags) - [x] [Routes](../ref/routes.md) - [x] [Subnet routers](../ref/routes.md#subnet-router) - [x] [Exit nodes](../ref/routes.md#exit-node) diff --git a/docs/assets/favicon.png b/docs/assets/favicon.png new file mode 100644 index 00000000..4989810f Binary files /dev/null and b/docs/assets/favicon.png differ diff --git a/docs/images/headscale-acl-network.png b/docs/assets/images/headscale-acl-network.png similarity index 100% rename from docs/images/headscale-acl-network.png rename to docs/assets/images/headscale-acl-network.png diff --git a/docs/logo/headscale3-dots.pdf b/docs/assets/logo/headscale3-dots.pdf similarity index 100% rename from docs/logo/headscale3-dots.pdf rename to docs/assets/logo/headscale3-dots.pdf diff --git a/docs/logo/headscale3-dots.png b/docs/assets/logo/headscale3-dots.png similarity index 100% rename from docs/logo/headscale3-dots.png rename to docs/assets/logo/headscale3-dots.png diff --git a/docs/logo/headscale3-dots.svg b/docs/assets/logo/headscale3-dots.svg similarity index 97% rename from docs/logo/headscale3-dots.svg rename to docs/assets/logo/headscale3-dots.svg index 6a20973c..f7120395 100644 --- a/docs/logo/headscale3-dots.svg +++ b/docs/assets/logo/headscale3-dots.svg @@ -1 +1 @@ - \ No newline at end of file + diff --git a/docs/logo/headscale3_header_stacked_left.pdf b/docs/assets/logo/headscale3_header_stacked_left.pdf similarity index 100% rename from docs/logo/headscale3_header_stacked_left.pdf rename to docs/assets/logo/headscale3_header_stacked_left.pdf diff --git a/docs/logo/headscale3_header_stacked_left.png b/docs/assets/logo/headscale3_header_stacked_left.png similarity index 100% rename from docs/logo/headscale3_header_stacked_left.png rename to docs/assets/logo/headscale3_header_stacked_left.png diff --git 
a/docs/logo/headscale3_header_stacked_left.svg b/docs/assets/logo/headscale3_header_stacked_left.svg similarity index 99% rename from docs/logo/headscale3_header_stacked_left.svg rename to docs/assets/logo/headscale3_header_stacked_left.svg index d00af00e..0c3702c6 100644 --- a/docs/logo/headscale3_header_stacked_left.svg +++ b/docs/assets/logo/headscale3_header_stacked_left.svg @@ -1 +1 @@ - \ No newline at end of file + diff --git a/docs/ref/acls.md b/docs/ref/acls.md index 53ab24ac..fff66715 100644 --- a/docs/ref/acls.md +++ b/docs/ref/acls.md @@ -65,7 +65,7 @@ servers. - billing.internal - router.internal -![ACL implementation example](../images/headscale-acl-network.png) +![ACL implementation example](../assets/images/headscale-acl-network.png) When [registering the servers](../usage/getting-started.md#register-a-node) we will need to add the flag `--advertise-tags=tag:,tag:`, and the user @@ -222,7 +222,7 @@ Allows access to the internet through [exit nodes](routes.md#exit-node). Can onl ### `autogroup:member` -Includes all users who are direct members of the tailnet. Does not include users from shared devices. +Includes all untagged devices. ```json { diff --git a/docs/ref/api.md b/docs/ref/api.md new file mode 100644 index 00000000..a99e679c --- /dev/null +++ b/docs/ref/api.md @@ -0,0 +1,129 @@ +# API +Headscale provides a [HTTP REST API](#rest-api) and a [gRPC interface](#grpc) which may be used to integrate a [web +interface](integration/web-ui.md), [remote control Headscale](#setup-remote-control) or provide a base for custom +integration and tooling. + +Both interfaces require a valid API key before use. To create an API key, log into your Headscale server and generate +one with the default expiration of 90 days: + +```shell +headscale apikeys create +``` + +Copy the output of the command and save it for later. Please note that you can not retrieve an API key again. If the API +key is lost, expire the old one, and create a new one. + +To list the API keys currently associated with the server: + +```shell +headscale apikeys list +``` + +and to expire an API key: + +```shell +headscale apikeys expire --prefix +``` + +## REST API + +- API endpoint: `/api/v1`, e.g. `https://headscale.example.com/api/v1` +- Documentation: `/swagger`, e.g. `https://headscale.example.com/swagger` +- Headscale Version: `/version`, e.g. `https://headscale.example.com/version` +- Authenticate using HTTP Bearer authentication by sending the [API key](#api) with the HTTP `Authorization: Bearer + ` header. + +Start by [creating an API key](#api) and test it with the examples below. Read the API documentation provided by your +Headscale server at `/swagger` for details. + +=== "Get details for all users" + + ```console + curl -H "Authorization: Bearer " \ + https://headscale.example.com/api/v1/user + ``` + +=== "Get details for user 'bob'" + + ```console + curl -H "Authorization: Bearer " \ + https://headscale.example.com/api/v1/user?name=bob + ``` + +=== "Register a node" + + ```console + curl -H "Authorization: Bearer " \ + -d user= -d key= \ + https://headscale.example.com/api/v1/node/register + ``` + +## gRPC + +The gRPC interface can be used to control a Headscale instance from a remote machine with the `headscale` binary. + +### Prerequisite + +- A workstation to run `headscale` (any supported platform, e.g. Linux). +- A Headscale server with gRPC enabled. +- Connections to the gRPC port (default: `50443`) are allowed. +- Remote access requires an encrypted connection via TLS. 
+- An [API key](#api) to authenticate with the Headscale server. + +### Setup remote control + +1. Download the [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases). Make + sure to use the same version as on the server. + +1. Put the binary somewhere in your `PATH`, e.g. `/usr/local/bin/headscale` + +1. Make `headscale` executable: `chmod +x /usr/local/bin/headscale` + +1. [Create an API key](#api) on the Headscale server. + +1. Provide the connection parameters for the remote Headscale server either via a minimal YAML configuration file or + via environment variables: + + === "Minimal YAML configuration file" + + ```yaml title="config.yaml" + cli: + address: : + api_key: + ``` + + === "Environment variables" + + ```shell + export HEADSCALE_CLI_ADDRESS=":" + export HEADSCALE_CLI_API_KEY="" + ``` + + This instructs the `headscale` binary to connect to a remote instance at `:`, instead of + connecting to the local instance. + +1. Test the connection by listing all nodes: + + ```shell + headscale nodes list + ``` + + You should now be able to see a list of your nodes from your workstation, and you can + now control the Headscale server from your workstation. + +### Behind a proxy + +It's possible to run the gRPC remote endpoint behind a reverse proxy, like Nginx, and have it run on the _same_ port as Headscale. + +While this is _not a supported_ feature, an example on how this can be set up on +[NixOS is shown here](https://github.com/kradalby/dotfiles/blob/4489cdbb19cddfbfae82cd70448a38fde5a76711/machines/headscale.oracldn/headscale.nix#L61-L91). + +### Troubleshooting + +- Make sure you have the _same_ Headscale version on your server and workstation. +- Ensure that connections to the gRPC port are allowed. +- Verify that your TLS certificate is valid and trusted. +- If you don't have access to a trusted certificate (e.g. from Let's Encrypt), either: + - Add your self-signed certificate to the trust store of your OS _or_ + - Disable certificate verification by either setting `cli.insecure: true` in the configuration file or by setting + `HEADSCALE_CLI_INSECURE=1` via an environment variable. We do **not** recommend to disable certificate validation. diff --git a/docs/ref/debug.md b/docs/ref/debug.md index 2c6ef5b2..f2899d69 100644 --- a/docs/ref/debug.md +++ b/docs/ref/debug.md @@ -64,6 +64,9 @@ Headscale provides a metrics and debug endpoint. It allows to introspect differe Keep the metrics and debug endpoint private to your internal network and don't expose it to the Internet. + The metrics and debug interface can be disabled completely by setting `metrics_listen_addr: null` in the + [configuration file](./configuration.md). + Query metrics via and get an overview of available debug information via . Metrics may be queried from outside localhost but the debug interface is subject to additional protection despite listening on all interfaces. diff --git a/docs/ref/integration/tools.md b/docs/ref/integration/tools.md index d5849ffe..2cf7d619 100644 --- a/docs/ref/integration/tools.md +++ b/docs/ref/integration/tools.md @@ -7,10 +7,16 @@ This page collects third-party tools, client libraries, and scripts related to headscale. 
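As a quick check of the metrics and debug endpoint behaviour described in the `docs/ref/debug.md` change above, a minimal sketch assuming the default `metrics_listen_addr` of `127.0.0.1:9090`; the `/debug/` path is an assumption about the default setup rather than something stated in this change:

```shell
# Query Prometheus metrics on the default listen address (assumed: 127.0.0.1:9090)
curl http://127.0.0.1:9090/metrics

# Overview of the debug handlers on the same address (path assumed)
curl http://127.0.0.1:9090/debug/
```

Setting `metrics_listen_addr: null` in the configuration file, as noted above, disables both endpoints.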
-| Name | Repository Link | Description | -| --------------------- | --------------------------------------------------------------- | -------------------------------------------------------------------- | -| tailscale-manager | [Github](https://github.com/singlestore-labs/tailscale-manager) | Dynamically manage Tailscale route advertisements | -| headscalebacktosqlite | [Github](https://github.com/bigbozza/headscalebacktosqlite) | Migrate headscale from PostgreSQL back to SQLite | -| headscale-pf | [Github](https://github.com/YouSysAdmin/headscale-pf) | Populates user groups based on user groups in Jumpcloud or Authentik | -| headscale-client-go | [Github](https://github.com/hibare/headscale-client-go) | A Go client implementation for the Headscale HTTP API. | -| headscale-zabbix | [Github](https://github.com/dblanque/headscale-zabbix) | A Zabbix Monitoring Template for the Headscale Service. | +- [headscale-operator](https://github.com/infradohq/headscale-operator) - Headscale Kubernetes Operator +- [tailscale-manager](https://github.com/singlestore-labs/tailscale-manager) - Dynamically manage Tailscale route + advertisements +- [headscalebacktosqlite](https://github.com/bigbozza/headscalebacktosqlite) - Migrate headscale from PostgreSQL back to + SQLite +- [headscale-pf](https://github.com/YouSysAdmin/headscale-pf) - Populates user groups based on user groups in Jumpcloud + or Authentik +- [headscale-client-go](https://github.com/hibare/headscale-client-go) - A Go client implementation for the Headscale + HTTP API. +- [headscale-zabbix](https://github.com/dblanque/headscale-zabbix) - A Zabbix Monitoring Template for the Headscale + Service. +- [tailscale-exporter](https://github.com/adinhodovic/tailscale-exporter) - A Prometheus exporter for Headscale that + provides network-level metrics using the Headscale API. diff --git a/docs/ref/integration/web-ui.md b/docs/ref/integration/web-ui.md index e0436a87..12238b94 100644 --- a/docs/ref/integration/web-ui.md +++ b/docs/ref/integration/web-ui.md @@ -7,14 +7,18 @@ Headscale doesn't provide a built-in web interface but users may pick one from the available options. 
-| Name | Repository Link | Description | -| ---------------------- | ----------------------------------------------------------- | -------------------------------------------------------------------------------------------- | -| headscale-ui | [Github](https://github.com/gurucomputing/headscale-ui) | A web frontend for the headscale Tailscale-compatible coordination server | -| HeadscaleUi | [GitHub](https://github.com/simcu/headscale-ui) | A static headscale admin ui, no backend environment required | -| Headplane | [GitHub](https://github.com/tale/headplane) | An advanced Tailscale inspired frontend for headscale | -| headscale-admin | [Github](https://github.com/GoodiesHQ/headscale-admin) | Headscale-Admin is meant to be a simple, modern web interface for headscale | -| ouroboros | [Github](https://github.com/yellowsink/ouroboros) | Ouroboros is designed for users to manage their own devices, rather than for admins | -| unraid-headscale-admin | [Github](https://github.com/ich777/unraid-headscale-admin) | A simple headscale admin UI for Unraid, it offers Local (`docker exec`) and API Mode | -| headscale-console | [Github](https://github.com/rickli-cloud/headscale-console) | WebAssembly-based client supporting SSH, VNC and RDP with optional self-service capabilities | +- [headscale-ui](https://github.com/gurucomputing/headscale-ui) - A web frontend for the headscale Tailscale-compatible + coordination server +- [HeadscaleUi](https://github.com/simcu/headscale-ui) - A static headscale admin ui, no backend environment required +- [Headplane](https://github.com/tale/headplane) - An advanced Tailscale inspired frontend for headscale +- [headscale-admin](https://github.com/GoodiesHQ/headscale-admin) - Headscale-Admin is meant to be a simple, modern web + interface for headscale +- [ouroboros](https://github.com/yellowsink/ouroboros) - Ouroboros is designed for users to manage their own devices, + rather than for admins +- [unraid-headscale-admin](https://github.com/ich777/unraid-headscale-admin) - A simple headscale admin UI for Unraid, + it offers Local (`docker exec`) and API Mode +- [headscale-console](https://github.com/rickli-cloud/headscale-console) - WebAssembly-based client supporting SSH, VNC + and RDP with optional self-service capabilities +- [headscale-piying](https://github.com/wszgrcy/headscale-piying) - headscale web ui,support visual ACL configuration You can ask for support on our [Discord server](https://discord.gg/c84AZQhmpx) in the "web-interfaces" channel. diff --git a/docs/ref/oidc.md b/docs/ref/oidc.md index f56da4f2..f6ec1bcd 100644 --- a/docs/ref/oidc.md +++ b/docs/ref/oidc.md @@ -77,6 +77,7 @@ are configured, a user needs to pass all of them. * Check the email domain of each authenticating user against the list of allowed domains and only authorize users whose email domain matches `example.com`. + * A verified email address is required [unless email verification is disabled](#control-email-verification). * Access allowed: `alice@example.com` * Access denied: `bob@example.net` @@ -93,6 +94,7 @@ are configured, a user needs to pass all of them. * Check the email address of each authenticating user against the list of allowed email addresses and only authorize users whose email is part of the `allowed_users` list. + * A verified email address is required [unless email verification is disabled](#control-email-verification). 
* Access allowed: `alice@example.com`, `bob@example.net` * Access denied: `mallory@example.net` @@ -123,6 +125,23 @@ are configured, a user needs to pass all of them. - "headscale_users" ``` +### Control email verification + +Headscale uses the `email` claim from the identity provider to synchronize the email address to its user profile. By +default, a user's email address is only synchronized when the identity provider reports the email address as verified +via the `email_verified: true` claim. + +Unverified emails may be allowed in case an identity provider does not send the `email_verified` claim or email +verification is not required. In that case, a user's email address is always synchronized to the user profile. + +```yaml hl_lines="5" +oidc: + issuer: "https://sso.example.com" + client_id: "headscale" + client_secret: "generated-secret" + email_verified_required: false +``` + ### Customize node expiration The node expiration is the amount of time a node is authenticated with OpenID Connect until it expires and needs to @@ -189,7 +208,7 @@ endpoint. | Headscale profile | OIDC claim | Notes / examples | | ------------------- | -------------------- | ------------------------------------------------------------------------------------------------- | -| email address | `email` | Only used when `email_verified: true` | +| email address | `email` | Only verified emails are synchronized, unless `email_verified_required: false` is configured | | display name | `name` | eg: `Sam Smith` | | username | `preferred_username` | Depends on identity provider, eg: `ssmith`, `ssmith@idp.example.com`, `\\example.com\ssmith` | | profile picture | `picture` | URL to a profile picture or avatar | @@ -205,8 +224,6 @@ endpoint. - The username must be at least two characters long. - It must only contain letters, digits, hyphens, dots, underscores, and up to a single `@`. - The username must start with a letter. -- A user's email address is only synchronized to the local user profile when the identity provider marks the email - address as verified (`email_verified: true`). Please see the [GitHub label "OIDC"](https://github.com/juanfont/headscale/labels/OIDC) for OIDC related issues. @@ -305,5 +322,13 @@ Entra ID is: `https://login.microsoftonline.com//v2.0`. The followi - `domain_hint: example.com` to use your own domain - `prompt: select_account` to force an account picker during login -Groups for the [allowed groups filter](#authorize-users-with-filters) need to be specified with their group ID instead +When using Microsoft Entra ID together with the [allowed groups filter](#authorize-users-with-filters), configure the +Headscale OIDC scope without the `groups` claim, for example: + +```yaml +oidc: + scope: ["openid", "profile", "email"] +``` + +Groups for the [allowed groups filter](#authorize-users-with-filters) need to be specified with their group ID(UUID) instead of the group name. diff --git a/docs/ref/remote-cli.md b/docs/ref/remote-cli.md deleted file mode 100644 index 61df67fd..00000000 --- a/docs/ref/remote-cli.md +++ /dev/null @@ -1,99 +0,0 @@ -# Controlling headscale with remote CLI - -This documentation has the goal of showing a user how-to control a headscale instance -from a remote machine with the `headscale` command line binary. - -## Prerequisite - -- A workstation to run `headscale` (any supported platform, e.g. Linux). -- A headscale server with gRPC enabled. -- Connections to the gRPC port (default: `50443`) are allowed. -- Remote access requires an encrypted connection via TLS. 
-- An API key to authenticate with the headscale server. - -## Create an API key - -We need to create an API key to authenticate with the remote headscale server when using it from our workstation. - -To create an API key, log into your headscale server and generate a key: - -```shell -headscale apikeys create --expiration 90d -``` - -Copy the output of the command and save it for later. Please note that you can not retrieve a key again, -if the key is lost, expire the old one, and create a new key. - -To list the keys currently associated with the server: - -```shell -headscale apikeys list -``` - -and to expire a key: - -```shell -headscale apikeys expire --prefix "" -``` - -## Download and configure headscale - -1. Download the [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases). Make - sure to use the same version as on the server. - -1. Put the binary somewhere in your `PATH`, e.g. `/usr/local/bin/headscale` - -1. Make `headscale` executable: - - ```shell - chmod +x /usr/local/bin/headscale - ``` - -1. Provide the connection parameters for the remote headscale server either via a minimal YAML configuration file or via - environment variables: - - === "Minimal YAML configuration file" - - ```yaml title="config.yaml" - cli: - address: : - api_key: - ``` - - === "Environment variables" - - ```shell - export HEADSCALE_CLI_ADDRESS=":" - export HEADSCALE_CLI_API_KEY="" - ``` - - This instructs the `headscale` binary to connect to a remote instance at `:`, instead of - connecting to the local instance. - -1. Test the connection - - Let us run the headscale command to verify that we can connect by listing our nodes: - - ```shell - headscale nodes list - ``` - - You should now be able to see a list of your nodes from your workstation, and you can - now control the headscale server from your workstation. - -## Behind a proxy - -It is possible to run the gRPC remote endpoint behind a reverse proxy, like Nginx, and have it run on the _same_ port as headscale. - -While this is _not a supported_ feature, an example on how this can be set up on -[NixOS is shown here](https://github.com/kradalby/dotfiles/blob/4489cdbb19cddfbfae82cd70448a38fde5a76711/machines/headscale.oracldn/headscale.nix#L61-L91). - -## Troubleshooting - -- Make sure you have the _same_ headscale version on your server and workstation. -- Ensure that connections to the gRPC port are allowed. -- Verify that your TLS certificate is valid and trusted. -- If you don't have access to a trusted certificate (e.g. from Let's Encrypt), either: - - Add your self-signed certificate to the trust store of your OS _or_ - - Disable certificate verification by either setting `cli.insecure: true` in the configuration file or by setting - `HEADSCALE_CLI_INSECURE=1` via an environment variable. We do **not** recommend to disable certificate validation. diff --git a/docs/ref/routes.md b/docs/ref/routes.md index a1c438b7..af8a3778 100644 --- a/docs/ref/routes.md +++ b/docs/ref/routes.md @@ -42,8 +42,9 @@ can be used. ```console $ headscale nodes list-routes -ID | Hostname | Approved | Available | Serving (Primary) -1 | myrouter | | 10.0.0.0/8, 192.168.0.0/24 | +ID | Hostname | Approved | Available | Serving (Primary) +1 | myrouter | | 10.0.0.0/8 | + | | | 192.168.0.0/24 | ``` Approve all desired routes of a subnet router by specifying them as comma separated list: @@ -57,8 +58,9 @@ The node `myrouter` can now route the IPv4 networks `10.0.0.0/8` and `192.168.0. 
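The command that actually approves the advertised routes falls between the hunks above and is not shown in this diff; as a rough sketch only, with the subcommand and flag names assumed rather than confirmed by this change, approving both prefixes on node 1 might look like:

```shell
# Assumed CLI shape; check `headscale nodes --help` for the exact subcommand and flags
headscale nodes approve-routes --identifier 1 --routes 10.0.0.0/8,192.168.0.0/24
```

After approval, `headscale nodes list-routes` should show both prefixes in the Approved and Serving columns, as in the output below.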
```console $ headscale nodes list-routes -ID | Hostname | Approved | Available | Serving (Primary) -1 | myrouter | 10.0.0.0/8, 192.168.0.0/24 | 10.0.0.0/8, 192.168.0.0/24 | 10.0.0.0/8, 192.168.0.0/24 +ID | Hostname | Approved | Available | Serving (Primary) +1 | myrouter | 10.0.0.0/8 | 10.0.0.0/8 | 10.0.0.0/8 + | | 192.168.0.0/24 | 192.168.0.0/24 | 192.168.0.0/24 ``` #### Use the subnet router @@ -109,9 +111,9 @@ approval of routes served with a subnet router. The ACL snippet below defines the tag `tag:router` owned by the user `alice`. This tag is used for `routes` in the `autoApprovers` section. The IPv4 route `192.168.0.0/24` is automatically approved once announced by a subnet router -owned by the user `alice` and that also advertises the tag `tag:router`. +that advertises the tag `tag:router`. -```json title="Subnet routers owned by alice and tagged with tag:router are automatically approved" +```json title="Subnet routers tagged with tag:router are automatically approved" { "tagOwners": { "tag:router": ["alice@"] @@ -168,8 +170,9 @@ available, but needs to be approved: ```console $ headscale nodes list-routes -ID | Hostname | Approved | Available | Serving (Primary) -1 | myexit | | 0.0.0.0/0, ::/0 | +ID | Hostname | Approved | Available | Serving (Primary) +1 | myexit | | 0.0.0.0/0 | + | | | ::/0 | ``` For exit nodes, it is sufficient to approve either the IPv4 or IPv6 route. The other will be approved automatically. @@ -183,8 +186,9 @@ The node `myexit` is now approved as exit node for the tailnet: ```console $ headscale nodes list-routes -ID | Hostname | Approved | Available | Serving (Primary) -1 | myexit | 0.0.0.0/0, ::/0 | 0.0.0.0/0, ::/0 | 0.0.0.0/0, ::/0 +ID | Hostname | Approved | Available | Serving (Primary) +1 | myexit | 0.0.0.0/0 | 0.0.0.0/0 | 0.0.0.0/0 + | | ::/0 | ::/0 | ::/0 ``` #### Use the exit node @@ -256,10 +260,9 @@ in a tailnet. Headscale supports the `autoApprovers` section of an ACL to automa soon as it joins the tailnet. The ACL snippet below defines the tag `tag:exit` owned by the user `alice`. This tag is used for `exitNode` in the -`autoApprovers` section. A new exit node which is owned by the user `alice` and that also advertises the tag `tag:exit` -is automatically approved: +`autoApprovers` section. A new exit node that advertises the tag `tag:exit` is automatically approved: -```json title="Exit nodes owned by alice and tagged with tag:exit are automatically approved" +```json title="Exit nodes tagged with tag:exit are automatically approved" { "tagOwners": { "tag:exit": ["alice@"] diff --git a/docs/setup/install/container.md b/docs/setup/install/container.md index 98caa19c..dca22537 100644 --- a/docs/setup/install/container.md +++ b/docs/setup/install/container.md @@ -18,10 +18,10 @@ Registry](https://github.com/juanfont/headscale/pkgs/container/headscale). The c ## Configure and run headscale -1. Create a directory on the Docker host to store headscale's [configuration](../../ref/configuration.md) and the [SQLite](https://www.sqlite.org/) database: +1. Create a directory on the container host to store headscale's [configuration](../../ref/configuration.md) and the [SQLite](https://www.sqlite.org/) database: ```shell - mkdir -p ./headscale/{config,lib,run} + mkdir -p ./headscale/{config,lib} cd ./headscale ``` @@ -34,9 +34,10 @@ Registry](https://github.com/juanfont/headscale/pkgs/container/headscale). 
The c docker run \ --name headscale \ --detach \ - --volume "$(pwd)/config:/etc/headscale" \ + --read-only \ + --tmpfs /var/run/headscale \ + --volume "$(pwd)/config:/etc/headscale:ro" \ --volume "$(pwd)/lib:/var/lib/headscale" \ - --volume "$(pwd)/run:/var/run/headscale" \ --publish 127.0.0.1:8080:8080 \ --publish 127.0.0.1:9090:9090 \ --health-cmd "CMD headscale health" \ @@ -57,15 +58,17 @@ Registry](https://github.com/juanfont/headscale/pkgs/container/headscale). The c image: docker.io/headscale/headscale: restart: unless-stopped container_name: headscale + read_only: true + tmpfs: + - /var/run/headscale ports: - "127.0.0.1:8080:8080" - "127.0.0.1:9090:9090" volumes: # Please set to the absolute path # of the previously created headscale directory. - - /config:/etc/headscale + - /config:/etc/headscale:ro - /lib:/var/lib/headscale - - /run:/var/run/headscale command: serve healthcheck: test: ["CMD", "headscale", "health"] @@ -88,45 +91,10 @@ Registry](https://github.com/juanfont/headscale/pkgs/container/headscale). The c Verify headscale is available: ```shell - curl http://127.0.0.1:9090/metrics + curl http://127.0.0.1:8080/health ``` -1. Create a headscale user: - - ```shell - docker exec -it headscale \ - headscale users create myfirstuser - ``` - -### Register a machine (normal login) - -On a client machine, execute the `tailscale up` command to login: - -```shell -tailscale up --login-server YOUR_HEADSCALE_URL -``` - -To register a machine when running headscale in a container, take the headscale command and pass it to the container: - -```shell -docker exec -it headscale \ - headscale nodes register --user myfirstuser --key -``` - -### Register a machine using a pre authenticated key - -Generate a key using the command line for the user with ID 1: - -```shell -docker exec -it headscale \ - headscale preauthkeys create --user 1 --reusable --expiration 24h -``` - -This will return a pre-authenticated key that can be used to connect a node to headscale with the `tailscale up` command: - -```shell -tailscale up --login-server --authkey -``` +Continue on the [getting started page](../../usage/getting-started.md) to register your first machine. ## Debugging headscale running in Docker diff --git a/docs/setup/install/official.md b/docs/setup/install/official.md index 17d23b23..56fd0c9c 100644 --- a/docs/setup/install/official.md +++ b/docs/setup/install/official.md @@ -42,6 +42,8 @@ distributions are Ubuntu 22.04 or newer, Debian 12 or newer. sudo systemctl status headscale ``` +Continue on the [getting started page](../../usage/getting-started.md) to register your first machine. + ## Using standalone binaries (advanced) !!! warning "Advanced" @@ -115,3 +117,5 @@ managed by systemd. ```shell systemctl status headscale ``` + +Continue on the [getting started page](../../usage/getting-started.md) to register your first machine. diff --git a/docs/setup/requirements.md b/docs/setup/requirements.md index 627e24ed..ae1ea660 100644 --- a/docs/setup/requirements.md +++ b/docs/setup/requirements.md @@ -28,7 +28,7 @@ The ports in use vary with the intended scenario and enabled features. Some of t - STUN, required if the [embedded DERP server](../ref/derp.md) is enabled - tcp/50443 - Expose publicly: yes - - Only required if the gRPC interface is used to [remote-control Headscale](../ref/remote-cli.md). + - Only required if the gRPC interface is used to [remote-control Headscale](../ref/api.md#grpc). 
- tcp/9090 - Expose publicly: no - [Metrics and debug endpoint](../ref/debug.md#metrics-and-debug-endpoint) diff --git a/docs/usage/getting-started.md b/docs/usage/getting-started.md index 7d2c62da..a69d89a3 100644 --- a/docs/usage/getting-started.md +++ b/docs/usage/getting-started.md @@ -9,8 +9,8 @@ This page helps you get started with headscale and provides a few usage examples installation instructions. * The configuration file exists and is adjusted to suit your environment, see [Configuration](../ref/configuration.md) for details. - * Headscale is reachable from the Internet. Verify this by opening client specific setup instructions in your - browser, e.g. https://headscale.example.com/windows + * Headscale is reachable from the Internet. Verify this by visiting the health endpoint: + https://headscale.example.com/health * The Tailscale client is installed, see [Client and operating system support](../about/clients.md) for more information. @@ -41,6 +41,23 @@ options, run: headscale --help ``` +!!! note "Manage headscale from another local user" + + By default only the user `headscale` or `root` will have the necessary permissions to access the unix socket + (`/var/run/headscale/headscale.sock`) that is used to communicate with the service. In order to be able to + communicate with the headscale service you have to make sure the unix socket is accessible by the user that runs + the commands. In general you can achieve this by any of the following methods: + + * using `sudo` + * run the commands as user `headscale` + * add your user to the `headscale` group + + To verify you can run the following command using your preferred method: + + ```shell + headscale users list + ``` + ## Manage headscale users In headscale, a node (also known as machine or device) is always assigned to a diff --git a/flake.lock b/flake.lock index 37f45d79..9f77e322 100644 --- a/flake.lock +++ b/flake.lock @@ -20,11 +20,11 @@ }, "nixpkgs": { "locked": { - "lastModified": 1760533177, - "narHash": "sha256-OwM1sFustLHx+xmTymhucZuNhtq98fHIbfO8Swm5L8A=", + "lastModified": 1768875095, + "narHash": "sha256-dYP3DjiL7oIiiq3H65tGIXXIT1Waiadmv93JS0sS+8A=", "owner": "NixOS", "repo": "nixpkgs", - "rev": "35f590344ff791e6b1d6d6b8f3523467c9217caf", + "rev": "ed142ab1b3a092c4d149245d0c4126a5d7ea00b0", "type": "github" }, "original": { diff --git a/flake.nix b/flake.nix index f8eb6dd1..3b5cff09 100644 --- a/flake.nix +++ b/flake.nix @@ -6,239 +6,232 @@ flake-utils.url = "github:numtide/flake-utils"; }; - outputs = { - self, - nixpkgs, - flake-utils, - ... - }: let - headscaleVersion = self.shortRev or self.dirtyShortRev; - commitHash = self.rev or self.dirtyRev; - in + outputs = + { self + , nixpkgs + , flake-utils + , ... + }: + let + headscaleVersion = self.shortRev or self.dirtyShortRev; + commitHash = self.rev or self.dirtyRev; + in { - overlay = _: prev: let - pkgs = nixpkgs.legacyPackages.${prev.system}; - buildGo = pkgs.buildGo125Module; - vendorHash = "sha256-VOi4PGZ8I+2MiwtzxpKc/4smsL5KcH/pHVkjJfAFPJ0="; - in { - headscale = buildGo { - pname = "headscale"; - version = headscaleVersion; - src = pkgs.lib.cleanSource self; - - # Only run unit tests when testing a build - checkFlags = ["-short"]; - - # When updating go.mod or go.sum, a new sha will need to be calculated, - # update this if you have a mismatch after doing a change to those files. 
- inherit vendorHash; - - subPackages = ["cmd/headscale"]; - - ldflags = [ - "-s" - "-w" - "-X github.com/juanfont/headscale/hscontrol/types.Version=${headscaleVersion}" - "-X github.com/juanfont/headscale/hscontrol/types.GitCommitHash=${commitHash}" - ]; - }; - - hi = buildGo { - pname = "hi"; - version = headscaleVersion; - src = pkgs.lib.cleanSource self; - - checkFlags = ["-short"]; - inherit vendorHash; - - subPackages = ["cmd/hi"]; - }; - - protoc-gen-grpc-gateway = buildGo rec { - pname = "grpc-gateway"; - version = "2.24.0"; - - src = pkgs.fetchFromGitHub { - owner = "grpc-ecosystem"; - repo = "grpc-gateway"; - rev = "v${version}"; - sha256 = "sha256-lUEoqXJF1k4/il9bdDTinkUV5L869njZNYqObG/mHyA="; - }; - - vendorHash = "sha256-Ttt7bPKU+TMKRg5550BS6fsPwYp0QJqcZ7NLrhttSdw="; - - nativeBuildInputs = [pkgs.installShellFiles]; - - subPackages = ["protoc-gen-grpc-gateway" "protoc-gen-openapiv2"]; - }; - - protobuf-language-server = buildGo rec { - pname = "protobuf-language-server"; - version = "2546944"; - - src = pkgs.fetchFromGitHub { - owner = "lasorda"; - repo = "protobuf-language-server"; - rev = "${version}"; - sha256 = "sha256-Cbr3ktT86RnwUntOiDKRpNTClhdyrKLTQG2ZEd6fKDc="; - }; - - vendorHash = "sha256-PfT90dhfzJZabzLTb1D69JCO+kOh2khrlpF5mCDeypk="; - - subPackages = ["."]; - }; - - # Upstream does not override buildGoModule properly, - # importing a specific module, so comment out for now. - # golangci-lint = prev.golangci-lint.override { - # buildGoModule = buildGo; - # }; - # golangci-lint-langserver = prev.golangci-lint.override { - # buildGoModule = buildGo; - # }; - - # The package uses buildGo125Module, not the convention. - # goreleaser = prev.goreleaser.override { - # buildGoModule = buildGo; - # }; - - gotestsum = prev.gotestsum.override { - buildGoModule = buildGo; - }; - - gotests = prev.gotests.override { - buildGoModule = buildGo; - }; - - gofumpt = prev.gofumpt.override { - buildGoModule = buildGo; - }; - - # gopls = prev.gopls.override { - # buildGoModule = buildGo; - # }; + # NixOS module + nixosModules = rec { + headscale = import ./nix/module.nix; + default = headscale; }; + + overlays.default = _: prev: + let + pkgs = nixpkgs.legacyPackages.${prev.stdenv.hostPlatform.system}; + buildGo = pkgs.buildGo125Module; + vendorHash = "sha256-dWsDgI5K+8mFw4PA5gfFBPCSqBJp5RcZzm0ML1+HsWw="; + in + { + headscale = buildGo { + pname = "headscale"; + version = headscaleVersion; + src = pkgs.lib.cleanSource self; + + # Only run unit tests when testing a build + checkFlags = [ "-short" ]; + + # When updating go.mod or go.sum, a new sha will need to be calculated, + # update this if you have a mismatch after doing a change to those files. 
+ inherit vendorHash; + + subPackages = [ "cmd/headscale" ]; + + meta = { + mainProgram = "headscale"; + }; + }; + + hi = buildGo { + pname = "hi"; + version = headscaleVersion; + src = pkgs.lib.cleanSource self; + + checkFlags = [ "-short" ]; + inherit vendorHash; + + subPackages = [ "cmd/hi" ]; + }; + + protoc-gen-grpc-gateway = buildGo rec { + pname = "grpc-gateway"; + version = "2.27.4"; + + src = pkgs.fetchFromGitHub { + owner = "grpc-ecosystem"; + repo = "grpc-gateway"; + rev = "v${version}"; + sha256 = "sha256-4bhEQTVV04EyX/qJGNMIAQDcMWcDVr1tFkEjBHpc2CA="; + }; + + vendorHash = "sha256-ohZW/uPdt08Y2EpIQ2yeyGSjV9O58+QbQQqYrs6O8/g="; + + nativeBuildInputs = [ pkgs.installShellFiles ]; + + subPackages = [ "protoc-gen-grpc-gateway" "protoc-gen-openapiv2" ]; + }; + + protobuf-language-server = buildGo rec { + pname = "protobuf-language-server"; + version = "1cf777d"; + + src = pkgs.fetchFromGitHub { + owner = "lasorda"; + repo = "protobuf-language-server"; + rev = "1cf777de4d35a6e493a689e3ca1a6183ce3206b6"; + sha256 = "sha256-9MkBQPxr/TDr/sNz/Sk7eoZwZwzdVbE5u6RugXXk5iY="; + }; + + vendorHash = "sha256-4nTpKBe7ekJsfQf+P6edT/9Vp2SBYbKz1ITawD3bhkI="; + + subPackages = [ "." ]; + }; + + # Upstream does not override buildGoModule properly, + # importing a specific module, so comment out for now. + # golangci-lint = prev.golangci-lint.override { + # buildGoModule = buildGo; + # }; + # golangci-lint-langserver = prev.golangci-lint.override { + # buildGoModule = buildGo; + # }; + + # The package uses buildGo125Module, not the convention. + # goreleaser = prev.goreleaser.override { + # buildGoModule = buildGo; + # }; + + gotestsum = prev.gotestsum.override { + buildGoModule = buildGo; + }; + + gotests = prev.gotests.override { + buildGoModule = buildGo; + }; + + gofumpt = prev.gofumpt.override { + buildGoModule = buildGo; + }; + + # gopls = prev.gopls.override { + # buildGoModule = buildGo; + # }; + }; } // flake-utils.lib.eachDefaultSystem - (system: let - pkgs = import nixpkgs { - overlays = [self.overlay]; - inherit system; - }; - buildDeps = with pkgs; [git go_1_25 gnumake]; - devDeps = with pkgs; - buildDeps - ++ [ - golangci-lint - golangci-lint-langserver - golines - nodePackages.prettier - goreleaser - nfpm - gotestsum - gotests - gofumpt - gopls - ksh - ko - yq-go - ripgrep - postgresql - - # 'dot' is needed for pprof graphs - # go tool pprof -http=: - graphviz - - # Protobuf dependencies - protobuf - protoc-gen-go - protoc-gen-go-grpc - protoc-gen-grpc-gateway - buf - clang-tools # clang-format - protobuf-language-server - - # Add hi to make it even easier to use ci runner. 
- hi - ] - ++ lib.optional pkgs.stdenv.isLinux [traceroute]; - - # Add entry to build a docker image with headscale - # caveat: only works on Linux - # - # Usage: - # nix build .#headscale-docker - # docker load < result - headscale-docker = pkgs.dockerTools.buildLayeredImage { - name = "headscale"; - tag = headscaleVersion; - contents = [pkgs.headscale]; - config.Entrypoint = [(pkgs.headscale + "/bin/headscale")]; - }; - in rec { - # `nix develop` - devShell = pkgs.mkShell { - buildInputs = - devDeps + (system: + let + pkgs = import nixpkgs { + overlays = [ self.overlays.default ]; + inherit system; + }; + buildDeps = with pkgs; [ git go_1_25 gnumake ]; + devDeps = with pkgs; + buildDeps ++ [ - (pkgs.writeShellScriptBin - "nix-vendor-sri" - '' - set -eu + golangci-lint + golangci-lint-langserver + golines + nodePackages.prettier + nixpkgs-fmt + goreleaser + nfpm + gotestsum + gotests + gofumpt + gopls + ksh + ko + yq-go + ripgrep + postgresql + prek - OUT=$(mktemp -d -t nar-hash-XXXXXX) - rm -rf "$OUT" + # 'dot' is needed for pprof graphs + # go tool pprof -http=: + graphviz - go mod vendor -o "$OUT" - go run tailscale.com/cmd/nardump --sri "$OUT" - rm -rf "$OUT" - '') + # Protobuf dependencies + protobuf + protoc-gen-go + protoc-gen-go-grpc + protoc-gen-grpc-gateway + buf + clang-tools # clang-format + protobuf-language-server + ] + ++ lib.optional pkgs.stdenv.isLinux [ traceroute ]; - (pkgs.writeShellScriptBin - "go-mod-update-all" - '' - cat go.mod | ${pkgs.silver-searcher}/bin/ag "\t" | ${pkgs.silver-searcher}/bin/ag -v indirect | ${pkgs.gawk}/bin/awk '{print $1}' | ${pkgs.findutils}/bin/xargs go get -u - go mod tidy - '') - ]; + # Add entry to build a docker image with headscale + # caveat: only works on Linux + # + # Usage: + # nix build .#headscale-docker + # docker load < result + headscale-docker = pkgs.dockerTools.buildLayeredImage { + name = "headscale"; + tag = headscaleVersion; + contents = [ pkgs.headscale ]; + config.Entrypoint = [ (pkgs.headscale + "/bin/headscale") ]; + }; + in + { + # `nix develop` + devShells.default = pkgs.mkShell { + buildInputs = + devDeps + ++ [ + (pkgs.writeShellScriptBin + "nix-vendor-sri" + '' + set -eu - shellHook = '' - export PATH="$PWD/result/bin:$PATH" - ''; - }; + OUT=$(mktemp -d -t nar-hash-XXXXXX) + rm -rf "$OUT" - # `nix build` - packages = with pkgs; { - inherit headscale; - inherit headscale-docker; - }; - defaultPackage = pkgs.headscale; + go mod vendor -o "$OUT" + go run tailscale.com/cmd/nardump --sri "$OUT" + rm -rf "$OUT" + '') - # `nix run` - apps.headscale = flake-utils.lib.mkApp { - drv = packages.headscale; - }; - apps.default = apps.headscale; - - checks = { - format = - pkgs.runCommand "check-format" - { - buildInputs = with pkgs; [ - gnumake - nixpkgs-fmt - golangci-lint - nodePackages.prettier - golines - clang-tools + (pkgs.writeShellScriptBin + "go-mod-update-all" + '' + cat go.mod | ${pkgs.silver-searcher}/bin/ag "\t" | ${pkgs.silver-searcher}/bin/ag -v indirect | ${pkgs.gawk}/bin/awk '{print $1}' | ${pkgs.findutils}/bin/xargs go get -u + go mod tidy + '') ]; - } '' - ${pkgs.nixpkgs-fmt}/bin/nixpkgs-fmt ${./.} - ${pkgs.golangci-lint}/bin/golangci-lint run --fix --timeout 10m - ${pkgs.nodePackages.prettier}/bin/prettier --write '**/**.{ts,js,md,yaml,yml,sass,css,scss,html}' - ${pkgs.golines}/bin/golines --max-len=88 --base-formatter=gofumpt -w ${./.} - ${pkgs.clang-tools}/bin/clang-format -i ${./.} + + shellHook = '' + export PATH="$PWD/result/bin:$PATH" + export CGO_ENABLED=0 ''; - }; - }); + }; + + # `nix build` + 
packages = with pkgs; { + inherit headscale; + inherit headscale-docker; + default = headscale; + }; + + # `nix run` + apps.headscale = flake-utils.lib.mkApp { + drv = pkgs.headscale; + }; + apps.default = flake-utils.lib.mkApp { + drv = pkgs.headscale; + }; + + checks = { + headscale = pkgs.testers.nixosTest (import ./nix/tests/headscale.nix); + }; + }); } diff --git a/gen/go/headscale/v1/apikey.pb.go b/gen/go/headscale/v1/apikey.pb.go index a9f6a7b8..0c855738 100644 --- a/gen/go/headscale/v1/apikey.pb.go +++ b/gen/go/headscale/v1/apikey.pb.go @@ -1,6 +1,6 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.10 +// protoc-gen-go v1.36.11 // protoc (unknown) // source: headscale/v1/apikey.proto @@ -189,6 +189,7 @@ func (x *CreateApiKeyResponse) GetApiKey() string { type ExpireApiKeyRequest struct { state protoimpl.MessageState `protogen:"open.v1"` Prefix string `protobuf:"bytes,1,opt,name=prefix,proto3" json:"prefix,omitempty"` + Id uint64 `protobuf:"varint,2,opt,name=id,proto3" json:"id,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -230,6 +231,13 @@ func (x *ExpireApiKeyRequest) GetPrefix() string { return "" } +func (x *ExpireApiKeyRequest) GetId() uint64 { + if x != nil { + return x.Id + } + return 0 +} + type ExpireApiKeyResponse struct { state protoimpl.MessageState `protogen:"open.v1"` unknownFields protoimpl.UnknownFields @@ -349,6 +357,7 @@ func (x *ListApiKeysResponse) GetApiKeys() []*ApiKey { type DeleteApiKeyRequest struct { state protoimpl.MessageState `protogen:"open.v1"` Prefix string `protobuf:"bytes,1,opt,name=prefix,proto3" json:"prefix,omitempty"` + Id uint64 `protobuf:"varint,2,opt,name=id,proto3" json:"id,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -390,6 +399,13 @@ func (x *DeleteApiKeyRequest) GetPrefix() string { return "" } +func (x *DeleteApiKeyRequest) GetId() uint64 { + if x != nil { + return x.Id + } + return 0 +} + type DeleteApiKeyResponse struct { state protoimpl.MessageState `protogen:"open.v1"` unknownFields protoimpl.UnknownFields @@ -445,15 +461,17 @@ const file_headscale_v1_apikey_proto_rawDesc = "" + "expiration\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\n" + "expiration\"/\n" + "\x14CreateApiKeyResponse\x12\x17\n" + - "\aapi_key\x18\x01 \x01(\tR\x06apiKey\"-\n" + + "\aapi_key\x18\x01 \x01(\tR\x06apiKey\"=\n" + "\x13ExpireApiKeyRequest\x12\x16\n" + - "\x06prefix\x18\x01 \x01(\tR\x06prefix\"\x16\n" + + "\x06prefix\x18\x01 \x01(\tR\x06prefix\x12\x0e\n" + + "\x02id\x18\x02 \x01(\x04R\x02id\"\x16\n" + "\x14ExpireApiKeyResponse\"\x14\n" + "\x12ListApiKeysRequest\"F\n" + "\x13ListApiKeysResponse\x12/\n" + - "\bapi_keys\x18\x01 \x03(\v2\x14.headscale.v1.ApiKeyR\aapiKeys\"-\n" + + "\bapi_keys\x18\x01 \x03(\v2\x14.headscale.v1.ApiKeyR\aapiKeys\"=\n" + "\x13DeleteApiKeyRequest\x12\x16\n" + - "\x06prefix\x18\x01 \x01(\tR\x06prefix\"\x16\n" + + "\x06prefix\x18\x01 \x01(\tR\x06prefix\x12\x0e\n" + + "\x02id\x18\x02 \x01(\x04R\x02id\"\x16\n" + "\x14DeleteApiKeyResponseB)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3" var ( diff --git a/gen/go/headscale/v1/device.pb.go b/gen/go/headscale/v1/device.pb.go index 8b150f96..e2362b05 100644 --- a/gen/go/headscale/v1/device.pb.go +++ b/gen/go/headscale/v1/device.pb.go @@ -1,6 +1,6 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.10 +// protoc-gen-go v1.36.11 // protoc (unknown) // source: headscale/v1/device.proto diff --git a/gen/go/headscale/v1/headscale.pb.go b/gen/go/headscale/v1/headscale.pb.go index 2c594f5a..3d16778c 100644 --- a/gen/go/headscale/v1/headscale.pb.go +++ b/gen/go/headscale/v1/headscale.pb.go @@ -1,6 +1,6 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.10 +// protoc-gen-go v1.36.11 // protoc (unknown) // source: headscale/v1/headscale.proto @@ -109,7 +109,7 @@ const file_headscale_v1_headscale_proto_rawDesc = "" + "\x1cheadscale/v1/headscale.proto\x12\fheadscale.v1\x1a\x1cgoogle/api/annotations.proto\x1a\x17headscale/v1/user.proto\x1a\x1dheadscale/v1/preauthkey.proto\x1a\x17headscale/v1/node.proto\x1a\x19headscale/v1/apikey.proto\x1a\x19headscale/v1/policy.proto\"\x0f\n" + "\rHealthRequest\"E\n" + "\x0eHealthResponse\x123\n" + - "\x15database_connectivity\x18\x01 \x01(\bR\x14databaseConnectivity2\x80\x17\n" + + "\x15database_connectivity\x18\x01 \x01(\bR\x14databaseConnectivity2\x8c\x17\n" + "\x10HeadscaleService\x12h\n" + "\n" + "CreateUser\x12\x1f.headscale.v1.CreateUserRequest\x1a .headscale.v1.CreateUserResponse\"\x17\x82\xd3\xe4\x93\x02\x11:\x01*\"\f/api/v1/user\x12\x80\x01\n" + @@ -119,7 +119,8 @@ const file_headscale_v1_headscale_proto_rawDesc = "" + "DeleteUser\x12\x1f.headscale.v1.DeleteUserRequest\x1a .headscale.v1.DeleteUserResponse\"\x19\x82\xd3\xe4\x93\x02\x13*\x11/api/v1/user/{id}\x12b\n" + "\tListUsers\x12\x1e.headscale.v1.ListUsersRequest\x1a\x1f.headscale.v1.ListUsersResponse\"\x14\x82\xd3\xe4\x93\x02\x0e\x12\f/api/v1/user\x12\x80\x01\n" + "\x10CreatePreAuthKey\x12%.headscale.v1.CreatePreAuthKeyRequest\x1a&.headscale.v1.CreatePreAuthKeyResponse\"\x1d\x82\xd3\xe4\x93\x02\x17:\x01*\"\x12/api/v1/preauthkey\x12\x87\x01\n" + - "\x10ExpirePreAuthKey\x12%.headscale.v1.ExpirePreAuthKeyRequest\x1a&.headscale.v1.ExpirePreAuthKeyResponse\"$\x82\xd3\xe4\x93\x02\x1e:\x01*\"\x19/api/v1/preauthkey/expire\x12z\n" + + "\x10ExpirePreAuthKey\x12%.headscale.v1.ExpirePreAuthKeyRequest\x1a&.headscale.v1.ExpirePreAuthKeyResponse\"$\x82\xd3\xe4\x93\x02\x1e:\x01*\"\x19/api/v1/preauthkey/expire\x12}\n" + + "\x10DeletePreAuthKey\x12%.headscale.v1.DeletePreAuthKeyRequest\x1a&.headscale.v1.DeletePreAuthKeyResponse\"\x1a\x82\xd3\xe4\x93\x02\x14*\x12/api/v1/preauthkey\x12z\n" + "\x0fListPreAuthKeys\x12$.headscale.v1.ListPreAuthKeysRequest\x1a%.headscale.v1.ListPreAuthKeysResponse\"\x1a\x82\xd3\xe4\x93\x02\x14\x12\x12/api/v1/preauthkey\x12}\n" + "\x0fDebugCreateNode\x12$.headscale.v1.DebugCreateNodeRequest\x1a%.headscale.v1.DebugCreateNodeResponse\"\x1d\x82\xd3\xe4\x93\x02\x17:\x01*\"\x12/api/v1/debug/node\x12f\n" + "\aGetNode\x12\x1c.headscale.v1.GetNodeRequest\x1a\x1d.headscale.v1.GetNodeResponse\"\x1e\x82\xd3\xe4\x93\x02\x18\x12\x16/api/v1/node/{node_id}\x12n\n" + @@ -132,8 +133,7 @@ const file_headscale_v1_headscale_proto_rawDesc = "" + "ExpireNode\x12\x1f.headscale.v1.ExpireNodeRequest\x1a .headscale.v1.ExpireNodeResponse\"%\x82\xd3\xe4\x93\x02\x1f\"\x1d/api/v1/node/{node_id}/expire\x12\x81\x01\n" + "\n" + "RenameNode\x12\x1f.headscale.v1.RenameNodeRequest\x1a .headscale.v1.RenameNodeResponse\"0\x82\xd3\xe4\x93\x02*\"(/api/v1/node/{node_id}/rename/{new_name}\x12b\n" + - "\tListNodes\x12\x1e.headscale.v1.ListNodesRequest\x1a\x1f.headscale.v1.ListNodesResponse\"\x14\x82\xd3\xe4\x93\x02\x0e\x12\f/api/v1/node\x12q\n" + - "\bMoveNode\x12\x1d.headscale.v1.MoveNodeRequest\x1a\x1e.headscale.v1.MoveNodeResponse\"&\x82\xd3\xe4\x93\x02 
:\x01*\"\x1b/api/v1/node/{node_id}/user\x12\x80\x01\n" + + "\tListNodes\x12\x1e.headscale.v1.ListNodesRequest\x1a\x1f.headscale.v1.ListNodesResponse\"\x14\x82\xd3\xe4\x93\x02\x0e\x12\f/api/v1/node\x12\x80\x01\n" + "\x0fBackfillNodeIPs\x12$.headscale.v1.BackfillNodeIPsRequest\x1a%.headscale.v1.BackfillNodeIPsResponse\" \x82\xd3\xe4\x93\x02\x1a\"\x18/api/v1/node/backfillips\x12p\n" + "\fCreateApiKey\x12!.headscale.v1.CreateApiKeyRequest\x1a\".headscale.v1.CreateApiKeyResponse\"\x19\x82\xd3\xe4\x93\x02\x13:\x01*\"\x0e/api/v1/apikey\x12w\n" + "\fExpireApiKey\x12!.headscale.v1.ExpireApiKeyRequest\x1a\".headscale.v1.ExpireApiKeyResponse\" \x82\xd3\xe4\x93\x02\x1a:\x01*\"\x15/api/v1/apikey/expire\x12j\n" + @@ -165,17 +165,17 @@ var file_headscale_v1_headscale_proto_goTypes = []any{ (*ListUsersRequest)(nil), // 5: headscale.v1.ListUsersRequest (*CreatePreAuthKeyRequest)(nil), // 6: headscale.v1.CreatePreAuthKeyRequest (*ExpirePreAuthKeyRequest)(nil), // 7: headscale.v1.ExpirePreAuthKeyRequest - (*ListPreAuthKeysRequest)(nil), // 8: headscale.v1.ListPreAuthKeysRequest - (*DebugCreateNodeRequest)(nil), // 9: headscale.v1.DebugCreateNodeRequest - (*GetNodeRequest)(nil), // 10: headscale.v1.GetNodeRequest - (*SetTagsRequest)(nil), // 11: headscale.v1.SetTagsRequest - (*SetApprovedRoutesRequest)(nil), // 12: headscale.v1.SetApprovedRoutesRequest - (*RegisterNodeRequest)(nil), // 13: headscale.v1.RegisterNodeRequest - (*DeleteNodeRequest)(nil), // 14: headscale.v1.DeleteNodeRequest - (*ExpireNodeRequest)(nil), // 15: headscale.v1.ExpireNodeRequest - (*RenameNodeRequest)(nil), // 16: headscale.v1.RenameNodeRequest - (*ListNodesRequest)(nil), // 17: headscale.v1.ListNodesRequest - (*MoveNodeRequest)(nil), // 18: headscale.v1.MoveNodeRequest + (*DeletePreAuthKeyRequest)(nil), // 8: headscale.v1.DeletePreAuthKeyRequest + (*ListPreAuthKeysRequest)(nil), // 9: headscale.v1.ListPreAuthKeysRequest + (*DebugCreateNodeRequest)(nil), // 10: headscale.v1.DebugCreateNodeRequest + (*GetNodeRequest)(nil), // 11: headscale.v1.GetNodeRequest + (*SetTagsRequest)(nil), // 12: headscale.v1.SetTagsRequest + (*SetApprovedRoutesRequest)(nil), // 13: headscale.v1.SetApprovedRoutesRequest + (*RegisterNodeRequest)(nil), // 14: headscale.v1.RegisterNodeRequest + (*DeleteNodeRequest)(nil), // 15: headscale.v1.DeleteNodeRequest + (*ExpireNodeRequest)(nil), // 16: headscale.v1.ExpireNodeRequest + (*RenameNodeRequest)(nil), // 17: headscale.v1.RenameNodeRequest + (*ListNodesRequest)(nil), // 18: headscale.v1.ListNodesRequest (*BackfillNodeIPsRequest)(nil), // 19: headscale.v1.BackfillNodeIPsRequest (*CreateApiKeyRequest)(nil), // 20: headscale.v1.CreateApiKeyRequest (*ExpireApiKeyRequest)(nil), // 21: headscale.v1.ExpireApiKeyRequest @@ -189,17 +189,17 @@ var file_headscale_v1_headscale_proto_goTypes = []any{ (*ListUsersResponse)(nil), // 29: headscale.v1.ListUsersResponse (*CreatePreAuthKeyResponse)(nil), // 30: headscale.v1.CreatePreAuthKeyResponse (*ExpirePreAuthKeyResponse)(nil), // 31: headscale.v1.ExpirePreAuthKeyResponse - (*ListPreAuthKeysResponse)(nil), // 32: headscale.v1.ListPreAuthKeysResponse - (*DebugCreateNodeResponse)(nil), // 33: headscale.v1.DebugCreateNodeResponse - (*GetNodeResponse)(nil), // 34: headscale.v1.GetNodeResponse - (*SetTagsResponse)(nil), // 35: headscale.v1.SetTagsResponse - (*SetApprovedRoutesResponse)(nil), // 36: headscale.v1.SetApprovedRoutesResponse - (*RegisterNodeResponse)(nil), // 37: headscale.v1.RegisterNodeResponse - (*DeleteNodeResponse)(nil), // 38: headscale.v1.DeleteNodeResponse - 
(*ExpireNodeResponse)(nil), // 39: headscale.v1.ExpireNodeResponse - (*RenameNodeResponse)(nil), // 40: headscale.v1.RenameNodeResponse - (*ListNodesResponse)(nil), // 41: headscale.v1.ListNodesResponse - (*MoveNodeResponse)(nil), // 42: headscale.v1.MoveNodeResponse + (*DeletePreAuthKeyResponse)(nil), // 32: headscale.v1.DeletePreAuthKeyResponse + (*ListPreAuthKeysResponse)(nil), // 33: headscale.v1.ListPreAuthKeysResponse + (*DebugCreateNodeResponse)(nil), // 34: headscale.v1.DebugCreateNodeResponse + (*GetNodeResponse)(nil), // 35: headscale.v1.GetNodeResponse + (*SetTagsResponse)(nil), // 36: headscale.v1.SetTagsResponse + (*SetApprovedRoutesResponse)(nil), // 37: headscale.v1.SetApprovedRoutesResponse + (*RegisterNodeResponse)(nil), // 38: headscale.v1.RegisterNodeResponse + (*DeleteNodeResponse)(nil), // 39: headscale.v1.DeleteNodeResponse + (*ExpireNodeResponse)(nil), // 40: headscale.v1.ExpireNodeResponse + (*RenameNodeResponse)(nil), // 41: headscale.v1.RenameNodeResponse + (*ListNodesResponse)(nil), // 42: headscale.v1.ListNodesResponse (*BackfillNodeIPsResponse)(nil), // 43: headscale.v1.BackfillNodeIPsResponse (*CreateApiKeyResponse)(nil), // 44: headscale.v1.CreateApiKeyResponse (*ExpireApiKeyResponse)(nil), // 45: headscale.v1.ExpireApiKeyResponse @@ -215,17 +215,17 @@ var file_headscale_v1_headscale_proto_depIdxs = []int32{ 5, // 3: headscale.v1.HeadscaleService.ListUsers:input_type -> headscale.v1.ListUsersRequest 6, // 4: headscale.v1.HeadscaleService.CreatePreAuthKey:input_type -> headscale.v1.CreatePreAuthKeyRequest 7, // 5: headscale.v1.HeadscaleService.ExpirePreAuthKey:input_type -> headscale.v1.ExpirePreAuthKeyRequest - 8, // 6: headscale.v1.HeadscaleService.ListPreAuthKeys:input_type -> headscale.v1.ListPreAuthKeysRequest - 9, // 7: headscale.v1.HeadscaleService.DebugCreateNode:input_type -> headscale.v1.DebugCreateNodeRequest - 10, // 8: headscale.v1.HeadscaleService.GetNode:input_type -> headscale.v1.GetNodeRequest - 11, // 9: headscale.v1.HeadscaleService.SetTags:input_type -> headscale.v1.SetTagsRequest - 12, // 10: headscale.v1.HeadscaleService.SetApprovedRoutes:input_type -> headscale.v1.SetApprovedRoutesRequest - 13, // 11: headscale.v1.HeadscaleService.RegisterNode:input_type -> headscale.v1.RegisterNodeRequest - 14, // 12: headscale.v1.HeadscaleService.DeleteNode:input_type -> headscale.v1.DeleteNodeRequest - 15, // 13: headscale.v1.HeadscaleService.ExpireNode:input_type -> headscale.v1.ExpireNodeRequest - 16, // 14: headscale.v1.HeadscaleService.RenameNode:input_type -> headscale.v1.RenameNodeRequest - 17, // 15: headscale.v1.HeadscaleService.ListNodes:input_type -> headscale.v1.ListNodesRequest - 18, // 16: headscale.v1.HeadscaleService.MoveNode:input_type -> headscale.v1.MoveNodeRequest + 8, // 6: headscale.v1.HeadscaleService.DeletePreAuthKey:input_type -> headscale.v1.DeletePreAuthKeyRequest + 9, // 7: headscale.v1.HeadscaleService.ListPreAuthKeys:input_type -> headscale.v1.ListPreAuthKeysRequest + 10, // 8: headscale.v1.HeadscaleService.DebugCreateNode:input_type -> headscale.v1.DebugCreateNodeRequest + 11, // 9: headscale.v1.HeadscaleService.GetNode:input_type -> headscale.v1.GetNodeRequest + 12, // 10: headscale.v1.HeadscaleService.SetTags:input_type -> headscale.v1.SetTagsRequest + 13, // 11: headscale.v1.HeadscaleService.SetApprovedRoutes:input_type -> headscale.v1.SetApprovedRoutesRequest + 14, // 12: headscale.v1.HeadscaleService.RegisterNode:input_type -> headscale.v1.RegisterNodeRequest + 15, // 13: 
headscale.v1.HeadscaleService.DeleteNode:input_type -> headscale.v1.DeleteNodeRequest + 16, // 14: headscale.v1.HeadscaleService.ExpireNode:input_type -> headscale.v1.ExpireNodeRequest + 17, // 15: headscale.v1.HeadscaleService.RenameNode:input_type -> headscale.v1.RenameNodeRequest + 18, // 16: headscale.v1.HeadscaleService.ListNodes:input_type -> headscale.v1.ListNodesRequest 19, // 17: headscale.v1.HeadscaleService.BackfillNodeIPs:input_type -> headscale.v1.BackfillNodeIPsRequest 20, // 18: headscale.v1.HeadscaleService.CreateApiKey:input_type -> headscale.v1.CreateApiKeyRequest 21, // 19: headscale.v1.HeadscaleService.ExpireApiKey:input_type -> headscale.v1.ExpireApiKeyRequest @@ -240,17 +240,17 @@ var file_headscale_v1_headscale_proto_depIdxs = []int32{ 29, // 28: headscale.v1.HeadscaleService.ListUsers:output_type -> headscale.v1.ListUsersResponse 30, // 29: headscale.v1.HeadscaleService.CreatePreAuthKey:output_type -> headscale.v1.CreatePreAuthKeyResponse 31, // 30: headscale.v1.HeadscaleService.ExpirePreAuthKey:output_type -> headscale.v1.ExpirePreAuthKeyResponse - 32, // 31: headscale.v1.HeadscaleService.ListPreAuthKeys:output_type -> headscale.v1.ListPreAuthKeysResponse - 33, // 32: headscale.v1.HeadscaleService.DebugCreateNode:output_type -> headscale.v1.DebugCreateNodeResponse - 34, // 33: headscale.v1.HeadscaleService.GetNode:output_type -> headscale.v1.GetNodeResponse - 35, // 34: headscale.v1.HeadscaleService.SetTags:output_type -> headscale.v1.SetTagsResponse - 36, // 35: headscale.v1.HeadscaleService.SetApprovedRoutes:output_type -> headscale.v1.SetApprovedRoutesResponse - 37, // 36: headscale.v1.HeadscaleService.RegisterNode:output_type -> headscale.v1.RegisterNodeResponse - 38, // 37: headscale.v1.HeadscaleService.DeleteNode:output_type -> headscale.v1.DeleteNodeResponse - 39, // 38: headscale.v1.HeadscaleService.ExpireNode:output_type -> headscale.v1.ExpireNodeResponse - 40, // 39: headscale.v1.HeadscaleService.RenameNode:output_type -> headscale.v1.RenameNodeResponse - 41, // 40: headscale.v1.HeadscaleService.ListNodes:output_type -> headscale.v1.ListNodesResponse - 42, // 41: headscale.v1.HeadscaleService.MoveNode:output_type -> headscale.v1.MoveNodeResponse + 32, // 31: headscale.v1.HeadscaleService.DeletePreAuthKey:output_type -> headscale.v1.DeletePreAuthKeyResponse + 33, // 32: headscale.v1.HeadscaleService.ListPreAuthKeys:output_type -> headscale.v1.ListPreAuthKeysResponse + 34, // 33: headscale.v1.HeadscaleService.DebugCreateNode:output_type -> headscale.v1.DebugCreateNodeResponse + 35, // 34: headscale.v1.HeadscaleService.GetNode:output_type -> headscale.v1.GetNodeResponse + 36, // 35: headscale.v1.HeadscaleService.SetTags:output_type -> headscale.v1.SetTagsResponse + 37, // 36: headscale.v1.HeadscaleService.SetApprovedRoutes:output_type -> headscale.v1.SetApprovedRoutesResponse + 38, // 37: headscale.v1.HeadscaleService.RegisterNode:output_type -> headscale.v1.RegisterNodeResponse + 39, // 38: headscale.v1.HeadscaleService.DeleteNode:output_type -> headscale.v1.DeleteNodeResponse + 40, // 39: headscale.v1.HeadscaleService.ExpireNode:output_type -> headscale.v1.ExpireNodeResponse + 41, // 40: headscale.v1.HeadscaleService.RenameNode:output_type -> headscale.v1.RenameNodeResponse + 42, // 41: headscale.v1.HeadscaleService.ListNodes:output_type -> headscale.v1.ListNodesResponse 43, // 42: headscale.v1.HeadscaleService.BackfillNodeIPs:output_type -> headscale.v1.BackfillNodeIPsResponse 44, // 43: headscale.v1.HeadscaleService.CreateApiKey:output_type -> 
headscale.v1.CreateApiKeyResponse 45, // 44: headscale.v1.HeadscaleService.ExpireApiKey:output_type -> headscale.v1.ExpireApiKeyResponse diff --git a/gen/go/headscale/v1/headscale.pb.gw.go b/gen/go/headscale/v1/headscale.pb.gw.go index 2a8ac365..ab851614 100644 --- a/gen/go/headscale/v1/headscale.pb.gw.go +++ b/gen/go/headscale/v1/headscale.pb.gw.go @@ -43,6 +43,9 @@ func request_HeadscaleService_CreateUser_0(ctx context.Context, marshaler runtim if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } msg, err := client.CreateUser(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) return msg, metadata, err } @@ -65,6 +68,9 @@ func request_HeadscaleService_RenameUser_0(ctx context.Context, marshaler runtim metadata runtime.ServerMetadata err error ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } val, ok := pathParams["old_id"] if !ok { return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "old_id") @@ -117,6 +123,9 @@ func request_HeadscaleService_DeleteUser_0(ctx context.Context, marshaler runtim metadata runtime.ServerMetadata err error ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } val, ok := pathParams["id"] if !ok { return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "id") @@ -154,6 +163,9 @@ func request_HeadscaleService_ListUsers_0(ctx context.Context, marshaler runtime protoReq ListUsersRequest metadata runtime.ServerMetadata ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } if err := req.ParseForm(); err != nil { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } @@ -187,6 +199,9 @@ func request_HeadscaleService_CreatePreAuthKey_0(ctx context.Context, marshaler if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } msg, err := client.CreatePreAuthKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) return msg, metadata, err } @@ -211,6 +226,9 @@ func request_HeadscaleService_ExpirePreAuthKey_0(ctx context.Context, marshaler if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } msg, err := client.ExpirePreAuthKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) return msg, metadata, err } @@ -227,18 +245,48 @@ func local_request_HeadscaleService_ExpirePreAuthKey_0(ctx context.Context, mars return msg, metadata, err } -var filter_HeadscaleService_ListPreAuthKeys_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)} +var filter_HeadscaleService_DeletePreAuthKey_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)} + +func request_HeadscaleService_DeletePreAuthKey_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) { + var ( + protoReq DeletePreAuthKeyRequest + metadata runtime.ServerMetadata + ) + if req.Body 
!= nil { + _, _ = io.Copy(io.Discard, req.Body) + } + if err := req.ParseForm(); err != nil { + return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) + } + if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_DeletePreAuthKey_0); err != nil { + return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) + } + msg, err := client.DeletePreAuthKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) + return msg, metadata, err +} + +func local_request_HeadscaleService_DeletePreAuthKey_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) { + var ( + protoReq DeletePreAuthKeyRequest + metadata runtime.ServerMetadata + ) + if err := req.ParseForm(); err != nil { + return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) + } + if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_DeletePreAuthKey_0); err != nil { + return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) + } + msg, err := server.DeletePreAuthKey(ctx, &protoReq) + return msg, metadata, err +} func request_HeadscaleService_ListPreAuthKeys_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) { var ( protoReq ListPreAuthKeysRequest metadata runtime.ServerMetadata ) - if err := req.ParseForm(); err != nil { - return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) - } - if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_ListPreAuthKeys_0); err != nil { - return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) } msg, err := client.ListPreAuthKeys(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) return msg, metadata, err @@ -249,12 +297,6 @@ func local_request_HeadscaleService_ListPreAuthKeys_0(ctx context.Context, marsh protoReq ListPreAuthKeysRequest metadata runtime.ServerMetadata ) - if err := req.ParseForm(); err != nil { - return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) - } - if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_ListPreAuthKeys_0); err != nil { - return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) - } msg, err := server.ListPreAuthKeys(ctx, &protoReq) return msg, metadata, err } @@ -267,6 +309,9 @@ func request_HeadscaleService_DebugCreateNode_0(ctx context.Context, marshaler r if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } msg, err := client.DebugCreateNode(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) return msg, metadata, err } @@ -289,6 +334,9 @@ func request_HeadscaleService_GetNode_0(ctx context.Context, marshaler runtime.M metadata runtime.ServerMetadata err error ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } val, ok := pathParams["node_id"] if !ok { return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id") @@ -328,6 +376,9 @@ func request_HeadscaleService_SetTags_0(ctx context.Context, marshaler 
runtime.M if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } val, ok := pathParams["node_id"] if !ok { return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id") @@ -370,6 +421,9 @@ func request_HeadscaleService_SetApprovedRoutes_0(ctx context.Context, marshaler if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } val, ok := pathParams["node_id"] if !ok { return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id") @@ -410,6 +464,9 @@ func request_HeadscaleService_RegisterNode_0(ctx context.Context, marshaler runt protoReq RegisterNodeRequest metadata runtime.ServerMetadata ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } if err := req.ParseForm(); err != nil { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } @@ -441,6 +498,9 @@ func request_HeadscaleService_DeleteNode_0(ctx context.Context, marshaler runtim metadata runtime.ServerMetadata err error ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } val, ok := pathParams["node_id"] if !ok { return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id") @@ -479,6 +539,9 @@ func request_HeadscaleService_ExpireNode_0(ctx context.Context, marshaler runtim metadata runtime.ServerMetadata err error ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } val, ok := pathParams["node_id"] if !ok { return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id") @@ -527,6 +590,9 @@ func request_HeadscaleService_RenameNode_0(ctx context.Context, marshaler runtim metadata runtime.ServerMetadata err error ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } val, ok := pathParams["node_id"] if !ok { return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id") @@ -580,6 +646,9 @@ func request_HeadscaleService_ListNodes_0(ctx context.Context, marshaler runtime protoReq ListNodesRequest metadata runtime.ServerMetadata ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } if err := req.ParseForm(); err != nil { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } @@ -605,48 +674,6 @@ func local_request_HeadscaleService_ListNodes_0(ctx context.Context, marshaler r return msg, metadata, err } -func request_HeadscaleService_MoveNode_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) { - var ( - protoReq MoveNodeRequest - metadata runtime.ServerMetadata - err error - ) - if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) { - return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) - } - val, ok := pathParams["node_id"] - if !ok { - return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id") - } - protoReq.NodeId, err = runtime.Uint64(val) - if err != nil { - return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err) - } - msg, err := 
client.MoveNode(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) - return msg, metadata, err -} - -func local_request_HeadscaleService_MoveNode_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) { - var ( - protoReq MoveNodeRequest - metadata runtime.ServerMetadata - err error - ) - if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) { - return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) - } - val, ok := pathParams["node_id"] - if !ok { - return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "node_id") - } - protoReq.NodeId, err = runtime.Uint64(val) - if err != nil { - return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err) - } - msg, err := server.MoveNode(ctx, &protoReq) - return msg, metadata, err -} - var filter_HeadscaleService_BackfillNodeIPs_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)} func request_HeadscaleService_BackfillNodeIPs_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) { @@ -654,6 +681,9 @@ func request_HeadscaleService_BackfillNodeIPs_0(ctx context.Context, marshaler r protoReq BackfillNodeIPsRequest metadata runtime.ServerMetadata ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } if err := req.ParseForm(); err != nil { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } @@ -687,6 +717,9 @@ func request_HeadscaleService_CreateApiKey_0(ctx context.Context, marshaler runt if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } msg, err := client.CreateApiKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) return msg, metadata, err } @@ -711,6 +744,9 @@ func request_HeadscaleService_ExpireApiKey_0(ctx context.Context, marshaler runt if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } msg, err := client.ExpireApiKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) return msg, metadata, err } @@ -732,6 +768,9 @@ func request_HeadscaleService_ListApiKeys_0(ctx context.Context, marshaler runti protoReq ListApiKeysRequest metadata runtime.ServerMetadata ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } msg, err := client.ListApiKeys(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) return msg, metadata, err } @@ -745,12 +784,17 @@ func local_request_HeadscaleService_ListApiKeys_0(ctx context.Context, marshaler return msg, metadata, err } +var filter_HeadscaleService_DeleteApiKey_0 = &utilities.DoubleArray{Encoding: map[string]int{"prefix": 0}, Base: []int{1, 1, 0}, Check: []int{0, 1, 2}} + func request_HeadscaleService_DeleteApiKey_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) 
(proto.Message, runtime.ServerMetadata, error) { var ( protoReq DeleteApiKeyRequest metadata runtime.ServerMetadata err error ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } val, ok := pathParams["prefix"] if !ok { return nil, metadata, status.Errorf(codes.InvalidArgument, "missing parameter %s", "prefix") @@ -759,6 +803,12 @@ func request_HeadscaleService_DeleteApiKey_0(ctx context.Context, marshaler runt if err != nil { return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "prefix", err) } + if err := req.ParseForm(); err != nil { + return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) + } + if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_DeleteApiKey_0); err != nil { + return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) + } msg, err := client.DeleteApiKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) return msg, metadata, err } @@ -777,6 +827,12 @@ func local_request_HeadscaleService_DeleteApiKey_0(ctx context.Context, marshale if err != nil { return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "prefix", err) } + if err := req.ParseForm(); err != nil { + return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) + } + if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_DeleteApiKey_0); err != nil { + return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) + } msg, err := server.DeleteApiKey(ctx, &protoReq) return msg, metadata, err } @@ -786,6 +842,9 @@ func request_HeadscaleService_GetPolicy_0(ctx context.Context, marshaler runtime protoReq GetPolicyRequest metadata runtime.ServerMetadata ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } msg, err := client.GetPolicy(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) return msg, metadata, err } @@ -807,6 +866,9 @@ func request_HeadscaleService_SetPolicy_0(ctx context.Context, marshaler runtime if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) { return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err) } + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } msg, err := client.SetPolicy(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) return msg, metadata, err } @@ -828,6 +890,9 @@ func request_HeadscaleService_Health_0(ctx context.Context, marshaler runtime.Ma protoReq HealthRequest metadata runtime.ServerMetadata ) + if req.Body != nil { + _, _ = io.Copy(io.Discard, req.Body) + } msg, err := client.Health(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD)) return msg, metadata, err } @@ -967,6 +1032,26 @@ func RegisterHeadscaleServiceHandlerServer(ctx context.Context, mux *runtime.Ser } forward_HeadscaleService_ExpirePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...) 
}) + mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeletePreAuthKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) { + ctx, cancel := context.WithCancel(req.Context()) + defer cancel() + var stream runtime.ServerTransportStream + ctx = grpc.NewContextWithServerTransportStream(ctx, &stream) + inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req) + annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeletePreAuthKey", runtime.WithHTTPPathPattern("/api/v1/preauthkey")) + if err != nil { + runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err) + return + } + resp, md, err := local_request_HeadscaleService_DeletePreAuthKey_0(annotatedContext, inboundMarshaler, server, req, pathParams) + md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer()) + annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md) + if err != nil { + runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err) + return + } + forward_HeadscaleService_DeletePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...) + }) mux.Handle(http.MethodGet, pattern_HeadscaleService_ListPreAuthKeys_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) { ctx, cancel := context.WithCancel(req.Context()) defer cancel() @@ -1167,26 +1252,6 @@ func RegisterHeadscaleServiceHandlerServer(ctx context.Context, mux *runtime.Ser } forward_HeadscaleService_ListNodes_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...) }) - mux.Handle(http.MethodPost, pattern_HeadscaleService_MoveNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) { - ctx, cancel := context.WithCancel(req.Context()) - defer cancel() - var stream runtime.ServerTransportStream - ctx = grpc.NewContextWithServerTransportStream(ctx, &stream) - inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req) - annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/MoveNode", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}/user")) - if err != nil { - runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err) - return - } - resp, md, err := local_request_HeadscaleService_MoveNode_0(annotatedContext, inboundMarshaler, server, req, pathParams) - md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer()) - annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md) - if err != nil { - runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err) - return - } - forward_HeadscaleService_MoveNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...) - }) mux.Handle(http.MethodPost, pattern_HeadscaleService_BackfillNodeIPs_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) { ctx, cancel := context.WithCancel(req.Context()) defer cancel() @@ -1489,6 +1554,23 @@ func RegisterHeadscaleServiceHandlerClient(ctx context.Context, mux *runtime.Ser } forward_HeadscaleService_ExpirePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...) 
}) + mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeletePreAuthKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) { + ctx, cancel := context.WithCancel(req.Context()) + defer cancel() + inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req) + annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeletePreAuthKey", runtime.WithHTTPPathPattern("/api/v1/preauthkey")) + if err != nil { + runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err) + return + } + resp, md, err := request_HeadscaleService_DeletePreAuthKey_0(annotatedContext, inboundMarshaler, client, req, pathParams) + annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md) + if err != nil { + runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err) + return + } + forward_HeadscaleService_DeletePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...) + }) mux.Handle(http.MethodGet, pattern_HeadscaleService_ListPreAuthKeys_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) { ctx, cancel := context.WithCancel(req.Context()) defer cancel() @@ -1659,23 +1741,6 @@ func RegisterHeadscaleServiceHandlerClient(ctx context.Context, mux *runtime.Ser } forward_HeadscaleService_ListNodes_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...) }) - mux.Handle(http.MethodPost, pattern_HeadscaleService_MoveNode_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) { - ctx, cancel := context.WithCancel(req.Context()) - defer cancel() - inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req) - annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/MoveNode", runtime.WithHTTPPathPattern("/api/v1/node/{node_id}/user")) - if err != nil { - runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err) - return - } - resp, md, err := request_HeadscaleService_MoveNode_0(annotatedContext, inboundMarshaler, client, req, pathParams) - annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md) - if err != nil { - runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err) - return - } - forward_HeadscaleService_MoveNode_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...) 
- }) mux.Handle(http.MethodPost, pattern_HeadscaleService_BackfillNodeIPs_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) { ctx, cancel := context.WithCancel(req.Context()) defer cancel() @@ -1822,6 +1887,7 @@ var ( pattern_HeadscaleService_ListUsers_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "user"}, "")) pattern_HeadscaleService_CreatePreAuthKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "preauthkey"}, "")) pattern_HeadscaleService_ExpirePreAuthKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "preauthkey", "expire"}, "")) + pattern_HeadscaleService_DeletePreAuthKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "preauthkey"}, "")) pattern_HeadscaleService_ListPreAuthKeys_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "preauthkey"}, "")) pattern_HeadscaleService_DebugCreateNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "debug", "node"}, "")) pattern_HeadscaleService_GetNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "node", "node_id"}, "")) @@ -1832,7 +1898,6 @@ var ( pattern_HeadscaleService_ExpireNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4}, []string{"api", "v1", "node", "node_id", "expire"}, "")) pattern_HeadscaleService_RenameNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4, 1, 0, 4, 1, 5, 5}, []string{"api", "v1", "node", "node_id", "rename", "new_name"}, "")) pattern_HeadscaleService_ListNodes_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "node"}, "")) - pattern_HeadscaleService_MoveNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4}, []string{"api", "v1", "node", "node_id", "user"}, "")) pattern_HeadscaleService_BackfillNodeIPs_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "node", "backfillips"}, "")) pattern_HeadscaleService_CreateApiKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "apikey"}, "")) pattern_HeadscaleService_ExpireApiKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "apikey", "expire"}, "")) @@ -1850,6 +1915,7 @@ var ( forward_HeadscaleService_ListUsers_0 = runtime.ForwardResponseMessage forward_HeadscaleService_CreatePreAuthKey_0 = runtime.ForwardResponseMessage forward_HeadscaleService_ExpirePreAuthKey_0 = runtime.ForwardResponseMessage + forward_HeadscaleService_DeletePreAuthKey_0 = runtime.ForwardResponseMessage forward_HeadscaleService_ListPreAuthKeys_0 = runtime.ForwardResponseMessage forward_HeadscaleService_DebugCreateNode_0 = runtime.ForwardResponseMessage forward_HeadscaleService_GetNode_0 = runtime.ForwardResponseMessage @@ -1860,7 +1926,6 @@ var ( forward_HeadscaleService_ExpireNode_0 = runtime.ForwardResponseMessage forward_HeadscaleService_RenameNode_0 = runtime.ForwardResponseMessage forward_HeadscaleService_ListNodes_0 = runtime.ForwardResponseMessage - forward_HeadscaleService_MoveNode_0 = runtime.ForwardResponseMessage forward_HeadscaleService_BackfillNodeIPs_0 = runtime.ForwardResponseMessage 
forward_HeadscaleService_CreateApiKey_0 = runtime.ForwardResponseMessage forward_HeadscaleService_ExpireApiKey_0 = runtime.ForwardResponseMessage diff --git a/gen/go/headscale/v1/headscale_grpc.pb.go b/gen/go/headscale/v1/headscale_grpc.pb.go index bd8428c2..a3963935 100644 --- a/gen/go/headscale/v1/headscale_grpc.pb.go +++ b/gen/go/headscale/v1/headscale_grpc.pb.go @@ -1,6 +1,6 @@ // Code generated by protoc-gen-go-grpc. DO NOT EDIT. // versions: -// - protoc-gen-go-grpc v1.5.1 +// - protoc-gen-go-grpc v1.6.0 // - protoc (unknown) // source: headscale/v1/headscale.proto @@ -25,6 +25,7 @@ const ( HeadscaleService_ListUsers_FullMethodName = "/headscale.v1.HeadscaleService/ListUsers" HeadscaleService_CreatePreAuthKey_FullMethodName = "/headscale.v1.HeadscaleService/CreatePreAuthKey" HeadscaleService_ExpirePreAuthKey_FullMethodName = "/headscale.v1.HeadscaleService/ExpirePreAuthKey" + HeadscaleService_DeletePreAuthKey_FullMethodName = "/headscale.v1.HeadscaleService/DeletePreAuthKey" HeadscaleService_ListPreAuthKeys_FullMethodName = "/headscale.v1.HeadscaleService/ListPreAuthKeys" HeadscaleService_DebugCreateNode_FullMethodName = "/headscale.v1.HeadscaleService/DebugCreateNode" HeadscaleService_GetNode_FullMethodName = "/headscale.v1.HeadscaleService/GetNode" @@ -35,7 +36,6 @@ const ( HeadscaleService_ExpireNode_FullMethodName = "/headscale.v1.HeadscaleService/ExpireNode" HeadscaleService_RenameNode_FullMethodName = "/headscale.v1.HeadscaleService/RenameNode" HeadscaleService_ListNodes_FullMethodName = "/headscale.v1.HeadscaleService/ListNodes" - HeadscaleService_MoveNode_FullMethodName = "/headscale.v1.HeadscaleService/MoveNode" HeadscaleService_BackfillNodeIPs_FullMethodName = "/headscale.v1.HeadscaleService/BackfillNodeIPs" HeadscaleService_CreateApiKey_FullMethodName = "/headscale.v1.HeadscaleService/CreateApiKey" HeadscaleService_ExpireApiKey_FullMethodName = "/headscale.v1.HeadscaleService/ExpireApiKey" @@ -58,6 +58,7 @@ type HeadscaleServiceClient interface { // --- PreAuthKeys start --- CreatePreAuthKey(ctx context.Context, in *CreatePreAuthKeyRequest, opts ...grpc.CallOption) (*CreatePreAuthKeyResponse, error) ExpirePreAuthKey(ctx context.Context, in *ExpirePreAuthKeyRequest, opts ...grpc.CallOption) (*ExpirePreAuthKeyResponse, error) + DeletePreAuthKey(ctx context.Context, in *DeletePreAuthKeyRequest, opts ...grpc.CallOption) (*DeletePreAuthKeyResponse, error) ListPreAuthKeys(ctx context.Context, in *ListPreAuthKeysRequest, opts ...grpc.CallOption) (*ListPreAuthKeysResponse, error) // --- Node start --- DebugCreateNode(ctx context.Context, in *DebugCreateNodeRequest, opts ...grpc.CallOption) (*DebugCreateNodeResponse, error) @@ -69,7 +70,6 @@ type HeadscaleServiceClient interface { ExpireNode(ctx context.Context, in *ExpireNodeRequest, opts ...grpc.CallOption) (*ExpireNodeResponse, error) RenameNode(ctx context.Context, in *RenameNodeRequest, opts ...grpc.CallOption) (*RenameNodeResponse, error) ListNodes(ctx context.Context, in *ListNodesRequest, opts ...grpc.CallOption) (*ListNodesResponse, error) - MoveNode(ctx context.Context, in *MoveNodeRequest, opts ...grpc.CallOption) (*MoveNodeResponse, error) BackfillNodeIPs(ctx context.Context, in *BackfillNodeIPsRequest, opts ...grpc.CallOption) (*BackfillNodeIPsResponse, error) // --- ApiKeys start --- CreateApiKey(ctx context.Context, in *CreateApiKeyRequest, opts ...grpc.CallOption) (*CreateApiKeyResponse, error) @@ -151,6 +151,16 @@ func (c *headscaleServiceClient) ExpirePreAuthKey(ctx context.Context, in *Expir return out, nil } 
+func (c *headscaleServiceClient) DeletePreAuthKey(ctx context.Context, in *DeletePreAuthKeyRequest, opts ...grpc.CallOption) (*DeletePreAuthKeyResponse, error) { + cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) + out := new(DeletePreAuthKeyResponse) + err := c.cc.Invoke(ctx, HeadscaleService_DeletePreAuthKey_FullMethodName, in, out, cOpts...) + if err != nil { + return nil, err + } + return out, nil +} + func (c *headscaleServiceClient) ListPreAuthKeys(ctx context.Context, in *ListPreAuthKeysRequest, opts ...grpc.CallOption) (*ListPreAuthKeysResponse, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(ListPreAuthKeysResponse) @@ -251,16 +261,6 @@ func (c *headscaleServiceClient) ListNodes(ctx context.Context, in *ListNodesReq return out, nil } -func (c *headscaleServiceClient) MoveNode(ctx context.Context, in *MoveNodeRequest, opts ...grpc.CallOption) (*MoveNodeResponse, error) { - cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) - out := new(MoveNodeResponse) - err := c.cc.Invoke(ctx, HeadscaleService_MoveNode_FullMethodName, in, out, cOpts...) - if err != nil { - return nil, err - } - return out, nil -} - func (c *headscaleServiceClient) BackfillNodeIPs(ctx context.Context, in *BackfillNodeIPsRequest, opts ...grpc.CallOption) (*BackfillNodeIPsResponse, error) { cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...) out := new(BackfillNodeIPsResponse) @@ -353,6 +353,7 @@ type HeadscaleServiceServer interface { // --- PreAuthKeys start --- CreatePreAuthKey(context.Context, *CreatePreAuthKeyRequest) (*CreatePreAuthKeyResponse, error) ExpirePreAuthKey(context.Context, *ExpirePreAuthKeyRequest) (*ExpirePreAuthKeyResponse, error) + DeletePreAuthKey(context.Context, *DeletePreAuthKeyRequest) (*DeletePreAuthKeyResponse, error) ListPreAuthKeys(context.Context, *ListPreAuthKeysRequest) (*ListPreAuthKeysResponse, error) // --- Node start --- DebugCreateNode(context.Context, *DebugCreateNodeRequest) (*DebugCreateNodeResponse, error) @@ -364,7 +365,6 @@ type HeadscaleServiceServer interface { ExpireNode(context.Context, *ExpireNodeRequest) (*ExpireNodeResponse, error) RenameNode(context.Context, *RenameNodeRequest) (*RenameNodeResponse, error) ListNodes(context.Context, *ListNodesRequest) (*ListNodesResponse, error) - MoveNode(context.Context, *MoveNodeRequest) (*MoveNodeResponse, error) BackfillNodeIPs(context.Context, *BackfillNodeIPsRequest) (*BackfillNodeIPsResponse, error) // --- ApiKeys start --- CreateApiKey(context.Context, *CreateApiKeyRequest) (*CreateApiKeyResponse, error) @@ -387,79 +387,79 @@ type HeadscaleServiceServer interface { type UnimplementedHeadscaleServiceServer struct{} func (UnimplementedHeadscaleServiceServer) CreateUser(context.Context, *CreateUserRequest) (*CreateUserResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method CreateUser not implemented") + return nil, status.Error(codes.Unimplemented, "method CreateUser not implemented") } func (UnimplementedHeadscaleServiceServer) RenameUser(context.Context, *RenameUserRequest) (*RenameUserResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method RenameUser not implemented") + return nil, status.Error(codes.Unimplemented, "method RenameUser not implemented") } func (UnimplementedHeadscaleServiceServer) DeleteUser(context.Context, *DeleteUserRequest) (*DeleteUserResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method DeleteUser not implemented") + return nil, status.Error(codes.Unimplemented, 
"method DeleteUser not implemented") } func (UnimplementedHeadscaleServiceServer) ListUsers(context.Context, *ListUsersRequest) (*ListUsersResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method ListUsers not implemented") + return nil, status.Error(codes.Unimplemented, "method ListUsers not implemented") } func (UnimplementedHeadscaleServiceServer) CreatePreAuthKey(context.Context, *CreatePreAuthKeyRequest) (*CreatePreAuthKeyResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method CreatePreAuthKey not implemented") + return nil, status.Error(codes.Unimplemented, "method CreatePreAuthKey not implemented") } func (UnimplementedHeadscaleServiceServer) ExpirePreAuthKey(context.Context, *ExpirePreAuthKeyRequest) (*ExpirePreAuthKeyResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method ExpirePreAuthKey not implemented") + return nil, status.Error(codes.Unimplemented, "method ExpirePreAuthKey not implemented") +} +func (UnimplementedHeadscaleServiceServer) DeletePreAuthKey(context.Context, *DeletePreAuthKeyRequest) (*DeletePreAuthKeyResponse, error) { + return nil, status.Error(codes.Unimplemented, "method DeletePreAuthKey not implemented") } func (UnimplementedHeadscaleServiceServer) ListPreAuthKeys(context.Context, *ListPreAuthKeysRequest) (*ListPreAuthKeysResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method ListPreAuthKeys not implemented") + return nil, status.Error(codes.Unimplemented, "method ListPreAuthKeys not implemented") } func (UnimplementedHeadscaleServiceServer) DebugCreateNode(context.Context, *DebugCreateNodeRequest) (*DebugCreateNodeResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method DebugCreateNode not implemented") + return nil, status.Error(codes.Unimplemented, "method DebugCreateNode not implemented") } func (UnimplementedHeadscaleServiceServer) GetNode(context.Context, *GetNodeRequest) (*GetNodeResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method GetNode not implemented") + return nil, status.Error(codes.Unimplemented, "method GetNode not implemented") } func (UnimplementedHeadscaleServiceServer) SetTags(context.Context, *SetTagsRequest) (*SetTagsResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method SetTags not implemented") + return nil, status.Error(codes.Unimplemented, "method SetTags not implemented") } func (UnimplementedHeadscaleServiceServer) SetApprovedRoutes(context.Context, *SetApprovedRoutesRequest) (*SetApprovedRoutesResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method SetApprovedRoutes not implemented") + return nil, status.Error(codes.Unimplemented, "method SetApprovedRoutes not implemented") } func (UnimplementedHeadscaleServiceServer) RegisterNode(context.Context, *RegisterNodeRequest) (*RegisterNodeResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method RegisterNode not implemented") + return nil, status.Error(codes.Unimplemented, "method RegisterNode not implemented") } func (UnimplementedHeadscaleServiceServer) DeleteNode(context.Context, *DeleteNodeRequest) (*DeleteNodeResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method DeleteNode not implemented") + return nil, status.Error(codes.Unimplemented, "method DeleteNode not implemented") } func (UnimplementedHeadscaleServiceServer) ExpireNode(context.Context, *ExpireNodeRequest) (*ExpireNodeResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method ExpireNode not implemented") + 
return nil, status.Error(codes.Unimplemented, "method ExpireNode not implemented") } func (UnimplementedHeadscaleServiceServer) RenameNode(context.Context, *RenameNodeRequest) (*RenameNodeResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method RenameNode not implemented") + return nil, status.Error(codes.Unimplemented, "method RenameNode not implemented") } func (UnimplementedHeadscaleServiceServer) ListNodes(context.Context, *ListNodesRequest) (*ListNodesResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method ListNodes not implemented") -} -func (UnimplementedHeadscaleServiceServer) MoveNode(context.Context, *MoveNodeRequest) (*MoveNodeResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method MoveNode not implemented") + return nil, status.Error(codes.Unimplemented, "method ListNodes not implemented") } func (UnimplementedHeadscaleServiceServer) BackfillNodeIPs(context.Context, *BackfillNodeIPsRequest) (*BackfillNodeIPsResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method BackfillNodeIPs not implemented") + return nil, status.Error(codes.Unimplemented, "method BackfillNodeIPs not implemented") } func (UnimplementedHeadscaleServiceServer) CreateApiKey(context.Context, *CreateApiKeyRequest) (*CreateApiKeyResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method CreateApiKey not implemented") + return nil, status.Error(codes.Unimplemented, "method CreateApiKey not implemented") } func (UnimplementedHeadscaleServiceServer) ExpireApiKey(context.Context, *ExpireApiKeyRequest) (*ExpireApiKeyResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method ExpireApiKey not implemented") + return nil, status.Error(codes.Unimplemented, "method ExpireApiKey not implemented") } func (UnimplementedHeadscaleServiceServer) ListApiKeys(context.Context, *ListApiKeysRequest) (*ListApiKeysResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method ListApiKeys not implemented") + return nil, status.Error(codes.Unimplemented, "method ListApiKeys not implemented") } func (UnimplementedHeadscaleServiceServer) DeleteApiKey(context.Context, *DeleteApiKeyRequest) (*DeleteApiKeyResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method DeleteApiKey not implemented") + return nil, status.Error(codes.Unimplemented, "method DeleteApiKey not implemented") } func (UnimplementedHeadscaleServiceServer) GetPolicy(context.Context, *GetPolicyRequest) (*GetPolicyResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method GetPolicy not implemented") + return nil, status.Error(codes.Unimplemented, "method GetPolicy not implemented") } func (UnimplementedHeadscaleServiceServer) SetPolicy(context.Context, *SetPolicyRequest) (*SetPolicyResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method SetPolicy not implemented") + return nil, status.Error(codes.Unimplemented, "method SetPolicy not implemented") } func (UnimplementedHeadscaleServiceServer) Health(context.Context, *HealthRequest) (*HealthResponse, error) { - return nil, status.Errorf(codes.Unimplemented, "method Health not implemented") + return nil, status.Error(codes.Unimplemented, "method Health not implemented") } func (UnimplementedHeadscaleServiceServer) mustEmbedUnimplementedHeadscaleServiceServer() {} func (UnimplementedHeadscaleServiceServer) testEmbeddedByValue() {} @@ -472,7 +472,7 @@ type UnsafeHeadscaleServiceServer interface { } func RegisterHeadscaleServiceServer(s grpc.ServiceRegistrar, 
srv HeadscaleServiceServer) { - // If the following call pancis, it indicates UnimplementedHeadscaleServiceServer was + // If the following call panics, it indicates UnimplementedHeadscaleServiceServer was // embedded by pointer and is nil. This will cause panics if an // unimplemented method is ever invoked, so we test this at initialization // time to prevent it from happening at runtime later due to I/O. @@ -590,6 +590,24 @@ func _HeadscaleService_ExpirePreAuthKey_Handler(srv interface{}, ctx context.Con return interceptor(ctx, in, info, handler) } +func _HeadscaleService_DeletePreAuthKey_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(DeletePreAuthKeyRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(HeadscaleServiceServer).DeletePreAuthKey(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: HeadscaleService_DeletePreAuthKey_FullMethodName, + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(HeadscaleServiceServer).DeletePreAuthKey(ctx, req.(*DeletePreAuthKeyRequest)) + } + return interceptor(ctx, in, info, handler) +} + func _HeadscaleService_ListPreAuthKeys_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(ListPreAuthKeysRequest) if err := dec(in); err != nil { @@ -770,24 +788,6 @@ func _HeadscaleService_ListNodes_Handler(srv interface{}, ctx context.Context, d return interceptor(ctx, in, info, handler) } -func _HeadscaleService_MoveNode_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { - in := new(MoveNodeRequest) - if err := dec(in); err != nil { - return nil, err - } - if interceptor == nil { - return srv.(HeadscaleServiceServer).MoveNode(ctx, in) - } - info := &grpc.UnaryServerInfo{ - Server: srv, - FullMethod: HeadscaleService_MoveNode_FullMethodName, - } - handler := func(ctx context.Context, req interface{}) (interface{}, error) { - return srv.(HeadscaleServiceServer).MoveNode(ctx, req.(*MoveNodeRequest)) - } - return interceptor(ctx, in, info, handler) -} - func _HeadscaleService_BackfillNodeIPs_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(BackfillNodeIPsRequest) if err := dec(in); err != nil { @@ -963,6 +963,10 @@ var HeadscaleService_ServiceDesc = grpc.ServiceDesc{ MethodName: "ExpirePreAuthKey", Handler: _HeadscaleService_ExpirePreAuthKey_Handler, }, + { + MethodName: "DeletePreAuthKey", + Handler: _HeadscaleService_DeletePreAuthKey_Handler, + }, { MethodName: "ListPreAuthKeys", Handler: _HeadscaleService_ListPreAuthKeys_Handler, @@ -1003,10 +1007,6 @@ var HeadscaleService_ServiceDesc = grpc.ServiceDesc{ MethodName: "ListNodes", Handler: _HeadscaleService_ListNodes_Handler, }, - { - MethodName: "MoveNode", - Handler: _HeadscaleService_MoveNode_Handler, - }, { MethodName: "BackfillNodeIPs", Handler: _HeadscaleService_BackfillNodeIPs_Handler, diff --git a/gen/go/headscale/v1/node.pb.go b/gen/go/headscale/v1/node.pb.go index f04c7e2d..b4b7e8f6 100644 --- a/gen/go/headscale/v1/node.pb.go +++ b/gen/go/headscale/v1/node.pb.go @@ -1,6 +1,6 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.10 +// protoc-gen-go v1.36.11 // protoc (unknown) // source: headscale/v1/node.proto @@ -75,27 +75,29 @@ func (RegisterMethod) EnumDescriptor() ([]byte, []int) { } type Node struct { - state protoimpl.MessageState `protogen:"open.v1"` - Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"` - MachineKey string `protobuf:"bytes,2,opt,name=machine_key,json=machineKey,proto3" json:"machine_key,omitempty"` - NodeKey string `protobuf:"bytes,3,opt,name=node_key,json=nodeKey,proto3" json:"node_key,omitempty"` - DiscoKey string `protobuf:"bytes,4,opt,name=disco_key,json=discoKey,proto3" json:"disco_key,omitempty"` - IpAddresses []string `protobuf:"bytes,5,rep,name=ip_addresses,json=ipAddresses,proto3" json:"ip_addresses,omitempty"` - Name string `protobuf:"bytes,6,opt,name=name,proto3" json:"name,omitempty"` - User *User `protobuf:"bytes,7,opt,name=user,proto3" json:"user,omitempty"` - LastSeen *timestamppb.Timestamp `protobuf:"bytes,8,opt,name=last_seen,json=lastSeen,proto3" json:"last_seen,omitempty"` - Expiry *timestamppb.Timestamp `protobuf:"bytes,10,opt,name=expiry,proto3" json:"expiry,omitempty"` - PreAuthKey *PreAuthKey `protobuf:"bytes,11,opt,name=pre_auth_key,json=preAuthKey,proto3" json:"pre_auth_key,omitempty"` - CreatedAt *timestamppb.Timestamp `protobuf:"bytes,12,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"` - RegisterMethod RegisterMethod `protobuf:"varint,13,opt,name=register_method,json=registerMethod,proto3,enum=headscale.v1.RegisterMethod" json:"register_method,omitempty"` - ForcedTags []string `protobuf:"bytes,18,rep,name=forced_tags,json=forcedTags,proto3" json:"forced_tags,omitempty"` - InvalidTags []string `protobuf:"bytes,19,rep,name=invalid_tags,json=invalidTags,proto3" json:"invalid_tags,omitempty"` - ValidTags []string `protobuf:"bytes,20,rep,name=valid_tags,json=validTags,proto3" json:"valid_tags,omitempty"` - GivenName string `protobuf:"bytes,21,opt,name=given_name,json=givenName,proto3" json:"given_name,omitempty"` - Online bool `protobuf:"varint,22,opt,name=online,proto3" json:"online,omitempty"` - ApprovedRoutes []string `protobuf:"bytes,23,rep,name=approved_routes,json=approvedRoutes,proto3" json:"approved_routes,omitempty"` - AvailableRoutes []string `protobuf:"bytes,24,rep,name=available_routes,json=availableRoutes,proto3" json:"available_routes,omitempty"` - SubnetRoutes []string `protobuf:"bytes,25,rep,name=subnet_routes,json=subnetRoutes,proto3" json:"subnet_routes,omitempty"` + state protoimpl.MessageState `protogen:"open.v1"` + Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"` + MachineKey string `protobuf:"bytes,2,opt,name=machine_key,json=machineKey,proto3" json:"machine_key,omitempty"` + NodeKey string `protobuf:"bytes,3,opt,name=node_key,json=nodeKey,proto3" json:"node_key,omitempty"` + DiscoKey string `protobuf:"bytes,4,opt,name=disco_key,json=discoKey,proto3" json:"disco_key,omitempty"` + IpAddresses []string `protobuf:"bytes,5,rep,name=ip_addresses,json=ipAddresses,proto3" json:"ip_addresses,omitempty"` + Name string `protobuf:"bytes,6,opt,name=name,proto3" json:"name,omitempty"` + User *User `protobuf:"bytes,7,opt,name=user,proto3" json:"user,omitempty"` + LastSeen *timestamppb.Timestamp `protobuf:"bytes,8,opt,name=last_seen,json=lastSeen,proto3" json:"last_seen,omitempty"` + Expiry *timestamppb.Timestamp `protobuf:"bytes,10,opt,name=expiry,proto3" json:"expiry,omitempty"` + PreAuthKey *PreAuthKey 
`protobuf:"bytes,11,opt,name=pre_auth_key,json=preAuthKey,proto3" json:"pre_auth_key,omitempty"` + CreatedAt *timestamppb.Timestamp `protobuf:"bytes,12,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"` + RegisterMethod RegisterMethod `protobuf:"varint,13,opt,name=register_method,json=registerMethod,proto3,enum=headscale.v1.RegisterMethod" json:"register_method,omitempty"` + // Deprecated + // repeated string forced_tags = 18; + // repeated string invalid_tags = 19; + // repeated string valid_tags = 20; + GivenName string `protobuf:"bytes,21,opt,name=given_name,json=givenName,proto3" json:"given_name,omitempty"` + Online bool `protobuf:"varint,22,opt,name=online,proto3" json:"online,omitempty"` + ApprovedRoutes []string `protobuf:"bytes,23,rep,name=approved_routes,json=approvedRoutes,proto3" json:"approved_routes,omitempty"` + AvailableRoutes []string `protobuf:"bytes,24,rep,name=available_routes,json=availableRoutes,proto3" json:"available_routes,omitempty"` + SubnetRoutes []string `protobuf:"bytes,25,rep,name=subnet_routes,json=subnetRoutes,proto3" json:"subnet_routes,omitempty"` + Tags []string `protobuf:"bytes,26,rep,name=tags,proto3" json:"tags,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -214,27 +216,6 @@ func (x *Node) GetRegisterMethod() RegisterMethod { return RegisterMethod_REGISTER_METHOD_UNSPECIFIED } -func (x *Node) GetForcedTags() []string { - if x != nil { - return x.ForcedTags - } - return nil -} - -func (x *Node) GetInvalidTags() []string { - if x != nil { - return x.InvalidTags - } - return nil -} - -func (x *Node) GetValidTags() []string { - if x != nil { - return x.ValidTags - } - return nil -} - func (x *Node) GetGivenName() string { if x != nil { return x.GivenName @@ -270,6 +251,13 @@ func (x *Node) GetSubnetRoutes() []string { return nil } +func (x *Node) GetTags() []string { + if x != nil { + return x.Tags + } + return nil +} + type RegisterNodeRequest struct { state protoimpl.MessageState `protogen:"open.v1"` User string `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"` @@ -1006,102 +994,6 @@ func (x *ListNodesResponse) GetNodes() []*Node { return nil } -type MoveNodeRequest struct { - state protoimpl.MessageState `protogen:"open.v1"` - NodeId uint64 `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3" json:"node_id,omitempty"` - User uint64 `protobuf:"varint,2,opt,name=user,proto3" json:"user,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *MoveNodeRequest) Reset() { - *x = MoveNodeRequest{} - mi := &file_headscale_v1_node_proto_msgTypes[17] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *MoveNodeRequest) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*MoveNodeRequest) ProtoMessage() {} - -func (x *MoveNodeRequest) ProtoReflect() protoreflect.Message { - mi := &file_headscale_v1_node_proto_msgTypes[17] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use MoveNodeRequest.ProtoReflect.Descriptor instead. 
-func (*MoveNodeRequest) Descriptor() ([]byte, []int) { - return file_headscale_v1_node_proto_rawDescGZIP(), []int{17} -} - -func (x *MoveNodeRequest) GetNodeId() uint64 { - if x != nil { - return x.NodeId - } - return 0 -} - -func (x *MoveNodeRequest) GetUser() uint64 { - if x != nil { - return x.User - } - return 0 -} - -type MoveNodeResponse struct { - state protoimpl.MessageState `protogen:"open.v1"` - Node *Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"` - unknownFields protoimpl.UnknownFields - sizeCache protoimpl.SizeCache -} - -func (x *MoveNodeResponse) Reset() { - *x = MoveNodeResponse{} - mi := &file_headscale_v1_node_proto_msgTypes[18] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) -} - -func (x *MoveNodeResponse) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*MoveNodeResponse) ProtoMessage() {} - -func (x *MoveNodeResponse) ProtoReflect() protoreflect.Message { - mi := &file_headscale_v1_node_proto_msgTypes[18] - if x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use MoveNodeResponse.ProtoReflect.Descriptor instead. -func (*MoveNodeResponse) Descriptor() ([]byte, []int) { - return file_headscale_v1_node_proto_rawDescGZIP(), []int{18} -} - -func (x *MoveNodeResponse) GetNode() *Node { - if x != nil { - return x.Node - } - return nil -} - type DebugCreateNodeRequest struct { state protoimpl.MessageState `protogen:"open.v1"` User string `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"` @@ -1114,7 +1006,7 @@ type DebugCreateNodeRequest struct { func (x *DebugCreateNodeRequest) Reset() { *x = DebugCreateNodeRequest{} - mi := &file_headscale_v1_node_proto_msgTypes[19] + mi := &file_headscale_v1_node_proto_msgTypes[17] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1126,7 +1018,7 @@ func (x *DebugCreateNodeRequest) String() string { func (*DebugCreateNodeRequest) ProtoMessage() {} func (x *DebugCreateNodeRequest) ProtoReflect() protoreflect.Message { - mi := &file_headscale_v1_node_proto_msgTypes[19] + mi := &file_headscale_v1_node_proto_msgTypes[17] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1139,7 +1031,7 @@ func (x *DebugCreateNodeRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use DebugCreateNodeRequest.ProtoReflect.Descriptor instead. 
func (*DebugCreateNodeRequest) Descriptor() ([]byte, []int) { - return file_headscale_v1_node_proto_rawDescGZIP(), []int{19} + return file_headscale_v1_node_proto_rawDescGZIP(), []int{17} } func (x *DebugCreateNodeRequest) GetUser() string { @@ -1179,7 +1071,7 @@ type DebugCreateNodeResponse struct { func (x *DebugCreateNodeResponse) Reset() { *x = DebugCreateNodeResponse{} - mi := &file_headscale_v1_node_proto_msgTypes[20] + mi := &file_headscale_v1_node_proto_msgTypes[18] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1191,7 +1083,7 @@ func (x *DebugCreateNodeResponse) String() string { func (*DebugCreateNodeResponse) ProtoMessage() {} func (x *DebugCreateNodeResponse) ProtoReflect() protoreflect.Message { - mi := &file_headscale_v1_node_proto_msgTypes[20] + mi := &file_headscale_v1_node_proto_msgTypes[18] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1204,7 +1096,7 @@ func (x *DebugCreateNodeResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use DebugCreateNodeResponse.ProtoReflect.Descriptor instead. func (*DebugCreateNodeResponse) Descriptor() ([]byte, []int) { - return file_headscale_v1_node_proto_rawDescGZIP(), []int{20} + return file_headscale_v1_node_proto_rawDescGZIP(), []int{18} } func (x *DebugCreateNodeResponse) GetNode() *Node { @@ -1223,7 +1115,7 @@ type BackfillNodeIPsRequest struct { func (x *BackfillNodeIPsRequest) Reset() { *x = BackfillNodeIPsRequest{} - mi := &file_headscale_v1_node_proto_msgTypes[21] + mi := &file_headscale_v1_node_proto_msgTypes[19] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1235,7 +1127,7 @@ func (x *BackfillNodeIPsRequest) String() string { func (*BackfillNodeIPsRequest) ProtoMessage() {} func (x *BackfillNodeIPsRequest) ProtoReflect() protoreflect.Message { - mi := &file_headscale_v1_node_proto_msgTypes[21] + mi := &file_headscale_v1_node_proto_msgTypes[19] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1248,7 +1140,7 @@ func (x *BackfillNodeIPsRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use BackfillNodeIPsRequest.ProtoReflect.Descriptor instead. func (*BackfillNodeIPsRequest) Descriptor() ([]byte, []int) { - return file_headscale_v1_node_proto_rawDescGZIP(), []int{21} + return file_headscale_v1_node_proto_rawDescGZIP(), []int{19} } func (x *BackfillNodeIPsRequest) GetConfirmed() bool { @@ -1267,7 +1159,7 @@ type BackfillNodeIPsResponse struct { func (x *BackfillNodeIPsResponse) Reset() { *x = BackfillNodeIPsResponse{} - mi := &file_headscale_v1_node_proto_msgTypes[22] + mi := &file_headscale_v1_node_proto_msgTypes[20] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -1279,7 +1171,7 @@ func (x *BackfillNodeIPsResponse) String() string { func (*BackfillNodeIPsResponse) ProtoMessage() {} func (x *BackfillNodeIPsResponse) ProtoReflect() protoreflect.Message { - mi := &file_headscale_v1_node_proto_msgTypes[22] + mi := &file_headscale_v1_node_proto_msgTypes[20] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -1292,7 +1184,7 @@ func (x *BackfillNodeIPsResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use BackfillNodeIPsResponse.ProtoReflect.Descriptor instead. 
func (*BackfillNodeIPsResponse) Descriptor() ([]byte, []int) { - return file_headscale_v1_node_proto_rawDescGZIP(), []int{22} + return file_headscale_v1_node_proto_rawDescGZIP(), []int{20} } func (x *BackfillNodeIPsResponse) GetChanges() []string { @@ -1306,7 +1198,7 @@ var File_headscale_v1_node_proto protoreflect.FileDescriptor const file_headscale_v1_node_proto_rawDesc = "" + "\n" + - "\x17headscale/v1/node.proto\x12\fheadscale.v1\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1dheadscale/v1/preauthkey.proto\x1a\x17headscale/v1/user.proto\"\x98\x06\n" + + "\x17headscale/v1/node.proto\x12\fheadscale.v1\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1dheadscale/v1/preauthkey.proto\x1a\x17headscale/v1/user.proto\"\xc9\x05\n" + "\x04Node\x12\x0e\n" + "\x02id\x18\x01 \x01(\x04R\x02id\x12\x1f\n" + "\vmachine_key\x18\x02 \x01(\tR\n" + @@ -1323,19 +1215,15 @@ const file_headscale_v1_node_proto_rawDesc = "" + "preAuthKey\x129\n" + "\n" + "created_at\x18\f \x01(\v2\x1a.google.protobuf.TimestampR\tcreatedAt\x12E\n" + - "\x0fregister_method\x18\r \x01(\x0e2\x1c.headscale.v1.RegisterMethodR\x0eregisterMethod\x12\x1f\n" + - "\vforced_tags\x18\x12 \x03(\tR\n" + - "forcedTags\x12!\n" + - "\finvalid_tags\x18\x13 \x03(\tR\vinvalidTags\x12\x1d\n" + - "\n" + - "valid_tags\x18\x14 \x03(\tR\tvalidTags\x12\x1d\n" + + "\x0fregister_method\x18\r \x01(\x0e2\x1c.headscale.v1.RegisterMethodR\x0eregisterMethod\x12\x1d\n" + "\n" + "given_name\x18\x15 \x01(\tR\tgivenName\x12\x16\n" + "\x06online\x18\x16 \x01(\bR\x06online\x12'\n" + "\x0fapproved_routes\x18\x17 \x03(\tR\x0eapprovedRoutes\x12)\n" + "\x10available_routes\x18\x18 \x03(\tR\x0favailableRoutes\x12#\n" + - "\rsubnet_routes\x18\x19 \x03(\tR\fsubnetRoutesJ\x04\b\t\x10\n" + - "J\x04\b\x0e\x10\x12\";\n" + + "\rsubnet_routes\x18\x19 \x03(\tR\fsubnetRoutes\x12\x12\n" + + "\x04tags\x18\x1a \x03(\tR\x04tagsJ\x04\b\t\x10\n" + + "J\x04\b\x0e\x10\x15\";\n" + "\x13RegisterNodeRequest\x12\x12\n" + "\x04user\x18\x01 \x01(\tR\x04user\x12\x10\n" + "\x03key\x18\x02 \x01(\tR\x03key\">\n" + @@ -1371,12 +1259,7 @@ const file_headscale_v1_node_proto_rawDesc = "" + "\x10ListNodesRequest\x12\x12\n" + "\x04user\x18\x01 \x01(\tR\x04user\"=\n" + "\x11ListNodesResponse\x12(\n" + - "\x05nodes\x18\x01 \x03(\v2\x12.headscale.v1.NodeR\x05nodes\">\n" + - "\x0fMoveNodeRequest\x12\x17\n" + - "\anode_id\x18\x01 \x01(\x04R\x06nodeId\x12\x12\n" + - "\x04user\x18\x02 \x01(\x04R\x04user\":\n" + - "\x10MoveNodeResponse\x12&\n" + - "\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\"j\n" + + "\x05nodes\x18\x01 \x03(\v2\x12.headscale.v1.NodeR\x05nodes\"j\n" + "\x16DebugCreateNodeRequest\x12\x12\n" + "\x04user\x18\x01 \x01(\tR\x04user\x12\x10\n" + "\x03key\x18\x02 \x01(\tR\x03key\x12\x12\n" + @@ -1407,7 +1290,7 @@ func file_headscale_v1_node_proto_rawDescGZIP() []byte { } var file_headscale_v1_node_proto_enumTypes = make([]protoimpl.EnumInfo, 1) -var file_headscale_v1_node_proto_msgTypes = make([]protoimpl.MessageInfo, 23) +var file_headscale_v1_node_proto_msgTypes = make([]protoimpl.MessageInfo, 21) var file_headscale_v1_node_proto_goTypes = []any{ (RegisterMethod)(0), // 0: headscale.v1.RegisterMethod (*Node)(nil), // 1: headscale.v1.Node @@ -1427,38 +1310,35 @@ var file_headscale_v1_node_proto_goTypes = []any{ (*RenameNodeResponse)(nil), // 15: headscale.v1.RenameNodeResponse (*ListNodesRequest)(nil), // 16: headscale.v1.ListNodesRequest (*ListNodesResponse)(nil), // 17: headscale.v1.ListNodesResponse - (*MoveNodeRequest)(nil), // 18: headscale.v1.MoveNodeRequest - (*MoveNodeResponse)(nil), // 
19: headscale.v1.MoveNodeResponse - (*DebugCreateNodeRequest)(nil), // 20: headscale.v1.DebugCreateNodeRequest - (*DebugCreateNodeResponse)(nil), // 21: headscale.v1.DebugCreateNodeResponse - (*BackfillNodeIPsRequest)(nil), // 22: headscale.v1.BackfillNodeIPsRequest - (*BackfillNodeIPsResponse)(nil), // 23: headscale.v1.BackfillNodeIPsResponse - (*User)(nil), // 24: headscale.v1.User - (*timestamppb.Timestamp)(nil), // 25: google.protobuf.Timestamp - (*PreAuthKey)(nil), // 26: headscale.v1.PreAuthKey + (*DebugCreateNodeRequest)(nil), // 18: headscale.v1.DebugCreateNodeRequest + (*DebugCreateNodeResponse)(nil), // 19: headscale.v1.DebugCreateNodeResponse + (*BackfillNodeIPsRequest)(nil), // 20: headscale.v1.BackfillNodeIPsRequest + (*BackfillNodeIPsResponse)(nil), // 21: headscale.v1.BackfillNodeIPsResponse + (*User)(nil), // 22: headscale.v1.User + (*timestamppb.Timestamp)(nil), // 23: google.protobuf.Timestamp + (*PreAuthKey)(nil), // 24: headscale.v1.PreAuthKey } var file_headscale_v1_node_proto_depIdxs = []int32{ - 24, // 0: headscale.v1.Node.user:type_name -> headscale.v1.User - 25, // 1: headscale.v1.Node.last_seen:type_name -> google.protobuf.Timestamp - 25, // 2: headscale.v1.Node.expiry:type_name -> google.protobuf.Timestamp - 26, // 3: headscale.v1.Node.pre_auth_key:type_name -> headscale.v1.PreAuthKey - 25, // 4: headscale.v1.Node.created_at:type_name -> google.protobuf.Timestamp + 22, // 0: headscale.v1.Node.user:type_name -> headscale.v1.User + 23, // 1: headscale.v1.Node.last_seen:type_name -> google.protobuf.Timestamp + 23, // 2: headscale.v1.Node.expiry:type_name -> google.protobuf.Timestamp + 24, // 3: headscale.v1.Node.pre_auth_key:type_name -> headscale.v1.PreAuthKey + 23, // 4: headscale.v1.Node.created_at:type_name -> google.protobuf.Timestamp 0, // 5: headscale.v1.Node.register_method:type_name -> headscale.v1.RegisterMethod 1, // 6: headscale.v1.RegisterNodeResponse.node:type_name -> headscale.v1.Node 1, // 7: headscale.v1.GetNodeResponse.node:type_name -> headscale.v1.Node 1, // 8: headscale.v1.SetTagsResponse.node:type_name -> headscale.v1.Node 1, // 9: headscale.v1.SetApprovedRoutesResponse.node:type_name -> headscale.v1.Node - 25, // 10: headscale.v1.ExpireNodeRequest.expiry:type_name -> google.protobuf.Timestamp + 23, // 10: headscale.v1.ExpireNodeRequest.expiry:type_name -> google.protobuf.Timestamp 1, // 11: headscale.v1.ExpireNodeResponse.node:type_name -> headscale.v1.Node 1, // 12: headscale.v1.RenameNodeResponse.node:type_name -> headscale.v1.Node 1, // 13: headscale.v1.ListNodesResponse.nodes:type_name -> headscale.v1.Node - 1, // 14: headscale.v1.MoveNodeResponse.node:type_name -> headscale.v1.Node - 1, // 15: headscale.v1.DebugCreateNodeResponse.node:type_name -> headscale.v1.Node - 16, // [16:16] is the sub-list for method output_type - 16, // [16:16] is the sub-list for method input_type - 16, // [16:16] is the sub-list for extension type_name - 16, // [16:16] is the sub-list for extension extendee - 0, // [0:16] is the sub-list for field type_name + 1, // 14: headscale.v1.DebugCreateNodeResponse.node:type_name -> headscale.v1.Node + 15, // [15:15] is the sub-list for method output_type + 15, // [15:15] is the sub-list for method input_type + 15, // [15:15] is the sub-list for extension type_name + 15, // [15:15] is the sub-list for extension extendee + 0, // [0:15] is the sub-list for field type_name } func init() { file_headscale_v1_node_proto_init() } @@ -1474,7 +1354,7 @@ func file_headscale_v1_node_proto_init() { GoPackagePath: 
reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_node_proto_rawDesc), len(file_headscale_v1_node_proto_rawDesc)), NumEnums: 1, - NumMessages: 23, + NumMessages: 21, NumExtensions: 0, NumServices: 0, }, diff --git a/gen/go/headscale/v1/policy.pb.go b/gen/go/headscale/v1/policy.pb.go index fefcfb22..faa3fc40 100644 --- a/gen/go/headscale/v1/policy.pb.go +++ b/gen/go/headscale/v1/policy.pb.go @@ -1,6 +1,6 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.10 +// protoc-gen-go v1.36.11 // protoc (unknown) // source: headscale/v1/policy.proto diff --git a/gen/go/headscale/v1/preauthkey.pb.go b/gen/go/headscale/v1/preauthkey.pb.go index 661f170d..ff902d45 100644 --- a/gen/go/headscale/v1/preauthkey.pb.go +++ b/gen/go/headscale/v1/preauthkey.pb.go @@ -1,6 +1,6 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.36.10 +// protoc-gen-go v1.36.11 // protoc (unknown) // source: headscale/v1/preauthkey.proto @@ -252,8 +252,7 @@ func (x *CreatePreAuthKeyResponse) GetPreAuthKey() *PreAuthKey { type ExpirePreAuthKeyRequest struct { state protoimpl.MessageState `protogen:"open.v1"` - User uint64 `protobuf:"varint,1,opt,name=user,proto3" json:"user,omitempty"` - Key string `protobuf:"bytes,2,opt,name=key,proto3" json:"key,omitempty"` + Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } @@ -288,20 +287,13 @@ func (*ExpirePreAuthKeyRequest) Descriptor() ([]byte, []int) { return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{3} } -func (x *ExpirePreAuthKeyRequest) GetUser() uint64 { +func (x *ExpirePreAuthKeyRequest) GetId() uint64 { if x != nil { - return x.User + return x.Id } return 0 } -func (x *ExpirePreAuthKeyRequest) GetKey() string { - if x != nil { - return x.Key - } - return "" -} - type ExpirePreAuthKeyResponse struct { state protoimpl.MessageState `protogen:"open.v1"` unknownFields protoimpl.UnknownFields @@ -338,16 +330,95 @@ func (*ExpirePreAuthKeyResponse) Descriptor() ([]byte, []int) { return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{4} } +type DeletePreAuthKeyRequest struct { + state protoimpl.MessageState `protogen:"open.v1"` + Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeletePreAuthKeyRequest) Reset() { + *x = DeletePreAuthKeyRequest{} + mi := &file_headscale_v1_preauthkey_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeletePreAuthKeyRequest) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeletePreAuthKeyRequest) ProtoMessage() {} + +func (x *DeletePreAuthKeyRequest) ProtoReflect() protoreflect.Message { + mi := &file_headscale_v1_preauthkey_proto_msgTypes[5] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeletePreAuthKeyRequest.ProtoReflect.Descriptor instead. 
+func (*DeletePreAuthKeyRequest) Descriptor() ([]byte, []int) { + return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{5} +} + +func (x *DeletePreAuthKeyRequest) GetId() uint64 { + if x != nil { + return x.Id + } + return 0 +} + +type DeletePreAuthKeyResponse struct { + state protoimpl.MessageState `protogen:"open.v1"` + unknownFields protoimpl.UnknownFields + sizeCache protoimpl.SizeCache +} + +func (x *DeletePreAuthKeyResponse) Reset() { + *x = DeletePreAuthKeyResponse{} + mi := &file_headscale_v1_preauthkey_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) +} + +func (x *DeletePreAuthKeyResponse) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*DeletePreAuthKeyResponse) ProtoMessage() {} + +func (x *DeletePreAuthKeyResponse) ProtoReflect() protoreflect.Message { + mi := &file_headscale_v1_preauthkey_proto_msgTypes[6] + if x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use DeletePreAuthKeyResponse.ProtoReflect.Descriptor instead. +func (*DeletePreAuthKeyResponse) Descriptor() ([]byte, []int) { + return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{6} +} + type ListPreAuthKeysRequest struct { state protoimpl.MessageState `protogen:"open.v1"` - User uint64 `protobuf:"varint,1,opt,name=user,proto3" json:"user,omitempty"` unknownFields protoimpl.UnknownFields sizeCache protoimpl.SizeCache } func (x *ListPreAuthKeysRequest) Reset() { *x = ListPreAuthKeysRequest{} - mi := &file_headscale_v1_preauthkey_proto_msgTypes[5] + mi := &file_headscale_v1_preauthkey_proto_msgTypes[7] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -359,7 +430,7 @@ func (x *ListPreAuthKeysRequest) String() string { func (*ListPreAuthKeysRequest) ProtoMessage() {} func (x *ListPreAuthKeysRequest) ProtoReflect() protoreflect.Message { - mi := &file_headscale_v1_preauthkey_proto_msgTypes[5] + mi := &file_headscale_v1_preauthkey_proto_msgTypes[7] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -372,14 +443,7 @@ func (x *ListPreAuthKeysRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use ListPreAuthKeysRequest.ProtoReflect.Descriptor instead. 
func (*ListPreAuthKeysRequest) Descriptor() ([]byte, []int) { - return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{5} -} - -func (x *ListPreAuthKeysRequest) GetUser() uint64 { - if x != nil { - return x.User - } - return 0 + return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{7} } type ListPreAuthKeysResponse struct { @@ -391,7 +455,7 @@ type ListPreAuthKeysResponse struct { func (x *ListPreAuthKeysResponse) Reset() { *x = ListPreAuthKeysResponse{} - mi := &file_headscale_v1_preauthkey_proto_msgTypes[6] + mi := &file_headscale_v1_preauthkey_proto_msgTypes[8] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -403,7 +467,7 @@ func (x *ListPreAuthKeysResponse) String() string { func (*ListPreAuthKeysResponse) ProtoMessage() {} func (x *ListPreAuthKeysResponse) ProtoReflect() protoreflect.Message { - mi := &file_headscale_v1_preauthkey_proto_msgTypes[6] + mi := &file_headscale_v1_preauthkey_proto_msgTypes[8] if x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -416,7 +480,7 @@ func (x *ListPreAuthKeysResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use ListPreAuthKeysResponse.ProtoReflect.Descriptor instead. func (*ListPreAuthKeysResponse) Descriptor() ([]byte, []int) { - return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{6} + return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{8} } func (x *ListPreAuthKeysResponse) GetPreAuthKeys() []*PreAuthKey { @@ -455,13 +519,14 @@ const file_headscale_v1_preauthkey_proto_rawDesc = "" + "\bacl_tags\x18\x05 \x03(\tR\aaclTags\"V\n" + "\x18CreatePreAuthKeyResponse\x12:\n" + "\fpre_auth_key\x18\x01 \x01(\v2\x18.headscale.v1.PreAuthKeyR\n" + - "preAuthKey\"?\n" + - "\x17ExpirePreAuthKeyRequest\x12\x12\n" + - "\x04user\x18\x01 \x01(\x04R\x04user\x12\x10\n" + - "\x03key\x18\x02 \x01(\tR\x03key\"\x1a\n" + - "\x18ExpirePreAuthKeyResponse\",\n" + - "\x16ListPreAuthKeysRequest\x12\x12\n" + - "\x04user\x18\x01 \x01(\x04R\x04user\"W\n" + + "preAuthKey\")\n" + + "\x17ExpirePreAuthKeyRequest\x12\x0e\n" + + "\x02id\x18\x01 \x01(\x04R\x02id\"\x1a\n" + + "\x18ExpirePreAuthKeyResponse\")\n" + + "\x17DeletePreAuthKeyRequest\x12\x0e\n" + + "\x02id\x18\x01 \x01(\x04R\x02id\"\x1a\n" + + "\x18DeletePreAuthKeyResponse\"\x18\n" + + "\x16ListPreAuthKeysRequest\"W\n" + "\x17ListPreAuthKeysResponse\x12<\n" + "\rpre_auth_keys\x18\x01 \x03(\v2\x18.headscale.v1.PreAuthKeyR\vpreAuthKeysB)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3" @@ -477,30 +542,32 @@ func file_headscale_v1_preauthkey_proto_rawDescGZIP() []byte { return file_headscale_v1_preauthkey_proto_rawDescData } -var file_headscale_v1_preauthkey_proto_msgTypes = make([]protoimpl.MessageInfo, 7) +var file_headscale_v1_preauthkey_proto_msgTypes = make([]protoimpl.MessageInfo, 9) var file_headscale_v1_preauthkey_proto_goTypes = []any{ (*PreAuthKey)(nil), // 0: headscale.v1.PreAuthKey (*CreatePreAuthKeyRequest)(nil), // 1: headscale.v1.CreatePreAuthKeyRequest (*CreatePreAuthKeyResponse)(nil), // 2: headscale.v1.CreatePreAuthKeyResponse (*ExpirePreAuthKeyRequest)(nil), // 3: headscale.v1.ExpirePreAuthKeyRequest (*ExpirePreAuthKeyResponse)(nil), // 4: headscale.v1.ExpirePreAuthKeyResponse - (*ListPreAuthKeysRequest)(nil), // 5: headscale.v1.ListPreAuthKeysRequest - (*ListPreAuthKeysResponse)(nil), // 6: headscale.v1.ListPreAuthKeysResponse - (*User)(nil), // 7: headscale.v1.User - (*timestamppb.Timestamp)(nil), // 8: google.protobuf.Timestamp + (*DeletePreAuthKeyRequest)(nil), // 5: 
headscale.v1.DeletePreAuthKeyRequest + (*DeletePreAuthKeyResponse)(nil), // 6: headscale.v1.DeletePreAuthKeyResponse + (*ListPreAuthKeysRequest)(nil), // 7: headscale.v1.ListPreAuthKeysRequest + (*ListPreAuthKeysResponse)(nil), // 8: headscale.v1.ListPreAuthKeysResponse + (*User)(nil), // 9: headscale.v1.User + (*timestamppb.Timestamp)(nil), // 10: google.protobuf.Timestamp } var file_headscale_v1_preauthkey_proto_depIdxs = []int32{ - 7, // 0: headscale.v1.PreAuthKey.user:type_name -> headscale.v1.User - 8, // 1: headscale.v1.PreAuthKey.expiration:type_name -> google.protobuf.Timestamp - 8, // 2: headscale.v1.PreAuthKey.created_at:type_name -> google.protobuf.Timestamp - 8, // 3: headscale.v1.CreatePreAuthKeyRequest.expiration:type_name -> google.protobuf.Timestamp - 0, // 4: headscale.v1.CreatePreAuthKeyResponse.pre_auth_key:type_name -> headscale.v1.PreAuthKey - 0, // 5: headscale.v1.ListPreAuthKeysResponse.pre_auth_keys:type_name -> headscale.v1.PreAuthKey - 6, // [6:6] is the sub-list for method output_type - 6, // [6:6] is the sub-list for method input_type - 6, // [6:6] is the sub-list for extension type_name - 6, // [6:6] is the sub-list for extension extendee - 0, // [0:6] is the sub-list for field type_name + 9, // 0: headscale.v1.PreAuthKey.user:type_name -> headscale.v1.User + 10, // 1: headscale.v1.PreAuthKey.expiration:type_name -> google.protobuf.Timestamp + 10, // 2: headscale.v1.PreAuthKey.created_at:type_name -> google.protobuf.Timestamp + 10, // 3: headscale.v1.CreatePreAuthKeyRequest.expiration:type_name -> google.protobuf.Timestamp + 0, // 4: headscale.v1.CreatePreAuthKeyResponse.pre_auth_key:type_name -> headscale.v1.PreAuthKey + 0, // 5: headscale.v1.ListPreAuthKeysResponse.pre_auth_keys:type_name -> headscale.v1.PreAuthKey + 6, // [6:6] is the sub-list for method output_type + 6, // [6:6] is the sub-list for method input_type + 6, // [6:6] is the sub-list for extension type_name + 6, // [6:6] is the sub-list for extension extendee + 0, // [0:6] is the sub-list for field type_name } func init() { file_headscale_v1_preauthkey_proto_init() } @@ -515,7 +582,7 @@ func file_headscale_v1_preauthkey_proto_init() { GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_preauthkey_proto_rawDesc), len(file_headscale_v1_preauthkey_proto_rawDesc)), NumEnums: 0, - NumMessages: 7, + NumMessages: 9, NumExtensions: 0, NumServices: 0, }, diff --git a/gen/go/headscale/v1/user.pb.go b/gen/go/headscale/v1/user.pb.go index fa6d49bb..5f05d084 100644 --- a/gen/go/headscale/v1/user.pb.go +++ b/gen/go/headscale/v1/user.pb.go @@ -1,6 +1,6 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.36.10 +// protoc-gen-go v1.36.11 // protoc (unknown) // source: headscale/v1/user.proto diff --git a/gen/openapiv2/headscale/v1/headscale.swagger.json b/gen/openapiv2/headscale/v1/headscale.swagger.json index 6a7b48ad..1db1db94 100644 --- a/gen/openapiv2/headscale/v1/headscale.swagger.json +++ b/gen/openapiv2/headscale/v1/headscale.swagger.json @@ -124,6 +124,13 @@ "in": "path", "required": true, "type": "string" + }, + { + "name": "id", + "in": "query", + "required": false, + "type": "string", + "format": "uint64" } ], "tags": [ @@ -496,45 +503,6 @@ ] } }, - "/api/v1/node/{nodeId}/user": { - "post": { - "operationId": "HeadscaleService_MoveNode", - "responses": { - "200": { - "description": "A successful response.", - "schema": { - "$ref": "#/definitions/v1MoveNodeResponse" - } - }, - "default": { - "description": "An unexpected error response.", - "schema": { - "$ref": "#/definitions/rpcStatus" - } - } - }, - "parameters": [ - { - "name": "nodeId", - "in": "path", - "required": true, - "type": "string", - "format": "uint64" - }, - { - "name": "body", - "in": "body", - "required": true, - "schema": { - "$ref": "#/definitions/HeadscaleServiceMoveNodeBody" - } - } - ], - "tags": [ - "HeadscaleService" - ] - } - }, "/api/v1/policy": { "get": { "summary": "--- Policy start ---", @@ -605,9 +573,29 @@ } } }, + "tags": [ + "HeadscaleService" + ] + }, + "delete": { + "operationId": "HeadscaleService_DeletePreAuthKey", + "responses": { + "200": { + "description": "A successful response.", + "schema": { + "$ref": "#/definitions/v1DeletePreAuthKeyResponse" + } + }, + "default": { + "description": "An unexpected error response.", + "schema": { + "$ref": "#/definitions/rpcStatus" + } + } + }, "parameters": [ { - "name": "user", + "name": "id", "in": "query", "required": false, "type": "string", @@ -826,15 +814,6 @@ } }, "definitions": { - "HeadscaleServiceMoveNodeBody": { - "type": "object", - "properties": { - "user": { - "type": "string", - "format": "uint64" - } - } - }, "HeadscaleServiceSetApprovedRoutesBody": { "type": "object", "properties": { @@ -1029,6 +1008,9 @@ "v1DeleteNodeResponse": { "type": "object" }, + "v1DeletePreAuthKeyResponse": { + "type": "object" + }, "v1DeleteUserResponse": { "type": "object" }, @@ -1037,6 +1019,10 @@ "properties": { "prefix": { "type": "string" + }, + "id": { + "type": "string", + "format": "uint64" } } }, @@ -1054,12 +1040,9 @@ "v1ExpirePreAuthKeyRequest": { "type": "object", "properties": { - "user": { + "id": { "type": "string", "format": "uint64" - }, - "key": { - "type": "string" } } }, @@ -1142,14 +1125,6 @@ } } }, - "v1MoveNodeResponse": { - "type": "object", - "properties": { - "node": { - "$ref": "#/definitions/v1Node" - } - } - }, "v1Node": { "type": "object", "properties": { @@ -1196,26 +1171,9 @@ "registerMethod": { "$ref": "#/definitions/v1RegisterMethod" }, - "forcedTags": { - "type": "array", - "items": { - "type": "string" - } - }, - "invalidTags": { - "type": "array", - "items": { - "type": "string" - } - }, - "validTags": { - "type": "array", - "items": { - "type": "string" - } - }, "givenName": { - "type": "string" + "type": "string", + "title": "Deprecated\nrepeated string forced_tags = 18;\nrepeated string invalid_tags = 19;\nrepeated string valid_tags = 20;" }, "online": { "type": "boolean" @@ -1237,6 +1195,12 @@ "items": { "type": "string" } + }, + "tags": { + "type": "array", + "items": { + "type": "string" + } } } }, diff --git a/go.mod b/go.mod index 67c6c089..5cc9a7dd 100644 --- a/go.mod +++ b/go.mod 
@@ -1,9 +1,9 @@ module github.com/juanfont/headscale -go 1.25 +go 1.25.5 require ( - github.com/arl/statsviz v0.7.2 + github.com/arl/statsviz v0.8.0 github.com/cenkalti/backoff/v5 v5.0.3 github.com/chasefleming/elem-go v0.31.0 github.com/coder/websocket v1.8.14 @@ -11,48 +11,47 @@ require ( github.com/creachadair/command v0.2.0 github.com/creachadair/flax v0.0.5 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc - github.com/docker/docker v28.5.1+incompatible + github.com/docker/docker v28.5.2+incompatible github.com/fsnotify/fsnotify v1.9.0 github.com/glebarez/sqlite v1.11.0 github.com/go-gormigrate/gormigrate/v2 v2.1.5 github.com/go-json-experiment/json v0.0.0-20250813024750-ebf49471dced - github.com/gofrs/uuid/v5 v5.3.2 + github.com/gofrs/uuid/v5 v5.4.0 github.com/google/go-cmp v0.7.0 github.com/gorilla/mux v1.8.1 - github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 + github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4 github.com/jagottsicher/termcolor v1.0.2 github.com/oauth2-proxy/mockoidc v0.0.0-20240214162133-caebfff84d25 github.com/ory/dockertest/v3 v3.12.0 github.com/philip-bui/grpc-zerolog v1.0.1 github.com/pkg/profile v1.7.0 github.com/prometheus/client_golang v1.23.2 - github.com/prometheus/common v0.66.1 + github.com/prometheus/common v0.67.5 github.com/pterm/pterm v0.12.82 - github.com/puzpuzpuz/xsync/v4 v4.2.0 + github.com/puzpuzpuz/xsync/v4 v4.3.0 github.com/rs/zerolog v1.34.0 github.com/samber/lo v1.52.0 github.com/sasha-s/go-deadlock v0.3.6 - github.com/spf13/cobra v1.10.1 + github.com/spf13/cobra v1.10.2 github.com/spf13/viper v1.21.0 github.com/stretchr/testify v1.11.1 - github.com/tailscale/hujson v0.0.0-20250226034555-ec1d1c113d33 - github.com/tailscale/squibble v0.0.0-20251030164342-4d5df9caa993 - github.com/tailscale/tailsql v0.0.0-20250421235516-02f85f087b97 + github.com/tailscale/hujson v0.0.0-20250605163823-992244df8c5a + github.com/tailscale/squibble v0.0.0-20251104223530-a961feffb67f + github.com/tailscale/tailsql v0.0.0-20260105194658-001575c3ca09 github.com/tcnksm/go-latest v0.0.0-20170313132115-e3007ae9052e go4.org/netipx v0.0.0-20231129151722-fdeea329fbba - golang.org/x/crypto v0.43.0 - golang.org/x/exp v0.0.0-20251009144603-d2f985daa21b - golang.org/x/net v0.46.0 - golang.org/x/oauth2 v0.32.0 - golang.org/x/sync v0.17.0 - google.golang.org/genproto/googleapis/api v0.0.0-20250929231259-57b25ae835d4 - google.golang.org/grpc v1.75.1 - google.golang.org/protobuf v1.36.10 - gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c + golang.org/x/crypto v0.46.0 + golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 + golang.org/x/net v0.48.0 + golang.org/x/oauth2 v0.34.0 + golang.org/x/sync v0.19.0 + google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b + google.golang.org/grpc v1.78.0 + google.golang.org/protobuf v1.36.11 gopkg.in/yaml.v3 v3.0.1 gorm.io/driver/postgres v1.6.0 - gorm.io/gorm v1.31.0 - tailscale.com v1.86.5 + gorm.io/gorm v1.31.1 + tailscale.com v1.94.0 zgo.at/zcache/v2 v2.4.1 zombiezen.com/go/postgrestest v1.0.1 ) @@ -75,10 +74,10 @@ require ( // together, e.g: // go get modernc.org/libc@v1.55.3 modernc.org/sqlite@v1.33.1 require ( - modernc.org/libc v1.66.10 // indirect + modernc.org/libc v1.67.6 // indirect modernc.org/mathutil v1.7.1 // indirect modernc.org/memory v1.11.0 // indirect - modernc.org/sqlite v1.39.1 + modernc.org/sqlite v1.44.3 ) require ( @@ -92,32 +91,32 @@ require ( github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 // indirect github.com/akutz/memconn v0.1.0 // indirect 
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa // indirect - github.com/aws/aws-sdk-go-v2 v1.36.0 // indirect + github.com/aws/aws-sdk-go-v2 v1.41.0 // indirect github.com/aws/aws-sdk-go-v2/config v1.29.5 // indirect github.com/aws/aws-sdk-go-v2/credentials v1.17.58 // indirect github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.27 // indirect - github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.31 // indirect - github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.31 // indirect + github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16 // indirect + github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16 // indirect github.com/aws/aws-sdk-go-v2/internal/ini v1.8.2 // indirect - github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.2 // indirect - github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.12 // indirect - github.com/aws/aws-sdk-go-v2/service/ssm v1.45.0 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16 // indirect github.com/aws/aws-sdk-go-v2/service/sso v1.24.14 // indirect github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.13 // indirect - github.com/aws/aws-sdk-go-v2/service/sts v1.33.13 // indirect - github.com/aws/smithy-go v1.22.2 // indirect + github.com/aws/aws-sdk-go-v2/service/sts v1.41.5 // indirect + github.com/aws/smithy-go v1.24.0 // indirect + github.com/axiomhq/hyperloglog v0.0.0-20240319100328-84253e514e02 // indirect github.com/beorn7/perks v1.0.1 // indirect github.com/cenkalti/backoff/v4 v4.3.0 // indirect github.com/cespare/xxhash/v2 v2.3.0 // indirect github.com/clipperhouse/uax29/v2 v2.2.0 // indirect github.com/containerd/console v1.0.5 // indirect github.com/containerd/continuity v0.4.5 // indirect - github.com/containerd/errdefs v0.3.0 // indirect + github.com/containerd/errdefs v1.0.0 // indirect github.com/containerd/errdefs/pkg v0.3.0 // indirect - github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6 // indirect github.com/creachadair/mds v0.25.10 // indirect + github.com/creachadair/msync v0.7.1 // indirect github.com/dblohm7/wingoes v0.0.0-20240123200102-b75a8a7d7eb0 // indirect - github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e // indirect + github.com/dgryski/go-metro v0.0.0-20180109044635-280f6062b5bc // indirect github.com/distribution/reference v0.6.0 // indirect github.com/docker/cli v28.5.1+incompatible // indirect github.com/docker/go-connections v0.6.0 // indirect @@ -125,31 +124,29 @@ require ( github.com/dustin/go-humanize v1.0.1 // indirect github.com/felixge/fgprof v0.9.5 // indirect github.com/felixge/httpsnoop v1.0.4 // indirect - github.com/fxamacker/cbor/v2 v2.7.0 // indirect + github.com/fxamacker/cbor/v2 v2.9.0 // indirect github.com/gaissmai/bart v0.18.0 // indirect github.com/glebarez/go-sqlite v1.22.0 // indirect github.com/go-jose/go-jose/v3 v3.0.4 // indirect github.com/go-jose/go-jose/v4 v4.1.3 // indirect github.com/go-logr/logr v1.4.3 // indirect github.com/go-logr/stdr v1.2.2 // indirect - github.com/go-ole/go-ole v1.3.0 // indirect github.com/go-viper/mapstructure/v2 v2.4.0 // indirect github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466 // indirect - github.com/golang-jwt/jwt/v5 v5.2.2 // indirect - github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect + github.com/golang-jwt/jwt/v5 v5.3.0 // indirect + github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect 
github.com/golang/protobuf v1.5.4 // indirect - github.com/google/btree v1.1.2 // indirect + github.com/google/btree v1.1.3 // indirect github.com/google/go-github v17.0.0+incompatible // indirect github.com/google/go-querystring v1.1.0 // indirect - github.com/google/nftables v0.2.1-0.20240414091927-5e242ec57806 // indirect github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d // indirect github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect github.com/google/uuid v1.6.0 // indirect github.com/gookit/color v1.6.0 // indirect - github.com/gorilla/websocket v1.5.3 // indirect + github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect github.com/hashicorp/go-version v1.7.0 // indirect github.com/hdevalence/ed25519consensus v0.2.0 // indirect - github.com/illarion/gonotify/v3 v3.0.2 // indirect + github.com/huin/goupnp v1.3.0 // indirect github.com/inconshreveable/mousetrap v1.1.0 // indirect github.com/jackc/pgpassfile v1.0.0 // indirect github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect @@ -157,21 +154,15 @@ require ( github.com/jackc/puddle/v2 v2.2.2 // indirect github.com/jinzhu/inflection v1.0.0 // indirect github.com/jinzhu/now v1.1.5 // indirect - github.com/jmespath/go-jmespath v0.4.0 // indirect github.com/jsimonetti/rtnetlink v1.4.1 // indirect - github.com/klauspost/compress v1.18.1 // indirect - github.com/kr/pretty v0.3.1 // indirect - github.com/kr/text v0.2.0 // indirect + github.com/klauspost/compress v1.18.2 // indirect github.com/lib/pq v1.10.9 // indirect github.com/lithammer/fuzzysearch v1.1.8 // indirect github.com/mattn/go-colorable v0.1.14 // indirect github.com/mattn/go-isatty v0.0.20 // indirect github.com/mattn/go-runewidth v0.0.19 // indirect - github.com/mdlayher/genetlink v1.3.2 // indirect github.com/mdlayher/netlink v1.7.3-0.20250113171957-fbb4dce95f42 // indirect - github.com/mdlayher/sdnotify v1.0.0 // indirect github.com/mdlayher/socket v0.5.0 // indirect - github.com/miekg/dns v1.1.58 // indirect github.com/mitchellh/go-ps v1.0.0 // indirect github.com/moby/docker-image-spec v1.3.1 // indirect github.com/moby/sys/atomicwriter v0.1.0 // indirect @@ -185,13 +176,13 @@ require ( github.com/opencontainers/runc v1.3.2 // indirect github.com/pelletier/go-toml/v2 v2.2.4 // indirect github.com/petermattis/goid v0.0.0-20250904145737-900bdf8bb490 // indirect + github.com/pires/go-proxyproto v0.8.1 // indirect github.com/pkg/errors v0.9.1 // indirect github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect github.com/prometheus-community/pro-bing v0.4.0 // indirect github.com/prometheus/client_model v0.6.2 // indirect github.com/prometheus/procfs v0.16.1 // indirect github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect - github.com/rogpeppe/go-internal v1.14.1 // indirect github.com/safchain/ethtool v0.3.0 // indirect github.com/sagikazarmark/locafero v0.12.0 // indirect github.com/sirupsen/logrus v1.9.3 // indirect @@ -201,36 +192,33 @@ require ( github.com/subosito/gotenv v1.6.0 // indirect github.com/tailscale/certstore v0.1.1-0.20231202035212-d3fa0460f47e // indirect github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55 // indirect - github.com/tailscale/goupnp v1.0.1-0.20210804011211-c64d0f06ea05 // indirect - github.com/tailscale/netlink v1.1.1-0.20240822203006-4d49adab4de7 // indirect github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc // indirect - github.com/tailscale/setec v0.0.0-20250305161714-445cadbbca3d // indirect 
+ github.com/tailscale/setec v0.0.0-20251203133219-2ab774e4129a // indirect github.com/tailscale/web-client-prebuilt v0.0.0-20250124233751-d4cd19a26976 // indirect github.com/tailscale/wireguard-go v0.0.0-20250716170648-1d0488a3d7da // indirect - github.com/vishvananda/netns v0.0.5 // indirect github.com/x448/float16 v0.8.4 // indirect github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect github.com/xeipuuv/gojsonschema v1.2.0 // indirect github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect - go.opentelemetry.io/auto/sdk v1.1.0 // indirect - go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0 // indirect - go.opentelemetry.io/otel v1.37.0 // indirect + go.opentelemetry.io/auto/sdk v1.2.1 // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0 // indirect + go.opentelemetry.io/otel v1.39.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0 // indirect - go.opentelemetry.io/otel/metric v1.37.0 // indirect - go.opentelemetry.io/otel/trace v1.37.0 // indirect - go.yaml.in/yaml/v2 v2.4.2 // indirect + go.opentelemetry.io/otel/metric v1.39.0 // indirect + go.opentelemetry.io/otel/trace v1.39.0 // indirect + go.yaml.in/yaml/v2 v2.4.3 // indirect go.yaml.in/yaml/v3 v3.0.4 // indirect go4.org/mem v0.0.0-20240501181205-ae6ca9944745 // indirect - golang.org/x/mod v0.29.0 // indirect - golang.org/x/sys v0.37.0 // indirect - golang.org/x/term v0.36.0 // indirect - golang.org/x/text v0.30.0 // indirect - golang.org/x/time v0.11.0 // indirect - golang.org/x/tools v0.38.0 // indirect + golang.org/x/mod v0.30.0 // indirect + golang.org/x/sys v0.40.0 // indirect + golang.org/x/term v0.38.0 // indirect + golang.org/x/text v0.32.0 // indirect + golang.org/x/time v0.12.0 // indirect + golang.org/x/tools v0.39.0 // indirect golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 // indirect golang.zx2c4.com/wireguard/windows v0.5.3 // indirect - google.golang.org/genproto/googleapis/rpc v0.0.0-20250929231259-57b25ae835d4 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b // indirect gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633 // indirect ) diff --git a/go.sum b/go.sum index e78e9aff..1021d749 100644 --- a/go.sum +++ b/go.sum @@ -16,8 +16,8 @@ filippo.io/mkcert v1.4.4 h1:8eVbbwfVlaqUM7OwuftKc2nuYOoTDQWqsoXmzoXZdbc= filippo.io/mkcert v1.4.4/go.mod h1:VyvOchVuAye3BoUsPUOOofKygVwLV2KQMVFJNRq+1dA= github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg= github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E= -github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c h1:pxW6RcqyfI9/kWtOwnv/G+AzdKuy2ZrqINhenH4HyNs= -github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= +github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= +github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= github.com/MarvinJWendt/testza v0.1.0/go.mod h1:7AxNvlfeHP7Z/hDQ5JtE3OKYT3XFUeLCDE2DQninSqs= github.com/MarvinJWendt/testza v0.2.1/go.mod h1:God7bhG8n6uQxwdScay+gjm9/LnO4D3kkcZX4hv9Rp8= github.com/MarvinJWendt/testza v0.2.8/go.mod h1:nwIcjmr0Zz+Rcwfh3/4UhBp7ePKVhuBExvZqnKYWlII= @@ -37,11 +37,11 @@ github.com/alexbrainman/sspi 
v0.0.0-20231016080023-1a75b4708caa h1:LHTHcTQiSGT7V github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4= github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8= github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4= -github.com/arl/statsviz v0.7.2 h1:xnuIfRiXE4kvxEcfGL+IE3mKH1BXNHuE+eJELIh7oOA= -github.com/arl/statsviz v0.7.2/go.mod h1:XlrbiT7xYT03xaW9JMMfD8KFUhBOESJwfyNJu83PbB0= +github.com/arl/statsviz v0.8.0 h1:O6GjjVxEDxcByAucOSl29HaGYLXsuwA3ujJw8H9E7/U= +github.com/arl/statsviz v0.8.0/go.mod h1:XlrbiT7xYT03xaW9JMMfD8KFUhBOESJwfyNJu83PbB0= github.com/atomicgo/cursor v0.0.1/go.mod h1:cBON2QmmrysudxNBFthvMtN32r3jxVRIvzkUiF/RuIk= -github.com/aws/aws-sdk-go-v2 v1.36.0 h1:b1wM5CcE65Ujwn565qcwgtOTT1aT4ADOHHgglKjG7fk= -github.com/aws/aws-sdk-go-v2 v1.36.0/go.mod h1:5PMILGVKiW32oDzjj6RU52yrNrDPUHcbZQYr1sM7qmM= +github.com/aws/aws-sdk-go-v2 v1.41.0 h1:tNvqh1s+v0vFYdA1xq0aOJH+Y5cRyZ5upu6roPgPKd4= +github.com/aws/aws-sdk-go-v2 v1.41.0/go.mod h1:MayyLB8y+buD9hZqkCW3kX1AKq07Y5pXxtgB+rRFhz0= github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.8 h1:zAxi9p3wsZMIaVCdoiQp2uZ9k1LsZvmAnoTBeZPXom0= github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.8/go.mod h1:3XkePX5dSaxveLAYY7nsbsZZrKxCyEuE5pM4ziFxyGg= github.com/aws/aws-sdk-go-v2/config v1.29.5 h1:4lS2IB+wwkj5J43Tq/AwvnscBerBJtQQ6YS7puzCI1k= @@ -50,20 +50,20 @@ github.com/aws/aws-sdk-go-v2/credentials v1.17.58 h1:/d7FUpAPU8Lf2KUdjniQvfNdlMI github.com/aws/aws-sdk-go-v2/credentials v1.17.58/go.mod h1:aVYW33Ow10CyMQGFgC0ptMRIqJWvJ4nxZb0sUiuQT/A= github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.27 h1:7lOW8NUwE9UZekS1DYoiPdVAqZ6A+LheHWb+mHbNOq8= github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.27/go.mod h1:w1BASFIPOPUae7AgaH4SbjNbfdkxuggLyGfNFTn8ITY= -github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.31 h1:lWm9ucLSRFiI4dQQafLrEOmEDGry3Swrz0BIRdiHJqQ= -github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.31/go.mod h1:Huu6GG0YTfbPphQkDSo4dEGmQRTKb9k9G7RdtyQWxuI= -github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.31 h1:ACxDklUKKXb48+eg5ROZXi1vDgfMyfIA/WyvqHcHI0o= -github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.31/go.mod h1:yadnfsDwqXeVaohbGc/RaD287PuyRw2wugkh5ZL2J6k= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16 h1:rgGwPzb82iBYSvHMHXc8h9mRoOUBZIGFgKb9qniaZZc= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16/go.mod h1:L/UxsGeKpGoIj6DxfhOWHWQ/kGKcd4I1VncE4++IyKA= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16 h1:1jtGzuV7c82xnqOVfx2F0xmJcOw5374L7N6juGW6x6U= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16/go.mod h1:M2E5OQf+XLe+SZGmmpaI2yy+J326aFf6/+54PoxSANc= github.com/aws/aws-sdk-go-v2/internal/ini v1.8.2 h1:Pg9URiobXy85kgFev3og2CuOZ8JZUBENF+dcgWBaYNk= github.com/aws/aws-sdk-go-v2/internal/ini v1.8.2/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc= github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.31 h1:8IwBjuLdqIO1dGB+dZ9zJEl8wzY3bVYxcs0Xyu/Lsc0= github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.31/go.mod h1:8tMBcuVjL4kP/ECEIWTCWtwV2kj6+ouEKl4cqR4iWLw= -github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.2 h1:D4oz8/CzT9bAEYtVhSBmFj2dNOtaHOtMKc2vHBwYizA= -github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.2/go.mod h1:Za3IHqTQ+yNcRHxu1OFucBh0ACZT4j4VQFF0BqpZcLY= +github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4 
h1:0ryTNEdJbzUCEWkVXEXoqlXV72J5keC1GvILMOuD00E= +github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4/go.mod h1:HQ4qwNZh32C3CBeO6iJLQlgtMzqeG17ziAA/3KDJFow= github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.5.5 h1:siiQ+jummya9OLPDEyHVb2dLW4aOMe22FGDd0sAfuSw= github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.5.5/go.mod h1:iHVx2J9pWzITdP5MJY6qWfG34TfD9EA+Qi3eV6qQCXw= -github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.12 h1:O+8vD2rGjfihBewr5bT+QUfYUHIxCVgG61LHoT59shM= -github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.12/go.mod h1:usVdWJaosa66NMvmCrr08NcWDBRv4E6+YFG2pUdw1Lk= +github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16 h1:oHjJHeUy0ImIV0bsrX0X91GkV5nJAyv1l1CC9lnO0TI= +github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16/go.mod h1:iRSNGgOYmiYwSCXxXaKb9HfOEj40+oTKn8pTxMlYkRM= github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.12 h1:tkVNm99nkJnFo1H9IIQb5QkCiPcvCDn3Pos+IeTbGRA= github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.12/go.mod h1:dIVlquSPUMqEJtx2/W17SM2SuESRaVEhEV9alcMqxjw= github.com/aws/aws-sdk-go-v2/service/s3 v1.75.3 h1:JBod0SnNqcWQ0+uAyzeRFG1zCHotW8DukumYYyNy0zo= @@ -74,10 +74,12 @@ github.com/aws/aws-sdk-go-v2/service/sso v1.24.14 h1:c5WJ3iHz7rLIgArznb3JCSQT3uU github.com/aws/aws-sdk-go-v2/service/sso v1.24.14/go.mod h1:+JJQTxB6N4niArC14YNtxcQtwEqzS3o9Z32n7q33Rfs= github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.13 h1:f1L/JtUkVODD+k1+IiSJUUv8A++2qVr+Xvb3xWXETMU= github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.13/go.mod h1:tvqlFoja8/s0o+UruA1Nrezo/df0PzdunMDDurUfg6U= -github.com/aws/aws-sdk-go-v2/service/sts v1.33.13 h1:3LXNnmtH3TURctC23hnC0p/39Q5gre3FI7BNOiDcVWc= -github.com/aws/aws-sdk-go-v2/service/sts v1.33.13/go.mod h1:7Yn+p66q/jt38qMoVfNvjbm3D89mGBnkwDcijgtih8w= -github.com/aws/smithy-go v1.22.2 h1:6D9hW43xKFrRx/tXXfAlIZc4JI+yQe6snnWcQyxSyLQ= -github.com/aws/smithy-go v1.22.2/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg= +github.com/aws/aws-sdk-go-v2/service/sts v1.41.5 h1:SciGFVNZ4mHdm7gpD1dgZYnCuVdX1s+lFTg4+4DOy70= +github.com/aws/aws-sdk-go-v2/service/sts v1.41.5/go.mod h1:iW40X4QBmUxdP+fZNOpfmkdMZqsovezbAeO+Ubiv2pk= +github.com/aws/smithy-go v1.24.0 h1:LpilSUItNPFr1eY85RYgTIg5eIEPtvFbskaFcmmIUnk= +github.com/aws/smithy-go v1.24.0/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0= +github.com/axiomhq/hyperloglog v0.0.0-20240319100328-84253e514e02 h1:bXAPYSbdYbS5VTy92NIUbeDI1qyggi+JYh5op9IFlcQ= +github.com/axiomhq/hyperloglog v0.0.0-20240319100328-84253e514e02/go.mod h1:k08r+Yj1PRAmuayFiRK6MYuR5Ve4IuZtTfxErMIh0+c= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8= @@ -108,8 +110,8 @@ github.com/containerd/console v1.0.5 h1:R0ymNeydRqH2DmakFNdmjR2k0t7UPuiOV/N/27/q github.com/containerd/console v1.0.5/go.mod h1:YynlIjWYF8myEu6sdkwKIvGQq+cOckRm6So2avqoYAk= github.com/containerd/continuity v0.4.5 h1:ZRoN1sXq9u7V6QoHMcVWGhOwDFqZ4B9i5H6un1Wh0x4= github.com/containerd/continuity v0.4.5/go.mod h1:/lNJvtJKUQStBzpVQ1+rasXO1LAWtUQssk28EZvJ3nE= -github.com/containerd/errdefs v0.3.0 h1:FSZgGOeK4yuT/+DnF07/Olde/q4KBoMsaamhXxIMDp4= -github.com/containerd/errdefs v0.3.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M= +github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI= +github.com/containerd/errdefs v1.0.0/go.mod 
h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M= github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE= github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk= github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I= @@ -124,13 +126,12 @@ github.com/creachadair/command v0.2.0 h1:qTA9cMMhZePAxFoNdnk6F6nn94s1qPndIg9hJbq github.com/creachadair/command v0.2.0/go.mod h1:j+Ar+uYnFsHpkMeV9kGj6lJ45y9u2xqtg8FYy6cm+0o= github.com/creachadair/flax v0.0.5 h1:zt+CRuXQASxwQ68e9GHAOnEgAU29nF0zYMHOCrL5wzE= github.com/creachadair/flax v0.0.5/go.mod h1:F1PML0JZLXSNDMNiRGK2yjm5f+L9QCHchyHBldFymj8= -github.com/creachadair/mds v0.25.2 h1:xc0S0AfDq5GX9KUR5sLvi5XjA61/P6S5e0xFs1vA18Q= -github.com/creachadair/mds v0.25.2/go.mod h1:+s4CFteFRj4eq2KcGHW8Wei3u9NyzSPzNV32EvjyK/Q= github.com/creachadair/mds v0.25.10 h1:9k9JB35D1xhOCFl0liBhagBBp8fWWkKZrA7UXsfoHtA= github.com/creachadair/mds v0.25.10/go.mod h1:4hatI3hRM+qhzuAmqPRFvaBM8mONkS7nsLxkcuTYUIs= +github.com/creachadair/msync v0.7.1 h1:SeZmuEBXQPe5GqV/C94ER7QIZPwtvFbeQiykzt/7uho= +github.com/creachadair/msync v0.7.1/go.mod h1:8CcFlLsSujfHE5wWm19uUBLHIPDAUr6LXDwneVMO008= github.com/creachadair/taskgroup v0.13.2 h1:3KyqakBuFsm3KkXi/9XIb0QcA8tEzLHLgaoidf0MdVc= github.com/creachadair/taskgroup v0.13.2/go.mod h1:i3V1Zx7H8RjwljUEeUWYT30Lmb9poewSb2XI1yTwD0g= -github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= github.com/creack/pty v1.1.23 h1:4M6+isWdcStXEf15G/RbrMPOQj1dZ7HPZCGwE4kOeP0= github.com/creack/pty v1.1.23/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= @@ -139,6 +140,8 @@ github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/dblohm7/wingoes v0.0.0-20240123200102-b75a8a7d7eb0 h1:vrC07UZcgPzu/OjWsmQKMGg3LoPSz9jh/pQXIrHjUj4= github.com/dblohm7/wingoes v0.0.0-20240123200102-b75a8a7d7eb0/go.mod h1:Nx87SkVqTKd8UtT+xu7sM/l+LgXs6c0aHrlKusR+2EQ= +github.com/dgryski/go-metro v0.0.0-20180109044635-280f6062b5bc h1:8WFBn63wegobsYAX0YjD+8suexZDga5CctH4CCTx2+8= +github.com/dgryski/go-metro v0.0.0-20180109044635-280f6062b5bc/go.mod h1:c9O8+fpSOX1DM8cPNSkX/qsBWdkD4yd2dpciOWQjpBw= github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e h1:vUmf0yezR0y7jJ5pceLHthLaYf4bA5T14B6q39S4q2Q= github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e/go.mod h1:YTIHhz/QFSYnu/EhlF2SpU2Uk+32abacUYA5ZPljz1A= github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk= @@ -147,8 +150,8 @@ github.com/djherbis/times v1.6.0 h1:w2ctJ92J8fBvWPxugmXIv7Nz7Q3iDMKNx9v5ocVH20c= github.com/djherbis/times v1.6.0/go.mod h1:gOHeRAz2h+VJNZ5Gmc/o7iD9k4wW7NMVqieYCY99oc0= github.com/docker/cli v28.5.1+incompatible h1:ESutzBALAD6qyCLqbQSEf1a/U8Ybms5agw59yGVc+yY= github.com/docker/cli v28.5.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8= -github.com/docker/docker v28.5.1+incompatible h1:Bm8DchhSD2J6PsFzxC35TZo4TLGR2PdW/E69rU45NhM= -github.com/docker/docker v28.5.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= +github.com/docker/docker v28.5.2+incompatible h1:DBX0Y0zAjZbSrm1uzOkdr1onVghKaftjlSWt4AFexzM= +github.com/docker/docker v28.5.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/docker/go-connections v0.6.0 
h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94= github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE= github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4= @@ -164,8 +167,8 @@ github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHk github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= -github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E= -github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ= +github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM= +github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= github.com/gaissmai/bart v0.18.0 h1:jQLBT/RduJu0pv/tLwXE+xKPgtWJejbxuXAR+wLJafo= github.com/gaissmai/bart v0.18.0/go.mod h1:JJzMAhNF5Rjo4SF4jWBrANuJfqY+FvsFhW7t1UZJ+XY= github.com/github/fakeca v0.1.0 h1:Km/MVOFvclqxPM9dZBC4+QE564nU4gz4iZ0D9pMw28I= @@ -201,16 +204,16 @@ github.com/gobwas/ws v1.2.1/go.mod h1:hRKAFb8wOxFROYNsT1bqfWnhX+b5MFeJM9r2ZSwg/K github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466 h1:sQspH8M4niEijh3PFscJRLDnkL547IeP7kpPe3uUhEg= github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466/go.mod h1:ZiQxhyQ+bbbfxUKVvjfO498oPYvtYhZzycal3G/NHmU= -github.com/gofrs/uuid/v5 v5.3.2 h1:2jfO8j3XgSwlz/wHqemAEugfnTlikAYHhnqQ8Xh4fE0= -github.com/gofrs/uuid/v5 v5.3.2/go.mod h1:CDOjlDMVAtN56jqyRUZh58JT31Tiw7/oQyEXZV+9bD8= -github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8= -github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk= -github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE= -github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= +github.com/gofrs/uuid/v5 v5.4.0 h1:EfbpCTjqMuGyq5ZJwxqzn3Cbr2d0rUZU7v5ycAk/e/0= +github.com/gofrs/uuid/v5 v5.4.0/go.mod h1:CDOjlDMVAtN56jqyRUZh58JT31Tiw7/oQyEXZV+9bD8= +github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo= +github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE= +github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ= +github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw= github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= -github.com/google/btree v1.1.2 h1:xf4v41cLI2Z6FxbKm+8Bu+m8ifhj15JuZ9sa0jZCMUU= -github.com/google/btree v1.1.2/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4= +github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg= +github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4= github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= @@ -239,14 +242,19 
@@ github.com/gookit/color v1.6.0 h1:JjJXBTk1ETNyqyilJhkTXJYYigHG24TM9Xa2M1xAhRA= github.com/gookit/color v1.6.0/go.mod h1:9ACFc7/1IpHGBW8RwuDm/0YEnhg3dwwXpoMsmtyHfjs= github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY= github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ= -github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg= -github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= -github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 h1:NmZ1PKzSTQbuGHw9DGPFomqkkLWMC+vZCkfs+FHv1Vg= -github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3/go.mod h1:zQrxl1YP88HQlA6i9c63DSVPFklWpGX4OWAc9bFuaH4= +github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo= +github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4 h1:kEISI/Gx67NzH3nJxAmY/dGac80kKZgZt134u7Y/k1s= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4/go.mod h1:6Nz966r3vQYCqIzWsuEl9d7cf7mRhtDmm++sOxlnfxI= github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru v0.6.0 h1:uL2shRDx7RTrOrTCUZEGP/wJUFiUI8QT6E7z5o8jga4= +github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= +github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= github.com/hdevalence/ed25519consensus v0.2.0 h1:37ICyZqdyj0lAZ8P4D1d1id3HqbbG1N3iBb1Tb4rdcU= github.com/hdevalence/ed25519consensus v0.2.0/go.mod h1:w3BHWjwJbFU29IRHL1Iqkw3sus+7FctEyM4RqDxYNzo= +github.com/huin/goupnp v1.3.0 h1:UvLUlWDNpoUdYzb2TCn+MuTWtcjXKSza2n6CBdQ0xXc= +github.com/huin/goupnp v1.3.0/go.mod h1:gnGPsThkYa7bFi/KWmEysQRf48l2dvR5bxr2OFckNX8= github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w= github.com/ianlancetaylor/demangle v0.0.0-20230524184225-eabc099b10ab/go.mod h1:gx7rwoVhcfuVKG5uya9Hs3Sxj7EIvldVofAWIUtGouw= github.com/illarion/gonotify/v3 v3.0.2 h1:O7S6vcopHexutmpObkeWsnzMJt/r1hONIEogeVNmJMk= @@ -273,15 +281,11 @@ github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ= github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8= github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg= github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= -github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8= -github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U= github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= github.com/jsimonetti/rtnetlink v1.4.1 h1:JfD4jthWBqZMEffc5RjgmlzpYttAVw1sdnmiNaPO3hE= github.com/jsimonetti/rtnetlink v1.4.1/go.mod h1:xJjT7t59UIZ62GLZbv6PLLo8VFrostJMPBAheR6OM8w= -github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo= -github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ= -github.com/klauspost/compress v1.18.1 h1:bcSGx7UbpBqMChDtsF28Lw6v/G94LPrrbMbdC3JH2co= -github.com/klauspost/compress v1.18.1/go.mod h1:ZQFFVG+MdnR0P+l6wpXgIL4NTtwiKIdBnrBd8Nrxr+0= +github.com/klauspost/compress 
v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk= +github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4= github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg= github.com/klauspost/cpuid/v2 v2.0.10/go.mod h1:g2LTdtYhdyuGPqyWyv7qRAmj1WBqxuObKfj5c0PQa7c= github.com/klauspost/cpuid/v2 v2.0.12/go.mod h1:g2LTdtYhdyuGPqyWyv7qRAmj1WBqxuObKfj5c0PQa7c= @@ -292,7 +296,6 @@ github.com/kortschak/wol v0.0.0-20200729010619-da482cc4850a/go.mod h1:YTtCCM3ryy github.com/kr/fs v0.1.0 h1:Jskdu9ieNAYnjxsi0LbQp1ulIKZV1LAFgK1tWhpZgl8= github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= @@ -368,7 +371,8 @@ github.com/philip-bui/grpc-zerolog v1.0.1 h1:EMacvLRUd2O1K0eWod27ZP5CY1iTNkhBDLS github.com/philip-bui/grpc-zerolog v1.0.1/go.mod h1:qXbiq/2X4ZUMMshsqlWyTHOcw7ns+GZmlqZZN05ZHcQ= github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ= github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4= -github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA= +github.com/pires/go-proxyproto v0.8.1 h1:9KEixbdJfhrbtjpz/ZwCdWDD2Xem0NZ38qMYaASJgp0= +github.com/pires/go-proxyproto v0.8.1/go.mod h1:ZKAAyp3cgy5Y5Mo4n9AlScrkCZwUy0g3Jf+slqQVcuU= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/profile v1.7.0 h1:hnbDkaNWPCLMO9wGLdBFTIZvzDrDfBM2072E1S9gJkA= @@ -384,8 +388,8 @@ github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg= github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk= github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE= -github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs= -github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA= +github.com/prometheus/common v0.67.5 h1:pIgK94WWlQt1WLwAC5j2ynLaBRDiinoAb86HZHTUGI4= +github.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxzaT3A2+HqjUw= github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg= github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is= github.com/pterm/pterm v0.12.27/go.mod h1:PhQ89w4i95rhgE+xedAoqous6K9X+r6aSOI2eFF7DZI= @@ -397,12 +401,11 @@ github.com/pterm/pterm v0.12.36/go.mod h1:NjiL09hFhT/vWjQHSj1athJpx6H8cjpHXNAK5b github.com/pterm/pterm v0.12.40/go.mod h1:ffwPLwlbXxP+rxT0GsgDTzS3y3rmpAO1NMjUkGTYf8s= github.com/pterm/pterm v0.12.82 h1:+D9wYhCaeaK0FIQoZtqbNQuNpe2lB2tajKKsTd5paVQ= github.com/pterm/pterm v0.12.82/go.mod h1:TyuyrPjnxfwP+ccJdBTeWHtd/e0ybQHkOS/TakajZCw= -github.com/puzpuzpuz/xsync/v4 v4.2.0 h1:dlxm77dZj2c3rxq0/XNvvUKISAmovoXF4a4qM6Wvkr0= -github.com/puzpuzpuz/xsync/v4 v4.2.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo= 
+github.com/puzpuzpuz/xsync/v4 v4.3.0 h1:w/bWkEJdYuRNYhHn5eXnIT8LzDM1O629X1I9MJSkD7Q= +github.com/puzpuzpuz/xsync/v4 v4.3.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo= github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE= github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo= github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= -github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs= github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0= @@ -426,8 +429,8 @@ github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I= github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg= github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY= github.com/spf13/cast v1.10.0/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo= -github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s= -github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0= +github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU= +github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4= github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk= github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= @@ -453,22 +456,18 @@ github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55 h1:Gzfnfk2TWrk8 github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55/go.mod h1:4k4QO+dQ3R5FofL+SanAUZe+/QfeK0+OIuwDIRu2vSg= github.com/tailscale/golang-x-crypto v0.0.0-20250404221719-a5573b049869 h1:SRL6irQkKGQKKLzvQP/ke/2ZuB7Py5+XuqtOgSj+iMM= github.com/tailscale/golang-x-crypto v0.0.0-20250404221719-a5573b049869/go.mod h1:ikbF+YT089eInTp9f2vmvy4+ZVnW5hzX1q2WknxSprQ= -github.com/tailscale/goupnp v1.0.1-0.20210804011211-c64d0f06ea05 h1:4chzWmimtJPxRs2O36yuGRW3f9SYV+bMTTvMBI0EKio= -github.com/tailscale/goupnp v1.0.1-0.20210804011211-c64d0f06ea05/go.mod h1:PdCqy9JzfWMJf1H5UJW2ip33/d4YkoKN0r67yKH1mG8= -github.com/tailscale/hujson v0.0.0-20250226034555-ec1d1c113d33 h1:idh63uw+gsG05HwjZsAENCG4KZfyvjK03bpjxa5qRRk= -github.com/tailscale/hujson v0.0.0-20250226034555-ec1d1c113d33/go.mod h1:EbW0wDK/qEUYI0A5bqq0C2kF8JTQwWONmGDBbzsxxHo= +github.com/tailscale/hujson v0.0.0-20250605163823-992244df8c5a h1:a6TNDN9CgG+cYjaeN8l2mc4kSz2iMiCDQxPEyltUV/I= +github.com/tailscale/hujson v0.0.0-20250605163823-992244df8c5a/go.mod h1:EbW0wDK/qEUYI0A5bqq0C2kF8JTQwWONmGDBbzsxxHo= github.com/tailscale/netlink v1.1.1-0.20240822203006-4d49adab4de7 h1:uFsXVBE9Qr4ZoF094vE6iYTLDl0qCiKzYXlL6UeWObU= github.com/tailscale/netlink v1.1.1-0.20240822203006-4d49adab4de7/go.mod h1:NzVQi3Mleb+qzq8VmcWpSkcSYxXIg0DkI6XDzpVkhJ0= github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc h1:24heQPtnFR+yfntqhI3oAu9i27nEojcQ4NuBQOo5ZFA= github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc/go.mod h1:f93CXfllFsO9ZQVq+Zocb1Gp4G5Fz0b0rXHLOzt/Djc= -github.com/tailscale/setec v0.0.0-20250305161714-445cadbbca3d h1:mnqtPWYyvNiPU9l9tzO2YbHXU/xV664XthZYA26lOiE= 
-github.com/tailscale/setec v0.0.0-20250305161714-445cadbbca3d/go.mod h1:9BzmlFc3OLqLzLTF/5AY+BMs+clxMqyhSGzgXIm8mNI= -github.com/tailscale/squibble v0.0.0-20250108170732-a4ca58afa694 h1:95eIP97c88cqAFU/8nURjgI9xxPbD+Ci6mY/a79BI/w= -github.com/tailscale/squibble v0.0.0-20250108170732-a4ca58afa694/go.mod h1:veguaG8tVg1H/JG5RfpoUW41I+O8ClPElo/fTYr8mMk= -github.com/tailscale/squibble v0.0.0-20251030164342-4d5df9caa993 h1:FyiiAvDAxpB0DrW2GW3KOVfi3YFOtsQUEeFWbf55JJU= -github.com/tailscale/squibble v0.0.0-20251030164342-4d5df9caa993/go.mod h1:xJkMmR3t+thnUQhA3Q4m2VSlS5pcOq+CIjmU/xfKKx4= -github.com/tailscale/tailsql v0.0.0-20250421235516-02f85f087b97 h1:JJkDnrAhHvOCttk8z9xeZzcDlzzkRA7+Duxj9cwOyxk= -github.com/tailscale/tailsql v0.0.0-20250421235516-02f85f087b97/go.mod h1:9jS8HxwsP2fU4ESZ7DZL+fpH/U66EVlVMzdgznH12RM= +github.com/tailscale/setec v0.0.0-20251203133219-2ab774e4129a h1:TApskGPim53XY5WRt5hX4DnO8V6CmVoimSklryIoGMM= +github.com/tailscale/setec v0.0.0-20251203133219-2ab774e4129a/go.mod h1:+6WyG6kub5/5uPsMdYQuSti8i6F5WuKpFWLQnZt/Mms= +github.com/tailscale/squibble v0.0.0-20251104223530-a961feffb67f h1:CL6gu95Y1o2ko4XiWPvWkJka0QmQWcUyPywWVWDPQbQ= +github.com/tailscale/squibble v0.0.0-20251104223530-a961feffb67f/go.mod h1:xJkMmR3t+thnUQhA3Q4m2VSlS5pcOq+CIjmU/xfKKx4= +github.com/tailscale/tailsql v0.0.0-20260105194658-001575c3ca09 h1:Fc9lE2cDYJbBLpCqnVmoLdf7McPqoHZiDxDPPpkJM04= +github.com/tailscale/tailsql v0.0.0-20260105194658-001575c3ca09/go.mod h1:QMNhC4XGFiXKngHVLXE+ERDmQoH0s5fD7AUxupykocQ= github.com/tailscale/web-client-prebuilt v0.0.0-20250124233751-d4cd19a26976 h1:UBPHPtv8+nEAy2PD8RyAhOYvau1ek0HDJqLS/Pysi14= github.com/tailscale/web-client-prebuilt v0.0.0-20250124233751-d4cd19a26976/go.mod h1:agQPE6y6ldqCOui2gkIh7ZMztTkIQKH049tv8siLuNQ= github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6 h1:l10Gi6w9jxvinoiq15g8OToDdASBni4CyJOdHY1Hr8M= @@ -487,7 +486,6 @@ github.com/u-root/u-root v0.14.0 h1:Ka4T10EEML7dQ5XDvO9c3MBN8z4nuSnGjcd1jmU2ivg= github.com/u-root/u-root v0.14.0/go.mod h1:hAyZorapJe4qzbLWlAkmSVCJGbfoU9Pu4jpJ1WMluqE= github.com/u-root/uio v0.0.0-20240224005618-d2acac8f3701 h1:pyC9PaHYZFgEKFdlp3G8RaCKgVpHZnecvArXvPXcFkM= github.com/u-root/uio v0.0.0-20240224005618-d2acac8f3701/go.mod h1:P3a5rG4X7tI17Nn3aOIAYr5HbIMukwXG0urG0WuL8OA= -github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0= github.com/vishvananda/netns v0.0.5 h1:DfiHV+j8bA32MFM7bfEunvT8IAqQ/NzSJHtcmW5zdEY= github.com/vishvananda/netns v0.0.5/go.mod h1:SpkAiCQRtJ6TvvxPnOSyH3BMl6unz3xZlaprSwhNNJM= github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= @@ -503,30 +501,30 @@ github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778/go.mod h1:2MuV+tbUrU1z github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no= github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM= github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= -go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA= -go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0 h1:yd02MEjBdJkG3uabWP9apV+OuWRIXGDuJEUJbOHmCFU= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0/go.mod h1:umTcuxiv1n/s/S6/c2AT/g2CQ7u5C59sHDNmfSwgz7Q= -go.opentelemetry.io/otel v1.37.0 
h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ= -go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I= +go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64= +go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0 h1:ssfIgGNANqpVFCndZvcuyKbl0g+UAVcbBcqGkG28H0Y= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0/go.mod h1:GQ/474YrbE4Jx8gZ4q5I4hrhUzM6UPzyrqJYV2AqPoQ= +go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48= +go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8= go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0 h1:dNzwXjZKpMpE2JhmO+9HsPl42NIXFIFSUSSs0fiqra0= go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0/go.mod h1:90PoxvaEB5n6AOdZvi+yWJQoE95U8Dhhw2bSyRqnTD0= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0 h1:nRVXXvf78e00EwY6Wp0YII8ww2JVWshZ20HfTlE11AM= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0/go.mod h1:r49hO7CgrxY9Voaj3Xe8pANWtr0Oq916d0XAmOoCZAQ= -go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE= -go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E= -go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI= -go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg= -go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFhbjxHHspCPc= -go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps= -go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4= -go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0= +go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0= +go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs= +go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18= +go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE= +go.opentelemetry.io/otel/sdk/metric v1.39.0 h1:cXMVVFVgsIf2YL6QkRF4Urbr/aMInf+2WKg+sEJTtB8= +go.opentelemetry.io/otel/sdk/metric v1.39.0/go.mod h1:xq9HEVH7qeX69/JnwEfp6fVq5wosJsY1mt4lLfYdVew= +go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI= +go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA= go.opentelemetry.io/proto/otlp v1.6.0 h1:jQjP+AQyTf+Fe7OKj/MfkDrmK4MNVtw2NpXsf9fefDI= go.opentelemetry.io/proto/otlp v1.6.0/go.mod h1:cicgGehlFuNdgZkcALOCh3VE6K/u2tAjzlRhDwmVpZc= go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= -go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= -go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= +go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0= +go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8= go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= go4.org/mem v0.0.0-20240501181205-ae6ca9944745 
h1:Tl++JLUCe4sxGu8cTpDzRLd3tN7US4hOxG5YpKCzkek= @@ -536,36 +534,34 @@ go4.org/netipx v0.0.0-20231129151722-fdeea329fbba/go.mod h1:PLyyIXexvUFg3Owu6p/W golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU= -golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04= -golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0= -golang.org/x/exp v0.0.0-20251009144603-d2f985daa21b h1:18qgiDvlvH7kk8Ioa8Ov+K6xCi0GMvmGfGW0sgd/SYA= -golang.org/x/exp v0.0.0-20251009144603-d2f985daa21b/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= +golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU= +golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY= +golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70= golang.org/x/exp/typeparams v0.0.0-20240314144324-c7f7c6466f7f h1:phY1HzDcf18Aq9A8KkmRtY9WvOFIxN8wgfvy6Zm1DV8= golang.org/x/exp/typeparams v0.0.0-20240314144324-c7f7c6466f7f/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= golang.org/x/image v0.27.0 h1:C8gA4oWU/tKkdCfYT6T2u4faJu3MeNS5O8UPWlPF61w= golang.org/x/image v0.27.0/go.mod h1:xbdrClrAUway1MUTEZDq9mz/UpRwYAkFFNUslZtcB+g= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= -golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA= -golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w= +golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk= +golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= -golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4= -golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210= -golang.org/x/oauth2 v0.32.0 h1:jsCblLleRMDrxMN29H3z/k1KliIvpLgCkE6R8FXXNgY= -golang.org/x/oauth2 v0.32.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA= +golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU= +golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY= +golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw= +golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.1.0/go.mod 
h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug= -golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200728102440-3e129f6d46b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -586,8 +582,8 @@ golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= -golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ= -golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= +golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ= +golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= @@ -595,24 +591,24 @@ golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuX golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk= -golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q= -golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss= +golang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q= +golang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= -golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k= -golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM= -golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0= -golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg= +golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU= +golang.org/x/text v0.32.0/go.mod 
h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY= +golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE= +golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= -golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ= -golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs= +golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ= +golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 h1:B82qJJgjvYKsXS9jeunTOisW56dUokqW/FOteYJJ/yg= @@ -621,51 +617,50 @@ golang.zx2c4.com/wireguard/windows v0.5.3 h1:On6j2Rpn3OEMXqBq00QEDC7bWSZrPIHKIus golang.zx2c4.com/wireguard/windows v0.5.3/go.mod h1:9TEe8TJmtwyQebdFwAkEWOPr3prrtqm+REGFifP60hI= gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk= gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E= -google.golang.org/genproto/googleapis/api v0.0.0-20250929231259-57b25ae835d4 h1:8XJ4pajGwOlasW+L13MnEGA8W4115jJySQtVfS2/IBU= -google.golang.org/genproto/googleapis/api v0.0.0-20250929231259-57b25ae835d4/go.mod h1:NnuHhy+bxcg30o7FnVAZbXsPHUDQ9qKWAQKCD7VxFtk= -google.golang.org/genproto/googleapis/rpc v0.0.0-20250929231259-57b25ae835d4 h1:i8QOKZfYg6AbGVZzUAY3LrNWCKF8O6zFisU9Wl9RER4= -google.golang.org/genproto/googleapis/rpc v0.0.0-20250929231259-57b25ae835d4/go.mod h1:HSkG/KdJWusxU1F6CNrwNDjBMgisKxGnc5dAZfT0mjQ= -google.golang.org/grpc v1.75.1 h1:/ODCNEuf9VghjgO3rqLcfg8fiOP0nSluljWFlDxELLI= -google.golang.org/grpc v1.75.1/go.mod h1:JtPAzKiq4v1xcAB2hydNlWI2RnF85XXcV0mhKXr2ecQ= -google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE= -google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco= +google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b h1:uA40e2M6fYRBf0+8uN5mLlqUtV192iiksiICIBkYJ1E= +google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b/go.mod h1:Xa7le7qx2vmqB/SzWUBa7KdMjpdpAHlh5QCSnjessQk= +google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b h1:Mv8VFug0MP9e5vUxfBcE3vUkV6CImK3cMNMIDFjmzxU= +google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ= +google.golang.org/grpc v1.78.0 h1:K1XZG/yGDJnzMdd/uZHAkVqJE+xIDOcmdSFZkBUicNc= +google.golang.org/grpc v1.78.0/go.mod h1:I47qjTo4OKbMkjA/aOOwxDIiPSBofUtQUI5EfpWvW7U= +google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE= +google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 
v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= -gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gorm.io/driver/postgres v1.6.0 h1:2dxzU8xJ+ivvqTRph34QX+WrRaJlmfyPqXmoGVjMBa4= gorm.io/driver/postgres v1.6.0/go.mod h1:vUw0mrGgrTK+uPHEhAdV4sfFELrByKVGnaVRkXDhtWo= -gorm.io/gorm v1.31.0 h1:0VlycGreVhK7RF/Bwt51Fk8v0xLiiiFdbGDPIZQ7mJY= -gorm.io/gorm v1.31.0/go.mod h1:XyQVbO2k6YkOis7C2437jSit3SsDK72s7n7rsSHd+Gs= +gorm.io/gorm v1.31.1 h1:7CA8FTFz/gRfgqgpeKIBcervUn3xSyPUmr6B2WXJ7kg= +gorm.io/gorm v1.31.1/go.mod h1:XyQVbO2k6YkOis7C2437jSit3SsDK72s7n7rsSHd+Gs= gotest.tools/v3 v3.5.1 h1:EENdUnS3pdur5nybKYIh2Vfgc8IUNBjxDPSjtiJcOzU= gotest.tools/v3 v3.5.1/go.mod h1:isy3WKz7GK6uNw/sbHzfKBLvlvXwUyV06n6brMxxopU= gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633 h1:2gap+Kh/3F47cO6hAu3idFvsJ0ue6TRcEi2IUkv/F8k= gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633/go.mod h1:5DMfjtclAbTIjbXqO1qCe2K5GKKxWz2JHvCChuTcJEM= -honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI= -honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4= +honnef.co/go/tools v0.7.0-0.dev.0.20251022135355-8273271481d0 h1:5SXjd4ET5dYijLaf0O3aOenC0Z4ZafIWSpjUzsQaNho= +honnef.co/go/tools v0.7.0-0.dev.0.20251022135355-8273271481d0/go.mod h1:EPDDhEZqVHhWuPI5zPAsjU0U7v9xNIWjoOVyZ5ZcniQ= howett.net/plist v1.0.0 h1:7CrbWYbPPO/PyNy38b2EB/+gYbjCe2DXBxgtOOZbSQM= howett.net/plist v1.0.0/go.mod h1:lqaXoTrLY4hg8tnEzNru53gicrbv7rrk+2xJA/7hw9g= -modernc.org/cc/v4 v4.26.5 h1:xM3bX7Mve6G8K8b+T11ReenJOT+BmVqQj0FY5T4+5Y4= -modernc.org/cc/v4 v4.26.5/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0= -modernc.org/ccgo/v4 v4.28.1 h1:wPKYn5EC/mYTqBO373jKjvX2n+3+aK7+sICCv4Fjy1A= -modernc.org/ccgo/v4 v4.28.1/go.mod h1:uD+4RnfrVgE6ec9NGguUNdhqzNIeeomeXf6CL0GTE5Q= +modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis= +modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0= +modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc= +modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM= modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA= modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc= modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI= modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito= +modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE= +modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY= modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks= modernc.org/goabi0 v0.2.0/go.mod 
h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI= -modernc.org/libc v1.66.10 h1:yZkb3YeLx4oynyR+iUsXsybsX4Ubx7MQlSYEw4yj59A= -modernc.org/libc v1.66.10/go.mod h1:8vGSEwvoUoltr4dlywvHqjtAqHBaw0j1jI7iFBTAr2I= +modernc.org/libc v1.67.6 h1:eVOQvpModVLKOdT+LvBPjdQqfrZq+pC39BygcT+E7OI= +modernc.org/libc v1.67.6/go.mod h1:JAhxUVlolfYDErnwiqaLvUqc8nfb2r6S6slAgZOnaiE= modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU= modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg= modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI= @@ -674,16 +669,16 @@ modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8= modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns= modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w= modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE= -modernc.org/sqlite v1.39.1 h1:H+/wGFzuSCIEVCvXYVHX5RQglwhMOvtHSv+VtidL2r4= -modernc.org/sqlite v1.39.1/go.mod h1:9fjQZ0mB1LLP0GYrp39oOJXx/I2sxEnZtzCmEQIKvGE= +modernc.org/sqlite v1.44.3 h1:+39JvV/HWMcYslAwRxHb8067w+2zowvFOUrOWIy9PjY= +modernc.org/sqlite v1.44.3/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA= modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0= modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A= modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y= modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM= software.sslmate.com/src/go-pkcs12 v0.4.0 h1:H2g08FrTvSFKUj+D309j1DPfk5APnIdAQAB8aEykJ5k= software.sslmate.com/src/go-pkcs12 v0.4.0/go.mod h1:Qiz0EyvDRJjjxGyUQa2cCNZn/wMyzrRJ/qcDXOQazLI= -tailscale.com v1.86.5 h1:yBtWFjuLYDmxVnfnvPbZNZcKADCYgNfMd0rUAOA9XCs= -tailscale.com v1.86.5/go.mod h1:Lm8dnzU2i/Emw15r6sl3FRNp/liSQ/nYw6ZSQvIdZ1M= +tailscale.com v1.94.0 h1:5oW3SF35aU9ekHDhP2J4CHewnA2NxE7SRilDB2pVjaA= +tailscale.com v1.94.0/go.mod h1:gLnVrEOP32GWvroaAHHGhjSGMPJ1i4DvqNwEg+Yuov4= zgo.at/zcache/v2 v2.4.1 h1:Dfjoi8yI0Uq7NCc4lo2kaQJJmp9Mijo21gef+oJstbY= zgo.at/zcache/v2 v2.4.1/go.mod h1:gyCeoLVo01QjDZynjime8xUGHHMbsLiPyUTBpDGd4Gk= zombiezen.com/go/postgrestest v1.0.1 h1:aXoADQAJmZDU3+xilYVut0pHhgc0sF8ZspPW9gFNwP4= diff --git a/hscontrol/app.go b/hscontrol/app.go index eb5528ba..aa011503 100644 --- a/hscontrol/app.go +++ b/hscontrol/app.go @@ -5,6 +5,7 @@ import ( "crypto/tls" "errors" "fmt" + "io" "net" "net/http" _ "net/http/pprof" // nolint @@ -270,7 +271,7 @@ func (h *Headscale) scheduledTasks(ctx context.Context) { return case <-expireTicker.C: - var expiredNodeChanges []change.ChangeSet + var expiredNodeChanges []change.Change var changed bool lastExpiryCheck, expiredNodeChanges, changed = h.state.ExpireExpiredNodes(lastExpiryCheck) @@ -304,7 +305,7 @@ func (h *Headscale) scheduledTasks(ctx context.Context) { } h.state.SetDERPMap(derpMap) - h.Change(change.DERPSet) + h.Change(change.DERPMap()) case records, ok := <-extraRecordsUpdate: if !ok { @@ -312,7 +313,7 @@ func (h *Headscale) scheduledTasks(ctx context.Context) { } h.cfg.TailcfgDNSConfig.ExtraRecords = records - h.Change(change.ExtraRecordsSet) + h.Change(change.ExtraRecords()) } } } @@ -476,8 +477,8 @@ func (h *Headscale) createRouter(grpcMux *grpcRuntime.ServeMux) *mux.Router { apiRouter := router.PathPrefix("/api").Subrouter() apiRouter.Use(h.httpAuthenticationMiddleware) apiRouter.PathPrefix("/v1/").HandlerFunc(grpcMux.ServeHTTP) - - router.PathPrefix("/").HandlerFunc(notFoundHandler) + 
router.HandleFunc("/favicon.ico", FaviconHandler) + router.PathPrefix("/").HandlerFunc(BlankHandler) return router } @@ -729,16 +730,27 @@ func (h *Headscale) Serve() error { log.Info(). Msgf("listening and serving HTTP on: %s", h.cfg.Addr) - debugHTTPListener, err := net.Listen("tcp", h.cfg.MetricsAddr) - if err != nil { - return fmt.Errorf("failed to bind to TCP address: %w", err) + // Only start debug/metrics server if address is configured + var debugHTTPServer *http.Server + + var debugHTTPListener net.Listener + + if h.cfg.MetricsAddr != "" { + debugHTTPListener, err = (&net.ListenConfig{}).Listen(ctx, "tcp", h.cfg.MetricsAddr) + if err != nil { + return fmt.Errorf("failed to bind to TCP address: %w", err) + } + + debugHTTPServer = h.debugHTTPServer() + + errorGroup.Go(func() error { return debugHTTPServer.Serve(debugHTTPListener) }) + + log.Info(). + Msgf("listening and serving debug and metrics on: %s", h.cfg.MetricsAddr) + } else { + log.Info().Msg("metrics server disabled (metrics_listen_addr is empty)") } - debugHTTPServer := h.debugHTTPServer() - errorGroup.Go(func() error { return debugHTTPServer.Serve(debugHTTPListener) }) - - log.Info(). - Msgf("listening and serving debug and metrics on: %s", h.cfg.MetricsAddr) var tailsqlContext context.Context if tailsqlEnabled { @@ -794,16 +806,25 @@ func (h *Headscale) Serve() error { h.ephemeralGC.Close() // Gracefully shut down servers - ctx, cancel := context.WithTimeout( - context.Background(), + shutdownCtx, cancel := context.WithTimeout( + context.WithoutCancel(ctx), types.HTTPShutdownTimeout, ) - info("shutting down debug http server") - if err := debugHTTPServer.Shutdown(ctx); err != nil { - log.Error().Err(err).Msg("failed to shutdown prometheus http") + defer cancel() + + if debugHTTPServer != nil { + info("shutting down debug http server") + + err := debugHTTPServer.Shutdown(shutdownCtx) + if err != nil { + log.Error().Err(err).Msg("failed to shutdown prometheus http") + } } + info("shutting down main http server") - if err := httpServer.Shutdown(ctx); err != nil { + + err := httpServer.Shutdown(shutdownCtx) + if err != nil { log.Error().Err(err).Msg("failed to shutdown http") } @@ -829,7 +850,10 @@ func (h *Headscale) Serve() error { // Close network listeners info("closing network listeners") - debugHTTPListener.Close() + + if debugHTTPListener != nil { + debugHTTPListener.Close() + } httpListener.Close() grpcGatewayConn.Close() @@ -847,9 +871,6 @@ func (h *Headscale) Serve() error { log.Info(). Msg("Headscale stopped") - // And we're done: - cancel() - return } } @@ -877,6 +898,11 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) { Cache: autocert.DirCache(h.cfg.TLS.LetsEncrypt.CacheDir), Client: &acme.Client{ DirectoryURL: h.cfg.ACMEURL, + HTTPClient: &http.Client{ + Transport: &acmeLogger{ + rt: http.DefaultTransport, + }, + }, }, Email: h.cfg.ACMEEmail, } @@ -935,18 +961,6 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) { } } -func notFoundHandler( - writer http.ResponseWriter, - req *http.Request, -) { - log.Trace(). - Interface("header", req.Header). - Interface("proto", req.Proto). - Interface("url", req.URL). - Msg("Request did not match") - writer.WriteHeader(http.StatusNotFound) -} - func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) { dir := filepath.Dir(path) err := util.EnsureDir(dir) @@ -994,6 +1008,31 @@ func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) { // Change is used to send changes to nodes. 
// All change should be enqueued here and empty will be automatically // ignored. -func (h *Headscale) Change(cs ...change.ChangeSet) { +func (h *Headscale) Change(cs ...change.Change) { h.mapBatcher.AddWork(cs...) } + +// Provide some middleware that can inspect the ACME/autocert https calls +// and log when things are failing. +type acmeLogger struct { + rt http.RoundTripper +} + +// RoundTrip will log when ACME/autocert failures happen either when err != nil OR +// when http status codes indicate a failure has occurred. +func (l *acmeLogger) RoundTrip(req *http.Request) (*http.Response, error) { + resp, err := l.rt.RoundTrip(req) + if err != nil { + log.Error().Err(err).Str("url", req.URL.String()).Msg("ACME request failed") + return nil, err + } + + if resp.StatusCode >= http.StatusBadRequest { + defer resp.Body.Close() + + body, _ := io.ReadAll(resp.Body) + log.Error().Int("status_code", resp.StatusCode).Str("url", req.URL.String()).Bytes("body", body).Msg("ACME request returned error") + } + + return resp, nil +} diff --git a/hscontrol/assets/assets.go b/hscontrol/assets/assets.go new file mode 100644 index 00000000..13904247 --- /dev/null +++ b/hscontrol/assets/assets.go @@ -0,0 +1,24 @@ +// Package assets provides embedded static assets for Headscale. +// All static files (favicon, CSS, SVG) are embedded here for +// centralized asset management. +package assets + +import ( + _ "embed" +) + +// Favicon is the embedded favicon.png file served at /favicon.ico +// +//go:embed favicon.png +var Favicon []byte + +// CSS is the embedded style.css stylesheet used in HTML templates. +// Contains Material for MkDocs design system styles. +// +//go:embed style.css +var CSS string + +// SVG is the embedded headscale.svg logo used in HTML templates. +// +//go:embed headscale.svg +var SVG string diff --git a/hscontrol/assets/favicon.png b/hscontrol/assets/favicon.png new file mode 100644 index 00000000..4989810f Binary files /dev/null and b/hscontrol/assets/favicon.png differ diff --git a/hscontrol/assets/headscale.svg b/hscontrol/assets/headscale.svg new file mode 100644 index 00000000..caf19697 --- /dev/null +++ b/hscontrol/assets/headscale.svg @@ -0,0 +1 @@ + diff --git a/hscontrol/assets/oidc_callback_template.html b/hscontrol/assets/oidc_callback_template.html deleted file mode 100644 index 2236f365..00000000 --- a/hscontrol/assets/oidc_callback_template.html +++ /dev/null @@ -1,307 +0,0 @@ - - - - - - Headscale Authentication Succeeded - - - -
- [deleted OIDC callback template, 307 lines of HTML; recoverable text: "Signed in via your OIDC provider", "{{.Verb}} as {{.User}}, you can now close this window.", "Not sure how to get started? Check out beginner and advanced guides, or read more in the documentation.", "View the headscale documentation", "View the tailscale documentation"]
- - diff --git a/hscontrol/assets/style.css b/hscontrol/assets/style.css new file mode 100644 index 00000000..d1eac385 --- /dev/null +++ b/hscontrol/assets/style.css @@ -0,0 +1,143 @@ +/* CSS Variables from Material for MkDocs */ +:root { + --md-default-fg-color: rgba(0, 0, 0, 0.87); + --md-default-fg-color--light: rgba(0, 0, 0, 0.54); + --md-default-fg-color--lighter: rgba(0, 0, 0, 0.32); + --md-default-fg-color--lightest: rgba(0, 0, 0, 0.07); + --md-code-fg-color: #36464e; + --md-code-bg-color: #f5f5f5; + --md-primary-fg-color: #4051b5; + --md-accent-fg-color: #526cfe; + --md-typeset-a-color: var(--md-primary-fg-color); + --md-text-font: "Roboto", -apple-system, BlinkMacSystemFont, "Segoe UI", "Helvetica Neue", Arial, sans-serif; + --md-code-font: "Roboto Mono", "SF Mono", Monaco, "Cascadia Code", Consolas, "Courier New", monospace; +} + +/* Base Typography */ +.md-typeset { + font-size: 0.8rem; + line-height: 1.6; + color: var(--md-default-fg-color); + font-family: var(--md-text-font); + overflow-wrap: break-word; + text-align: left; +} + +/* Headings */ +.md-typeset h1 { + color: var(--md-default-fg-color--light); + font-size: 2em; + line-height: 1.3; + margin: 0 0 1.25em; + font-weight: 300; + letter-spacing: -0.01em; +} + +.md-typeset h1:not(:first-child) { + margin-top: 2em; +} + +.md-typeset h2 { + font-size: 1.5625em; + line-height: 1.4; + margin: 2.4em 0 0.64em; + font-weight: 300; + letter-spacing: -0.01em; + color: var(--md-default-fg-color--light); +} + +.md-typeset h3 { + font-size: 1.25em; + line-height: 1.5; + margin: 2em 0 0.8em; + font-weight: 400; + letter-spacing: -0.01em; + color: var(--md-default-fg-color--light); +} + +/* Paragraphs and block elements */ +.md-typeset p { + margin: 1em 0; +} + +.md-typeset blockquote, +.md-typeset dl, +.md-typeset figure, +.md-typeset ol, +.md-typeset pre, +.md-typeset ul { + margin-bottom: 1em; + margin-top: 1em; +} + +/* Lists */ +.md-typeset ol, +.md-typeset ul { + padding-left: 2em; +} + +/* Links */ +.md-typeset a { + color: var(--md-typeset-a-color); + text-decoration: none; + word-break: break-word; +} + +.md-typeset a:hover, +.md-typeset a:focus { + color: var(--md-accent-fg-color); +} + +/* Code (inline) */ +.md-typeset code { + background-color: var(--md-code-bg-color); + color: var(--md-code-fg-color); + border-radius: 0.1rem; + font-size: 0.85em; + font-family: var(--md-code-font); + padding: 0 0.2941176471em; + word-break: break-word; +} + +/* Code blocks (pre) */ +.md-typeset pre { + display: block; + line-height: 1.4; + margin: 1em 0; + overflow-x: auto; +} + +.md-typeset pre > code { + background-color: var(--md-code-bg-color); + color: var(--md-code-fg-color); + display: block; + padding: 0.7720588235em 1.1764705882em; + font-family: var(--md-code-font); + font-size: 0.85em; + line-height: 1.4; + overflow-wrap: break-word; + word-wrap: break-word; + white-space: pre-wrap; +} + +/* Links in code */ +.md-typeset a code { + color: currentcolor; +} + +/* Logo */ +.headscale-logo { + display: block; + width: 400px; + max-width: 100%; + height: auto; + margin: 0 0 3rem 0; + padding: 0; +} + +@media (max-width: 768px) { + .headscale-logo { + width: 200px; + margin-left: 0; + } +} diff --git a/hscontrol/auth.go b/hscontrol/auth.go index 447035da..ac5968e3 100644 --- a/hscontrol/auth.go +++ b/hscontrol/auth.go @@ -11,7 +11,6 @@ import ( "time" "github.com/juanfont/headscale/hscontrol/types" - "github.com/juanfont/headscale/hscontrol/types/change" "github.com/juanfont/headscale/hscontrol/util" "github.com/rs/zerolog/log" 
"gorm.io/gorm" @@ -234,11 +233,7 @@ func isAuthKey(req tailcfg.RegisterRequest) bool { } func nodeToRegisterResponse(node types.NodeView) *tailcfg.RegisterResponse { - return &tailcfg.RegisterResponse{ - // TODO(kradalby): Only send for user-owned nodes - // and not tagged nodes when tags is working. - User: node.UserView().TailscaleUser(), - Login: node.UserView().TailscaleLogin(), + resp := &tailcfg.RegisterResponse{ NodeKeyExpired: node.IsExpired(), // Headscale does not implement the concept of machine authorization @@ -246,6 +241,18 @@ func nodeToRegisterResponse(node types.NodeView) *tailcfg.RegisterResponse { // Revisit this if #2176 gets implemented. MachineAuthorized: true, } + + // For tagged nodes, use the TaggedDevices special user + // For user-owned nodes, include User and Login information from the actual user + if node.IsTagged() { + resp.User = types.TaggedDevices.View().TailscaleUser() + resp.Login = types.TaggedDevices.View().TailscaleLogin() + } else if node.Owner().Valid() { + resp.User = node.Owner().TailscaleUser() + resp.Login = node.Owner().TailscaleLogin() + } + + return resp } func (h *Headscale) waitForFollowup( @@ -364,16 +371,13 @@ func (h *Headscale) handleRegisterWithAuthKey( // eventbus. // TODO(kradalby): This needs to be ran as part of the batcher maybe? // now since we dont update the node/pol here anymore - routeChange := h.state.AutoApproveRoutes(node) - - if _, _, err := h.state.SaveNode(node); err != nil { - return nil, fmt.Errorf("saving auto approved routes to node: %w", err) + routesChange, err := h.state.AutoApproveRoutes(node) + if err != nil { + return nil, fmt.Errorf("auto approving routes: %w", err) } - if routeChange && changed.Empty() { - changed = change.NodeAdded(node.ID()) - } - h.Change(changed) + // Send both changes. Empty changes are ignored by Change(). + h.Change(changed, routesChange) // TODO(kradalby): I think this is covered above, but we need to validate that. // // If policy changed due to node registration, send a separate policy change @@ -385,8 +389,8 @@ func (h *Headscale) handleRegisterWithAuthKey( resp := &tailcfg.RegisterResponse{ MachineAuthorized: true, NodeKeyExpired: node.IsExpired(), - User: node.UserView().TailscaleUser(), - Login: node.UserView().TailscaleLogin(), + User: node.Owner().TailscaleUser(), + Login: node.Owner().TailscaleLogin(), } log.Trace(). diff --git a/hscontrol/auth_tags_test.go b/hscontrol/auth_tags_test.go new file mode 100644 index 00000000..bbaa834b --- /dev/null +++ b/hscontrol/auth_tags_test.go @@ -0,0 +1,689 @@ +package hscontrol + +import ( + "testing" + "time" + + "github.com/juanfont/headscale/hscontrol/types" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "tailscale.com/tailcfg" + "tailscale.com/types/key" +) + +// TestTaggedPreAuthKeyCreatesTaggedNode tests that a PreAuthKey with tags creates +// a tagged node with: +// - Tags from the PreAuthKey +// - UserID tracking who created the key (informational "created by") +// - IsTagged() returns true. 
+func TestTaggedPreAuthKeyCreatesTaggedNode(t *testing.T) { + app := createTestApp(t) + + user := app.state.CreateUserForTest("tag-creator") + tags := []string{"tag:server", "tag:prod"} + + // Create a tagged PreAuthKey + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags) + require.NoError(t, err) + require.NotEmpty(t, pak.Tags, "PreAuthKey should have tags") + require.ElementsMatch(t, tags, pak.Tags, "PreAuthKey should have specified tags") + + // Register a node using the tagged key + machineKey := key.NewMachine() + nodeKey := key.NewNode() + + regReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-node", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public()) + require.NoError(t, err) + require.True(t, resp.MachineAuthorized) + + // Verify the node was created with tags + node, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + + // Critical assertions for tags-as-identity model + assert.True(t, node.IsTagged(), "Node should be tagged") + assert.ElementsMatch(t, tags, node.Tags().AsSlice(), "Node should have tags from PreAuthKey") + assert.True(t, node.UserID().Valid(), "Node should have UserID tracking creator") + assert.Equal(t, user.ID, node.UserID().Get(), "UserID should track PreAuthKey creator") + + // Verify node is identified correctly + assert.True(t, node.IsTagged(), "Tagged node is not user-owned") + assert.True(t, node.HasTag("tag:server"), "Node should have tag:server") + assert.True(t, node.HasTag("tag:prod"), "Node should have tag:prod") + assert.False(t, node.HasTag("tag:other"), "Node should not have tag:other") +} + +// TestReAuthDoesNotReapplyTags tests that when a node re-authenticates using the +// same PreAuthKey, the tags are NOT re-applied. Tags are only set during initial +// authentication. This is critical for the container restart scenario (#2830). +// +// NOTE: This test verifies that re-authentication preserves the node's current tags +// without testing tag modification via SetNodeTags (which requires ACL policy setup). 
+func TestReAuthDoesNotReapplyTags(t *testing.T) { + app := createTestApp(t) + + user := app.state.CreateUserForTest("tag-creator") + initialTags := []string{"tag:server", "tag:dev"} + + // Create a tagged PreAuthKey with reusable=true for re-auth + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, initialTags) + require.NoError(t, err) + + // Initial registration + machineKey := key.NewMachine() + nodeKey := key.NewNode() + + regReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "reauth-test-node", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public()) + require.NoError(t, err) + require.True(t, resp.MachineAuthorized) + + // Verify initial tags + node, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + require.True(t, node.IsTagged()) + require.ElementsMatch(t, initialTags, node.Tags().AsSlice()) + + // Re-authenticate with the SAME PreAuthKey (container restart scenario) + // Key behavior: Tags should NOT be re-applied during re-auth + reAuthReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, // Same key + }, + NodeKey: nodeKey.Public(), // Same node key + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "reauth-test-node", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + reAuthResp, err := app.handleRegisterWithAuthKey(reAuthReq, machineKey.Public()) + require.NoError(t, err) + require.True(t, reAuthResp.MachineAuthorized) + + // CRITICAL: Tags should remain unchanged after re-auth + // They should match the original tags, proving they weren't re-applied + nodeAfterReauth, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + assert.True(t, nodeAfterReauth.IsTagged(), "Node should still be tagged") + assert.ElementsMatch(t, initialTags, nodeAfterReauth.Tags().AsSlice(), "Tags should remain unchanged on re-auth") + + // Verify only one node was created (no duplicates) + nodes := app.state.ListNodesByUser(types.UserID(user.ID)) + assert.Equal(t, 1, nodes.Len(), "Should have exactly one node") +} + +// NOTE: TestSetTagsOnUserOwnedNode functionality is covered by gRPC tests in grpcv1_test.go +// which properly handle ACL policy setup. The test verifies that SetTags can convert +// user-owned nodes to tagged nodes while preserving UserID. + +// TestCannotRemoveAllTags tests that attempting to remove all tags from a +// tagged node fails with ErrCannotRemoveAllTags. Once a node is tagged, +// it must always have at least one tag (Tailscale requirement). 
+func TestCannotRemoveAllTags(t *testing.T) { + app := createTestApp(t) + + user := app.state.CreateUserForTest("tag-creator") + tags := []string{"tag:server"} + + // Create a tagged node + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags) + require.NoError(t, err) + + machineKey := key.NewMachine() + nodeKey := key.NewNode() + + regReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-node", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public()) + require.NoError(t, err) + require.True(t, resp.MachineAuthorized) + + // Verify node is tagged + node, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + require.True(t, node.IsTagged()) + + // Attempt to remove all tags by setting empty array + _, _, err = app.state.SetNodeTags(node.ID(), []string{}) + require.Error(t, err, "Should not be able to remove all tags") + require.ErrorIs(t, err, types.ErrCannotRemoveAllTags, "Error should be ErrCannotRemoveAllTags") + + // Verify node still has original tags + nodeAfter, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + assert.True(t, nodeAfter.IsTagged(), "Node should still be tagged") + assert.ElementsMatch(t, tags, nodeAfter.Tags().AsSlice(), "Tags should be unchanged") +} + +// TestUserOwnedNodeCreatedWithUntaggedPreAuthKey tests that using a PreAuthKey +// without tags creates a user-owned node (no tags, UserID is the owner). +func TestUserOwnedNodeCreatedWithUntaggedPreAuthKey(t *testing.T) { + app := createTestApp(t) + + user := app.state.CreateUserForTest("node-owner") + + // Create an untagged PreAuthKey + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) + require.NoError(t, err) + require.Empty(t, pak.Tags, "PreAuthKey should not be tagged") + require.Empty(t, pak.Tags, "PreAuthKey should have no tags") + + // Register a node + machineKey := key.NewMachine() + nodeKey := key.NewNode() + + regReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "user-owned-node", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public()) + require.NoError(t, err) + require.True(t, resp.MachineAuthorized) + + // Verify node is user-owned + node, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + + // Critical assertions for user-owned node + assert.False(t, node.IsTagged(), "Node should not be tagged") + assert.False(t, node.IsTagged(), "Node should be user-owned (not tagged)") + assert.Empty(t, node.Tags().AsSlice(), "Node should have no tags") + assert.True(t, node.UserID().Valid(), "Node should have UserID") + assert.Equal(t, user.ID, node.UserID().Get(), "UserID should be the PreAuthKey owner") +} + +// TestMultipleNodesWithSameReusableTaggedPreAuthKey tests that a reusable +// PreAuthKey with tags can be used to register multiple nodes, and all nodes +// receive the same tags from the key. 
+func TestMultipleNodesWithSameReusableTaggedPreAuthKey(t *testing.T) { + app := createTestApp(t) + + user := app.state.CreateUserForTest("tag-creator") + tags := []string{"tag:server", "tag:prod"} + + // Create a REUSABLE tagged PreAuthKey + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags) + require.NoError(t, err) + require.ElementsMatch(t, tags, pak.Tags) + + // Register first node + machineKey1 := key.NewMachine() + nodeKey1 := key.NewNode() + + regReq1 := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey1.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-node-1", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp1, err := app.handleRegisterWithAuthKey(regReq1, machineKey1.Public()) + require.NoError(t, err) + require.True(t, resp1.MachineAuthorized) + + // Register second node with SAME PreAuthKey + machineKey2 := key.NewMachine() + nodeKey2 := key.NewNode() + + regReq2 := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, // Same key + }, + NodeKey: nodeKey2.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-node-2", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp2, err := app.handleRegisterWithAuthKey(regReq2, machineKey2.Public()) + require.NoError(t, err) + require.True(t, resp2.MachineAuthorized) + + // Verify both nodes exist and have the same tags + node1, found := app.state.GetNodeByNodeKey(nodeKey1.Public()) + require.True(t, found) + node2, found := app.state.GetNodeByNodeKey(nodeKey2.Public()) + require.True(t, found) + + // Both nodes should be tagged with the same tags + assert.True(t, node1.IsTagged(), "First node should be tagged") + assert.True(t, node2.IsTagged(), "Second node should be tagged") + assert.ElementsMatch(t, tags, node1.Tags().AsSlice(), "First node should have PreAuthKey tags") + assert.ElementsMatch(t, tags, node2.Tags().AsSlice(), "Second node should have PreAuthKey tags") + + // Both nodes should track the same creator + assert.Equal(t, user.ID, node1.UserID().Get(), "First node should track creator") + assert.Equal(t, user.ID, node2.UserID().Get(), "Second node should track creator") + + // Verify we have exactly 2 nodes + nodes := app.state.ListNodesByUser(types.UserID(user.ID)) + assert.Equal(t, 2, nodes.Len(), "Should have exactly two nodes") +} + +// TestNonReusableTaggedPreAuthKey tests that a non-reusable PreAuthKey with tags +// can only be used once. The second attempt should fail. 
+func TestNonReusableTaggedPreAuthKey(t *testing.T) { + app := createTestApp(t) + + user := app.state.CreateUserForTest("tag-creator") + tags := []string{"tag:server"} + + // Create a NON-REUSABLE tagged PreAuthKey + pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, tags) + require.NoError(t, err) + require.ElementsMatch(t, tags, pak.Tags) + + // Register first node - should succeed + machineKey1 := key.NewMachine() + nodeKey1 := key.NewNode() + + regReq1 := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey1.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-node-1", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp1, err := app.handleRegisterWithAuthKey(regReq1, machineKey1.Public()) + require.NoError(t, err) + require.True(t, resp1.MachineAuthorized) + + // Verify first node was created with tags + node1, found := app.state.GetNodeByNodeKey(nodeKey1.Public()) + require.True(t, found) + assert.True(t, node1.IsTagged()) + assert.ElementsMatch(t, tags, node1.Tags().AsSlice()) + + // Attempt to register second node with SAME non-reusable key - should fail + machineKey2 := key.NewMachine() + nodeKey2 := key.NewNode() + + regReq2 := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, // Same non-reusable key + }, + NodeKey: nodeKey2.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-node-2", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + _, err = app.handleRegisterWithAuthKey(regReq2, machineKey2.Public()) + require.Error(t, err, "Should not be able to reuse non-reusable PreAuthKey") + + // Verify only one node was created + nodes := app.state.ListNodesByUser(types.UserID(user.ID)) + assert.Equal(t, 1, nodes.Len(), "Should have exactly one node") +} + +// TestExpiredTaggedPreAuthKey tests that an expired PreAuthKey with tags +// cannot be used to register a node. +func TestExpiredTaggedPreAuthKey(t *testing.T) { + app := createTestApp(t) + + user := app.state.CreateUserForTest("tag-creator") + tags := []string{"tag:server"} + + // Create a PreAuthKey that expires immediately + expiration := time.Now().Add(-1 * time.Hour) // Already expired + pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, &expiration, tags) + require.NoError(t, err) + require.ElementsMatch(t, tags, pak.Tags) + + // Attempt to register with expired key + machineKey := key.NewMachine() + nodeKey := key.NewNode() + + regReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-node", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + _, err = app.handleRegisterWithAuthKey(regReq, machineKey.Public()) + require.Error(t, err, "Should not be able to use expired PreAuthKey") + + // Verify no node was created + _, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + assert.False(t, found, "No node should be created with expired key") +} + +// TestSingleVsMultipleTags tests that PreAuthKeys work correctly with both +// a single tag and multiple tags. 
+func TestSingleVsMultipleTags(t *testing.T) { + app := createTestApp(t) + + user := app.state.CreateUserForTest("tag-creator") + + // Test with single tag + singleTag := []string{"tag:server"} + pak1, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, singleTag) + require.NoError(t, err) + + machineKey1 := key.NewMachine() + nodeKey1 := key.NewNode() + + regReq1 := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak1.Key, + }, + NodeKey: nodeKey1.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "single-tag-node", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp1, err := app.handleRegisterWithAuthKey(regReq1, machineKey1.Public()) + require.NoError(t, err) + require.True(t, resp1.MachineAuthorized) + + node1, found := app.state.GetNodeByNodeKey(nodeKey1.Public()) + require.True(t, found) + assert.True(t, node1.IsTagged()) + assert.ElementsMatch(t, singleTag, node1.Tags().AsSlice()) + + // Test with multiple tags + multipleTags := []string{"tag:server", "tag:prod", "tag:database"} + pak2, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, multipleTags) + require.NoError(t, err) + + machineKey2 := key.NewMachine() + nodeKey2 := key.NewNode() + + regReq2 := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak2.Key, + }, + NodeKey: nodeKey2.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "multi-tag-node", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp2, err := app.handleRegisterWithAuthKey(regReq2, machineKey2.Public()) + require.NoError(t, err) + require.True(t, resp2.MachineAuthorized) + + node2, found := app.state.GetNodeByNodeKey(nodeKey2.Public()) + require.True(t, found) + assert.True(t, node2.IsTagged()) + assert.ElementsMatch(t, multipleTags, node2.Tags().AsSlice()) + + // Verify HasTag works for all tags + assert.True(t, node2.HasTag("tag:server")) + assert.True(t, node2.HasTag("tag:prod")) + assert.True(t, node2.HasTag("tag:database")) + assert.False(t, node2.HasTag("tag:other")) +} + +// TestTaggedPreAuthKeyDisablesKeyExpiry tests that nodes registered with +// a tagged PreAuthKey have key expiry disabled (expiry is nil). 
+func TestTaggedPreAuthKeyDisablesKeyExpiry(t *testing.T) { + app := createTestApp(t) + + user := app.state.CreateUserForTest("tag-creator") + tags := []string{"tag:server", "tag:prod"} + + // Create a tagged PreAuthKey + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags) + require.NoError(t, err) + require.ElementsMatch(t, tags, pak.Tags) + + // Register a node using the tagged key + machineKey := key.NewMachine() + nodeKey := key.NewNode() + + // Client requests an expiry time, but for tagged nodes it should be ignored + clientRequestedExpiry := time.Now().Add(24 * time.Hour) + + regReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-expiry-test", + }, + Expiry: clientRequestedExpiry, + } + + resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public()) + require.NoError(t, err) + require.True(t, resp.MachineAuthorized) + + // Verify the node has key expiry DISABLED (expiry is nil/zero) + node, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + + // Critical assertion: Tagged nodes should have expiry disabled + assert.True(t, node.IsTagged(), "Node should be tagged") + assert.False(t, node.Expiry().Valid(), "Tagged node should have expiry disabled (nil)") +} + +// TestUntaggedPreAuthKeyPreservesKeyExpiry tests that nodes registered with +// an untagged PreAuthKey preserve the client's requested key expiry. +func TestUntaggedPreAuthKeyPreservesKeyExpiry(t *testing.T) { + app := createTestApp(t) + + user := app.state.CreateUserForTest("node-owner") + + // Create an untagged PreAuthKey + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) + require.NoError(t, err) + require.Empty(t, pak.Tags, "PreAuthKey should not be tagged") + + // Register a node + machineKey := key.NewMachine() + nodeKey := key.NewNode() + + // Client requests an expiry time + clientRequestedExpiry := time.Now().Add(24 * time.Hour) + + regReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "untagged-expiry-test", + }, + Expiry: clientRequestedExpiry, + } + + resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public()) + require.NoError(t, err) + require.True(t, resp.MachineAuthorized) + + // Verify the node has the client's requested expiry + node, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + + // Critical assertion: User-owned nodes should preserve client expiry + assert.False(t, node.IsTagged(), "Node should not be tagged") + assert.True(t, node.Expiry().Valid(), "User-owned node should have expiry set") + // Allow some tolerance for test execution time + assert.WithinDuration(t, clientRequestedExpiry, node.Expiry().Get(), 5*time.Second, + "User-owned node should have the client's requested expiry") +} + +// TestTaggedNodeReauthPreservesDisabledExpiry tests that when a tagged node +// re-authenticates, the disabled expiry is preserved (not updated from client request). 
+func TestTaggedNodeReauthPreservesDisabledExpiry(t *testing.T) { + app := createTestApp(t) + + user := app.state.CreateUserForTest("tag-creator") + tags := []string{"tag:server"} + + // Create a reusable tagged PreAuthKey + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags) + require.NoError(t, err) + + // Initial registration + machineKey := key.NewMachine() + nodeKey := key.NewNode() + + regReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-reauth-test", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public()) + require.NoError(t, err) + require.True(t, resp.MachineAuthorized) + + // Verify initial registration has expiry disabled + node, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + require.True(t, node.IsTagged()) + require.False(t, node.Expiry().Valid(), "Initial registration should have expiry disabled") + + // Re-authenticate with a NEW expiry request (should be ignored for tagged nodes) + newRequestedExpiry := time.Now().Add(48 * time.Hour) + reAuthReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-reauth-test", + }, + Expiry: newRequestedExpiry, // Client requests new expiry + } + + reAuthResp, err := app.handleRegisterWithAuthKey(reAuthReq, machineKey.Public()) + require.NoError(t, err) + require.True(t, reAuthResp.MachineAuthorized) + + // Verify expiry is STILL disabled after re-auth + nodeAfterReauth, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + + // Critical assertion: Tagged node should preserve disabled expiry on re-auth + assert.True(t, nodeAfterReauth.IsTagged(), "Node should still be tagged") + assert.False(t, nodeAfterReauth.Expiry().Valid(), + "Tagged node should have expiry PRESERVED as disabled after re-auth") +} + +// TestReAuthWithDifferentMachineKey tests the edge case where a node attempts +// to re-authenticate with the same NodeKey but a DIFFERENT MachineKey. +// This scenario should be handled gracefully (currently creates a new node). 
+func TestReAuthWithDifferentMachineKey(t *testing.T) { + app := createTestApp(t) + + user := app.state.CreateUserForTest("tag-creator") + tags := []string{"tag:server"} + + // Create a reusable tagged PreAuthKey + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags) + require.NoError(t, err) + + // Initial registration + machineKey1 := key.NewMachine() + nodeKey := key.NewNode() // Same NodeKey for both attempts + + regReq1 := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "test-node", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp1, err := app.handleRegisterWithAuthKey(regReq1, machineKey1.Public()) + require.NoError(t, err) + require.True(t, resp1.MachineAuthorized) + + // Verify initial node + node1, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + assert.True(t, node1.IsTagged()) + + // Re-authenticate with DIFFERENT MachineKey but SAME NodeKey + machineKey2 := key.NewMachine() // Different machine key + + regReq2 := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), // Same NodeKey + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "test-node", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp2, err := app.handleRegisterWithAuthKey(regReq2, machineKey2.Public()) + require.NoError(t, err) + require.True(t, resp2.MachineAuthorized) + + // Verify the node still exists and has tags + // Note: Depending on implementation, this might be the same node or a new node + node2, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + assert.True(t, node2.IsTagged()) + assert.ElementsMatch(t, tags, node2.Tags().AsSlice()) +} diff --git a/hscontrol/auth_test.go b/hscontrol/auth_test.go index bf6da356..1677642f 100644 --- a/hscontrol/auth_test.go +++ b/hscontrol/auth_test.go @@ -70,7 +70,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "preauth_key_valid_new_node", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("preauth-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -111,7 +112,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "preauth_key_reusable_multiple_nodes", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("reusable-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -177,7 +179,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "preauth_key_single_use_exhausted", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("single-use-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) if err != nil { return "", err } @@ -264,7 +267,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "preauth_key_ephemeral_node", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("ephemeral-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), false, true, nil, 
nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, true, nil, nil) if err != nil { return "", err } @@ -370,7 +374,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "existing_node_logout", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("logout-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -429,7 +434,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "existing_node_machine_key_mismatch", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("mismatch-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -477,7 +483,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "existing_node_key_extension_not_allowed", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("extend-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -525,7 +532,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "existing_node_expired_forces_reauth", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("reauth-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -585,7 +593,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "ephemeral_node_logout_deletion", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("ephemeral-logout-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), false, true, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, true, nil, nil) if err != nil { return "", err } @@ -659,9 +668,10 @@ func TestAuthenticationFlows(t *testing.T) { } app.state.SetRegistrationCacheEntry(regID, nodeToRegister) - // Simulate successful registration + // Simulate successful registration - send to buffered channel + // The channel is buffered (size 1), so this can complete immediately + // and handleRegister will receive the value when it starts waiting go func() { - time.Sleep(20 * time.Millisecond) user := app.state.CreateUserForTest("followup-user") node := app.state.CreateNodeForTest(user, "followup-success-node") registered <- node @@ -767,7 +777,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "empty_hostname", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("empty-hostname-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -805,7 +816,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "nil_hostinfo", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("nil-hostinfo-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := 
app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -848,7 +860,8 @@ func TestAuthenticationFlows(t *testing.T) { setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("expired-pak-user") expiry := time.Now().Add(-1 * time.Hour) // Expired 1 hour ago - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, &expiry, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, &expiry, nil) if err != nil { return "", err } @@ -880,7 +893,8 @@ func TestAuthenticationFlows(t *testing.T) { setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("tagged-pak-user") tags := []string{"tag:server", "tag:database"} - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, tags) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags) if err != nil { return "", err } @@ -914,6 +928,82 @@ func TestAuthenticationFlows(t *testing.T) { }, }, + // === ADVERTISE-TAGS (RequestTags) SCENARIOS === + // Tests for client-provided tags via --advertise-tags flag + + // TEST: PreAuthKey registration rejects client-provided RequestTags + // WHAT: Tests that PreAuthKey registrations cannot use client-provided tags + // INPUT: PreAuthKey registration with RequestTags in Hostinfo + // EXPECTED: Registration fails with "requested tags [...] are invalid or not permitted" error + // WHY: PreAuthKey nodes get their tags from the key itself, not from client requests + { + name: "preauth_key_rejects_request_tags", + setupFunc: func(t *testing.T, app *Headscale) (string, error) { + t.Helper() + + user := app.state.CreateUserForTest("pak-requesttags-user") + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) + if err != nil { + return "", err + } + + return pak.Key, nil + }, + request: func(authKey string) tailcfg.RegisterRequest { + return tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: authKey, + }, + NodeKey: nodeKey1.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "pak-requesttags-node", + RequestTags: []string{"tag:unauthorized"}, + }, + Expiry: time.Now().Add(24 * time.Hour), + } + }, + machineKey: machineKey1.Public, + wantError: true, + }, + + // TEST: Tagged PreAuthKey ignores client-provided RequestTags + // WHAT: Tests that tagged PreAuthKey uses key tags, not client RequestTags + // INPUT: Tagged PreAuthKey registration with different RequestTags + // EXPECTED: Registration fails because RequestTags are rejected for PreAuthKey + // WHY: Tags-as-identity: PreAuthKey tags are authoritative, client cannot override + { + name: "tagged_preauth_key_rejects_client_request_tags", + setupFunc: func(t *testing.T, app *Headscale) (string, error) { + t.Helper() + + user := app.state.CreateUserForTest("tagged-pak-clienttags-user") + keyTags := []string{"tag:authorized"} + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, keyTags) + if err != nil { + return "", err + } + + return pak.Key, nil + }, + request: func(authKey string) tailcfg.RegisterRequest { + return tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: authKey, + }, + NodeKey: nodeKey1.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-pak-clienttags-node", + RequestTags: []string{"tag:client-wants-this"}, // Should be rejected + }, + Expiry: time.Now().Add(24 * time.Hour), + } + }, + machineKey: machineKey1.Public, + wantError: 
true, // RequestTags rejected for PreAuthKey registrations + }, + // === RE-AUTHENTICATION SCENARIOS === // TEST: Existing node re-authenticates with new pre-auth key // WHAT: Tests that existing node can re-authenticate using new pre-auth key @@ -926,7 +1016,7 @@ func TestAuthenticationFlows(t *testing.T) { user := app.state.CreateUserForTest("reauth-user") // First, register with initial auth key - pak1, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + pak1, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -953,7 +1043,7 @@ func TestAuthenticationFlows(t *testing.T) { }, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore") // Create new auth key for re-authentication - pak2, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + pak2, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -992,7 +1082,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "existing_node_reauth_interactive_flow", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("interactive-reauth-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -1053,7 +1144,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "node_key_rotation_same_machine", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("rotation-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -1081,7 +1173,7 @@ func TestAuthenticationFlows(t *testing.T) { }, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore") // Create new auth key for rotation - pakRotation, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + pakRotation, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -1129,7 +1221,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "malformed_expiry_zero_time", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("zero-expiry-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -1167,7 +1260,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "malformed_hostinfo_invalid_data", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("invalid-hostinfo-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -1185,8 +1279,9 @@ func TestAuthenticationFlows(t *testing.T) { OS: "unknown-os", OSVersion: "999.999.999", DeviceModel: "test-device-model", - RequestTags: []string{"invalid:tag", "another!tag"}, - Services: []tailcfg.Service{{Proto: "tcp", Port: 65535}}, + // Note: RequestTags are not included for PreAuthKey registrations + // since tags come from the key itself, not client requests. 
+ Services: []tailcfg.Service{{Proto: "tcp", Port: 65535}}, }, Expiry: time.Now().Add(24 * time.Hour), } @@ -1230,8 +1325,8 @@ func TestAuthenticationFlows(t *testing.T) { app.state.SetRegistrationCacheEntry(regID, nodeToRegister) // Simulate registration that returns nil (cache expired during auth) + // The channel is buffered (size 1), so this can complete immediately go func() { - time.Sleep(20 * time.Millisecond) registered <- nil // Nil indicates cache expiry }() @@ -1298,9 +1393,13 @@ func TestAuthenticationFlows(t *testing.T) { // === AUTH PROVIDER EDGE CASES === // TEST: Interactive workflow preserves custom hostinfo // WHAT: Tests that custom hostinfo fields are preserved through interactive flow - // INPUT: Interactive registration with detailed hostinfo (OS, version, model, etc.) + // INPUT: Interactive registration with detailed hostinfo (OS, version, model) // EXPECTED: Node registers with all hostinfo fields preserved // WHY: Ensures interactive flow doesn't lose custom hostinfo data + // NOTE: RequestTags are NOT tested here because tag authorization via + // advertise-tags requires the user to have existing nodes (for IP-based + // ownership verification). New users registering their first node cannot + // claim tags via RequestTags - they must use a tagged PreAuthKey instead. { name: "interactive_workflow_with_custom_hostinfo", setupFunc: func(t *testing.T, app *Headscale) (string, error) { @@ -1314,7 +1413,6 @@ func TestAuthenticationFlows(t *testing.T) { OS: "linux", OSVersion: "20.04", DeviceModel: "server", - RequestTags: []string{"tag:server"}, }, Expiry: time.Now().Add(24 * time.Hour), } @@ -1336,7 +1434,6 @@ func TestAuthenticationFlows(t *testing.T) { assert.Equal(t, "linux", node.Hostinfo().OS()) assert.Equal(t, "20.04", node.Hostinfo().OSVersion()) assert.Equal(t, "server", node.Hostinfo().DeviceModel()) - assert.Contains(t, node.Hostinfo().RequestTags().AsSlice(), "tag:server") } }, }, @@ -1353,7 +1450,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "preauth_key_usage_count_tracking", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("usage-count-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil) // Single use + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) // Single use if err != nil { return "", err } @@ -1432,7 +1530,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "concurrent_registration_same_node_key", setupFunc: func(t *testing.T, app *Headscale) (string, error) { user := app.state.CreateUserForTest("concurrent-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -1473,7 +1572,8 @@ func TestAuthenticationFlows(t *testing.T) { user := app.state.CreateUserForTest("future-expiry-user") // Auth key expires in the future expiry := time.Now().Add(48 * time.Hour) - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, &expiry, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, &expiry, nil) if err != nil { return "", err } @@ -1517,7 +1617,7 @@ func TestAuthenticationFlows(t *testing.T) { user2 := app.state.CreateUserForTest("user2-context") // Register node with user1's auth key - pak1, err := app.state.CreatePreAuthKey(types.UserID(user1.ID), true, false, nil, nil) + pak1, err := 
app.state.CreatePreAuthKey(user1.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -1544,7 +1644,7 @@ func TestAuthenticationFlows(t *testing.T) { }, 1*time.Second, 50*time.Millisecond, "waiting for node to be available in NodeStore") // Return user2's auth key for re-authentication - pak2, err := app.state.CreatePreAuthKey(types.UserID(user2.ID), true, false, nil, nil) + pak2, err := app.state.CreatePreAuthKey(user2.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -1571,15 +1671,15 @@ func TestAuthenticationFlows(t *testing.T) { // Verify NEW node was created for user2 node2, found := app.state.GetNodeByMachineKey(machineKey1.Public(), types.UserID(2)) require.True(t, found, "new node should exist for user2") - assert.Equal(t, uint(2), node2.UserID(), "new node should belong to user2") + assert.Equal(t, uint(2), node2.UserID().Get(), "new node should belong to user2") user := node2.User() - assert.Equal(t, "user2-context", user.Username(), "new node should show user2 username") + assert.Equal(t, "user2-context", user.Name(), "new node should show user2 username") // Verify original node still exists for user1 node1, found := app.state.GetNodeByMachineKey(machineKey1.Public(), types.UserID(1)) require.True(t, found, "original node should still exist for user1") - assert.Equal(t, uint(1), node1.UserID(), "original node should still belong to user1") + assert.Equal(t, uint(1), node1.UserID().Get(), "original node should still belong to user1") // Verify they are different nodes (different IDs) assert.NotEqual(t, node1.ID(), node2.ID(), "should be different node IDs") @@ -1595,7 +1695,8 @@ func TestAuthenticationFlows(t *testing.T) { setupFunc: func(t *testing.T, app *Headscale) (string, error) { // Create user1 and register a node with auth key user1 := app.state.CreateUserForTest("interactive-user-1") - pak1, err := app.state.CreatePreAuthKey(types.UserID(user1.ID), true, false, nil, nil) + + pak1, err := app.state.CreatePreAuthKey(user1.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -1645,16 +1746,16 @@ func TestAuthenticationFlows(t *testing.T) { // User1's original node should STILL exist (not transferred) node1, found1 := app.state.GetNodeByMachineKey(machineKey1.Public(), types.UserID(1)) require.True(t, found1, "user1's original node should still exist") - assert.Equal(t, uint(1), node1.UserID(), "user1's node should still belong to user1") + assert.Equal(t, uint(1), node1.UserID().Get(), "user1's node should still belong to user1") assert.Equal(t, nodeKey1.Public(), node1.NodeKey(), "user1's node should have original node key") // User2 should have a NEW node created node2, found2 := app.state.GetNodeByMachineKey(machineKey1.Public(), types.UserID(2)) require.True(t, found2, "user2 should have new node created") - assert.Equal(t, uint(2), node2.UserID(), "user2's node should belong to user2") + assert.Equal(t, uint(2), node2.UserID().Get(), "user2's node should belong to user2") user := node2.User() - assert.Equal(t, "interactive-test-user", user.Username(), "user2's node should show correct username") + assert.Equal(t, "interactive-test-user", user.Name(), "user2's node should show correct username") // Both nodes should have the same machine key but different IDs assert.NotEqual(t, node1.ID(), node2.ID(), "should be different nodes (different IDs)") @@ -1720,7 +1821,8 @@ func TestAuthenticationFlows(t *testing.T) { name: "logout_with_exactly_now_expiry", setupFunc: func(t *testing.T, app *Headscale) (string, error) { 
user := app.state.CreateUserForTest("exact-now-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -1813,7 +1915,8 @@ func TestAuthenticationFlows(t *testing.T) { setupFunc: func(t *testing.T, app *Headscale) (string, error) { // First create a node under user1 user1 := app.state.CreateUserForTest("existing-user-1") - pak1, err := app.state.CreatePreAuthKey(types.UserID(user1.ID), true, false, nil, nil) + + pak1, err := app.state.CreatePreAuthKey(user1.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -1863,7 +1966,7 @@ func TestAuthenticationFlows(t *testing.T) { // User1's original node with nodeKey1 should STILL exist node1, found1 := app.state.GetNodeByNodeKey(nodeKey1.Public()) require.True(t, found1, "user1's original node with nodeKey1 should still exist") - assert.Equal(t, uint(1), node1.UserID(), "user1's node should still belong to user1") + assert.Equal(t, uint(1), node1.UserID().Get(), "user1's node should still belong to user1") assert.Equal(t, uint64(1), node1.ID().Uint64(), "user1's node should be ID=1") // User2 should have a NEW node with nodeKey2 @@ -1872,7 +1975,7 @@ func TestAuthenticationFlows(t *testing.T) { assert.Equal(t, "existing-node-user2", node2.Hostname(), "hostname should be from new registration") user := node2.User() - assert.Equal(t, "interactive-test-user", user.Username(), "user2's node should belong to user2") + assert.Equal(t, "interactive-test-user", user.Name(), "user2's node should belong to user2") assert.Equal(t, machineKey1.Public(), node2.MachineKey(), "machine key should be the same") // Verify it's a NEW node, not transferred @@ -1978,11 +2081,8 @@ func TestAuthenticationFlows(t *testing.T) { }(i) } - // All should wait since no auth completion happened - // After a short delay, they should timeout or be waiting - time.Sleep(100 * time.Millisecond) - - // Now complete the authentication to signal one of them + // Complete the authentication to signal the waiting goroutines + // The goroutines will receive from the buffered channel when ready registrationID, err := extractRegistrationIDFromAuthURL(authURL) require.NoError(t, err) @@ -2022,7 +2122,8 @@ func TestAuthenticationFlows(t *testing.T) { setupFunc: func(t *testing.T, app *Headscale) (string, error) { // Register initial node user := app.state.CreateUserForTest("rotation-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) if err != nil { return "", err } @@ -2072,7 +2173,7 @@ func TestAuthenticationFlows(t *testing.T) { // User1's original node with nodeKey1 should STILL exist oldNode, foundOld := app.state.GetNodeByNodeKey(nodeKey1.Public()) require.True(t, foundOld, "user1's original node with nodeKey1 should still exist") - assert.Equal(t, uint(1), oldNode.UserID(), "user1's node should still belong to user1") + assert.Equal(t, uint(1), oldNode.UserID().Get(), "user1's node should still belong to user1") assert.Equal(t, uint64(1), oldNode.ID().Uint64(), "user1's node should be ID=1") // User2 should have a NEW node with nodeKey2 @@ -2082,7 +2183,7 @@ func TestAuthenticationFlows(t *testing.T) { assert.Equal(t, machineKey1.Public(), newNode.MachineKey()) user := newNode.User() - assert.Equal(t, "interactive-test-user", user.Username(), "user2's node should belong to user2") + 
assert.Equal(t, "interactive-test-user", user.Name(), "user2's node should belong to user2") // Verify it's a NEW node, not transferred assert.NotEqual(t, uint64(1), newNode.ID().Uint64(), "should be a NEW node (different ID)") @@ -2305,10 +2406,8 @@ func TestAuthenticationFlows(t *testing.T) { responseChan <- resp }() - // Give followup time to start waiting - time.Sleep(50 * time.Millisecond) - // Complete authentication for second registration + // The goroutine will receive the node from the buffered channel _, _, err = app.state.HandleNodeFromAuthPath( regID2, types.UserID(user.ID), @@ -2333,7 +2432,7 @@ func TestAuthenticationFlows(t *testing.T) { assert.True(t, found, "node should be registered") if found { assert.Equal(t, "pending-node-2", node.Hostname()) - assert.Equal(t, "second-registration-user", node.User().Name) + assert.Equal(t, "second-registration-user", node.User().Name()) } // First registration should still be in cache (not completed) @@ -2501,10 +2600,7 @@ func runInteractiveWorkflowTest(t *testing.T, tt struct { responseChan <- resp }() - // Give the followup request time to start waiting - time.Sleep(50 * time.Millisecond) - - // Now complete the authentication - this will signal the waiting followup request + // Complete the authentication - the goroutine will receive from the buffered channel user := app.state.CreateUserForTest("interactive-test-user") _, _, err = app.state.HandleNodeFromAuthPath( registrationID, @@ -2593,7 +2689,7 @@ func TestNodeStoreLookup(t *testing.T) { nodeKey := key.NewNode() user := app.state.CreateUserForTest("test-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) require.NoError(t, err) // Register a node @@ -2642,9 +2738,9 @@ func TestPreAuthKeyLogoutAndReloginDifferentUser(t *testing.T) { user2 := app.state.CreateUserForTest("user2") // Create pre-auth keys for both users - pak1, err := app.state.CreatePreAuthKey(types.UserID(user1.ID), true, false, nil, nil) + pak1, err := app.state.CreatePreAuthKey(user1.TypedID(), true, false, nil, nil) require.NoError(t, err) - pak2, err := app.state.CreatePreAuthKey(types.UserID(user2.ID), true, false, nil, nil) + pak2, err := app.state.CreatePreAuthKey(user2.TypedID(), true, false, nil, nil) require.NoError(t, err) // Create machine and node keys for 4 nodes (2 per user) @@ -2720,7 +2816,7 @@ func TestPreAuthKeyLogoutAndReloginDifferentUser(t *testing.T) { t.Logf("All nodes logged out") // Create a new pre-auth key for user1 (reusable for all nodes) - newPak1, err := app.state.CreatePreAuthKey(types.UserID(user1.ID), true, false, nil, nil) + newPak1, err := app.state.CreatePreAuthKey(user1.TypedID(), true, false, nil, nil) require.NoError(t, err) // Re-login all nodes using user1's new pre-auth key @@ -2760,12 +2856,12 @@ func TestPreAuthKeyLogoutAndReloginDifferentUser(t *testing.T) { require.Equal(t, 2, user2NodesAfter.Len(), "user2 should still have 2 nodes (old nodes from original registration)") // Verify original nodes still exist with original users - for i := 0; i < 2; i++ { + for i := range 2 { node := nodes[i] // User1's original nodes should still be owned by user1 registeredNode, found := app.state.GetNodeByMachineKey(node.machineKey.Public(), types.UserID(user1.ID)) require.True(t, found, "User1's original node %s should still exist", node.hostname) - require.Equal(t, user1.ID, registeredNode.UserID(), "Node %s should still belong to user1", node.hostname) + 
require.Equal(t, user1.ID, registeredNode.UserID().Get(), "Node %s should still belong to user1", node.hostname) t.Logf("✓ User1's original node %s (ID=%d) still owned by user1", node.hostname, registeredNode.ID().Uint64()) } @@ -2774,7 +2870,7 @@ func TestPreAuthKeyLogoutAndReloginDifferentUser(t *testing.T) { // User2's original nodes should still be owned by user2 registeredNode, found := app.state.GetNodeByMachineKey(node.machineKey.Public(), types.UserID(user2.ID)) require.True(t, found, "User2's original node %s should still exist", node.hostname) - require.Equal(t, user2.ID, registeredNode.UserID(), "Node %s should still belong to user2", node.hostname) + require.Equal(t, user2.ID, registeredNode.UserID().Get(), "Node %s should still belong to user2", node.hostname) t.Logf("✓ User2's original node %s (ID=%d) still owned by user2", node.hostname, registeredNode.ID().Uint64()) } @@ -2785,7 +2881,7 @@ func TestPreAuthKeyLogoutAndReloginDifferentUser(t *testing.T) { // Should be able to find a node with user1 and this machine key (the new one) newNode, found := app.state.GetNodeByMachineKey(node.machineKey.Public(), types.UserID(user1.ID)) require.True(t, found, "Should have created new node for user1 with machine key from %s", node.hostname) - require.Equal(t, user1.ID, newNode.UserID(), "New node should belong to user1") + require.Equal(t, user1.ID, newNode.UserID().Get(), "New node should belong to user1") t.Logf("✓ New node created for user1 with machine key from %s (ID=%d)", node.hostname, newNode.ID().Uint64()) } } @@ -2813,7 +2909,7 @@ func TestWebFlowReauthDifferentUser(t *testing.T) { // Step 1: Register node for user1 via pre-auth key (simulating initial web flow registration) user1 := app.state.CreateUserForTest("user1") - pak1, err := app.state.CreatePreAuthKey(types.UserID(user1.ID), true, false, nil, nil) + pak1, err := app.state.CreatePreAuthKey(user1.TypedID(), true, false, nil, nil) require.NoError(t, err) regReq1 := tailcfg.RegisterRequest{ @@ -2834,7 +2930,7 @@ func TestWebFlowReauthDifferentUser(t *testing.T) { // Verify node exists for user1 user1Node, found := app.state.GetNodeByMachineKey(machineKey.Public(), types.UserID(user1.ID)) require.True(t, found, "Node should exist for user1") - require.Equal(t, user1.ID, user1Node.UserID(), "Node should belong to user1") + require.Equal(t, user1.ID, user1Node.UserID().Get(), "Node should belong to user1") user1NodeID := user1Node.ID() t.Logf("✓ User1 node created with ID: %d", user1NodeID) @@ -2896,7 +2992,7 @@ func TestWebFlowReauthDifferentUser(t *testing.T) { t.Fatal("User1's node was transferred or deleted - this breaks the integration test!") } - assert.Equal(t, user1.ID, user1NodeAfter.UserID(), "User1's node should still belong to user1") + assert.Equal(t, user1.ID, user1NodeAfter.UserID().Get(), "User1's node should still belong to user1") assert.Equal(t, user1NodeID, user1NodeAfter.ID(), "Should be the same node (same ID)") assert.True(t, user1NodeAfter.IsExpired(), "User1's node should still be expired") t.Logf("✓ User1's original node still exists (ID: %d, expired: %v)", user1NodeAfter.ID(), user1NodeAfter.IsExpired()) @@ -2911,7 +3007,7 @@ func TestWebFlowReauthDifferentUser(t *testing.T) { t.Fatal("User2 doesn't have a node - registration failed!") } - assert.Equal(t, user2.ID, user2Node.UserID(), "User2's node should belong to user2") + assert.Equal(t, user2.ID, user2Node.UserID().Get(), "User2's node should belong to user2") assert.NotEqual(t, user1NodeID, user2Node.ID(), "Should be a NEW node (different 
ID), not transfer!") assert.Equal(t, machineKey.Public(), user2Node.MachineKey(), "Should have same machine key") assert.Equal(t, nodeKey2.Public(), user2Node.NodeKey(), "Should have new node key") @@ -2921,7 +3017,7 @@ func TestWebFlowReauthDifferentUser(t *testing.T) { t.Run("returned_node_is_user2_new_node", func(t *testing.T) { // The node returned from HandleNodeFromAuthPath should be user2's NEW node - assert.Equal(t, user2.ID, node.UserID(), "Returned node should belong to user2") + assert.Equal(t, user2.ID, node.UserID().Get(), "Returned node should belong to user2") assert.NotEqual(t, user1NodeID, node.ID(), "Returned node should be NEW, not transferred from user1") t.Logf("✓ HandleNodeFromAuthPath returned user2's new node (ID: %d)", node.ID()) }) @@ -2949,10 +3045,11 @@ func TestWebFlowReauthDifferentUser(t *testing.T) { user2Nodes := 0 for i := 0; i < allNodesSlice.Len(); i++ { n := allNodesSlice.At(i) - if n.UserID() == user1.ID { + if n.UserID().Get() == user1.ID { user1Nodes++ } - if n.UserID() == user2.ID { + + if n.UserID().Get() == user2.ID { user2Nodes++ } } @@ -3026,7 +3123,11 @@ func TestGitHubIssue2830_NodeRestartWithUsedPreAuthKey(t *testing.T) { // Create user and single-use pre-auth key user := app.state.CreateUserForTest("test-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil) // reusable=false + pakNew, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) // reusable=false + require.NoError(t, err) + + // Fetch the full pre-auth key to check Reusable field + pak, err := app.state.GetPreAuthKey(pakNew.Key) require.NoError(t, err) require.False(t, pak.Reusable, "key should be single-use for this test") @@ -3036,7 +3137,7 @@ func TestGitHubIssue2830_NodeRestartWithUsedPreAuthKey(t *testing.T) { // STEP 1: Initial registration with pre-auth key (simulates fresh node joining) initialReq := tailcfg.RegisterRequest{ Auth: &tailcfg.RegisterResponseAuth{ - AuthKey: pak.Key, + AuthKey: pakNew.Key, }, NodeKey: nodeKey.Public(), Hostinfo: &tailcfg.Hostinfo{ @@ -3060,7 +3161,7 @@ func TestGitHubIssue2830_NodeRestartWithUsedPreAuthKey(t *testing.T) { assert.Equal(t, machineKey.Public(), node.MachineKey()) // Verify pre-auth key is now marked as used - usedPak, err := app.state.GetPreAuthKey(pak.Key) + usedPak, err := app.state.GetPreAuthKey(pakNew.Key) require.NoError(t, err) assert.True(t, usedPak.Used, "pre-auth key should be marked as used after initial registration") @@ -3073,7 +3174,7 @@ func TestGitHubIssue2830_NodeRestartWithUsedPreAuthKey(t *testing.T) { t.Log("Step 2: Node restart - re-registration with same (now used) pre-auth key") restartReq := tailcfg.RegisterRequest{ Auth: &tailcfg.RegisterResponseAuth{ - AuthKey: pak.Key, // Same key, now marked as Used=true + AuthKey: pakNew.Key, // Same key, now marked as Used=true }, NodeKey: nodeKey.Public(), // Same node key Hostinfo: &tailcfg.Hostinfo{ @@ -3113,7 +3214,11 @@ func TestNodeReregistrationWithReusablePreAuthKey(t *testing.T) { app := createTestApp(t) user := app.state.CreateUserForTest("test-user") - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) // reusable=true + pakNew, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) // reusable=true + require.NoError(t, err) + + // Fetch the full pre-auth key to check Reusable field + pak, err := app.state.GetPreAuthKey(pakNew.Key) require.NoError(t, err) require.True(t, pak.Reusable) @@ -3123,7 +3228,7 @@ func 
TestNodeReregistrationWithReusablePreAuthKey(t *testing.T) { // Initial registration initialReq := tailcfg.RegisterRequest{ Auth: &tailcfg.RegisterResponseAuth{ - AuthKey: pak.Key, + AuthKey: pakNew.Key, }, NodeKey: nodeKey.Public(), Hostinfo: &tailcfg.Hostinfo{ @@ -3140,7 +3245,7 @@ func TestNodeReregistrationWithReusablePreAuthKey(t *testing.T) { // Node restart - re-registration with reusable key restartReq := tailcfg.RegisterRequest{ Auth: &tailcfg.RegisterResponseAuth{ - AuthKey: pak.Key, // Reusable key + AuthKey: pakNew.Key, // Reusable key }, NodeKey: nodeKey.Public(), Hostinfo: &tailcfg.Hostinfo{ @@ -3165,7 +3270,7 @@ func TestNodeReregistrationWithExpiredPreAuthKey(t *testing.T) { user := app.state.CreateUserForTest("test-user") expiry := time.Now().Add(-1 * time.Hour) // Already expired - pak, err := app.state.CreatePreAuthKey(types.UserID(user.ID), true, false, &expiry, nil) + pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, &expiry, nil) require.NoError(t, err) machineKey := key.NewMachine() @@ -3187,6 +3292,94 @@ func TestNodeReregistrationWithExpiredPreAuthKey(t *testing.T) { assert.Error(t, err, "expired pre-auth key should be rejected") assert.Contains(t, err.Error(), "authkey expired", "error should mention key expiration") } + +// TestIssue2830_ExistingNodeReregistersWithExpiredKey tests the fix for issue #2830. +// When a node is already registered and the pre-auth key expires, the node should +// still be able to re-register (e.g., after a container restart) using the same +// expired key. The key was only needed for initial authentication. +func TestIssue2830_ExistingNodeReregistersWithExpiredKey(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + + user := app.state.CreateUserForTest("test-user") + + // Create a valid key (will expire it later) + expiry := time.Now().Add(1 * time.Hour) + pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, &expiry, nil) + require.NoError(t, err) + + machineKey := key.NewMachine() + nodeKey := key.NewNode() + + // Register the node initially (key is still valid) + req := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "issue2830-node", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp, err := app.handleRegister(context.Background(), req, machineKey.Public()) + require.NoError(t, err, "initial registration should succeed") + require.NotNil(t, resp) + require.True(t, resp.MachineAuthorized, "node should be authorized after initial registration") + + // Verify node was created + allNodes := app.state.ListNodes() + require.Equal(t, 1, allNodes.Len()) + initialNodeID := allNodes.At(0).ID() + + // Now expire the key by updating it in the database to have an expiry in the past. + // This simulates the real-world scenario where a key expires after initial registration. + pastExpiry := time.Now().Add(-1 * time.Hour) + err = app.state.DB().DB.Model(&types.PreAuthKey{}). + Where("id = ?", pak.ID). 
+ Update("expiration", pastExpiry).Error + require.NoError(t, err, "should be able to update key expiration") + + // Reload the key to verify it's now expired + expiredPak, err := app.state.GetPreAuthKey(pak.Key) + require.NoError(t, err) + require.NotNil(t, expiredPak.Expiration) + require.True(t, expiredPak.Expiration.Before(time.Now()), "key should be expired") + + // Verify the expired key would fail validation + err = expiredPak.Validate() + require.Error(t, err, "key should fail validation when expired") + require.Contains(t, err.Error(), "authkey expired") + + // Attempt to re-register with the SAME key (now expired). + // This should SUCCEED because: + // - The node already exists with the same MachineKey and User + // - The fix allows existing nodes to re-register even with expired keys + // - The key was only needed for initial authentication + req2 := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, // Same key as initial registration (now expired) + }, + NodeKey: nodeKey.Public(), // Same NodeKey as initial registration + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "issue2830-node", + }, + Expiry: time.Now().Add(24 * time.Hour), + } + + resp2, err := app.handleRegister(context.Background(), req2, machineKey.Public()) + require.NoError(t, err, "re-registration should succeed even with expired key for existing node") + assert.NotNil(t, resp2) + assert.True(t, resp2.MachineAuthorized, "node should remain authorized after re-registration") + + // Verify we still have only one node (re-registered, not created new) + allNodes = app.state.ListNodes() + require.Equal(t, 1, allNodes.Len(), "should have exactly one node (re-registered)") + assert.Equal(t, initialNodeID, allNodes.At(0).ID(), "node ID should not change on re-registration") +} + // TestGitHubIssue2830_ExistingNodeCanReregisterWithUsedPreAuthKey tests that an existing node // can re-register using a pre-auth key that's already marked as Used=true, as long as: // 1. The node is re-registering with the same MachineKey it originally used @@ -3196,7 +3389,8 @@ func TestNodeReregistrationWithExpiredPreAuthKey(t *testing.T) { // // Background: When Docker/Kubernetes containers restart, they keep their persistent state // (including the MachineKey), but container entrypoints unconditionally run: -// tailscale up --authkey=$TS_AUTHKEY +// +// tailscale up --authkey=$TS_AUTHKEY // // This caused nodes to be rejected after restart because the pre-auth key was already // marked as Used=true from the initial registration. The fix allows re-registration of @@ -3209,7 +3403,11 @@ func TestGitHubIssue2830_ExistingNodeCanReregisterWithUsedPreAuthKey(t *testing. // Create a SINGLE-USE pre-auth key (reusable=false) // This is the type of key that triggers the bug in issue #2830 - preAuthKey, err := app.state.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil) + preAuthKeyNew, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) + require.NoError(t, err) + + // Fetch the full pre-auth key to check Reusable and Used fields + preAuthKey, err := app.state.GetPreAuthKey(preAuthKeyNew.Key) require.NoError(t, err) require.False(t, preAuthKey.Reusable, "Pre-auth key must be single-use to test issue #2830") require.False(t, preAuthKey.Used, "Pre-auth key should not be used yet") @@ -3222,7 +3420,7 @@ func TestGitHubIssue2830_ExistingNodeCanReregisterWithUsedPreAuthKey(t *testing. // This simulates the first time the container starts and runs 'tailscale up --authkey=...' 
initialReq := tailcfg.RegisterRequest{ Auth: &tailcfg.RegisterResponseAuth{ - AuthKey: preAuthKey.Key, + AuthKey: preAuthKeyNew.Key, // Use the full key from creation }, NodeKey: nodeKey.Public(), Hostinfo: &tailcfg.Hostinfo{ @@ -3238,7 +3436,7 @@ func TestGitHubIssue2830_ExistingNodeCanReregisterWithUsedPreAuthKey(t *testing. require.Equal(t, "testuser", initialResp.User.DisplayName, "User should match the pre-auth key's user") // Verify the pre-auth key is now marked as Used - updatedKey, err := app.state.GetPreAuthKey(preAuthKey.Key) + updatedKey, err := app.state.GetPreAuthKey(preAuthKeyNew.Key) require.NoError(t, err) require.True(t, updatedKey.Used, "Pre-auth key should be marked as Used after initial registration") @@ -3253,7 +3451,7 @@ func TestGitHubIssue2830_ExistingNodeCanReregisterWithUsedPreAuthKey(t *testing. // This is exactly what happens when a container restarts reregisterReq := tailcfg.RegisterRequest{ Auth: &tailcfg.RegisterResponseAuth{ - AuthKey: preAuthKey.Key, // Same key, now marked as Used=true + AuthKey: preAuthKeyNew.Key, // Same key, now marked as Used=true }, NodeKey: nodeKey.Public(), // Same NodeKey Hostinfo: &tailcfg.Hostinfo{ @@ -3280,7 +3478,7 @@ func TestGitHubIssue2830_ExistingNodeCanReregisterWithUsedPreAuthKey(t *testing. attackReq := tailcfg.RegisterRequest{ Auth: &tailcfg.RegisterResponseAuth{ - AuthKey: preAuthKey.Key, // Try to use the same key + AuthKey: preAuthKeyNew.Key, // Try to use the same key }, NodeKey: differentNodeKey.Public(), Hostinfo: &tailcfg.Hostinfo{ @@ -3297,3 +3495,233 @@ func TestGitHubIssue2830_ExistingNodeCanReregisterWithUsedPreAuthKey(t *testing. nodesAfterAttack := app.state.ListNodesByUser(types.UserID(user.ID)) require.Equal(t, 1, nodesAfterAttack.Len(), "Should still have exactly one node (attack prevented)") } + +// TestWebAuthRejectsUnauthorizedRequestTags tests that web auth registrations +// validate RequestTags against policy and reject unauthorized tags. 
+func TestWebAuthRejectsUnauthorizedRequestTags(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + + // Create a user that will authenticate via web auth + user := app.state.CreateUserForTest("webauth-tags-user") + + machineKey := key.NewMachine() + nodeKey := key.NewNode() + + // Simulate a registration cache entry (as would be created during web auth) + registrationID := types.MustRegistrationID() + regEntry := types.NewRegisterNode(types.Node{ + MachineKey: machineKey.Public(), + NodeKey: nodeKey.Public(), + Hostname: "webauth-tags-node", + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "webauth-tags-node", + RequestTags: []string{"tag:unauthorized"}, // This tag is not in policy + }, + }) + app.state.SetRegistrationCacheEntry(registrationID, regEntry) + + // Complete the web auth - should fail because tag is unauthorized + _, _, err := app.state.HandleNodeFromAuthPath( + registrationID, + types.UserID(user.ID), + nil, // no expiry + "webauth", + ) + + // Expect error due to unauthorized tags + require.Error(t, err, "HandleNodeFromAuthPath should reject unauthorized RequestTags") + require.Contains(t, err.Error(), "requested tags", + "Error should indicate requested tags are invalid or not permitted") + require.Contains(t, err.Error(), "tag:unauthorized", + "Error should mention the rejected tag") + + // Verify no node was created + _, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.False(t, found, "Node should not be created when tags are unauthorized") +} + +// TestWebAuthReauthWithEmptyTagsRemovesAllTags tests that when an existing tagged node +// reauths with empty RequestTags, all tags are removed and ownership returns to user. +// This is the fix for issue #2979. +func TestWebAuthReauthWithEmptyTagsRemovesAllTags(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + + // Create a user + user := app.state.CreateUserForTest("reauth-untag-user") + + // Update policy manager to recognize the new user + // This is necessary because CreateUserForTest doesn't update the policy manager + err := app.state.UpdatePolicyManagerUsersForTest() + require.NoError(t, err, "Failed to update policy manager users") + + // Set up policy that allows the user to own these tags + policy := `{ + "tagOwners": { + "tag:valid-owned": ["reauth-untag-user@"], + "tag:second": ["reauth-untag-user@"] + }, + "acls": [{"action": "accept", "src": ["*"], "dst": ["*:*"]}] + }` + _, err = app.state.SetPolicy([]byte(policy)) + require.NoError(t, err, "Failed to set policy") + + machineKey := key.NewMachine() + nodeKey1 := key.NewNode() + + // Step 1: Initial registration with tags + registrationID1 := types.MustRegistrationID() + regEntry1 := types.NewRegisterNode(types.Node{ + MachineKey: machineKey.Public(), + NodeKey: nodeKey1.Public(), + Hostname: "reauth-untag-node", + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "reauth-untag-node", + RequestTags: []string{"tag:valid-owned", "tag:second"}, + }, + }) + app.state.SetRegistrationCacheEntry(registrationID1, regEntry1) + + // Complete initial registration with tags + node, _, err := app.state.HandleNodeFromAuthPath( + registrationID1, + types.UserID(user.ID), + nil, + "webauth", + ) + require.NoError(t, err, "Initial registration should succeed") + require.True(t, node.IsTagged(), "Node should be tagged after initial registration") + require.ElementsMatch(t, []string{"tag:valid-owned", "tag:second"}, node.Tags().AsSlice()) + t.Logf("Initial registration complete - Node ID: %d, Tags: %v, IsTagged: %t", + node.ID().Uint64(), node.Tags().AsSlice(), 
node.IsTagged()) + + // Step 2: Reauth with EMPTY tags to untag + nodeKey2 := key.NewNode() // New node key for reauth + registrationID2 := types.MustRegistrationID() + regEntry2 := types.NewRegisterNode(types.Node{ + MachineKey: machineKey.Public(), // Same machine key + NodeKey: nodeKey2.Public(), // Different node key (rotation) + Hostname: "reauth-untag-node", + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "reauth-untag-node", + RequestTags: []string{}, // EMPTY - should untag + }, + }) + app.state.SetRegistrationCacheEntry(registrationID2, regEntry2) + + // Complete reauth with empty tags + nodeAfterReauth, _, err := app.state.HandleNodeFromAuthPath( + registrationID2, + types.UserID(user.ID), + nil, + "webauth", + ) + require.NoError(t, err, "Reauth should succeed") + + // Verify tags were removed + require.False(t, nodeAfterReauth.IsTagged(), "Node should NOT be tagged after reauth with empty tags") + require.Empty(t, nodeAfterReauth.Tags().AsSlice(), "Node should have no tags") + + // Verify ownership returned to user + require.True(t, nodeAfterReauth.UserID().Valid(), "Node should have a user ID") + require.Equal(t, user.ID, nodeAfterReauth.UserID().Get(), "Node should be owned by the user again") + + // Verify it's the same node (not a new one) + require.Equal(t, node.ID(), nodeAfterReauth.ID(), "Should be the same node after reauth") + + t.Logf("Reauth complete - Node ID: %d, Tags: %v, IsTagged: %t, UserID: %d", + nodeAfterReauth.ID().Uint64(), nodeAfterReauth.Tags().AsSlice(), + nodeAfterReauth.IsTagged(), nodeAfterReauth.UserID().Get()) +} + +// TestAuthKeyTaggedToUserOwnedViaReauth tests that a node originally registered +// with a tagged pre-auth key can transition to user-owned by re-authenticating +// via web auth with empty RequestTags. This ensures authkey-tagged nodes are +// not permanently locked to being tagged. 
+func TestAuthKeyTaggedToUserOwnedViaReauth(t *testing.T) {
+	t.Parallel()
+
+	app := createTestApp(t)
+
+	// Create a user
+	user := app.state.CreateUserForTest("authkey-to-user")
+
+	// Create a tagged pre-auth key
+	authKeyTags := []string{"tag:server", "tag:prod"}
+	pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, authKeyTags)
+	require.NoError(t, err, "Failed to create tagged pre-auth key")
+
+	machineKey := key.NewMachine()
+	nodeKey1 := key.NewNode()
+
+	// Step 1: Initial registration with tagged pre-auth key
+	regReq := tailcfg.RegisterRequest{
+		Auth: &tailcfg.RegisterResponseAuth{
+			AuthKey: pak.Key,
+		},
+		NodeKey: nodeKey1.Public(),
+		Hostinfo: &tailcfg.Hostinfo{
+			Hostname: "authkey-tagged-node",
+		},
+		Expiry: time.Now().Add(24 * time.Hour),
+	}
+
+	resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
+	require.NoError(t, err, "Initial registration should succeed")
+	require.True(t, resp.MachineAuthorized, "Node should be authorized")
+
+	// Verify initial state: node is tagged via authkey
+	node, found := app.state.GetNodeByNodeKey(nodeKey1.Public())
+	require.True(t, found, "Node should be found")
+	require.True(t, node.IsTagged(), "Node should be tagged after authkey registration")
+	require.ElementsMatch(t, authKeyTags, node.Tags().AsSlice(), "Node should have authkey tags")
+	require.NotNil(t, node.AuthKey(), "Node should have AuthKey reference")
+	require.Positive(t, node.AuthKey().Tags().Len(), "AuthKey should have tags")
+
+	t.Logf("Initial registration complete - Node ID: %d, Tags: %v, IsTagged: %t, AuthKey.Tags.Len: %d",
+		node.ID().Uint64(), node.Tags().AsSlice(), node.IsTagged(), node.AuthKey().Tags().Len())
+
+	// Step 2: Reauth via web auth with EMPTY tags to transition to user-owned
+	nodeKey2 := key.NewNode() // New node key for reauth
+	registrationID := types.MustRegistrationID()
+	regEntry := types.NewRegisterNode(types.Node{
+		MachineKey: machineKey.Public(), // Same machine key
+		NodeKey:    nodeKey2.Public(),   // Different node key (rotation)
+		Hostname:   "authkey-tagged-node",
+		Hostinfo: &tailcfg.Hostinfo{
+			Hostname:    "authkey-tagged-node",
+			RequestTags: []string{}, // EMPTY - should untag
+		},
+	})
+	app.state.SetRegistrationCacheEntry(registrationID, regEntry)
+
+	// Complete reauth with empty tags
+	nodeAfterReauth, _, err := app.state.HandleNodeFromAuthPath(
+		registrationID,
+		types.UserID(user.ID),
+		nil,
+		"webauth",
+	)
+	require.NoError(t, err, "Reauth should succeed")
+
+	// Verify tags were removed (authkey-tagged → user-owned transition)
+	require.False(t, nodeAfterReauth.IsTagged(), "Node should NOT be tagged after reauth with empty tags")
+	require.Empty(t, nodeAfterReauth.Tags().AsSlice(), "Node should have no tags")
+
+	// Verify ownership returned to user
+	require.True(t, nodeAfterReauth.UserID().Valid(), "Node should have a user ID")
+	require.Equal(t, user.ID, nodeAfterReauth.UserID().Get(), "Node should be owned by the user")
+
+	// Verify it's the same node (not a new one)
+	require.Equal(t, node.ID(), nodeAfterReauth.ID(), "Should be the same node after reauth")
+
+	// AuthKey reference should still exist (for audit purposes)
+	require.NotNil(t, nodeAfterReauth.AuthKey(), "AuthKey reference should be preserved")
+
+	t.Logf("Reauth complete - Node ID: %d, Tags: %v, IsTagged: %t, UserID: %d",
+		nodeAfterReauth.ID().Uint64(), nodeAfterReauth.Tags().AsSlice(),
+		nodeAfterReauth.IsTagged(), nodeAfterReauth.UserID().Get())
+}
diff --git a/hscontrol/capver/capver.go b/hscontrol/capver/capver.go
index b6bbca5b..61d67444 100644 --- a/hscontrol/capver/capver.go +++ b/hscontrol/capver/capver.go @@ -12,7 +12,13 @@ import ( "tailscale.com/util/set" ) -const MinSupportedCapabilityVersion tailcfg.CapabilityVersion = 90 +const ( + // minVersionParts is the minimum number of version parts needed for major.minor. + minVersionParts = 2 + + // legacyDERPCapVer is the capability version when LegacyDERP can be cleaned up. + legacyDERPCapVer = 111 +) // CanOldCodeBeCleanedUp is intended to be called on startup to see if // there are old code that can ble cleaned up, entries should contain @@ -21,7 +27,7 @@ const MinSupportedCapabilityVersion tailcfg.CapabilityVersion = 90 // // All uses of Capability version checks should be listed here. func CanOldCodeBeCleanedUp() { - if MinSupportedCapabilityVersion >= 111 { + if MinSupportedCapabilityVersion >= legacyDERPCapVer { panic("LegacyDERP can be cleaned up in tail.go") } } @@ -29,12 +35,14 @@ func CanOldCodeBeCleanedUp() { func tailscaleVersSorted() []string { vers := xmaps.Keys(tailscaleToCapVer) sort.Strings(vers) + return vers } func capVersSorted() []tailcfg.CapabilityVersion { capVers := xmaps.Keys(capVerToTailscaleVer) slices.Sort(capVers) + return capVers } @@ -44,11 +52,25 @@ func TailscaleVersion(ver tailcfg.CapabilityVersion) string { } // CapabilityVersion returns the CapabilityVersion for the given Tailscale version. +// It accepts both full versions (v1.90.1) and minor versions (v1.90). func CapabilityVersion(ver string) tailcfg.CapabilityVersion { if !strings.HasPrefix(ver, "v") { ver = "v" + ver } - return tailscaleToCapVer[ver] + + // Try direct lookup first (works for minor versions like v1.90) + if cv, ok := tailscaleToCapVer[ver]; ok { + return cv + } + + // Try extracting minor version from full version (v1.90.1 -> v1.90) + parts := strings.Split(strings.TrimPrefix(ver, "v"), ".") + if len(parts) >= minVersionParts { + minor := "v" + parts[0] + "." + parts[1] + return tailscaleToCapVer[minor] + } + + return 0 } // TailscaleLatest returns the n latest Tailscale versions. @@ -73,10 +95,12 @@ func TailscaleLatestMajorMinor(n int, stripV bool) []string { } majors := set.Set[string]{} + for _, vers := range tailscaleVersSorted() { if stripV { vers = strings.TrimPrefix(vers, "v") } + v := strings.Split(vers, ".") majors.Add(v[0] + "." 
+ v[1]) } diff --git a/hscontrol/capver/capver_generated.go b/hscontrol/capver/capver_generated.go index 534ead02..11ad89cc 100644 --- a/hscontrol/capver/capver_generated.go +++ b/hscontrol/capver/capver_generated.go @@ -5,47 +5,80 @@ package capver import "tailscale.com/tailcfg" var tailscaleToCapVer = map[string]tailcfg.CapabilityVersion{ - "v1.64.0": 90, - "v1.64.1": 90, - "v1.64.2": 90, - "v1.66.0": 95, - "v1.66.1": 95, - "v1.66.2": 95, - "v1.66.3": 95, - "v1.66.4": 95, - "v1.68.0": 97, - "v1.68.1": 97, - "v1.68.2": 97, - "v1.70.0": 102, - "v1.72.0": 104, - "v1.72.1": 104, - "v1.74.0": 106, - "v1.74.1": 106, - "v1.76.0": 106, - "v1.76.1": 106, - "v1.76.6": 106, - "v1.78.0": 109, - "v1.78.1": 109, - "v1.80.0": 113, - "v1.80.1": 113, - "v1.80.2": 113, - "v1.80.3": 113, - "v1.82.0": 115, - "v1.82.5": 115, - "v1.84.0": 116, - "v1.84.1": 116, - "v1.84.2": 116, + "v1.24": 32, + "v1.26": 32, + "v1.28": 32, + "v1.30": 41, + "v1.32": 46, + "v1.34": 51, + "v1.36": 56, + "v1.38": 58, + "v1.40": 61, + "v1.42": 62, + "v1.44": 63, + "v1.46": 65, + "v1.48": 68, + "v1.50": 74, + "v1.52": 79, + "v1.54": 79, + "v1.56": 82, + "v1.58": 85, + "v1.60": 87, + "v1.62": 88, + "v1.64": 90, + "v1.66": 95, + "v1.68": 97, + "v1.70": 102, + "v1.72": 104, + "v1.74": 106, + "v1.76": 106, + "v1.78": 109, + "v1.80": 113, + "v1.82": 115, + "v1.84": 116, + "v1.86": 123, + "v1.88": 125, + "v1.90": 130, + "v1.92": 131, } var capVerToTailscaleVer = map[tailcfg.CapabilityVersion]string{ - 90: "v1.64.0", - 95: "v1.66.0", - 97: "v1.68.0", - 102: "v1.70.0", - 104: "v1.72.0", - 106: "v1.74.0", - 109: "v1.78.0", - 113: "v1.80.0", - 115: "v1.82.0", - 116: "v1.84.0", + 32: "v1.24", + 41: "v1.30", + 46: "v1.32", + 51: "v1.34", + 56: "v1.36", + 58: "v1.38", + 61: "v1.40", + 62: "v1.42", + 63: "v1.44", + 65: "v1.46", + 68: "v1.48", + 74: "v1.50", + 79: "v1.52", + 82: "v1.56", + 85: "v1.58", + 87: "v1.60", + 88: "v1.62", + 90: "v1.64", + 95: "v1.66", + 97: "v1.68", + 102: "v1.70", + 104: "v1.72", + 106: "v1.74", + 109: "v1.78", + 113: "v1.80", + 115: "v1.82", + 116: "v1.84", + 123: "v1.86", + 125: "v1.88", + 130: "v1.90", + 131: "v1.92", } + +// SupportedMajorMinorVersions is the number of major.minor Tailscale versions supported. 
+const SupportedMajorMinorVersions = 10 + +// MinSupportedCapabilityVersion represents the minimum capability version +// supported by this Headscale instance (latest 10 minor versions) +const MinSupportedCapabilityVersion tailcfg.CapabilityVersion = 106 diff --git a/hscontrol/capver/capver_test.go b/hscontrol/capver/capver_test.go index 42f1df71..5c5d5b44 100644 --- a/hscontrol/capver/capver_test.go +++ b/hscontrol/capver/capver_test.go @@ -4,34 +4,10 @@ import ( "testing" "github.com/google/go-cmp/cmp" - "tailscale.com/tailcfg" ) func TestTailscaleLatestMajorMinor(t *testing.T) { - tests := []struct { - n int - stripV bool - expected []string - }{ - {3, false, []string{"v1.80", "v1.82", "v1.84"}}, - {2, true, []string{"1.82", "1.84"}}, - // Lazy way to see all supported versions - {10, true, []string{ - "1.66", - "1.68", - "1.70", - "1.72", - "1.74", - "1.76", - "1.78", - "1.80", - "1.82", - "1.84", - }}, - {0, false, nil}, - } - - for _, test := range tests { + for _, test := range tailscaleLatestMajorMinorTests { t.Run("", func(t *testing.T) { output := TailscaleLatestMajorMinor(test.n, test.stripV) if diff := cmp.Diff(output, test.expected); diff != "" { @@ -42,19 +18,7 @@ func TestTailscaleLatestMajorMinor(t *testing.T) { } func TestCapVerMinimumTailscaleVersion(t *testing.T) { - tests := []struct { - input tailcfg.CapabilityVersion - expected string - }{ - {90, "v1.64.0"}, - {95, "v1.66.0"}, - {106, "v1.74.0"}, - {109, "v1.78.0"}, - {9001, ""}, // Test case for a version higher than any in the map - {60, ""}, // Test case for a version lower than any in the map - } - - for _, test := range tests { + for _, test := range capVerMinimumTailscaleVersionTests { t.Run("", func(t *testing.T) { output := TailscaleVersion(test.input) if output != test.expected { diff --git a/hscontrol/capver/capver_test_data.go b/hscontrol/capver/capver_test_data.go new file mode 100644 index 00000000..91928d29 --- /dev/null +++ b/hscontrol/capver/capver_test_data.go @@ -0,0 +1,40 @@ +package capver + +// Generated DO NOT EDIT + +import "tailscale.com/tailcfg" + +var tailscaleLatestMajorMinorTests = []struct { + n int + stripV bool + expected []string +}{ + {3, false, []string{"v1.88", "v1.90", "v1.92"}}, + {2, true, []string{"1.90", "1.92"}}, + {10, true, []string{ + "1.74", + "1.76", + "1.78", + "1.80", + "1.82", + "1.84", + "1.86", + "1.88", + "1.90", + "1.92", + }}, + {0, false, nil}, +} + +var capVerMinimumTailscaleVersionTests = []struct { + input tailcfg.CapabilityVersion + expected string +}{ + {106, "v1.74"}, + {32, "v1.24"}, + {41, "v1.30"}, + {46, "v1.32"}, + {51, "v1.34"}, + {9001, ""}, // Test case for a version higher than any in the map + {60, ""}, // Test case for a version lower than any in the map +} diff --git a/hscontrol/db/api_key.go b/hscontrol/db/api_key.go index 51083145..7457670c 100644 --- a/hscontrol/db/api_key.go +++ b/hscontrol/db/api_key.go @@ -9,33 +9,64 @@ import ( "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/util" "golang.org/x/crypto/bcrypt" + "gorm.io/gorm" ) const ( - apiPrefixLength = 7 - apiKeyLength = 32 + apiKeyPrefix = "hskey-api-" //nolint:gosec // This is a prefix, not a credential + apiKeyPrefixLength = 12 + apiKeyHashLength = 64 + + // Legacy format constants. 
+ legacyAPIPrefixLength = 7 + legacyAPIKeyLength = 32 ) -var ErrAPIKeyFailedToParse = errors.New("failed to parse ApiKey") +var ( + ErrAPIKeyFailedToParse = errors.New("failed to parse ApiKey") + ErrAPIKeyGenerationFailed = errors.New("failed to generate API key") + ErrAPIKeyInvalidGeneration = errors.New("generated API key failed validation") +) // CreateAPIKey creates a new ApiKey in a user, and returns it. func (hsdb *HSDatabase) CreateAPIKey( expiration *time.Time, ) (string, *types.APIKey, error) { - prefix, err := util.GenerateRandomStringURLSafe(apiPrefixLength) + // Generate public prefix (12 chars) + prefix, err := util.GenerateRandomStringURLSafe(apiKeyPrefixLength) if err != nil { return "", nil, err } - toBeHashed, err := util.GenerateRandomStringURLSafe(apiKeyLength) + // Validate prefix + if len(prefix) != apiKeyPrefixLength { + return "", nil, fmt.Errorf("%w: generated prefix has invalid length: expected %d, got %d", ErrAPIKeyInvalidGeneration, apiKeyPrefixLength, len(prefix)) + } + + if !isValidBase64URLSafe(prefix) { + return "", nil, fmt.Errorf("%w: generated prefix contains invalid characters", ErrAPIKeyInvalidGeneration) + } + + // Generate secret (64 chars) + secret, err := util.GenerateRandomStringURLSafe(apiKeyHashLength) if err != nil { return "", nil, err } - // Key to return to user, this will only be visible _once_ - keyStr := prefix + "." + toBeHashed + // Validate secret + if len(secret) != apiKeyHashLength { + return "", nil, fmt.Errorf("%w: generated secret has invalid length: expected %d, got %d", ErrAPIKeyInvalidGeneration, apiKeyHashLength, len(secret)) + } - hash, err := bcrypt.GenerateFromPassword([]byte(toBeHashed), bcrypt.DefaultCost) + if !isValidBase64URLSafe(secret) { + return "", nil, fmt.Errorf("%w: generated secret contains invalid characters", ErrAPIKeyInvalidGeneration) + } + + // Full key string (shown ONCE to user) + keyStr := apiKeyPrefix + prefix + "-" + secret + + // bcrypt hash of secret + hash, err := bcrypt.GenerateFromPassword([]byte(secret), bcrypt.DefaultCost) if err != nil { return "", nil, err } @@ -103,23 +134,164 @@ func (hsdb *HSDatabase) ExpireAPIKey(key *types.APIKey) error { } func (hsdb *HSDatabase) ValidateAPIKey(keyStr string) (bool, error) { - prefix, hash, found := strings.Cut(keyStr, ".") - if !found { - return false, ErrAPIKeyFailedToParse - } - - key, err := hsdb.GetAPIKey(prefix) + key, err := validateAPIKey(hsdb.DB, keyStr) if err != nil { - return false, fmt.Errorf("failed to validate api key: %w", err) - } - - if key.Expiration.Before(time.Now()) { - return false, nil - } - - if err := bcrypt.CompareHashAndPassword(key.Hash, []byte(hash)); err != nil { return false, err } + if key.Expiration != nil && key.Expiration.Before(time.Now()) { + return false, nil + } + return true, nil } + +// ParseAPIKeyPrefix extracts the database prefix from a display prefix. +// Handles formats: "hskey-api-{12chars}-***", "hskey-api-{12chars}", or just "{12chars}". +// Returns the 12-character prefix suitable for database lookup. 
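+// Example (hypothetical values): "hskey-api-AbCdEfGhIjKl-***" yields "AbCdEfGhIjKl".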
+func ParseAPIKeyPrefix(displayPrefix string) (string, error) { + // If it's already just the 12-character prefix, return it + if len(displayPrefix) == apiKeyPrefixLength && isValidBase64URLSafe(displayPrefix) { + return displayPrefix, nil + } + + // If it starts with the API key prefix, parse it + if strings.HasPrefix(displayPrefix, apiKeyPrefix) { + // Remove the "hskey-api-" prefix + _, remainder, found := strings.Cut(displayPrefix, apiKeyPrefix) + if !found { + return "", fmt.Errorf("%w: invalid display prefix format", ErrAPIKeyFailedToParse) + } + + // Extract just the first 12 characters (the actual prefix) + if len(remainder) < apiKeyPrefixLength { + return "", fmt.Errorf("%w: prefix too short", ErrAPIKeyFailedToParse) + } + + prefix := remainder[:apiKeyPrefixLength] + + // Validate it's base64 URL-safe + if !isValidBase64URLSafe(prefix) { + return "", fmt.Errorf("%w: prefix contains invalid characters", ErrAPIKeyFailedToParse) + } + + return prefix, nil + } + + // For legacy 7-character prefixes or other formats, return as-is + return displayPrefix, nil +} + +// validateAPIKey validates an API key and returns the key if valid. +// Handles both new (hskey-api-{prefix}-{secret}) and legacy (prefix.secret) formats. +func validateAPIKey(db *gorm.DB, keyStr string) (*types.APIKey, error) { + // Validate input is not empty + if keyStr == "" { + return nil, ErrAPIKeyFailedToParse + } + + // Check for new format: hskey-api-{prefix}-{secret} + _, prefixAndSecret, found := strings.Cut(keyStr, apiKeyPrefix) + + if !found { + // Legacy format: prefix.secret + return validateLegacyAPIKey(db, keyStr) + } + + // New format: parse and verify + const expectedMinLength = apiKeyPrefixLength + 1 + apiKeyHashLength + if len(prefixAndSecret) < expectedMinLength { + return nil, fmt.Errorf( + "%w: key too short, expected at least %d chars after prefix, got %d", + ErrAPIKeyFailedToParse, + expectedMinLength, + len(prefixAndSecret), + ) + } + + // Use fixed-length parsing + prefix := prefixAndSecret[:apiKeyPrefixLength] + + // Validate separator at expected position + if prefixAndSecret[apiKeyPrefixLength] != '-' { + return nil, fmt.Errorf( + "%w: expected separator '-' at position %d, got '%c'", + ErrAPIKeyFailedToParse, + apiKeyPrefixLength, + prefixAndSecret[apiKeyPrefixLength], + ) + } + + secret := prefixAndSecret[apiKeyPrefixLength+1:] + + // Validate secret length + if len(secret) != apiKeyHashLength { + return nil, fmt.Errorf( + "%w: secret length mismatch, expected %d chars, got %d", + ErrAPIKeyFailedToParse, + apiKeyHashLength, + len(secret), + ) + } + + // Validate prefix contains only base64 URL-safe characters + if !isValidBase64URLSafe(prefix) { + return nil, fmt.Errorf( + "%w: prefix contains invalid characters (expected base64 URL-safe: A-Za-z0-9_-)", + ErrAPIKeyFailedToParse, + ) + } + + // Validate secret contains only base64 URL-safe characters + if !isValidBase64URLSafe(secret) { + return nil, fmt.Errorf( + "%w: secret contains invalid characters (expected base64 URL-safe: A-Za-z0-9_-)", + ErrAPIKeyFailedToParse, + ) + } + + // Look up by prefix (indexed) + var key types.APIKey + + err := db.First(&key, "prefix = ?", prefix).Error + if err != nil { + return nil, fmt.Errorf("API key not found: %w", err) + } + + // Verify bcrypt hash + err = bcrypt.CompareHashAndPassword(key.Hash, []byte(secret)) + if err != nil { + return nil, fmt.Errorf("invalid API key: %w", err) + } + + return &key, nil +} + +// validateLegacyAPIKey validates a legacy format API key (prefix.secret). 
+func validateLegacyAPIKey(db *gorm.DB, keyStr string) (*types.APIKey, error) { + // Legacy format uses "." as separator + prefix, secret, found := strings.Cut(keyStr, ".") + if !found { + return nil, ErrAPIKeyFailedToParse + } + + // Legacy prefix is 7 chars + if len(prefix) != legacyAPIPrefixLength { + return nil, fmt.Errorf("%w: legacy prefix length mismatch", ErrAPIKeyFailedToParse) + } + + var key types.APIKey + + err := db.First(&key, "prefix = ?", prefix).Error + if err != nil { + return nil, fmt.Errorf("API key not found: %w", err) + } + + // Verify bcrypt (key.Hash stores bcrypt of full secret) + err = bcrypt.CompareHashAndPassword(key.Hash, []byte(secret)) + if err != nil { + return nil, fmt.Errorf("invalid API key: %w", err) + } + + return &key, nil +} diff --git a/hscontrol/db/api_key_test.go b/hscontrol/db/api_key_test.go index c0b4e988..a34dd94b 100644 --- a/hscontrol/db/api_key_test.go +++ b/hscontrol/db/api_key_test.go @@ -1,89 +1,275 @@ package db import ( + "strings" + "testing" "time" - "gopkg.in/check.v1" + "github.com/juanfont/headscale/hscontrol/types" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "golang.org/x/crypto/bcrypt" ) -func (*Suite) TestCreateAPIKey(c *check.C) { +func TestCreateAPIKey(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + apiKeyStr, apiKey, err := db.CreateAPIKey(nil) - c.Assert(err, check.IsNil) - c.Assert(apiKey, check.NotNil) + require.NoError(t, err) + require.NotNil(t, apiKey) // Did we get a valid key? - c.Assert(apiKey.Prefix, check.NotNil) - c.Assert(apiKey.Hash, check.NotNil) - c.Assert(apiKeyStr, check.Not(check.Equals), "") + assert.NotNil(t, apiKey.Prefix) + assert.NotNil(t, apiKey.Hash) + assert.NotEmpty(t, apiKeyStr) _, err = db.ListAPIKeys() - c.Assert(err, check.IsNil) + require.NoError(t, err) keys, err := db.ListAPIKeys() - c.Assert(err, check.IsNil) - c.Assert(len(keys), check.Equals, 1) + require.NoError(t, err) + assert.Len(t, keys, 1) } -func (*Suite) TestAPIKeyDoesNotExist(c *check.C) { +func TestAPIKeyDoesNotExist(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + key, err := db.GetAPIKey("does-not-exist") - c.Assert(err, check.NotNil) - c.Assert(key, check.IsNil) + require.Error(t, err) + assert.Nil(t, key) } -func (*Suite) TestValidateAPIKeyOk(c *check.C) { +func TestValidateAPIKeyOk(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + nowPlus2 := time.Now().Add(2 * time.Hour) apiKeyStr, apiKey, err := db.CreateAPIKey(&nowPlus2) - c.Assert(err, check.IsNil) - c.Assert(apiKey, check.NotNil) + require.NoError(t, err) + require.NotNil(t, apiKey) valid, err := db.ValidateAPIKey(apiKeyStr) - c.Assert(err, check.IsNil) - c.Assert(valid, check.Equals, true) + require.NoError(t, err) + assert.True(t, valid) } -func (*Suite) TestValidateAPIKeyNotOk(c *check.C) { +func TestValidateAPIKeyNotOk(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + nowMinus2 := time.Now().Add(time.Duration(-2) * time.Hour) apiKeyStr, apiKey, err := db.CreateAPIKey(&nowMinus2) - c.Assert(err, check.IsNil) - c.Assert(apiKey, check.NotNil) + require.NoError(t, err) + require.NotNil(t, apiKey) valid, err := db.ValidateAPIKey(apiKeyStr) - c.Assert(err, check.IsNil) - c.Assert(valid, check.Equals, false) + require.NoError(t, err) + assert.False(t, valid) now := time.Now() apiKeyStrNow, apiKey, err := db.CreateAPIKey(&now) - c.Assert(err, check.IsNil) - c.Assert(apiKey, check.NotNil) + require.NoError(t, err) + require.NotNil(t, 
apiKey) validNow, err := db.ValidateAPIKey(apiKeyStrNow) - c.Assert(err, check.IsNil) - c.Assert(validNow, check.Equals, false) + require.NoError(t, err) + assert.False(t, validNow) validSilly, err := db.ValidateAPIKey("nota.validkey") - c.Assert(err, check.NotNil) - c.Assert(validSilly, check.Equals, false) + require.Error(t, err) + assert.False(t, validSilly) validWithErr, err := db.ValidateAPIKey("produceerrorkey") - c.Assert(err, check.NotNil) - c.Assert(validWithErr, check.Equals, false) + require.Error(t, err) + assert.False(t, validWithErr) } -func (*Suite) TestExpireAPIKey(c *check.C) { +func TestExpireAPIKey(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + nowPlus2 := time.Now().Add(2 * time.Hour) apiKeyStr, apiKey, err := db.CreateAPIKey(&nowPlus2) - c.Assert(err, check.IsNil) - c.Assert(apiKey, check.NotNil) + require.NoError(t, err) + require.NotNil(t, apiKey) valid, err := db.ValidateAPIKey(apiKeyStr) - c.Assert(err, check.IsNil) - c.Assert(valid, check.Equals, true) + require.NoError(t, err) + assert.True(t, valid) err = db.ExpireAPIKey(apiKey) - c.Assert(err, check.IsNil) - c.Assert(apiKey.Expiration, check.NotNil) + require.NoError(t, err) + assert.NotNil(t, apiKey.Expiration) notValid, err := db.ValidateAPIKey(apiKeyStr) - c.Assert(err, check.IsNil) - c.Assert(notValid, check.Equals, false) + require.NoError(t, err) + assert.False(t, notValid) +} + +func TestAPIKeyWithPrefix(t *testing.T) { + tests := []struct { + name string + test func(*testing.T, *HSDatabase) + }{ + { + name: "new_key_with_prefix", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() + + keyStr, apiKey, err := db.CreateAPIKey(nil) + require.NoError(t, err) + + // Verify format: hskey-api-{12-char-prefix}-{64-char-secret} + assert.True(t, strings.HasPrefix(keyStr, "hskey-api-")) + + _, prefixAndSecret, found := strings.Cut(keyStr, "hskey-api-") + assert.True(t, found) + assert.GreaterOrEqual(t, len(prefixAndSecret), 12+1+64) + + prefix := prefixAndSecret[:12] + assert.Len(t, prefix, 12) + assert.Equal(t, byte('-'), prefixAndSecret[12]) + secret := prefixAndSecret[13:] + assert.Len(t, secret, 64) + + // Verify stored fields + assert.Len(t, apiKey.Prefix, types.NewAPIKeyPrefixLength) + assert.NotNil(t, apiKey.Hash) + }, + }, + { + name: "new_key_can_be_retrieved", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() + + keyStr, createdKey, err := db.CreateAPIKey(nil) + require.NoError(t, err) + + // Validate the created key + valid, err := db.ValidateAPIKey(keyStr) + require.NoError(t, err) + assert.True(t, valid) + + // Verify prefix is correct length + assert.Len(t, createdKey.Prefix, types.NewAPIKeyPrefixLength) + }, + }, + { + name: "invalid_key_format_rejected", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() + + invalidKeys := []string{ + "", + "hskey-api-short", + "hskey-api-ABCDEFGHIJKL-tooshort", + "hskey-api-ABC$EFGHIJKL-" + strings.Repeat("a", 64), + "hskey-api-ABCDEFGHIJKL" + strings.Repeat("a", 64), // missing separator + } + + for _, invalidKey := range invalidKeys { + valid, err := db.ValidateAPIKey(invalidKey) + require.Error(t, err, "key should be rejected: %s", invalidKey) + assert.False(t, valid) + } + }, + }, + { + name: "legacy_key_still_works", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() + + // Insert legacy API key directly (7-char prefix + 32-char secret) + legacyPrefix := "abcdefg" + legacySecret := strings.Repeat("x", 32) + legacyKey := legacyPrefix + "." 
+ legacySecret + hash, err := bcrypt.GenerateFromPassword([]byte(legacySecret), bcrypt.DefaultCost) + require.NoError(t, err) + + now := time.Now() + err = db.DB.Exec(` + INSERT INTO api_keys (prefix, hash, created_at) + VALUES (?, ?, ?) + `, legacyPrefix, hash, now).Error + require.NoError(t, err) + + // Validate legacy key + valid, err := db.ValidateAPIKey(legacyKey) + require.NoError(t, err) + assert.True(t, valid) + }, + }, + { + name: "wrong_secret_rejected", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() + + keyStr, _, err := db.CreateAPIKey(nil) + require.NoError(t, err) + + // Tamper with the secret + _, prefixAndSecret, _ := strings.Cut(keyStr, "hskey-api-") + prefix := prefixAndSecret[:12] + tamperedKey := "hskey-api-" + prefix + "-" + strings.Repeat("x", 64) + + valid, err := db.ValidateAPIKey(tamperedKey) + require.Error(t, err) + assert.False(t, valid) + }, + }, + { + name: "expired_key_rejected", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() + + // Create expired key + expired := time.Now().Add(-1 * time.Hour) + keyStr, _, err := db.CreateAPIKey(&expired) + require.NoError(t, err) + + // Should fail validation + valid, err := db.ValidateAPIKey(keyStr) + require.NoError(t, err) + assert.False(t, valid) + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + + tt.test(t, db) + }) + } +} + +func TestGetAPIKeyByID(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + + // Create an API key + _, apiKey, err := db.CreateAPIKey(nil) + require.NoError(t, err) + require.NotNil(t, apiKey) + + // Retrieve by ID + retrievedKey, err := db.GetAPIKeyByID(apiKey.ID) + require.NoError(t, err) + require.NotNil(t, retrievedKey) + assert.Equal(t, apiKey.ID, retrievedKey.ID) + assert.Equal(t, apiKey.Prefix, retrievedKey.Prefix) +} + +func TestGetAPIKeyByIDNotFound(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + + // Try to get a non-existent key by ID + key, err := db.GetAPIKeyByID(99999) + require.Error(t, err) + assert.Nil(t, key) } diff --git a/hscontrol/db/db.go b/hscontrol/db/db.go index 4eefee91..a1429aa6 100644 --- a/hscontrol/db/db.go +++ b/hscontrol/db/db.go @@ -2,7 +2,6 @@ package db import ( "context" - "database/sql" _ "embed" "encoding/json" "errors" @@ -11,12 +10,12 @@ import ( "path/filepath" "slices" "strconv" - "strings" "time" "github.com/glebarez/sqlite" "github.com/go-gormigrate/gormigrate/v2" "github.com/juanfont/headscale/hscontrol/db/sqliteconfig" + "github.com/juanfont/headscale/hscontrol/policy" "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/util" "github.com/rs/zerolog/log" @@ -26,7 +25,6 @@ import ( "gorm.io/gorm/logger" "gorm.io/gorm/schema" "tailscale.com/net/tsaddr" - "tailscale.com/util/set" "zgo.at/zcache/v2" ) @@ -47,29 +45,19 @@ const ( contextTimeoutSecs = 10 ) -// KV is a key-value store in a psql table. For future use... -// TODO(kradalby): Is this used for anything? -type KV struct { - Key string - Value string -} - type HSDatabase struct { DB *gorm.DB - cfg *types.DatabaseConfig + cfg *types.Config regCache *zcache.Cache[types.RegistrationID, types.RegisterNode] - - baseDomain string } -// TODO(kradalby): assemble this struct from toptions or something typed -// rather than arguments. +// NewHeadscaleDatabase creates a new database connection and runs migrations. +// It accepts the full configuration to allow migrations access to policy settings. 
func NewHeadscaleDatabase( - cfg types.DatabaseConfig, - baseDomain string, + cfg *types.Config, regCache *zcache.Cache[types.RegistrationID, types.RegisterNode], ) (*HSDatabase, error) { - dbConn, err := openDB(cfg) + dbConn, err := openDB(cfg.Database) if err != nil { return nil, err } @@ -79,497 +67,10 @@ func NewHeadscaleDatabase( gormigrate.DefaultOptions, []*gormigrate.Migration{ // New migrations must be added as transactions at the end of this list. - // The initial migration here is quite messy, completely out of order and - // has no versioning and is the tech debt of not having versioned migrations - // prior to this point. This first migration is all DB changes to bring a DB - // up to 0.23.0. - { - ID: "202312101416", - Migrate: func(tx *gorm.DB) error { - if cfg.Type == types.DatabasePostgres { - tx.Exec(`create extension if not exists "uuid-ossp";`) - } + // Migrations start from v0.25.0. If upgrading from v0.24.x or earlier, + // you must first upgrade to v0.25.1 before upgrading to this version. - _ = tx.Migrator().RenameTable("namespaces", "users") - - // the big rename from Machine to Node - _ = tx.Migrator().RenameTable("machines", "nodes") - _ = tx.Migrator(). - RenameColumn(&types.Route{}, "machine_id", "node_id") - - err = tx.AutoMigrate(types.User{}) - if err != nil { - return err - } - - _ = tx.Migrator(). - RenameColumn(&types.Node{}, "namespace_id", "user_id") - _ = tx.Migrator(). - RenameColumn(&types.PreAuthKey{}, "namespace_id", "user_id") - - _ = tx.Migrator(). - RenameColumn(&types.Node{}, "ip_address", "ip_addresses") - _ = tx.Migrator().RenameColumn(&types.Node{}, "name", "hostname") - - // GivenName is used as the primary source of DNS names, make sure - // the field is populated and normalized if it was not when the - // node was registered. - _ = tx.Migrator(). - RenameColumn(&types.Node{}, "nickname", "given_name") - - dbConn.Model(&types.Node{}).Where("auth_key_id = ?", 0).Update("auth_key_id", nil) - // If the Node table has a column for registered, - // find all occurrences of "false" and drop them. Then - // remove the column. - if tx.Migrator().HasColumn(&types.Node{}, "registered") { - log.Info(). - Msg(`Database has legacy "registered" column in node, removing...`) - - nodes := types.Nodes{} - if err := tx.Not("registered").Find(&nodes).Error; err != nil { - log.Error().Err(err).Msg("Error accessing db") - } - - for _, node := range nodes { - log.Info(). - Str("node", node.Hostname). - Str("machine_key", node.MachineKey.ShortString()). - Msg("Deleting unregistered node") - if err := tx.Delete(&types.Node{}, node.ID).Error; err != nil { - log.Error(). - Err(err). - Str("node", node.Hostname). - Str("machine_key", node.MachineKey.ShortString()). - Msg("Error deleting unregistered node") - } - } - - err := tx.Migrator().DropColumn(&types.Node{}, "registered") - if err != nil { - log.Error().Err(err).Msg("Error dropping registered column") - } - } - - // Remove any invalid routes associated with a node that does not exist. 
- if tx.Migrator().HasTable(&types.Route{}) && tx.Migrator().HasTable(&types.Node{}) { - err := tx.Exec("delete from routes where node_id not in (select id from nodes)").Error - if err != nil { - return err - } - } - err = tx.AutoMigrate(&types.Route{}) - if err != nil { - return err - } - - err = tx.AutoMigrate(&types.Node{}) - if err != nil { - return err - } - - // Ensure all keys have correct prefixes - // https://github.com/tailscale/tailscale/blob/main/types/key/node.go#L35 - type result struct { - ID uint64 - MachineKey string - NodeKey string - DiscoKey string - } - var results []result - err = tx.Raw("SELECT id, node_key, machine_key, disco_key FROM nodes"). - Find(&results). - Error - if err != nil { - return err - } - - for _, node := range results { - mKey := node.MachineKey - if !strings.HasPrefix(node.MachineKey, "mkey:") { - mKey = "mkey:" + node.MachineKey - } - nKey := node.NodeKey - if !strings.HasPrefix(node.NodeKey, "nodekey:") { - nKey = "nodekey:" + node.NodeKey - } - - dKey := node.DiscoKey - if !strings.HasPrefix(node.DiscoKey, "discokey:") { - dKey = "discokey:" + node.DiscoKey - } - - err := tx.Exec( - "UPDATE nodes SET machine_key = @mKey, node_key = @nKey, disco_key = @dKey WHERE ID = @id", - sql.Named("mKey", mKey), - sql.Named("nKey", nKey), - sql.Named("dKey", dKey), - sql.Named("id", node.ID), - ).Error - if err != nil { - return err - } - } - - if tx.Migrator().HasColumn(&types.Node{}, "enabled_routes") { - log.Info(). - Msgf("Database has legacy enabled_routes column in node, migrating...") - - type NodeAux struct { - ID uint64 - EnabledRoutes []netip.Prefix `gorm:"serializer:json"` - } - - nodesAux := []NodeAux{} - err := tx.Table("nodes"). - Select("id, enabled_routes"). - Scan(&nodesAux). - Error - if err != nil { - log.Fatal().Err(err).Msg("Error accessing db") - } - for _, node := range nodesAux { - for _, prefix := range node.EnabledRoutes { - if err != nil { - log.Error(). - Err(err). - Str("enabled_route", prefix.String()). - Msg("Error parsing enabled_route") - - continue - } - - err = tx.Preload("Node"). - Where("node_id = ? AND prefix = ?", node.ID, prefix). - First(&types.Route{}). - Error - if err == nil { - log.Info(). - Str("enabled_route", prefix.String()). - Msg("Route already migrated to new table, skipping") - - continue - } - - route := types.Route{ - NodeID: node.ID, - Advertised: true, - Enabled: true, - Prefix: prefix, - } - if err := tx.Create(&route).Error; err != nil { - log.Error().Err(err).Msg("Error creating route") - } else { - log.Info(). - Uint64("node.id", route.NodeID). - Str("prefix", prefix.String()). - Msg("Route migrated") - } - } - } - - err = tx.Migrator().DropColumn(&types.Node{}, "enabled_routes") - if err != nil { - log.Error(). - Err(err). - Msg("Error dropping enabled_routes column") - } - } - - if tx.Migrator().HasColumn(&types.Node{}, "given_name") { - nodes := types.Nodes{} - if err := tx.Find(&nodes).Error; err != nil { - log.Error().Err(err).Msg("Error accessing db") - } - - for item, node := range nodes { - if node.GivenName == "" { - if err != nil { - log.Error(). - Caller(). - Str("hostname", node.Hostname). - Err(err). - Msg("Failed to normalize node hostname in DB migration") - } - - err = tx.Model(nodes[item]).Updates(types.Node{ - GivenName: node.Hostname, - }).Error - if err != nil { - log.Error(). - Caller(). - Str("hostname", node.Hostname). - Err(err). 
- Msg("Failed to save normalized node name in DB migration") - } - } - } - } - - err = tx.AutoMigrate(&KV{}) - if err != nil { - return err - } - - err = tx.AutoMigrate(&types.PreAuthKey{}) - if err != nil { - return err - } - - type preAuthKeyACLTag struct { - ID uint64 `gorm:"primary_key"` - PreAuthKeyID uint64 - Tag string - } - err = tx.AutoMigrate(&preAuthKeyACLTag{}) - if err != nil { - return err - } - - _ = tx.Migrator().DropTable("shared_machines") - - err = tx.AutoMigrate(&types.APIKey{}) - if err != nil { - return err - } - - return nil - }, - Rollback: func(tx *gorm.DB) error { - return nil - }, - }, - { - // drop key-value table, it is not used, and has not contained - // useful data for a long time or ever. - ID: "202312101430", - Migrate: func(tx *gorm.DB) error { - return tx.Migrator().DropTable("kvs") - }, - Rollback: func(tx *gorm.DB) error { - return nil - }, - }, - { - // remove last_successful_update from node table, - // no longer used. - ID: "202402151347", - Migrate: func(tx *gorm.DB) error { - _ = tx.Migrator().DropColumn(&types.Node{}, "last_successful_update") - return nil - }, - Rollback: func(tx *gorm.DB) error { - return nil - }, - }, - { - // Replace column with IP address list with dedicated - // IP v4 and v6 column. - // Note that previously, the list _could_ contain more - // than two addresses, which should not really happen. - // In that case, the first occurrence of each type will - // be kept. - ID: "2024041121742", - Migrate: func(tx *gorm.DB) error { - _ = tx.Migrator().AddColumn(&types.Node{}, "ipv4") - _ = tx.Migrator().AddColumn(&types.Node{}, "ipv6") - - type node struct { - ID uint64 `gorm:"column:id"` - Addresses string `gorm:"column:ip_addresses"` - } - - var nodes []node - - _ = tx.Raw("SELECT id, ip_addresses FROM nodes").Scan(&nodes).Error - - for _, node := range nodes { - addrs := strings.Split(node.Addresses, ",") - - if len(addrs) == 0 { - return fmt.Errorf("no addresses found for node(%d)", node.ID) - } - - var v4 *netip.Addr - var v6 *netip.Addr - - for _, addrStr := range addrs { - addr, err := netip.ParseAddr(addrStr) - if err != nil { - return fmt.Errorf("parsing IP for node(%d) from database: %w", node.ID, err) - } - - if addr.Is4() && v4 == nil { - v4 = &addr - } - - if addr.Is6() && v6 == nil { - v6 = &addr - } - } - - if v4 != nil { - err = tx.Model(&types.Node{}).Where("id = ?", node.ID).Update("ipv4", v4.String()).Error - if err != nil { - return fmt.Errorf("saving ip addresses to new columns: %w", err) - } - } - - if v6 != nil { - err = tx.Model(&types.Node{}).Where("id = ?", node.ID).Update("ipv6", v6.String()).Error - if err != nil { - return fmt.Errorf("saving ip addresses to new columns: %w", err) - } - } - } - - _ = tx.Migrator().DropColumn(&types.Node{}, "ip_addresses") - - return nil - }, - Rollback: func(tx *gorm.DB) error { - return nil - }, - }, - { - ID: "202406021630", - Migrate: func(tx *gorm.DB) error { - err := tx.AutoMigrate(&types.Policy{}) - if err != nil { - return err - } - - return nil - }, - Rollback: func(db *gorm.DB) error { return nil }, - }, - // denormalise the ACL tags for preauth keys back onto - // the preauth key table. We dont normalise or reuse and - // it is just a bunch of work for extra work. 
- { - ID: "202409271400", - Migrate: func(tx *gorm.DB) error { - preauthkeyTags := map[uint64]set.Set[string]{} - - type preAuthKeyACLTag struct { - ID uint64 `gorm:"primary_key"` - PreAuthKeyID uint64 - Tag string - } - - var aclTags []preAuthKeyACLTag - if err := tx.Find(&aclTags).Error; err != nil { - return err - } - - // Store the current tags. - for _, tag := range aclTags { - if preauthkeyTags[tag.PreAuthKeyID] == nil { - preauthkeyTags[tag.PreAuthKeyID] = set.SetOf([]string{tag.Tag}) - } else { - preauthkeyTags[tag.PreAuthKeyID].Add(tag.Tag) - } - } - - // Add tags column and restore the tags. - _ = tx.Migrator().AddColumn(&types.PreAuthKey{}, "tags") - for keyID, tags := range preauthkeyTags { - s := tags.Slice() - j, err := json.Marshal(s) - if err != nil { - return err - } - if err := tx.Model(&types.PreAuthKey{}).Where("id = ?", keyID).Update("tags", string(j)).Error; err != nil { - return err - } - } - - // Drop the old table. - _ = tx.Migrator().DropTable(&preAuthKeyACLTag{}) - - return nil - }, - Rollback: func(db *gorm.DB) error { return nil }, - }, - { - // Pick up new user fields used for OIDC and to - // populate the user with more interesting information. - ID: "202407191627", - Migrate: func(tx *gorm.DB) error { - // Fix an issue where the automigration in GORM expected a constraint to - // exists that didn't, and add the one it wanted. - // Fixes https://github.com/juanfont/headscale/issues/2351 - if cfg.Type == types.DatabasePostgres { - err := tx.Exec(` -BEGIN; -DO $$ -BEGIN - IF NOT EXISTS ( - SELECT 1 FROM pg_constraint - WHERE conname = 'uni_users_name' - ) THEN - ALTER TABLE users ADD CONSTRAINT uni_users_name UNIQUE (name); - END IF; -END $$; - -DO $$ -BEGIN - IF EXISTS ( - SELECT 1 FROM pg_constraint - WHERE conname = 'users_name_key' - ) THEN - ALTER TABLE users DROP CONSTRAINT users_name_key; - END IF; -END $$; -COMMIT; -`).Error - if err != nil { - return fmt.Errorf("failed to rename constraint: %w", err) - } - } - - err := tx.AutoMigrate(&types.User{}) - if err != nil { - return fmt.Errorf("automigrating types.User: %w", err) - } - - return nil - }, - Rollback: func(db *gorm.DB) error { return nil }, - }, - { - // The unique constraint of Name has been dropped - // in favour of a unique together of name and - // provider identity. - ID: "202408181235", - Migrate: func(tx *gorm.DB) error { - err := tx.AutoMigrate(&types.User{}) - if err != nil { - return fmt.Errorf("automigrating types.User: %w", err) - } - - // Set up indexes and unique constraints outside of GORM, it does not support - // conditional unique constraints. 
- // This ensures the following: - // - A user name and provider_identifier is unique - // - A provider_identifier is unique - // - A user name is unique if there is no provider_identifier is not set - for _, idx := range []string{ - "DROP INDEX IF EXISTS idx_provider_identifier", - "DROP INDEX IF EXISTS idx_name_provider_identifier", - "CREATE UNIQUE INDEX IF NOT EXISTS idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;", - "CREATE UNIQUE INDEX IF NOT EXISTS idx_name_provider_identifier ON users (name,provider_identifier);", - "CREATE UNIQUE INDEX IF NOT EXISTS idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;", - } { - err = tx.Exec(idx).Error - if err != nil { - return fmt.Errorf("creating username index: %w", err) - } - } - - return nil - }, - Rollback: func(db *gorm.DB) error { return nil }, - }, + // v0.25.0 { // Add a constraint to routes ensuring they cannot exist without a node. ID: "202501221827", @@ -639,6 +140,7 @@ AND auth_key_id NOT IN ( }, Rollback: func(db *gorm.DB) error { return nil }, }, + // v0.26.0 // Migrate all routes from the Route table to the new field ApprovedRoutes // in the Node table. Then drop the Route table. { @@ -733,6 +235,7 @@ AND auth_key_id NOT IN ( }, Rollback: func(db *gorm.DB) error { return nil }, }, + // v0.27.0 // Schema migration to ensure all tables match the expected schema. // This migration recreates all tables to match the exact structure in schema.sql, // preserving all data during the process. @@ -741,7 +244,7 @@ AND auth_key_id NOT IN ( ID: "202507021200", Migrate: func(tx *gorm.DB) error { // Only run on SQLite - if cfg.Type != types.DatabaseSqlite { + if cfg.Database.Type != types.DatabaseSqlite { log.Info().Msg("Skipping schema migration on non-SQLite database") return nil } @@ -932,6 +435,7 @@ AND auth_key_id NOT IN ( }, Rollback: func(db *gorm.DB) error { return nil }, }, + // v0.27.1 { // Drop all tables that are no longer in use and has existed. // They potentially still present from broken migrations in the past. @@ -991,18 +495,266 @@ AND auth_key_id NOT IN ( // - NEVER use gorm.AutoMigrate, write the exact migration steps needed // - AutoMigrate depends on the struct staying exactly the same, which it won't over time. // - Never write migrations that requires foreign keys to be disabled. + // - ALL errors in migrations must be handled properly. + + { + // Add columns for prefix and hash for pre auth keys, implementing + // them with the same security model as api keys. 
+ ID: "202511011637-preauthkey-bcrypt", + Migrate: func(tx *gorm.DB) error { + // Check and add prefix column if it doesn't exist + if !tx.Migrator().HasColumn(&types.PreAuthKey{}, "prefix") { + err := tx.Migrator().AddColumn(&types.PreAuthKey{}, "prefix") + if err != nil { + return fmt.Errorf("adding prefix column: %w", err) + } + } + + // Check and add hash column if it doesn't exist + if !tx.Migrator().HasColumn(&types.PreAuthKey{}, "hash") { + err := tx.Migrator().AddColumn(&types.PreAuthKey{}, "hash") + if err != nil { + return fmt.Errorf("adding hash column: %w", err) + } + } + + // Create partial unique index to allow multiple legacy keys (NULL/empty prefix) + // while enforcing uniqueness for new bcrypt-based keys + err := tx.Exec("CREATE UNIQUE INDEX IF NOT EXISTS idx_pre_auth_keys_prefix ON pre_auth_keys(prefix) WHERE prefix IS NOT NULL AND prefix != ''").Error + if err != nil { + return fmt.Errorf("creating prefix index: %w", err) + } + + return nil + }, + Rollback: func(db *gorm.DB) error { return nil }, + }, + { + ID: "202511122344-remove-newline-index", + Migrate: func(tx *gorm.DB) error { + // Reformat multi-line indexes to single-line for consistency + // This migration drops and recreates the three user identity indexes + // to match the single-line format expected by schema validation + + // Drop existing multi-line indexes + dropIndexes := []string{ + `DROP INDEX IF EXISTS idx_provider_identifier`, + `DROP INDEX IF EXISTS idx_name_provider_identifier`, + `DROP INDEX IF EXISTS idx_name_no_provider_identifier`, + } + + for _, dropSQL := range dropIndexes { + err := tx.Exec(dropSQL).Error + if err != nil { + return fmt.Errorf("dropping index: %w", err) + } + } + + // Recreate indexes in single-line format + createIndexes := []string{ + `CREATE UNIQUE INDEX idx_provider_identifier ON users(provider_identifier) WHERE provider_identifier IS NOT NULL`, + `CREATE UNIQUE INDEX idx_name_provider_identifier ON users(name, provider_identifier)`, + `CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users(name) WHERE provider_identifier IS NULL`, + } + + for _, createSQL := range createIndexes { + err := tx.Exec(createSQL).Error + if err != nil { + return fmt.Errorf("creating index: %w", err) + } + } + + return nil + }, + Rollback: func(db *gorm.DB) error { return nil }, + }, + { + // Rename forced_tags column to tags in nodes table. + // This must run after migration 202505141324 which creates tables with forced_tags. + ID: "202511131445-node-forced-tags-to-tags", + Migrate: func(tx *gorm.DB) error { + // Rename the column from forced_tags to tags + err := tx.Migrator().RenameColumn(&types.Node{}, "forced_tags", "tags") + if err != nil { + return fmt.Errorf("renaming forced_tags to tags: %w", err) + } + + return nil + }, + Rollback: func(db *gorm.DB) error { return nil }, + }, + { + // Migrate RequestTags from host_info JSON to tags column. + // In 0.27.x, tags from --advertise-tags (ValidTags) were stored only in + // host_info.RequestTags, not in the tags column (formerly forced_tags). + // This migration validates RequestTags against the policy's tagOwners + // and merges validated tags into the tags column. + // Fixes: https://github.com/juanfont/headscale/issues/3006 + ID: "202601121700-migrate-hostinfo-request-tags", + Migrate: func(tx *gorm.DB) error { + // 1. 
Load policy from file or database based on configuration + policyData, err := PolicyBytes(tx, cfg) + if err != nil { + log.Warn().Err(err).Msg("Failed to load policy, skipping RequestTags migration (tags will be validated on node reconnect)") + return nil + } + + if len(policyData) == 0 { + log.Info().Msg("No policy found, skipping RequestTags migration (tags will be validated on node reconnect)") + return nil + } + + // 2. Load users and nodes to create PolicyManager + users, err := ListUsers(tx) + if err != nil { + return fmt.Errorf("loading users for RequestTags migration: %w", err) + } + + nodes, err := ListNodes(tx) + if err != nil { + return fmt.Errorf("loading nodes for RequestTags migration: %w", err) + } + + // 3. Create PolicyManager (handles HuJSON parsing, groups, nested tags, etc.) + polMan, err := policy.NewPolicyManager(policyData, users, nodes.ViewSlice()) + if err != nil { + log.Warn().Err(err).Msg("Failed to parse policy, skipping RequestTags migration (tags will be validated on node reconnect)") + return nil + } + + // 4. Process each node + for _, node := range nodes { + if node.Hostinfo == nil { + continue + } + + requestTags := node.Hostinfo.RequestTags + if len(requestTags) == 0 { + continue + } + + existingTags := node.Tags + + var validatedTags, rejectedTags []string + + nodeView := node.View() + + for _, tag := range requestTags { + if polMan.NodeCanHaveTag(nodeView, tag) { + if !slices.Contains(existingTags, tag) { + validatedTags = append(validatedTags, tag) + } + } else { + rejectedTags = append(rejectedTags, tag) + } + } + + if len(validatedTags) == 0 { + if len(rejectedTags) > 0 { + log.Debug(). + Uint64("node.id", uint64(node.ID)). + Str("node.name", node.Hostname). + Strs("rejected_tags", rejectedTags). + Msg("RequestTags rejected during migration (not authorized)") + } + + continue + } + + mergedTags := append(existingTags, validatedTags...) + slices.Sort(mergedTags) + mergedTags = slices.Compact(mergedTags) + + tagsJSON, err := json.Marshal(mergedTags) + if err != nil { + return fmt.Errorf("serializing merged tags for node %d: %w", node.ID, err) + } + + err = tx.Exec("UPDATE nodes SET tags = ? WHERE id = ?", string(tagsJSON), node.ID).Error + if err != nil { + return fmt.Errorf("updating tags for node %d: %w", node.ID, err) + } + + log.Info(). + Uint64("node.id", uint64(node.ID)). + Str("node.name", node.Hostname). + Strs("validated_tags", validatedTags). + Strs("rejected_tags", rejectedTags). + Strs("existing_tags", existingTags). + Strs("merged_tags", mergedTags). 
+ Msg("Migrated validated RequestTags from host_info to tags column") + } + + return nil + }, + Rollback: func(db *gorm.DB) error { return nil }, + }, }, ) - if err := runMigrations(cfg, dbConn, migrations); err != nil { - log.Fatal().Err(err).Msgf("Migration failed: %v", err) + migrations.InitSchema(func(tx *gorm.DB) error { + // Create all tables using AutoMigrate + err := tx.AutoMigrate( + &types.User{}, + &types.PreAuthKey{}, + &types.APIKey{}, + &types.Node{}, + &types.Policy{}, + ) + if err != nil { + return err + } + + // Drop all indexes (both GORM-created and potentially pre-existing ones) + // to ensure we can recreate them in the correct format + dropIndexes := []string{ + `DROP INDEX IF EXISTS "idx_users_deleted_at"`, + `DROP INDEX IF EXISTS "idx_api_keys_prefix"`, + `DROP INDEX IF EXISTS "idx_policies_deleted_at"`, + `DROP INDEX IF EXISTS "idx_provider_identifier"`, + `DROP INDEX IF EXISTS "idx_name_provider_identifier"`, + `DROP INDEX IF EXISTS "idx_name_no_provider_identifier"`, + `DROP INDEX IF EXISTS "idx_pre_auth_keys_prefix"`, + } + + for _, dropSQL := range dropIndexes { + err := tx.Exec(dropSQL).Error + if err != nil { + return err + } + } + + // Recreate indexes without backticks to match schema.sql format + indexes := []string{ + `CREATE INDEX idx_users_deleted_at ON users(deleted_at)`, + `CREATE UNIQUE INDEX idx_api_keys_prefix ON api_keys(prefix)`, + `CREATE INDEX idx_policies_deleted_at ON policies(deleted_at)`, + `CREATE UNIQUE INDEX idx_provider_identifier ON users(provider_identifier) WHERE provider_identifier IS NOT NULL`, + `CREATE UNIQUE INDEX idx_name_provider_identifier ON users(name, provider_identifier)`, + `CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users(name) WHERE provider_identifier IS NULL`, + `CREATE UNIQUE INDEX idx_pre_auth_keys_prefix ON pre_auth_keys(prefix) WHERE prefix IS NOT NULL AND prefix != ''`, + } + + for _, indexSQL := range indexes { + err := tx.Exec(indexSQL).Error + if err != nil { + return err + } + } + + return nil + }) + + err = runMigrations(cfg.Database, dbConn, migrations) + if err != nil { + return nil, fmt.Errorf("migration failed: %w", err) } // Validate that the schema ends up in the expected state. // This is currently only done on sqlite as squibble does not // support Postgres and we use our sqlite schema as our source of // truth. - if cfg.Type == types.DatabaseSqlite { + if cfg.Database.Type == types.DatabaseSqlite { sqlConn, err := dbConn.DB() if err != nil { return nil, fmt.Errorf("getting DB from gorm: %w", err) @@ -1034,10 +786,8 @@ AND auth_key_id NOT IN ( db := HSDatabase{ DB: dbConn, - cfg: &cfg, + cfg: cfg, regCache: regCache, - - baseDomain: baseDomain, } return &db, err @@ -1156,13 +906,8 @@ func runMigrations(cfg types.DatabaseConfig, dbConn *gorm.DB, migrations *gormig // These are migrations that perform complex schema changes that GORM cannot handle safely with FK enabled // NO NEW MIGRATIONS SHOULD BE ADDED HERE. ALL NEW MIGRATIONS MUST RUN WITH FOREIGN KEYS ENABLED. 
migrationsRequiringFKDisabled := map[string]bool{ - "202312101416": true, // Initial migration with complex table/column renames - "202402151347": true, // Migration that removes last_successful_update column - "2024041121742": true, // Migration that changes IP address storage format - "202407191627": true, // User table automigration with FK constraint issues - "202408181235": true, // User table automigration with FK constraint issues - "202501221827": true, // Route table automigration with FK constraint issues - "202501311657": true, // PreAuthKey table automigration with FK constraint issues + "202501221827": true, // Route table automigration with FK constraint issues + "202501311657": true, // PreAuthKey table automigration with FK constraint issues // Add other migration IDs here as they are identified to need FK disabled } @@ -1176,21 +921,17 @@ func runMigrations(cfg types.DatabaseConfig, dbConn *gorm.DB, migrations *gormig // Only IDs that are in the migrationsRequiringFKDisabled map will be processed with FK disabled // any other new migrations are ran after. migrationIDs := []string{ - "202312101416", - "202312101430", - "202402151347", - "2024041121742", - "202406021630", - "202407191627", - "202408181235", - "202409271400", + // v0.25.0 "202501221827", "202501311657", "202502070949", + + // v0.26.0 "202502131714", "202502171819", "202505091439", "202505141324", + // As of 2025-07-02, no new IDs should be added here. // They will be ran by the migrations.Migrate() call below. } @@ -1289,7 +1030,7 @@ func (hsdb *HSDatabase) Close() error { return err } - if hsdb.cfg.Type == types.DatabaseSqlite && hsdb.cfg.Sqlite.WriteAheadLog { + if hsdb.cfg.Database.Type == types.DatabaseSqlite && hsdb.cfg.Database.Sqlite.WriteAheadLog { db.Exec("VACUUM") } diff --git a/hscontrol/db/db_test.go b/hscontrol/db/db_test.go index 47245c39..3cd0d14e 100644 --- a/hscontrol/db/db_test.go +++ b/hscontrol/db/db_test.go @@ -2,19 +2,14 @@ package db import ( "database/sql" - "net/netip" "os" "os/exec" "path/filepath" - "slices" "strings" "testing" "time" - "github.com/google/go-cmp/cmp" - "github.com/google/go-cmp/cmp/cmpopts" "github.com/juanfont/headscale/hscontrol/types" - "github.com/juanfont/headscale/hscontrol/util" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "gorm.io/gorm" @@ -25,400 +20,14 @@ import ( // and validates data integrity after migration. All migrations that require data validation // should be added here. 
func TestSQLiteMigrationAndDataValidation(t *testing.T) { - ipp := func(p string) netip.Prefix { - return netip.MustParsePrefix(p) - } - r := func(id uint64, p string, a, e, i bool) types.Route { - return types.Route{ - NodeID: id, - Prefix: ipp(p), - Advertised: a, - Enabled: e, - IsPrimary: i, - } - } tests := []struct { dbPath string wantFunc func(*testing.T, *HSDatabase) }{ - { - dbPath: "testdata/sqlite/0-22-3-to-0-23-0-routes-are-dropped-2063_dump.sql", - wantFunc: func(t *testing.T, hsdb *HSDatabase) { - t.Helper() - // Comprehensive data preservation validation for 0.22.3->0.23.0 migration - // Expected data from dump: 4 users, 17 pre_auth_keys, 14 machines/nodes, 12 routes - - // Verify users data preservation - should have 4 users - users, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.User, error) { - return ListUsers(rx) - }) - require.NoError(t, err) - assert.Len(t, users, 4, "should preserve all 4 users from original schema") - - // Verify pre_auth_keys data preservation - should have 17 keys - preAuthKeys, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.PreAuthKey, error) { - var keys []types.PreAuthKey - err := rx.Find(&keys).Error - return keys, err - }) - require.NoError(t, err) - assert.Len(t, preAuthKeys, 17, "should preserve all 17 pre_auth_keys from original schema") - - // Verify all nodes data preservation - should have 14 nodes - allNodes, err := Read(hsdb.DB, func(rx *gorm.DB) (types.Nodes, error) { - return ListNodes(rx) - }) - require.NoError(t, err) - assert.Len(t, allNodes, 14, "should preserve all 14 machines/nodes from original schema") - - // Verify specific nodes and their route migration with detailed validation - nodes, err := Read(hsdb.DB, func(rx *gorm.DB) (types.Nodes, error) { - n1, err := GetNodeByID(rx, 1) - n26, err := GetNodeByID(rx, 26) - n31, err := GetNodeByID(rx, 31) - n32, err := GetNodeByID(rx, 32) - if err != nil { - return nil, err - } - - return types.Nodes{n1, n26, n31, n32}, nil - }) - require.NoError(t, err) - assert.Len(t, nodes, 4, "should have retrieved 4 specific nodes") - - // Validate specific node data from dump file - nodesByID := make(map[uint64]*types.Node) - for i := range nodes { - nodesByID[nodes[i].ID.Uint64()] = nodes[i] - } - - node1 := nodesByID[1] - node26 := nodesByID[26] - node31 := nodesByID[31] - node32 := nodesByID[32] - - require.NotNil(t, node1, "node 1 should exist") - require.NotNil(t, node26, "node 26 should exist") - require.NotNil(t, node31, "node 31 should exist") - require.NotNil(t, node32, "node 32 should exist") - - // Validate node data using cmp.Diff - expectedNodes := map[uint64]struct { - Hostname string - GivenName string - IPv4 string - }{ - 1: {Hostname: "test_hostname", GivenName: "test_given_name", IPv4: "100.64.0.1"}, - 26: {Hostname: "test_hostname", GivenName: "test_given_name", IPv4: "100.64.0.19"}, - 31: {Hostname: "test_hostname", GivenName: "test_given_name", IPv4: "100.64.0.7"}, - 32: {Hostname: "test_hostname", GivenName: "test_given_name", IPv4: "100.64.0.11"}, - } - - for nodeID, expected := range expectedNodes { - node := nodesByID[nodeID] - require.NotNil(t, node, "node %d should exist", nodeID) - - actual := struct { - Hostname string - GivenName string - IPv4 string - }{ - Hostname: node.Hostname, - GivenName: node.GivenName, - IPv4: node.IPv4.String(), - } - - if diff := cmp.Diff(expected, actual); diff != "" { - t.Errorf("TestSQLiteMigrationAndDataValidation() node %d mismatch (-want +got):\n%s", nodeID, diff) - } - } - - // Validate that routes were properly migrated from 
routes table to approved_routes - // Based on the dump file routes data: - // Node 1 (machine_id 1): routes 1,2,3 (0.0.0.0/0 enabled, ::/0 enabled, 10.9.110.0/24 enabled+primary) - // Node 26 (machine_id 26): route 6 (172.100.100.0/24 enabled+primary), route 7 (172.100.100.0/24 disabled) - // Node 31 (machine_id 31): routes 8,10 (0.0.0.0/0 enabled, ::/0 enabled), routes 9,11 (duplicates disabled) - // Node 32 (machine_id 32): route 12 (192.168.0.24/32 enabled+primary) - want := [][]netip.Prefix{ - {ipp("0.0.0.0/0"), ipp("10.9.110.0/24"), ipp("::/0")}, // node 1: 3 enabled routes - {ipp("172.100.100.0/24")}, // node 26: 1 enabled route - {ipp("0.0.0.0/0"), ipp("::/0")}, // node 31: 2 enabled routes - {ipp("192.168.0.24/32")}, // node 32: 1 enabled route - } - var got [][]netip.Prefix - for _, node := range nodes { - got = append(got, node.ApprovedRoutes) - } - - if diff := cmp.Diff(want, got, util.PrefixComparer); diff != "" { - t.Errorf("TestSQLiteMigrationAndDataValidation() route migration mismatch (-want +got):\n%s", diff) - } - - // Verify routes table was dropped after migration - var routesTableExists bool - err = hsdb.DB.Raw("SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='routes'").Row().Scan(&routesTableExists) - require.NoError(t, err) - assert.False(t, routesTableExists, "routes table should have been dropped after migration") - }, - }, - { - dbPath: "testdata/sqlite/0-22-3-to-0-23-0-routes-fail-foreign-key-2076_dump.sql", - wantFunc: func(t *testing.T, hsdb *HSDatabase) { - t.Helper() - // Comprehensive data preservation validation for foreign key constraint issue case - // Expected data from dump: 4 users, 2 pre_auth_keys, 8 nodes - - // Verify users data preservation - users, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.User, error) { - return ListUsers(rx) - }) - require.NoError(t, err) - assert.Len(t, users, 4, "should preserve all 4 users from original schema") - - // Verify pre_auth_keys data preservation - preAuthKeys, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.PreAuthKey, error) { - var keys []types.PreAuthKey - err := rx.Find(&keys).Error - return keys, err - }) - require.NoError(t, err) - assert.Len(t, preAuthKeys, 2, "should preserve all 2 pre_auth_keys from original schema") - - // Verify all nodes data preservation - allNodes, err := Read(hsdb.DB, func(rx *gorm.DB) (types.Nodes, error) { - return ListNodes(rx) - }) - require.NoError(t, err) - assert.Len(t, allNodes, 8, "should preserve all 8 nodes from original schema") - - // Verify specific node route migration - node, err := Read(hsdb.DB, func(rx *gorm.DB) (*types.Node, error) { - return GetNodeByID(rx, 13) - }) - require.NoError(t, err) - - assert.Len(t, node.ApprovedRoutes, 3) - _ = types.Routes{ - // These routes exists, but have no nodes associated with them - // when the migration starts. 
- // r(1, "0.0.0.0/0", true, false), - // r(1, "::/0", true, false), - // r(3, "0.0.0.0/0", true, false), - // r(3, "::/0", true, false), - // r(5, "0.0.0.0/0", true, false), - // r(5, "::/0", true, false), - // r(6, "0.0.0.0/0", true, false), - // r(6, "::/0", true, false), - // r(6, "10.0.0.0/8", true, false, false), - // r(7, "0.0.0.0/0", true, false), - // r(7, "::/0", true, false), - // r(7, "10.0.0.0/8", true, false, false), - // r(9, "0.0.0.0/0", true, false), - // r(9, "::/0", true, false), - // r(9, "10.0.0.0/8", true, false), - // r(11, "0.0.0.0/0", true, false), - // r(11, "::/0", true, false), - // r(11, "10.0.0.0/8", true, true), - // r(12, "0.0.0.0/0", true, false), - // r(12, "::/0", true, false), - // r(12, "10.0.0.0/8", true, false, false), - // - // These nodes exists, so routes should be kept. - r(13, "10.0.0.0/8", true, false, false), - r(13, "0.0.0.0/0", true, true, false), - r(13, "::/0", true, true, false), - r(13, "10.18.80.2/32", true, true, true), - } - want := []netip.Prefix{ipp("0.0.0.0/0"), ipp("10.18.80.2/32"), ipp("::/0")} - if diff := cmp.Diff(want, node.ApprovedRoutes, util.PrefixComparer); diff != "" { - t.Errorf("TestSQLiteMigrationAndDataValidation() route migration mismatch (-want +got):\n%s", diff) - } - - // Verify routes table was dropped after migration - var routesTableExists bool - err = hsdb.DB.Raw("SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='routes'").Row().Scan(&routesTableExists) - require.NoError(t, err) - assert.False(t, routesTableExists, "routes table should have been dropped after migration") - }, - }, // at 14:15:06 ❯ go run ./cmd/headscale preauthkeys list // ID | Key | Reusable | Ephemeral | Used | Expiration | Created | Tags // 1 | 09b28f.. | false | false | false | 2024-09-27 | 2024-09-27 | tag:derp // 2 | 3112b9.. | false | false | false | 2024-09-27 | 2024-09-27 | tag:derp - // 3 | 7c23b9.. | false | false | false | 2024-09-27 | 2024-09-27 | tag:derp,tag:merp - // 4 | f20155.. | false | false | false | 2024-09-27 | 2024-09-27 | tag:test - // 5 | b212b9.. 
| false | false | false | 2024-09-27 | 2024-09-27 | tag:test,tag:woop,tag:dedu - { - dbPath: "testdata/sqlite/0-23-0-to-0-24-0-preauthkey-tags-table_dump.sql", - wantFunc: func(t *testing.T, hsdb *HSDatabase) { - t.Helper() - // Comprehensive data preservation validation for pre-auth key tags migration - // Expected data from dump: 2 users (kratest, testkra), 5 pre_auth_keys with specific tags - - // Verify users data preservation with specific user data - users, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.User, error) { - return ListUsers(rx) - }) - require.NoError(t, err) - assert.Len(t, users, 2, "should preserve all 2 users from original schema") - - // Validate specific user data from dump file using cmp.Diff - expectedUsers := []types.User{ - {Model: gorm.Model{ID: 1}, Name: "kratest"}, - {Model: gorm.Model{ID: 2}, Name: "testkra"}, - } - - if diff := cmp.Diff(expectedUsers, users, - cmpopts.IgnoreFields(types.User{}, "CreatedAt", "UpdatedAt", "DeletedAt", "DisplayName", "Email", "ProviderIdentifier", "Provider", "ProfilePicURL")); diff != "" { - t.Errorf("TestSQLiteMigrationAndDataValidation() users mismatch (-want +got):\n%s", diff) - } - - // Create maps for easier access in later validations - usersByName := make(map[string]*types.User) - for i := range users { - usersByName[users[i].Name] = &users[i] - } - kratest := usersByName["kratest"] - testkra := usersByName["testkra"] - - // Verify all pre_auth_keys data preservation - allKeys, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.PreAuthKey, error) { - var keys []types.PreAuthKey - err := rx.Find(&keys).Error - return keys, err - }) - require.NoError(t, err) - assert.Len(t, allKeys, 5, "should preserve all 5 pre_auth_keys from original schema") - - // Verify specific pre-auth keys and their tag migration with exact data validation - keys, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.PreAuthKey, error) { - kratest, err := ListPreAuthKeysByUser(rx, 1) // kratest - if err != nil { - return nil, err - } - - testkra, err := ListPreAuthKeysByUser(rx, 2) // testkra - if err != nil { - return nil, err - } - - return append(kratest, testkra...), nil - }) - require.NoError(t, err) - assert.Len(t, keys, 5) - - // Create map for easier validation by ID - keysByID := make(map[uint64]*types.PreAuthKey) - for i := range keys { - keysByID[keys[i].ID] = &keys[i] - } - - // Validate specific pre-auth key data and tag migration from pre_auth_key_acl_tags table - key1 := keysByID[1] - key2 := keysByID[2] - key3 := keysByID[3] - key4 := keysByID[4] - key5 := keysByID[5] - - require.NotNil(t, key1, "pre_auth_key 1 should exist") - require.NotNil(t, key2, "pre_auth_key 2 should exist") - require.NotNil(t, key3, "pre_auth_key 3 should exist") - require.NotNil(t, key4, "pre_auth_key 4 should exist") - require.NotNil(t, key5, "pre_auth_key 5 should exist") - - // Validate specific pre-auth key data and tag migration using cmp.Diff - expectedKeys := []types.PreAuthKey{ - { - ID: 1, - Key: "09b28f8c3351984874d46dace0a70177a8721933a950b663", - UserID: kratest.ID, - Tags: []string{"tag:derp"}, - }, - { - ID: 2, - Key: "3112b953cb344191b2d5aec1b891250125bf7b437eac5d26", - UserID: kratest.ID, - Tags: []string{"tag:derp"}, - }, - { - ID: 3, - Key: "7c23b9f215961e7609527aef78bf82fb19064b002d78c36f", - UserID: kratest.ID, - Tags: []string{"tag:derp", "tag:merp"}, - }, - { - ID: 4, - Key: "f2015583852b725220cc4b107fb288a4cf7ac259bd458a32", - UserID: testkra.ID, - Tags: []string{"tag:test"}, - }, - { - ID: 5, - Key: 
"b212b990165e897944dd3772786544402729fb349da50f57", - UserID: testkra.ID, - Tags: []string{"tag:test", "tag:woop", "tag:dedu"}, - }, - } - - if diff := cmp.Diff(expectedKeys, keys, cmp.Comparer(func(a, b []string) bool { - slices.Sort(a) - slices.Sort(b) - return slices.Equal(a, b) - }), cmpopts.IgnoreFields(types.PreAuthKey{}, "User", "CreatedAt", "Reusable", "Ephemeral", "Used", "Expiration")); diff != "" { - t.Errorf("TestSQLiteMigrationAndDataValidation() pre-auth key tags migration mismatch (-want +got):\n%s", diff) - } - - // Verify pre_auth_key_acl_tags table was dropped after migration - if hsdb.DB.Migrator().HasTable("pre_auth_key_acl_tags") { - t.Errorf("TestSQLiteMigrationAndDataValidation() table pre_auth_key_acl_tags should not exist after migration") - } - }, - }, - { - dbPath: "testdata/sqlite/0-23-0-to-0-24-0-no-more-special-types_dump.sql", - wantFunc: func(t *testing.T, hsdb *HSDatabase) { - t.Helper() - // Comprehensive data preservation validation for special types removal migration - // Expected data from dump: 2 users, 2 pre_auth_keys, 12 nodes - - // Verify users data preservation - users, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.User, error) { - return ListUsers(rx) - }) - require.NoError(t, err) - assert.Len(t, users, 2, "should preserve all 2 users from original schema") - - // Verify pre_auth_keys data preservation - preAuthKeys, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.PreAuthKey, error) { - var keys []types.PreAuthKey - err := rx.Find(&keys).Error - return keys, err - }) - require.NoError(t, err) - assert.Len(t, preAuthKeys, 2, "should preserve all 2 pre_auth_keys from original schema") - - // Verify nodes data preservation and field validation - nodes, err := Read(hsdb.DB, func(rx *gorm.DB) (types.Nodes, error) { - return ListNodes(rx) - }) - require.NoError(t, err) - assert.Len(t, nodes, 12, "should preserve all 12 nodes from original schema") - - for _, node := range nodes { - assert.Falsef(t, node.MachineKey.IsZero(), "expected non zero machinekey") - assert.Contains(t, node.MachineKey.String(), "mkey:") - assert.Falsef(t, node.NodeKey.IsZero(), "expected non zero nodekey") - assert.Contains(t, node.NodeKey.String(), "nodekey:") - assert.Falsef(t, node.DiscoKey.IsZero(), "expected non zero discokey") - assert.Contains(t, node.DiscoKey.String(), "discokey:") - assert.NotNil(t, node.IPv4) - assert.NotNil(t, node.IPv6) - assert.Len(t, node.Endpoints, 1) - assert.NotNil(t, node.Hostinfo) - assert.NotNil(t, node.MachineKey) - } - }, - }, { dbPath: "testdata/sqlite/failing-node-preauth-constraint_dump.sql", wantFunc: func(t *testing.T, hsdb *HSDatabase) { @@ -458,251 +67,81 @@ func TestSQLiteMigrationAndDataValidation(t *testing.T) { } }, }, + // Test for RequestTags migration (202601121700-migrate-hostinfo-request-tags) + // and forced_tags->tags rename migration (202511131445-node-forced-tags-to-tags) + // + // This test validates that: + // 1. The forced_tags column is renamed to tags + // 2. RequestTags from host_info are validated against policy tagOwners + // 3. Authorized tags are migrated to the tags column + // 4. Unauthorized tags are rejected + // 5. Existing tags are preserved + // 6. 
Group membership is evaluated for tag authorization { - dbPath: "testdata/sqlite/wrongly-migrated-schema-0.25.1_dump.sql", + dbPath: "testdata/sqlite/request_tags_migration_test.sql", wantFunc: func(t *testing.T, hsdb *HSDatabase) { t.Helper() - // Test migration of a database that was wrongly migrated in 0.25.1 - // This database has several issues: - // 1. Missing proper user unique constraints (idx_provider_identifier, idx_name_provider_identifier, idx_name_no_provider_identifier) - // 2. Still has routes table that should have been migrated to node.approved_routes - // 3. Wrong FOREIGN KEY constraint on pre_auth_keys (CASCADE instead of SET NULL) - // 4. Missing some required indexes - // Verify users table data is preserved with specific user data - users, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.User, error) { - return ListUsers(rx) - }) - require.NoError(t, err) - assert.Len(t, users, 2, "should preserve existing users") - - // Validate specific user data from dump file using cmp.Diff - expectedUsers := []types.User{ - {Model: gorm.Model{ID: 1}, Name: "user2"}, - {Model: gorm.Model{ID: 2}, Name: "user1"}, - } - - if diff := cmp.Diff(expectedUsers, users, - cmpopts.IgnoreFields(types.User{}, "CreatedAt", "UpdatedAt", "DeletedAt", "DisplayName", "Email", "ProviderIdentifier", "Provider", "ProfilePicURL")); diff != "" { - t.Errorf("TestSQLiteMigrationAndDataValidation() users mismatch (-want +got):\n%s", diff) - } - - // Create maps for easier access in later validations - usersByName := make(map[string]*types.User) - for i := range users { - usersByName[users[i].Name] = &users[i] - } - user1 := usersByName["user1"] - user2 := usersByName["user2"] - - // Verify nodes table data is preserved and routes migrated to approved_routes nodes, err := Read(hsdb.DB, func(rx *gorm.DB) (types.Nodes, error) { return ListNodes(rx) }) require.NoError(t, err) - assert.Len(t, nodes, 3, "should preserve existing nodes") + require.Len(t, nodes, 7, "should have all 7 nodes") - // Validate specific node data from dump file - nodesByID := make(map[uint64]*types.Node) - for i := range nodes { - nodesByID[nodes[i].ID.Uint64()] = nodes[i] - } - - node1 := nodesByID[1] - node2 := nodesByID[2] - node3 := nodesByID[3] - require.NotNil(t, node1, "node 1 should exist") - require.NotNil(t, node2, "node 2 should exist") - require.NotNil(t, node3, "node 3 should exist") - - // Validate specific node field data using cmp.Diff - expectedNodes := map[uint64]struct { - Hostname string - GivenName string - IPv4 string - IPv6 string - UserID uint - }{ - 1: {Hostname: "node1", GivenName: "node1", IPv4: "100.64.0.1", IPv6: "fd7a:115c:a1e0::1", UserID: user2.ID}, - 2: {Hostname: "node2", GivenName: "node2", IPv4: "100.64.0.2", IPv6: "fd7a:115c:a1e0::2", UserID: user2.ID}, - 3: {Hostname: "node3", GivenName: "node3", IPv4: "100.64.0.3", IPv6: "fd7a:115c:a1e0::3", UserID: user1.ID}, - } - - for nodeID, expected := range expectedNodes { - node := nodesByID[nodeID] - require.NotNil(t, node, "node %d should exist", nodeID) - - actual := struct { - Hostname string - GivenName string - IPv4 string - IPv6 string - UserID uint - }{ - Hostname: node.Hostname, - GivenName: node.GivenName, - IPv4: node.IPv4.String(), - IPv6: func() string { - if node.IPv6 != nil { - return node.IPv6.String() - } else { - return "" - } - }(), - UserID: node.UserID, + // Helper to find node by hostname + findNode := func(hostname string) *types.Node { + for _, n := range nodes { + if n.Hostname == hostname { + return n + } } - if diff := 
cmp.Diff(expected, actual); diff != "" { - t.Errorf("TestSQLiteMigrationAndDataValidation() node %d basic fields mismatch (-want +got):\n%s", nodeID, diff) - } - - // Special validation for MachineKey content for node 1 only - if nodeID == 1 { - assert.Contains(t, node.MachineKey.String(), "mkey:1efe4388236c1c83fe0a19d3ce7c321ab81e138a4da57917c231ce4c01944409") - } + return nil } - // Check that routes were migrated from routes table to node.approved_routes using cmp.Diff - // Original routes table had 4 routes for nodes 1, 2, 3: - // Node 1: 0.0.0.0/0 (enabled), ::/0 (enabled) -> should have 2 approved routes - // Node 2: 192.168.100.0/24 (enabled) -> should have 1 approved route - // Node 3: 10.0.0.0/8 (disabled) -> should have 0 approved routes - expectedRoutes := map[uint64][]netip.Prefix{ - 1: {netip.MustParsePrefix("0.0.0.0/0"), netip.MustParsePrefix("::/0")}, - 2: {netip.MustParsePrefix("192.168.100.0/24")}, - 3: nil, - } + // Node 1: user1 has RequestTags for tag:server (authorized) + // Expected: tags = ["tag:server"] + node1 := findNode("node1") + require.NotNil(t, node1, "node1 should exist") + assert.Contains(t, node1.Tags, "tag:server", "node1 should have tag:server migrated from RequestTags") - actualRoutes := map[uint64][]netip.Prefix{ - 1: node1.ApprovedRoutes, - 2: node2.ApprovedRoutes, - 3: node3.ApprovedRoutes, - } + // Node 2: user1 has RequestTags for tag:unauthorized (NOT authorized) + // Expected: tags = [] (unchanged) + node2 := findNode("node2") + require.NotNil(t, node2, "node2 should exist") + assert.Empty(t, node2.Tags, "node2 should have empty tags (unauthorized tag rejected)") - if diff := cmp.Diff(expectedRoutes, actualRoutes, util.PrefixComparer); diff != "" { - t.Errorf("TestSQLiteMigrationAndDataValidation() routes migration mismatch (-want +got):\n%s", diff) - } + // Node 3: user2 has RequestTags for tag:client (authorized) + existing tag:existing + // Expected: tags = ["tag:client", "tag:existing"] + node3 := findNode("node3") + require.NotNil(t, node3, "node3 should exist") + assert.Contains(t, node3.Tags, "tag:client", "node3 should have tag:client migrated from RequestTags") + assert.Contains(t, node3.Tags, "tag:existing", "node3 should preserve existing tag") - // Verify pre_auth_keys data is preserved with specific key data - preAuthKeys, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.PreAuthKey, error) { - var keys []types.PreAuthKey - err := rx.Find(&keys).Error - return keys, err - }) - require.NoError(t, err) - assert.Len(t, preAuthKeys, 2, "should preserve existing pre_auth_keys") + // Node 4: user1 has RequestTags for tag:server which already exists + // Expected: tags = ["tag:server"] (no duplicates) + node4 := findNode("node4") + require.NotNil(t, node4, "node4 should exist") + assert.Equal(t, []string{"tag:server"}, node4.Tags, "node4 should have tag:server without duplicates") - // Validate specific pre_auth_key data from dump file using cmp.Diff - expectedKeys := []types.PreAuthKey{ - { - ID: 1, - Key: "3d133ec953e31fd41edbd935371234f762b4bae300cea618", - UserID: user2.ID, - Reusable: true, - Used: true, - }, - { - ID: 2, - Key: "9813cc1df1832259fb6322dad788bb9bec89d8a01eef683a", - UserID: user1.ID, - Reusable: true, - Used: true, - }, - } + // Node 5: user2 has no RequestTags + // Expected: tags = [] (unchanged) + node5 := findNode("node5") + require.NotNil(t, node5, "node5 should exist") + assert.Empty(t, node5.Tags, "node5 should have empty tags (no RequestTags)") - if diff := cmp.Diff(expectedKeys, preAuthKeys, - 
cmpopts.IgnoreFields(types.PreAuthKey{}, "User", "CreatedAt", "Expiration", "Ephemeral", "Tags")); diff != "" { - t.Errorf("TestSQLiteMigrationAndDataValidation() pre_auth_keys mismatch (-want +got):\n%s", diff) - } + // Node 6: admin1 has RequestTags for tag:admin (authorized via group:admins) + // Expected: tags = ["tag:admin"] + node6 := findNode("node6") + require.NotNil(t, node6, "node6 should exist") + assert.Contains(t, node6.Tags, "tag:admin", "node6 should have tag:admin migrated via group membership") - // Verify api_keys data is preserved with specific key data - var apiKeys []struct { - ID uint64 - Prefix string - Hash []byte - CreatedAt string - Expiration string - LastSeen string - } - err = hsdb.DB.Raw("SELECT id, prefix, hash, created_at, expiration, last_seen FROM api_keys").Scan(&apiKeys).Error - require.NoError(t, err) - assert.Len(t, apiKeys, 1, "should preserve existing api_keys") - - // Validate specific api_key data from dump file using cmp.Diff - expectedAPIKey := struct { - ID uint64 - Prefix string - Hash []byte - }{ - ID: 1, - Prefix: "ak_test", - Hash: []byte{0xde, 0xad, 0xbe, 0xef}, - } - - actualAPIKey := struct { - ID uint64 - Prefix string - Hash []byte - }{ - ID: apiKeys[0].ID, - Prefix: apiKeys[0].Prefix, - Hash: apiKeys[0].Hash, - } - - if diff := cmp.Diff(expectedAPIKey, actualAPIKey); diff != "" { - t.Errorf("TestSQLiteMigrationAndDataValidation() api_key mismatch (-want +got):\n%s", diff) - } - - // Validate date fields separately since they need Contains check - assert.Contains(t, apiKeys[0].CreatedAt, "2025-12-31", "created_at should be preserved") - assert.Contains(t, apiKeys[0].Expiration, "2025-06-18", "expiration should be preserved") - - // Verify that routes table no longer exists (should have been dropped) - var routesTableExists bool - err = hsdb.DB.Raw("SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='routes'").Row().Scan(&routesTableExists) - require.NoError(t, err) - assert.False(t, routesTableExists, "routes table should have been dropped") - - // Verify all required indexes exist with correct structure using cmp.Diff - expectedIndexes := []string{ - "idx_users_deleted_at", - "idx_provider_identifier", - "idx_name_provider_identifier", - "idx_name_no_provider_identifier", - "idx_api_keys_prefix", - "idx_policies_deleted_at", - } - - expectedIndexMap := make(map[string]bool) - for _, index := range expectedIndexes { - expectedIndexMap[index] = true - } - - actualIndexMap := make(map[string]bool) - for _, indexName := range expectedIndexes { - var indexExists bool - err = hsdb.DB.Raw("SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND name=?", indexName).Row().Scan(&indexExists) - require.NoError(t, err) - actualIndexMap[indexName] = indexExists - } - - if diff := cmp.Diff(expectedIndexMap, actualIndexMap); diff != "" { - t.Errorf("TestSQLiteMigrationAndDataValidation() indexes existence mismatch (-want +got):\n%s", diff) - } - - // Verify proper foreign key constraints are set - // Check that pre_auth_keys has correct FK constraint (SET NULL, not CASCADE) - var preAuthKeyConstraint string - err = hsdb.DB.Raw("SELECT sql FROM sqlite_master WHERE type='table' AND name='pre_auth_keys'").Row().Scan(&preAuthKeyConstraint) - require.NoError(t, err) - assert.Contains(t, preAuthKeyConstraint, "ON DELETE SET NULL", "pre_auth_keys should have SET NULL constraint") - assert.NotContains(t, preAuthKeyConstraint, "ON DELETE CASCADE", "pre_auth_keys should not have CASCADE constraint") - - // Verify that user unique constraints work 
properly - // Try to create duplicate local user (should fail) - err = hsdb.DB.Create(&types.User{Name: users[0].Name}).Error - require.Error(t, err, "should not allow duplicate local usernames") - assert.Contains(t, err.Error(), "UNIQUE constraint", "should fail with unique constraint error") + // Node 7: user1 has RequestTags for tag:server (authorized) and tag:forbidden (unauthorized) + // Expected: tags = ["tag:server"] (only authorized tag) + node7 := findNode("node7") + require.NotNil(t, node7, "node7 should exist") + assert.Contains(t, node7.Tags, "tag:server", "node7 should have tag:server migrated") + assert.NotContains(t, node7.Tags, "tag:forbidden", "node7 should NOT have tag:forbidden (unauthorized)") }, }, } @@ -869,25 +308,7 @@ func TestPostgresMigrationAndDataValidation(t *testing.T) { name string dbPath string wantFunc func(*testing.T, *HSDatabase) - }{ - { - name: "user-idx-breaking", - dbPath: "testdata/postgres/pre-24-postgresdb.pssql.dump", - wantFunc: func(t *testing.T, hsdb *HSDatabase) { - t.Helper() - users, err := Read(hsdb.DB, func(rx *gorm.DB) ([]types.User, error) { - return ListUsers(rx) - }) - require.NoError(t, err) - - for _, user := range users { - assert.NotEmpty(t, user.Name) - assert.Empty(t, user.ProfilePicURL) - assert.Empty(t, user.Email) - } - }, - }, - } + }{} for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { @@ -911,7 +332,7 @@ func TestPostgresMigrationAndDataValidation(t *testing.T) { t.Fatalf("failed to restore postgres database: %s", err) } - db = newHeadscaleDBFromPostgresURL(t, u) + db := newHeadscaleDBFromPostgresURL(t, u) if tt.wantFunc != nil { tt.wantFunc(t, db) @@ -944,13 +365,17 @@ func dbForTestWithPath(t *testing.T, sqlFilePath string) *HSDatabase { } db, err := NewHeadscaleDatabase( - types.DatabaseConfig{ - Type: "sqlite3", - Sqlite: types.SqliteConfig{ - Path: dbPath, + &types.Config{ + Database: types.DatabaseConfig{ + Type: "sqlite3", + Sqlite: types.SqliteConfig{ + Path: dbPath, + }, + }, + Policy: types.PolicyConfig{ + Mode: types.PolicyModeDB, }, }, - "", emptyCache(), ) if err != nil { @@ -970,11 +395,10 @@ func dbForTestWithPath(t *testing.T, sqlFilePath string) *HSDatabase { // in the testdata directory. It verifies they can be successfully migrated to the current // schema version. This test only validates migration success, not data integrity. // -// A lot of the schemas have been automatically generated with old Headscale binaries on empty databases -// (no user/node data): -// - `headscale__schema.sql` (created with `sqlite3 headscale.db .schema`) -// - `headscale__dump.sql` (created with `sqlite3 headscale.db .dump`) -// where `_dump.sql` contains the migration steps that have been applied to the database. +// All test database files are SQL dumps (created with `sqlite3 headscale.db .dump`) generated +// with old Headscale binaries on empty databases (no user/node data). These dumps include the +// migration history in the `migrations` table, which allows the migration system to correctly +// skip already-applied migrations and only run new ones. 
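The note above about dump files carrying their own migration history can be made concrete with a small sketch: a hypothetical test helper that reads the recorded migration IDs back out of the `migrations` table, so it is visible which steps the runner will treat as already applied. The table and column names assume the default gormigrate layout; the helper itself is illustrative and not part of this change.

```go
// Illustrative helper (hypothetical, not part of this diff): list the
// migration IDs a restored dump already records. The migration runner only
// executes steps whose IDs are missing from this list.
func listAppliedMigrationIDs(t *testing.T, tx *gorm.DB) []string {
	t.Helper()

	var ids []string
	err := tx.Table("migrations").Order("id").Pluck("id", &ids).Error
	require.NoError(t, err)

	return ids
}
```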
func TestSQLiteAllTestdataMigrations(t *testing.T) { t.Parallel() schemas, err := os.ReadDir("testdata/sqlite") @@ -1000,13 +424,17 @@ func TestSQLiteAllTestdataMigrations(t *testing.T) { require.NoError(t, err) _, err = NewHeadscaleDatabase( - types.DatabaseConfig{ - Type: "sqlite3", - Sqlite: types.SqliteConfig{ - Path: dbPath, + &types.Config{ + Database: types.DatabaseConfig{ + Type: "sqlite3", + Sqlite: types.SqliteConfig{ + Path: dbPath, + }, + }, + Policy: types.PolicyConfig{ + Mode: types.PolicyModeDB, }, }, - "", emptyCache(), ) require.NoError(t, err) diff --git a/hscontrol/db/ephemeral_garbage_collector_test.go b/hscontrol/db/ephemeral_garbage_collector_test.go index b9edad79..d118b7fd 100644 --- a/hscontrol/db/ephemeral_garbage_collector_test.go +++ b/hscontrol/db/ephemeral_garbage_collector_test.go @@ -1,9 +1,9 @@ package db import ( - "math/rand" "runtime" "sync" + "sync/atomic" "testing" "time" @@ -68,31 +68,18 @@ func TestEphemeralGarbageCollectorGoRoutineLeak(t *testing.T) { gc.Cancel(nodeID) } - // Create a channel to signal when we're done with cleanup checks - cleanupDone := make(chan struct{}) + // Close GC + gc.Close() - // Close GC and check for leaks in a separate goroutine - go func() { - // Close GC - gc.Close() - - // Give any potential leaked goroutines a chance to exit - // Still need a small sleep here as we're checking for absence of goroutines - time.Sleep(oneHundred) - - // Check for leaked goroutines + // Wait for goroutines to clean up and verify no leaks + assert.EventuallyWithT(t, func(c *assert.CollectT) { finalGoroutines := runtime.NumGoroutine() - t.Logf("Final number of goroutines: %d", finalGoroutines) - // NB: We have to allow for a small number of extra goroutines because of test itself - assert.LessOrEqual(t, finalGoroutines, initialGoroutines+5, + assert.LessOrEqual(c, finalGoroutines, initialGoroutines+5, "There are significantly more goroutines after GC usage, which suggests a leak") + }, time.Second, 10*time.Millisecond, "goroutines should clean up after GC close") - close(cleanupDone) - }() - - // Wait for cleanup to complete - <-cleanupDone + t.Logf("Final number of goroutines: %d", runtime.NumGoroutine()) } // TestEphemeralGarbageCollectorReschedule is a test for the rescheduling of nodes in EphemeralGarbageCollector(). 
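The ephemeral garbage collector test rewrites below all follow the same idea: the delete callback signals a channel (or bumps a counter) and the test waits on that signal with a timeout, or polls with `assert.EventuallyWithT`, instead of sleeping for a fixed duration. A minimal sketch of the channel variant, assuming the `NewEphemeralGarbageCollector`, `Start`, `Schedule`, and `Close` signatures used in this diff:

```go
// Minimal sketch (not part of this change): wait for a deletion signal with a
// timeout rather than sleeping for a fixed duration.
func TestEphemeralGCNotificationSketch(t *testing.T) {
	deleted := make(chan types.NodeID, 1)

	gc := NewEphemeralGarbageCollector(func(id types.NodeID) {
		deleted <- id
	})
	go gc.Start()
	defer gc.Close()

	gc.Schedule(types.NodeID(1), 50*time.Millisecond)

	select {
	case id := <-deleted:
		assert.Equal(t, types.NodeID(1), id, "the scheduled node should be the one deleted")
	case <-time.After(time.Second):
		t.Fatal("timed out waiting for ephemeral node deletion")
	}
}
```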
@@ -103,10 +90,14 @@ func TestEphemeralGarbageCollectorReschedule(t *testing.T) { var deletedIDs []types.NodeID var deleteMutex sync.Mutex + deletionNotifier := make(chan types.NodeID, 1) + deleteFunc := func(nodeID types.NodeID) { deleteMutex.Lock() deletedIDs = append(deletedIDs, nodeID) deleteMutex.Unlock() + + deletionNotifier <- nodeID } // Start GC @@ -125,10 +116,15 @@ func TestEphemeralGarbageCollectorReschedule(t *testing.T) { // Reschedule the same node with a shorter expiry gc.Schedule(nodeID, shortExpiry) - // Wait for deletion - time.Sleep(shortExpiry * 2) + // Wait for deletion notification with timeout + select { + case deletedNodeID := <-deletionNotifier: + assert.Equal(t, nodeID, deletedNodeID, "The correct node should be deleted") + case <-time.After(time.Second): + t.Fatal("Timed out waiting for node deletion") + } - // Verify that the node was deleted once + // Verify that the node was deleted exactly once deleteMutex.Lock() assert.Len(t, deletedIDs, 1, "Node should be deleted exactly once") assert.Equal(t, nodeID, deletedIDs[0], "The correct node should be deleted") @@ -203,18 +199,24 @@ func TestEphemeralGarbageCollectorCloseBeforeTimerFires(t *testing.T) { var deletedIDs []types.NodeID var deleteMutex sync.Mutex + deletionNotifier := make(chan types.NodeID, 1) + deleteFunc := func(nodeID types.NodeID) { deleteMutex.Lock() deletedIDs = append(deletedIDs, nodeID) deleteMutex.Unlock() + + deletionNotifier <- nodeID } // Start the GC gc := NewEphemeralGarbageCollector(deleteFunc) go gc.Start() - const longExpiry = 1 * time.Hour - const shortExpiry = fifty + const ( + longExpiry = 1 * time.Hour + shortWait = fifty * 2 + ) // Schedule node deletion with a long expiry gc.Schedule(types.NodeID(1), longExpiry) @@ -222,8 +224,13 @@ func TestEphemeralGarbageCollectorCloseBeforeTimerFires(t *testing.T) { // Close the GC before the timer gc.Close() - // Wait a short time - time.Sleep(shortExpiry * 2) + // Verify that no deletion occurred within a reasonable time + select { + case <-deletionNotifier: + t.Fatal("Node was deleted after GC was closed, which should not happen") + case <-time.After(shortWait): + // Expected: no deletion should occur + } // Verify that no deletion occurred deleteMutex.Lock() @@ -265,29 +272,17 @@ func TestEphemeralGarbageCollectorScheduleAfterClose(t *testing.T) { // Close GC right away gc.Close() - // Use a channel to signal when we should check for goroutine count - gcClosedCheck := make(chan struct{}) - go func() { - // Give the GC time to fully close and clean up resources - // This is still time-based but only affects when we check the goroutine count, - // not the actual test logic - time.Sleep(oneHundred) - close(gcClosedCheck) - }() - // Now try to schedule node for deletion with a very short expiry // If the Schedule operation incorrectly creates a timer, it would fire quickly nodeID := types.NodeID(1) gc.Schedule(nodeID, 1*time.Millisecond) - // Set up a timeout channel for our test - timeout := time.After(fiveHundred) - // Check if any node was deleted (which shouldn't happen) + // Use timeout to wait for potential deletion select { case <-nodeDeleted: t.Fatal("Node was deleted after GC was closed, which should not happen") - case <-timeout: + case <-time.After(fiveHundred): // This is the expected path - no deletion should occur } @@ -298,13 +293,14 @@ func TestEphemeralGarbageCollectorScheduleAfterClose(t *testing.T) { assert.Equal(t, 0, nodesDeleted, "No nodes should be deleted when Schedule is called after Close") // Check for goroutine 
leaks after GC is fully closed - <-gcClosedCheck - finalGoroutines := runtime.NumGoroutine() - t.Logf("Final number of goroutines: %d", finalGoroutines) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + finalGoroutines := runtime.NumGoroutine() + // Allow for small fluctuations in goroutine count for testing routines etc + assert.LessOrEqual(c, finalGoroutines, initialGoroutines+2, + "There should be no significant goroutine leaks when Schedule is called after Close") + }, time.Second, 10*time.Millisecond, "goroutines should clean up after GC close") - // Allow for small fluctuations in goroutine count for testing routines etc - assert.LessOrEqual(t, finalGoroutines, initialGoroutines+2, - "There should be no significant goroutine leaks when Schedule is called after Close") + t.Logf("Final number of goroutines: %d", runtime.NumGoroutine()) } // TestEphemeralGarbageCollectorConcurrentScheduleAndClose tests the behavior of the garbage collector @@ -331,7 +327,8 @@ func TestEphemeralGarbageCollectorConcurrentScheduleAndClose(t *testing.T) { // Number of concurrent scheduling goroutines const numSchedulers = 10 const nodesPerScheduler = 50 - const schedulingDuration = fiveHundred + + const closeAfterNodes = 25 // Close GC after this many nodes per scheduler // Use WaitGroup to wait for all scheduling goroutines to finish var wg sync.WaitGroup @@ -340,6 +337,9 @@ func TestEphemeralGarbageCollectorConcurrentScheduleAndClose(t *testing.T) { // Create a stopper channel to signal scheduling goroutines to stop stopScheduling := make(chan struct{}) + // Track how many nodes have been scheduled + var scheduledCount int64 + // Launch goroutines that continuously schedule nodes for schedulerIndex := range numSchedulers { go func(schedulerID int) { @@ -355,18 +355,23 @@ func TestEphemeralGarbageCollectorConcurrentScheduleAndClose(t *testing.T) { default: nodeID := types.NodeID(baseNodeID + j + 1) gc.Schedule(nodeID, 1*time.Hour) // Long expiry to ensure it doesn't trigger during test + atomic.AddInt64(&scheduledCount, 1) - // Random (short) sleep to introduce randomness/variability - time.Sleep(time.Duration(rand.Intn(5)) * time.Millisecond) + // Yield to other goroutines to introduce variability + runtime.Gosched() } } }(schedulerIndex) } - // After a short delay, close the garbage collector while schedulers are still running + // Close the garbage collector after some nodes have been scheduled go func() { defer wg.Done() - time.Sleep(schedulingDuration / 2) + + // Wait until enough nodes have been scheduled + for atomic.LoadInt64(&scheduledCount) < int64(numSchedulers*closeAfterNodes) { + runtime.Gosched() + } // Close GC gc.Close() @@ -378,14 +383,13 @@ func TestEphemeralGarbageCollectorConcurrentScheduleAndClose(t *testing.T) { // Wait for all goroutines to complete wg.Wait() - // Wait a bit longer to allow any leaked goroutines to do their work - time.Sleep(oneHundred) + // Check for leaks using EventuallyWithT + assert.EventuallyWithT(t, func(c *assert.CollectT) { + finalGoroutines := runtime.NumGoroutine() + // Allow for a reasonable small variable routine count due to testing + assert.LessOrEqual(c, finalGoroutines, initialGoroutines+5, + "There should be no significant goroutine leaks during concurrent Schedule and Close operations") + }, time.Second, 10*time.Millisecond, "goroutines should clean up") - // Check for leaks - finalGoroutines := runtime.NumGoroutine() - t.Logf("Final number of goroutines: %d", finalGoroutines) - - // Allow for a reasonable small variable routine count due to 
testing - assert.LessOrEqual(t, finalGoroutines, initialGoroutines+5, - "There should be no significant goroutine leaks during concurrent Schedule and Close operations") + t.Logf("Final number of goroutines: %d", runtime.NumGoroutine()) } diff --git a/hscontrol/db/ip.go b/hscontrol/db/ip.go index 244bb3db..972d8e72 100644 --- a/hscontrol/db/ip.go +++ b/hscontrol/db/ip.go @@ -341,3 +341,12 @@ func (db *HSDatabase) BackfillNodeIPs(i *IPAllocator) ([]string, error) { return ret, err } + +func (i *IPAllocator) FreeIPs(ips []netip.Addr) { + i.mu.Lock() + defer i.mu.Unlock() + + for _, ip := range ips { + i.usedIPs.Remove(ip) + } +} diff --git a/hscontrol/db/ip_test.go b/hscontrol/db/ip_test.go index f558cdf7..7ba335e8 100644 --- a/hscontrol/db/ip_test.go +++ b/hscontrol/db/ip_test.go @@ -6,7 +6,6 @@ import ( "strings" "testing" - "github.com/davecgh/go-spew/spew" "github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp/cmpopts" "github.com/juanfont/headscale/hscontrol/types" @@ -96,7 +95,7 @@ func TestIPAllocatorSequential(t *testing.T) { db.DB.Save(&user) db.DB.Save(&types.Node{ - User: user, + User: &user, IPv4: nap("100.64.0.1"), IPv6: nap("fd7a:115c:a1e0::1"), }) @@ -124,7 +123,7 @@ func TestIPAllocatorSequential(t *testing.T) { db.DB.Save(&user) db.DB.Save(&types.Node{ - User: user, + User: &user, IPv4: nap("100.64.0.2"), IPv6: nap("fd7a:115c:a1e0::2"), }) @@ -159,8 +158,6 @@ func TestIPAllocatorSequential(t *testing.T) { types.IPAllocationStrategySequential, ) - spew.Dump(alloc) - var got4s []netip.Addr var got6s []netip.Addr @@ -263,8 +260,6 @@ func TestIPAllocatorRandom(t *testing.T) { alloc, _ := NewIPAllocator(db, tt.prefix4, tt.prefix6, types.IPAllocationStrategyRandom) - spew.Dump(alloc) - for range tt.getCount { got4, got6, err := alloc.Next() if err != nil { @@ -314,7 +309,7 @@ func TestBackfillIPAddresses(t *testing.T) { db.DB.Save(&user) db.DB.Save(&types.Node{ - User: user, + User: &user, IPv4: nap("100.64.0.1"), }) @@ -339,7 +334,7 @@ func TestBackfillIPAddresses(t *testing.T) { db.DB.Save(&user) db.DB.Save(&types.Node{ - User: user, + User: &user, IPv6: nap("fd7a:115c:a1e0::1"), }) @@ -364,7 +359,7 @@ func TestBackfillIPAddresses(t *testing.T) { db.DB.Save(&user) db.DB.Save(&types.Node{ - User: user, + User: &user, IPv4: nap("100.64.0.1"), IPv6: nap("fd7a:115c:a1e0::1"), }) @@ -388,7 +383,7 @@ func TestBackfillIPAddresses(t *testing.T) { db.DB.Save(&user) db.DB.Save(&types.Node{ - User: user, + User: &user, IPv4: nap("100.64.0.1"), IPv6: nap("fd7a:115c:a1e0::1"), }) @@ -412,19 +407,19 @@ func TestBackfillIPAddresses(t *testing.T) { db.DB.Save(&user) db.DB.Save(&types.Node{ - User: user, + User: &user, IPv4: nap("100.64.0.1"), }) db.DB.Save(&types.Node{ - User: user, + User: &user, IPv4: nap("100.64.0.2"), }) db.DB.Save(&types.Node{ - User: user, + User: &user, IPv4: nap("100.64.0.3"), }) db.DB.Save(&types.Node{ - User: user, + User: &user, IPv4: nap("100.64.0.4"), }) diff --git a/hscontrol/db/node.go b/hscontrol/db/node.go index 060196a9..bf407bb4 100644 --- a/hscontrol/db/node.go +++ b/hscontrol/db/node.go @@ -196,8 +196,9 @@ func SetTags( tags []string, ) error { if len(tags) == 0 { - // if no tags are provided, we remove all forced tags - if err := tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("forced_tags", "[]").Error; err != nil { + // if no tags are provided, we remove all tags + err := tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("tags", "[]").Error + if err != nil { return fmt.Errorf("removing tags: %w", err) } @@ -211,7 +212,8 @@ func SetTags( 
return err } - if err := tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("forced_tags", string(b)).Error; err != nil { + err = tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("tags", string(b)).Error + if err != nil { return fmt.Errorf("updating tags: %w", err) } @@ -349,12 +351,20 @@ func RegisterNodeForTest(tx *gorm.DB, node types.Node, ipv4 *netip.Addr, ipv6 *n panic("RegisterNodeForTest can only be called during tests") } - log.Debug(). + logEvent := log.Debug(). Str("node", node.Hostname). Str("machine_key", node.MachineKey.ShortString()). - Str("node_key", node.NodeKey.ShortString()). - Str("user", node.User.Username()). - Msg("Registering test node") + Str("node_key", node.NodeKey.ShortString()) + + if node.User != nil { + logEvent = logEvent.Str("user", node.User.Username()) + } else if node.UserID != nil { + logEvent = logEvent.Uint("user_id", *node.UserID) + } else { + logEvent = logEvent.Str("user", "none") + } + + logEvent.Msg("Registering test node") // If the a new node is registered with the same machine key, to the same user, // update the existing node. @@ -642,7 +652,7 @@ func (hsdb *HSDatabase) CreateNodeForTest(user *types.User, hostname ...string) } // Create a preauth key for the node - pak, err := hsdb.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil) + pak, err := hsdb.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) if err != nil { panic(fmt.Sprintf("failed to create preauth key for test node: %v", err)) } @@ -656,7 +666,7 @@ func (hsdb *HSDatabase) CreateNodeForTest(user *types.User, hostname ...string) NodeKey: nodeKey.Public(), DiscoKey: discoKey.Public(), Hostname: nodeName, - UserID: user.ID, + UserID: &user.ID, RegisterMethod: util.RegisterMethodAuthKey, AuthKeyID: ptr.To(pak.ID), } @@ -749,15 +759,25 @@ func (hsdb *HSDatabase) allocateTestIPs(nodeID types.NodeID) (*netip.Addr, *neti } // Use simple sequential allocation for tests - // IPv4: 100.64.0.x (where x is nodeID) - // IPv6: fd7a:115c:a1e0::x (where x is nodeID) + // IPv4: 100.64.x.y (where x = nodeID/256, y = nodeID%256) + // IPv6: fd7a:115c:a1e0::x:y (where x = high byte, y = low byte) + // This supports up to 65535 nodes + const ( + maxTestNodes = 65535 + ipv4ByteDivisor = 256 + ) - if nodeID > 254 { - return nil, nil, fmt.Errorf("test node ID %d too large for simple IP allocation", nodeID) + if nodeID > maxTestNodes { + return nil, nil, ErrCouldNotAllocateIP } - ipv4 := netip.AddrFrom4([4]byte{100, 64, 0, byte(nodeID)}) - ipv6 := netip.AddrFrom16([16]byte{0xfd, 0x7a, 0x11, 0x5c, 0xa1, 0xe0, 0, 0, 0, 0, 0, 0, 0, 0, 0, byte(nodeID)}) + // Split nodeID into high and low bytes for IPv4 (100.64.high.low) + highByte := byte(nodeID / ipv4ByteDivisor) + lowByte := byte(nodeID % ipv4ByteDivisor) + ipv4 := netip.AddrFrom4([4]byte{100, 64, highByte, lowByte}) + + // For IPv6, use the last two bytes of the address (fd7a:115c:a1e0::high:low) + ipv6 := netip.AddrFrom16([16]byte{0xfd, 0x7a, 0x11, 0x5c, 0xa1, 0xe0, 0, 0, 0, 0, 0, 0, 0, 0, highByte, lowByte}) return &ipv4, &ipv6, nil } diff --git a/hscontrol/db/node_test.go b/hscontrol/db/node_test.go index 0efd0e8b..7e00f9ca 100644 --- a/hscontrol/db/node_test.go +++ b/hscontrol/db/node_test.go @@ -6,7 +6,9 @@ import ( "math/big" "net/netip" "regexp" + "runtime" "sync" + "sync/atomic" "testing" "time" @@ -16,7 +18,6 @@ import ( "github.com/juanfont/headscale/hscontrol/util" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" - "gopkg.in/check.v1" "gorm.io/gorm" "tailscale.com/net/tsaddr" 
"tailscale.com/tailcfg" @@ -24,70 +25,85 @@ import ( "tailscale.com/types/ptr" ) -func (s *Suite) TestGetNode(c *check.C) { +func TestGetNode(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + user := db.CreateUserForTest("test") - _, err := db.getNode(types.UserID(user.ID), "testnode") - c.Assert(err, check.NotNil) + _, err = db.getNode(types.UserID(user.ID), "testnode") + require.Error(t, err) node := db.CreateNodeForTest(user, "testnode") _, err = db.getNode(types.UserID(user.ID), "testnode") - c.Assert(err, check.IsNil) - c.Assert(node.Hostname, check.Equals, "testnode") + require.NoError(t, err) + assert.Equal(t, "testnode", node.Hostname) } -func (s *Suite) TestGetNodeByID(c *check.C) { +func TestGetNodeByID(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + user := db.CreateUserForTest("test") - _, err := db.GetNodeByID(0) - c.Assert(err, check.NotNil) + _, err = db.GetNodeByID(0) + require.Error(t, err) node := db.CreateNodeForTest(user, "testnode") retrievedNode, err := db.GetNodeByID(node.ID) - c.Assert(err, check.IsNil) - c.Assert(retrievedNode.Hostname, check.Equals, "testnode") + require.NoError(t, err) + assert.Equal(t, "testnode", retrievedNode.Hostname) } -func (s *Suite) TestHardDeleteNode(c *check.C) { +func TestHardDeleteNode(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + user := db.CreateUserForTest("test") node := db.CreateNodeForTest(user, "testnode3") - err := db.DeleteNode(node) - c.Assert(err, check.IsNil) + err = db.DeleteNode(node) + require.NoError(t, err) _, err = db.getNode(types.UserID(user.ID), "testnode3") - c.Assert(err, check.NotNil) + require.Error(t, err) } -func (s *Suite) TestListPeers(c *check.C) { +func TestListPeersManyNodes(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + user := db.CreateUserForTest("test") - _, err := db.GetNodeByID(0) - c.Assert(err, check.NotNil) + _, err = db.GetNodeByID(0) + require.Error(t, err) nodes := db.CreateNodesForTest(user, 11, "testnode") firstNode := nodes[0] peersOfFirstNode, err := db.ListPeers(firstNode.ID) - c.Assert(err, check.IsNil) + require.NoError(t, err) - c.Assert(len(peersOfFirstNode), check.Equals, 10) - c.Assert(peersOfFirstNode[0].Hostname, check.Equals, "testnode-1") - c.Assert(peersOfFirstNode[5].Hostname, check.Equals, "testnode-6") - c.Assert(peersOfFirstNode[9].Hostname, check.Equals, "testnode-10") + assert.Len(t, peersOfFirstNode, 10) + assert.Equal(t, "testnode-1", peersOfFirstNode[0].Hostname) + assert.Equal(t, "testnode-6", peersOfFirstNode[5].Hostname) + assert.Equal(t, "testnode-10", peersOfFirstNode[9].Hostname) } -func (s *Suite) TestExpireNode(c *check.C) { - user, err := db.CreateUser(types.User{Name: "test"}) - c.Assert(err, check.IsNil) +func TestExpireNode(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) - pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil) - c.Assert(err, check.IsNil) + user, err := db.CreateUser(types.User{Name: "test"}) + require.NoError(t, err) + + pak, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) + require.NoError(t, err) _, err = db.getNode(types.UserID(user.ID), "testnode") - c.Assert(err, check.NotNil) + require.Error(t, err) nodeKey := key.NewNode() machineKey := key.NewMachine() @@ -97,7 +113,7 @@ func (s *Suite) TestExpireNode(c *check.C) { MachineKey: machineKey.Public(), NodeKey: nodeKey.Public(), Hostname: "testnode", - UserID: user.ID, + UserID: &user.ID, RegisterMethod: 
util.RegisterMethodAuthKey, AuthKeyID: ptr.To(pak.ID), Expiry: &time.Time{}, @@ -105,30 +121,33 @@ func (s *Suite) TestExpireNode(c *check.C) { db.DB.Save(node) nodeFromDB, err := db.getNode(types.UserID(user.ID), "testnode") - c.Assert(err, check.IsNil) - c.Assert(nodeFromDB, check.NotNil) + require.NoError(t, err) + require.NotNil(t, nodeFromDB) - c.Assert(nodeFromDB.IsExpired(), check.Equals, false) + assert.False(t, nodeFromDB.IsExpired()) now := time.Now() err = db.NodeSetExpiry(nodeFromDB.ID, now) - c.Assert(err, check.IsNil) + require.NoError(t, err) nodeFromDB, err = db.getNode(types.UserID(user.ID), "testnode") - c.Assert(err, check.IsNil) + require.NoError(t, err) - c.Assert(nodeFromDB.IsExpired(), check.Equals, true) + assert.True(t, nodeFromDB.IsExpired()) } -func (s *Suite) TestSetTags(c *check.C) { - user, err := db.CreateUser(types.User{Name: "test"}) - c.Assert(err, check.IsNil) +func TestSetTags(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) - pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil) - c.Assert(err, check.IsNil) + user, err := db.CreateUser(types.User{Name: "test"}) + require.NoError(t, err) + + pak, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) + require.NoError(t, err) _, err = db.getNode(types.UserID(user.ID), "testnode") - c.Assert(err, check.NotNil) + require.Error(t, err) nodeKey := key.NewNode() machineKey := key.NewMachine() @@ -138,40 +157,29 @@ func (s *Suite) TestSetTags(c *check.C) { MachineKey: machineKey.Public(), NodeKey: nodeKey.Public(), Hostname: "testnode", - UserID: user.ID, + UserID: &user.ID, RegisterMethod: util.RegisterMethodAuthKey, AuthKeyID: ptr.To(pak.ID), } trx := db.DB.Save(node) - c.Assert(trx.Error, check.IsNil) + require.NoError(t, trx.Error) // assign simple tags sTags := []string{"tag:test", "tag:foo"} err = db.SetTags(node.ID, sTags) - c.Assert(err, check.IsNil) + require.NoError(t, err) node, err = db.getNode(types.UserID(user.ID), "testnode") - c.Assert(err, check.IsNil) - c.Assert(node.ForcedTags, check.DeepEquals, sTags) + require.NoError(t, err) + assert.Equal(t, sTags, node.Tags) // assign duplicate tags, expect no errors but no doubles in DB eTags := []string{"tag:bar", "tag:test", "tag:unknown", "tag:test"} err = db.SetTags(node.ID, eTags) - c.Assert(err, check.IsNil) + require.NoError(t, err) node, err = db.getNode(types.UserID(user.ID), "testnode") - c.Assert(err, check.IsNil) - c.Assert( - node.ForcedTags, - check.DeepEquals, - []string{"tag:bar", "tag:test", "tag:unknown"}, - ) - - // test removing tags - err = db.SetTags(node.ID, []string{}) - c.Assert(err, check.IsNil) - node, err = db.getNode(types.UserID(user.ID), "testnode") - c.Assert(err, check.IsNil) - c.Assert(node.ForcedTags, check.DeepEquals, []string{}) + require.NoError(t, err) + assert.Equal(t, []string{"tag:bar", "tag:test", "tag:unknown"}, node.Tags) } func TestHeadscale_generateGivenName(t *testing.T) { @@ -430,7 +438,7 @@ func TestAutoApproveRoutes(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "testnode", - UserID: user.ID, + UserID: &user.ID, RegisterMethod: util.RegisterMethodAuthKey, Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: tt.routes, @@ -446,13 +454,13 @@ func TestAutoApproveRoutes(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "taggednode", - UserID: taggedUser.ID, + UserID: &taggedUser.ID, RegisterMethod: util.RegisterMethodAuthKey, Hostinfo: &tailcfg.Hostinfo{ 
RoutableIPs: tt.routes, }, - ForcedTags: []string{"tag:exit"}, - IPv4: ptr.To(netip.MustParseAddr("100.64.0.2")), + Tags: []string{"tag:exit"}, + IPv4: ptr.To(netip.MustParseAddr("100.64.0.2")), } err = adb.DB.Save(&nodeTagged).Error @@ -514,23 +522,48 @@ func TestEphemeralGarbageCollectorOrder(t *testing.T) { got := []types.NodeID{} var mu sync.Mutex + deletionCount := make(chan struct{}, 10) + e := NewEphemeralGarbageCollector(func(ni types.NodeID) { mu.Lock() defer mu.Unlock() got = append(got, ni) + + deletionCount <- struct{}{} }) go e.Start() - go e.Schedule(1, 1*time.Second) - go e.Schedule(2, 2*time.Second) - go e.Schedule(3, 3*time.Second) - go e.Schedule(4, 4*time.Second) + // Use shorter timeouts for faster tests + go e.Schedule(1, 50*time.Millisecond) + go e.Schedule(2, 100*time.Millisecond) + go e.Schedule(3, 150*time.Millisecond) + go e.Schedule(4, 200*time.Millisecond) - time.Sleep(time.Second) + // Wait for first deletion (node 1 at 50ms) + select { + case <-deletionCount: + case <-time.After(time.Second): + t.Fatal("timeout waiting for first deletion") + } + + // Cancel nodes 2 and 4 go e.Cancel(2) go e.Cancel(4) - time.Sleep(6 * time.Second) + // Wait for node 3 to be deleted (at 150ms) + select { + case <-deletionCount: + case <-time.After(time.Second): + t.Fatal("timeout waiting for second deletion") + } + + // Give a bit more time for any unexpected deletions + select { + case <-deletionCount: + // Unexpected - more deletions than expected + case <-time.After(300 * time.Millisecond): + // Expected - no more deletions + } e.Close() @@ -548,20 +581,30 @@ func TestEphemeralGarbageCollectorLoads(t *testing.T) { want := 1000 + var deletedCount int64 + e := NewEphemeralGarbageCollector(func(ni types.NodeID) { mu.Lock() defer mu.Unlock() - time.Sleep(time.Duration(generateRandomNumber(t, 3)) * time.Millisecond) + // Yield to other goroutines to introduce variability + runtime.Gosched() got = append(got, ni) + + atomic.AddInt64(&deletedCount, 1) }) go e.Start() + // Use shorter expiry for faster tests for i := range want { - go e.Schedule(types.NodeID(i), 1*time.Second) + go e.Schedule(types.NodeID(i), 100*time.Millisecond) //nolint:gosec // test code, no overflow risk } - time.Sleep(10 * time.Second) + // Wait for all deletions to complete + assert.EventuallyWithT(t, func(c *assert.CollectT) { + count := atomic.LoadInt64(&deletedCount) + assert.Equal(c, int64(want), count, "all nodes should be deleted") + }, 10*time.Second, 50*time.Millisecond, "waiting for all deletions") e.Close() @@ -593,10 +636,10 @@ func TestListEphemeralNodes(t *testing.T) { user, err := db.CreateUser(types.User{Name: "test"}) require.NoError(t, err) - pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil) + pak, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) require.NoError(t, err) - pakEph, err := db.CreatePreAuthKey(types.UserID(user.ID), false, true, nil, nil) + pakEph, err := db.CreatePreAuthKey(user.TypedID(), false, true, nil, nil) require.NoError(t, err) node := types.Node{ @@ -604,7 +647,7 @@ func TestListEphemeralNodes(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "test", - UserID: user.ID, + UserID: &user.ID, RegisterMethod: util.RegisterMethodAuthKey, AuthKeyID: ptr.To(pak.ID), } @@ -614,7 +657,7 @@ func TestListEphemeralNodes(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "ephemeral", - UserID: user.ID, + UserID: &user.ID, RegisterMethod: 
util.RegisterMethodAuthKey, AuthKeyID: ptr.To(pakEph.ID), } @@ -657,7 +700,7 @@ func TestNodeNaming(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "test", - UserID: user.ID, + UserID: &user.ID, RegisterMethod: util.RegisterMethodAuthKey, Hostinfo: &tailcfg.Hostinfo{}, } @@ -667,7 +710,7 @@ func TestNodeNaming(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "test", - UserID: user2.ID, + UserID: &user2.ID, RegisterMethod: util.RegisterMethodAuthKey, Hostinfo: &tailcfg.Hostinfo{}, } @@ -680,7 +723,7 @@ func TestNodeNaming(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "我的电脑", - UserID: user2.ID, + UserID: &user2.ID, RegisterMethod: util.RegisterMethodAuthKey, } @@ -688,7 +731,7 @@ func TestNodeNaming(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "a", - UserID: user2.ID, + UserID: &user2.ID, RegisterMethod: util.RegisterMethodAuthKey, } @@ -808,7 +851,7 @@ func TestRenameNodeComprehensive(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "testnode", - UserID: user.ID, + UserID: &user.ID, RegisterMethod: util.RegisterMethodAuthKey, Hostinfo: &tailcfg.Hostinfo{}, } @@ -931,7 +974,7 @@ func TestListPeers(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "test1", - UserID: user.ID, + UserID: &user.ID, RegisterMethod: util.RegisterMethodAuthKey, Hostinfo: &tailcfg.Hostinfo{}, } @@ -941,7 +984,7 @@ func TestListPeers(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "test2", - UserID: user2.ID, + UserID: &user2.ID, RegisterMethod: util.RegisterMethodAuthKey, Hostinfo: &tailcfg.Hostinfo{}, } @@ -1016,7 +1059,7 @@ func TestListNodes(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "test1", - UserID: user.ID, + UserID: &user.ID, RegisterMethod: util.RegisterMethodAuthKey, Hostinfo: &tailcfg.Hostinfo{}, } @@ -1026,7 +1069,7 @@ func TestListNodes(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "test2", - UserID: user2.ID, + UserID: &user2.ID, RegisterMethod: util.RegisterMethodAuthKey, Hostinfo: &tailcfg.Hostinfo{}, } diff --git a/hscontrol/db/policy.go b/hscontrol/db/policy.go index 49b419b5..bdc8af41 100644 --- a/hscontrol/db/policy.go +++ b/hscontrol/db/policy.go @@ -2,8 +2,10 @@ package db import ( "errors" + "os" "github.com/juanfont/headscale/hscontrol/types" + "github.com/juanfont/headscale/hscontrol/util" "gorm.io/gorm" "gorm.io/gorm/clause" ) @@ -24,14 +26,22 @@ func (hsdb *HSDatabase) SetPolicy(policy string) (*types.Policy, error) { // GetPolicy returns the latest policy in the database. func (hsdb *HSDatabase) GetPolicy() (*types.Policy, error) { + return GetPolicy(hsdb.DB) +} + +// GetPolicy returns the latest policy from the database. +// This standalone function can be used in contexts where HSDatabase is not available, +// such as during migrations. +func GetPolicy(tx *gorm.DB) (*types.Policy, error) { var p types.Policy // Query: // SELECT * FROM policies ORDER BY id DESC LIMIT 1; - if err := hsdb.DB. + err := tx. Order("id DESC"). Limit(1). 
- First(&p).Error; err != nil { + First(&p).Error + if err != nil { if errors.Is(err, gorm.ErrRecordNotFound) { return nil, types.ErrPolicyNotFound } @@ -41,3 +51,41 @@ func (hsdb *HSDatabase) GetPolicy() (*types.Policy, error) { return &p, nil } + +// PolicyBytes loads policy configuration from file or database based on the configured mode. +// Returns nil if no policy is configured, which is valid. +// This standalone function can be used in contexts where HSDatabase is not available, +// such as during migrations. +func PolicyBytes(tx *gorm.DB, cfg *types.Config) ([]byte, error) { + switch cfg.Policy.Mode { + case types.PolicyModeFile: + path := cfg.Policy.Path + + // It is fine to start headscale without a policy file. + if len(path) == 0 { + return nil, nil + } + + absPath := util.AbsolutePathFromConfigPath(path) + + return os.ReadFile(absPath) + + case types.PolicyModeDB: + p, err := GetPolicy(tx) + if err != nil { + if errors.Is(err, types.ErrPolicyNotFound) { + return nil, nil + } + + return nil, err + } + + if p.Data == "" { + return nil, nil + } + + return []byte(p.Data), nil + } + + return nil, nil +} diff --git a/hscontrol/db/preauth_keys.go b/hscontrol/db/preauth_keys.go index 94575269..c5904353 100644 --- a/hscontrol/db/preauth_keys.go +++ b/hscontrol/db/preauth_keys.go @@ -1,8 +1,6 @@ package db import ( - "crypto/rand" - "encoding/hex" "errors" "fmt" "slices" @@ -10,42 +8,69 @@ import ( "time" "github.com/juanfont/headscale/hscontrol/types" + "github.com/juanfont/headscale/hscontrol/util" + "golang.org/x/crypto/bcrypt" "gorm.io/gorm" "tailscale.com/util/set" ) var ( - ErrPreAuthKeyNotFound = errors.New("AuthKey not found") - ErrPreAuthKeyExpired = errors.New("AuthKey expired") - ErrSingleUseAuthKeyHasBeenUsed = errors.New("AuthKey has already been used") + ErrPreAuthKeyNotFound = errors.New("auth-key not found") + ErrPreAuthKeyExpired = errors.New("auth-key expired") + ErrSingleUseAuthKeyHasBeenUsed = errors.New("auth-key has already been used") ErrUserMismatch = errors.New("user mismatch") - ErrPreAuthKeyACLTagInvalid = errors.New("AuthKey tag is invalid") + ErrPreAuthKeyACLTagInvalid = errors.New("auth-key tag is invalid") ) func (hsdb *HSDatabase) CreatePreAuthKey( - uid types.UserID, + uid *types.UserID, reusable bool, ephemeral bool, expiration *time.Time, aclTags []string, -) (*types.PreAuthKey, error) { - return Write(hsdb.DB, func(tx *gorm.DB) (*types.PreAuthKey, error) { +) (*types.PreAuthKeyNew, error) { + return Write(hsdb.DB, func(tx *gorm.DB) (*types.PreAuthKeyNew, error) { return CreatePreAuthKey(tx, uid, reusable, ephemeral, expiration, aclTags) }) } +const ( + authKeyPrefix = "hskey-auth-" + authKeyPrefixLength = 12 + authKeyLength = 64 +) + // CreatePreAuthKey creates a new PreAuthKey in a user, and returns it. +// The uid parameter can be nil for system-created tagged keys. +// For tagged keys, uid tracks "created by" (who created the key). +// For user-owned keys, uid tracks the node owner. 
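The ownership rules in the comment above can be summarised with a short, hypothetical sketch. It assumes the `CreatePreAuthKey` signature introduced in this diff and `user.TypedID()` returning a `*types.UserID`; the tag name is arbitrary and only used for illustration.

```go
// Hypothetical sketch (not part of this diff) of the two supported ownership
// modes: user-owned (uid set) and tagged (uid may be nil, tags required).
func createExampleKeys(db *HSDatabase, user *types.User) error {
	// User-owned key: a user ID is supplied; tags are optional.
	if _, err := db.CreatePreAuthKey(user.TypedID(), true, false, nil, nil); err != nil {
		return fmt.Errorf("user-owned key: %w", err)
	}

	// Tagged key: the user ID may be nil as long as at least one tag is given.
	if _, err := db.CreatePreAuthKey(nil, false, false, nil, []string{"tag:server"}); err != nil {
		return fmt.Errorf("tagged key: %w", err)
	}

	// A key that is neither tagged nor user-owned is rejected.
	_, err := db.CreatePreAuthKey(nil, false, false, nil, nil)
	if !errors.Is(err, ErrPreAuthKeyNotTaggedOrOwned) {
		return fmt.Errorf("expected ErrPreAuthKeyNotTaggedOrOwned, got: %v", err)
	}

	return nil
}
```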
func CreatePreAuthKey( tx *gorm.DB, - uid types.UserID, + uid *types.UserID, reusable bool, ephemeral bool, expiration *time.Time, aclTags []string, -) (*types.PreAuthKey, error) { - user, err := GetUserByID(tx, uid) - if err != nil { - return nil, err +) (*types.PreAuthKeyNew, error) { + // Validate: must be tagged OR user-owned, not neither + if uid == nil && len(aclTags) == 0 { + return nil, ErrPreAuthKeyNotTaggedOrOwned + } + + var ( + user *types.User + userID *uint + ) + + if uid != nil { + var err error + + user, err = GetUserByID(tx, *uid) + if err != nil { + return nil, err + } + + userID = &user.ID } // Remove duplicates and sort for consistency @@ -65,49 +90,190 @@ func CreatePreAuthKey( } now := time.Now().UTC() - // TODO(kradalby): unify the key generations spread all over the code. - kstr, err := generateKey() + + prefix, err := util.GenerateRandomStringURLSafe(authKeyPrefixLength) + if err != nil { + return nil, err + } + + // Validate generated prefix (should always be valid, but be defensive) + if len(prefix) != authKeyPrefixLength { + return nil, fmt.Errorf("%w: generated prefix has invalid length: expected %d, got %d", ErrPreAuthKeyFailedToParse, authKeyPrefixLength, len(prefix)) + } + + if !isValidBase64URLSafe(prefix) { + return nil, fmt.Errorf("%w: generated prefix contains invalid characters", ErrPreAuthKeyFailedToParse) + } + + toBeHashed, err := util.GenerateRandomStringURLSafe(authKeyLength) + if err != nil { + return nil, err + } + + // Validate generated hash (should always be valid, but be defensive) + if len(toBeHashed) != authKeyLength { + return nil, fmt.Errorf("%w: generated hash has invalid length: expected %d, got %d", ErrPreAuthKeyFailedToParse, authKeyLength, len(toBeHashed)) + } + + if !isValidBase64URLSafe(toBeHashed) { + return nil, fmt.Errorf("%w: generated hash contains invalid characters", ErrPreAuthKeyFailedToParse) + } + + keyStr := authKeyPrefix + prefix + "-" + toBeHashed + + hash, err := bcrypt.GenerateFromPassword([]byte(toBeHashed), bcrypt.DefaultCost) if err != nil { return nil, err } key := types.PreAuthKey{ - Key: kstr, - UserID: user.ID, - User: *user, + UserID: userID, // nil for system-created keys, or "created by" for tagged keys + User: user, // nil for system-created keys Reusable: reusable, Ephemeral: ephemeral, CreatedAt: &now, Expiration: expiration, - Tags: aclTags, + Tags: aclTags, // empty for user-owned keys + Prefix: prefix, // Store prefix + Hash: hash, // Store hash } if err := tx.Save(&key).Error; err != nil { return nil, fmt.Errorf("failed to create key in the database: %w", err) } - return &key, nil + return &types.PreAuthKeyNew{ + ID: key.ID, + Key: keyStr, + Reusable: key.Reusable, + Ephemeral: key.Ephemeral, + Tags: key.Tags, + Expiration: key.Expiration, + CreatedAt: key.CreatedAt, + User: key.User, + }, nil } -func (hsdb *HSDatabase) ListPreAuthKeys(uid types.UserID) ([]types.PreAuthKey, error) { +func (hsdb *HSDatabase) ListPreAuthKeys() ([]types.PreAuthKey, error) { return Read(hsdb.DB, func(rx *gorm.DB) ([]types.PreAuthKey, error) { - return ListPreAuthKeysByUser(rx, uid) + return ListPreAuthKeys(rx) }) } -// ListPreAuthKeysByUser returns the list of PreAuthKeys for a user. -func ListPreAuthKeysByUser(tx *gorm.DB, uid types.UserID) ([]types.PreAuthKey, error) { - user, err := GetUserByID(tx, uid) +// ListPreAuthKeys returns all PreAuthKeys in the database. 
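The lookup path in `findAuthKey` below depends on the key layout established here: `hskey-auth-`, then a 12-character clear-text prefix, a `-` separator, and a 64-character secret of which only a bcrypt hash is stored. Because both parts are base64 URL-safe and may themselves contain `-`, the split has to use fixed offsets rather than the separator. A minimal sketch of that split, reusing the constant values from this diff but not the actual implementation:

```go
// Sketch only: split the new key format by fixed offsets. The values mirror
// authKeyPrefix, authKeyPrefixLength and authKeyLength from this diff.
func splitAuthKeySketch(keyStr string) (prefix, secret string, ok bool) {
	const (
		marker    = "hskey-auth-"
		prefixLen = 12
		secretLen = 64
	)

	rest, found := strings.CutPrefix(keyStr, marker)
	if !found || len(rest) != prefixLen+1+secretLen || rest[prefixLen] != '-' {
		return "", "", false
	}

	// The prefix is looked up in clear text; the secret is then checked
	// against the stored hash with bcrypt.CompareHashAndPassword.
	return rest[:prefixLen], rest[prefixLen+1:], true
}
```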
+func ListPreAuthKeys(tx *gorm.DB) ([]types.PreAuthKey, error) { + var keys []types.PreAuthKey + + err := tx.Preload("User").Find(&keys).Error if err != nil { return nil, err } - keys := []types.PreAuthKey{} - if err := tx.Preload("User").Where(&types.PreAuthKey{UserID: user.ID}).Find(&keys).Error; err != nil { - return nil, err + return keys, nil +} + +var ( + ErrPreAuthKeyFailedToParse = errors.New("failed to parse auth-key") + ErrPreAuthKeyNotTaggedOrOwned = errors.New("auth-key must be either tagged or owned by user") +) + +func findAuthKey(tx *gorm.DB, keyStr string) (*types.PreAuthKey, error) { + var pak types.PreAuthKey + + // Validate input is not empty + if keyStr == "" { + return nil, ErrPreAuthKeyFailedToParse } - return keys, nil + _, prefixAndHash, found := strings.Cut(keyStr, authKeyPrefix) + + if !found { + // Legacy format (plaintext) - backwards compatibility + err := tx.Preload("User").First(&pak, "key = ?", keyStr).Error + if err != nil { + return nil, ErrPreAuthKeyNotFound + } + + return &pak, nil + } + + // New format: hskey-auth-{12-char-prefix}-{64-char-hash} + // Expected minimum length: 12 (prefix) + 1 (separator) + 64 (hash) = 77 + const expectedMinLength = authKeyPrefixLength + 1 + authKeyLength + if len(prefixAndHash) < expectedMinLength { + return nil, fmt.Errorf( + "%w: key too short, expected at least %d chars after prefix, got %d", + ErrPreAuthKeyFailedToParse, + expectedMinLength, + len(prefixAndHash), + ) + } + + // Use fixed-length parsing instead of separator-based to handle dashes in base64 URL-safe + prefix := prefixAndHash[:authKeyPrefixLength] + + // Validate separator at expected position + if prefixAndHash[authKeyPrefixLength] != '-' { + return nil, fmt.Errorf( + "%w: expected separator '-' at position %d, got '%c'", + ErrPreAuthKeyFailedToParse, + authKeyPrefixLength, + prefixAndHash[authKeyPrefixLength], + ) + } + + hash := prefixAndHash[authKeyPrefixLength+1:] + + // Validate hash length + if len(hash) != authKeyLength { + return nil, fmt.Errorf( + "%w: hash length mismatch, expected %d chars, got %d", + ErrPreAuthKeyFailedToParse, + authKeyLength, + len(hash), + ) + } + + // Validate prefix contains only base64 URL-safe characters + if !isValidBase64URLSafe(prefix) { + return nil, fmt.Errorf( + "%w: prefix contains invalid characters (expected base64 URL-safe: A-Za-z0-9_-)", + ErrPreAuthKeyFailedToParse, + ) + } + + // Validate hash contains only base64 URL-safe characters + if !isValidBase64URLSafe(hash) { + return nil, fmt.Errorf( + "%w: hash contains invalid characters (expected base64 URL-safe: A-Za-z0-9_-)", + ErrPreAuthKeyFailedToParse, + ) + } + + // Look up key by prefix + err := tx.Preload("User").First(&pak, "prefix = ?", prefix).Error + if err != nil { + return nil, ErrPreAuthKeyNotFound + } + + // Verify hash matches + err = bcrypt.CompareHashAndPassword(pak.Hash, []byte(hash)) + if err != nil { + return nil, fmt.Errorf("invalid auth key: %w", err) + } + + return &pak, nil +} + +// isValidBase64URLSafe checks if a string contains only base64 URL-safe characters. +func isValidBase64URLSafe(s string) bool { + for _, c := range s { + if (c < 'A' || c > 'Z') && (c < 'a' || c > 'z') && (c < '0' || c > '9') && c != '-' && c != '_' { + return false + } + } + + return true } func (hsdb *HSDatabase) GetPreAuthKey(key string) (*types.PreAuthKey, error) { @@ -117,29 +283,41 @@ func (hsdb *HSDatabase) GetPreAuthKey(key string) (*types.PreAuthKey, error) { // GetPreAuthKey returns a PreAuthKey for a given key. 
The caller is responsible // for checking if the key is usable (expired or used). func GetPreAuthKey(tx *gorm.DB, key string) (*types.PreAuthKey, error) { - pak := types.PreAuthKey{} - if err := tx.Preload("User").First(&pak, "key = ?", key).Error; err != nil { - return nil, ErrPreAuthKeyNotFound - } - - return &pak, nil + return findAuthKey(tx, key) } // DestroyPreAuthKey destroys a preauthkey. Returns error if the PreAuthKey -// does not exist. -func DestroyPreAuthKey(tx *gorm.DB, pak types.PreAuthKey) error { +// does not exist. This also clears the auth_key_id on any nodes that reference +// this key. +func DestroyPreAuthKey(tx *gorm.DB, id uint64) error { return tx.Transaction(func(db *gorm.DB) error { - if result := db.Unscoped().Delete(pak); result.Error != nil { - return result.Error + // First, clear the foreign key reference on any nodes using this key + err := db.Model(&types.Node{}). + Where("auth_key_id = ?", id). + Update("auth_key_id", nil).Error + if err != nil { + return fmt.Errorf("failed to clear auth_key_id on nodes: %w", err) + } + + // Then delete the pre-auth key + err = tx.Unscoped().Delete(&types.PreAuthKey{}, id).Error + if err != nil { + return err } return nil }) } -func (hsdb *HSDatabase) ExpirePreAuthKey(k *types.PreAuthKey) error { +func (hsdb *HSDatabase) ExpirePreAuthKey(id uint64) error { return hsdb.Write(func(tx *gorm.DB) error { - return ExpirePreAuthKey(tx, k) + return ExpirePreAuthKey(tx, id) + }) +} + +func (hsdb *HSDatabase) DeletePreAuthKey(id uint64) error { + return hsdb.Write(func(tx *gorm.DB) error { + return DestroyPreAuthKey(tx, id) }) } @@ -155,17 +333,7 @@ func UsePreAuthKey(tx *gorm.DB, k *types.PreAuthKey) error { } // MarkExpirePreAuthKey marks a PreAuthKey as expired. -func ExpirePreAuthKey(tx *gorm.DB, k *types.PreAuthKey) error { +func ExpirePreAuthKey(tx *gorm.DB, id uint64) error { now := time.Now() - return tx.Model(&types.PreAuthKey{}).Where("id = ?", k.ID).Update("expiration", now).Error -} - -func generateKey() (string, error) { - size := 24 - bytes := make([]byte, size) - if _, err := rand.Read(bytes); err != nil { - return "", err - } - - return hex.EncodeToString(bytes), nil + return tx.Model(&types.PreAuthKey{}).Where("id = ?", id).Update("expiration", now).Error } diff --git a/hscontrol/db/preauth_keys_test.go b/hscontrol/db/preauth_keys_test.go index 605e7442..7c5dcbd7 100644 --- a/hscontrol/db/preauth_keys_test.go +++ b/hscontrol/db/preauth_keys_test.go @@ -1,84 +1,447 @@ package db import ( + "fmt" "slices" + "strings" "testing" + "time" "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/util" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" - "gopkg.in/check.v1" "tailscale.com/types/ptr" ) -func (*Suite) TestCreatePreAuthKey(c *check.C) { - // ID does not exist - _, err := db.CreatePreAuthKey(12345, true, false, nil, nil) - c.Assert(err, check.NotNil) +func TestCreatePreAuthKey(t *testing.T) { + tests := []struct { + name string + test func(*testing.T, *HSDatabase) + }{ + { + name: "error_invalid_user_id", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() - user, err := db.CreateUser(types.User{Name: "test"}) - c.Assert(err, check.IsNil) + _, err := db.CreatePreAuthKey(ptr.To(types.UserID(12345)), true, false, nil, nil) + assert.Error(t, err) + }, + }, + { + name: "success_create_and_list", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() - key, err := db.CreatePreAuthKey(types.UserID(user.ID), true, false, nil, nil) - c.Assert(err, 
check.IsNil) + user, err := db.CreateUser(types.User{Name: "test"}) + require.NoError(t, err) - // Did we get a valid key? - c.Assert(key.Key, check.NotNil) - c.Assert(len(key.Key), check.Equals, 48) + key, err := db.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) + require.NoError(t, err) + assert.NotEmpty(t, key.Key) - // Make sure the User association is populated - c.Assert(key.User.ID, check.Equals, user.ID) + // List keys for the user + keys, err := db.ListPreAuthKeys() + require.NoError(t, err) + assert.Len(t, keys, 1) - // ID does not exist - _, err = db.ListPreAuthKeys(1000000) - c.Assert(err, check.NotNil) + // Verify User association is populated + assert.Equal(t, user.ID, keys[0].User.ID) + }, + }, + } - keys, err := db.ListPreAuthKeys(types.UserID(user.ID)) - c.Assert(err, check.IsNil) - c.Assert(len(keys), check.Equals, 1) + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) - // Make sure the User association is populated - c.Assert((keys)[0].User.ID, check.Equals, user.ID) + tt.test(t, db) + }) + } } -func (*Suite) TestPreAuthKeyACLTags(c *check.C) { - user, err := db.CreateUser(types.User{Name: "test8"}) - c.Assert(err, check.IsNil) +func TestPreAuthKeyACLTags(t *testing.T) { + tests := []struct { + name string + test func(*testing.T, *HSDatabase) + }{ + { + name: "reject_malformed_tags", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() - _, err = db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, []string{"badtag"}) - c.Assert(err, check.NotNil) // Confirm that malformed tags are rejected + user, err := db.CreateUser(types.User{Name: "test-tags-1"}) + require.NoError(t, err) - tags := []string{"tag:test1", "tag:test2"} - tagsWithDuplicate := []string{"tag:test1", "tag:test2", "tag:test2"} - _, err = db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, tagsWithDuplicate) - c.Assert(err, check.IsNil) + _, err = db.CreatePreAuthKey(user.TypedID(), false, false, nil, []string{"badtag"}) + assert.Error(t, err) + }, + }, + { + name: "deduplicate_and_sort_tags", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() - listedPaks, err := db.ListPreAuthKeys(types.UserID(user.ID)) - c.Assert(err, check.IsNil) - gotTags := listedPaks[0].Proto().GetAclTags() - slices.Sort(gotTags) - c.Assert(gotTags, check.DeepEquals, tags) + user, err := db.CreateUser(types.User{Name: "test-tags-2"}) + require.NoError(t, err) + + expectedTags := []string{"tag:test1", "tag:test2"} + tagsWithDuplicate := []string{"tag:test1", "tag:test2", "tag:test2"} + + _, err = db.CreatePreAuthKey(user.TypedID(), false, false, nil, tagsWithDuplicate) + require.NoError(t, err) + + listedPaks, err := db.ListPreAuthKeys() + require.NoError(t, err) + require.Len(t, listedPaks, 1) + + gotTags := listedPaks[0].Proto().GetAclTags() + slices.Sort(gotTags) + assert.Equal(t, expectedTags, gotTags) + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + + tt.test(t, db) + }) + } } func TestCannotDeleteAssignedPreAuthKey(t *testing.T) { db, err := newSQLiteTestDB() require.NoError(t, err) user, err := db.CreateUser(types.User{Name: "test8"}) - assert.NoError(t, err) + require.NoError(t, err) - key, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, []string{"tag:good"}) - assert.NoError(t, err) + key, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, []string{"tag:good"}) + require.NoError(t, err) node 
:= types.Node{ ID: 0, Hostname: "testest", - UserID: user.ID, + UserID: &user.ID, RegisterMethod: util.RegisterMethodAuthKey, AuthKeyID: ptr.To(key.ID), } db.DB.Save(&node) - err = db.DB.Delete(key).Error + err = db.DB.Delete(&types.PreAuthKey{ID: key.ID}).Error require.ErrorContains(t, err, "constraint failed: FOREIGN KEY constraint failed") } + +func TestPreAuthKeyAuthentication(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + + user := db.CreateUserForTest("test-user") + + tests := []struct { + name string + setupKey func() string // Returns key string to test + wantFindErr bool // Error when finding the key + wantValidateErr bool // Error when validating the key + validateResult func(*testing.T, *types.PreAuthKey) + }{ + { + name: "legacy_key_plaintext", + setupKey: func() string { + // Insert legacy key directly using GORM (simulate existing production key) + // Note: We use raw SQL to bypass GORM's handling and set prefix to empty string + // which simulates how legacy keys exist in production databases + legacyKey := "abc123def456ghi789jkl012mno345pqr678stu901vwx234yz" + now := time.Now() + + // Use raw SQL to insert with empty prefix to avoid UNIQUE constraint + err := db.DB.Exec(` + INSERT INTO pre_auth_keys (key, user_id, reusable, ephemeral, used, created_at) + VALUES (?, ?, ?, ?, ?, ?) + `, legacyKey, user.ID, true, false, false, now).Error + require.NoError(t, err) + + return legacyKey + }, + wantFindErr: false, + wantValidateErr: false, + validateResult: func(t *testing.T, pak *types.PreAuthKey) { + t.Helper() + + assert.Equal(t, user.ID, *pak.UserID) + assert.NotEmpty(t, pak.Key) // Legacy keys have Key populated + assert.Empty(t, pak.Prefix) // Legacy keys have empty Prefix + assert.Nil(t, pak.Hash) // Legacy keys have nil Hash + }, + }, + { + name: "new_key_bcrypt", + setupKey: func() string { + // Create new key via API + keyStr, err := db.CreatePreAuthKey( + user.TypedID(), + true, false, nil, []string{"tag:test"}, + ) + require.NoError(t, err) + + return keyStr.Key + }, + wantFindErr: false, + wantValidateErr: false, + validateResult: func(t *testing.T, pak *types.PreAuthKey) { + t.Helper() + + assert.Equal(t, user.ID, *pak.UserID) + assert.Empty(t, pak.Key) // New keys have empty Key + assert.NotEmpty(t, pak.Prefix) // New keys have Prefix + assert.NotNil(t, pak.Hash) // New keys have Hash + assert.Len(t, pak.Prefix, 12) // Prefix is 12 chars + }, + }, + { + name: "new_key_format_validation", + setupKey: func() string { + keyStr, err := db.CreatePreAuthKey( + user.TypedID(), + true, false, nil, nil, + ) + require.NoError(t, err) + + // Verify format: hskey-auth-{12-char-prefix}-{64-char-hash} + // Use fixed-length parsing since prefix/hash can contain dashes (base64 URL-safe) + assert.True(t, strings.HasPrefix(keyStr.Key, "hskey-auth-")) + + // Extract prefix and hash using fixed-length parsing like the real code does + _, prefixAndHash, found := strings.Cut(keyStr.Key, "hskey-auth-") + assert.True(t, found) + assert.GreaterOrEqual(t, len(prefixAndHash), 12+1+64) // prefix + '-' + hash minimum + + prefix := prefixAndHash[:12] + assert.Len(t, prefix, 12) // Prefix is 12 chars + assert.Equal(t, byte('-'), prefixAndHash[12]) // Separator + hash := prefixAndHash[13:] + assert.Len(t, hash, 64) // Hash is 64 chars + + return keyStr.Key + }, + wantFindErr: false, + wantValidateErr: false, + }, + { + name: "invalid_bcrypt_hash", + setupKey: func() string { + // Create valid key + key, err := db.CreatePreAuthKey( + user.TypedID(), + true, false, nil, 
nil, + ) + require.NoError(t, err) + + keyStr := key.Key + + // Return key with tampered hash using fixed-length parsing + _, prefixAndHash, _ := strings.Cut(keyStr, "hskey-auth-") + prefix := prefixAndHash[:12] + + return "hskey-auth-" + prefix + "-" + "wrong_hash_here_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" + }, + wantFindErr: true, + wantValidateErr: false, + }, + { + name: "empty_key", + setupKey: func() string { + return "" + }, + wantFindErr: true, + wantValidateErr: false, + }, + { + name: "key_too_short", + setupKey: func() string { + return "hskey-auth-short" + }, + wantFindErr: true, + wantValidateErr: false, + }, + { + name: "missing_separator", + setupKey: func() string { + return "hskey-auth-ABCDEFGHIJKLabcdefghijklmnopqrstuvwxyz1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ" + }, + wantFindErr: true, + wantValidateErr: false, + }, + { + name: "hash_too_short", + setupKey: func() string { + return "hskey-auth-ABCDEFGHIJKL-short" + }, + wantFindErr: true, + wantValidateErr: false, + }, + { + name: "prefix_with_invalid_chars", + setupKey: func() string { + return "hskey-auth-ABC$EF@HIJKL-" + strings.Repeat("a", 64) + }, + wantFindErr: true, + wantValidateErr: false, + }, + { + name: "hash_with_invalid_chars", + setupKey: func() string { + return "hskey-auth-ABCDEFGHIJKL-" + "invalid$chars" + strings.Repeat("a", 54) + }, + wantFindErr: true, + wantValidateErr: false, + }, + { + name: "prefix_not_found_in_db", + setupKey: func() string { + // Create a validly formatted key but with a prefix that doesn't exist + return "hskey-auth-NotInDB12345-" + strings.Repeat("a", 64) + }, + wantFindErr: true, + wantValidateErr: false, + }, + { + name: "expired_legacy_key", + setupKey: func() string { + legacyKey := "expired_legacy_key_123456789012345678901234" + now := time.Now() + expiration := time.Now().Add(-1 * time.Hour) // Expired 1 hour ago + + // Use raw SQL to avoid UNIQUE constraint on empty prefix + err := db.DB.Exec(` + INSERT INTO pre_auth_keys (key, user_id, reusable, ephemeral, used, created_at, expiration) + VALUES (?, ?, ?, ?, ?, ?, ?) + `, legacyKey, user.ID, true, false, false, now, expiration).Error + require.NoError(t, err) + + return legacyKey + }, + wantFindErr: false, + wantValidateErr: true, + }, + { + name: "used_single_use_legacy_key", + setupKey: func() string { + legacyKey := "used_legacy_key_123456789012345678901234567" + now := time.Now() + + // Use raw SQL to avoid UNIQUE constraint on empty prefix + err := db.DB.Exec(` + INSERT INTO pre_auth_keys (key, user_id, reusable, ephemeral, used, created_at) + VALUES (?, ?, ?, ?, ?, ?) 
+ `, legacyKey, user.ID, false, false, true, now).Error + require.NoError(t, err) + + return legacyKey + }, + wantFindErr: false, + wantValidateErr: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + keyStr := tt.setupKey() + + pak, err := db.GetPreAuthKey(keyStr) + + if tt.wantFindErr { + assert.Error(t, err) + return + } + + require.NoError(t, err) + require.NotNil(t, pak) + + // Check validation if needed + if tt.wantValidateErr { + err := pak.Validate() + assert.Error(t, err) + + return + } + + if tt.validateResult != nil { + tt.validateResult(t, pak) + } + }) + } +} + +func TestMultipleLegacyKeysAllowed(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + + user, err := db.CreateUser(types.User{Name: "test-legacy"}) + require.NoError(t, err) + + // Create multiple legacy keys by directly inserting with empty prefix + // This simulates the migration scenario where existing databases have multiple + // plaintext keys without prefix/hash fields + now := time.Now() + + for i := range 5 { + legacyKey := fmt.Sprintf("legacy_key_%d_%s", i, strings.Repeat("x", 40)) + + err := db.DB.Exec(` + INSERT INTO pre_auth_keys (key, prefix, hash, user_id, reusable, ephemeral, used, created_at) + VALUES (?, '', NULL, ?, ?, ?, ?, ?) + `, legacyKey, user.ID, true, false, false, now).Error + require.NoError(t, err, "should allow multiple legacy keys with empty prefix") + } + + // Verify all legacy keys can be retrieved + var legacyKeys []types.PreAuthKey + + err = db.DB.Where("prefix = '' OR prefix IS NULL").Find(&legacyKeys).Error + require.NoError(t, err) + assert.Len(t, legacyKeys, 5, "should have created 5 legacy keys") + + // Now create new bcrypt-based keys - these should have unique prefixes + key1, err := db.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) + require.NoError(t, err) + assert.NotEmpty(t, key1.Key) + + key2, err := db.CreatePreAuthKey(user.TypedID(), true, false, nil, nil) + require.NoError(t, err) + assert.NotEmpty(t, key2.Key) + + // Verify the new keys have different prefixes + pak1, err := db.GetPreAuthKey(key1.Key) + require.NoError(t, err) + assert.NotEmpty(t, pak1.Prefix) + + pak2, err := db.GetPreAuthKey(key2.Key) + require.NoError(t, err) + assert.NotEmpty(t, pak2.Prefix) + + assert.NotEqual(t, pak1.Prefix, pak2.Prefix, "new keys should have unique prefixes") + + // Verify we cannot manually insert duplicate non-empty prefixes + duplicatePrefix := "test_prefix1" + hash1 := []byte("hash1") + hash2 := []byte("hash2") + + // First insert should succeed + err = db.DB.Exec(` + INSERT INTO pre_auth_keys (key, prefix, hash, user_id, reusable, ephemeral, used, created_at) + VALUES ('', ?, ?, ?, ?, ?, ?, ?) + `, duplicatePrefix, hash1, user.ID, true, false, false, now).Error + require.NoError(t, err, "first key with prefix should succeed") + + // Second insert with same prefix should fail + err = db.DB.Exec(` + INSERT INTO pre_auth_keys (key, prefix, hash, user_id, reusable, ephemeral, used, created_at) + VALUES ('', ?, ?, ?, ?, ?, ?, ?) 
+ `, duplicatePrefix, hash2, user.ID, true, false, false, now).Error + require.Error(t, err, "duplicate non-empty prefix should be rejected") + assert.Contains(t, err.Error(), "UNIQUE constraint failed", "should fail with UNIQUE constraint error") +} diff --git a/hscontrol/db/schema.sql b/hscontrol/db/schema.sql index 175e2aff..ef0a2a0e 100644 --- a/hscontrol/db/schema.sql +++ b/hscontrol/db/schema.sql @@ -34,20 +34,15 @@ CREATE INDEX idx_users_deleted_at ON users(deleted_at); -- - Cannot create another local user "alice" (blocked by idx_name_no_provider_identifier) -- - Cannot create another user with provider_identifier="alice_github" (blocked by idx_provider_identifier) -- - Cannot create user "bob" with provider_identifier="alice_github" (blocked by idx_name_provider_identifier) -CREATE UNIQUE INDEX idx_provider_identifier ON users( - provider_identifier -) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users( - name, - provider_identifier -); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users( - name -) WHERE provider_identifier IS NULL; +CREATE UNIQUE INDEX idx_provider_identifier ON users(provider_identifier) WHERE provider_identifier IS NOT NULL; +CREATE UNIQUE INDEX idx_name_provider_identifier ON users(name, provider_identifier); +CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users(name) WHERE provider_identifier IS NULL; CREATE TABLE pre_auth_keys( id integer PRIMARY KEY AUTOINCREMENT, key text, + prefix text, + hash blob, user_id integer, reusable numeric, ephemeral numeric DEFAULT false, @@ -59,6 +54,7 @@ CREATE TABLE pre_auth_keys( CONSTRAINT fk_pre_auth_keys_user FOREIGN KEY(user_id) REFERENCES users(id) ON DELETE SET NULL ); +CREATE UNIQUE INDEX idx_pre_auth_keys_prefix ON pre_auth_keys(prefix) WHERE prefix IS NOT NULL AND prefix != ''; CREATE TABLE api_keys( id integer PRIMARY KEY AUTOINCREMENT, @@ -85,7 +81,7 @@ CREATE TABLE nodes( given_name varchar(63), user_id integer, register_method text, - forced_tags text, + tags text, auth_key_id integer, last_seen datetime, expiry datetime, diff --git a/hscontrol/db/sqliteconfig/config.go b/hscontrol/db/sqliteconfig/config.go index 3c1608d7..d27977a4 100644 --- a/hscontrol/db/sqliteconfig/config.go +++ b/hscontrol/db/sqliteconfig/config.go @@ -16,6 +16,7 @@ var ( ErrInvalidAutoVacuum = errors.New("invalid auto_vacuum") ErrWALAutocheckpoint = errors.New("wal_autocheckpoint must be >= -1") ErrInvalidSynchronous = errors.New("invalid synchronous") + ErrInvalidTxLock = errors.New("invalid txlock") ) const ( @@ -225,6 +226,62 @@ func (s Synchronous) String() string { return string(s) } +// TxLock represents SQLite transaction lock mode. +// Transaction lock mode determines when write locks are acquired during transactions. 
+// +// Lock Acquisition Behavior: +// +// DEFERRED - SQLite default, acquire lock lazily: +// - Transaction starts without any lock +// - First read acquires SHARED lock +// - First write attempts to upgrade to RESERVED lock +// - If another transaction holds RESERVED: SQLITE_BUSY (potential deadlock) +// - Can cause deadlocks when multiple connections attempt concurrent writes +// +// IMMEDIATE - Recommended for write-heavy workloads: +// - Transaction immediately acquires RESERVED lock at BEGIN +// - If lock unavailable, waits up to busy_timeout before failing +// - Other writers queue orderly instead of deadlocking +// - Prevents the upgrade-lock deadlock scenario +// - Slight overhead for read-only transactions that don't need locks +// +// EXCLUSIVE - Maximum isolation: +// - Transaction immediately acquires EXCLUSIVE lock at BEGIN +// - No other connections can read or write +// - Highest isolation but lowest concurrency +// - Rarely needed in practice +type TxLock string + +const ( + // TxLockDeferred acquires locks lazily (SQLite default). + // Risk of SQLITE_BUSY deadlocks with concurrent writers. Use for read-heavy workloads. + TxLockDeferred TxLock = "deferred" + + // TxLockImmediate acquires write lock immediately (RECOMMENDED for production). + // Prevents deadlocks by acquiring RESERVED lock at transaction start. + // Writers queue orderly, respecting busy_timeout. + TxLockImmediate TxLock = "immediate" + + // TxLockExclusive acquires exclusive lock immediately. + // Maximum isolation, no concurrent reads or writes. Rarely needed. + TxLockExclusive TxLock = "exclusive" +) + +// IsValid returns true if the TxLock is valid. +func (t TxLock) IsValid() bool { + switch t { + case TxLockDeferred, TxLockImmediate, TxLockExclusive, "": + return true + default: + return false + } +} + +// String returns the string representation. +func (t TxLock) String() string { + return string(t) +} + // Config holds SQLite database configuration with type-safe enums. // This configuration balances performance, durability, and operational requirements // for Headscale's SQLite database usage patterns. @@ -236,6 +293,7 @@ type Config struct { WALAutocheckpoint int // pages (-1 = default/not set, 0 = disabled, >0 = enabled) Synchronous Synchronous // synchronous mode (affects durability vs performance) ForeignKeys bool // enable foreign key constraints (data integrity) + TxLock TxLock // transaction lock mode (affects write concurrency) } // Default returns the production configuration optimized for Headscale's usage patterns. 
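The lock-mode documentation above is easiest to see end to end in the generated DSN. Below is a minimal sketch, assuming the `sqliteconfig` package introduced in this diff (import path `github.com/juanfont/headscale/hscontrol/db/sqliteconfig` is inferred from the file location) and the `Default()`/`ToURL()` changes in the hunks that follow:

```go
package main

import (
	"fmt"

	// Assumed import path, derived from this diff's file location.
	"github.com/juanfont/headscale/hscontrol/db/sqliteconfig"
)

func main() {
	// Default() enables TxLockImmediate (see the Default() hunk below).
	cfg := sqliteconfig.Default("/path/to/db.sqlite")

	// ToURL() emits the _txlock connection parameter ahead of the _pragma parameters.
	url, err := cfg.ToURL()
	if err != nil {
		panic(err)
	}

	fmt.Println(url)
	// file:/path/to/db.sqlite?_txlock=immediate&_pragma=busy_timeout=10000&_pragma=journal_mode=WAL&...
}
```

With `_txlock=immediate`, concurrent writers queue on `busy_timeout` instead of hitting the deferred-mode upgrade deadlock described above.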
@@ -244,6 +302,7 @@ type Config struct { // - Data durability with good performance (NORMAL synchronous) // - Automatic space management (INCREMENTAL auto-vacuum) // - Data integrity (foreign key constraints enabled) +// - Safe concurrent writes (IMMEDIATE transaction lock) // - Reasonable timeout for busy database scenarios (10s) func Default(path string) *Config { return &Config{ @@ -254,6 +313,7 @@ func Default(path string) *Config { WALAutocheckpoint: 1000, Synchronous: SynchronousNormal, ForeignKeys: true, + TxLock: TxLockImmediate, } } @@ -292,6 +352,10 @@ func (c *Config) Validate() error { return fmt.Errorf("%w: %s", ErrInvalidSynchronous, c.Synchronous) } + if c.TxLock != "" && !c.TxLock.IsValid() { + return fmt.Errorf("%w: %s", ErrInvalidTxLock, c.TxLock) + } + return nil } @@ -332,12 +396,20 @@ func (c *Config) ToURL() (string, error) { baseURL = "file:" + c.Path } - // Add parameters without encoding = signs - if len(pragmas) > 0 { - var queryParts []string - for _, pragma := range pragmas { - queryParts = append(queryParts, "_pragma="+pragma) - } + // Build query parameters + queryParts := make([]string, 0, 1+len(pragmas)) + + // Add _txlock first (it's a connection parameter, not a pragma) + if c.TxLock != "" { + queryParts = append(queryParts, "_txlock="+string(c.TxLock)) + } + + // Add pragma parameters + for _, pragma := range pragmas { + queryParts = append(queryParts, "_pragma="+pragma) + } + + if len(queryParts) > 0 { baseURL += "?" + strings.Join(queryParts, "&") } diff --git a/hscontrol/db/sqliteconfig/config_test.go b/hscontrol/db/sqliteconfig/config_test.go index edc215ed..66955bb9 100644 --- a/hscontrol/db/sqliteconfig/config_test.go +++ b/hscontrol/db/sqliteconfig/config_test.go @@ -71,6 +71,52 @@ func TestSynchronous(t *testing.T) { } } +func TestTxLock(t *testing.T) { + tests := []struct { + mode TxLock + valid bool + }{ + {TxLockDeferred, true}, + {TxLockImmediate, true}, + {TxLockExclusive, true}, + {TxLock(""), true}, // empty is valid (uses driver default) + {TxLock("IMMEDIATE"), false}, // uppercase is invalid + {TxLock("INVALID"), false}, + } + + for _, tt := range tests { + name := string(tt.mode) + if name == "" { + name = "empty" + } + + t.Run(name, func(t *testing.T) { + if got := tt.mode.IsValid(); got != tt.valid { + t.Errorf("TxLock(%q).IsValid() = %v, want %v", tt.mode, got, tt.valid) + } + }) + } +} + +func TestTxLockString(t *testing.T) { + tests := []struct { + mode TxLock + want string + }{ + {TxLockDeferred, "deferred"}, + {TxLockImmediate, "immediate"}, + {TxLockExclusive, "exclusive"}, + } + + for _, tt := range tests { + t.Run(tt.want, func(t *testing.T) { + if got := tt.mode.String(); got != tt.want { + t.Errorf("TxLock.String() = %q, want %q", got, tt.want) + } + }) + } +} + func TestConfigValidate(t *testing.T) { tests := []struct { name string @@ -104,6 +150,21 @@ func TestConfigValidate(t *testing.T) { }, wantErr: true, }, + { + name: "invalid txlock", + config: &Config{ + Path: "/path/to/db.sqlite", + TxLock: TxLock("INVALID"), + }, + wantErr: true, + }, + { + name: "valid txlock immediate", + config: &Config{ + Path: "/path/to/db.sqlite", + TxLock: TxLockImmediate, + }, + }, } for _, tt := range tests { @@ -123,9 +184,9 @@ func TestConfigToURL(t *testing.T) { want string }{ { - name: "default config", + name: "default config includes txlock immediate", config: Default("/path/to/db.sqlite"), - want: 
"file:/path/to/db.sqlite?_pragma=busy_timeout=10000&_pragma=journal_mode=WAL&_pragma=auto_vacuum=INCREMENTAL&_pragma=wal_autocheckpoint=1000&_pragma=synchronous=NORMAL&_pragma=foreign_keys=ON", + want: "file:/path/to/db.sqlite?_txlock=immediate&_pragma=busy_timeout=10000&_pragma=journal_mode=WAL&_pragma=auto_vacuum=INCREMENTAL&_pragma=wal_autocheckpoint=1000&_pragma=synchronous=NORMAL&_pragma=foreign_keys=ON", }, { name: "memory config", @@ -183,6 +244,47 @@ func TestConfigToURL(t *testing.T) { }, want: "file:/full.db?_pragma=busy_timeout=15000&_pragma=journal_mode=WAL&_pragma=auto_vacuum=FULL&_pragma=wal_autocheckpoint=1000&_pragma=synchronous=EXTRA&_pragma=foreign_keys=ON", }, + { + name: "with txlock immediate", + config: &Config{ + Path: "/test.db", + BusyTimeout: 5000, + TxLock: TxLockImmediate, + WALAutocheckpoint: -1, + ForeignKeys: true, + }, + want: "file:/test.db?_txlock=immediate&_pragma=busy_timeout=5000&_pragma=foreign_keys=ON", + }, + { + name: "with txlock deferred", + config: &Config{ + Path: "/test.db", + TxLock: TxLockDeferred, + WALAutocheckpoint: -1, + ForeignKeys: true, + }, + want: "file:/test.db?_txlock=deferred&_pragma=foreign_keys=ON", + }, + { + name: "with txlock exclusive", + config: &Config{ + Path: "/test.db", + TxLock: TxLockExclusive, + WALAutocheckpoint: -1, + }, + want: "file:/test.db?_txlock=exclusive", + }, + { + name: "empty txlock omitted from URL", + config: &Config{ + Path: "/test.db", + TxLock: "", + BusyTimeout: 1000, + WALAutocheckpoint: -1, + ForeignKeys: true, + }, + want: "file:/test.db?_pragma=busy_timeout=1000&_pragma=foreign_keys=ON", + }, } for _, tt := range tests { @@ -209,3 +311,10 @@ func TestConfigToURLInvalid(t *testing.T) { t.Error("Config.ToURL() with invalid config should return error") } } + +func TestDefaultConfigHasTxLockImmediate(t *testing.T) { + config := Default("/test.db") + if config.TxLock != TxLockImmediate { + t.Errorf("Default().TxLock = %q, want %q", config.TxLock, TxLockImmediate) + } +} diff --git a/hscontrol/db/suite_test.go b/hscontrol/db/suite_test.go index 0589ff81..15a85cf8 100644 --- a/hscontrol/db/suite_test.go +++ b/hscontrol/db/suite_test.go @@ -9,62 +9,31 @@ import ( "testing" "github.com/juanfont/headscale/hscontrol/types" - "gopkg.in/check.v1" + "github.com/rs/zerolog" "zombiezen.com/go/postgrestest" ) -func Test(t *testing.T) { - check.TestingT(t) -} - -var _ = check.Suite(&Suite{}) - -type Suite struct{} - -var ( - tmpDir string - db *HSDatabase -) - -func (s *Suite) SetUpTest(c *check.C) { - s.ResetDB(c) -} - -func (s *Suite) TearDownTest(c *check.C) { - // os.RemoveAll(tmpDir) -} - -func (s *Suite) ResetDB(c *check.C) { - // if len(tmpDir) != 0 { - // os.RemoveAll(tmpDir) - // } - - var err error - db, err = newSQLiteTestDB() - if err != nil { - c.Fatal(err) - } -} - -// TODO(kradalby): make this a t.Helper when we dont depend -// on check test framework. 
func newSQLiteTestDB() (*HSDatabase, error) { - var err error - tmpDir, err = os.MkdirTemp("", "headscale-db-test-*") + tmpDir, err := os.MkdirTemp("", "headscale-db-test-*") if err != nil { return nil, err } log.Printf("database path: %s", tmpDir+"/headscale_test.db") + zerolog.SetGlobalLevel(zerolog.Disabled) - db, err = NewHeadscaleDatabase( - types.DatabaseConfig{ - Type: types.DatabaseSqlite, - Sqlite: types.SqliteConfig{ - Path: tmpDir + "/headscale_test.db", + db, err := NewHeadscaleDatabase( + &types.Config{ + Database: types.DatabaseConfig{ + Type: types.DatabaseSqlite, + Sqlite: types.SqliteConfig{ + Path: tmpDir + "/headscale_test.db", + }, + }, + Policy: types.PolicyConfig{ + Mode: types.PolicyModeDB, }, }, - "", emptyCache(), ) if err != nil { @@ -107,18 +76,22 @@ func newHeadscaleDBFromPostgresURL(t *testing.T, pu *url.URL) *HSDatabase { port, _ := strconv.Atoi(pu.Port()) db, err := NewHeadscaleDatabase( - types.DatabaseConfig{ - Type: types.DatabasePostgres, - Postgres: types.PostgresConfig{ - Host: pu.Hostname(), - User: pu.User.Username(), - Name: strings.TrimLeft(pu.Path, "/"), - Pass: pass, - Port: port, - Ssl: "disable", + &types.Config{ + Database: types.DatabaseConfig{ + Type: types.DatabasePostgres, + Postgres: types.PostgresConfig{ + Host: pu.Hostname(), + User: pu.User.Username(), + Name: strings.TrimLeft(pu.Path, "/"), + Pass: pass, + Port: port, + Ssl: "disable", + }, + }, + Policy: types.PolicyConfig{ + Mode: types.PolicyModeDB, }, }, - "", emptyCache(), ) if err != nil { diff --git a/hscontrol/db/testdata/postgres/pre-24-postgresdb.pssql.dump b/hscontrol/db/testdata/postgres/pre-24-postgresdb.pssql.dump deleted file mode 100644 index 7f8df28b..00000000 Binary files a/hscontrol/db/testdata/postgres/pre-24-postgresdb.pssql.dump and /dev/null differ diff --git a/hscontrol/db/testdata/sqlite/0-22-3-to-0-23-0-routes-are-dropped-2063_dump.sql b/hscontrol/db/testdata/sqlite/0-22-3-to-0-23-0-routes-are-dropped-2063_dump.sql deleted file mode 100644 index 5d18ba5a..00000000 --- a/hscontrol/db/testdata/sqlite/0-22-3-to-0-23-0-routes-are-dropped-2063_dump.sql +++ /dev/null @@ -1,59 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2023-01-20 15:24:32.036023546+00:00','2023-01-20 15:24:32.036023546+00:00',NULL,'test_username'); -INSERT INTO users VALUES(2,'2023-01-20 15:24:37.819763186+00:00','2023-01-20 15:24:37.819763186+00:00',NULL,'test_username2'); -INSERT INTO users VALUES(3,'2023-03-14 18:44:35.748065603+00:00','2023-03-14 18:44:35.748065603+00:00',NULL,'test_username3'); -INSERT INTO users VALUES(4,'2023-10-28 10:11:58.184072133+00:00','2023-10-28 10:11:58.184072133+00:00',NULL,'test_username4'); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,"user_id" integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -INSERT INTO pre_auth_keys VALUES(1,'abc',3,0,0,1,'2023-03-14 18:48:00.276961537+00:00','2023-03-14 19:47:00+00:00'); -INSERT INTO pre_auth_keys VALUES(2,'abc',3,0,0,1,'2023-03-19 02:03:37.127380891+00:00','2023-03-19 03:03:00+00:00'); -INSERT INTO pre_auth_keys VALUES(3,'abc',3,0,0,1,'2023-07-12 16:29:17.255869839+00:00','2023-07-12 17:29:17.253994971+00:00'); -INSERT INTO pre_auth_keys VALUES(4,'abc',4,0,0,0,'2023-10-28 10:12:25.105521216+00:00','2023-10-28 
11:12:25.103560646+00:00'); -INSERT INTO pre_auth_keys VALUES(5,'abc',4,0,0,1,'2023-10-28 11:19:38.120019211+00:00','2023-10-28 12:19:38.118375437+00:00'); -INSERT INTO pre_auth_keys VALUES(6,'abc',4,0,0,1,'2023-10-28 11:47:58.0406679+00:00','2023-10-28 12:47:58.036681164+00:00'); -INSERT INTO pre_auth_keys VALUES(7,'abc',4,0,0,1,'2023-10-30 15:14:21.038723484+00:00','2023-10-30 16:14:21.03662712+00:00'); -INSERT INTO pre_auth_keys VALUES(8,'abc',4,0,0,1,'2023-10-30 16:18:12.358594006+00:00','2023-10-30 17:18:12.357173229+00:00'); -INSERT INTO pre_auth_keys VALUES(9,'abc',4,0,0,1,'2023-10-31 16:00:05.806877017+00:00','2023-10-31 17:00:05.801991493+00:00'); -INSERT INTO pre_auth_keys VALUES(10,'abc',1,0,0,1,'2023-10-31 16:17:04.813054795+00:00','2023-10-31 17:17:04.809264757+00:00'); -INSERT INTO pre_auth_keys VALUES(11,'abc',1,0,0,1,'2023-11-20 21:03:07.524801178+00:00','2023-11-20 22:03:07.521904023+00:00'); -INSERT INTO pre_auth_keys VALUES(12,'abc',1,0,0,1,'2024-01-09 16:53:26.73433598+00:00','2024-01-09 17:53:26.730815243+00:00'); -INSERT INTO pre_auth_keys VALUES(13,'abc',1,0,0,0,'2024-01-15 10:57:54.79892743+00:00','2024-01-15 11:57:54.797213855+00:00'); -INSERT INTO pre_auth_keys VALUES(14,'abc',3,0,0,1,'2024-02-09 11:11:44.760824633+00:00','2024-02-09 12:11:44.757791384+00:00'); -INSERT INTO pre_auth_keys VALUES(15,'abc',1,0,0,1,'2024-02-09 15:58:39.383257853+00:00','2024-02-09 16:58:39.381325589+00:00'); -INSERT INTO pre_auth_keys VALUES(16,'abc',3,0,0,1,'2024-02-15 14:28:52.808875211+00:00','2024-02-15 15:28:52.806886242+00:00'); -INSERT INTO pre_auth_keys VALUES(17,'abc',3,0,0,1,'2024-03-22 09:14:48.915733965+00:00','2024-03-22 10:14:48.913376104+00:00'); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),"user_id" integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -INSERT INTO machines VALUES(1,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.1','test_hostname','test_given_name',2,'cli','null',0,'2024-08-21 08:40:52.222825283+00:00','2024-08-21 08:18:52.242124403+00:00','0001-01-01 00:00:00+00:00','{}','[]','2023-01-20 15:25:20.356129891+00:00','2024-08-21 08:40:52.222881482+00:00',NULL); -INSERT INTO machines VALUES(3,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.3','test_hostname','test_given_name',1,'cli','null',0,'2024-08-21 08:40:56.62914289+00:00','2024-08-21 08:18:46.63998217+00:00','0001-01-01 00:00:00+00:00','{}','[]','2023-01-20 15:35:03.370060441+00:00','2024-08-21 08:40:56.629253788+00:00',NULL); -INSERT INTO machines VALUES(6,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.6','test_hostname','test_given_name',3,'authkey','[]',2,'2023-03-20 19:41:31.098518915+00:00','2023-03-20 19:40:35.242992108+00:00','0001-01-01 00:00:00+00:00','{}','[]','2023-03-19 
02:04:05.008046059+00:00','2023-03-20 19:41:31.098577514+00:00',NULL); -INSERT INTO machines VALUES(9,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.9','test_hostname','test_given_name',1,'cli','null',0,'2024-07-29 13:49:30.299050593+00:00','2024-07-29 13:48:10.308424118+00:00','0001-01-01 00:00:00+00:00','{}','[]','2023-07-27 08:56:16.70263804+00:00','2024-07-29 13:49:30.299142293+00:00',NULL); -INSERT INTO machines VALUES(14,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.2','test_hostname','test_given_name',1,'cli','null',0,'2024-08-21 08:40:55.887260826+00:00','2024-08-21 08:18:45.907361083+00:00','0001-01-01 00:00:00+00:00','{}','[]','2023-10-10 18:41:13.821106556+00:00','2024-08-21 08:40:55.887344424+00:00',NULL); -INSERT INTO machines VALUES(24,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.16','test_hostname','test_given_name',1,'cli','null',0,'2024-08-21 08:40:53.141335445+00:00','2024-08-21 08:18:43.153446351+00:00','0001-01-01 00:00:00+00:00','{}','[]','2023-12-05 10:35:59.888379379+00:00','2024-08-21 08:40:53.141393344+00:00',NULL); -INSERT INTO machines VALUES(26,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.19','test_hostname','test_given_name',1,'authkey','[]',12,'2024-08-21 08:40:53.117322135+00:00','2024-08-21 08:18:43.129839273+00:00','0001-01-01 00:00:00+00:00','{}','[]','2024-01-09 16:54:15.190375117+00:00','2024-08-21 08:40:53.117392533+00:00',NULL); -INSERT INTO machines VALUES(27,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.10','test_hostname','test_given_name',3,'authkey','[]',14,'2024-08-21 08:40:17.737469943+00:00','2024-08-21 08:18:47.747596039+00:00','0001-01-01 00:00:00+00:00','{}','[]','2024-02-09 11:12:06.970690104+00:00','2024-08-21 08:40:17.737524142+00:00',NULL); -INSERT INTO machines VALUES(28,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.5','test_hostname','test_given_name',1,'cli','null',0,'2024-08-16 16:10:15.388306191+00:00','2024-08-16 15:42:50.445601237+00:00','0001-01-01 00:00:00+00:00','{}','[]','2024-02-09 12:05:40.426476827+00:00','2024-08-16 16:10:15.38838619+00:00',NULL); -INSERT INTO machines VALUES(30,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.4','test_hostname','test_given_name',1,'authkey','[]',15,'2024-08-21 08:40:58.279528619+00:00','2024-08-21 08:18:48.290788011+00:00','0001-01-01 00:00:00+00:00','{}','[]','2024-02-09 15:58:53.072455801+00:00','2024-08-21 08:40:58.279725115+00:00',NULL); -INSERT INTO 
machines VALUES(31,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.7','test_hostname','test_given_name',3,'authkey','[]',16,'2024-08-21 08:40:59.118548501+00:00','2024-08-21 08:18:49.131796048+00:00','0001-01-01 00:00:00+00:00','{}','[]','2024-02-15 14:29:11.751167192+00:00','2024-08-21 08:40:59.118634099+00:00',NULL); -INSERT INTO machines VALUES(32,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.11','test_hostname','test_given_name',1,'cli','null',0,'2024-08-21 08:40:11.534414289+00:00','2024-08-21 08:18:51.548310242+00:00','0001-01-01 00:00:00+00:00','{}','[]','2024-03-05 21:21:27.482961607+00:00','2024-08-21 08:40:11.534488588+00:00',NULL); -INSERT INTO machines VALUES(33,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.12','test_hostname','test_given_name',3,'authkey','[]',17,'2024-07-24 14:02:32.230464112+00:00','2024-07-24 13:58:06.34346502+00:00','0001-01-01 00:00:00+00:00','{}','[]','2024-03-22 09:15:06.999732064+00:00','2024-07-24 14:02:32.230542511+00:00',NULL); -INSERT INTO machines VALUES(34,'f3c6d0e51ae64e6020827a70fb345e8ca1225acfe2e5064b6d3611ed2ffc0a7b','c2ef0d2e0ec342d836e45690c7f6e4b986553b78997e08c55fbe28238283e30f','7808fad26f34fb919895224f80e01979074d8b8af4b81d2fa465134c20795e11','100.64.0.8','test_hostname','test_given_name',1,'cli','null',0,'2024-08-20 18:55:03.645637741+00:00','2024-08-20 18:53:33.573112275+00:00','0001-01-01 00:00:00+00:00','{}','[]','2024-05-12 18:10:18.529512769+00:00','2024-08-20 18:55:03.64569204+00:00',NULL); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -INSERT INTO routes VALUES(1,'2023-06-07 19:33:30.351228569+00:00','2024-02-09 12:12:46.846169943+00:00',NULL,1,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(2,'2023-06-07 19:33:30.374825485+00:00','2024-02-09 12:12:46.85904672+00:00',NULL,1,'::/0',1,1,0); -INSERT INTO routes VALUES(3,'2023-06-07 19:40:59.371440019+00:00','2024-02-09 12:12:50.579469958+00:00',NULL,1,'10.9.110.0/24',1,1,1); -INSERT INTO routes VALUES(6,'2024-01-09 16:54:15.230556693+00:00','2024-01-09 16:56:18.454994668+00:00',NULL,26,'172.100.100.0/24',1,1,1); -INSERT INTO routes VALUES(7,'2024-01-09 16:54:15.244301546+00:00','2024-01-09 16:54:15.244301546+00:00',NULL,26,'172.100.100.0/24',1,0,0); -INSERT INTO routes VALUES(8,'2024-02-15 14:29:11.80450206+00:00','2024-02-15 14:31:33.693110756+00:00',NULL,31,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(9,'2024-02-15 14:29:11.819498226+00:00','2024-02-15 14:29:11.819498226+00:00',NULL,31,'0.0.0.0/0',1,0,0); -INSERT INTO routes VALUES(10,'2024-02-15 14:29:11.83141704+00:00','2024-02-15 14:31:33.710242987+00:00',NULL,31,'::/0',1,1,0); -INSERT INTO routes VALUES(11,'2024-02-15 14:29:11.843494351+00:00','2024-02-15 14:29:11.843494351+00:00',NULL,31,'::/0',1,0,0); -INSERT INTO routes VALUES(12,'2024-08-16 14:58:46.162834956+00:00','2024-08-16 14:59:46.380224608+00:00',NULL,32,'192.168.0.24/32',1,1,1); -CREATE TABLE `kvs` (`key` 
text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON "users"(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/0-22-3-to-0-23-0-routes-fail-foreign-key-2076_dump.sql b/hscontrol/db/testdata/sqlite/0-22-3-to-0-23-0-routes-fail-foreign-key-2076_dump.sql deleted file mode 100644 index 638ba9df..00000000 --- a/hscontrol/db/testdata/sqlite/0-22-3-to-0-23-0-routes-fail-foreign-key-2076_dump.sql +++ /dev/null @@ -1,52 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),"user_id" integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -INSERT INTO machines VALUES(2,'dda8a58fa5db0d02a7351518342b1c1b6c57be2300e9d2c2600b42310c272560','8133aaff1056f74ba062d65ed71c619531b0d8bb3f455f9623ca62b21c765796','d5d8b487956f8ab8186682cd1cc8b1c0d1f38df3e0af2f1e100db219d72d05ca','fd7a:115c:a1e0::2,100.64.0.2','user6','user6',2,'oidc',NULL,0,'2023-08-10 20:39:17.723753+00:00','2023-08-10 20:38:29.407452+00:00','2023-08-11 04:31:47.962903+00:00','{"IPNVersion":"1.46.1-te42e60103-g4cea91365","BackendLogID":"3b49a2a5ab87c374de316f31901b27386e91803b577512c47fb770c4c2491f62","OS":"macOS","OSVersion":"12.6.8","Package":"IPNExtension","DeviceModel":"MacBookPro16,1","Hostname":"user6","Machine":"x86_64","GoArch":"amd64","GoArchVar":"v1","GoVersion":"go1.21rc3","Services":[{"Proto":"peerapi6","Port":47166},{"Proto":"peerapi4","Port":46590},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":false,"HairPinning":false,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":true,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":2,"DERPLatency":{"1-v4":0.113968906,"10-v4":0.023436893,"11-v4":0.198267412,"12-v4":0.061961581,"13-v4":0.031651191,"14-v4":0.146547143,"16-v4":0.114270957,"17-v4":0.023428486,"18-v4":0.144492636,"19-v4":0.150996698,"2-v4":0.012436127,"20-v4":0.166397065,"21-v4":0.064436639,"22-v4":0.169095616,"24-v4":0.070928559,"3-v4":0.197250582,"4-v4":0.162028147,"5-v4":0.220367619,"6-v4":0.225762119,"7-v4":0.114449341,"8-v4":0.135539131,"9-v4":0.054314979}},"Userspace":false,"UserspaceRouter":true}','["67.58.239.134:41641","192.168.2.63:41641"]','2023-08-08 03:42:32.227355+00:00','2023-08-11 04:31:47.968673+00:00',NULL); -INSERT INTO machines 
VALUES(13,'5418566216128c1d55cb729d08b020c0948a4974c7ec8631ca2d87c94a57c62f','2df745735e7a024f59385687404555917d904c0cc6b2b4c376d86005fa0c50e9','bc23afab27bd633c993e5feb0cbc64b463bd06bd74e5b0ed34aa0c59db1241cc','fd7a:115c:a1e0::1,100.64.0.1','private-access','private-access',1,'authkey','["tag:routers-general"]',2,'2024-08-21 09:10:42.082477+00:00','2024-08-21 09:10:42.124509+00:00','0001-01-01 00:00:00+00:00','{"IPNVersion":"1.68.1-t92eacec73","BackendLogID":"ace1ce1f1029d84582595b60a979159f7ffc2d088c58bc58ce957d5b5d76ab65","OS":"linux","OSVersion":"6.1.102-111.182.amzn2023.aarch64","Container":true,"Distro":"alpine","DistroVersion":"3.18.6","Desktop":false,"Hostname":"private-access","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.22.4","RoutableIPs":["0.0.0.0/0","::/0","10.0.0.0/8","10.18.80.2/32"],"Services":[{"Proto":"peerapi6","Port":59780},{"Proto":"peerapi4","Port":58436},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":false,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":true,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":10,"DERPLatency":{"1-v4":0.072225784,"10-v4":0.006838935,"11-v4":0.190929032,"12-v4":0.059929235,"13-v4":0.034135764,"14-v4":0.160624902,"16-v4":0.081274428,"17-v4":0.027732861,"18-v4":0.152569261,"19-v4":0.156006631,"2-v4":0.021369602,"20-v4":0.191327522,"21-v4":0.072034124,"22-v4":0.17290295,"23-v4":0.24451928,"24-v4":0.07036978,"3-v4":0.218839858,"4-v4":0.154432955,"5-v4":0.205852211,"7-v4":0.106623559,"8-v4":0.142767263,"9-v4":0.055452594}},"Cloud":"aws","Userspace":false,"UserspaceRouter":false}','["54.218.81.157:55438","10.18.95.117:55438","172.17.0.1:55438"]','2023-08-09 07:25:28.410294+00:00','2024-08-21 09:10:42.125774+00:00',NULL); -INSERT INTO machines VALUES(14,'45b7283a09376c6e4a0e91c791cd7ed62d3ab67af04e2ab65ea769f13b5d01a8','3f122817b7daec63358d7f32037157cc0742b236ebe10159200e04661846dc12','ca8c018c9a84e1ee5bea886b353b85a1cbc41b26bad1a8ea09250a73ee3bc224','fd7a:115c:a1e0::3,100.64.0.3','user5','user5',2,'oidc',NULL,0,'2023-09-05 20:17:56.144957+00:00','2023-09-05 20:17:15.131619+00:00','2023-09-02 02:14:29.91958+00:00','{"IPNVersion":"1.48.1-t528f95da6-gc4e4acad7","BackendLogID":"105b508bb534b33b1fbbfcab5d1b35b2f8aa14898f7f1deb79f82a0b4556a337","OS":"macOS","OSVersion":"12.6.8","Package":"IPNExtension","DeviceModel":"MacBookPro16,1","Hostname":"user5","Machine":"x86_64","GoArch":"amd64","GoArchVar":"v1","GoVersion":"go1.21.0","Services":[{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":false,"HairPinning":false,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":true,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":2,"DERPLatency":{"1-v4":0.110587393,"10-v4":0.021509598,"11-v4":0.196329449,"12-v4":0.059015905,"13-v4":0.028908204,"14-v4":0.14475452,"16-v4":0.089070015,"17-v4":0.021616309,"18-v4":0.14320796,"19-v4":0.144814647,"2-v4":0.010198952,"20-v4":0.164708226,"21-v4":0.110530864,"22-v4":0.169718648,"24-v4":0.110470627,"3-v4":0.198647468,"4-v4":0.162606553,"5-v4":0.218460376,"6-v4":0.23733193,"7-v4":0.114587062,"8-v4":0.140155524,"9-v4":0.052894305}},"Userspace":false,"UserspaceRouter":true}','["67.58.239.134:41641","192.168.2.63:41641"]','2023-08-10 20:41:58.499623+00:00','2023-09-05 20:17:56.146062+00:00',NULL); -INSERT INTO machines 
VALUES(15,'46b10c2ee93b1da403a87d7d57eecd9b3bf47ce58425742bc44bb8967984f3e9','35e6f69d23d17c13c48c45c0b74d1c3db85b21736151bcc06933939dface88b0','393ae3076213aae7a3ac11bc08fb0ad5c690ee5c50a4522f209fb5b018ecc958','fd7a:115c:a1e0::4,100.64.0.4','user3','user3',3,'oidc',NULL,0,'2023-09-06 02:43:29.503895+00:00','2023-09-06 02:43:49.511926+00:00','2023-09-06 23:17:20.322291+00:00','{"IPNVersion":"1.46.1-te42e60103-g4cea91365","BackendLogID":"8f10b84dbe2a45b68242bbf236952cfe88fdce68de11d6c21d4ea9b6a3cbe0e4","OS":"macOS","OSVersion":"13.5.0","Package":"IPNExtension","DeviceModel":"MacBookPro16,1","Hostname":"user3","Machine":"x86_64","GoArch":"amd64","GoArchVar":"v1","GoVersion":"go1.21rc3","Services":[{"Proto":"peerapi6","Port":40203},{"Proto":"peerapi4","Port":37067},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":false,"HairPinning":false,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":true,"WorkingICMPv4":false,"UPnP":true,"PMP":false,"PCP":false,"PreferredDERP":13,"DERPLatency":{"12-v4":0.026085574,"13-v4":0.00640859,"9-v4":0.024989145}},"Userspace":false,"UserspaceRouter":true}','["8.36.226.139:41641","192.168.5.49:41641"]','2023-08-10 21:11:29.400994+00:00','2023-09-06 02:43:49.512388+00:00',NULL); -INSERT INTO machines VALUES(16,'785f4b01cbbafb26f69acc95913f87c18f91b400526a8fea867e57d1f6979199','26ff15bb8b839d2648dd5ec3dd751c31a96c56604c9bcc7865dad2a285092b11','03dd9981e5e2b580dfc7c4fc6614e99dfa44671810a04f43b351d4a081fedfd7','fd7a:115c:a1e0::5,100.64.0.5','user1','user1',4,'oidc',NULL,0,'2023-09-06 02:44:13.123335+00:00','2023-09-06 02:43:43.131322+00:00','2023-09-06 22:13:57.020939+00:00','{"IPNVersion":"1.48.1-t528f95da6-gc4e4acad7","BackendLogID":"40e9bb1b2cc2f8a56648c6a892cfc00c0f03d2457822a59606e95d75a1e82fec","OS":"macOS","OSVersion":"13.5.0","Package":"IPNExtension","DeviceModel":"MacBookPro15,1","Hostname":"user1","Machine":"x86_64","GoArch":"amd64","GoArchVar":"v1","GoVersion":"go1.21.0","Services":[{"Proto":"peerapi6","Port":44445},{"Proto":"peerapi4","Port":41053},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":false,"HairPinning":false,"WorkingIPv6":true,"OSHasIPv6":true,"WorkingUDP":true,"WorkingICMPv4":false,"HavePortMap":true,"UPnP":true,"PMP":false,"PCP":false,"PreferredDERP":2,"DERPLatency":{"10-v4":0.178866462,"17-v4":0.17890099,"17-v6":0.178871041,"2-v4":0.083849135,"2-v6":0.085625895}},"Userspace":false,"UserspaceRouter":true}','["73.241.238.118:59407","73.241.238.118:57965","[2601:646:c500:3ad0:694f:e60a:54dd:9236]:41641","73.241.238.118:11709","73.241.238.118:41902","73.241.238.118:57674","73.241.238.118:41641","10.0.0.158:57965","[2601:646:c500:3ad0::670e]:57965","[2601:646:c500:3ad0:ce6:7064:450d:a01a]:57965","[2601:646:c500:3ad0:694f:e60a:54dd:9236]:57965"]','2023-08-10 23:58:34.357314+00:00','2023-09-06 02:44:13.123879+00:00',NULL); -INSERT INTO machines VALUES(17,'85892b41a04ffab78009c9129a87006e9af9335d4e3521939acddfb5a3edc0f1','7e7d018c338666e72b3b5ec2cc5e987bb294a88320740f07c9a337f07c2228b6','96425c4e4c62419e780b920f117183b77be80bc370f3d528bbd0bf4384703f2f','fd7a:115c:a1e0::6,100.64.0.6','user4','user4',2,'oidc',NULL,0,'2023-09-12 22:58:03.464908+00:00','2023-09-12 22:58:03.37758+00:00','2023-09-12 
22:58:08.958716+00:00','{"IPNVersion":"1.48.1-t528f95da6-gc4e4acad7","BackendLogID":"5feae6bec74c6b933912936cfe4647f220d31d4aae9595838e0d5ab93c6ae277","OS":"macOS","OSVersion":"13.5.2","Package":"IPNExtension","DeviceModel":"MacBookPro18,1","Hostname":"user4","NoLogsNoSupport":true,"Machine":"arm64","GoArch":"arm64","GoVersion":"go1.21.0","Services":[{"Proto":"peerapi6","Port":64551},{"Proto":"peerapi4","Port":61927},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":false,"HairPinning":false,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":true,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":2,"DERPLatency":{"10-v4":0.021871917,"17-v4":0.072667291,"2-v4":0.010047084}},"Userspace":false,"UserspaceRouter":true}','["67.58.239.134:41641","192.168.2.205:41641"]','2023-08-24 06:16:01.898323+00:00','2023-09-12 22:58:08.961416+00:00',NULL); -INSERT INTO machines VALUES(18,'d1b2d75439d096b762a8c197377a37c7e771edd51f2a2146114af94756afa696','2da1fa2a8beeed09aede703c44b045bdb21a8819e505b91ed54b66599a3d61e4','37ad3823ffdada188c1710388f1cd242646e66a60d7554f38e069d8df90b064f','fd7a:115c:a1e0::7,100.64.0.7','user7','user7',3,'oidc',NULL,0,'2023-09-14 03:45:00.450227+00:00','2023-09-14 03:44:59.144507+00:00','2023-09-13 22:23:33.608107+00:00','{"IPNVersion":"1.48.1-t528f95da6-gc4e4acad7","BackendLogID":"fdcd2df9a10448a038e3afbdd166762d8917c2f10f4d49c2d257c2269f7a3ef3","OS":"macOS","OSVersion":"13.5.2","Package":"IPNExtension","DeviceModel":"MacBookPro16,1","Hostname":"user7","NoLogsNoSupport":true,"Machine":"x86_64","GoArch":"amd64","GoArchVar":"v1","GoVersion":"go1.21.0","Services":[{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":false,"HairPinning":false,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":true,"WorkingICMPv4":false,"UPnP":true,"PMP":false,"PCP":false,"PreferredDERP":13,"DERPLatency":{"1-v4":0.106408989,"10-v4":0.03490791,"11-v4":0.175664039,"12-v4":0.025765898,"13-v4":0.006238122,"14-v4":0.113346949,"16-v4":0.047132419,"17-v4":0.034994123,"18-v4":0.117520649,"19-v4":0.114357219,"2-v4":0.031949238,"20-v4":0.186615346,"21-v4":0.035247054,"22-v4":0.130433232,"23-v4":0.219779149,"24-v4":0.077523355,"3-v4":0.195639797,"4-v4":0.123427047,"5-v4":0.172791876,"6-v4":0.249922387,"7-v4":0.11796353,"8-v4":0.108542864,"9-v4":0.025837496}},"Userspace":false,"UserspaceRouter":true}','["8.36.226.139:41641","192.168.5.49:41641"]','2023-09-12 17:26:45.871277+00:00','2023-09-14 03:45:00.451656+00:00',NULL); -INSERT INTO machines VALUES(19,'0509c7634a68d962a11eb677b0914303e626af043ec1fe4e12c2dc5d45516cb8','5ba0a0384cffe3faf0a050ad8c5bd73a508faa63441b0fb8502c57313a4c523f','12d047ca5a667e2f4a02ac77994c3b6bd85359efa3810df372c0aad3a576a018','fd7a:115c:a1e0::8,100.64.0.8','user2','user2',2,'oidc','null',0,'2024-07-05 23:25:37.618298+00:00','2024-07-05 23:25:37.386862+00:00','2024-07-06 23:07:22.092685+00:00','{"IPNVersion":"1.68.2-tc79c500c7","BackendLogID":"047f3d6f49ca7c0990ff0084cb14f3aa3b8c41d40e4979ae7626d267bf8bca73","OS":"macOS","Package":"tailscaled","Hostname":"user2","Machine":"arm64","GoArch":"arm64","GoVersion":"go1.22.4","Services":[{"Proto":"peerapi-dns-proxy","Port":1}],"Userspace":false,"UserspaceRouter":true}','["156.47.240.192:61820","192.168.2.205:61820"]','2024-07-05 23:07:20.301108+00:00','2024-07-06 23:07:22.101717+00:00',NULL); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY 
KEY (`id`)); -INSERT INTO pre_auth_key_acl_tags VALUES(1,1,'tag:routers-general'); -INSERT INTO pre_auth_key_acl_tags VALUES(2,2,'tag:routers-general'); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,"user_id" integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -INSERT INTO pre_auth_keys VALUES(1,'832bf4265e26f5a5d4ea3593f652ed0df64cf9250f70ee23',1,'t','t','t','2023-08-08 01:52:38.951578+00:00','2033-08-05 01:52:38.949719+00:00'); -INSERT INTO pre_auth_keys VALUES(2,'c304da0964e5ff7b299a6426b413ad729b898bc2c2d829aa',1,'f','f','t','2023-08-09 07:12:47.593913+00:00','2023-08-09 08:12:47.591239+00:00'); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -INSERT INTO routes VALUES(1,'2023-08-08 03:37:36.017468+00:00','2023-08-08 03:37:36.031681+00:00',NULL,1,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(2,'2023-08-08 03:37:36.021527+00:00','2023-08-08 03:37:36.035791+00:00',NULL,1,'::/0',1,1,0); -INSERT INTO routes VALUES(3,'2023-08-08 04:30:41.584823+00:00','2023-08-08 04:30:41.597683+00:00',NULL,3,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(4,'2023-08-08 04:30:41.589517+00:00','2023-08-08 04:30:41.601622+00:00',NULL,3,'::/0',1,1,0); -INSERT INTO routes VALUES(7,'2023-08-08 15:23:37.871901+00:00','2023-08-08 15:23:37.883309+00:00',NULL,5,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(8,'2023-08-08 15:23:37.874743+00:00','2023-08-08 15:23:37.886933+00:00',NULL,5,'::/0',1,1,0); -INSERT INTO routes VALUES(9,'2023-08-09 02:32:42.366585+00:00','2023-08-09 02:32:42.377397+00:00',NULL,6,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(10,'2023-08-09 02:32:42.368994+00:00','2023-08-09 02:32:42.37993+00:00',NULL,6,'::/0',1,1,0); -INSERT INTO routes VALUES(11,'2023-08-09 02:32:42.370746+00:00','2023-08-09 02:32:42.370746+00:00',NULL,6,'10.0.0.0/8',1,0,0); -INSERT INTO routes VALUES(12,'2023-08-09 02:55:13.294542+00:00','2023-08-09 02:55:13.307086+00:00',NULL,7,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(13,'2023-08-09 02:55:13.297551+00:00','2023-08-09 02:55:13.310197+00:00',NULL,7,'::/0',1,1,0); -INSERT INTO routes VALUES(14,'2023-08-09 02:55:13.299975+00:00','2023-08-09 02:55:13.299975+00:00',NULL,7,'10.0.0.0/8',1,0,0); -INSERT INTO routes VALUES(89,'2023-08-09 05:10:16.439999+00:00','2023-08-09 05:10:16.455326+00:00',NULL,9,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(90,'2023-08-09 05:10:16.445257+00:00','2023-08-09 05:10:16.458885+00:00',NULL,9,'::/0',1,1,0); -INSERT INTO routes VALUES(91,'2023-08-09 05:10:16.448348+00:00','2023-08-09 06:18:14.219819+00:00',NULL,9,'10.0.0.0/8',1,1,0); -INSERT INTO routes VALUES(166,'2023-08-09 06:18:09.248571+00:00','2023-08-09 06:18:09.262669+00:00',NULL,11,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(167,'2023-08-09 06:18:09.251732+00:00','2023-08-09 06:18:09.266107+00:00',NULL,11,'::/0',1,1,0); -INSERT INTO routes VALUES(168,'2023-08-09 06:18:09.253883+00:00','2023-08-09 06:18:14.222585+00:00',NULL,11,'10.0.0.0/8',1,1,1); -INSERT INTO routes VALUES(169,'2023-08-09 06:34:33.350442+00:00','2023-08-09 06:34:33.361231+00:00',NULL,12,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(170,'2023-08-09 06:34:33.352935+00:00','2023-08-09 06:34:33.364731+00:00',NULL,12,'::/0',1,1,0); -INSERT INTO routes VALUES(171,'2023-08-09 06:34:33.354735+00:00','2023-08-09 
06:34:33.354735+00:00',NULL,12,'10.0.0.0/8',1,0,0); -INSERT INTO routes VALUES(172,'2023-08-09 07:25:28.459499+00:00','2023-08-09 07:25:28.459499+00:00',NULL,13,'10.0.0.0/8',1,0,0); -INSERT INTO routes VALUES(173,'2023-08-09 07:25:28.465389+00:00','2023-08-09 07:25:28.506247+00:00',NULL,13,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(174,'2023-08-09 07:25:28.473006+00:00','2023-08-09 07:25:28.51642+00:00',NULL,13,'::/0',1,1,0); -INSERT INTO routes VALUES(175,'2023-08-11 22:09:28.25414+00:00','2023-08-11 22:09:30.021129+00:00',NULL,13,'10.18.80.2/32',1,1,1); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2023-08-08 01:46:05.562873+00:00','2023-08-08 01:46:05.562873+00:00',NULL,'routers'); -INSERT INTO users VALUES(2,'2023-08-08 03:42:32.218269+00:00','2023-08-08 03:42:32.218269+00:00',NULL,'user2'); -INSERT INTO users VALUES(3,'2023-08-10 21:11:29.386521+00:00','2023-08-10 21:11:29.386521+00:00',NULL,'user3'); -INSERT INTO users VALUES(4,'2023-08-10 23:58:34.343848+00:00','2023-08-10 23:58:34.343848+00:00',NULL,'user4'); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/0-23-0-to-0-24-0-no-more-special-types_dump.sql b/hscontrol/db/testdata/sqlite/0-23-0-to-0-24-0-no-more-special-types_dump.sql deleted file mode 100644 index 015a9b45..00000000 --- a/hscontrol/db/testdata/sqlite/0-23-0-to-0-24-0-no-more-special-types_dump.sql +++ /dev/null @@ -1,40 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -INSERT INTO users VALUES(1,'2024-09-27 14:26:08.573622915+00:00','2024-09-27 14:26:08.573622915+00:00',NULL,'user2'); -INSERT INTO users VALUES(2,'2024-09-27 14:26:17.094350688+00:00','2024-09-27 14:26:17.094350688+00:00',NULL,'user1'); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'3d133ec953e31fd41edbd935371234f762b4bae300cea618',1,1,0,1,'2024-09-27 14:26:14.737869796+00:00','2024-09-28 14:26:14.736601748+00:00'); -INSERT INTO pre_auth_keys VALUES(2,'9813cc1df1832259fb6322dad788bb9bec89d8a01eef683a',2,1,0,1,'2024-09-27 14:26:23.181049239+00:00','2024-09-28 14:26:23.179903567+00:00'); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` 
integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`hostinfo` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes VALUES(1,'mkey:1efe4388236c1c83fe0a19d3ce7c321ab81e138a4da57917c231ce4c01944409','nodekey:4091de8ee569b46a0cf322ae7350e80f3af4ccfd6d83a27ad4ce455982bd0f52','discokey:0ec0a701b7596a230fff993483c12019951899920fbc1eefa90f73f05147ea20','["172.19.0.5:50477"]','{"IPNVersion":"1.74.1-t0ca17be4a","BackendLogID":"ef6d8598273807218f7589f6c957f09d0caa2c146b71a22ab417bc2cd2c61e32","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.9","Desktop":false,"Package":"container","Hostname":"ts-1-74-9r0gv5","Machine":"aarch64","GoArch":"arm64","GoArchVar":"v8.0","GoVersion":"go1.23.1","Services":[{"Proto":"peerapi4","Port":48783},{"Proto":"peerapi6","Port":61271},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','{"IPNVersion":"1.74.1-t0ca17be4a","BackendLogID":"ef6d8598273807218f7589f6c957f09d0caa2c146b71a22ab417bc2cd2c61e32","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.9","Desktop":false,"Package":"container","Hostname":"ts-1-74-9r0gv5","Machine":"aarch64","GoArch":"arm64","GoArchVar":"v8.0","GoVersion":"go1.23.1","Services":[{"Proto":"peerapi4","Port":48783},{"Proto":"peerapi6","Port":61271},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','100.75.24.109','fd7a:115c:a1e0:6558:d71d:c420:6693:2137','ts-1-74-9r0gv5','ts-1-74-9r0gv5',1,'authkey','[]',1,'2024-09-27 14:26:16.86266134+00:00','0001-01-01 00:00:00+00:00','2024-09-27 14:26:14.818308317+00:00','2024-09-27 14:26:16.865671604+00:00',NULL); -INSERT INTO nodes 
VALUES(2,'mkey:779766343bd0311dd043e61f4e5ab13b43dbd9fef3c243aad406aac43146f566','nodekey:ae80297e118d23f00e029c89c82c53cf575803c40e0dfab5bf3f34213b265731','discokey:591540881c8a783dcfeeb1dbe049ce9a9b74347b6a96c0f17452735cb1de6c2f','["172.19.0.9:59415"]','{"IPNVersion":"1.73.0-dev20240911-t98f4dd985","BackendLogID":"98d5b64d1713a8048c260dc0a18d453bae0f144fdcccb31445356d30ef890a0b","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.9","Desktop":false,"Hostname":"ts-head-2rx0pf","Machine":"aarch64","GoArch":"arm64","GoArchVar":"v8.0","GoVersion":"go1.23.1","Services":[{"Proto":"peerapi4","Port":62304},{"Proto":"peerapi6","Port":58495},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','{"IPNVersion":"1.73.0-dev20240911-t98f4dd985","BackendLogID":"98d5b64d1713a8048c260dc0a18d453bae0f144fdcccb31445356d30ef890a0b","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.9","Desktop":false,"Hostname":"ts-head-2rx0pf","Machine":"aarch64","GoArch":"arm64","GoArchVar":"v8.0","GoVersion":"go1.23.1","Services":[{"Proto":"peerapi4","Port":62304},{"Proto":"peerapi6","Port":58495},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','100.66.134.61','fd7a:115c:a1e0:9f2a:7124:e520:c18e:1a70','ts-head-2rx0pf','ts-head-2rx0pf',1,'authkey','[]',1,'2024-09-27 14:26:16.872651222+00:00','0001-01-01 00:00:00+00:00','2024-09-27 14:26:14.824358847+00:00','2024-09-27 14:26:16.875317527+00:00',NULL); -INSERT INTO nodes 
VALUES(3,'mkey:233ecd117c36c1e5a635b1658fd54369fddf38b5312adf8aae38dfe6506fdf47','nodekey:2a53f1bbefae24a4724201379a05d32c84fc8c86fb2c856334a904ac53a3b827','discokey:acda4e99407eed3b807b81649998d69f93e9c28ce6e4dc1032686b45a70bca09','["172.19.0.6:38747"]','{"IPNVersion":"1.72.1-tf4a95663c","BackendLogID":"6d2ed9adf12339635cbe3098b0000898e1206217be28203e914c561b48d18d14","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.8","Desktop":false,"Package":"container","Hostname":"ts-1-72-tiaqxm","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.22.5","Services":[{"Proto":"peerapi4","Port":35537},{"Proto":"peerapi6","Port":52493},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','{"IPNVersion":"1.72.1-tf4a95663c","BackendLogID":"6d2ed9adf12339635cbe3098b0000898e1206217be28203e914c561b48d18d14","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.8","Desktop":false,"Package":"container","Hostname":"ts-1-72-tiaqxm","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.22.5","Services":[{"Proto":"peerapi4","Port":35537},{"Proto":"peerapi6","Port":52493},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','100.122.53.94','fd7a:115c:a1e0:7ff3:3849:312e:e0ce:b130','ts-1-72-tiaqxm','ts-1-72-tiaqxm',1,'authkey','[]',1,'2024-09-27 14:26:16.876448699+00:00','0001-01-01 00:00:00+00:00','2024-09-27 14:26:14.826628358+00:00','2024-09-27 14:26:16.877346245+00:00',NULL); -INSERT INTO nodes 
VALUES(4,'mkey:faa7947734ef7763fd18f23502b934d53d6f8120f6ff95dd3fd1efcda16b9b60','nodekey:2b463a60a90b18a43b64aab223e1f52887d67579b645eec44489c51ff0246e59','discokey:a4a8a7340733bc6fd01f9e3932e2c76311b06e63ed6d5f014e703bd02d664923','["172.19.0.4:40007"]','{"IPNVersion":"1.58.2-tb0e1bbb62","BackendLogID":"d136d72e6814dece6a96d1be183c76365a2998969a33b6dbcbcfa3fbc36c3441","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.5","Desktop":false,"Hostname":"ts-1-58-1nvqo9","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.21.5","Services":[{"Proto":"peerapi4","Port":39487},{"Proto":"peerapi6","Port":54215},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','{"IPNVersion":"1.58.2-tb0e1bbb62","BackendLogID":"d136d72e6814dece6a96d1be183c76365a2998969a33b6dbcbcfa3fbc36c3441","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.5","Desktop":false,"Hostname":"ts-1-58-1nvqo9","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.21.5","Services":[{"Proto":"peerapi4","Port":39487},{"Proto":"peerapi6","Port":54215},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','100.81.207.12','fd7a:115c:a1e0:6eab:4356:d2b2:3ec6:db5e','ts-1-58-1nvqo9','ts-1-58-1nvqo9',1,'authkey','[]',1,'2024-09-27 14:26:16.867951657+00:00','0001-01-01 00:00:00+00:00','2024-09-27 14:26:14.830098209+00:00','2024-09-27 14:26:16.872950015+00:00',NULL); -INSERT INTO nodes 
VALUES(5,'mkey:1dc2e7d2021e5e0ecfd5769e007088c570c24f92d72f8c81e9cbafaf651c321e','nodekey:ffde0f4d4b9a2365e2c62c47e86597565c7f95a47f0efe3468501d45e844cf31','discokey:aca5c0f12ea887b46ec89d4a2f5759587689a33757026760557bc9cf94979f4d','["172.19.0.7:46344"]','{"IPNVersion":"1.56.1-tf1ea3161a","BackendLogID":"c6ebd81c20af7b4aa066b458c48232d155b39b4cf2f51eb2247fdf75b7f133f6","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.5","Desktop":false,"Hostname":"ts-1-56-z9manh","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.21.5","Services":[{"Proto":"peerapi4","Port":58612},{"Proto":"peerapi6","Port":41402},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','{"IPNVersion":"1.56.1-tf1ea3161a","BackendLogID":"c6ebd81c20af7b4aa066b458c48232d155b39b4cf2f51eb2247fdf75b7f133f6","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.5","Desktop":false,"Hostname":"ts-1-56-z9manh","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.21.5","Services":[{"Proto":"peerapi4","Port":58612},{"Proto":"peerapi6","Port":41402},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','100.66.133.106','fd7a:115c:a1e0:fbd2:e73f:b9c7:6f65:b22f','ts-1-56-z9manh','ts-1-56-z9manh',1,'authkey','[]',1,'2024-09-27 14:26:16.878707293+00:00','0001-01-01 00:00:00+00:00','2024-09-27 14:26:14.830352877+00:00','2024-09-27 14:26:16.878994587+00:00',NULL); -INSERT INTO nodes 
VALUES(6,'mkey:a6b1a81c08622e5ba6ce0411be5d7a4a1cc9c56e1e2cb255a1411540a0353c5c','nodekey:951c0e942b007a32eada6c4de231537a49144ff36144ba09989142e48400aa72','discokey:a3addac173e10230313a823b88444fd082183dbd7a13b503c70eba1766ee5b5f','["172.19.0.8:40676"]','{"IPNVersion":"1.73.100-t7dcf65a10","BackendLogID":"352b9c938ef6fffd1e9d7b357370a9413abf57590c97f2176684b0572426c72c","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.8","Desktop":false,"Package":"container","Hostname":"ts-unstable-qxkobr","Machine":"aarch64","GoArch":"arm64","GoArchVar":"v8.0","GoVersion":"go1.23.0","Services":[{"Proto":"peerapi4","Port":56973},{"Proto":"peerapi6","Port":51259},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','{"IPNVersion":"1.73.100-t7dcf65a10","BackendLogID":"352b9c938ef6fffd1e9d7b357370a9413abf57590c97f2176684b0572426c72c","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.8","Desktop":false,"Package":"container","Hostname":"ts-unstable-qxkobr","Machine":"aarch64","GoArch":"arm64","GoArchVar":"v8.0","GoVersion":"go1.23.0","Services":[{"Proto":"peerapi4","Port":56973},{"Proto":"peerapi6","Port":51259},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','100.90.160.114','fd7a:115c:a1e0:8f5b:5b4c:2619:eb78:588d','ts-unstable-qxkobr','ts-unstable-qxkobr',1,'authkey','[]',1,'2024-09-27 14:26:16.872053136+00:00','0001-01-01 00:00:00+00:00','2024-09-27 14:26:14.846863625+00:00','2024-09-27 14:26:16.873881395+00:00',NULL); -INSERT INTO nodes 
VALUES(7,'mkey:49add8704a1d2ed2981cff46ca48eb8ebaf3c8c454b9b8ad23ad1f9e422aff48','nodekey:8fea100f6c7f4ad2d8104701611d1e071f3b7f97a74e447a668e8de15547e126','discokey:73ca6a00fbcd8d630566f9329c82c9a3d024430cdfec638347e87ad19add737d','["172.19.0.13:44781"]','{"IPNVersion":"1.58.2-tb0e1bbb62","BackendLogID":"40d54901b911fd583dfb54bbf328450fa1da24291a65c68a1b6799808e2c2952","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.5","Desktop":false,"Hostname":"ts-1-58-fvd14i","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.21.5","Services":[{"Proto":"peerapi4","Port":57976},{"Proto":"peerapi6","Port":55540},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','{"IPNVersion":"1.58.2-tb0e1bbb62","BackendLogID":"40d54901b911fd583dfb54bbf328450fa1da24291a65c68a1b6799808e2c2952","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.5","Desktop":false,"Hostname":"ts-1-58-fvd14i","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.21.5","Services":[{"Proto":"peerapi4","Port":57976},{"Proto":"peerapi6","Port":55540},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','100.104.229.213','fd7a:115c:a1e0:2652:4049:544d:31b3:7e65','ts-1-58-fvd14i','ts-1-58-fvd14i',2,'authkey','[]',2,'2024-09-27 14:26:25.289265425+00:00','0001-01-01 00:00:00+00:00','2024-09-27 14:26:23.255563272+00:00','2024-09-27 14:26:25.293329154+00:00',NULL); -INSERT INTO nodes 
VALUES(8,'mkey:232725e99c1aa8f275259e6a88e59f77d3e5a8644f987c3be56b9a9d8b0b6803','nodekey:9fff7b41b1e7739214078737b4d473518509916fd429f37dbcd21538cb420413','discokey:184f4a42db7e71f4c002befb59fe200aada0514b3290d4609258797f414c4246','["172.19.0.12:40625"]','{"IPNVersion":"1.74.1-t0ca17be4a","BackendLogID":"40d20ee5cc6374aecea21b6d996be3dddfde1a14a72f43b88abcf54ac4dc7393","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.9","Desktop":false,"Package":"container","Hostname":"ts-1-74-m8xzkz","Machine":"aarch64","GoArch":"arm64","GoArchVar":"v8.0","GoVersion":"go1.23.1","Services":[{"Proto":"peerapi4","Port":50864},{"Proto":"peerapi6","Port":57864},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','{"IPNVersion":"1.74.1-t0ca17be4a","BackendLogID":"40d20ee5cc6374aecea21b6d996be3dddfde1a14a72f43b88abcf54ac4dc7393","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.9","Desktop":false,"Package":"container","Hostname":"ts-1-74-m8xzkz","Machine":"aarch64","GoArch":"arm64","GoArchVar":"v8.0","GoVersion":"go1.23.1","Services":[{"Proto":"peerapi4","Port":50864},{"Proto":"peerapi6","Port":57864},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','100.103.73.167','fd7a:115c:a1e0:9094:220b:d07c:2ada:afa8','ts-1-74-m8xzkz','ts-1-74-m8xzkz',2,'authkey','[]',2,'2024-09-27 14:26:25.298808764+00:00','0001-01-01 00:00:00+00:00','2024-09-27 14:26:23.257602115+00:00','2024-09-27 14:26:25.299937061+00:00',NULL); -INSERT INTO nodes 
VALUES(9,'mkey:d86a2d3bf33f28068f41bcf28eaef65874e75645b8a209b8e8ff75b982fd3030','nodekey:b791922bf53ca56cf20eddab18f70819dd974e88f81e0899468bbdb5ce57a947','discokey:3053d621f5f6a66bed296ca47950c73c6baeac05f363e6b93f4de436e5480912','["172.19.0.11:35503"]','{"IPNVersion":"1.56.1-tf1ea3161a","BackendLogID":"0dcd9913179fdd831ed881b2c4bbdbd34f8c35cdc291b770af1de2bd6f65406f","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.5","Desktop":false,"Hostname":"ts-1-56-d68ebk","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.21.5","Services":[{"Proto":"peerapi4","Port":60691},{"Proto":"peerapi6","Port":52294},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','{"IPNVersion":"1.56.1-tf1ea3161a","BackendLogID":"0dcd9913179fdd831ed881b2c4bbdbd34f8c35cdc291b770af1de2bd6f65406f","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.5","Desktop":false,"Hostname":"ts-1-56-d68ebk","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.21.5","Services":[{"Proto":"peerapi4","Port":60691},{"Proto":"peerapi6","Port":52294},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','100.66.151.35','fd7a:115c:a1e0:3bb0:cb5d:ce50:aff7:e909','ts-1-56-d68ebk','ts-1-56-d68ebk',2,'authkey','[]',2,'2024-09-27 14:26:25.295234746+00:00','0001-01-01 00:00:00+00:00','2024-09-27 14:26:23.259854709+00:00','2024-09-27 14:26:25.299175891+00:00',NULL); -INSERT INTO nodes 
VALUES(10,'mkey:7c5d3c34416dcba5331cb4e982f2a39628e0f20af864f2dc3d374c78c3127f3f','nodekey:f9ce61c929baab74b66c15d06e1ed029b4a810b208cbcf09fc833579ef4c2d1d','discokey:6cf5becf49993e8346022e336d009aed97fc336b06bf5233f41fdd958125552e','["172.19.0.10:49139"]','{"IPNVersion":"1.73.100-t7dcf65a10","BackendLogID":"c8babece2de39ecf1fb604a6308ed4682bc80d57a20469f2a0a963839a3cf89a","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.8","Desktop":false,"Package":"container","Hostname":"ts-unstable-rww2w3","Machine":"aarch64","GoArch":"arm64","GoArchVar":"v8.0","GoVersion":"go1.23.0","Services":[{"Proto":"peerapi4","Port":34210},{"Proto":"peerapi6","Port":33839},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','{"IPNVersion":"1.73.100-t7dcf65a10","BackendLogID":"c8babece2de39ecf1fb604a6308ed4682bc80d57a20469f2a0a963839a3cf89a","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.8","Desktop":false,"Package":"container","Hostname":"ts-unstable-rww2w3","Machine":"aarch64","GoArch":"arm64","GoArchVar":"v8.0","GoVersion":"go1.23.0","Services":[{"Proto":"peerapi4","Port":34210},{"Proto":"peerapi6","Port":33839},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','100.80.216.124','fd7a:115c:a1e0:862b:7842:203c:b321:bc21','ts-unstable-rww2w3','ts-unstable-rww2w3',2,'authkey','[]',2,'2024-09-27 14:26:25.306378593+00:00','0001-01-01 00:00:00+00:00','2024-09-27 14:26:23.264221272+00:00','2024-09-27 14:26:25.307441765+00:00',NULL); -INSERT INTO nodes 
VALUES(11,'mkey:011d835011ce062b6d4599e12db61b40a0232841733bc562924e81f8fa27e918','nodekey:b0102b4105fee529bf2b639cda917764435532499bbe1e5b196912ade3fad74e','discokey:28b9ca33497f8e61af9a650ab7043c1c889c3dda3fc5502e57b26aa34320f92f','["172.19.0.14:33800"]','{"IPNVersion":"1.72.1-tf4a95663c","BackendLogID":"073c48058a504ed882178e356d461aa1ea43d2f0b18eb791dc776aafecb2916e","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.8","Desktop":false,"Package":"container","Hostname":"ts-1-72-au8tie","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.22.5","Services":[{"Proto":"peerapi4","Port":38962},{"Proto":"peerapi6","Port":42239},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','{"IPNVersion":"1.72.1-tf4a95663c","BackendLogID":"073c48058a504ed882178e356d461aa1ea43d2f0b18eb791dc776aafecb2916e","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.8","Desktop":false,"Package":"container","Hostname":"ts-1-72-au8tie","Machine":"aarch64","GoArch":"arm64","GoVersion":"go1.22.5","Services":[{"Proto":"peerapi4","Port":38962},{"Proto":"peerapi6","Port":42239},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','100.88.147.164','fd7a:115c:a1e0:82b7:17b5:cc3d:7714:4a61','ts-1-72-au8tie','ts-1-72-au8tie',2,'authkey','[]',2,'2024-09-27 14:26:25.305073961+00:00','0001-01-01 00:00:00+00:00','2024-09-27 14:26:23.266659201+00:00','2024-09-27 14:26:25.305586505+00:00',NULL); -INSERT INTO nodes 
VALUES(12,'mkey:7c2bcc073197a9e56d0eb13f0328e75768223569c2f4468d83d9ae05bfa50c5d','nodekey:d51788b3908c85e1d80ccc7f91e12d482e5da7dfb30d2f65f7161592d67a7f2e','discokey:a332f1553d1ad958abbe91dc0e3fc661e8f78864839f4bc7f8c8b0f12ef4e450','["172.19.0.15:47328"]','{"IPNVersion":"1.73.0-dev20240911-t98f4dd985","BackendLogID":"51a7c7677a3758f7a6c77fa555ab4834a0f5e7482bb64ddcde02c73ec9c2a138","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.9","Desktop":false,"Hostname":"ts-head-ujp4xa","Machine":"aarch64","GoArch":"arm64","GoArchVar":"v8.0","GoVersion":"go1.23.1","Services":[{"Proto":"peerapi4","Port":55532},{"Proto":"peerapi6","Port":55720},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','{"IPNVersion":"1.73.0-dev20240911-t98f4dd985","BackendLogID":"51a7c7677a3758f7a6c77fa555ab4834a0f5e7482bb64ddcde02c73ec9c2a138","OS":"linux","OSVersion":"6.8.0-36-generic","Container":true,"Distro":"alpine","DistroVersion":"3.18.9","Desktop":false,"Hostname":"ts-head-ujp4xa","Machine":"aarch64","GoArch":"arm64","GoArchVar":"v8.0","GoVersion":"go1.23.1","Services":[{"Proto":"peerapi4","Port":55532},{"Proto":"peerapi6","Port":55720},{"Proto":"peerapi-dns-proxy","Port":1}],"NetInfo":{"MappingVariesByDestIP":null,"HairPinning":null,"WorkingIPv6":false,"OSHasIPv6":true,"WorkingUDP":false,"WorkingICMPv4":false,"UPnP":false,"PMP":false,"PCP":false,"PreferredDERP":999,"FirewallMode":"ipt-default"},"Userspace":false,"UserspaceRouter":false,"AppConnector":false}','100.120.55.122','fd7a:115c:a1e0:b9b:d74c:e55a:c201:eb5','ts-head-ujp4xa','ts-head-ujp4xa',2,'authkey','[]',2,'2024-09-27 14:26:25.30691872+00:00','0001-01-01 00:00:00+00:00','2024-09-27 14:26:23.274483531+00:00','2024-09-27 14:26:25.30791685+00:00',NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',12); -INSERT INTO sqlite_sequence VALUES('users',2); -INSERT INTO sqlite_sequence VALUES('pre_auth_keys',2); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/0-23-0-to-0-24-0-preauthkey-tags-table_dump.sql b/hscontrol/db/testdata/sqlite/0-23-0-to-0-24-0-preauthkey-tags-table_dump.sql deleted file mode 100644 index 3231b97f..00000000 --- a/hscontrol/db/testdata/sqlite/0-23-0-to-0-24-0-preauthkey-tags-table_dump.sql +++ /dev/null @@ -1,40 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -INSERT INTO users VALUES(1,'2024-09-27 
14:12:26.861201+02:00','2024-09-27 14:12:26.861201+02:00',NULL,'kratest'); -INSERT INTO users VALUES(2,'2024-09-27 14:12:33.550973+02:00','2024-09-27 14:12:33.550973+02:00',NULL,'testkra'); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'09b28f8c3351984874d46dace0a70177a8721933a950b663',1,0,0,0,'2024-09-27 12:12:38.242797+00:00','2024-09-27 13:12:38.241063+00:00'); -INSERT INTO pre_auth_keys VALUES(2,'3112b953cb344191b2d5aec1b891250125bf7b437eac5d26',1,0,0,0,'2024-09-27 12:12:55.365396+00:00','2024-09-27 13:12:55.363595+00:00'); -INSERT INTO pre_auth_keys VALUES(3,'7c23b9f215961e7609527aef78bf82fb19064b002d78c36f',1,0,0,0,'2024-09-27 12:13:32.99926+00:00','2024-09-27 13:13:32.997848+00:00'); -INSERT INTO pre_auth_keys VALUES(4,'f2015583852b725220cc4b107fb288a4cf7ac259bd458a32',2,0,0,0,'2024-09-27 12:14:52.719795+00:00','2024-09-27 13:14:52.717783+00:00'); -INSERT INTO pre_auth_keys VALUES(5,'b212b990165e897944dd3772786544402729fb349da50f57',2,0,0,0,'2024-09-27 12:15:06.869006+00:00','2024-09-27 13:15:06.86619+00:00'); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_key_acl_tags VALUES(1,1,'tag:derp'); -INSERT INTO pre_auth_key_acl_tags VALUES(2,2,'tag:derp'); -INSERT INTO pre_auth_key_acl_tags VALUES(3,3,'tag:derp'); -INSERT INTO pre_auth_key_acl_tags VALUES(4,3,'tag:merp'); -INSERT INTO pre_auth_key_acl_tags VALUES(5,4,'tag:test'); -INSERT INTO pre_auth_key_acl_tags VALUES(6,5,'tag:test'); -INSERT INTO pre_auth_key_acl_tags VALUES(7,5,'tag:woop'); -INSERT INTO pre_auth_key_acl_tags VALUES(8,5,'tag:dedu'); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -INSERT INTO sqlite_sequence VALUES('users',2); -INSERT INTO sqlite_sequence VALUES('pre_auth_keys',5); -INSERT INTO sqlite_sequence VALUES('pre_auth_key_acl_tags',8); 
-CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.24.0.sql b/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.24.0.sql deleted file mode 100644 index 3d0f6ab0..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.24.0.sql +++ /dev/null @@ -1,97 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,"user_id" integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'c0e73b52c1706fcceea368ae07c19c0e91fbe1f78eff2f7b',1,0,1,0,'2022-02-26 17:02:32.568878371+00:00','2022-02-26 18:11:15.388354971+01:00',NULL); -INSERT INTO pre_auth_keys VALUES(2,'ad293159f9506a02d6de4730e5a2ddb74f9fd4033919ecc5',1,0,1,1,'2022-02-26 17:08:07.828690446+00:00','2022-02-26 18:11:18.890985216+01:00',NULL); -INSERT INTO pre_auth_keys VALUES(3,'66c7e0fbf74010ea2e153d35ffa0f3c48380ef38960d1fd1',1,0,0,1,'2022-02-26 17:11:54.149663776+00:00','2022-02-26 17:16:54.147175388+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(4,'cdbb69c88fabdc2609050c7f99e8ebdea6fcf81a50d802fc',1,0,0,1,'2022-02-26 17:15:34.160746962+00:00','2022-02-26 17:20:34.15935255+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(6,'0a99889a217a2f9cdd20082faadd5c90e72c83a47093a63a',1,0,0,1,'2022-03-04 07:27:17.535172209+00:00','2022-03-04 07:32:17.531871524+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(10,'6e88409e3883c1a47249e428030afd4a46bdaf3adb3b74c1',4,0,0,1,'2022-04-01 20:43:21.757703546+00:00','2022-04-01 20:48:21.756458144+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(13,'9d3c2bde9fcef181a3141dfe59d449bf03f6779dad3580e0',1,0,0,1,'2022-06-26 13:19:48.533865696+00:00','2022-06-26 13:24:48.532164552+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(16,'00b123d52ea8f58379b740fdc5c898b02330ab9b366cb1b4',1,0,0,1,'2023-02-12 06:21:30.15120385+00:00','2023-02-12 06:26:30.140082454+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(17,'c471ce93392c0d3040af0cf6166f4f578c3c66dd180b6e0b',1,0,0,1,'2023-02-12 06:26:55.829311638+00:00','2023-02-12 06:31:55.824701077+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(18,'fe0a438e67687540efcbbbc28c8e6c1b8ac1216f99de33d4',1,0,0,1,'2023-02-12 06:31:13.245185592+00:00','2023-02-12 06:36:13.241695106+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(19,'5a469043f1fa43ff11e54ea242dd882a81aea68f168b9a34',1,0,0,1,'2023-02-12 06:31:13.622545545+00:00','2023-02-12 06:36:13.560890824+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(20,'8b31d9a38282dfe07ffcebdfbd40db6b9e49997c93bed570',1,0,0,1,'2023-02-28 12:45:48.518939706+00:00','2023-02-28 12:50:48.445951259+00:00',NULL); 
-INSERT INTO pre_auth_keys VALUES(21,'47a4216f2b4e5885d4e53a3de2ffe95521d8a708ca26d31e',1,0,0,1,'2023-02-28 12:45:48.53865321+00:00','2023-02-28 12:50:48.439132728+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(22,'1b7be83871f396e4544c8445acfc8d308dbfe29c7f0197f0',1,0,0,1,'2023-02-28 12:45:48.538806791+00:00','2023-02-28 12:50:48.445073692+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(23,'c9f364adba95c6c46c162eaa3786702805595841c0150927',1,0,0,1,'2023-05-05 08:08:16.73107293+00:00','2023-05-05 08:13:16.722921676+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(24,'a90c87e30fa22ffee39e5ce157dd22c909f2026295e3bce4',1,0,0,1,'2023-08-14 14:36:52.042138928+00:00','2023-08-14 14:41:52.038644473+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(25,'bb059cde5663619a8918bde19b9f8236085725554d9d78c2',1,0,0,1,'2023-08-14 16:15:33.722630834+00:00','2023-08-14 16:20:33.719604033+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(26,'0733b8fc67adc82644c87a95947b24c5d368c633cff92eb4',1,0,0,1,'2023-08-29 06:30:44.934900329+00:00','2023-08-29 06:35:44.931280114+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(27,'567f3a39066fd99c567d14bcc374b25cee0ad71af08b9054',1,0,0,1,'2023-11-03 17:53:45.857200883+00:00','2023-11-03 17:58:45.853742836+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(30,'8aa13c93cbef46d36c8159da51fb41a469ae04f932980890',1,0,0,1,'2024-07-29 12:24:27.140614087+00:00','2024-07-29 12:29:27.136525982+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(31,'9314b171abc433d73b8db297c1c5c65dae0be53d39a71520',1,0,0,1,'2024-08-07 18:35:12.375119763+00:00','2024-08-07 18:40:12.371590392+00:00',NULL); -CREATE TABLE IF NOT EXISTS "routes" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,"node_id" integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -INSERT INTO routes VALUES(7,'2023-01-15 14:41:49.445054371+01:00','2023-01-29 10:12:11.959527554+01:00',NULL,14,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(8,'2023-01-15 14:41:49.452617331+01:00','2023-01-29 10:12:13.154311101+01:00',NULL,14,'::/0',1,1,0); -INSERT INTO routes VALUES(9,'2024-05-05 07:16:02.944084847+02:00','2024-05-05 13:25:02.977065366+02:00',NULL,3,'10.138.98.32/27',1,1,1); -INSERT INTO routes VALUES(10,'2024-05-05 07:16:02.95709561+02:00','2024-05-05 13:25:04.513930181+02:00',NULL,3,'192.168.208.0/20',1,1,1); -INSERT INTO routes VALUES(11,'2024-05-05 12:46:08.770263657+02:00','2024-05-05 12:46:39.012076195+02:00',NULL,27,'172.19.128.0/20',1,1,1); -INSERT INTO routes VALUES(12,'2024-05-05 12:46:08.779527998+02:00','2024-05-05 12:46:39.843489833+02:00',NULL,27,'10.68.0.0/14',1,1,1); -INSERT INTO routes VALUES(13,'2024-05-05 12:46:08.787239639+02:00','2024-05-05 12:46:40.983839193+02:00',NULL,27,'10.243.219.68/31',1,1,1); -INSERT INTO routes VALUES(14,'2024-05-05 12:46:08.794135892+02:00','2024-08-18 08:45:35.831813635+02:00',NULL,27,'10.30.155.128/25',1,1,1); -INSERT INTO routes VALUES(15,'2024-05-05 12:46:08.799975671+02:00','2024-05-05 12:46:44.333956126+02:00',NULL,27,'10.64.0.0/13',1,1,1); -INSERT INTO routes VALUES(16,'2024-05-05 12:46:08.806879988+02:00','2024-05-05 12:46:45.422824419+02:00',NULL,27,'192.168.0.0/16',1,1,1); -INSERT INTO routes VALUES(17,'2024-08-08 20:44:22.76306591+02:00','2024-12-21 12:10:07.339044415+01:00',NULL,31,'172.27.33.0/24',1,1,1); -INSERT INTO routes VALUES(18,'2024-08-08 20:47:33.469970726+02:00','2024-12-21 
12:10:07.442783446+01:00',NULL,31,'10.151.196.0/23',1,1,1); -INSERT INTO routes VALUES(21,'2024-08-08 20:54:05.146666051+02:00','2024-12-21 12:10:07.538738125+01:00',NULL,31,'192.168.33.128/26',1,1,1); -INSERT INTO routes VALUES(23,'2024-08-08 20:54:05.162212208+02:00','2024-12-21 12:10:07.644714175+01:00',NULL,31,'10.69.87.96/27',1,1,1); -INSERT INTO routes VALUES(25,'2024-08-08 20:54:05.179419681+02:00','2024-12-21 12:10:07.753927883+01:00',NULL,31,'192.168.240.184/30',1,1,1); -INSERT INTO routes VALUES(26,'2024-08-08 20:54:05.186132539+02:00','2024-12-21 12:10:07.871905187+01:00',NULL,31,'172.19.162.160/27',1,1,1); -INSERT INTO routes VALUES(28,'2024-08-08 20:54:05.202442818+02:00','2024-12-21 12:10:07.972132539+01:00',NULL,31,'172.30.190.136/30',1,1,1); -INSERT INTO routes VALUES(31,'2024-08-08 20:54:05.246698925+02:00','2024-12-21 12:10:08.150358433+01:00',NULL,31,'10.241.118.90/31',1,1,1); -INSERT INTO routes VALUES(32,'2024-08-08 20:54:05.256984635+02:00','2024-12-21 12:10:08.349521909+01:00',NULL,31,'192.168.0.0/17',1,1,1); -INSERT INTO routes VALUES(37,'2024-08-08 20:54:05.300971626+02:00','2024-12-21 12:10:08.553265285+01:00',NULL,31,'192.168.192.0/19',1,1,1); -INSERT INTO routes VALUES(43,'2024-08-08 20:54:05.383430747+02:00','2024-12-21 12:10:08.66112581+01:00',NULL,31,'172.29.254.8/29',1,1,1); -INSERT INTO routes VALUES(47,'2024-08-08 20:54:05.443181025+02:00','2024-12-21 12:10:08.826993878+01:00',NULL,31,'172.18.8.0/22',1,1,1); -INSERT INTO routes VALUES(48,'2024-08-08 20:54:05.449778605+02:00','2024-12-21 12:10:09.237117302+01:00',NULL,31,'10.169.34.250/31',1,1,1); -INSERT INTO routes VALUES(49,'2024-09-03 06:43:34.875117755+02:00','2024-12-21 12:10:09.342259317+01:00',NULL,31,'172.24.0.0/16',1,1,1); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -INSERT INTO api_keys VALUES(1,'S2nn85NliD',X'24326224313224437447593364536a63344d6548427a3266435a6d7a4f6f3972575178476b616e703962563435326157387753476364753947454b75','2022-12-25 21:35:28.644697962+01:00','2023-03-22 06:22:18.724817647+01:00',NULL); -INSERT INTO api_keys VALUES(2,'1KZkpEyiMH',X'24326224313224746474656e31394e30496e4f626952327a6f466f574f337749306d73684a453242455270723865557a483171747943436b4c333965','2023-03-22 06:22:18.339101298+01:00','2023-04-13 09:32:24.318715268+02:00',NULL); -INSERT INTO api_keys VALUES(3,'6yBMrqvEDX',X'2432622431322450757377786d6145352e503449364f4c36767069394f7a3948783837494879723050457752397a684b49594b56434d5250384c7a53','2023-04-13 09:32:23.864995051+02:00','2023-07-12 07:32:23.45+00:00',NULL); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,"hostname" text,"user_id" integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`given_name` varchar(63),`forced_tags` text,`ipv4` text,`ipv6` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes 
VALUES(1,'mkey:6a9f06fccf0eff626075428e16c07c2e93464667a67c84dd0efadb1a74761d72','nodekey:d0d67c955181f1a9369a471496e1004678eebd93660394949848e61c7132ef9e','discokey:7667a90b1915f5cb3f725d159cd3f8bc3e46f6244df25f45d7fadc830a6d064e','web-05',1,'authKey',3,'2025-01-24 17:18:58.879117852+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["198.139.228.73:16016","[a5dd:542c:6998:997c:2d87:61b0:c760:ab6f]:56919","112.221.140.130:19743","[6520:763a:5fe2:8e88:5c8e:5b03:eeb1:5588]:20840","[7670:9084:a917:1173:70ce:2798:c805:66dc]:50297","[6f5b:219b:2f0b:cc08:a034:6936:d96b:11be]:28318","60.210.210.203:14631","66.181.120.138:15464"]','2022-02-26 18:11:55.92837512+01:00','2025-01-24 17:18:58.87921789+01:00',NULL,'node001',NULL,'100.64.0.1','fd7a:115c:a1e0::1'); -INSERT INTO nodes VALUES(3,'mkey:ba9e14b963dd3a5953c9a32a58947ab98fa5ca44c492210b694a2a3fef2b06aa','nodekey:9d11ad4d2879c577b58cf0533f25b19eaa2d85f79f38becbc5a89c464df074af','discokey:6fce22b965a1d56764028a8e04abe388ea727db682c21c321e20f8900b0883ff','lt-38',7,'authKey',NULL,'2025-01-24 21:00:03.638659475+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["51.107.242.7:55191","[ce6f:3f5b:da2f:8382:f3a1:e2d4:1c47:fe5e]:25604"]','2022-03-01 19:26:46.242187887+01:00','2025-01-24 21:00:03.638914306+01:00',NULL,'node003',NULL,'100.64.0.3','fd7a:115c:a1e0::3'); -INSERT INTO nodes VALUES(4,'mkey:f67b11e9aff08b81a9d1e13ac991b7951575a60da19e32e89d0342f349aa6df5','nodekey:6fc38db24bf19fd60a01408ec2ad89add0a28013d1723540bd188e4c72cd006f','discokey:1f79ad4a4889d4333d6ac1c0c3879ffbe422d7d466e7532a17ab0974832f4f7e','laptop-41',1,'authKey',6,'2025-01-23 10:46:09.470903946+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c199:e8ca:c1bb:8364:dde8:bf3e:9467:c050]:61218","168.120.224.160:52272","184.61.212.248:29690","[32a5:51:afc3:d256:5b83:97cd:bb34:13c1]:63964"]','2022-03-04 08:27:19.383566031+01:00','2025-01-23 10:46:09.47099081+01:00',NULL,'node004',NULL,'100.64.0.2','fd7a:115c:a1e0::2'); -INSERT INTO nodes VALUES(5,'mkey:a98f853b653cd80bb22b2610fa9fd575989eb00cf7d726e131f2f2b600752b20','nodekey:08816b57ddb7244a6893e67601caf0bbe341ab32e851d5e9272c265aff2dfa6d','discokey:fd250ccb73dab47b884d89d99263d1ee60f2901d9e9782357f5718a7b6b11b05','lt-42',8,'authKey',NULL,'2025-01-24 15:55:31.927025416+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[875:166d:8d6f:e053:bbe6:1c6d:843f:25b4]:11215","196.188.180.254:52309","180.187.196.169:52502"]','2022-03-05 13:54:23.660591381+01:00','2025-01-24 15:55:31.927258245+01:00',NULL,'node005',NULL,'100.64.0.4','fd7a:115c:a1e0::4'); -INSERT INTO nodes VALUES(6,'mkey:5cb6e4bdb7176296568c0bc83af4b3731b3a59c18d985818e74c92bbbda4b1cd','nodekey:ccf1754b9a331d4f2dbb8dce7cab241a2613dff1ae489f468f4ba5f930c724fc','discokey:9a208d9bbd1b1fc35285a325e4b3c91a9151e812bdc8b49278a8d2271895b4ec','web-14',1,'authkey',NULL,'2025-01-22 13:52:30.302827092+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["166.111.217.41:13979","[cda1:789a:9f90:f1b0:df03:71a6:2bf8:b975]:20326","[ebe:748c:c594:2bd:9c6b:3518:a118:c54d]:30356"]','2022-03-21 15:52:20.739594362+01:00','2025-01-22 13:52:30.303085169+01:00',NULL,'node006',NULL,'100.64.0.5','fd7a:115c:a1e0::5'); -INSERT INTO nodes VALUES(7,'mkey:bee1775fbab5753b808bfdd48d573fb328354515d0141ee3fcc18d49d13bc917','nodekey:6ad35a7f984fa1de838c163807b20ecc35cbbfe7ec2ec72880c5e50cad0a14a3','discokey:534f4aaa96a2f2f411d499c8d6867803c3462c25fdc5ca72b41c6634e84c1d8c','db-70',4,'authkey',10,'2025-01-24 23:11:36.767081207+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["3.26.132.25:924","[1293:e44:fa71:9ae2:ee7c:64ff:5bb2:705a]:8071","[2263:abdc:1461:efa1:8cca:7009:f84:3747]:3731","[ca4e:2134:74e7:8888:8b8:32ea:a7cd:e6fc]:53863","[f38c:fd3b:43da:7790:faf4:7da2:8df9:7bd1]:32916","147.168.107.56:35336"]','2022-04-01 22:43:27.318756043+02:00','2025-01-24 23:11:36.767680407+01:00',NULL,'node007',NULL,'100.64.0.6','fd7a:115c:a1e0::6'); -INSERT INTO nodes VALUES(8,'mkey:e16c0484cd8458ffbd8ddf0ce4908e82ac5bd0ecf329cb1a5b618b89a4a72742','nodekey:5b9be700a2b1b5dca7c1b2953b8e06e0df8d6702f2aa0143ea9a674e0600e6c4','discokey:9eb34382a19dbf5e405bc3801acb7b1e0bc6f1b5c7c4632ea4c277b3241284eb','laptop-34',7,'authkey',NULL,'2025-01-24 10:53:08.967670007+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c901:4170:7a7:1a98:5474:f0a8:1722:830b]:48874","39.36.112.35:10587","68.226.133.92:29098","11.142.49.170:22792"]','2022-04-03 09:38:46.178224968+02:00','2025-01-24 10:53:08.967898387+01:00',NULL,'node008',NULL,'100.64.0.7','fd7a:115c:a1e0::7'); -INSERT INTO nodes VALUES(9,'mkey:ab4157b902f026d4f8fd2180a463453049a7fab9488eff84c5e78cefaccb6e9b','nodekey:b848066a92746001cc9678f5b459f0849be0a00ac0126c95d4bb9ba25686f671','discokey:f52b49b12ffcd4ba40697f9f3367fc65db950348315d8da283a68181b2c0ab85','laptop-05',7,'cli',NULL,'2025-01-23 17:54:23.678930678+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["116.140.148.134:50513","[862d:91db:1faa:c25:4bb3:1882:ebe2:dc36]:34449","[d95c:d244:1457:ef8d:92a3:17f8:a38c:efed]:16770","215.159.49.214:32988","136.206.88.82:20612"]','2022-04-09 09:43:07.09027176+02:00','2025-01-23 17:54:23.679048069+01:00',NULL,'node009',NULL,'100.64.0.8','fd7a:115c:a1e0::8'); -INSERT INTO nodes VALUES(10,'mkey:b6578985271145f84a1edd7bb3410551addc2b122cc3d3645916eecc612d287f','nodekey:5f327f104bc03d7a238a03a4f34b05c1c8a569215b25623f16a74c1afcbd49a6','discokey:df34ffd51349629e037e70c81280b8afc0dbeff254e8c0621033294998bee371','db-93',1,'authkey',NULL,'2025-01-18 10:08:12.942196784+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["215.10.187.249:19488","[bfe7:ebf6:e338:79a6:dba:4051:a7df:10]:31519"]','2022-06-26 13:57:07.40762063+02:00','2025-01-18 10:08:12.942403965+01:00',NULL,'node010',NULL,'100.64.0.9','fd7a:115c:a1e0::9'); -INSERT INTO nodes VALUES(11,'mkey:1aee722e316e87b8ae056d72807d1ebb4338198e6ba62e36ec4e3b07873d218a','nodekey:99b698ecc210ced8962dd6862962c80c78301e5ecf598e011b1361c916db48f3','discokey:7f6ec7e95b93b35c02b66e1f28070998ddbf32d273252ef8500d7ab8c14a7d3f','email-44',1,'authkey',13,'2025-01-24 21:00:02.220787051+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["87.139.180.4:64685","[bae8:1f82:ff78:e8a8:235e:a50e:c131:2e8]:43129","[3486:5bc7:ada8:8da1:9df9:3cc6:d018:ee8b]:65112","43.182.159.11:27315"]','2022-06-26 15:19:50.566498735+02:00','2025-01-24 21:00:02.221354141+01:00',NULL,'node011',NULL,'100.64.0.10','fd7a:115c:a1e0::a'); -INSERT INTO nodes VALUES(14,'mkey:4edbeff705459e7e5d0df80d42abce8d3db49137e013cbe38416aa2d0e77e4de','nodekey:149c771cccddad67f62a6cdf8ebc282692d3f8138e6749e3bcb8044209f30ea8','discokey:7773ea1a30faff0ef3d64186b00fc7f0d584c68eea4e3d75ad6dab5cfb54bee4','email-49',1,'authkey',NULL,'2025-01-23 06:42:37.593813869+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["61.59.238.85:46024","221.176.225.4:60304","[59f1:edd4:571:ac10:e700:fdbe:6084:e1eb]:43774","[88a8:d815:f927:150a:2c5b:9952:2587:818f]:33169"]','2022-09-26 16:07:54.206927686+02:00','2025-01-23 06:42:37.594039595+01:00',NULL,'node014',NULL,'100.64.0.13','fd7a:115c:a1e0::d'); -INSERT INTO nodes 
VALUES(23,'mkey:ec475b374671562006704de6f1be902a3d75079e8c9a3b52e6c3d6dc37d3086a','nodekey:655232f4f4205ddc98661144bd7a05a975f44a3f01ed22e18927e6265329104a','discokey:b30f00fbda4b3a7ed66ff30ac8c41c0c1c04a775756543f27155e1b93f947eca','db-05',1,'authkey',23,'2025-01-21 14:51:26.001774373+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f2e:fd97:b513:b737:33f3:dd2b:df4:a90e]:41096","155.239.169.182:41163","[c908:b81c:b4c8:38d3:e311:b3f6:3067:227c]:11541","[f41f:b9e9:9205:210a:4ff9:ab34:57fd:3148]:21852"]','2023-05-05 10:08:17.301597525+02:00','2025-01-21 14:51:26.002382861+01:00',NULL,'node023','[]','100.64.0.14','fd7a:115c:a1e0::e'); -INSERT INTO nodes VALUES(24,'mkey:cba1a93252cc66c9c35844b930b8177743e0ca3113b4a311c9ba3eecf98711d6','nodekey:7e3609ff22b18500773cdb42aa9a1c70de5a1b47f4fc5b85f8ec9620cc2d3a56','discokey:f3289ecda9660d9b664adef04a184e4c940003a0abf08c2a04bd46d87ab3c90c','db-91',1,'authkey',25,'2025-01-13 15:02:48.676721758+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e9c0:d5fd:b25c:8cc7:4c58:b666:3fc1:2814]:11543","30.125.129.134:36303","[213c:e72:d34:c59c:d4fa:8bb5:f6d3:d691]:24926","83.92.114.14:38013","223.127.253.143:5148","[e946:63ba:5369:b6db:2d37:8922:57e5:faa8]:51269","39.143.235.96:56031"]','2023-08-14 18:15:34.292188686+02:00','2025-01-13 15:02:48.67679754+01:00',NULL,'node024','[]','100.64.0.15','fd7a:115c:a1e0::f'); -INSERT INTO nodes VALUES(25,'mkey:94e59581d400e61de9be0f51fd13a3610f6110f7c43f8b361c5b53e5b1821d6e','nodekey:d6203177612c4aff07781f0286b7122009b77d10bda31f18a01a3acdc73608da','discokey:8ed1202f1da5d66b2a9603359e9812f357a0f584636fa05b9a0a575e57e530cb','email-69',1,'authkey',26,'2025-01-21 07:41:05.58787908+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["110.245.60.227:50528","33.141.84.140:39332","141.199.13.112:28607","94.8.65.228:34575","58.211.127.215:33647"]','2023-08-29 08:30:45.518580154+02:00','2025-01-21 07:41:05.58844064+01:00',NULL,'node025','[]','100.64.0.16','fd7a:115c:a1e0::10'); -INSERT INTO nodes VALUES(26,'mkey:04be3f20a566914418fa6e299f6b40484b6855fb2440cc997f2d457ef6a627a3','nodekey:257eb51ebf3703ce03c5ee07794cbbb6f94b24fd06ca914dc910e7741d55fea5','discokey:2ab4bdebe4ffe03d3a66246b6bb8a8289cb73c0f07aa4484d1ef3fdd6e17dd23','desktop-14',1,'authkey',27,'2025-01-24 23:00:24.184727782+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[3d31:7ec5:7635:7b6c:c7d0:cf70:bb45:6922]:16454","[80d6:f9fd:44f8:58b7:2049:70d4:721:139d]:31272","173.46.200.67:17932","62.215.78.119:52651"]','2023-11-03 18:53:48.108033118+01:00','2025-01-24 23:00:24.185295273+01:00',NULL,'node026','[]','100.64.0.17','fd7a:115c:a1e0::11'); -INSERT INTO nodes VALUES(27,'mkey:b0c1eaf2df8d6cdc328d4cdde4ca8f8ee66fc736845c38b9fbb47df19e5137fe','nodekey:c20b516b77faec9611cb1644e7ac0578f7542751d78bcb9eefda307d610838d2','discokey:79580271b2031ba56f1616b7f8a491e7389cefb24e906d9bb99d3da5d045de9b','srv-56',7,'authkey',NULL,'2025-01-24 20:00:03.216855282+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["13.225.162.245:60672","[5101:751a:5e00:cf46:8b50:8ba9:3c8e:4c7c]:33862","[b9fe:3630:1807:826:a4a6:e5d2:532c:bdc8]:43617","[25b5:5b72:2ecc:193c:b61e:329:81c6:5af6]:39854","214.128.115.35:43726","[436d:61b:9834:b0e0:b2ed:b452:6330:34ee]:59171","[13a0:b8ff:bebb:d48:e16e:869d:b542:531b]:17201","44.157.88.255:43418"]','2023-12-29 19:18:10.814399482+01:00','2025-01-24 20:00:03.217108129+01:00',NULL,'node027','[]','100.64.0.18','fd7a:115c:a1e0::12'); -INSERT INTO nodes 
VALUES(29,'mkey:6e0a32694e0a2bddbcc71d9489cbb76439f43ceffa40ab8bf070a695be120390','nodekey:0c36c35d4cba9e0b70a945ab88622097fa8659da6c7a20b0e4cbee1dea649075','discokey:ce88ad8d792775af9639e6295818cb416fb0ad24bfa6550f00722ee8095399c7','db-48',1,'authkey',NULL,'2025-01-24 20:00:05.821912941+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[60b9:155d:3b4a:1ade:51ce:e1f3:a6b8:8e5e]:42022","[ce74:4042:3405:dbfb:8e1b:2f01:5a55:6f49]:20536","[37ed:24ee:1ca9:2cdf:be79:d10a:1247:263c]:27485","199.22.246.223:36955"]','2024-06-02 09:46:39.307697473+02:00','2025-01-24 20:00:05.822130692+01:00',NULL,'node029','[]','100.64.0.19','fd7a:115c:a1e0::13'); -INSERT INTO nodes VALUES(30,'mkey:93318ff3d9920d6be01eb5166a608fd76c094851aa925e55a2725c6dc99bbb3f','nodekey:cab6cdb81dff8d8a9eccb84d2ff9561bea8840a090f3ea1ae27e94b02ccd6744','discokey:4b472fdd9a6ab4f56f46139d55d525e17dfedd61aa11a0c62a02ebd80aef354e','email-50',1,'authkey',30,'2025-01-17 00:08:28.17211431+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2f41:257d:dc1b:c709:ea6c:1ad2:ba8e:af48]:60708","[c7df:a6c9:ae51:6570:f6b6:7d2c:1803:50f0]:6218","62.26.91.22:45257","37.207.25.123:5981"]','2024-07-29 14:24:27.684620193+02:00','2025-01-17 00:08:28.17277623+01:00',NULL,'node030','[]','100.64.0.12','fd7a:115c:a1e0::c'); -INSERT INTO nodes VALUES(31,'mkey:ac01d01cd1e709938d07f3585184eef337024d32e80ceeddcb943ff8ca54cdd4','nodekey:d5906ed493f8df3fddb09fed829736f1274cb8120d6bcd33846a90460215a40d','discokey:0d4fc211bccbaa9f89e9ac73a33f4f682afe9d6ba7f722f5373cb1254787f2ef','srv-37',1,'authkey',31,'2025-01-24 17:05:22.207236228+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["49.111.96.135:35528","[abc1:b7cd:f18:6ed2:4b5a:9dce:3f08:8c17]:333","206.118.184.48:48132","7.64.160.114:11101"]','2024-08-07 20:35:12.944541318+02:00','2025-01-24 17:05:22.207871517+01:00',NULL,'node031','[]','100.64.0.20','fd7a:115c:a1e0::14'); -INSERT INTO nodes VALUES(32,'mkey:8010732f3bbb0f58f3002b645258dbbc7d6c5cfebf8a485685a84da9b30dcf00','nodekey:d34dac3b348cd2d100ba36a85b42542cfe1dc74d51197caf50b138fea1401168','discokey:e28e9f435da77357942130f3923bb584773e6812d8d27fdc6dce20cc0cc0f752','lt-44',1,'cli',NULL,'2025-01-24 23:20:53.282115214+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["85.212.234.244:43593","161.5.250.27:14020","[cd55:bf01:5540:c30:3981:7f71:a471:4993]:25285","24.16.62.70:35569","213.115.82.246:7596","195.216.27.246:54466","[c47f:35c9:41fe:6f5:a40d:3e8b:6679:411e]:12963","19.205.218.60:63516","116.228.193.194:44296","[ee98:8de3:1f5b:731c:751b:ba46:a4a7:8760]:23245","[6eb3:8c34:e6c9:4c6a:25c9:89ca:c165:354e]:65112","[7003:63bb:6429:d463:96a6:990:2bdc:217d]:4300","[d820:584e:2b46:6915:6420:3056:f76:9e33]:48885"]','2024-10-26 07:18:04.947942936+02:00','2025-01-24 23:20:53.282346841+01:00',NULL,'node032',NULL,'100.64.0.21','fd7a:115c:a1e0::15'); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2022-02-26 18:00:56.104436744+01:00','2024-09-14 08:51:13.31135114+02:00',NULL,'user001','','',NULL,NULL,''); -INSERT INTO users VALUES(4,'2022-04-01 22:30:02.657653341+02:00','2024-09-14 08:55:27.877614108+02:00',NULL,'user004','','',NULL,NULL,''); -INSERT INTO users 
VALUES(7,'2024-05-05 07:08:47.915309504+02:00','2024-09-14 08:55:02.778476991+02:00',NULL,'user007','','',NULL,NULL,''); -INSERT INTO users VALUES(8,'2024-09-14 08:57:26.215082073+02:00','2024-09-14 08:57:26.215082073+02:00',NULL,'user008','','',NULL,NULL,''); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.24.1.sql b/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.24.1.sql deleted file mode 100644 index af4d482b..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.24.1.sql +++ /dev/null @@ -1,95 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,"user_id" integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'c0e73b52c1706fcceea368ae07c19c0e91fbe1f78eff2f7b',1,0,1,0,'2022-02-26 17:02:32.568878371+00:00','2022-02-26 18:11:15.388354971+01:00',NULL); -INSERT INTO pre_auth_keys VALUES(2,'ad293159f9506a02d6de4730e5a2ddb74f9fd4033919ecc5',1,0,1,1,'2022-02-26 17:08:07.828690446+00:00','2022-02-26 18:11:18.890985216+01:00',NULL); -INSERT INTO pre_auth_keys VALUES(3,'66c7e0fbf74010ea2e153d35ffa0f3c48380ef38960d1fd1',1,0,0,1,'2022-02-26 17:11:54.149663776+00:00','2022-02-26 17:16:54.147175388+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(4,'cdbb69c88fabdc2609050c7f99e8ebdea6fcf81a50d802fc',1,0,0,1,'2022-02-26 17:15:34.160746962+00:00','2022-02-26 17:20:34.15935255+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(6,'0a99889a217a2f9cdd20082faadd5c90e72c83a47093a63a',1,0,0,1,'2022-03-04 07:27:17.535172209+00:00','2022-03-04 07:32:17.531871524+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(10,'6e88409e3883c1a47249e428030afd4a46bdaf3adb3b74c1',4,0,0,1,'2022-04-01 20:43:21.757703546+00:00','2022-04-01 20:48:21.756458144+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(13,'9d3c2bde9fcef181a3141dfe59d449bf03f6779dad3580e0',1,0,0,1,'2022-06-26 13:19:48.533865696+00:00','2022-06-26 13:24:48.532164552+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(16,'00b123d52ea8f58379b740fdc5c898b02330ab9b366cb1b4',1,0,0,1,'2023-02-12 06:21:30.15120385+00:00','2023-02-12 06:26:30.140082454+00:00',NULL); -INSERT INTO pre_auth_keys 
VALUES(17,'c471ce93392c0d3040af0cf6166f4f578c3c66dd180b6e0b',1,0,0,1,'2023-02-12 06:26:55.829311638+00:00','2023-02-12 06:31:55.824701077+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(18,'fe0a438e67687540efcbbbc28c8e6c1b8ac1216f99de33d4',1,0,0,1,'2023-02-12 06:31:13.245185592+00:00','2023-02-12 06:36:13.241695106+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(19,'5a469043f1fa43ff11e54ea242dd882a81aea68f168b9a34',1,0,0,1,'2023-02-12 06:31:13.622545545+00:00','2023-02-12 06:36:13.560890824+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(20,'8b31d9a38282dfe07ffcebdfbd40db6b9e49997c93bed570',1,0,0,1,'2023-02-28 12:45:48.518939706+00:00','2023-02-28 12:50:48.445951259+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(21,'47a4216f2b4e5885d4e53a3de2ffe95521d8a708ca26d31e',1,0,0,1,'2023-02-28 12:45:48.53865321+00:00','2023-02-28 12:50:48.439132728+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(22,'1b7be83871f396e4544c8445acfc8d308dbfe29c7f0197f0',1,0,0,1,'2023-02-28 12:45:48.538806791+00:00','2023-02-28 12:50:48.445073692+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(23,'c9f364adba95c6c46c162eaa3786702805595841c0150927',1,0,0,1,'2023-05-05 08:08:16.73107293+00:00','2023-05-05 08:13:16.722921676+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(24,'a90c87e30fa22ffee39e5ce157dd22c909f2026295e3bce4',1,0,0,1,'2023-08-14 14:36:52.042138928+00:00','2023-08-14 14:41:52.038644473+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(25,'bb059cde5663619a8918bde19b9f8236085725554d9d78c2',1,0,0,1,'2023-08-14 16:15:33.722630834+00:00','2023-08-14 16:20:33.719604033+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(26,'0733b8fc67adc82644c87a95947b24c5d368c633cff92eb4',1,0,0,1,'2023-08-29 06:30:44.934900329+00:00','2023-08-29 06:35:44.931280114+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(27,'567f3a39066fd99c567d14bcc374b25cee0ad71af08b9054',1,0,0,1,'2023-11-03 17:53:45.857200883+00:00','2023-11-03 17:58:45.853742836+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(30,'8aa13c93cbef46d36c8159da51fb41a469ae04f932980890',1,0,0,1,'2024-07-29 12:24:27.140614087+00:00','2024-07-29 12:29:27.136525982+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(31,'9314b171abc433d73b8db297c1c5c65dae0be53d39a71520',1,0,0,1,'2024-08-07 18:35:12.375119763+00:00','2024-08-07 18:40:12.371590392+00:00',NULL); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -INSERT INTO api_keys VALUES(1,'S2nn85NliD',X'24326224313224626f6979756458356735466b4e316d795a44596f622e744441784835724a39306538436c7a4f51384e374f3153466479454c706261','2022-12-25 21:35:28.644697962+01:00','2023-03-22 06:22:18.724817647+01:00',NULL); -INSERT INTO api_keys VALUES(2,'1KZkpEyiMH',X'243262243132247652305238364b30727533524b5735777658706f5275684d37727173306549352f585772656a32394e4472594a7339556f55493843','2023-03-22 06:22:18.339101298+01:00','2023-04-13 09:32:24.318715268+02:00',NULL); -INSERT INTO api_keys VALUES(3,'6yBMrqvEDX',X'24326224313224676f6d4556557a2e45695a7354775663414c714252656f4e3572714d364643322f2f7466794652344d7a6c344f504953656845504b','2023-04-13 09:32:23.864995051+02:00','2023-07-12 07:32:23.45+00:00',NULL); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,"hostname" text,"user_id" integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`given_name` 
varchar(63),`forced_tags` text,`ipv4` text,`ipv6` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes VALUES(1,'mkey:6a9f06fccf0eff626075428e16c07c2e93464667a67c84dd0efadb1a74761d72','nodekey:d0d67c955181f1a9369a471496e1004678eebd93660394949848e61c7132ef9e','discokey:7667a90b1915f5cb3f725d159cd3f8bc3e46f6244df25f45d7fadc830a6d064e','web-05',1,'authKey',3,'2025-01-30 19:38:27.812993984+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["198.139.228.73:16016","[a5dd:542c:6998:997c:2d87:61b0:c760:ab6f]:56919","112.221.140.130:19743","[6520:763a:5fe2:8e88:5c8e:5b03:eeb1:5588]:20840","[7670:9084:a917:1173:70ce:2798:c805:66dc]:50297"]','2022-02-26 18:11:55.92837512+01:00','2025-01-30 19:38:27.813720595+01:00',NULL,'node001',NULL,'100.64.0.1','fd7a:115c:a1e0::1'); -INSERT INTO nodes VALUES(3,'mkey:67287ef80f62664d68f3050e68ced4e8f78e173151801529551d42fd8e951b40','nodekey:6afc027970c1ddf9b3544b8c49223dcafe436a51384f80bb341a8f8322bef6d8','discokey:86693fc9f959f11015f3ff457b8d62381a6542e2eef097e2b6bdd679175c975b','srv-24',7,'authKey',NULL,'2025-01-29 12:28:39.61495602+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2cbd:607c:5e91:ea51:38ed:557:fd0:3405]:48307","[790b:6b18:8ac1:afa8:c2d3:f6b:37f6:623f]:41499"]','2022-03-01 19:26:46.242187887+01:00','2025-01-29 12:28:39.615036412+01:00',NULL,'node003',NULL,'100.64.0.3','fd7a:115c:a1e0::3'); -INSERT INTO nodes VALUES(4,'mkey:e0630dc6f407e6064c010dbef10b982c6e9d731706cc13b10993b542e81edf47','nodekey:44f96cda7202665271958eac8a3cfb120ea72209ee88d93fac3c23aeafa3a993','discokey:48b58784e2acdad30028ceed18bc9e7bd5fae3605dbef1a40f3f38a705adef9d','desktop-44',1,'authKey',6,'2025-01-30 17:44:08.255021526+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b53b:c286:a007:e851:dc6e:8152:82a5:22ca]:25604","[ff9b:6bbd:77bc:323d:12e4:22ba:b712:80c]:37729","[1e76:19af:e817:9a33:4f83:366e:b686:6362]:39521","57.71.157.166:9055","[c199:e8ca:c1bb:8364:dde8:bf3e:9467:c050]:61218"]','2022-03-04 08:27:19.383566031+01:00','2025-01-30 17:44:08.255111155+01:00',NULL,'node004',NULL,'100.64.0.2','fd7a:115c:a1e0::2'); -INSERT INTO nodes VALUES(5,'mkey:4f7673e616b40b313ef69fd390e9c4bdb4c80f66840defb02c37d6f2fa152c5c','nodekey:91ee8b593bb0599e9583a692dabef9528423878f737f7bfdf339fc02bbe8ab47','discokey:fb3b2c151a0d277e77837dc2a5e96fe8f05bbefa770a84822689ec2c63dfe358','email-24',8,'authKey',NULL,'2025-01-30 07:11:28.440822489+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[6add:4288:4372:c9ce:7cee:4852:e6d0:9d56]:44968","[884b:b2cc:e2f3:ab06:131b:e30c:69a2:363]:32943","49.81.27.127:23839"]','2022-03-05 13:54:23.660591381+01:00','2025-01-30 07:11:28.441136461+01:00',NULL,'node005',NULL,'100.64.0.4','fd7a:115c:a1e0::4'); -INSERT INTO nodes VALUES(6,'mkey:e8a549e38c8301af3c044ff9ca1df86c814bc9c9a62bddd28ff47ebd9b113d07','nodekey:f0188064db0f31b49e1048044186d20700406a7a7b71b054507c90d5a79e7de4','discokey:30687e0c15fc480ef15f9d91011c9fa458c18370db11cdc8b72f36e814e9ece2','lt-01',1,'authkey',NULL,'2025-01-30 18:10:50.161868774+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[be53:e324:1587:4d86:d24c:8662:cd5c:a4c8]:58105","59.239.218.25:49807","[cfa4:ad2e:1410:3b78:ce03:b437:6749:ff3a]:39835"]','2022-03-21 15:52:20.739594362+01:00','2025-01-30 18:10:50.16216299+01:00',NULL,'node006',NULL,'100.64.0.5','fd7a:115c:a1e0::5'); -INSERT INTO nodes 
VALUES(7,'mkey:45c951d703f782a16a6db8b4dc872952308f86883499d0613f03318d18f3e7fe','nodekey:70837c16d309644e71e6b251bdb66c4e6a5c9b9f6e23a3b803142b8f00d1017a','discokey:ebe6dd61b6b2bafccce9d01ba0d58ea2ee93a90ae6cafb4383453fb4a17441b4','desktop-61',4,'authkey',10,'2025-01-30 15:57:13.811124896+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e5e8:7d01:63fc:4220:3b4a:df5:ebe:748c]:47047","[7513:6321:b35:b633:b39e:41ed:10a1:3fdd]:462","[8dc0:10e:95b6:9954:2614:8dcc:6564:e167]:58499","[8e6d:95be:19a9:316a:d37e:bcb6:5e05:d7f0]:36162","[5bb2:705a:7f5c:3c9:db83:956a:1ce:5291]:9510"]','2022-04-01 22:43:27.318756043+02:00','2025-01-30 15:57:13.811810941+01:00',NULL,'node007',NULL,'100.64.0.6','fd7a:115c:a1e0::6'); -INSERT INTO nodes VALUES(8,'mkey:3dff46782f13efbfe7b1a9d5d3275f48954a1abe29111b484fdb3c6edfb795bd','nodekey:080424852e9cbe3c2eaac52c1890cf485bbc2604615ec3c293b80322f17dbe63','discokey:2f6a05491ef99cafaf968c0c62b5c9d19083ef070991ee677d14b0ac9ce08377','db-70',7,'authkey',NULL,'2025-01-30 18:49:07.297878703+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["166.240.74.240:40259","[7311:ce3c:e8cf:c591:533:7132:4504:52ca]:48923","[4912:5cdb:3ed8:7c52:6b8b:2d37:59bf:cf98]:20693","[bad1:e653:831e:aa61:61e4:f824:996a:aac9]:11845","87.221.84.2:11620","71.36.112.35:10587"]','2022-04-03 09:38:46.178224968+02:00','2025-01-30 18:49:07.298161737+01:00',NULL,'node008',NULL,'100.64.0.7','fd7a:115c:a1e0::7'); -INSERT INTO nodes VALUES(9,'mkey:90c42ae0cf8652f585ac27eb51f0890b9a101b20a834eefed6ea4bd503313550','nodekey:099ecaa8aae2e406befa1c6343b85d8e14ab4f3171835637762069ccda6065d5','discokey:de9ca6bbc281fca8ccffe16d3576385a1864854a0054e74c755c73c9e66d7610','lt-30',7,'cli',NULL,'2025-01-30 17:13:53.726978413+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["32.34.123.165:35870","[1daf:274c:55b2:7413:3c1:bd3d:acc5:4eb1]:33732","182.237.82.235:32290","6.93.152.196:16212","[a38c:efed:7c57:2d0f:4348:fbba:e351:9d7]:10415"]','2022-04-09 09:43:07.09027176+02:00','2025-01-30 17:13:53.72707237+01:00',NULL,'node009',NULL,'100.64.0.8','fd7a:115c:a1e0::8'); -INSERT INTO nodes VALUES(10,'mkey:6df036352af9c6f03145a8a580466877042296ec2ce47f9cbb8d079baba8925e','nodekey:32934a036fc17e051dd08ca451cae3929e3636eb834771c350d2bffdda7e721c','discokey:0478a62f4092127fd4ca7f4fc35dcf40219bccc9ebf3a35ab77e3dc61269a55c','email-93',1,'authkey',NULL,'2025-01-26 10:25:51.849469454+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["70.180.67.60:48287","141.100.183.252:19523"]','2022-06-26 13:57:07.40762063+02:00','2025-01-26 10:25:51.849815197+01:00',NULL,'node010',NULL,'100.64.0.9','fd7a:115c:a1e0::9'); -INSERT INTO nodes VALUES(11,'mkey:9d0a61f24aedbb4580224879acfb30b61cf10b3b293667ee110b1319ae4703ef','nodekey:d3b8ad82d1b771993c45ac10037fea8926693262b4f754b845545f3b9e6760d3','discokey:b5f97e790d1452ad4f7e27bcca9611e7d8bb080c4b2860e57c46a784b038f09f','web-33',1,'authkey',13,'2025-01-30 07:37:07.518404681+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["128.57.253.0:39500","25.178.136.37:18298","86.9.216.195:48221","[c131:2e8:80a6:4825:7e56:cc6f:9dcc:37d5]:18109"]','2022-06-26 15:19:50.566498735+02:00','2025-01-30 07:37:07.519108399+01:00',NULL,'node011',NULL,'100.64.0.10','fd7a:115c:a1e0::a'); -INSERT INTO nodes VALUES(14,'mkey:a177d1fd989be953fd86bb5dc04c13feb917b019d5e0bfeda2715ea88abf7ff6','nodekey:8f6020fe3f16fb9da556b43079fb6885f53a8733318599b3b25706f6c7d99d9d','discokey:f2276c628270c0eaee429a55185611ebcb961bf07b02de854a0bc4382564eae4','email-37',1,'authkey',NULL,'2025-01-30 
06:18:09.988928832+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[ff2e:b9f0:9890:1aab:42e5:c2ac:a177:66d5]:11839","223.36.72.39:33254","[ec6b:fbd2:bbc3:162b:74ef:b955:514:2656]:46024","221.176.225.4:60304"]','2022-09-26 16:07:54.206927686+02:00','2025-01-30 06:18:09.989199262+01:00',NULL,'node014',NULL,'100.64.0.13','fd7a:115c:a1e0::d'); -INSERT INTO nodes VALUES(23,'mkey:0bd4a2eac9b6f9b77ab9ee9ea42b36b74d73e0544a2c482526b2385e8bcb0c34','nodekey:1a5c66957c54314f72ea26fed12e48b9c4149c1f640300e39d6190226bc1aaf8','discokey:9f151fb1569ef5341c8f54eec62b82cb7e5f21540ebe9271ae6449a31250123a','laptop-22',1,'authkey',23,'2025-01-30 07:37:08.458181354+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["55.247.173.145:28477","197.123.99.18:53038","102.60.187.246:41096","155.239.169.182:41163"]','2023-05-05 10:08:17.301597525+02:00','2025-01-30 07:37:08.526403731+01:00',NULL,'node023','[]','100.64.0.14','fd7a:115c:a1e0::e'); -INSERT INTO nodes VALUES(24,'mkey:972ef94d28e97680b5bb3ec45a8f7afff8d6988dc9685466fbba21ad96dd0d76','nodekey:a27328ce98ebfb68b59db23adcbd8f402f0ca88c656f8d8a37aafdd5226dd84a','discokey:2cec5f97c2c29ca3bafadbddddebdec872360cf725298c62bb824a6093c838df','db-54',1,'authkey',25,'2025-01-13 15:02:48.676721758+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["172.58.255.148:13691","31.145.94.155:5734","[e9c0:d5fd:b25c:8cc7:4c58:b666:3fc1:2814]:11543","30.125.129.134:36303","[213c:e72:d34:c59c:d4fa:8bb5:f6d3:d691]:24926","83.92.114.14:38013","223.127.253.143:5148"]','2023-08-14 18:15:34.292188686+02:00','2025-01-13 15:02:48.67679754+01:00',NULL,'node024','[]','100.64.0.15','fd7a:115c:a1e0::f'); -INSERT INTO nodes VALUES(25,'mkey:7c84b8eea26e6d6a14e08ee933806ca4fb854bd0a168ee576efc58a69523ea0f','nodekey:00cb375ac5580c5718d7879942eba4378d73bf4026ae583a72c6d72f9a2f5958','discokey:69afa6724b1c46f7947a232489f51bd458db562c1144d41ba1b6075395e62529','db-61',1,'authkey',26,'2025-01-30 07:37:08.558409068+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9599:6347:eeed:7573:eb34:676:69d5:954]:17338","87.151.18.221:6598","110.245.60.227:50528","33.141.84.140:39332"]','2023-08-29 08:30:45.518580154+02:00','2025-01-30 07:37:08.559668856+01:00',NULL,'node025','[]','100.64.0.16','fd7a:115c:a1e0::10'); -INSERT INTO nodes VALUES(26,'mkey:5a1b0920cafe0a8b54af7194baceac2029c4b12afc0e542423637f47cc4e53e7','nodekey:927c5ab7c0b8288ca36897c5f371487e9dc2200f28507029e1864086d95b5b1e','discokey:260c9a090d46eef056c0d411d4097b41aa0e9902f15cfa59906ee85f526f52b6','srv-51',1,'authkey',27,'2025-01-30 19:34:43.188756627+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[6676:38fc:5af9:3b60:41b7:e46e:bbe1:c8bf]:30898","[4aba:d70:c25d:5d4d:1bc0:d119:4749:9efc]:37269","[9320:382a:5779:2fa7:b46f:c4d5:f889:3155]:46058","[3d31:7ec5:7635:7b6c:c7d0:cf70:bb45:6922]:16454","[80d6:f9fd:44f8:58b7:2049:70d4:721:139d]:31272"]','2023-11-03 18:53:48.108033118+01:00','2025-01-30 19:34:43.189464994+01:00',NULL,'node026','[]','100.64.0.17','fd7a:115c:a1e0::11'); -INSERT INTO nodes VALUES(27,'mkey:9776d9e03b05d823cd49eae6194bd35af003ab124abcb8665e51befcbb97f7dd','nodekey:66e111175598e5b35832a9cd18ffdf2661caad4eb206a432cbb7a2916e64a3d9','discokey:0abf0367186a9e7b1fba741a8f4020db8e21fe9091aa0726ad5ebf7cb8e5634f','email-21',7,'authkey',NULL,'2025-01-30 00:13:32.682147532+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[60da:30ec:55bd:5ab3:3fe7:5492:1fc8:92f3]:36566","105.238.97.70:40193","[8c84:2d7e:5e1a:2f51:8754:aa54:b421:a19f]:60672","[5101:751a:5e00:cf46:8b50:8ba9:3c8e:4c7c]:33862","[b9fe:3630:1807:826:a4a6:e5d2:532c:bdc8]:43617","[25b5:5b72:2ecc:193c:b61e:329:81c6:5af6]:39854","214.128.115.35:43726","[436d:61b:9834:b0e0:b2ed:b452:6330:34ee]:59171","[13a0:b8ff:bebb:d48:e16e:869d:b542:531b]:17201","44.157.88.255:43418"]','2023-12-29 19:18:10.814399482+01:00','2025-01-30 00:13:32.682255545+01:00',NULL,'node027','[]','100.64.0.18','fd7a:115c:a1e0::12'); -INSERT INTO nodes VALUES(29,'mkey:6e0a32694e0a2bddbcc71d9489cbb76439f43ceffa40ab8bf070a695be120390','nodekey:0c36c35d4cba9e0b70a945ab88622097fa8659da6c7a20b0e4cbee1dea649075','discokey:ce88ad8d792775af9639e6295818cb416fb0ad24bfa6550f00722ee8095399c7','db-48',1,'authkey',NULL,'2025-01-30 11:50:53.588423938+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[60b9:155d:3b4a:1ade:51ce:e1f3:a6b8:8e5e]:42022","[ce74:4042:3405:dbfb:8e1b:2f01:5a55:6f49]:20536","[37ed:24ee:1ca9:2cdf:be79:d10a:1247:263c]:27485","199.22.246.223:36955"]','2024-06-02 09:46:39.307697473+02:00','2025-01-30 11:50:53.588974887+01:00',NULL,'node029','[]','100.64.0.19','fd7a:115c:a1e0::13'); -INSERT INTO nodes VALUES(30,'mkey:93318ff3d9920d6be01eb5166a608fd76c094851aa925e55a2725c6dc99bbb3f','nodekey:cab6cdb81dff8d8a9eccb84d2ff9561bea8840a090f3ea1ae27e94b02ccd6744','discokey:4b472fdd9a6ab4f56f46139d55d525e17dfedd61aa11a0c62a02ebd80aef354e','email-50',1,'authkey',30,'2025-01-30 07:37:11.127249774+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2f41:257d:dc1b:c709:ea6c:1ad2:ba8e:af48]:60708","[c7df:a6c9:ae51:6570:f6b6:7d2c:1803:50f0]:6218","62.26.91.22:45257","37.207.25.123:5981"]','2024-07-29 14:24:27.684620193+02:00','2025-01-30 07:37:11.129269195+01:00',NULL,'node030','[]','100.64.0.12','fd7a:115c:a1e0::c'); -INSERT INTO nodes VALUES(31,'mkey:ac01d01cd1e709938d07f3585184eef337024d32e80ceeddcb943ff8ca54cdd4','nodekey:d5906ed493f8df3fddb09fed829736f1274cb8120d6bcd33846a90460215a40d','discokey:0d4fc211bccbaa9f89e9ac73a33f4f682afe9d6ba7f722f5373cb1254787f2ef','srv-37',1,'authkey',31,'2025-01-30 19:36:44.453423774+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["49.111.96.135:35528","[abc1:b7cd:f18:6ed2:4b5a:9dce:3f08:8c17]:333","206.118.184.48:48132","7.64.160.114:11101","44.192.177.204:34267","192.170.230.144:10199"]','2024-08-07 20:35:12.944541318+02:00','2025-01-30 19:36:44.454213203+01:00',NULL,'node031','[]','100.64.0.20','fd7a:115c:a1e0::14'); -INSERT INTO nodes VALUES(32,'mkey:f4cccb12f91a1b6c5a6dc5af6d307a821f701d5de57a17bc08f5599be53adf5f','nodekey:ed136fe41759c3e7be39bf15fdd6754769ce4cbb675540ece6fbf4a87b4ecfaf','discokey:f5c1095eb1350ff6a44d708bfe56d311f444d0f8e634b123230e6ce3ba854477','desktop-95',1,'cli',NULL,'2025-01-30 18:56:16.654079824+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["183.48.47.238:43648","200.130.212.85:63469","216.16.62.70:35569","213.115.82.246:7596","195.216.27.246:54466","[c47f:35c9:41fe:6f5:a40d:3e8b:6679:411e]:12963","19.205.218.60:63516","116.228.193.194:44296","[ee98:8de3:1f5b:731c:751b:ba46:a4a7:8760]:23245","[6eb3:8c34:e6c9:4c6a:25c9:89ca:c165:354e]:65112","[7003:63bb:6429:d463:96a6:990:2bdc:217d]:4300","[d820:584e:2b46:6915:6420:3056:f76:9e33]:48885","25.167.184.15:1967","82.253.97.208:8684","[3a58:c14f:7b83:917b:d62:dbb9:7a9a:1e81]:49391","[b4c2:50f5:986b:a9e7:bc25:1cb2:ef17:3144]:38454","198.9.176.244:6032"]','2024-10-26 07:18:04.947942936+02:00','2025-01-30 
18:56:16.654364791+01:00',NULL,'node032',NULL,'100.64.0.21','fd7a:115c:a1e0::15'); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2022-02-26 18:00:56.104436744+01:00','2024-09-14 08:51:13.31135114+02:00',NULL,'user001','','',NULL,NULL,''); -INSERT INTO users VALUES(4,'2022-04-01 22:30:02.657653341+02:00','2024-09-14 08:55:27.877614108+02:00',NULL,'user004','','',NULL,NULL,''); -INSERT INTO users VALUES(7,'2024-05-05 07:08:47.915309504+02:00','2024-09-14 08:55:02.778476991+02:00',NULL,'user007','','',NULL,NULL,''); -INSERT INTO users VALUES(8,'2024-09-14 08:57:26.215082073+02:00','2024-09-14 08:57:26.215082073+02:00',NULL,'user008','','',NULL,NULL,''); -CREATE TABLE IF NOT EXISTS "routes" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -INSERT INTO routes VALUES(7,'2023-01-15 14:41:49.445054371+01:00','2023-01-29 10:12:11.959527554+01:00',NULL,14,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(8,'2023-01-15 14:41:49.452617331+01:00','2023-01-29 10:12:13.154311101+01:00',NULL,14,'::/0',1,1,0); -INSERT INTO routes VALUES(9,'2024-05-05 07:16:02.944084847+02:00','2024-05-05 13:25:02.977065366+02:00',NULL,3,'10.138.98.32/27',1,1,1); -INSERT INTO routes VALUES(10,'2024-05-05 07:16:02.95709561+02:00','2024-05-05 13:25:04.513930181+02:00',NULL,3,'192.168.208.0/20',1,1,1); -INSERT INTO routes VALUES(11,'2024-05-05 12:46:08.770263657+02:00','2024-05-05 12:46:39.012076195+02:00',NULL,27,'172.19.128.0/20',1,1,1); -INSERT INTO routes VALUES(12,'2024-05-05 12:46:08.779527998+02:00','2024-05-05 12:46:39.843489833+02:00',NULL,27,'10.68.0.0/14',1,1,1); -INSERT INTO routes VALUES(13,'2024-05-05 12:46:08.787239639+02:00','2024-05-05 12:46:40.983839193+02:00',NULL,27,'10.243.219.68/31',1,1,1); -INSERT INTO routes VALUES(14,'2024-05-05 12:46:08.794135892+02:00','2024-08-18 08:45:35.831813635+02:00',NULL,27,'10.30.155.128/25',1,1,1); -INSERT INTO routes VALUES(15,'2024-05-05 12:46:08.799975671+02:00','2024-05-05 12:46:44.333956126+02:00',NULL,27,'10.64.0.0/13',1,1,1); -INSERT INTO routes VALUES(16,'2024-05-05 12:46:08.806879988+02:00','2024-05-05 12:46:45.422824419+02:00',NULL,27,'192.168.0.0/16',1,1,1); -INSERT INTO routes VALUES(17,'2024-08-08 20:44:22.76306591+02:00','2024-12-21 12:10:07.339044415+01:00',NULL,31,'172.27.33.0/24',1,1,1); -INSERT INTO routes VALUES(18,'2024-08-08 20:47:33.469970726+02:00','2024-12-21 12:10:07.442783446+01:00',NULL,31,'10.151.196.0/23',1,1,1); -INSERT INTO routes VALUES(21,'2024-08-08 20:54:05.146666051+02:00','2024-12-21 12:10:07.538738125+01:00',NULL,31,'192.168.33.128/26',1,1,1); -INSERT INTO routes VALUES(23,'2024-08-08 20:54:05.162212208+02:00','2024-12-21 12:10:07.644714175+01:00',NULL,31,'10.69.87.96/27',1,1,1); -INSERT INTO routes VALUES(25,'2024-08-08 20:54:05.179419681+02:00','2024-12-21 12:10:07.753927883+01:00',NULL,31,'192.168.240.184/30',1,1,1); -INSERT INTO routes VALUES(26,'2024-08-08 20:54:05.186132539+02:00','2024-12-21 
12:10:07.871905187+01:00',NULL,31,'172.19.162.160/27',1,1,1); -INSERT INTO routes VALUES(28,'2024-08-08 20:54:05.202442818+02:00','2024-12-21 12:10:07.972132539+01:00',NULL,31,'172.30.190.136/30',1,1,1); -INSERT INTO routes VALUES(31,'2024-08-08 20:54:05.246698925+02:00','2024-12-21 12:10:08.150358433+01:00',NULL,31,'10.241.118.90/31',1,1,1); -INSERT INTO routes VALUES(32,'2024-08-08 20:54:05.256984635+02:00','2024-12-21 12:10:08.349521909+01:00',NULL,31,'192.168.0.0/17',1,1,1); -INSERT INTO routes VALUES(37,'2024-08-08 20:54:05.300971626+02:00','2024-12-21 12:10:08.553265285+01:00',NULL,31,'192.168.192.0/19',1,1,1); -INSERT INTO routes VALUES(43,'2024-08-08 20:54:05.383430747+02:00','2024-12-21 12:10:08.66112581+01:00',NULL,31,'172.29.254.8/29',1,1,1); -INSERT INTO routes VALUES(47,'2024-08-08 20:54:05.443181025+02:00','2024-12-21 12:10:08.826993878+01:00',NULL,31,'172.18.8.0/22',1,1,1); -INSERT INTO routes VALUES(48,'2024-08-08 20:54:05.449778605+02:00','2024-12-21 12:10:09.237117302+01:00',NULL,31,'10.169.34.250/31',1,1,1); -INSERT INTO routes VALUES(49,'2024-09-03 06:43:34.875117755+02:00','2024-12-21 12:10:09.342259317+01:00',NULL,31,'172.24.0.0/16',1,1,1); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.24.2.sql b/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.24.2.sql deleted file mode 100644 index 836865cc..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.24.2.sql +++ /dev/null @@ -1,95 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,"user_id" integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'c0e73b52c1706fcceea368ae07c19c0e91fbe1f78eff2f7b',1,0,1,0,'2022-02-26 17:02:32.568878371+00:00','2022-02-26 18:11:15.388354971+01:00',NULL); -INSERT INTO pre_auth_keys VALUES(2,'ad293159f9506a02d6de4730e5a2ddb74f9fd4033919ecc5',1,0,1,1,'2022-02-26 17:08:07.828690446+00:00','2022-02-26 18:11:18.890985216+01:00',NULL); -INSERT INTO pre_auth_keys VALUES(3,'66c7e0fbf74010ea2e153d35ffa0f3c48380ef38960d1fd1',1,0,0,1,'2022-02-26 17:11:54.149663776+00:00','2022-02-26 17:16:54.147175388+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(4,'cdbb69c88fabdc2609050c7f99e8ebdea6fcf81a50d802fc',1,0,0,1,'2022-02-26 17:15:34.160746962+00:00','2022-02-26 17:20:34.15935255+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(6,'0a99889a217a2f9cdd20082faadd5c90e72c83a47093a63a',1,0,0,1,'2022-03-04 07:27:17.535172209+00:00','2022-03-04 07:32:17.531871524+00:00',NULL); -INSERT INTO 
pre_auth_keys VALUES(10,'6e88409e3883c1a47249e428030afd4a46bdaf3adb3b74c1',4,0,0,1,'2022-04-01 20:43:21.757703546+00:00','2022-04-01 20:48:21.756458144+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(13,'9d3c2bde9fcef181a3141dfe59d449bf03f6779dad3580e0',1,0,0,1,'2022-06-26 13:19:48.533865696+00:00','2022-06-26 13:24:48.532164552+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(16,'00b123d52ea8f58379b740fdc5c898b02330ab9b366cb1b4',1,0,0,1,'2023-02-12 06:21:30.15120385+00:00','2023-02-12 06:26:30.140082454+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(17,'c471ce93392c0d3040af0cf6166f4f578c3c66dd180b6e0b',1,0,0,1,'2023-02-12 06:26:55.829311638+00:00','2023-02-12 06:31:55.824701077+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(18,'fe0a438e67687540efcbbbc28c8e6c1b8ac1216f99de33d4',1,0,0,1,'2023-02-12 06:31:13.245185592+00:00','2023-02-12 06:36:13.241695106+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(19,'5a469043f1fa43ff11e54ea242dd882a81aea68f168b9a34',1,0,0,1,'2023-02-12 06:31:13.622545545+00:00','2023-02-12 06:36:13.560890824+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(20,'8b31d9a38282dfe07ffcebdfbd40db6b9e49997c93bed570',1,0,0,1,'2023-02-28 12:45:48.518939706+00:00','2023-02-28 12:50:48.445951259+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(21,'47a4216f2b4e5885d4e53a3de2ffe95521d8a708ca26d31e',1,0,0,1,'2023-02-28 12:45:48.53865321+00:00','2023-02-28 12:50:48.439132728+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(22,'1b7be83871f396e4544c8445acfc8d308dbfe29c7f0197f0',1,0,0,1,'2023-02-28 12:45:48.538806791+00:00','2023-02-28 12:50:48.445073692+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(23,'c9f364adba95c6c46c162eaa3786702805595841c0150927',1,0,0,1,'2023-05-05 08:08:16.73107293+00:00','2023-05-05 08:13:16.722921676+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(24,'a90c87e30fa22ffee39e5ce157dd22c909f2026295e3bce4',1,0,0,1,'2023-08-14 14:36:52.042138928+00:00','2023-08-14 14:41:52.038644473+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(25,'bb059cde5663619a8918bde19b9f8236085725554d9d78c2',1,0,0,1,'2023-08-14 16:15:33.722630834+00:00','2023-08-14 16:20:33.719604033+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(26,'0733b8fc67adc82644c87a95947b24c5d368c633cff92eb4',1,0,0,1,'2023-08-29 06:30:44.934900329+00:00','2023-08-29 06:35:44.931280114+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(27,'567f3a39066fd99c567d14bcc374b25cee0ad71af08b9054',1,0,0,1,'2023-11-03 17:53:45.857200883+00:00','2023-11-03 17:58:45.853742836+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(30,'8aa13c93cbef46d36c8159da51fb41a469ae04f932980890',1,0,0,1,'2024-07-29 12:24:27.140614087+00:00','2024-07-29 12:29:27.136525982+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(31,'9314b171abc433d73b8db297c1c5c65dae0be53d39a71520',1,0,0,1,'2024-08-07 18:35:12.375119763+00:00','2024-08-07 18:40:12.371590392+00:00',NULL); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -INSERT INTO api_keys VALUES(1,'S2nn85NliD',X'243262243132247a447a433335436450323462305a332e77487845574f4837583374565134787030332f7046786e33726e69795a5336666762613569','2022-12-25 21:35:28.644697962+01:00','2023-03-22 06:22:18.724817647+01:00',NULL); -INSERT INTO api_keys VALUES(2,'1KZkpEyiMH',X'2432622431322438673039635a2f41376a6b6e444e69523531684e564f4d573665423056366179484b726c304565586f6851674373786d612e4c414b','2023-03-22 06:22:18.339101298+01:00','2023-04-13 09:32:24.318715268+02:00',NULL); -INSERT INTO api_keys 
VALUES(3,'6yBMrqvEDX',X'24326224313224764f6938636f6a677265624236424a33743559754e654742344d46674c427a4441314e38546166364578326151706b73305334544b','2023-04-13 09:32:23.864995051+02:00','2023-07-12 07:32:23.45+00:00',NULL); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,"hostname" text,"user_id" integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`given_name` varchar(63),`forced_tags` text,`ipv4` text,`ipv6` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes VALUES(1,'mkey:6a9f06fccf0eff626075428e16c07c2e93464667a67c84dd0efadb1a74761d72','nodekey:d0d67c955181f1a9369a471496e1004678eebd93660394949848e61c7132ef9e','discokey:7667a90b1915f5cb3f725d159cd3f8bc3e46f6244df25f45d7fadc830a6d064e','web-05',1,'authKey',3,'2025-02-03 20:14:56.533547475+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["198.139.228.73:16016","[a5dd:542c:6998:997c:2d87:61b0:c760:ab6f]:56919","112.221.140.130:19743","[6520:763a:5fe2:8e88:5c8e:5b03:eeb1:5588]:20840","[7670:9084:a917:1173:70ce:2798:c805:66dc]:50297"]','2022-02-26 18:11:55.92837512+01:00','2025-02-03 20:14:56.533664987+01:00',NULL,'node001',NULL,'100.64.0.1','fd7a:115c:a1e0::1'); -INSERT INTO nodes VALUES(3,'mkey:67287ef80f62664d68f3050e68ced4e8f78e173151801529551d42fd8e951b40','nodekey:6afc027970c1ddf9b3544b8c49223dcafe436a51384f80bb341a8f8322bef6d8','discokey:86693fc9f959f11015f3ff457b8d62381a6542e2eef097e2b6bdd679175c975b','srv-24',7,'authKey',NULL,'2025-01-29 12:28:39.61495602+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2cbd:607c:5e91:ea51:38ed:557:fd0:3405]:48307","[790b:6b18:8ac1:afa8:c2d3:f6b:37f6:623f]:41499"]','2022-03-01 19:26:46.242187887+01:00','2025-01-29 12:28:39.615036412+01:00',NULL,'node003',NULL,'100.64.0.3','fd7a:115c:a1e0::3'); -INSERT INTO nodes VALUES(4,'mkey:e0630dc6f407e6064c010dbef10b982c6e9d731706cc13b10993b542e81edf47','nodekey:44f96cda7202665271958eac8a3cfb120ea72209ee88d93fac3c23aeafa3a993','discokey:48b58784e2acdad30028ceed18bc9e7bd5fae3605dbef1a40f3f38a705adef9d','desktop-44',1,'authKey',6,'2025-02-03 15:13:03.806943975+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b53b:c286:a007:e851:dc6e:8152:82a5:22ca]:25604","[ff9b:6bbd:77bc:323d:12e4:22ba:b712:80c]:37729","[1e76:19af:e817:9a33:4f83:366e:b686:6362]:39521","57.71.157.166:9055","[c199:e8ca:c1bb:8364:dde8:bf3e:9467:c050]:61218"]','2022-03-04 08:27:19.383566031+01:00','2025-02-03 15:13:03.807014419+01:00',NULL,'node004',NULL,'100.64.0.2','fd7a:115c:a1e0::2'); -INSERT INTO nodes VALUES(5,'mkey:4f7673e616b40b313ef69fd390e9c4bdb4c80f66840defb02c37d6f2fa152c5c','nodekey:91ee8b593bb0599e9583a692dabef9528423878f737f7bfdf339fc02bbe8ab47','discokey:fb3b2c151a0d277e77837dc2a5e96fe8f05bbefa770a84822689ec2c63dfe358','email-24',8,'authKey',NULL,'2025-02-03 11:10:38.911072801+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[6add:4288:4372:c9ce:7cee:4852:e6d0:9d56]:44968","[884b:b2cc:e2f3:ab06:131b:e30c:69a2:363]:32943","49.81.27.127:23839"]','2022-03-05 13:54:23.660591381+01:00','2025-02-03 11:10:38.911239065+01:00',NULL,'node005',NULL,'100.64.0.4','fd7a:115c:a1e0::4'); -INSERT INTO nodes 
VALUES(6,'mkey:e8a549e38c8301af3c044ff9ca1df86c814bc9c9a62bddd28ff47ebd9b113d07','nodekey:f0188064db0f31b49e1048044186d20700406a7a7b71b054507c90d5a79e7de4','discokey:30687e0c15fc480ef15f9d91011c9fa458c18370db11cdc8b72f36e814e9ece2','lt-01',1,'authkey',NULL,'2025-01-31 17:38:20.368668048+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[be53:e324:1587:4d86:d24c:8662:cd5c:a4c8]:58105","59.239.218.25:49807","[cfa4:ad2e:1410:3b78:ce03:b437:6749:ff3a]:39835"]','2022-03-21 15:52:20.739594362+01:00','2025-01-31 17:38:20.369001909+01:00',NULL,'node006',NULL,'100.64.0.5','fd7a:115c:a1e0::5'); -INSERT INTO nodes VALUES(7,'mkey:45c951d703f782a16a6db8b4dc872952308f86883499d0613f03318d18f3e7fe','nodekey:70837c16d309644e71e6b251bdb66c4e6a5c9b9f6e23a3b803142b8f00d1017a','discokey:ebe6dd61b6b2bafccce9d01ba0d58ea2ee93a90ae6cafb4383453fb4a17441b4','desktop-61',4,'authkey',10,'2025-02-03 22:34:14.985433991+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e5e8:7d01:63fc:4220:3b4a:df5:ebe:748c]:47047","[7513:6321:b35:b633:b39e:41ed:10a1:3fdd]:462","[8dc0:10e:95b6:9954:2614:8dcc:6564:e167]:58499","[8e6d:95be:19a9:316a:d37e:bcb6:5e05:d7f0]:36162","[5bb2:705a:7f5c:3c9:db83:956a:1ce:5291]:9510"]','2022-04-01 22:43:27.318756043+02:00','2025-02-03 22:34:14.986222409+01:00',NULL,'node007',NULL,'100.64.0.6','fd7a:115c:a1e0::6'); -INSERT INTO nodes VALUES(8,'mkey:3dff46782f13efbfe7b1a9d5d3275f48954a1abe29111b484fdb3c6edfb795bd','nodekey:080424852e9cbe3c2eaac52c1890cf485bbc2604615ec3c293b80322f17dbe63','discokey:2f6a05491ef99cafaf968c0c62b5c9d19083ef070991ee677d14b0ac9ce08377','db-70',7,'authkey',NULL,'2025-02-03 18:06:33.350444688+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["166.240.74.240:40259","[7311:ce3c:e8cf:c591:533:7132:4504:52ca]:48923","[4912:5cdb:3ed8:7c52:6b8b:2d37:59bf:cf98]:20693","[bad1:e653:831e:aa61:61e4:f824:996a:aac9]:11845"]','2022-04-03 09:38:46.178224968+02:00','2025-02-03 18:06:33.350771325+01:00',NULL,'node008',NULL,'100.64.0.7','fd7a:115c:a1e0::7'); -INSERT INTO nodes VALUES(9,'mkey:05d3833026a1ecd188cc6d2b9b7f9c8a0c51baefabf20a167c85c0c6ef722669','nodekey:d281e1bea5bb2e5d2d01bf476a5ea0df5578a663211c2281dc757024f6233f90','discokey:010f415b1e41652b3eabcebdbaa1e2e62fb1412f817304c125776ecd45621fba','desktop-21',7,'cli',NULL,'2025-02-02 07:50:20.222960357+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[6102:58dc:7ffb:200a:53e:b50d:38d5:45a8]:29126","32.34.123.165:35870","[1daf:274c:55b2:7413:3c1:bd3d:acc5:4eb1]:33732","182.237.82.235:32290","6.93.152.196:16212"]','2022-04-09 09:43:07.09027176+02:00','2025-02-02 07:50:20.223029247+01:00',NULL,'node009',NULL,'100.64.0.8','fd7a:115c:a1e0::8'); -INSERT INTO nodes VALUES(10,'mkey:521fa99ee8b613be5da1bc0fa53e3a11806db23487572f13ce3a99b5750f47d8','nodekey:c2fd5b20a911b8193b7bc8e2dfe68598422759afb0cae2a1f6faf8a975133f68','discokey:99030d8be2cf14592175083adec1a15fdadb1181db721c9ce4f9e5ed485e8422','db-82',1,'authkey',NULL,'2025-02-01 10:52:14.603941206+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b12:6f03:406e:1855:3cf9:8eb6:e610:adbf]:18034","70.180.67.60:48287"]','2022-06-26 13:57:07.40762063+02:00','2025-02-01 10:52:14.604222347+01:00',NULL,'node010',NULL,'100.64.0.9','fd7a:115c:a1e0::9'); -INSERT INTO nodes VALUES(11,'mkey:ad3363557d233776dee0fa50f7091eaa3b6b14b8d3b0243f0cc0f61662a01769','nodekey:f7a297c8b1bde8fea7e03e10a583e168a37a5c72d9ba3c39129536d4d80e9e6d','discokey:c4af50cd6bb472f0bc2bc3ded6336fb76708a902ba0032cdd6cb90eb819d9db1','laptop-60',1,'authkey',13,'2025-02-04 
01:00:03.417141337+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[dba:4050:a7df:10:aa7c:ee6b:2610:5a14]:31519","128.57.253.0:39500","25.178.136.37:18298","86.9.216.195:48221"]','2022-06-26 15:19:50.566498735+02:00','2025-02-04 01:00:03.417910968+01:00',NULL,'node011',NULL,'100.64.0.10','fd7a:115c:a1e0::a'); -INSERT INTO nodes VALUES(14,'mkey:224a190ba40d8ebf84ac065a47d4e370d127151dc5b608d2f062855e67951a5d','nodekey:a81c40b004ca1a472f2d2240ea3b86f55d5f52252254d5fab7643ba0e059b684','discokey:31644ea51b89d09e6568b6f78b604a62ee7f4a5e7c0a56f194717cdcc43a6384','email-58',1,'authkey',NULL,'2025-02-01 06:18:54.156166977+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["178.84.150.172:60924","[171f:b843:ff2e:b9f0:9890:1aac:42e5:c2ab]:47842","174.7.154.174:51720"]','2022-09-26 16:07:54.206927686+02:00','2025-02-01 06:18:54.156467144+01:00',NULL,'node014',NULL,'100.64.0.13','fd7a:115c:a1e0::d'); -INSERT INTO nodes VALUES(23,'mkey:bafb298852c932db46635dc6ff11a41910936907bbc6bcc5dd2d8c485c2e5b0c','nodekey:bb4a7ef9aa1a821da2f76215bcb534c3a6a7658432dafaf88a1aecff0e75f782','discokey:f0e92a4c7368ee694d70928c8c4e14b207fd7f69dfead643ce4f616cdd7a1c13','web-96',1,'authkey',23,'2025-02-01 06:44:19.76979225+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["21.139.115.42:33169","211.231.253.41:1096","171.127.122.217:28477","197.123.99.18:53038"]','2023-05-05 10:08:17.301597525+02:00','2025-02-01 06:44:19.770530333+01:00',NULL,'node023','[]','100.64.0.14','fd7a:115c:a1e0::e'); -INSERT INTO nodes VALUES(24,'mkey:7f865473ff4681875a1b31d7160ef2f4ba3c59cb9cfd815a6085dd266e1027a6','nodekey:86b906c4f3540ec63474279bf542136b24fe741f7944ca88c3bd3cea132906b7','discokey:c2f69d5e677759839a7761f43eb250d4faa1873ddbc53f3e4f1bc4ef98ab2041','db-96',1,'authkey',25,'2025-01-13 15:02:48.676721758+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f41f:b9e9:9205:210a:4ff9:ab34:57fd:3148]:21852","[3d49:412:9781:b399:f9a5:587e:bd2e:a579]:3775","[867:a7e:7e04:bfbb:6286:f95f:1abd:9b5a]:63765","147.22.45.153:11543","30.125.129.134:36303","[213c:e72:d34:c59c:d4fa:8bb5:f6d3:d691]:24926","83.92.114.14:38013"]','2023-08-14 18:15:34.292188686+02:00','2025-01-13 15:02:48.67679754+01:00',NULL,'node024','[]','100.64.0.15','fd7a:115c:a1e0::f'); -INSERT INTO nodes VALUES(25,'mkey:1be02a7f57e51fa42c3af087a1464b33d33ad2802979d102c8b10ac146091537','nodekey:1fc37765c4fbcf068f0dd37f2b98d4d1fe35fc7c9e20ea8ed2be4b52b97481b4','discokey:9ea68ef044c00841bb6187dafedc8811caa4b12d123e00119eba0462b9dac6b2','db-52',1,'authkey',26,'2025-02-01 06:27:05.583933643+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f538:9546:a467:a160:1e3f:ad81:b5aa:f88d]:56031","14.178.248.81:54186","[dcb5:9eee:2075:acba:1a31:1669:21dd:744b]:48312","91.212.243.143:50528","33.141.84.140:39332"]','2023-08-29 08:30:45.518580154+02:00','2025-02-01 06:27:05.584696672+01:00',NULL,'node025','[]','100.64.0.16','fd7a:115c:a1e0::10'); -INSERT INTO nodes VALUES(26,'mkey:5a1b0920cafe0a8b54af7194baceac2029c4b12afc0e542423637f47cc4e53e7','nodekey:927c5ab7c0b8288ca36897c5f371487e9dc2200f28507029e1864086d95b5b1e','discokey:260c9a090d46eef056c0d411d4097b41aa0e9902f15cfa59906ee85f526f52b6','srv-51',1,'authkey',27,'2025-02-03 23:37:12.671308191+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[6676:38fc:5af9:3b60:41b7:e46e:bbe1:c8bf]:30898","[4aba:d70:c25d:5d4d:1bc0:d119:4749:9efc]:37269","[9320:382a:5779:2fa7:b46f:c4d5:f889:3155]:46058","[3d31:7ec5:7635:7b6c:c7d0:cf70:bb45:6922]:16454"]','2023-11-03 18:53:48.108033118+01:00','2025-02-03 
23:37:12.672006599+01:00',NULL,'node026','[]','100.64.0.17','fd7a:115c:a1e0::11'); -INSERT INTO nodes VALUES(27,'mkey:2ce8df9ae8623e48e779b951eab64134bf2806d747f1fb4d7669634fa48d9f22','nodekey:3e85d212d118f58a3a72de4d251ab9414dbe682ac3ebb36ff1283eb6fd438c7a','discokey:9e21249dc8a197f51dbd3916f74c977e222cb96639c4f17abb1856044c4593a1','srv-15',7,'authkey',NULL,'2025-02-02 21:00:46.231324685+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["169.217.181.115:16351","[60da:30ec:55bd:5ab3:3fe7:5492:1fc8:92f3]:36566","105.238.97.70:40193","[8c84:2d7e:5e1a:2f51:8754:aa54:b421:a19f]:60672","[5101:751a:5e00:cf46:8b50:8ba9:3c8e:4c7c]:33862","[b9fe:3630:1807:826:a4a6:e5d2:532c:bdc8]:43617","[25b5:5b72:2ecc:193c:b61e:329:81c6:5af6]:39854","214.128.115.35:43726","[436d:61b:9834:b0e0:b2ed:b452:6330:34ee]:59171"]','2023-12-29 19:18:10.814399482+01:00','2025-02-02 21:00:46.231636043+01:00',NULL,'node027','[]','100.64.0.18','fd7a:115c:a1e0::12'); -INSERT INTO nodes VALUES(29,'mkey:2e4b2ebd78830b52e6d3bc2df58a4ae2e650aa2d45d05e556b6f412ec26797b3','nodekey:b12b04927b21983c33d4e6343dbc3ca854cdddea3d53f116c79d2029cb017a36','discokey:9cd971dee5320ec78268f1c672aeea1f7ab1e88d1d97a27d1f6fd08154f2dc9b','srv-55',1,'authkey',NULL,'2025-02-03 21:59:46.11595827+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c4c8:bd7a:122a:a5f6:664b:2744:fa5:2148]:38679","[1b3d:ef50:26cd:7cdc:d63b:eafd:a9d6:9432]:33036","[3b4a:1ade:51ce:e1f2:a6b8:8e5f:76d7:a0ac]:49522","141.1.118.254:20536"]','2024-06-02 09:46:39.307697473+02:00','2025-02-03 21:59:46.116259679+01:00',NULL,'node029','[]','100.64.0.19','fd7a:115c:a1e0::13'); -INSERT INTO nodes VALUES(30,'mkey:a186e5e70f59e2fd804383e339a670ed2c312ec724ed0a389fe032921f697416','nodekey:2695f24b44d79bb02448dca69cd123fb5ff88dfccb27df9f9f57fb161ccb8697','discokey:726229d0e17c4a9253ae9c3fea0917aa2b5ace2d5b7bb6b17702502d9d22cae3','srv-45',1,'authkey',30,'2025-02-01 06:45:41.071375563+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[cec4:d571:6561:992:e89f:e420:6681:42b5]:43741","96.163.138.14:53535","[7692:18d5:2f41:257d:dc1b:c70a:ea6c:1ad1]:56437","203.8.194.84:16031","54.25.61.205:2000"]','2024-07-29 14:24:27.684620193+02:00','2025-02-01 06:45:41.072086485+01:00',NULL,'node030','[]','100.64.0.12','fd7a:115c:a1e0::c'); -INSERT INTO nodes VALUES(31,'mkey:7b374d2331a08697dc07b40066b707444d63dc02ec83a683e3cfb75f507e7219','nodekey:972d7a74740a75e43160bfa760aebfc273f3cab325abf9fdaa05d188615af553','discokey:9eaa7700bbc80de6cf2a7e115ff486727bc0bf063ccb2df44350c1a3e360fe90','lt-91',1,'authkey',31,'2025-02-03 20:18:00.015726064+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c7b3:52c3:f5c3:87c:f5ed:213:85f4:44c5]:18304","49.111.96.135:35528","[abc1:b7cd:f18:6ed2:4b5a:9dce:3f08:8c17]:333","206.118.184.48:48132","7.64.160.114:11101"]','2024-08-07 20:35:12.944541318+02:00','2025-02-03 20:18:00.016420173+01:00',NULL,'node031','[]','100.64.0.20','fd7a:115c:a1e0::14'); -INSERT INTO nodes VALUES(32,'mkey:8010732f3bbb0f58f3002b645258dbbc7d6c5cfebf8a485685a84da9b30dcf00','nodekey:d34dac3b348cd2d100ba36a85b42542cfe1dc74d51197caf50b138fea1401168','discokey:e28e9f435da77357942130f3923bb584773e6812d8d27fdc6dce20cc0cc0f752','lt-44',1,'cli',NULL,'2025-02-04 01:12:27.847003546+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["85.212.234.244:43593","161.5.250.27:14020","[cd55:bf01:5540:c30:3981:7f71:a471:4993]:25285","24.16.62.70:35569","213.115.82.246:7596","195.216.27.246:54466","[c47f:35c9:41fe:6f5:a40d:3e8b:6679:411e]:12963","19.205.218.60:63516","116.228.193.194:44296","[ee98:8de3:1f5b:731c:751b:ba46:a4a7:8760]:23245","[6eb3:8c34:e6c9:4c6a:25c9:89ca:c165:354e]:65112","[7003:63bb:6429:d463:96a6:990:2bdc:217d]:4300","[d820:584e:2b46:6915:6420:3056:f76:9e33]:48885"]','2024-10-26 07:18:04.947942936+02:00','2025-02-04 01:12:27.847267263+01:00',NULL,'node032',NULL,'100.64.0.21','fd7a:115c:a1e0::15'); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2022-02-26 18:00:56.104436744+01:00','2024-09-14 08:51:13.31135114+02:00',NULL,'user001','','',NULL,NULL,''); -INSERT INTO users VALUES(4,'2022-04-01 22:30:02.657653341+02:00','2024-09-14 08:55:27.877614108+02:00',NULL,'user004','','',NULL,NULL,''); -INSERT INTO users VALUES(7,'2024-05-05 07:08:47.915309504+02:00','2024-09-14 08:55:02.778476991+02:00',NULL,'user007','','',NULL,NULL,''); -INSERT INTO users VALUES(8,'2024-09-14 08:57:26.215082073+02:00','2024-09-14 08:57:26.215082073+02:00',NULL,'user008','','',NULL,NULL,''); -CREATE TABLE IF NOT EXISTS "routes" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -INSERT INTO routes VALUES(7,'2023-01-15 14:41:49.445054371+01:00','2023-01-29 10:12:11.959527554+01:00',NULL,14,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(8,'2023-01-15 14:41:49.452617331+01:00','2023-01-29 10:12:13.154311101+01:00',NULL,14,'::/0',1,1,0); -INSERT INTO routes VALUES(9,'2024-05-05 07:16:02.944084847+02:00','2024-05-05 13:25:02.977065366+02:00',NULL,3,'10.138.98.32/27',1,1,1); -INSERT INTO routes VALUES(10,'2024-05-05 07:16:02.95709561+02:00','2024-05-05 13:25:04.513930181+02:00',NULL,3,'192.168.208.0/20',1,1,1); -INSERT INTO routes VALUES(11,'2024-05-05 12:46:08.770263657+02:00','2024-05-05 12:46:39.012076195+02:00',NULL,27,'172.19.128.0/20',1,1,1); -INSERT INTO routes VALUES(12,'2024-05-05 12:46:08.779527998+02:00','2024-05-05 12:46:39.843489833+02:00',NULL,27,'10.68.0.0/14',1,1,1); -INSERT INTO routes VALUES(13,'2024-05-05 12:46:08.787239639+02:00','2024-05-05 12:46:40.983839193+02:00',NULL,27,'10.243.219.68/31',1,1,1); -INSERT INTO routes VALUES(14,'2024-05-05 12:46:08.794135892+02:00','2024-08-18 08:45:35.831813635+02:00',NULL,27,'10.30.155.128/25',1,1,1); -INSERT INTO routes VALUES(15,'2024-05-05 12:46:08.799975671+02:00','2024-05-05 12:46:44.333956126+02:00',NULL,27,'10.64.0.0/13',1,1,1); -INSERT INTO routes VALUES(16,'2024-05-05 12:46:08.806879988+02:00','2024-05-05 12:46:45.422824419+02:00',NULL,27,'192.168.0.0/16',1,1,1); -INSERT INTO routes VALUES(17,'2024-08-08 20:44:22.76306591+02:00','2024-12-21 12:10:07.339044415+01:00',NULL,31,'172.27.33.0/24',1,1,1); -INSERT INTO routes VALUES(18,'2024-08-08 20:47:33.469970726+02:00','2024-12-21 
12:10:07.442783446+01:00',NULL,31,'10.151.196.0/23',1,1,1); -INSERT INTO routes VALUES(21,'2024-08-08 20:54:05.146666051+02:00','2024-12-21 12:10:07.538738125+01:00',NULL,31,'192.168.33.128/26',1,1,1); -INSERT INTO routes VALUES(23,'2024-08-08 20:54:05.162212208+02:00','2024-12-21 12:10:07.644714175+01:00',NULL,31,'10.69.87.96/27',1,1,1); -INSERT INTO routes VALUES(25,'2024-08-08 20:54:05.179419681+02:00','2024-12-21 12:10:07.753927883+01:00',NULL,31,'192.168.240.184/30',1,1,1); -INSERT INTO routes VALUES(26,'2024-08-08 20:54:05.186132539+02:00','2024-12-21 12:10:07.871905187+01:00',NULL,31,'172.19.162.160/27',1,1,1); -INSERT INTO routes VALUES(28,'2024-08-08 20:54:05.202442818+02:00','2024-12-21 12:10:07.972132539+01:00',NULL,31,'172.30.190.136/30',1,1,1); -INSERT INTO routes VALUES(31,'2024-08-08 20:54:05.246698925+02:00','2024-12-21 12:10:08.150358433+01:00',NULL,31,'10.241.118.90/31',1,1,1); -INSERT INTO routes VALUES(32,'2024-08-08 20:54:05.256984635+02:00','2024-12-21 12:10:08.349521909+01:00',NULL,31,'192.168.0.0/17',1,1,1); -INSERT INTO routes VALUES(37,'2024-08-08 20:54:05.300971626+02:00','2024-12-21 12:10:08.553265285+01:00',NULL,31,'192.168.192.0/19',1,1,1); -INSERT INTO routes VALUES(43,'2024-08-08 20:54:05.383430747+02:00','2024-12-21 12:10:08.66112581+01:00',NULL,31,'172.29.254.8/29',1,1,1); -INSERT INTO routes VALUES(47,'2024-08-08 20:54:05.443181025+02:00','2024-12-21 12:10:08.826993878+01:00',NULL,31,'172.18.8.0/22',1,1,1); -INSERT INTO routes VALUES(48,'2024-08-08 20:54:05.449778605+02:00','2024-12-21 12:10:09.237117302+01:00',NULL,31,'10.169.34.250/31',1,1,1); -INSERT INTO routes VALUES(49,'2024-09-03 06:43:34.875117755+02:00','2024-12-21 12:10:09.342259317+01:00',NULL,31,'172.24.0.0/16',1,1,1); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.25.0.sql b/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.25.0.sql deleted file mode 100644 index b2542d00..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.25.0.sql +++ /dev/null @@ -1,98 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -INSERT INTO migrations VALUES('202501311657'); -INSERT INTO migrations VALUES('202502070949'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,"user_id" integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'c0e73b52c1706fcceea368ae07c19c0e91fbe1f78eff2f7b',1,0,1,0,'2022-02-26 17:02:32.568878371+00:00','2022-02-26 18:11:15.388354971+01:00',NULL); -INSERT INTO pre_auth_keys 
VALUES(2,'ad293159f9506a02d6de4730e5a2ddb74f9fd4033919ecc5',1,0,1,1,'2022-02-26 17:08:07.828690446+00:00','2022-02-26 18:11:18.890985216+01:00',NULL); -INSERT INTO pre_auth_keys VALUES(3,'66c7e0fbf74010ea2e153d35ffa0f3c48380ef38960d1fd1',1,0,0,1,'2022-02-26 17:11:54.149663776+00:00','2022-02-26 17:16:54.147175388+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(4,'cdbb69c88fabdc2609050c7f99e8ebdea6fcf81a50d802fc',1,0,0,1,'2022-02-26 17:15:34.160746962+00:00','2022-02-26 17:20:34.15935255+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(6,'0a99889a217a2f9cdd20082faadd5c90e72c83a47093a63a',1,0,0,1,'2022-03-04 07:27:17.535172209+00:00','2022-03-04 07:32:17.531871524+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(10,'6e88409e3883c1a47249e428030afd4a46bdaf3adb3b74c1',4,0,0,1,'2022-04-01 20:43:21.757703546+00:00','2022-04-01 20:48:21.756458144+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(13,'9d3c2bde9fcef181a3141dfe59d449bf03f6779dad3580e0',1,0,0,1,'2022-06-26 13:19:48.533865696+00:00','2022-06-26 13:24:48.532164552+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(16,'00b123d52ea8f58379b740fdc5c898b02330ab9b366cb1b4',1,0,0,1,'2023-02-12 06:21:30.15120385+00:00','2023-02-12 06:26:30.140082454+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(17,'c471ce93392c0d3040af0cf6166f4f578c3c66dd180b6e0b',1,0,0,1,'2023-02-12 06:26:55.829311638+00:00','2023-02-12 06:31:55.824701077+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(18,'fe0a438e67687540efcbbbc28c8e6c1b8ac1216f99de33d4',1,0,0,1,'2023-02-12 06:31:13.245185592+00:00','2023-02-12 06:36:13.241695106+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(19,'5a469043f1fa43ff11e54ea242dd882a81aea68f168b9a34',1,0,0,1,'2023-02-12 06:31:13.622545545+00:00','2023-02-12 06:36:13.560890824+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(20,'8b31d9a38282dfe07ffcebdfbd40db6b9e49997c93bed570',1,0,0,1,'2023-02-28 12:45:48.518939706+00:00','2023-02-28 12:50:48.445951259+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(21,'47a4216f2b4e5885d4e53a3de2ffe95521d8a708ca26d31e',1,0,0,1,'2023-02-28 12:45:48.53865321+00:00','2023-02-28 12:50:48.439132728+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(22,'1b7be83871f396e4544c8445acfc8d308dbfe29c7f0197f0',1,0,0,1,'2023-02-28 12:45:48.538806791+00:00','2023-02-28 12:50:48.445073692+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(23,'c9f364adba95c6c46c162eaa3786702805595841c0150927',1,0,0,1,'2023-05-05 08:08:16.73107293+00:00','2023-05-05 08:13:16.722921676+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(24,'a90c87e30fa22ffee39e5ce157dd22c909f2026295e3bce4',1,0,0,1,'2023-08-14 14:36:52.042138928+00:00','2023-08-14 14:41:52.038644473+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(25,'bb059cde5663619a8918bde19b9f8236085725554d9d78c2',1,0,0,1,'2023-08-14 16:15:33.722630834+00:00','2023-08-14 16:20:33.719604033+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(26,'0733b8fc67adc82644c87a95947b24c5d368c633cff92eb4',1,0,0,1,'2023-08-29 06:30:44.934900329+00:00','2023-08-29 06:35:44.931280114+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(27,'567f3a39066fd99c567d14bcc374b25cee0ad71af08b9054',1,0,0,1,'2023-11-03 17:53:45.857200883+00:00','2023-11-03 17:58:45.853742836+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(30,'8aa13c93cbef46d36c8159da51fb41a469ae04f932980890',1,0,0,1,'2024-07-29 12:24:27.140614087+00:00','2024-07-29 12:29:27.136525982+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(31,'9314b171abc433d73b8db297c1c5c65dae0be53d39a71520',1,0,0,1,'2024-08-07 18:35:12.375119763+00:00','2024-08-07 18:40:12.371590392+00:00',NULL); -INSERT 
INTO pre_auth_keys VALUES(32,'6a8fd483387734664f2a911545150a9bc0d9f12a744fe913',7,0,0,0,'2025-02-23 06:56:50.319087142+00:00','2025-02-23 07:56:50.31726796+00:00','[]'); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -INSERT INTO api_keys VALUES(1,'S2nn85NliD',X'2432622431322471797064584939764e3848783749615457324d53536534503451464b39573657556d417370354c396f38427037477538554d6d4d57','2022-12-25 21:35:28.644697962+01:00','2023-03-22 06:22:18.724817647+01:00',NULL); -INSERT INTO api_keys VALUES(2,'1KZkpEyiMH',X'243262243132247055452f5068466c77784e7a69596443565139666d75416d426a614978717230764d5253613459455254674c336556795051367669','2023-03-22 06:22:18.339101298+01:00','2023-04-13 09:32:24.318715268+02:00',NULL); -INSERT INTO api_keys VALUES(3,'6yBMrqvEDX',X'243262243132244838672e4c4b6330506c526266486e497868645779657737782f42334c433831335566634c4e627432497979556e4f76737179614f','2023-04-13 09:32:23.864995051+02:00','2023-07-12 07:32:23.45+00:00',NULL); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,"hostname" text,"user_id" integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`given_name` varchar(63),`forced_tags` text,`ipv4` text,`ipv6` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes VALUES(1,'mkey:13adef46b653fc9d24415b242714d08a92605cb2d53355a03cda8bae483cc93d','nodekey:aca177c7ceeadf615c5d28bd894ddad5133fe0e99e6c5f2a4af5403daf74aac6','discokey:83f177b585a16eab54768d0284553d62d3f46ef8a21417834039525dfd9ac7dd','desktop-00',1,'authKey',3,'2025-02-25 11:05:11.775075937+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[ce6f:3f5b:da2f:8382:f3a1:e2d4:1c47:fe5e]:25604","[ff9b:6bbd:77bc:323d:12e4:22ba:b712:80c]:37729","[1e76:19af:e817:9a33:4f83:366e:b686:6362]:39521","57.71.157.166:9055","[c199:e8ca:c1bb:8364:dde8:bf3e:9467:c050]:61218"]','2022-02-26 18:11:55.92837512+01:00','2025-02-25 11:05:11.775803871+01:00',NULL,'node001',NULL,'100.64.0.1','fd7a:115c:a1e0::1'); -INSERT INTO nodes VALUES(3,'mkey:4f7673e616b40b313ef69fd390e9c4bdb4c80f66840defb02c37d6f2fa152c5c','nodekey:91ee8b593bb0599e9583a692dabef9528423878f737f7bfdf339fc02bbe8ab47','discokey:fb3b2c151a0d277e77837dc2a5e96fe8f05bbefa770a84822689ec2c63dfe358','email-24',7,'authKey',NULL,'2025-02-17 20:00:56.633030391+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[6add:4288:4372:c9ce:7cee:4852:e6d0:9d56]:44968","[884b:b2cc:e2f3:ab06:131b:e30c:69a2:363]:32943"]','2022-03-01 19:26:46.242187887+01:00','2025-02-17 20:00:56.633298676+01:00',NULL,'node003',NULL,'100.64.0.3','fd7a:115c:a1e0::3'); -INSERT INTO nodes VALUES(4,'mkey:3a04f3ec6375cf82c44ce30f04f10f753e2785864c35af3838883ed175e220df','nodekey:8d53294925a1cfb59b2bb60ca7846119a841c4dec2bd4dd1f47750815f8ba332','discokey:98216fd248ca55f2b099d83c9ff8f85dd3005b64b5b52a283604ede6ea587ad2','email-39',1,'authKey',6,'2025-02-24 06:38:04.606923061+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[f906:29af:dabc:6ef0:a2b3:64a1:c36c:ac7a]:24154","[be53:e324:1587:4d86:d24c:8662:cd5c:a4c8]:58105","59.239.218.25:49807","[cfa4:ad2e:1410:3b78:ce03:b437:6749:ff3a]:39835"]','2022-03-04 08:27:19.383566031+01:00','2025-02-24 06:38:04.607016538+01:00',NULL,'node004',NULL,'100.64.0.2','fd7a:115c:a1e0::2'); -INSERT INTO nodes VALUES(5,'mkey:45c951d703f782a16a6db8b4dc872952308f86883499d0613f03318d18f3e7fe','nodekey:70837c16d309644e71e6b251bdb66c4e6a5c9b9f6e23a3b803142b8f00d1017a','discokey:ebe6dd61b6b2bafccce9d01ba0d58ea2ee93a90ae6cafb4383453fb4a17441b4','desktop-61',8,'authKey',NULL,'2025-02-23 12:50:07.992065346+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e5e8:7d01:63fc:4220:3b4a:df5:ebe:748c]:47047","[7513:6321:b35:b633:b39e:41ed:10a1:3fdd]:462","[8dc0:10e:95b6:9954:2614:8dcc:6564:e167]:58499","[8e6d:95be:19a9:316a:d37e:bcb6:5e05:d7f0]:36162","[5bb2:705a:7f5c:3c9:db83:956a:1ce:5291]:9510","37.24.123.232:17607"]','2022-03-05 13:54:23.660591381+01:00','2025-02-23 12:50:07.99213101+01:00',NULL,'node005',NULL,'100.64.0.4','fd7a:115c:a1e0::4'); -INSERT INTO nodes VALUES(6,'mkey:41996a93d33f8d668b8138c76f7f6ade6a105ea47d72342a14d470da88afd9f7','nodekey:3d3bd1e69adec0d7fa68d8a835bc25d381cf6b371077941e98362efd62aa38cd','discokey:70213a9865b98fb97c6dc05dff42b6260ecfb1b0b38e9de3fcf1199f36ea2ea0','laptop-78',1,'authkey',NULL,'2025-02-23 08:38:50.091332342+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[533:7131:4504:52ca:972e:c82:4ea1:ace2]:58915","58.226.203.77:32176","110.30.79.130:11845"]','2022-03-21 15:52:20.739594362+01:00','2025-02-23 08:38:50.091690948+01:00',NULL,'node006',NULL,'100.64.0.5','fd7a:115c:a1e0::5'); -INSERT INTO nodes VALUES(7,'mkey:05d3833026a1ecd188cc6d2b9b7f9c8a0c51baefabf20a167c85c0c6ef722669','nodekey:d281e1bea5bb2e5d2d01bf476a5ea0df5578a663211c2281dc757024f6233f90','discokey:010f415b1e41652b3eabcebdbaa1e2e62fb1412f817304c125776ecd45621fba','desktop-21',4,'authkey',10,'2025-02-25 10:09:14.062758838+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[6102:58dc:7ffb:200a:53e:b50d:38d5:45a8]:29126","32.34.123.165:35870","[1daf:274c:55b2:7413:3c1:bd3d:acc5:4eb1]:33732","182.237.82.235:32290","6.93.152.196:16212"]','2022-04-01 22:43:27.318756043+02:00','2025-02-25 10:09:14.063518972+01:00',NULL,'node007',NULL,'100.64.0.6','fd7a:115c:a1e0::6'); -INSERT INTO nodes VALUES(8,'mkey:521fa99ee8b613be5da1bc0fa53e3a11806db23487572f13ce3a99b5750f47d8','nodekey:c2fd5b20a911b8193b7bc8e2dfe68598422759afb0cae2a1f6faf8a975133f68','discokey:99030d8be2cf14592175083adec1a15fdadb1181db721c9ce4f9e5ed485e8422','db-82',7,'authkey',NULL,'2025-02-24 18:51:08.679119377+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b12:6f03:406e:1855:3cf9:8eb6:e610:adbf]:18034","70.180.67.60:48287","141.100.183.252:19523","[e416:9931:8e9:6fcb:6d36:2e45:fe82:5ffe]:30261","215.10.187.249:19488","[bfe7:ebf6:e338:79a6:dba:4051:a7df:10]:31519","128.57.253.0:39500"]','2022-04-03 09:38:46.178224968+02:00','2025-02-24 18:51:08.679420095+01:00',NULL,'node008',NULL,'100.64.0.7','fd7a:115c:a1e0::7'); -INSERT INTO nodes VALUES(9,'mkey:74ffa28c2f79c597df6d2584dc81317a09559002ef1c124d79c0961739f04628','nodekey:5df40a5c1b1c89b50f9c3a49445012ab5fe70691a639f09405cfabb7c4e57f62','discokey:40c0492a46b45647cec73ec454d030beb08e2efa406a1505937abc3cf3aa4886','web-41',7,'cli',NULL,'2025-02-24 19:06:53.63097265+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[235e:a50d:c131:2e8:80a6:4826:7e56:cc6e]:43129","[3486:5bc7:ada8:8da1:9df9:3cc6:d018:ee8b]:65112","43.182.159.11:27315","[12a4:b567:a06e:b675:a878:2f42:e5e1:429f]:60924","[171f:b843:ff2e:b9f0:9890:1aac:42e5:c2ab]:47842"]','2022-04-09 09:43:07.09027176+02:00','2025-02-24 19:06:53.631097606+01:00',NULL,'node009',NULL,'100.64.0.8','fd7a:115c:a1e0::8'); -INSERT INTO nodes VALUES(10,'mkey:381acb630a8426dc0e69e99cb0fc8994f8df14864459adafb8c33c856481277c','nodekey:8ce541404b26c374b025803ff060a5752800114358b0520bb5370cfe4b7b7ab9','discokey:c531655c46cf50d2c752e28c7d52f1e20795936863cc4c9074f1c1eb7230cc75','laptop-07',1,'authkey',NULL,'2025-02-23 08:14:09.262507807+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[571:ac10:e700:fdbd:6084:e1ec:7273:7f9f]:46051","21.139.115.42:33169"]','2022-06-26 13:57:07.40762063+02:00','2025-02-23 08:14:09.26323988+01:00',NULL,'node010',NULL,'100.64.0.9','fd7a:115c:a1e0::9'); -INSERT INTO nodes VALUES(11,'mkey:ec475b374671562006704de6f1be902a3d75079e8c9a3b52e6c3d6dc37d3086a','nodekey:655232f4f4205ddc98661144bd7a05a975f44a3f01ed22e18927e6265329104a','discokey:b30f00fbda4b3a7ed66ff30ac8c41c0c1c04a775756543f27155e1b93f947eca','db-05',1,'authkey',13,'2025-02-24 23:14:30.021124861+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f2e:fd97:b513:b737:33f3:dd2b:df4:a90e]:41096","155.239.169.182:41163","[c908:b81c:b4c8:38d3:e311:b3f6:3067:227c]:11541","[f41f:b9e9:9205:210a:4ff9:ab34:57fd:3148]:21852","[3d49:412:9781:b399:f9a5:587e:bd2e:a579]:3775"]','2022-06-26 15:19:50.566498735+02:00','2025-02-24 23:14:30.021972171+01:00',NULL,'node011',NULL,'100.64.0.10','fd7a:115c:a1e0::a'); -INSERT INTO nodes VALUES(14,'mkey:624da83bf38200687af1e4771f9392a903d156e9b9495bafb84adb021e0fac4d','nodekey:13984780af686ba7da0433dd5c0dd04157b929f923291b809cca20e0fcf9de14','discokey:f9ebece3311c35c7efd854261a8505261d347db67e3bcb0f38bdb0631dca6d89','laptop-03',1,'authkey',NULL,'2025-02-23 08:19:10.390225858+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["30.125.129.134:36303","[213c:e72:d34:c59c:d4fa:8bb5:f6d3:d691]:24926","83.92.114.14:38013"]','2022-09-26 16:07:54.206927686+02:00','2025-02-23 08:19:10.39053916+01:00',NULL,'node014',NULL,'100.64.0.13','fd7a:115c:a1e0::d'); -INSERT INTO nodes VALUES(23,'mkey:1be02a7f57e51fa42c3af087a1464b33d33ad2802979d102c8b10ac146091537','nodekey:1fc37765c4fbcf068f0dd37f2b98d4d1fe35fc7c9e20ea8ed2be4b52b97481b4','discokey:9ea68ef044c00841bb6187dafedc8811caa4b12d123e00119eba0462b9dac6b2','db-52',1,'authkey',23,'2025-02-10 07:17:23.651226796+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f538:9546:a467:a160:1e3f:ad81:b5aa:f88d]:56031","14.178.248.81:54186","[dcb5:9eee:2075:acba:1a31:1669:21dd:744b]:48312","91.212.243.143:50528"]','2023-05-05 10:08:17.301597525+02:00','2025-02-10 07:17:23.651369947+01:00',NULL,'node023','[]','100.64.0.14','fd7a:115c:a1e0::e'); -INSERT INTO nodes VALUES(24,'mkey:a0099c574ba14cdf6db0a2591d2874960f00cb106129315d4e406b605c288353','nodekey:9cef74269ce1f9ee2648676a8298a169da6f5e2ef84ce0699559e4627279338c','discokey:dd188ff283fcda7d3e0fe5fd611b6e959fde88c50ecea39e06ba3b907e6fc39b','lt-44',1,'authkey',25,'2025-01-13 15:02:48.676721758+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[48f9:df69:4387:ae57:b06c:1439:7821:792]:45333","12.27.126.70:46578","[8b32:e301:3c59:346b:8ee6:c646:97a6:99ec]:36499","185.25.84.152:44786","[c7d0:cf6f:bb45:6922:7fd7:adb7:59f5:499e]:60522","65.200.68.231:16530","11.209.67.55:8308"]','2023-08-14 18:15:34.292188686+02:00','2025-01-13 
15:02:48.67679754+01:00',NULL,'node024','[]','100.64.0.15','fd7a:115c:a1e0::f'); -INSERT INTO nodes VALUES(25,'mkey:54219375ce0ab01cd59b56ad984df668ae7cbe49b7b91e44877ae0d7012d6106','nodekey:2b0c163bc497f6214b25d6a333a37d273c260630dcc91782897f19a579f92a5b','discokey:4cf24a025241ad9a69c20d40513dfee5626f934d738e18f4a64fd9a1ace2cb24','srv-15',1,'authkey',26,'2025-02-15 09:38:06.967090699+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[60da:30ec:55bd:5ab3:3fe7:5492:1fc8:92f3]:36566","105.238.97.70:40193","[8c84:2d7e:5e1a:2f51:8754:aa54:b421:a19f]:60672","[5101:751a:5e00:cf46:8b50:8ba9:3c8e:4c7c]:33862"]','2023-08-29 08:30:45.518580154+02:00','2025-02-15 09:38:06.967816068+01:00',NULL,'node025','[]','100.64.0.16','fd7a:115c:a1e0::10'); -INSERT INTO nodes VALUES(26,'mkey:96e8e0e986089fe5d0f5418f7ccb0a701401def77b8c5e1c337e7c5b865aa20c','nodekey:0e01df602ea65f549245772540b04373a8f2ab64a781251598226e99a6fc341f','discokey:444d74201443318337b9c8118b6d62ab087049cd1258b1b670dd3d8b8b883bd0','lt-82',1,'authkey',27,'2025-02-25 11:26:22.780339791+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["12.54.208.97:59171","[13a0:b8ff:bebb:d48:e16e:869d:b542:531b]:17201","44.157.88.255:43418","8.250.82.20:52374"]','2023-11-03 18:53:48.108033118+01:00','2025-02-25 11:26:22.781062875+01:00',NULL,'node026','[]','100.64.0.17','fd7a:115c:a1e0::11'); -INSERT INTO nodes VALUES(29,'mkey:8ba830e2027bf650dfb541fcecc1f2cd5e20da44d6b333434e06cc9b17b9d4d4','nodekey:c0f946e33505da4340c838ca01cd3c0c1dda4d73f42a5609709f48727841e316','discokey:d380435e5b89c56938fb4a1c71ce024df4a01e8d5cfeb95e21c4c5d2b9e3a395','web-21',1,'authkey',NULL,'2025-02-25 10:54:54.677120077+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["54.149.91.210:26635","210.72.228.199:14674","101.172.44.176:5878","[c018:a17d:2177:f7b8:482d:b68b:9b7a:c3f4]:52482"]','2024-06-02 09:46:39.307697473+02:00','2025-02-25 10:54:54.677452704+01:00',NULL,'node029','[]','100.64.0.19','fd7a:115c:a1e0::13'); -INSERT INTO nodes VALUES(30,'mkey:bba58c946e9f67a39d4a8f0f36b64169c0baf421e280fb4ace27827385444ff8','nodekey:f1e7106cf01ac4b975fc1176b9ca93e8ed17b4f406a5e4c7dffa4d7f003b2ced','discokey:98979075a00ac5b3b0b193011ceca8608729166168eecbe99cb9bb8ff59df080','srv-98',1,'authkey',30,'2025-02-15 09:20:40.349716088+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c7df:a6c9:ae51:6570:f6b6:7d2c:1803:50f0]:6218","62.26.91.22:45257","37.207.25.123:5981","[5b20:30d9:10ce:eafd:90f2:bf34:b3e8:fedb]:18638"]','2024-07-29 14:24:27.684620193+02:00','2025-02-15 09:20:40.350483576+01:00',NULL,'node030','[]','100.64.0.12','fd7a:115c:a1e0::c'); -INSERT INTO nodes VALUES(31,'mkey:8de33b4e277715f445a3530365044c55cc63c9b1abf4b586d017fbfdfa8a5b0e','nodekey:519bbb0b96ac86e86baf7dc5b9c92dd59eb3907c6e45f242aa0f3989b87f6c3e','discokey:65ff26a4a3d846c72e104d7cf5c22372c435f781ad1c0aeb99b40f046c3dfb67','lt-92',1,'authkey',31,'2025-02-25 10:56:26.138523564+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["183.225.17.130:38581","128.41.168.9:19215","39.206.106.178:64472","[a3b:5db7:67ad:9dfa:1b70:91c6:15ae:e5ad]:26117"]','2024-08-07 20:35:12.944541318+02:00','2025-02-25 10:56:26.139732495+01:00',NULL,'node031','[]','100.64.0.20','fd7a:115c:a1e0::14'); -INSERT INTO nodes VALUES(32,'mkey:41c6942d339502595b451a83099f62811b65eb38dab6332b80dc801ee24a2b3e','nodekey:bfa1d578b0f89f1bf88249ef93aeaebd64e434ab28376e0f312694f979438ad7','discokey:f86c45ad30b7c20300756d34059a6e025ef9f7f477614544f2e433d36d546fd9','desktop-10',1,'cli',NULL,'2025-02-25 11:30:05.852369159+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[9b41:6262:cbeb:cfa9:3f72:a287:5524:c052]:8383","[3981:7f70:a471:4993:c3a4:3173:1b62:544e]:43648","200.130.212.85:63469","216.16.62.70:35569","213.115.82.246:7596","195.216.27.246:54466","[c47f:35c9:41fe:6f5:a40d:3e8b:6679:411e]:12963","19.205.218.60:63516","116.228.193.194:44296","[ee98:8de3:1f5b:731c:751b:ba46:a4a7:8760]:23245","[6eb3:8c34:e6c9:4c6a:25c9:89ca:c165:354e]:65112","[7003:63bb:6429:d463:96a6:990:2bdc:217d]:4300","[d820:584e:2b46:6915:6420:3056:f76:9e33]:48885"]','2024-10-26 07:18:04.947942936+02:00','2025-02-25 11:30:05.852688962+01:00',NULL,'node032',NULL,'100.64.0.21','fd7a:115c:a1e0::15'); -INSERT INTO nodes VALUES(33,'mkey:448ecfac13b4affb86e121ffbff9da87acabac6c21dee28de2eb3371ea8276b5','nodekey:dd35ae383e1ae1af878c3c16cbd53f09e9ebaafc8787cb3d632954ed7ac9e277','discokey:beeb96403422394aa40bb619a3b582d50453e851815173c5b30d7163f4ed6ce0','lt-02',7,'cli',NULL,'2025-02-24 20:01:06.886972101+01:00',NULL,'{"fake":"data"}','["131.88.182.238:63239","146.198.220.228:37962","[bc8:3c85:dde1:8c19:4d87:a3ca:f9f4:3c6f]:20142","[785b:eebc:c3a6:bde6:f89:927:da7f:190b]:3639","68.183.1.227:6112","[9fd7:101c:db80:8423:7697:70:2c52:cbbc]:17162","67.6.72.132:32571","[524:9a03:6a60:cc60:d594:b98e:e32f:d883]:8390","86.91.139.227:63567"]','2025-02-23 07:59:15.141530181+01:00','2025-02-24 20:01:06.88755488+01:00',NULL,'node033',NULL,'100.64.0.11','fd7a:115c:a1e0::b'); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "routes" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -INSERT INTO routes VALUES(7,'2023-01-15 14:41:49.445054371+01:00','2023-01-29 10:12:11.959527554+01:00',NULL,14,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(8,'2023-01-15 14:41:49.452617331+01:00','2023-01-29 10:12:13.154311101+01:00',NULL,14,'::/0',1,1,0); -INSERT INTO routes VALUES(9,'2024-05-05 07:16:02.944084847+02:00','2024-05-05 13:25:02.977065366+02:00',NULL,3,'172.30.190.136/30',1,1,1); -INSERT INTO routes VALUES(10,'2024-05-05 07:16:02.95709561+02:00','2024-05-05 13:25:04.513930181+02:00',NULL,3,'10.241.118.90/31',1,1,1); -INSERT INTO routes VALUES(17,'2024-08-08 20:44:22.76306591+02:00','2024-12-21 12:10:07.339044415+01:00',NULL,31,'192.168.0.0/17',1,1,1); -INSERT INTO routes VALUES(18,'2024-08-08 20:47:33.469970726+02:00','2024-12-21 12:10:07.442783446+01:00',NULL,31,'192.168.192.0/19',1,1,1); -INSERT INTO routes VALUES(21,'2024-08-08 20:54:05.146666051+02:00','2024-12-21 12:10:07.538738125+01:00',NULL,31,'172.29.254.8/29',1,1,1); -INSERT INTO routes VALUES(23,'2024-08-08 20:54:05.162212208+02:00','2024-12-21 12:10:07.644714175+01:00',NULL,31,'172.18.8.0/22',1,1,1); -INSERT INTO routes VALUES(25,'2024-08-08 20:54:05.179419681+02:00','2024-12-21 12:10:07.753927883+01:00',NULL,31,'10.169.34.250/31',1,1,1); -INSERT INTO routes VALUES(26,'2024-08-08 20:54:05.186132539+02:00','2024-12-21 12:10:07.871905187+01:00',NULL,31,'172.24.0.0/16',1,1,1); -INSERT INTO routes VALUES(28,'2024-08-08 20:54:05.202442818+02:00','2024-12-21 12:10:07.972132539+01:00',NULL,31,'10.149.80.0/20',1,1,1); -INSERT INTO routes VALUES(31,'2024-08-08 20:54:05.246698925+02:00','2024-12-21 12:10:08.150358433+01:00',NULL,31,'10.232.0.0/13',1,1,1); 
-INSERT INTO routes VALUES(32,'2024-08-08 20:54:05.256984635+02:00','2024-12-21 12:10:08.349521909+01:00',NULL,31,'172.23.0.0/21',1,1,1); -INSERT INTO routes VALUES(37,'2024-08-08 20:54:05.300971626+02:00','2024-12-21 12:10:08.553265285+01:00',NULL,31,'172.17.124.137/32',1,1,1); -INSERT INTO routes VALUES(43,'2024-08-08 20:54:05.383430747+02:00','2024-12-21 12:10:08.66112581+01:00',NULL,31,'10.91.8.0/21',1,1,1); -INSERT INTO routes VALUES(47,'2024-08-08 20:54:05.443181025+02:00','2024-12-21 12:10:08.826993878+01:00',NULL,31,'192.168.222.64/27',1,1,1); -INSERT INTO routes VALUES(48,'2024-08-08 20:54:05.449778605+02:00','2024-12-21 12:10:09.237117302+01:00',NULL,31,'10.16.0.0/12',1,1,1); -INSERT INTO routes VALUES(49,'2024-09-03 06:43:34.875117755+02:00','2024-12-21 12:10:09.342259317+01:00',NULL,31,'192.168.191.192/28',1,1,1); -INSERT INTO routes VALUES(50,'2025-02-23 06:43:34.875117755+02:00','2025-02-23 06:43:34.875117755+02:00',NULL,33,'10.225.156.72/29',1,1,1); -INSERT INTO routes VALUES(51,'2025-02-23 06:43:34.875117755+02:00','2025-02-23 06:43:34.875117755+02:00',NULL,33,'172.28.71.145/32',1,1,1); -INSERT INTO routes VALUES(52,'2025-02-23 06:43:34.875117755+02:00','2025-02-23 06:43:34.875117755+02:00',NULL,33,'10.104.0.0/13',1,1,1); -INSERT INTO routes VALUES(53,'2025-02-23 06:43:34.875117755+02:00','2025-02-23 06:43:34.875117755+02:00',NULL,33,'172.16.0.0/13',1,1,1); -INSERT INTO routes VALUES(54,'2025-02-23 06:43:34.875117755+02:00','2025-02-23 06:43:34.875117755+02:00',NULL,33,'10.60.104.0/24',1,1,1); -INSERT INTO routes VALUES(55,'2025-02-23 06:43:34.875117755+02:00','2025-02-23 06:43:34.875117755+02:00',NULL,33,'10.89.96.0/19',1,1,1); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2022-02-26 18:00:56.104436744+01:00','2024-09-14 08:51:13.31135114+02:00',NULL,'user001','','',NULL,NULL,''); -INSERT INTO users VALUES(4,'2022-04-01 22:30:02.657653341+02:00','2024-09-14 08:55:27.877614108+02:00',NULL,'user004','','',NULL,NULL,''); -INSERT INTO users VALUES(7,'2024-05-05 07:08:47.915309504+02:00','2024-09-14 08:55:02.778476991+02:00',NULL,'user007','','',NULL,NULL,''); -INSERT INTO users VALUES(8,'2024-09-14 08:57:26.215082073+02:00','2024-09-14 08:57:26.215082073+02:00',NULL,'user008','','',NULL,NULL,''); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.25.1.sql b/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.25.1.sql deleted file mode 100644 index 302403c6..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.25.1.sql +++ /dev/null @@ -1,101 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); 
-INSERT INTO migrations VALUES('202501221827'); -INSERT INTO migrations VALUES('202501311657'); -INSERT INTO migrations VALUES('202502070949'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,"user_id" integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'c0e73b52c1706fcceea368ae07c19c0e91fbe1f78eff2f7b',1,0,1,0,'2022-02-26 17:02:32.568878371+00:00','2022-02-26 18:11:15.388354971+01:00',NULL); -INSERT INTO pre_auth_keys VALUES(2,'ad293159f9506a02d6de4730e5a2ddb74f9fd4033919ecc5',1,0,1,1,'2022-02-26 17:08:07.828690446+00:00','2022-02-26 18:11:18.890985216+01:00',NULL); -INSERT INTO pre_auth_keys VALUES(3,'66c7e0fbf74010ea2e153d35ffa0f3c48380ef38960d1fd1',1,0,0,1,'2022-02-26 17:11:54.149663776+00:00','2022-02-26 17:16:54.147175388+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(4,'cdbb69c88fabdc2609050c7f99e8ebdea6fcf81a50d802fc',1,0,0,1,'2022-02-26 17:15:34.160746962+00:00','2022-02-26 17:20:34.15935255+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(6,'0a99889a217a2f9cdd20082faadd5c90e72c83a47093a63a',1,0,0,1,'2022-03-04 07:27:17.535172209+00:00','2022-03-04 07:32:17.531871524+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(10,'6e88409e3883c1a47249e428030afd4a46bdaf3adb3b74c1',4,0,0,1,'2022-04-01 20:43:21.757703546+00:00','2022-04-01 20:48:21.756458144+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(13,'9d3c2bde9fcef181a3141dfe59d449bf03f6779dad3580e0',1,0,0,1,'2022-06-26 13:19:48.533865696+00:00','2022-06-26 13:24:48.532164552+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(16,'00b123d52ea8f58379b740fdc5c898b02330ab9b366cb1b4',1,0,0,1,'2023-02-12 06:21:30.15120385+00:00','2023-02-12 06:26:30.140082454+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(17,'c471ce93392c0d3040af0cf6166f4f578c3c66dd180b6e0b',1,0,0,1,'2023-02-12 06:26:55.829311638+00:00','2023-02-12 06:31:55.824701077+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(18,'fe0a438e67687540efcbbbc28c8e6c1b8ac1216f99de33d4',1,0,0,1,'2023-02-12 06:31:13.245185592+00:00','2023-02-12 06:36:13.241695106+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(19,'5a469043f1fa43ff11e54ea242dd882a81aea68f168b9a34',1,0,0,1,'2023-02-12 06:31:13.622545545+00:00','2023-02-12 06:36:13.560890824+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(20,'8b31d9a38282dfe07ffcebdfbd40db6b9e49997c93bed570',1,0,0,1,'2023-02-28 12:45:48.518939706+00:00','2023-02-28 12:50:48.445951259+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(21,'47a4216f2b4e5885d4e53a3de2ffe95521d8a708ca26d31e',1,0,0,1,'2023-02-28 12:45:48.53865321+00:00','2023-02-28 12:50:48.439132728+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(22,'1b7be83871f396e4544c8445acfc8d308dbfe29c7f0197f0',1,0,0,1,'2023-02-28 12:45:48.538806791+00:00','2023-02-28 12:50:48.445073692+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(23,'c9f364adba95c6c46c162eaa3786702805595841c0150927',1,0,0,1,'2023-05-05 08:08:16.73107293+00:00','2023-05-05 08:13:16.722921676+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(24,'a90c87e30fa22ffee39e5ce157dd22c909f2026295e3bce4',1,0,0,1,'2023-08-14 14:36:52.042138928+00:00','2023-08-14 14:41:52.038644473+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(25,'bb059cde5663619a8918bde19b9f8236085725554d9d78c2',1,0,0,1,'2023-08-14 16:15:33.722630834+00:00','2023-08-14 16:20:33.719604033+00:00',NULL); -INSERT INTO pre_auth_keys 
VALUES(26,'0733b8fc67adc82644c87a95947b24c5d368c633cff92eb4',1,0,0,1,'2023-08-29 06:30:44.934900329+00:00','2023-08-29 06:35:44.931280114+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(27,'567f3a39066fd99c567d14bcc374b25cee0ad71af08b9054',1,0,0,1,'2023-11-03 17:53:45.857200883+00:00','2023-11-03 17:58:45.853742836+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(30,'8aa13c93cbef46d36c8159da51fb41a469ae04f932980890',1,0,0,1,'2024-07-29 12:24:27.140614087+00:00','2024-07-29 12:29:27.136525982+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(31,'9314b171abc433d73b8db297c1c5c65dae0be53d39a71520',1,0,0,1,'2024-08-07 18:35:12.375119763+00:00','2024-08-07 18:40:12.371590392+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(32,'6a8fd483387734664f2a911545150a9bc0d9f12a744fe913',7,0,0,0,'2025-02-23 06:56:50.319087142+00:00','2025-02-23 07:56:50.31726796+00:00','[]'); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -INSERT INTO api_keys VALUES(1,'S2nn85NliD',X'2432622431322457713675666f5379396247466672516155317132374f58734e5033627671436470436c447073757979456f4a4a71626b6a434e6236','2022-12-25 21:35:28.644697962+01:00','2023-03-22 06:22:18.724817647+01:00',NULL); -INSERT INTO api_keys VALUES(2,'1KZkpEyiMH',X'2432622431322435797935525374656e525a4e67735365316837455965376872366e4e6563445a4e45666d72334d5273612e4b5151334f675a302e71','2023-03-22 06:22:18.339101298+01:00','2023-04-13 09:32:24.318715268+02:00',NULL); -INSERT INTO api_keys VALUES(3,'6yBMrqvEDX',X'24326224313224737956314430636d615a6e546531705370346a7a72654566327565367a587a737264694c336c7147644977424f7348794644624e79','2023-04-13 09:32:23.864995051+02:00','2023-07-12 07:32:23.45+00:00',NULL); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,"hostname" text,"user_id" integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`given_name` varchar(63),`forced_tags` text,`ipv4` text,`ipv6` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes VALUES(1,'mkey:13adef46b653fc9d24415b242714d08a92605cb2d53355a03cda8bae483cc93d','nodekey:aca177c7ceeadf615c5d28bd894ddad5133fe0e99e6c5f2a4af5403daf74aac6','discokey:83f177b585a16eab54768d0284553d62d3f46ef8a21417834039525dfd9ac7dd','desktop-00',1,'authKey',3,'2025-04-04 16:01:46.440493034+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[ce6f:3f5b:da2f:8382:f3a1:e2d4:1c47:fe5e]:25604","[ff9b:6bbd:77bc:323d:12e4:22ba:b712:80c]:37729","[1e76:19af:e817:9a33:4f83:366e:b686:6362]:39521"]','2022-02-26 18:11:55.92837512+01:00','2025-04-04 16:01:46.441299466+02:00',NULL,'node001',NULL,'100.64.0.1','fd7a:115c:a1e0::1'); -INSERT INTO nodes VALUES(3,'mkey:202076169f1eee6bcbaebd7d5c314341b9aca69fadda7286fe49f45ed81b6de0','nodekey:b02b4522c79164bb4dc5ba6ff8115cc7dac03417a9385fa9b3fe12e4c3d14ccc','discokey:e5b11353b9ee94a90963eb4318514b17ad8304b4b80d4e178b3e3485376f3c88','email-16',7,'authKey',NULL,'2025-04-03 21:00:54.6832508+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b886:a57b:2070:84fd:6618:6c8:3c70:5077]:17679","117.184.57.124:25930"]','2022-03-01 19:26:46.242187887+01:00','2025-04-03 
21:00:54.683607994+02:00',NULL,'node003',NULL,'100.64.0.3','fd7a:115c:a1e0::3'); -INSERT INTO nodes VALUES(4,'mkey:56ad8dcc120738372c03052898bcdaf06a4b7e92adfa4e260f3ab33bbec7d560','nodekey:b197bf018c11a550972b6fe5841e99660bd70eb64faa26f334466181c6a0c23b','discokey:af3613f6dc6ef1559bb6a35ef0c74ebb42b49e75a7a07d6385a9d5e0acacaf17','web-61',1,'authKey',6,'2025-04-03 18:23:56.187297547+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["76.70.66.173:4330","164.82.63.27:24154","[be53:e324:1587:4d86:d24c:8662:cd5c:a4c8]:58105","59.239.218.25:49807","[cfa4:ad2e:1410:3b78:ce03:b437:6749:ff3a]:39835","179.46.219.166:38167","166.111.217.41:13979"]','2022-03-04 08:27:19.383566031+01:00','2025-04-03 18:23:56.1873891+02:00',NULL,'node004',NULL,'100.64.0.2','fd7a:115c:a1e0::2'); -INSERT INTO nodes VALUES(5,'mkey:e87e820508bd86c183ebe4ffb80637130b6d7480c3db5b94fbe4be097ca31d6b','nodekey:36d6615362a2367b323e53348e7b300b5d3180954f6c73d619738b6b416f8b29','discokey:31bde1fc825d42faaefc6baeb19205176d325a0c385b3d5c42020fac5828baf7','srv-79',8,'authKey',NULL,'2025-02-23 12:50:07.992065346+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["157.68.216.200:462","[8dc0:10e:95b6:9954:2614:8dcc:6564:e167]:58499","[8e6d:95be:19a9:316a:d37e:bcb6:5e05:d7f0]:36162","[5bb2:705a:7f5c:3c9:db83:956a:1ce:5291]:9510","37.24.123.232:17607","17.23.6.93:59855"]','2022-03-05 13:54:23.660591381+01:00','2025-02-23 12:50:07.99213101+01:00',NULL,'node005',NULL,'100.64.0.4','fd7a:115c:a1e0::4'); -INSERT INTO nodes VALUES(6,'mkey:afca3664dd99d4f692e6188b11e356803b3b88af6aea8c64f44c691c5ef0dd75','nodekey:56c0b70b513c1332dc6e96e23f9b31ce5e7d2f67bf2ce913788f6f8f4309a286','discokey:84388aee7f9b67ca6f494710d339cebe2570c109bffd46b866a3ce6bdfab0a7e','desktop-69',1,'authkey',NULL,'2025-04-04 06:54:04.865503636+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["58.226.203.77:32176","110.30.79.130:11845","87.221.84.2:11620"]','2022-03-21 15:52:20.739594362+01:00','2025-04-04 06:54:04.865830042+02:00',NULL,'node006',NULL,'100.64.0.5','fd7a:115c:a1e0::5'); -INSERT INTO nodes VALUES(7,'mkey:cf245d2ed1c596c90501a51b0a7bb2bf72a77bd8aaa25be1798ba88ca4840de1','nodekey:b48c4f58b478be290e6f1fbdd74b54487f66e3efd00f1c90a46c413ccec88b3b','discokey:2df61e4dd58f94db5f7e51caacbb30da03e1efa2892c18ea63d00b9a0b59bd09','db-01',4,'authkey',10,'2025-04-04 11:27:17.576257983+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2c84:7686:38e3:1aa5:db0c:29e4:c685:6059]:38764","4.4.79.116:35870","[1daf:274c:55b2:7413:3c1:bd3d:acc5:4eb1]:33732","182.237.82.235:32290"]','2022-04-01 22:43:27.318756043+02:00','2025-04-04 11:27:17.576974245+02:00',NULL,'node007',NULL,'100.64.0.6','fd7a:115c:a1e0::6'); -INSERT INTO nodes VALUES(8,'mkey:cedaee7ced43e8b6eab7807f2e0b6a65351950be008d086dda73ca1176dcb4b0','nodekey:97377594f32ffa436df6cb935fe998011700cbd94d0b7b9f1878e796e47595eb','discokey:7abe7a09338db86ecffe306c78b69a84b7a3615922a6dd580f87e25ac5065843','email-41',7,'authkey',NULL,'2025-04-04 09:43:37.714947898+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["79.62.99.173:32988","136.206.88.82:20612","151.147.230.32:10729","[2621:a1d2:a66d:a6cf:3592:dff2:93dc:5b32]:62817","[e1e8:e93f:ab91:4027:e416:9932:8e9:6fcb]:30261"]','2022-04-03 09:38:46.178224968+02:00','2025-04-04 09:43:37.715260959+02:00',NULL,'node008',NULL,'100.64.0.7','fd7a:115c:a1e0::7'); -INSERT INTO nodes 
VALUES(9,'mkey:94d71f734657c61ac8dd2f7d1ed06787361c1b43bdc3a3c9a541890afa676ddb','nodekey:7bb55439c507f6a11d866f44fdea7912cc4ad8791528438b5d598f001971936e','discokey:1b0a846e57e2ebb966bae5507a41991dde493eb17c61161146056f1d4065817c','lt-20',7,'cli',NULL,'2025-04-04 16:02:13.701628939+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[45cc:7c5b:4d26:7aa1:ae7e:d9bb:ea07:456]:65311","48.237.159.121:13531","55.139.180.4:64685","[bae8:1f82:ff78:e8a8:235e:a50e:c131:2e8]:43129","[3486:5bc7:ada8:8da1:9df9:3cc6:d018:ee8b]:65112","43.182.159.11:27315"]','2022-04-09 09:43:07.09027176+02:00','2025-04-04 16:02:13.701896564+02:00',NULL,'node009',NULL,'100.64.0.8','fd7a:115c:a1e0::8'); -INSERT INTO nodes VALUES(10,'mkey:4edbeff705459e7e5d0df80d42abce8d3db49137e013cbe38416aa2d0e77e4de','nodekey:149c771cccddad67f62a6cdf8ebc282692d3f8138e6749e3bcb8044209f30ea8','discokey:7773ea1a30faff0ef3d64186b00fc7f0d584c68eea4e3d75ad6dab5cfb54bee4','email-49',1,'authkey',NULL,'2025-03-22 18:28:09.173419272+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["61.59.238.85:46024","221.176.225.4:60304"]','2022-06-26 13:57:07.40762063+02:00','2025-03-22 18:28:09.173773792+01:00',NULL,'node010',NULL,'100.64.0.9','fd7a:115c:a1e0::9'); -INSERT INTO nodes VALUES(11,'mkey:0bd4a2eac9b6f9b77ab9ee9ea42b36b74d73e0544a2c482526b2385e8bcb0c34','nodekey:1a5c66957c54314f72ea26fed12e48b9c4149c1f640300e39d6190226bc1aaf8','discokey:9f151fb1569ef5341c8f54eec62b82cb7e5f21540ebe9271ae6449a31250123a','laptop-22',1,'authkey',13,'2025-04-03 22:06:46.872430113+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["55.247.173.145:28477","197.123.99.18:53038","102.60.187.246:41096","155.239.169.182:41163","[c908:b81c:b4c8:38d3:e311:b3f6:3067:227c]:11541"]','2022-06-26 15:19:50.566498735+02:00','2025-04-03 22:06:46.873132509+02:00',NULL,'node011',NULL,'100.64.0.10','fd7a:115c:a1e0::a'); -INSERT INTO nodes VALUES(14,'mkey:c7d3e8e40cafce614b65f150e345fefab60dd5b8108bc47e5159e8d3464496f8','nodekey:06d9c1b8455e39baee27729233638fd793e38376a0caafd76cfb3f1fe929b331','discokey:93ec10f68b6950a9a842c6e2cf186d86334705cf174d50985a3f4c8561246e9f','lt-29',1,'authkey',NULL,'2025-03-30 11:38:07.287941738+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["34.25.194.159:63765","147.22.45.153:11543","30.125.129.134:36303"]','2022-09-26 16:07:54.206927686+02:00','2025-03-30 11:38:07.288217849+02:00',NULL,'node014',NULL,'100.64.0.13','fd7a:115c:a1e0::d'); -INSERT INTO nodes VALUES(23,'mkey:2cc72bee6bf01caeb26a1d00ad6653d7a2f06252afe9f7e9cbb4e1701d3d7da4','nodekey:7edccf09f05dd46626c2981cc69c74c593d314aced7de30423142ba090521d72','discokey:98629534198740b432a58b75cee1c37bd3bdc0d85e85d7fe514af16881fc420b','srv-12',1,'authkey',23,'2025-04-03 21:18:57.701368882+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[ec95:2e1e:2d50:967c:c90f:9d1e:4a3e:cac6]:63487","[57e5:faa8:aee0:9b66:a0e:6a4f:f715:631]:23151","209.114.161.143:52706","[6d6f:fcc8:f538:9546:a467:a161:1e3f:ad80]:45335","[d8b4:ed81:6b2f:8517:c171:871:7319:6abb]:54186","[dcb5:9eee:2075:acba:1a31:1669:21dd:744b]:48312"]','2023-05-05 10:08:17.301597525+02:00','2025-04-03 21:18:57.701435228+02:00',NULL,'node023','[]','100.64.0.14','fd7a:115c:a1e0::e'); -INSERT INTO nodes VALUES(24,'mkey:cdd2725a2eec500da5a278444f3f3f82351e3df4b38c73781a8c04bdcdf469f3','nodekey:badffb3b36094dcff5902db9b857c436f2aa91b3a0fc2fc8f0b7e0a6cfce842d','discokey:68d8447103b317e2a41dccf0dc60017b8554f55639de9b13ffc9f07366e0dffa','desktop-58',1,'authkey',25,'2025-01-13 15:02:48.676721758+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["62.8.65.228:34575","58.211.127.215:33647","47.22.77.26:36499","185.25.84.152:44786","[c7d0:cf6f:bb45:6922:7fd7:adb7:59f5:499e]:60522","65.200.68.231:16530","11.209.67.55:8308"]','2023-08-14 18:15:34.292188686+02:00','2025-01-13 15:02:48.67679754+01:00',NULL,'node024','[]','100.64.0.15','fd7a:115c:a1e0::f'); -INSERT INTO nodes VALUES(25,'mkey:54219375ce0ab01cd59b56ad984df668ae7cbe49b7b91e44877ae0d7012d6106','nodekey:2b0c163bc497f6214b25d6a333a37d273c260630dcc91782897f19a579f92a5b','discokey:4cf24a025241ad9a69c20d40513dfee5626f934d738e18f4a64fd9a1ace2cb24','srv-15',1,'authkey',26,'2025-03-31 23:15:14.255680838+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[60da:30ec:55bd:5ab3:3fe7:5492:1fc8:92f3]:36566","105.238.97.70:40193","[8c84:2d7e:5e1a:2f51:8754:aa54:b421:a19f]:60672","[5101:751a:5e00:cf46:8b50:8ba9:3c8e:4c7c]:33862"]','2023-08-29 08:30:45.518580154+02:00','2025-03-31 23:15:14.25655063+02:00',NULL,'node025','[]','100.64.0.16','fd7a:115c:a1e0::10'); -INSERT INTO nodes VALUES(26,'mkey:96e8e0e986089fe5d0f5418f7ccb0a701401def77b8c5e1c337e7c5b865aa20c','nodekey:0e01df602ea65f549245772540b04373a8f2ab64a781251598226e99a6fc341f','discokey:444d74201443318337b9c8118b6d62ab087049cd1258b1b670dd3d8b8b883bd0','lt-82',1,'authkey',27,'2025-04-04 15:59:56.081157956+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["12.54.208.97:59171","[13a0:b8ff:bebb:d48:e16e:869d:b542:531b]:17201","44.157.88.255:43418","8.250.82.20:52374","180.217.175.155:13947","[76d7:a0ac:7cf2:9237:df96:42d2:4086:5b93]:41885"]','2023-11-03 18:53:48.108033118+01:00','2025-04-04 15:59:56.081857808+02:00',NULL,'node026','[]','100.64.0.17','fd7a:115c:a1e0::11'); -INSERT INTO nodes VALUES(29,'mkey:ef670d35dba3168a251ce7230dfd073594f7e8454c55d2eea392b11b7eb2c58d','nodekey:c4fe657c9168dd2d798452d1c411faa57d7f2866808cf7fee5e5251eba58cfed','discokey:f47bc0695fd2db95519a8fc51ecc628c4b869dea953b34b88095c765ac4c8b2e','srv-21',1,'authkey',NULL,'2025-04-04 10:13:54.951364016+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c018:a17d:2177:f7b8:482d:b68b:9b7a:c3f4]:52482","[99d:8ae6:556e:9e4a:b721:e12f:cec4:d571]:25093","[688f:e843:1471:41c0:868f:f77c:e787:8d13]:24194","[ae51:6570:f6b6:7d2b:1803:50f1:6e3a:faa4]:6218","62.26.91.22:45257"]','2024-06-02 09:46:39.307697473+02:00','2025-04-04 10:13:54.951675714+02:00',NULL,'node029','[]','100.64.0.19','fd7a:115c:a1e0::13'); -INSERT INTO nodes VALUES(30,'mkey:a50ccd27d81ae5fa4e8cf617b9a923d18b818d5903d1aa008044cf4a2145f5b4','nodekey:eb0607968c785075216b01d5e11a67124f7ab9bd8d11498aa29cd68ca7355d8b','discokey:beffe8994e5a4baf7553267e7df7da7460552b3ec1b8b0b2d12f467a07810dea','db-89',1,'authkey',30,'2025-03-31 23:15:17.070924864+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c7b3:52c3:f5c3:87c:f5ed:213:85f4:44c5]:18304","49.111.96.135:35528","[abc1:b7cd:f18:6ed2:4b5a:9dce:3f08:8c17]:333","206.118.184.48:48132"]','2024-07-29 14:24:27.684620193+02:00','2025-03-31 23:15:17.071471786+02:00',NULL,'node030','[]','100.64.0.12','fd7a:115c:a1e0::c'); -INSERT INTO nodes VALUES(31,'mkey:e76b247ac8acd857970abf462c8c954f670d78fe190ca537ef68983de00806e0','nodekey:bf2c0b7e9271cb7addf94f7ee4b5c2bfab9f279fb124c76f0d1fed4b4125e422','discokey:af105f27a8a9c750ba54da2defc49dbf001891b96e6a1bb1b96ee39056e1ff5f','laptop-03',1,'authkey',31,'2025-04-04 15:57:02.028544413+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["187.133.234.227:17532","143.220.168.161:8383","[3981:7f70:a471:4993:c3a4:3173:1b62:544e]:43648","200.130.212.85:63469"]','2024-08-07 
20:35:12.944541318+02:00','2025-04-04 15:57:02.03016371+02:00',NULL,'node031','[]','100.64.0.20','fd7a:115c:a1e0::14'); -INSERT INTO nodes VALUES(32,'mkey:e66714b599a0fcd24e7e87019cfbfc3b9d6d1c9c3b0bb70912d6e7bc7398f8fa','nodekey:0dba1091e5ff8fbf639f490eabaa0fa5188cc076cb8922eb6889b023fc4384bc','discokey:a05e361212ffc667037a9997a178a9d0a50453bded8dfe3772fed2b711092c58','db-20',1,'cli',NULL,'2025-04-04 15:55:20.925507947+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c47f:35c9:41fe:6f5:a40d:3e8b:6679:411e]:12963","19.205.218.60:63516","116.228.193.194:44296","[ee98:8de3:1f5b:731c:751b:ba46:a4a7:8760]:23245","[6eb3:8c34:e6c9:4c6a:25c9:89ca:c165:354e]:65112","[7003:63bb:6429:d463:96a6:990:2bdc:217d]:4300","[d820:584e:2b46:6915:6420:3056:f76:9e33]:48885","25.167.184.15:1967","82.253.97.208:8684"]','2024-10-26 07:18:04.947942936+02:00','2025-04-04 15:55:20.925847076+02:00',NULL,'node032',NULL,'100.64.0.21','fd7a:115c:a1e0::15'); -INSERT INTO nodes VALUES(33,'mkey:e885828fffb844493b9ac09514500e4832d588586274d7ab134def4758aec8b4','nodekey:69ecbf0eda9c36c6a5a3a5bbdaebcbe87843112ccb5004acdaa3273557cca09e','discokey:3076854f388bce0a53ce84c6a01f0ce677f641bc2429b10206c6c2de7ac75a7f','laptop-99',7,'cli',NULL,'2025-04-03 22:00:57.883174604+02:00',NULL,'{"fake":"data"}','["104.248.144.146:61623","[a05:c6f9:3d44:8c37:71b:94bc:c915:47aa]:9656","15.105.112.6:17162","67.6.72.132:32571","[524:9a03:6a60:cc60:d594:b98e:e32f:d883]:8390","86.91.139.227:63567","206.126.50.249:22311","[a1f9:9745:a2ed:1bb7:9405:91f9:1c52:8c28]:14465","155.71.33.111:62332","[fd76:427:7ce5:c2b2:183a:1281:d6bb:8633]:47777","[2133:a7d0:99d5:fcce:44dd:8de3:dffe:61f5]:30756"]','2025-02-23 07:59:15.141530181+01:00','2025-04-03 22:00:57.883491572+02:00',NULL,'node033',NULL,'100.64.0.11','fd7a:115c:a1e0::b'); -INSERT INTO nodes VALUES(34,'mkey:bc9632988599109f50c950f4069c6a38f9e04d22f81f5bbd4edfdf271ea3b772','nodekey:a9be3965ba99bf7ef5ca7eee6f1ea50043fab7e40a2da3b8ea45e5c5c859d0e7','discokey:f9c9d5de5033f54236970fa4c12fe4f475be4c2a00f57418f396057142aea9f3','srv-85',9,'cli',NULL,'2025-04-04 00:05:39.390959034+02:00',NULL,'{"fake":"data"}','["[1b72:b6ce:5ad1:81e3:bc01:6516:f696:e19e]:15160","[fc:3ed4:df18:5664:71f1:6c7b:652a:7fcd]:27806","[417a:9260:6c16:5a4c:911d:eaec:7e3b:cf47]:58735","190.241.43.139:19489","[9b36:5bd4:a077:9b75:65ac:ccd0:170f:69b9]:41358"]','2025-03-09 11:42:30.627548688+01:00','2025-04-04 00:05:39.391044635+02:00',NULL,'node034',NULL,'100.64.0.18','fd7a:115c:a1e0::12'); -INSERT INTO nodes VALUES(35,'mkey:0a58d0ca3bac65adeddc621d1355a3801b088c393a93bc83a55c198e7ef1858e','nodekey:f25a54c9ec984566a1a245aee7f85f92ffccc090be0d4d9fdab081e23a5bd2c5','discokey:2d95fd862eb2087d1345f2825e8b3013ca9037cc44f595fc1374850ecce93c9f','desktop-75',9,'cli',NULL,'2025-04-04 12:24:50.896091765+02:00',NULL,'{"fake":"data"}','["115.214.18.115:26087","209.46.195.111:48055","[2030:c7ae:bde5:536c:7748:3ac:9eea:2468]:45768","[1ec6:10e:5a64:942e:cb28:7d65:97cc:8e89]:14623"]','2025-03-09 12:06:19.756893731+01:00','2025-04-04 12:24:50.896434652+02:00',NULL,'node035',NULL,'100.64.0.22','fd7a:115c:a1e0::16'); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "routes" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) 
REFERENCES `nodes`(`id`) ON DELETE CASCADE); -INSERT INTO routes VALUES(7,'2023-01-15 14:41:49.445054371+01:00','2023-01-29 10:12:11.959527554+01:00',NULL,14,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(8,'2023-01-15 14:41:49.452617331+01:00','2023-01-29 10:12:13.154311101+01:00',NULL,14,'::/0',1,1,0); -INSERT INTO routes VALUES(9,'2024-05-05 07:16:02.944084847+02:00','2024-05-05 13:25:02.977065366+02:00',NULL,3,'172.30.190.136/30',1,1,1); -INSERT INTO routes VALUES(10,'2024-05-05 07:16:02.95709561+02:00','2024-05-05 13:25:04.513930181+02:00',NULL,3,'10.241.118.90/31',1,1,1); -INSERT INTO routes VALUES(17,'2024-08-08 20:44:22.76306591+02:00','2024-12-21 12:10:07.339044415+01:00',NULL,31,'192.168.0.0/17',1,1,1); -INSERT INTO routes VALUES(18,'2024-08-08 20:47:33.469970726+02:00','2024-12-21 12:10:07.442783446+01:00',NULL,31,'192.168.192.0/19',1,1,1); -INSERT INTO routes VALUES(21,'2024-08-08 20:54:05.146666051+02:00','2024-12-21 12:10:07.538738125+01:00',NULL,31,'172.29.254.8/29',1,1,1); -INSERT INTO routes VALUES(23,'2024-08-08 20:54:05.162212208+02:00','2024-12-21 12:10:07.644714175+01:00',NULL,31,'172.18.8.0/22',1,1,1); -INSERT INTO routes VALUES(25,'2024-08-08 20:54:05.179419681+02:00','2024-12-21 12:10:07.753927883+01:00',NULL,31,'10.169.34.250/31',1,1,1); -INSERT INTO routes VALUES(26,'2024-08-08 20:54:05.186132539+02:00','2024-12-21 12:10:07.871905187+01:00',NULL,31,'172.24.0.0/16',1,1,1); -INSERT INTO routes VALUES(28,'2024-08-08 20:54:05.202442818+02:00','2024-12-21 12:10:07.972132539+01:00',NULL,31,'10.149.80.0/20',1,1,1); -INSERT INTO routes VALUES(31,'2024-08-08 20:54:05.246698925+02:00','2024-12-21 12:10:08.150358433+01:00',NULL,31,'10.232.0.0/13',1,1,1); -INSERT INTO routes VALUES(32,'2024-08-08 20:54:05.256984635+02:00','2024-12-21 12:10:08.349521909+01:00',NULL,31,'172.23.0.0/21',1,1,1); -INSERT INTO routes VALUES(37,'2024-08-08 20:54:05.300971626+02:00','2024-12-21 12:10:08.553265285+01:00',NULL,31,'172.17.124.137/32',1,1,1); -INSERT INTO routes VALUES(43,'2024-08-08 20:54:05.383430747+02:00','2024-12-21 12:10:08.66112581+01:00',NULL,31,'10.91.8.0/21',1,1,1); -INSERT INTO routes VALUES(47,'2024-08-08 20:54:05.443181025+02:00','2024-12-21 12:10:08.826993878+01:00',NULL,31,'192.168.222.64/27',1,1,1); -INSERT INTO routes VALUES(48,'2024-08-08 20:54:05.449778605+02:00','2024-12-21 12:10:09.237117302+01:00',NULL,31,'10.16.0.0/12',1,1,1); -INSERT INTO routes VALUES(49,'2024-09-03 06:43:34.875117755+02:00','2024-12-21 12:10:09.342259317+01:00',NULL,31,'192.168.191.192/28',1,1,1); -INSERT INTO routes VALUES(50,'2025-02-23 06:43:34.875117755+02:00','2025-02-23 06:43:34.875117755+02:00',NULL,33,'10.225.156.72/29',1,1,1); -INSERT INTO routes VALUES(51,'2025-02-23 06:43:34.875117755+02:00','2025-02-23 06:43:34.875117755+02:00',NULL,33,'172.28.71.145/32',1,1,1); -INSERT INTO routes VALUES(52,'2025-02-23 06:43:34.875117755+02:00','2025-02-23 06:43:34.875117755+02:00',NULL,33,'10.104.0.0/13',1,1,1); -INSERT INTO routes VALUES(53,'2025-02-23 06:43:34.875117755+02:00','2025-02-23 06:43:34.875117755+02:00',NULL,33,'172.16.0.0/13',1,1,1); -INSERT INTO routes VALUES(54,'2025-02-23 06:43:34.875117755+02:00','2025-02-23 06:43:34.875117755+02:00',NULL,33,'10.60.104.0/24',1,1,1); -INSERT INTO routes VALUES(55,'2025-02-23 06:43:34.875117755+02:00','2025-02-23 06:43:34.875117755+02:00',NULL,33,'10.89.96.0/19',1,1,1); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` 
text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2022-02-26 18:00:56.104436744+01:00','2024-09-14 08:51:13.31135114+02:00',NULL,'user001','','',NULL,NULL,''); -INSERT INTO users VALUES(4,'2022-04-01 22:30:02.657653341+02:00','2024-09-14 08:55:27.877614108+02:00',NULL,'user004','','',NULL,NULL,''); -INSERT INTO users VALUES(7,'2024-05-05 07:08:47.915309504+02:00','2024-09-14 08:55:02.778476991+02:00',NULL,'user007','','',NULL,NULL,''); -INSERT INTO users VALUES(8,'2024-09-14 08:57:26.215082073+02:00','2024-09-14 08:57:26.215082073+02:00',NULL,'user008','','',NULL,NULL,''); -INSERT INTO users VALUES(9,'2025-03-09 11:22:20.790371056+01:00','2025-03-09 11:22:20.790371056+01:00',NULL,'user009','','',NULL,'',''); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.26.1.sql b/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.26.1.sql deleted file mode 100644 index 1b4f8913..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db01__0.14.0__0.26.1.sql +++ /dev/null @@ -1,81 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -INSERT INTO migrations VALUES('202501311657'); -INSERT INTO migrations VALUES('202502070949'); -INSERT INTO migrations VALUES('202502131714'); -INSERT INTO migrations VALUES('202502171819'); -INSERT INTO migrations VALUES('202505091439'); -INSERT INTO migrations VALUES('202505141324'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,"user_id" integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'c0e73b52c1706fcceea368ae07c19c0e91fbe1f78eff2f7b',1,0,1,0,'2022-02-26 17:02:32.568878371+00:00','2022-02-26 18:11:15.388354971+01:00',NULL); -INSERT INTO pre_auth_keys VALUES(2,'ad293159f9506a02d6de4730e5a2ddb74f9fd4033919ecc5',1,0,1,1,'2022-02-26 17:08:07.828690446+00:00','2022-02-26 18:11:18.890985216+01:00',NULL); -INSERT INTO pre_auth_keys VALUES(3,'66c7e0fbf74010ea2e153d35ffa0f3c48380ef38960d1fd1',1,0,0,1,'2022-02-26 17:11:54.149663776+00:00','2022-02-26 17:16:54.147175388+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(4,'cdbb69c88fabdc2609050c7f99e8ebdea6fcf81a50d802fc',1,0,0,1,'2022-02-26 17:15:34.160746962+00:00','2022-02-26 17:20:34.15935255+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(6,'0a99889a217a2f9cdd20082faadd5c90e72c83a47093a63a',1,0,0,1,'2022-03-04 07:27:17.535172209+00:00','2022-03-04 07:32:17.531871524+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(10,'6e88409e3883c1a47249e428030afd4a46bdaf3adb3b74c1',4,0,0,1,'2022-04-01 
20:43:21.757703546+00:00','2022-04-01 20:48:21.756458144+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(13,'9d3c2bde9fcef181a3141dfe59d449bf03f6779dad3580e0',1,0,0,1,'2022-06-26 13:19:48.533865696+00:00','2022-06-26 13:24:48.532164552+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(16,'00b123d52ea8f58379b740fdc5c898b02330ab9b366cb1b4',1,0,0,1,'2023-02-12 06:21:30.15120385+00:00','2023-02-12 06:26:30.140082454+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(17,'c471ce93392c0d3040af0cf6166f4f578c3c66dd180b6e0b',1,0,0,1,'2023-02-12 06:26:55.829311638+00:00','2023-02-12 06:31:55.824701077+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(18,'fe0a438e67687540efcbbbc28c8e6c1b8ac1216f99de33d4',1,0,0,1,'2023-02-12 06:31:13.245185592+00:00','2023-02-12 06:36:13.241695106+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(19,'5a469043f1fa43ff11e54ea242dd882a81aea68f168b9a34',1,0,0,1,'2023-02-12 06:31:13.622545545+00:00','2023-02-12 06:36:13.560890824+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(20,'8b31d9a38282dfe07ffcebdfbd40db6b9e49997c93bed570',1,0,0,1,'2023-02-28 12:45:48.518939706+00:00','2023-02-28 12:50:48.445951259+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(21,'47a4216f2b4e5885d4e53a3de2ffe95521d8a708ca26d31e',1,0,0,1,'2023-02-28 12:45:48.53865321+00:00','2023-02-28 12:50:48.439132728+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(22,'1b7be83871f396e4544c8445acfc8d308dbfe29c7f0197f0',1,0,0,1,'2023-02-28 12:45:48.538806791+00:00','2023-02-28 12:50:48.445073692+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(23,'c9f364adba95c6c46c162eaa3786702805595841c0150927',1,0,0,1,'2023-05-05 08:08:16.73107293+00:00','2023-05-05 08:13:16.722921676+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(24,'a90c87e30fa22ffee39e5ce157dd22c909f2026295e3bce4',1,0,0,1,'2023-08-14 14:36:52.042138928+00:00','2023-08-14 14:41:52.038644473+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(25,'bb059cde5663619a8918bde19b9f8236085725554d9d78c2',1,0,0,1,'2023-08-14 16:15:33.722630834+00:00','2023-08-14 16:20:33.719604033+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(26,'0733b8fc67adc82644c87a95947b24c5d368c633cff92eb4',1,0,0,1,'2023-08-29 06:30:44.934900329+00:00','2023-08-29 06:35:44.931280114+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(27,'567f3a39066fd99c567d14bcc374b25cee0ad71af08b9054',1,0,0,1,'2023-11-03 17:53:45.857200883+00:00','2023-11-03 17:58:45.853742836+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(30,'8aa13c93cbef46d36c8159da51fb41a469ae04f932980890',1,0,0,1,'2024-07-29 12:24:27.140614087+00:00','2024-07-29 12:29:27.136525982+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(31,'9314b171abc433d73b8db297c1c5c65dae0be53d39a71520',1,0,0,1,'2024-08-07 18:35:12.375119763+00:00','2024-08-07 18:40:12.371590392+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(32,'6a8fd483387734664f2a911545150a9bc0d9f12a744fe913',7,0,0,0,'2025-02-23 06:56:50.319087142+00:00','2025-02-23 07:56:50.31726796+00:00','[]'); -INSERT INTO pre_auth_keys VALUES(33,'f9e6f7c415c3dada62a7ddaaa84089c1ae5a79799135ddb7',1,0,0,0,'2025-05-20 12:42:38.359223624+00:00','2025-05-20 12:47:38.357400723+00:00','[]'); -INSERT INTO pre_auth_keys VALUES(34,'14bbc5eec5d659e311d1317b5b6fa100989d362e9c9391ca',1,0,0,1,'2025-05-20 12:45:13.242066722+00:00','2025-05-20 12:50:13.240121832+00:00','[]'); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -INSERT INTO api_keys 
VALUES(1,'S2nn85NliD',X'24326224313224454b717078586e6e7a35566461394c6a744b435a454f6c3539636351513630693035355052625772664b2f776f37467a586d592f57','2022-12-25 21:35:28.644697962+01:00','2023-03-22 06:22:18.724817647+01:00',NULL); -INSERT INTO api_keys VALUES(2,'1KZkpEyiMH',X'243262243132244854744d444b68744567707a2f4b4c4468316658742e6679415a445651546e3270787032334243315547366f4a2f3758326f30422e','2023-03-22 06:22:18.339101298+01:00','2023-04-13 09:32:24.318715268+02:00',NULL); -INSERT INTO api_keys VALUES(3,'6yBMrqvEDX',X'24326224313224714d6e6b52514a773661742f546f69356b45795a744f5077726c76393467713741442f7a344d726e59457032542e5a797162345043','2023-04-13 09:32:23.864995051+02:00','2023-07-12 07:32:23.45+00:00',NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2022-02-26 18:00:56.104436744+01:00','2025-05-14 18:46:33.171927603+02:00',NULL,'user001','','',NULL,'',''); -INSERT INTO users VALUES(4,'2022-04-01 22:30:02.657653341+02:00','2025-05-14 18:46:33.172257847+02:00',NULL,'user004','','',NULL,'',''); -INSERT INTO users VALUES(7,'2024-05-05 07:08:47.915309504+02:00','2025-05-14 18:46:33.17253525+02:00',NULL,'user007','','',NULL,'',''); -INSERT INTO users VALUES(8,'2024-09-14 08:57:26.215082073+02:00','2025-05-14 18:46:33.172790221+02:00',NULL,'user008','','',NULL,'',''); -INSERT INTO users VALUES(9,'2025-03-09 11:22:20.790371056+01:00','2025-05-14 18:46:33.173008092+02:00',NULL,'user009','','',NULL,'',''); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,"hostname" text,"user_id" integer,`register_method` text,`auth_key_id` integer,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`given_name` varchar(63),`forced_tags` text,`ipv4` text,`ipv6` text,`approved_routes` text, `last_seen` datetime,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes VALUES(1,'mkey:474325234da7eb017709065b11310982f2d27f555982cd1d244eed609b8915f8','nodekey:39346ddf789f4b8e0b75640ddb6c68345d9558a05da328ff7aba3f32e0d532ac','discokey:a3339402132a7e9270202a62006a6bf2c7bb878ba42b3d32dd041f48c4106756','laptop-97',1,'authKey',3,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["184.61.212.248:29690","[32a5:51:afc3:d256:5b83:97cd:bb34:13c1]:63964","58.104.128.216:9783","[2893:715c:4057:d56b:ec9b:b0b1:f551:dda2]:53876","124.44.86.49:25138"]','2022-02-26 18:11:55.92837512+01:00','2025-06-28 12:31:23.433667354+02:00',NULL,'node001',NULL,'100.64.0.1','fd7a:115c:a1e0::1',NULL,'2025-06-28 12:31:23.433017377+02:00'); -INSERT INTO nodes VALUES(3,'mkey:5b5f43f22e7a71c5fd565e80430c0e6dac71a5cdb6ba74b1bb54baa0c9ffdc03','nodekey:db60b0b5f7fad865e516f069068d809f23b89b787824398150083f8a556841f9','discokey:456266be182c7d6e5da46744ed79fd2478a8680682b9e4994f88a58fb8977b64','web-42',7,'authKey',NULL,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["59.239.218.25:49807","[cfa4:ad2e:1410:3b78:ce03:b437:6749:ff3a]:39835"]','2022-03-01 
19:26:46.242187887+01:00','2025-05-05 10:20:11.253224494+02:00',NULL,'node003',NULL,'100.64.0.3','fd7a:115c:a1e0::3','["172.19.40.0/21","172.28.0.0/15"]',NULL); -INSERT INTO nodes VALUES(4,'mkey:e87e820508bd86c183ebe4ffb80637130b6d7480c3db5b94fbe4be097ca31d6b','nodekey:36d6615362a2367b323e53348e7b300b5d3180954f6c73d619738b6b416f8b29','discokey:31bde1fc825d42faaefc6baeb19205176d325a0c385b3d5c42020fac5828baf7','srv-79',1,'authKey',6,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["157.68.216.200:462","[8dc0:10e:95b6:9954:2614:8dcc:6564:e167]:58499","[8e6d:95be:19a9:316a:d37e:bcb6:5e05:d7f0]:36162","[5bb2:705a:7f5c:3c9:db83:956a:1ce:5291]:9510","37.24.123.232:17607","17.23.6.93:59855","[43da:7790:faf4:7da1:8df9:7bd2:7edd:86cb]:32916"]','2022-03-04 08:27:19.383566031+01:00','2025-06-28 09:41:55.764039354+02:00',NULL,'node004',NULL,'100.64.0.2','fd7a:115c:a1e0::2',NULL,'2025-06-28 09:41:55.763896815+02:00'); -INSERT INTO nodes VALUES(5,'mkey:1390cccca2383b00ee50269c3f66a3282d6ac74d510da12d0ff6d06dfa585d6d','nodekey:10bc9579231e8407d16e1145240057b151187c0f713751369fee9d784a0396f1','discokey:bbaae038137272e4ffe36cfa61b5754de21be50116ac18ba68993ac3fef0bc86','desktop-07',8,'authKey',NULL,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["7.15.39.193:11845","87.221.84.2:11620","71.36.112.35:10587","68.226.133.92:29098","11.142.49.170:22792","32.34.123.165:35870"]','2022-03-05 13:54:23.660591381+01:00','2025-02-23 12:50:07.99213101+01:00',NULL,'node005',NULL,'100.64.0.4','fd7a:115c:a1e0::4',NULL,NULL); -INSERT INTO nodes VALUES(6,'mkey:bbc0e9d7c02e59a66359201518e7c194d0767a35c8ebf01160df46c99e2a4fb4','nodekey:c03fe802222407ea830f88ac6bffc3b151402e6a9bf51329f252a6235c3e9edb','discokey:f8a7aa10dc7ca161f31103ea4f240f04e4f1eabaecd4cef7ca7e8c06cdf8f7f8','desktop-29',1,'authkey',NULL,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["146.236.198.32:16212","[a38c:efed:7c57:2d0f:4348:fbba:e351:9d7]:10415","[8c73:5e49:de7d:6494:c1e:7525:20c1:4e51]:31219"]','2022-03-21 15:52:20.739594362+01:00','2025-06-27 06:44:54.03512388+02:00',NULL,'node006',NULL,'100.64.0.5','fd7a:115c:a1e0::5',NULL,'2025-06-27 06:44:53.041866553+02:00'); -INSERT INTO nodes VALUES(7,'mkey:273845dc25d0db5a03a825317314a99e4ba47c754b48e864032dd9dbdd780069','nodekey:db16750752e4236f2db3adb3dc87b779a99dbca4454d680e80b1db68485de119','discokey:c9739f2584f4f04352e167e14b9b2a7f849a7e174cf7ed057988de85284ff98b','srv-19',4,'authkey',10,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2621:a1d2:a66d:a6cf:3592:dff2:93dc:5b32]:62817","[e1e8:e93f:ab91:4027:e416:9932:8e9:6fcb]:30261","215.10.187.249:19488","[bfe7:ebf6:e338:79a6:dba:4051:a7df:10]:31519","128.57.253.0:39500","25.178.136.37:18298"]','2022-04-01 22:43:27.318756043+02:00','2025-06-28 03:32:29.065962525+02:00',NULL,'node007',NULL,'100.64.0.6','fd7a:115c:a1e0::6',NULL,'2025-06-28 03:32:29.065314923+02:00'); -INSERT INTO nodes VALUES(8,'mkey:de3d486a2491db953838ba8645073e4bd830d5a73d3cc4d2de21f5c89440acaa','nodekey:e90305687f5829ee9dd005d56dda139f7821635a8cc78f9632ee95c2e9a77a1f','discokey:a6aef9c5cc375914e419eb3d1d20d0a7c22829589dd831ae3fb389b6928d84de','desktop-58',7,'authkey',NULL,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["[3486:5bc7:ada8:8da1:9df9:3cc6:d018:ee8b]:65112","43.182.159.11:27315","[12a4:b567:a06e:b675:a878:2f42:e5e1:429f]:60924","[171f:b843:ff2e:b9f0:9890:1aac:42e5:c2ab]:47842","174.7.154.174:51720","27.60.143.54:24292","[75c8:63d5:d781:ffb6:6d87:822:dc90:8e21]:58598","219.62.61.186:43774"]','2022-04-03 09:38:46.178224968+02:00','2025-06-28 
08:41:48.401988354+02:00',NULL,'node008',NULL,'100.64.0.7','fd7a:115c:a1e0::7',NULL,'2025-06-28 08:41:48.401718826+02:00'); -INSERT INTO nodes VALUES(9,'mkey:b611d5e2fc1ab1d3170a28fc8b95533488d8b60037899c23b3a92c90df0a4184','nodekey:365a5cd793fb1bebd177654e622112f431f472d89d3f4c410c8dfad67da69b67','discokey:d5963ebed732aa08dbef53f02a882074fec07b6dd734ea960879ff3a277d2b22','web-21',7,'cli',NULL,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["83.54.151.81:48561","[b513:b737:33f3:dd2a:df4:a90f:6797:6b1e]:7773","166.251.234.109:41163","[c908:b81c:b4c8:38d3:e311:b3f6:3067:227c]:11541","[f41f:b9e9:9205:210a:4ff9:ab34:57fd:3148]:21852","[3d49:412:9781:b399:f9a5:587e:bd2e:a579]:3775","[867:a7e:7e04:bfbb:6286:f95f:1abd:9b5a]:63765","147.22.45.153:11543"]','2022-04-09 09:43:07.09027176+02:00','2025-06-27 18:34:44.020047301+02:00',NULL,'node009',NULL,'100.64.0.8','fd7a:115c:a1e0::8',NULL,'2025-06-27 18:34:44.019955758+02:00'); -INSERT INTO nodes VALUES(10,'mkey:6741c4c82ad84e841fcfbf08ef6f04eff5d2f9b0ba84d5ee461fb33b2ca5cd29','nodekey:e25cf4bd0f6a2b4b124de9ceb3ee35b9dd7c5fea804de904d3654d662f4d17ee','discokey:ac9ede38c3ba09f2a29106a1e7d51fffb8254a291094f959417f172d3d59714f','web-40',1,'authkey',NULL,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["147.92.114.14:38013","223.127.253.143:5148"]','2022-06-26 13:57:07.40762063+02:00','2025-06-27 14:19:25.551023302+02:00',NULL,'node010',NULL,'100.64.0.9','fd7a:115c:a1e0::9',NULL,'2025-06-27 14:19:25.550738565+02:00'); -INSERT INTO nodes VALUES(11,'mkey:7c84b8eea26e6d6a14e08ee933806ca4fb854bd0a168ee576efc58a69523ea0f','nodekey:00cb375ac5580c5718d7879942eba4378d73bf4026ae583a72c6d72f9a2f5958','discokey:69afa6724b1c46f7947a232489f51bd458db562c1144d41ba1b6075395e62529','db-61',1,'authkey',13,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9599:6347:eeed:7573:eb34:676:69d5:954]:17338","87.151.18.221:6598","110.245.60.227:50528","33.141.84.140:39332","141.199.13.112:28607"]','2022-06-26 15:19:50.566498735+02:00','2025-06-27 22:13:31.905300414+02:00',NULL,'node011',NULL,'100.64.0.10','fd7a:115c:a1e0::a',NULL,'2025-06-27 22:13:31.904659225+02:00'); -INSERT INTO nodes VALUES(14,'mkey:4924fd3356d306df5387fd14994f605fefb34228482555a798ac35fa7a62c5f0','nodekey:541fea598d1b043201c43a983820265ec347c42c2716733737ff7c2e7be9398e','discokey:d6cccc631dc105d670b2533c3de0bd6f22eb54cc0a6ccea85d891d5ff2b2c81e','desktop-54',1,'authkey',NULL,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["4.222.6.136:38260","[b46f:c4d4:f889:3155:aafc:d69a:48ca:a4c1]:44786","[c7d0:cf6f:bb45:6922:7fd7:adb7:59f5:499e]:60522"]','2022-09-26 16:07:54.206927686+02:00','2025-06-27 06:44:53.567922585+02:00',NULL,'node014',NULL,'100.64.0.13','fd7a:115c:a1e0::d','["10.0.0.0/12","172.20.0.0/14"]','2025-06-27 06:44:53.042608133+02:00'); -INSERT INTO nodes VALUES(23,'mkey:54219375ce0ab01cd59b56ad984df668ae7cbe49b7b91e44877ae0d7012d6106','nodekey:2b0c163bc497f6214b25d6a333a37d273c260630dcc91782897f19a579f92a5b','discokey:4cf24a025241ad9a69c20d40513dfee5626f934d738e18f4a64fd9a1ace2cb24','srv-15',1,'authkey',23,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["[60da:30ec:55bd:5ab3:3fe7:5492:1fc8:92f3]:36566","105.238.97.70:40193","[8c84:2d7e:5e1a:2f51:8754:aa54:b421:a19f]:60672","[5101:751a:5e00:cf46:8b50:8ba9:3c8e:4c7c]:33862","[b9fe:3630:1807:826:a4a6:e5d2:532c:bdc8]:43617","[25b5:5b72:2ecc:193c:b61e:329:81c6:5af6]:39854"]','2023-05-05 10:08:17.301597525+02:00','2025-04-03 21:18:57.701435228+02:00',NULL,'node023','[]','100.64.0.14','fd7a:115c:a1e0::e',NULL,NULL); -INSERT INTO nodes 
VALUES(24,'mkey:1109db54ebd9d9434856deacc7bd48e4b6398bc20bee5848a03961570d0612c3','nodekey:091f831b4c85c72746639611f5048ea7c522cd65a8e8babb00b8d67914a1db0c','discokey:29c076af146813259e9dd97c1771cf6d47e6e4968d92b521fcc9305790025084','laptop-94',1,'authkey',25,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["219.227.129.151:59871","44.14.163.233:8010","[dc07:21d7:4b8b:9784:c4c8:bd7b:122a:a5f6]:19866"]','2023-08-14 18:15:34.292188686+02:00','2025-05-13 13:12:57.458459972+02:00',NULL,'node024','[]','100.64.0.15','fd7a:115c:a1e0::f',NULL,'2025-05-13 13:12:57.458382646+02:00'); -INSERT INTO nodes VALUES(25,'mkey:8d6fa35951ea76da6a6d85bb63eebf72dbf8e803aecb33a7cc8fada2206ba28c','nodekey:1606dda0995e7b12da0367027e8e1af41c2bc9a086151de09ada9ac833540022','discokey:5d66a541c0458f9178ed892cc57435f8e456722ab5c5b387887c7b3dbffde867','email-53',1,'authkey',26,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["210.72.228.199:14674","101.172.44.176:5878","[c018:a17d:2177:f7b8:482d:b68b:9b7a:c3f4]:52482","[99d:8ae6:556e:9e4a:b721:e12f:cec4:d571]:25093"]','2023-08-29 08:30:45.518580154+02:00','2025-06-27 06:44:54.325931948+02:00',NULL,'node025','[]','100.64.0.16','fd7a:115c:a1e0::10',NULL,'2025-06-27 06:44:53.144614504+02:00'); -INSERT INTO nodes VALUES(26,'mkey:94b81dd1a9a0e1d42764099b075c4efa024bd0fbc0a9fc0c3cde7025c4d5a9a5','nodekey:b357796714da554353fc1db9d93898f6767d1f90908c3010c1743f15459d008f','discokey:c03304b111c7a01896bf3a8726a618d33981f8accf52b92d18b5e45f5896fadc','web-27',1,'authkey',27,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["62.26.91.22:45257","37.207.25.123:5981","[5b20:30d9:10ce:eafd:90f2:bf34:b3e8:fedb]:18638","20.120.13.221:18287"]','2023-11-03 18:53:48.108033118+01:00','2025-06-28 12:21:15.876418767+02:00',NULL,'node026','[]','100.64.0.17','fd7a:115c:a1e0::11',NULL,'2025-06-28 12:21:15.875770984+02:00'); -INSERT INTO nodes VALUES(29,'mkey:5a805e2dceef2becc9d1fa1f2ae9854fd6e773570d808956b7eb8df88acf2795','nodekey:9eabd54bb541ec1ed45c2baa516d0a15011636079d750f3d36f6e570d9f0f6b3','discokey:132c6950659404dc4c4583ef60d6a5809486b0d47a2828c093a6d12c7ea61b1d','web-93',1,'authkey',NULL,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["39.206.106.178:64472","[a3b:5db7:67ad:9dfa:1b70:91c6:15ae:e5ad]:26117","[fac8:c08b:4b48:8273:42ed:d152:8301:66c4]:15174","67.172.102.32:57252","85.212.234.244:43593"]','2024-06-02 09:46:39.307697473+02:00','2025-06-28 05:47:36.860000326+02:00',NULL,'node029','[]','100.64.0.19','fd7a:115c:a1e0::13',NULL,'2025-06-28 05:47:36.859751546+02:00'); -INSERT INTO nodes VALUES(30,'mkey:1e4d9c9b335bc18582f57648b44139b4c263b22ca46e4bb7b6d95745e17d9c4d','nodekey:334930dbc0b41f2ad56ffaaa7fb320379f97deddd1b6395fce9033b64446eb63','discokey:d28e8e4fda1139832bb9203d623ab9accab1bb64690a495c1cdf96b6fe822f62','web-35',1,'authkey',30,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["216.16.62.70:35569","213.115.82.246:7596","195.216.27.246:54466","[c47f:35c9:41fe:6f5:a40d:3e8b:6679:411e]:12963"]','2024-07-29 14:24:27.684620193+02:00','2025-06-27 06:44:54.03534083+02:00',NULL,'node030','[]','100.64.0.12','fd7a:115c:a1e0::c',NULL,'2025-06-27 06:44:53.247773264+02:00'); -INSERT INTO nodes VALUES(31,'mkey:6ddb2155e63e4641d21add6e086675e22c209566578f70866c7de59006859ece','nodekey:85fe5b9f1f6cac554c527a85dad9a490a2962eaec8e4727537769433a5a01c14','discokey:d3bb4ec681d8d0230a71e2e01cf871c67c814a8125a4c6247ad6885de3980ee8','db-75',1,'authkey',31,'0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[ee98:8de3:1f5b:731c:751b:ba46:a4a7:8760]:23245","[6eb3:8c34:e6c9:4c6a:25c9:89ca:c165:354e]:65112","[7003:63bb:6429:d463:96a6:990:2bdc:217d]:4300","[d820:584e:2b46:6915:6420:3056:f76:9e33]:48885"]','2024-08-07 20:35:12.944541318+02:00','2025-06-28 12:22:15.988053559+02:00',NULL,'node031','[]','100.64.0.20','fd7a:115c:a1e0::14','["10.154.123.128/25","10.128.0.0/10","192.168.246.0/23","192.168.150.0/25","172.16.0.0/13","172.16.224.0/19","10.23.224.0/24","10.0.0.0/8","10.127.59.140/30","172.24.0.0/13","172.31.132.0/22","192.168.64.0/19","192.168.56.0/24","192.168.224.0/19"]','2025-06-28 12:22:15.98744456+02:00'); -INSERT INTO nodes VALUES(32,'mkey:ce4d4da08c077577b78b78a4e914507d2026e96b8584921850ea3e7dd8c0084f','nodekey:9a20472651960cee39c0f528022860269ef1cd53eab071f4aecce54f80fa874a','discokey:91cca1b79300d8ffe78e37aa02e31673d08cccdeb52aab242207efa3d75d6af0','web-49',1,'cli',NULL,'0001-01-01 00:00:00+00:00','{"fake":"data"}','["[aa0f:320b:3c12:4808:b5c7:510:c4d9:2009]:16605","[f87a:6765:f859:7291:198d:f454:4d4:8f6e]:23352","[5ddf:114:328:db55:fa05:4192:5d30:acc]:9087","86.180.96.120:14053","156.124.91.30:504","189.130.203.73:33525","[9470:53d:a212:874c:c627:32d9:5f15:39a]:61202","188.181.153.153:41358","208.6.130.2:5757","[a47e:177b:a793:d8c4:7191:f01a:4819:fe98]:47998","[9561:5615:b7c5:b2bc:4f65:b1f:a9da:5e0a]:2120"]','2024-10-26 07:18:04.947942936+02:00','2025-06-28 12:27:59.883807194+02:00',NULL,'node032',NULL,'100.64.0.21','fd7a:115c:a1e0::15',NULL,'2025-06-28 12:27:59.883542043+02:00'); -INSERT INTO nodes VALUES(33,'mkey:307000af9e5b5c7efee75c3ed7d23090866a9e43955e23aac2dae50b46073c1e','nodekey:b18bd32bbd3bd77d274dea94361272cf171f407989ea6553b40acbd1fb571706','discokey:73e11d3631b882516e5fcd41d6a95f44e15c7322c8428215bc19aff366574a35','desktop-05',7,'cli',NULL,NULL,'{"fake":"data"}','["81.68.14.120:23633","39.210.11.46:40333","96.70.183.102:16729","[b2ef:7817:aea4:73a1:6391:d75b:b4ce:f0b3]:36420","[747d:f46b:96f0:f467:e625:3462:a216:ad48]:4718","124.57.193.30:26578","[e11c:edd3:59b3:c56b:a8b8:ccb9:1716:ceda]:61482","[ef1b:9f1d:9db1:6734:ba55:756d:3d3b:80a6]:30155"]','2025-02-23 07:59:15.141530181+01:00','2025-06-27 22:01:05.602056574+02:00',NULL,'node033',NULL,'100.64.0.11','fd7a:115c:a1e0::b','["172.19.0.0/16","192.168.18.48/31","10.192.0.0/13","172.20.0.0/16","172.28.162.48/29","10.175.189.32/27"]','2025-06-27 22:01:05.601785141+02:00'); -INSERT INTO nodes VALUES(34,'mkey:bdd5a76f6d7f946c4bb9454abe28e7d2d212df3c451c9d068c2985649d193f83','nodekey:f8ed0148a1314428adef803acb6eaf864d1513e3388eaaeb92b7e6a2cd7bdc12','discokey:634d6b49a81f9443ce9a97b04b64c962c6707c182d85c9cdc59abd4a16c6de19','db-92',9,'cli',NULL,NULL,'{"fake":"data"}','["111.62.89.28:35728","150.196.197.67:64757","114.51.87.168:47070","[a4ce:7387:d7b4:514e:c1be:706c:1812:f866]:48356","[167f:1873:3939:734:49cc:5627:6f93:e1c7]:31097","192.35.174.224:36261","137.15.122.25:55742","[c9cc:ff0f:fbb6:e583:c69:78fa:a307:4bd2]:3181","185.20.86.11:20604","[e220:9785:d01f:28d5:1e97:4df0:2443:cc0e]:46902","[6152:ea90:3c89:e905:5201:32df:a4d3:b4bc]:41319","[20e3:1e48:26a7:5c2a:8eaf:1f7c:f36d:6a38]:22275","[73a6:792f:7678:4806:c617:d54e:2e74:862e]:25300","[f626:b8e5:f3c8:9d1d:3f91:a752:5cd5:57c4]:35608"]','2025-03-09 11:42:30.627548688+01:00','2025-05-05 10:22:29.0194247+02:00',NULL,'node034',NULL,'100.64.0.18','fd7a:115c:a1e0::12',NULL,NULL); -INSERT INTO nodes 
VALUES(35,'mkey:bee84dff5b07ea34d3ca9d7bc3824a4226f725e7f7fe33f879f11af907e73670','nodekey:f022dec5a44c0f09e359106d9f7ffebcfa21bb74103f16c7072e994e8a3094a8','discokey:b24e98fb8f6951bb8c4c081bb7fa326b8bf7f43181b160c0b8555deeeb55924f','lt-42',9,'cli',NULL,NULL,'{"fake":"data"}','["150.171.114.186:31321","147.156.202.29:42507","[b262:bff9:607:762e:cf05:3029:1988:9095]:14244","[8083:6d41:4186:7a48:2933:1fb1:3285:8dcf]:42595"]','2025-03-09 12:06:19.756893731+01:00','2025-04-07 13:08:46.652035521+02:00',NULL,'node035',NULL,'100.64.0.22','fd7a:115c:a1e0::16',NULL,NULL); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.24.0.sql b/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.24.0.sql deleted file mode 100644 index 45336438..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.24.0.sql +++ /dev/null @@ -1,98 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'463a8b372963aeaca12400faa0c7ea29e9bfa3b4c59e9622',3,0,0,1,'2023-05-19 05:09:19.66636462+00:00','2023-05-19 05:14:19.664224869+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(2,'77a019cb12c0d0b9347a17ab23fa4c87983814fe36bb2fbb',14,0,0,0,'2024-05-03 08:13:55.8614948+00:00','2024-05-04 08:13:55.85782156+00:00',NULL); -CREATE TABLE IF NOT EXISTS "routes" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,"node_id" integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -INSERT INTO routes VALUES(1,'2023-05-19 07:09:23.387641743+02:00','2023-05-22 09:48:18.908103256+02:00',NULL,3,'192.168.224.0/21',1,0,0); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`ipv4` text,`ipv6` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes 
VALUES(1,'mkey:0ea01e8cd608548a171ebcedc16da73b25ec18d58861be7a6b4274eb17baf6ba','nodekey:0fcf591dc1997c1d7388a3835bd24dae48240e5b03b6a7f5f33b2e813d222bcf','discokey:ed27f0b3c04222f0d05bfbc6fb5787338275ed0e32fd42fe68c669aca0f3d7a4','desktop-54','node001',2,'cli','null',NULL,'2025-01-16 10:47:40.213920406+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b8f0:cf44:22c8:82d1:f79:c3eb:9144:c24f]:6142","[8e25:af1:e31a:8fcc:7dcf:be1:a3c9:ed6a]:38766","[37a1:215f:bf3d:3fb1:6399:f7cb:4048:3ca8]:51666","137.242.125.72:25376","86.84.184.63:26062","[c187:a8:1181:8321:28ab:fe72:b0b8:8ec6]:41236","[9089:461f:ad5b:68d4:f7a8:618a:a52a:56ac]:63162"]','2023-05-17 19:38:13.531518257+02:00','2025-01-16 10:47:40.214263246+01:00',NULL,'100.64.0.1','fd7a:115c:a1e0::1'); -INSERT INTO nodes VALUES(2,'mkey:348464565f0009e9205c9a78f381e12b9cf63ee2006b8e95d720a274347c4847','nodekey:bff28b3734e3343ded434f23bbffe562bab527165158c992e1126c94f39ea877','discokey:9c077cf7719204300ecc1ab64fb6bd3cc0cd398a4a208e37ed79e4500db3f86c','db-91','node002',1,'cli',NULL,NULL,'2025-01-26 08:14:11.090449666+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["165.5.239.20:48876","44.184.117.89:36643","90.223.136.253:7056","[e43c:be7:7f4e:8217:a009:7f9d:a510:1af6]:18872","[3a86:865b:2f4c:f3b7:88a0:4aa2:f05d:c3fd]:47680"]','2023-05-18 10:09:21.757289398+02:00','2025-01-26 08:14:11.091108158+01:00',NULL,'100.64.0.2','fd7a:115c:a1e0::2'); -INSERT INTO nodes VALUES(3,'mkey:e44e45bd1656b97e8b328dc927ccab6cbce6c869992f70fb9c6031b980a1bf2b','nodekey:1497e461383f07b4f7cd7c76036b6fcabf23f46ac9f3651e22fd1f67eee22804','discokey:d84cef5408f6a40a7d789abbfdabd99edcd794e066894632f3cbb20d0e90a100','lt-24','node003',3,'authkey','[]',1,'2025-01-22 11:01:10.431892865+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["219.238.10.240:64443","[5e3d:ce1d:284e:9f89:155c:bff2:a65:d667]:51808"]','2023-05-19 07:09:21.399903526+02:00','2025-01-22 11:01:10.432002736+01:00',NULL,'100.64.0.3','fd7a:115c:a1e0::3'); -INSERT INTO nodes VALUES(4,'mkey:a2be538ff79403d577686c0ff675e1ec1b0da10902211d25ccf8adb58deaa6be','nodekey:69977d7f034a8ca09f0f08dc2d6410eb2b6f5fd343b6c88124ec37d6bcddb4d9','discokey:2a3fadbffba90bd0d74d432626a92d1ce1cf0297d4ad51cfd2011c4de7e198fa','db-36','node004',4,'cli',NULL,NULL,'2025-01-26 08:13:58.345329896+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[8a81:30b8:5d50:73d7:9039:f2ce:c1be:a099]:43373","76.78.238.163:55069"]','2023-06-10 09:31:51.940506933+02:00','2025-01-26 08:13:58.345780429+01:00',NULL,'100.64.0.4','fd7a:115c:a1e0::4'); -INSERT INTO nodes VALUES(5,'mkey:d1533206380a3d91fe09ba9aca808b8caada2cb469e9065acdebab13f22a215f','nodekey:961c8e15bdb7e261c0d359f72256c218447c9c007032f102a418d07b0b829b1b','discokey:d6e4dec2b26039771350ddf132dadf1a5d8615608bd99461d267271e7b53269c','srv-09','node005',4,'cli',NULL,NULL,'2025-01-26 08:13:52.985209632+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b12e:af88:3c55:41f8:5fba:2aa3:3248:15c1]:23643","186.84.191.82:2621","[c9d0:4e15:513a:171b:9911:f9a0:cccf:cd83]:24151","66.84.158.199:63451","47.94.115.240:6677"]','2023-06-11 13:56:42.694329408+02:00','2025-01-26 08:13:52.985552336+01:00',NULL,'100.64.0.5','fd7a:115c:a1e0::5'); -INSERT INTO nodes VALUES(6,'mkey:4cf09b5d948c6492d0706b7972a9f46f03261dc73244cbd098653461c1426dab','nodekey:dd207345a8cdda46d50803658ab26c0fc7f28283217c6aa616a945885e92fca3','discokey:a0e6dfe46d6380aa0d131816dc0ea3287aad89196690b9bd91224f904816aa06','email-01','node006',4,'cli',NULL,NULL,'2025-01-26 08:14:07.254903549+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[f739:43a6:4363:817e:34d9:3cc:1272:dba1]:26597","220.204.45.94:59135","187.96.173.136:34092","[fece:76ed:4c23:1e89:f225:97a1:94be:b5c4]:33472","[e1e6:a14:535a:668b:698:f125:543f:b5b7]:31740"]','2023-06-11 13:57:44.975695604+02:00','2025-01-26 08:14:07.255223389+01:00',NULL,'100.64.0.6','fd7a:115c:a1e0::6'); -INSERT INTO nodes VALUES(7,'mkey:5bbbd75488079eda04771e44ddf21c5866640024a947e66cfd3c3bafb19d3e11','nodekey:022306df92cac976efdd1dd5d4454d0e20eb46ac6f7b847dbcc382a399d1c97e','discokey:4980d19a6814405d26c40dee31a93a5f43e18b41f6f6f4908ee59066e04f8179','lt-67','node007',4,'cli',NULL,NULL,'2025-01-26 08:12:22.511406544+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[8ddd:4aae:2b94:4c1a:d05f:65d2:2bdb:b400]:42839","[c493:a941:58b6:7634:6221:70ba:20e6:40c4]:35486","[9b54:61a6:c678:edac:956f:741e:5e2c:9ec9]:17930","[cd3f:af1c:fcb4:bcb3:806e:adf4:3f3f:ab36]:58513","[7a83:6a84:9f15:ba1a:9671:3d54:2e31:d5e9]:64893","167.223.65.194:6907"]','2023-06-11 14:16:56.951313537+02:00','2025-01-26 08:12:22.511838306+01:00',NULL,'100.64.0.7','fd7a:115c:a1e0::7'); -INSERT INTO nodes VALUES(8,'mkey:1abadbff7c8bdbe7a3104396c70068fa0a7aa30e405ef7d088738a8510877b39','nodekey:1251afbe69b3212ef1137c4bf2ef842409b8440d9ce32470ce4e8fb245480ccb','discokey:4a7b4155015d4cc2418ccb4005d3408b610c37b9f7bffdc625e2823f3e44514b','srv-71','node008',5,'cli',NULL,NULL,'2025-01-26 08:14:08.383652695+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["107.117.104.69:47662","[adea:86d3:b312:f445:d835:6023:f1f:f928]:27483","124.104.237.237:47957"]','2023-06-11 19:59:30.401970393+02:00','2025-01-26 08:14:08.384047921+01:00',NULL,'100.64.0.8','fd7a:115c:a1e0::8'); -INSERT INTO nodes VALUES(9,'mkey:9ced3abf3a281d71b6f361a75769c571c4aff9e4cbab5b317f6cf18918f66bbc','nodekey:e533ead6c5b810764db07631d7b9b14434bf66549837d33a6187daac1ab1d7a9','discokey:571a7ba01a8a0fff2af21d0f135caf8ae2ee0e15fd5ab996d2e490cf968d7217','email-22','node009',6,'cli',NULL,NULL,'2025-01-22 11:01:12.033798488+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[5b29:3bfe:7aa9:2767:900a:8142:1ae:eb94]:59942","165.200.68.0:39862"]','2023-06-17 19:40:45.468789461+02:00','2025-01-22 11:01:12.034166898+01:00',NULL,'100.64.0.9','fd7a:115c:a1e0::9'); -INSERT INTO nodes VALUES(10,'mkey:6cb022f8457bd430cd8ca6e88f1d4e3678965fd0af659ef29c956fbaa81c870c','nodekey:b4dc05758407f598a2cb77f98d5c4fca9dc3820e31976887942b54a4d44e43bf','discokey:fb9dba09bf8b64d2ef3b35d917d41d5208aafb2b013a53da0d2196a152a3cf9f','desktop-07','node010',6,'cli',NULL,NULL,'2025-01-25 22:09:32.216704757+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["2.116.83.236:33635","41.172.94.101:32111","223.204.123.230:38090","[9394:b33d:67e:fde5:21f8:77e8:97a8:bbb7]:47884","14.155.207.231:4806"]','2023-06-20 11:18:35.905417341+02:00','2025-01-25 22:09:32.216953236+01:00',NULL,'100.64.0.10','fd7a:115c:a1e0::a'); -INSERT INTO nodes VALUES(11,'mkey:e76a4d43284c0d19fbcf84b38d4bbdc5f978f8922f2ef6dcbe15170a3860a7d8','nodekey:85652faac3a65cb620073e7402506930daa194a7a3555c335741ee65a920530b','discokey:3a6aac7afdc916c9bc84b77b90c521c0e08f3fbf06ce89cfb2281271ec070b1a','lt-86','node011',7,'cli',NULL,NULL,'2025-01-25 21:58:31.767301396+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["22.207.212.228:27547","[2c3e:2260:8c34:275:2287:b721:1bdb:d462]:59540","70.115.64.117:6906","147.25.71.84:48460","[7137:65e3:83f9:ae56:6e7e:cc0c:9d64:64fd]:52919"]','2023-06-20 11:35:15.063855316+02:00','2025-01-25 21:58:31.767562461+01:00',NULL,'100.64.0.11','fd7a:115c:a1e0::b'); 
-INSERT INTO nodes VALUES(12,'mkey:80b6649b616453e50d516e4890706ec7327a137f8051a4e17fa0f50e69408e84','nodekey:ead1339d471cd9067122e4f39e0e0f4655c1bffce99f7edfcba80812fc7a2db2','discokey:f05ebdaf81717f2a41c4985b47698a319c787056efb5412600ebd8a793147687','web-89','node012',5,'cli',NULL,NULL,'2025-01-26 08:14:25.345334172+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e758:c726:ced4:43ea:60fd:3ad8:1b8b:b514]:15200","[314c:5aba:d6ac:63d3:4de3:2c46:5e60:903b]:30789","[6883:e8c2:287f:71c8:9417:824b:b3cd:29d4]:51083"]','2023-06-20 18:22:35.061914624+02:00','2025-01-26 08:14:25.345710569+01:00',NULL,'100.64.0.12','fd7a:115c:a1e0::c'); -INSERT INTO nodes VALUES(18,'mkey:70c9e841ef7964d5d754bad3f78d29b99f19f84475ac2bb72c3b38110978a195','nodekey:046c467dfce392942dbd84c1346611d3bf8634ffd51e4de6a8249d6555d28a03','discokey:38e293f7476b6c04489ff6a8e8664b8b5cdd4e93f4c39b8f065ba8e251744af4','web-67','node018',9,'cli',NULL,NULL,'2025-01-26 01:15:31.831355377+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["33.51.84.119:46756","[c844:704e:c3c0:e709:8f94:3564:1b4d:d2d9]:8357","[169:f098:49e9:99af:27bb:802f:fc22:bd59]:63749","[2acf:3f32:1763:5d21:88a:4af:8779:1b1d]:36283"]','2023-06-22 08:30:54.08720463+02:00','2025-01-26 01:15:31.831742232+01:00',NULL,'100.64.0.16','fd7a:115c:a1e0::10'); -INSERT INTO nodes VALUES(19,'mkey:338d7e5ca6c5a8c65d7174eb1af312107408a16b7769b7088c8acf64134d9722','nodekey:440bd6fd6eea65489be99a6a95f4b6315ea55fa31fa1d6905a53ba4c046e0817','discokey:80890f521819e60c3778df342a98a9adfe2f46c15b0f55aa14f17df002aaa398','desktop-48','node019',10,'cli','null',NULL,'2024-08-07 09:35:12.220368767+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["191.156.96.176:5489","[166e:c30:4d73:440f:456a:22be:71e1:142b]:33036"]','2023-07-03 15:18:03.829428454+02:00','2024-09-18 16:15:14.037990869+02:00',NULL,'100.64.0.13','fd7a:115c:a1e0::d'); -INSERT INTO nodes VALUES(20,'mkey:99e6d97451e391c261c72abf2e5d452754d7526a1d4563307259c88343c8a2cf','nodekey:581017de468a3dab721afad32b6d549e031026e4ff4a43a80ef50b035c5f60b6','discokey:af34366f05ce93952843ebba11ad2ede1fc68ca4016f5cd8569a4d057bfe3a59','srv-39','node020',11,'cli',NULL,NULL,'2025-01-24 21:43:10.483328442+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["17.32.175.58:63234","81.17.18.155:10091","171.238.29.81:53035","[cfa5:e342:a85f:891c:cff3:24ef:4692:f8a0]:31064","207.187.74.99:12216"]','2023-07-10 12:40:34.838579199+02:00','2025-01-24 21:43:10.483531762+01:00',NULL,'100.64.0.14','fd7a:115c:a1e0::e'); -INSERT INTO nodes VALUES(21,'mkey:0d7504dc98ef6995e1f14cb39cd28a340b08b05debe9ec639658e6849aa3c1ca','nodekey:811ced3ca0a2ecf6076f7fc549402e4021300609338990634e648bc620a3914d','discokey:542283ed990035a314aeb44a01d428aeb9baff09786dc18463f085d98900f762','laptop-07','node021',11,'cli','null',NULL,'2023-11-20 07:19:19.447470862+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["1.118.111.70:57112","9.144.234.157:13194","[86df:346:dfa1:bd79:e59e:3d3f:b98a:63d6]:59098","[4b06:6a05:94bd:4bca:676:93a4:2e20:b923]:23713"]','2023-07-10 12:42:43.290469734+02:00','2024-09-18 16:15:14.03886353+02:00',NULL,'100.64.0.15','fd7a:115c:a1e0::f'); -INSERT INTO nodes VALUES(22,'mkey:3bc73d052a1fc1b031e300db9022a617c04a055225bca5a3f3efb931b2a216ce','nodekey:3527fc46bb993bf23c7866696d860b29b5a54314c62e4d35ab44213278bc821e','discokey:db9ba442b211d4cb5b4db8f96d8f71c8bdd7e3501310d3cf5685ff821af58ca6','web-70','node022',6,'cli',NULL,NULL,'2025-01-19 14:04:19.025822598+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["218.87.248.142:13273","222.154.224.82:48446","9.237.225.227:64622","[46a1:e1e6:6b82:6599:acc7:2273:afea:f09f]:38018"]','2023-08-05 12:08:48.132161695+02:00','2025-01-19 14:04:19.032514264+01:00',NULL,'100.64.0.17','fd7a:115c:a1e0::11'); -INSERT INTO nodes VALUES(23,'mkey:0799589ca5f3434d831c8401f52a58017a21139fd1bffe1ea482df70f4490a5c','nodekey:56ae1d35eb872ad470997fb776347e21ad052fd0715c7a5e8b80d9502ec71a40','discokey:23fe8efc052f59d556a6d8314a2757b07df13305d74143f7171de11aafcea455','email-11','node023',12,'cli','null',NULL,'2024-12-07 19:25:56.935152754+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[4c65:a6f:293f:8d48:f136:8b6a:185a:7ec7]:43854","43.114.58.145:47143","[758f:df49:401f:abcb:57ff:c04b:79e6:8519]:62313","[2458:76c5:7d48:c26f:34e:1663:6911:ff6e]:22202","66.66.46.14:16686"]','2023-12-15 08:05:56.592241745+01:00','2024-12-07 19:25:56.935317645+01:00',NULL,'100.64.0.18','fd7a:115c:a1e0::12'); -INSERT INTO nodes VALUES(24,'mkey:4c4c4036f9b8b19936757b1a5c582a4d5fd016de083063aac974d6a487775946','nodekey:e524384415f37298c43007df3c13da08a5b1156441f62e606378a5eb96a80f0a','discokey:7b9dfad48b0225f02fd0b196b2ef964b538c43795539c30fbd08e5f930bccd6e','desktop-97','node024',12,'cli',NULL,NULL,'2025-01-26 00:04:55.348199222+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e79c:8c65:bff5:d873:2599:3f53:d49b:617a]:59079","137.167.193.123:34677","159.52.2.200:7205","[5b7b:e424:ea77:aa8a:f835:d6bb:2e1f:52ea]:19152"]','2023-12-15 11:14:36.765183054+01:00','2025-01-26 00:04:55.34833514+01:00',NULL,'100.64.0.19','fd7a:115c:a1e0::13'); -INSERT INTO nodes VALUES(25,'mkey:da46f6615ae2ebf0b10dd6941ea821cb7e2c64a3e46427013351d19fa9281e33','nodekey:e3a9a5473a28bc56917aa154d2bf3198bd5a226ea85693422d74a235ebc09192','discokey:2d7124caaadcf3ae57163efed0cb5e9d04ef0525e8e9d7d8e436f24ace9217b2','email-08','node025',6,'cli',NULL,NULL,'2025-01-26 08:14:57.87480236+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[74f:3ce5:744d:cd4c:eba2:4e83:2f25:8e32]:35248","[769:97e6:7060:f428:ee55:f77:43a7:8d18]:61446","[630f:7c1f:d48b:ea66:23d7:1ab9:8bd1:10f3]:25600","160.27.109.39:47606"]','2024-01-05 17:32:40.940566279+01:00','2025-01-26 08:14:57.875128359+01:00',NULL,'100.64.0.20','fd7a:115c:a1e0::14'); -INSERT INTO nodes VALUES(26,'mkey:26740462ad2cf89190b15cfee7c25600065050b832b780cd845359ec0c715d71','nodekey:6d55d55e354e8a983a485dc1fbac690b5386d97b9ce3d373c89e92a68a1bc411','discokey:3fb9d7bf3b7588b6882a1c4442dd7ad2447ad3f5ae8a6e9974f3f7cabca00ee2','desktop-87','node026',6,'cli',NULL,NULL,'2025-01-26 08:11:54.580017267+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[1e45:fbc:2c6d:d1c2:a4dc:9086:f55b:6e96]:54073","[be93:4c8a:facf:a4fa:29d:ed91:91f1:12b2]:57075","[2c67:a9f3:3163:c675:3386:2447:1cc:ed3a]:42590","64.137.67.1:56198","163.42.128.241:47535","49.160.22.202:42427","[1526:e34a:8857:394e:bbe0:c043:4b37:68d4]:5245","32.31.195.128:7039","[c50a:ec:e2e:e109:3a08:f56b:2928:4921]:61265","221.147.144.138:44737","136.208.142.64:41091"]','2024-01-05 17:34:19.811670479+01:00','2025-01-26 08:11:54.58066838+01:00',NULL,'100.64.0.21','fd7a:115c:a1e0::15'); -INSERT INTO nodes VALUES(27,'mkey:73868242348c7d709b77f52963c32eb4b95c8b6f89abb5db76bd95cdaaa538b7','nodekey:2de3f421f78de3f5cfd824ffaa80cf3fde3060bc42f51c2ee2f55b49fd5fbe20','discokey:5ff3b0ad430ee5530c972ecf0d3e69f25eecdadf6ef8a309b3841d7aaa6f70d4','web-56','node027',6,'cli','null',NULL,'2024-01-16 14:32:21.570104175+00:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[a2b8:ea55:f9e3:2c00:f423:8c56:7fec:c617]:53793","[ed88:2277:3c9a:4f64:a25e:5b41:a3ca:e904]:27852","[d203:40a2:5efc:4a42:8910:81f8:59c6:4d8d]:23389","187.126.162.12:25054","[5d8b:9391:47f1:1bbc:e01:92de:4a90:7614]:42410","172.81.188.131:23453"]','2024-01-05 17:48:25.466030859+01:00','2024-09-18 16:15:14.040766068+02:00',NULL,'100.64.0.22','fd7a:115c:a1e0::16'); -INSERT INTO nodes VALUES(28,'mkey:300a566a21fcaab2665dc091300e2c5110dcaee25e2fcc4cc23b011fdae471e6','nodekey:8e36531b406e0b7b9c1b9abadc00626ccc8c6e7a0c37f70f42ccf06cd2dd03fa','discokey:a4134738054f6fc0c57b6eb47913e68a3f47ffb81af8a3496754921c2fcb02f8','laptop-98','node028',7,'cli',NULL,NULL,'2025-01-24 15:11:59.137108429+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["113.143.9.36:55432","[e266:66fe:4929:3831:7830:6f10:1572:20df]:9977","[afe5:7d57:fac5:a1c5:78d4:ee33:d474:1e56]:52223","7.208.95.117:299"]','2024-01-15 09:34:54.847632697+01:00','2025-01-24 15:11:59.137302845+01:00',NULL,'100.64.0.23','fd7a:115c:a1e0::17'); -INSERT INTO nodes VALUES(29,'mkey:6d3edcfd40971c25174fa55eeddf76d05447d7d28f216c6118d8d6e35aa1ae89','nodekey:4b7d7ef564535f114a410c384aa3eb95103b5895cfd74962c7d37d3a8b0bb6ed','discokey:569e08cefcaec4d1312b70c41c12d82b26712965cbab41ac3ca5db857274ba46','web-76','node029',7,'cli',NULL,NULL,'2025-01-24 14:00:44.067705144+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[24c5:f1ec:38a5:5a2d:7224:b76b:cd04:e6d2]:63943","162.176.178.140:46577","[4f52:1708:c0a:7991:9c16:bd7d:4045:195d]:50362"]','2024-01-15 15:18:12.2871978+01:00','2025-01-24 14:00:44.067944989+01:00',NULL,'100.64.0.24','fd7a:115c:a1e0::18'); -INSERT INTO nodes VALUES(30,'mkey:a81a1bea534e858e4f2f78cc1b6ebbbb1c15f6045ed63d403bdcfe2f0ebe9c6f','nodekey:bccf984b4408b961eb80fa63a1d5dff73c46bfe0bdf5e903b9d1e90d6cdc037f','discokey:7fc2f8726fc62046cc8ac9cc78d91a53b4baab2874981807e038ca1161c5addd','laptop-86','node030',7,'cli','null',NULL,'2024-02-05 12:14:40.065688294+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[acfe:eccd:ac4c:8d39:dc9b:840c:c97a:dbc5]:56981","[c204:d59b:29a8:ac18:75bc:1a4a:755e:a8f2]:26579","35.41.214.64:38835","143.202.189.84:43171","[8679:8c46:1c66:cd1e:95f8:2b0a:503c:819b]:57389"]','2024-01-15 15:21:43.217136004+01:00','2024-09-18 16:15:14.041757308+02:00',NULL,'100.64.0.25','fd7a:115c:a1e0::19'); -INSERT INTO nodes VALUES(31,'mkey:b5ef2204ec90b7f3c80be32de06bc6008c678e060e21d802de3c4d6b6ab3175b','nodekey:3a5706b432df2c448b307981345f4586433fde926401c667aabc84666e77e0b5','discokey:31b8429f3d4c130f05b42f36699bc3c6ba8018d48fac1ffb98eaea9b5e77a0a7','srv-51','node031',7,'cli','null',NULL,'2024-02-22 08:35:27.098819037+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[7f3b:f876:2230:c8ee:99c0:d76a:2f38:10e9]:33365","[c7af:10d6:9f3d:9dd2:3423:afc0:aa2:e156]:5125","11.55.50.90:25074","[c7aa:b53:8762:dc6e:5d4a:a2f3:16c7:3d3a]:5279"]','2024-01-29 16:05:35.338524634+01:00','2024-09-18 16:15:14.042191514+02:00',NULL,'100.64.0.26','fd7a:115c:a1e0::1a'); -INSERT INTO nodes VALUES(32,'mkey:805d05918a94047740c914235c3c614042b05d18458f331ba0a46ee22d2187e6','nodekey:ebc009009bc015bc45f6499b04b7af9ff2ceff22c0f9676749b10c92f422a8ad','discokey:b6d83e1f6f0e86c3808cffb227cb6649bffcf5d4ba1596f6fd58f41b8b727133','db-70','node032',7,'cli','null',NULL,'2024-04-09 13:59:43.37062537+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a1ed:3568:9b48:6735:e553:a273:ea02:7271]:46613","90.225.231.207:5490"]','2024-01-30 10:41:58.31917869+01:00','2024-09-18 
16:15:14.042506082+02:00',NULL,'100.64.0.27','fd7a:115c:a1e0::1b'); -INSERT INTO nodes VALUES(33,'mkey:d5d48a3961081cf65a57a8593db00b32779c3fe44e5c410a3b763a24153543aa','nodekey:b6b5f4dc6ec205b068bbbbd3d72632d9c9bcf48812c6f444b31c4165573f9a78','discokey:3a3b06d462ae33b40900eb11fa7b58e0e16a66dab9921534fa3f77e77b28f7ce','web-02','node033',13,'cli',NULL,NULL,'2025-01-23 21:19:25.034740145+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[fc74:9d:e278:2d3d:8d46:7e4f:ce11:39b5]:6716","[bc29:715c:cdce:f1f6:8630:4929:ea1e:5c03]:10918","[af8:f5e8:32f0:75ee:d351:c0e9:78e:93d9]:12624","71.151.27.31:51063"]','2024-02-03 16:33:26.706408143+01:00','2025-01-23 21:19:25.035093942+01:00',NULL,'100.64.0.28','fd7a:115c:a1e0::1c'); -INSERT INTO nodes VALUES(34,'mkey:88bd1bb4cbdbe175ea0e76e22d7a03fdedf1e1b53584f8d71861afba431765fd','nodekey:584aa8327097f88b7236e4d9b7594de8a971302ad5569a2b73ba28192d7d81ff','discokey:699c13c9c5277f160fea25002df68205cebc7dde6cec4e5e6bc2a20702a4f774','laptop-79','node034',13,'cli',NULL,NULL,'2025-01-26 08:11:55.430892992+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["14.172.203.164:46293","[c:e6fc:18ea:2e45:6bef:c1c1:f84e:b1e2]:62296","71.155.69.236:8707"]','2024-02-03 16:42:32.683785672+01:00','2025-01-26 08:11:55.431204644+01:00',NULL,'100.64.0.29','fd7a:115c:a1e0::1d'); -INSERT INTO nodes VALUES(35,'mkey:627bb55986f4c5331926e5cf3db2aa5aff2cc66d63665e114f7b3d7185772264','nodekey:e52da64771618b76ec6ec975cc00d96d8da9dddeff5d91797c70fcf8d5dd0788','discokey:9380df456003f1dbff764e8f2819e914a7b0ae285b53dbf5db34a7be37b849b6','srv-65','node035',5,'cli',NULL,NULL,'2025-01-21 17:41:56.706838249+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["139.180.210.231:32286","171.29.58.211:2336","200.47.65.115:32030"]','2024-02-03 16:51:52.010016072+01:00','2025-01-21 17:41:56.707118547+01:00',NULL,'100.64.0.30','fd7a:115c:a1e0::1e'); -INSERT INTO nodes VALUES(36,'mkey:291af9833f3d5afbdff138bc4dd1b6c2fd418c12362d1daf8eb65f2d59ae290c','nodekey:f4a5894e008543c73d42c157efef4a3afb66e4247ba2876c245a498e54b43563','discokey:2ff5debb97003182af5b76792c5061d232e755bb9e22fc36a60ee01905319f5c','srv-75','node036',13,'cli','null',NULL,'2025-01-19 14:01:48.956567669+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["122.50.5.12:7674","[a641:3f06:63bf:b716:4658:1e5b:ac6f:e14]:13490","197.206.147.31:58506","[b1e8:68a0:4017:a243:61d0:b303:7860:18e]:28633","[e5b8:6c78:ce2f:cf6f:13fb:401f:8223:3f04]:54121","38.137.88.247:35801","[eec2:61c0:e20a:2fa0:5e4d:56a4:5071:881b]:36628"]','2024-02-09 12:34:57.879970954+01:00','2025-01-19 14:01:48.956830267+01:00',NULL,'100.64.0.31','fd7a:115c:a1e0::1f'); -INSERT INTO nodes VALUES(37,'mkey:a414d52df11b0409ce8f8160ee3390e346b511398c40e9def61fee76c0ba2edd','nodekey:1d023daf8f56db9c71b9e075a63c9cce5b74bb94dc77df6f98542ec85b2f6ec6','discokey:21c22f87ee8f10643c49baf10db4833c6a975d5122762bbd693ee70488e6766a','db-61','node037',5,'cli',NULL,NULL,'2025-01-26 08:12:35.773797291+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[3a31:6dd2:cc04:d20:e5d1:8664:3ec9:71d2]:21117","27.214.20.185:57647","[8c40:dd59:c31d:dd18:321:58a:11fa:d85a]:43064","168.4.78.117:48474","90.45.218.0:29326","90.233.46.138:29846","8.137.133.59:22781","[cb5c:f8d4:ccb0:8333:8ec:170c:a2b0:941d]:26248","[5802:df5c:8853:4851:6dcf:cb83:8208:d143]:24568"]','2024-02-27 12:14:40.452601042+01:00','2025-01-26 08:12:35.774174083+01:00',NULL,'100.64.0.32','fd7a:115c:a1e0::20'); -INSERT INTO nodes 
VALUES(38,'mkey:465de79eadf29db3136d0b230c481876edbe7e05a803281a523618c586fd3c01','nodekey:01614d4aa1bb0be1f6b41e10aa9ea1c7af206ef9a2571bb68c24e85bea697216','discokey:d3eea5725d8408db2ffc560d74c70da0a70c3f3dc40a0d6a6fad9afaaadee9f2','srv-59','node038',6,'cli',NULL,NULL,'2025-01-25 21:52:13.380610729+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[8125:ebe4:83e1:4806:f061:5bd9:a08a:8ccc]:6467","114.199.17.195:52817","[de36:fc02:8cf4:c60d:824a:9963:2440:5b5d]:39343"]','2024-05-22 08:08:16.045350656+02:00','2025-01-25 21:52:13.381028865+01:00',NULL,'100.64.0.33','fd7a:115c:a1e0::21'); -INSERT INTO nodes VALUES(42,'mkey:ff2ce8eb04d99e45d7babacacc758ec8e031474fcf494b4c0349c7084cc62365','nodekey:660bae997d6da1efccc0135ed712176a379f59ec9ca7a02eddea607a8855cebe','discokey:66ca4169c6be68079a1244ec506d8c922af4133187e0117a1800ed4791ed9a67','web-21','node042',14,'cli',NULL,NULL,'2025-01-22 12:09:19.893938069+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[3a3f:717c:1ea3:7c82:81e4:5d9d:765f:7da4]:6540","[bc95:459c:aab:499d:4354:d239:ef14:dcb]:14429","[f673:7912:6d5e:7244:62ca:9500:78f5:31c0]:27677"]','2024-07-03 11:12:29.418355657+02:00','2025-01-22 12:09:19.89470831+01:00',NULL,'100.64.0.37','fd7a:115c:a1e0::25'); -INSERT INTO nodes VALUES(43,'mkey:df2afe1643d44e7b5d2b4fe1b78e8b0c8c7871b50b1b580048cf6b184cc99143','nodekey:4512e240f89db2671db1ad974c151afad41491d7d11486b644aa2cf66bc3410e','discokey:f4b10325d44499f510cf0bd61d7c9d116b10c14434b8fab4559f0797f033936a','lt-12','node043',14,'cli',NULL,NULL,'2025-01-22 04:20:37.032313126+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c91c:4366:d3bd:a993:b5e:75c7:ef8e:8166]:46974","191.185.253.85:37427","17.31.200.70:26794","[183d:f2d6:b80:36c5:318e:64e9:7f4a:b389]:12440"]','2024-07-03 14:48:50.263910778+02:00','2025-01-22 04:20:37.032585258+01:00',NULL,'100.64.0.34','fd7a:115c:a1e0::22'); -INSERT INTO nodes VALUES(44,'mkey:38b6d221092d1ca7bc7a6bded2156ea0cad76eeb937d2fbf8b5c519a6e655096','nodekey:68d1f2817719a56a0a6d482869a443e60b317da298d6e07940c3475a6ce76ff3','discokey:1027b93e5031cec5525f02ff625e869e89a050796b2d19ed3a3574f3817bd4e7','web-74','node044',14,'cli',NULL,NULL,'2025-01-22 12:43:56.696054645+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9e5b:f6da:727b:9014:ca72:1f2c:87a8:8e81]:62466","39.179.118.209:64932","[cc06:5e56:fb0d:b760:b501:efab:c73b:b492]:26172","162.210.141.106:52036","[84ef:a164:d1c:d20d:797c:753e:f8fb:3108]:5767"]','2024-07-03 15:23:48.066044194+02:00','2025-01-22 12:43:56.732053672+01:00',NULL,'100.64.0.35','fd7a:115c:a1e0::23'); -INSERT INTO nodes VALUES(45,'mkey:877372ef6f845443379ee2d2c9d0538937dd1b3d4e13fc6c204b04a86b4daff9','nodekey:97c037de3c80309adc59c894999d4993a451c2762b936748a34025cb26e5e3f8','discokey:aee8faedda0c97f6483e11ed745a1e387605e09132b2c55170dbad0696d5f126','web-77','node045',14,'cli',NULL,NULL,'2025-01-22 04:20:27.07284339+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[4c0c:1e2f:95d7:9d1f:b068:c776:ee11:ea3b]:46940","[7526:4960:73a9:911a:1fbf:92bf:1219:a6bf]:25440","[360:d312:e59c:79e0:a7e4:2295:caa0:946e]:17493","26.61.8.198:45861"]','2024-07-03 15:54:01.706018896+02:00','2025-01-22 04:20:27.073044637+01:00',NULL,'100.64.0.36','fd7a:115c:a1e0::24'); -INSERT INTO nodes VALUES(46,'mkey:39dc02019f24bdb0b2b66f9d32d44fb95a3a71de52fccf6aebeed5fec709bb12','nodekey:041c60aa08b42defc979d1a4606096540df4a4dd10191818d54ddbfab89816b8','discokey:5b1253f35e01f1e97ebb34884ede4d1bcb46446c1c5eed456dfc6638e08a8dad','laptop-06','node046',14,'cli',NULL,NULL,'2025-01-22 
04:20:43.53444835+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["98.122.93.117:34230","[2077:6b7:927e:5e5c:a0f0:3352:1490:e1e1]:47997"]','2024-07-03 19:38:07.783745318+02:00','2025-01-22 04:20:43.534962909+01:00',NULL,'100.64.0.38','fd7a:115c:a1e0::26'); -INSERT INTO nodes VALUES(47,'mkey:fcbeeeafcfa6f96917f0beefe78ede5a3ec327664fed68dee26757dc6e6896af','nodekey:216b84282abd69521955b0a8df7136a33b496a0152bea4928ed426893594f569','discokey:0fdf92e9def6d70c2161eaa1cea4d6a032bd6bddc47ecfce29cb962c2c6ad2d2','srv-99','node047',14,'cli',NULL,NULL,'2025-01-22 04:20:25.918857828+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["125.19.14.225:12660","[d722:f972:c192:4cff:3916:e1e7:6d02:236b]:55695"]','2024-07-04 10:38:08.344092869+02:00','2025-01-22 04:20:25.919209254+01:00',NULL,'100.64.0.39','fd7a:115c:a1e0::27'); -INSERT INTO nodes VALUES(48,'mkey:570c6df03e928e23665040f909113b7f753ced52db4050f3f918562c930c4ea5','nodekey:7becf5cc7f3471e35f59f844ff8d2c41a580b9f548dba24f5e98896403b0da30','discokey:3ea933912fe815a3a4d6c69aba1da031d24d78003b4bad7cad131c8e599a300a','lt-30','node048',15,'cli','null',NULL,'2024-10-20 13:53:33.831192385+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["97.124.255.12:40587","166.176.60.55:23674","[33c0:859c:4a78:8bdb:359e:31e4:1e71:7e39]:50383","[3aab:52a2:b9d3:1816:5627:336:6c60:57d7]:13486","22.83.180.41:14147","26.126.135.206:7777"]','2024-07-26 08:09:56.608302315+02:00','2024-10-20 13:53:33.831387627+02:00',NULL,'100.64.0.40','fd7a:115c:a1e0::28'); -INSERT INTO nodes VALUES(49,'mkey:4d5f9a38fce205b2dfb62f42464172a1408e00221317b01332714a4db55e6ebb','nodekey:415009b46e40247f7a60d9e0f53c938b6a032fd4c11ae70d824e2d95cf170957','discokey:280e5896d458bd63044265484d38ad711ef52a68981f83281ff693e837421a13','web-11','node049',16,'cli','null',NULL,'2024-09-19 09:07:18.28136023+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[475b:3641:5abe:b0a8:8a68:dd51:8e80:3efa]:574","153.151.233.195:41734","[19b0:8be2:708:d27f:4d3d:3c93:9979:e9e8]:50438"]','2024-08-05 17:32:41.937626584+02:00','2024-09-19 09:07:18.281618912+02:00',NULL,'100.64.0.41','fd7a:115c:a1e0::29'); -INSERT INTO nodes VALUES(50,'mkey:a35a00c69e2fa10d442ae03a4d9312ee61e56caaa030d1303a4ebab2b26c27eb','nodekey:39bf208ba118345630aecd3c6ed042d40b1e6b4897c9548a21a473c28352ba95','discokey:437f4dda6cfd2d269543f32dfaac1333b8660cec03b279728e296f6cbc67ca69','db-28','node050',10,'cli','null',NULL,'2024-08-07 10:10:06.595550455+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["150.170.146.70:3311","139.117.239.209:33788","[2eac:ffa0:99fd:d109:c120:a35d:ed48:eea3]:51544","[7644:c348:2969:b90:e84f:94d4:b629:f266]:49336","169.247.3.239:36225","[5b4c:bb2:d43f:6ff4:d494:8616:a66a:d059]:50360","203.180.122.83:13344","63.162.234.32:11653"]','2024-08-07 11:50:54.144157179+02:00','2024-09-18 16:15:14.050033969+02:00',NULL,'100.64.0.42','fd7a:115c:a1e0::2a'); -INSERT INTO nodes VALUES(51,'mkey:38d8adcfb8f6217b6dce62c89e816da260f1b0747a9a980e0ede5de9412aab4b','nodekey:98cb6a87bfa7cd614051008583e5f0904552b44d98854a0fb5d8e0e3c3115949','discokey:104eda6ea8f8fbeae20617662e601bd0acf102511d17ffc829c08db6a7f78872','laptop-32','node051',14,'cli',NULL,NULL,'2025-01-22 04:20:18.310186164+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["33.182.22.166:13283","181.10.243.176:14596"]','2024-08-07 14:19:31.156780417+02:00','2025-01-22 04:20:18.312102891+01:00',NULL,'100.64.0.43','fd7a:115c:a1e0::2b'); -INSERT INTO nodes 
VALUES(52,'mkey:3a83699597e3bee8b15df3733dfb54b1d7af3ff9ccc7a4b65df4b0feb4a3f25c','nodekey:86cf80f0bd8bef4888116f2bab1b5ac4b11bd6fa637f7ef0886cd0aa265b2bec','discokey:28e2d745a5216bdaaeb83b4483b19d1c2ae0c149b97c08c06a4b90ca2c742a00','srv-53','node052',17,'cli','null',NULL,'2024-12-25 17:27:58.515851096+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["72.183.239.155:49891","167.46.143.131:51781","[9e9f:93d:a8cf:c86f:17e3:9e2b:dc42:efa6]:3616"]','2024-09-22 15:48:41.385301399+02:00','2024-12-25 17:27:58.517153789+01:00',NULL,'100.64.0.45','fd7a:115c:a1e0::2d'); -INSERT INTO nodes VALUES(53,'mkey:d1fda0c437a79b121817c1c24e88bce18fdab566d5e74678b8730cb48c1dd574','nodekey:1463eab453b78867448baaae38b88547334df5070c30e77a08cd3410bd721d8a','discokey:4688405d194600e9c1a783e71e274c3e42976a5021d81db4ba4a0506a768cbca','srv-86','node053',17,'cli',NULL,NULL,'2025-01-26 08:11:05.595652839+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["69.219.0.37:43476","[6084:ee60:1697:864e:4a61:cf6e:e9b6:4c05]:63961","[fad7:2601:d286:4772:3cce:6e89:f66e:9eeb]:5430","216.60.124.11:45041","[d513:c472:d7a:f316:f610:10d0:9851:4feb]:3955","156.228.105.157:61542","102.126.185.0:50694","[156b:e934:d171:2693:8db4:f193:a58c:17b6]:24190","79.255.179.99:56057","[6369:4412:8cfc:2a5e:7cfb:8b45:e2ac:f98a]:16090","23.0.29.75:24889","[ab0:1193:7bb7:4a4a:c036:b911:994f:3aa4]:5116","[9491:7bd5:3979:414a:d198:a8b:f2e7:29b4]:62795","140.73.225.124:1710"]','2024-10-28 10:04:50.084492941+01:00','2025-01-26 08:11:05.596582615+01:00',NULL,'100.64.0.44','fd7a:115c:a1e0::2c'); -INSERT INTO nodes VALUES(54,'mkey:e47e8ca8e89ebf89a399a8206916ac7a55f855b449acf80c7d27b6b225aac678','nodekey:377793cdc9eb6f2279806b2b70d0fea24c4f4c1dfdf92771eecd9199f67d80e9','discokey:299a45d53358f6a43234df87ceff28e642433aa06e8341862e8557d1d499ddae','email-83','node054',14,'cli',NULL,NULL,'2025-01-22 12:00:41.056079317+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[7d2a:9d5:1fbb:54a7:fe7a:bc40:5ea2:ca20]:3534","[f630:8bb5:a4ff:95b1:6d95:decb:a5d2:373b]:42456","[6cc8:4fca:c46e:ed25:e9ef:43e:f0ea:121a]:60965","122.41.198.69:28824"]','2024-12-09 17:10:55.363593066+01:00','2025-01-22 12:00:41.056216357+01:00',NULL,'100.64.0.46','fd7a:115c:a1e0::2e'); -INSERT INTO nodes VALUES(55,'mkey:59c5aaa1eca0e9109bf2b9ec602e9d844936dd3d632543489fe5e66f4cc31e8a','nodekey:815c97157312040cb667db75315dfe58933335496ab31a98f351728872da07ab','discokey:bf3097a60b972dc98275e07c31df2ac05dc540280d7ff9006ee5c70ee8d5962d','laptop-61','node055',18,'cli',NULL,NULL,'2025-01-19 14:04:25.389314117+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["182.195.110.58:14598","155.200.222.111:64410","[b554:3604:41e8:60a4:229:a09c:efd3:c73e]:39533"]','2024-12-10 13:56:39.287449662+01:00','2025-01-19 14:04:25.410288998+01:00',NULL,'100.64.0.47','fd7a:115c:a1e0::2f'); -INSERT INTO nodes VALUES(56,'mkey:1025548b29dc9733e1099fbb6516778e9d8f07eeaa214cd1ef33438e731333eb','nodekey:3f21adc328130ab468ac28ee25b4002974610ddd7b11f586752dec5a151114b2','discokey:605115f1238609b148dff010ca6e04d7b337df4e2b88de7a1f900a2f1de7db42','laptop-79','node056',19,'cli',NULL,NULL,'2025-01-26 08:09:46.709773709+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a329:52fe:45d1:996e:4767:d50c:6513:92ac]:54604","80.29.45.84:8223"]','2024-12-17 14:58:39.429211911+01:00','2025-01-26 08:09:46.710363543+01:00',NULL,'100.64.0.48','fd7a:115c:a1e0::30'); -INSERT INTO nodes 
VALUES(57,'mkey:f8394e6be895b3c2261fab8cd62d1f28124d377df6b4b17dac4b9f18c54a3e00','nodekey:a8302c7783050d61484d3d6e4abbf256b7062303e01bdfcc6371046a468fdfb0','discokey:3e9447d6b8e10da3215899328df484415e4fc8e6e301e232ca51721f93a50b53','email-71','node057',20,'cli',NULL,NULL,'2025-01-25 14:06:07.21327404+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["158.184.132.83:56087","[1129:e066:9c34:190d:7bf0:c941:18d6:f96b]:48892","7.227.92.62:26101","[86a5:23f7:afd9:863e:e7f5:63bb:ed61:9d51]:17106","110.235.245.179:61572","[57a6:fde1:814:36cd:f8de:668c:f5fb:4f60]:39714","[f08d:aeaa:107:fc17:5c30:ab9a:e31e:3147]:55879"]','2024-12-17 15:17:14.26936913+01:00','2025-01-25 14:06:07.213397766+01:00',NULL,'100.64.0.49','fd7a:115c:a1e0::31'); -INSERT INTO nodes VALUES(58,'mkey:13565579616a60409e411d5406b743f5dcf878da4aa82031350537d41038e09f','nodekey:788fe7ffbb2b564852b1b9ec170a4267449c98b17ed51dcdce65942de712918b','discokey:8027b59df9ab90efb6d63898d60362cc3583c20b52aa4694083ef2201404fb27','lt-15','node058',12,'cli',NULL,NULL,'2025-01-25 18:41:03.881898904+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c493:2a8f:6625:98c3:4f4a:db04:19cb:c7df]:41081","96.25.93.176:21890","219.197.124.83:46550"]','2025-01-17 10:17:23.455895657+01:00','2025-01-25 18:41:03.882180987+01:00',NULL,'100.64.0.50','fd7a:115c:a1e0::32'); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2023-05-17 19:36:55.859473496+02:00','2023-05-17 19:36:55.859473496+02:00',NULL,'user001','','',NULL,NULL,''); -INSERT INTO users VALUES(2,'2023-05-17 19:36:57.059073465+02:00','2023-05-17 19:36:57.059073465+02:00',NULL,'user002','','',NULL,NULL,''); -INSERT INTO users VALUES(3,'2023-05-18 10:10:36.248939077+02:00','2023-05-18 10:10:36.248939077+02:00',NULL,'user003','','',NULL,NULL,''); -INSERT INTO users VALUES(4,'2023-06-10 09:06:13.920718561+02:00','2023-06-10 09:06:13.920718561+02:00',NULL,'user004','','',NULL,NULL,''); -INSERT INTO users VALUES(5,'2023-06-11 19:58:32.371218434+02:00','2023-06-11 19:58:32.371218434+02:00',NULL,'user005','','',NULL,NULL,''); -INSERT INTO users VALUES(6,'2023-06-17 19:39:53.031565686+02:00','2023-06-17 19:39:53.031565686+02:00',NULL,'user006','','',NULL,NULL,''); -INSERT INTO users VALUES(7,'2023-06-20 11:35:09.325846831+02:00','2023-06-20 11:35:09.325846831+02:00',NULL,'user007','','',NULL,NULL,''); -INSERT INTO users VALUES(8,'2023-06-21 22:47:48.196234382+02:00','2023-06-21 22:47:48.196234382+02:00',NULL,'user008','','',NULL,NULL,''); -INSERT INTO users VALUES(9,'2023-06-22 08:30:35.068995572+02:00','2023-06-22 08:30:35.068995572+02:00',NULL,'user009','','',NULL,NULL,''); -INSERT INTO users VALUES(10,'2023-07-03 10:18:32.123226+02:00','2023-07-03 10:18:32.123226+02:00',NULL,'user010','','',NULL,NULL,''); -INSERT INTO users VALUES(11,'2023-07-03 10:18:37.130387602+02:00','2023-07-03 10:18:37.130387602+02:00',NULL,'user011','','',NULL,NULL,''); -INSERT INTO users VALUES(12,'2023-12-15 08:05:06.013615212+01:00','2023-12-15 08:05:06.013615212+01:00',NULL,'user012','','',NULL,NULL,''); -INSERT INTO users VALUES(13,'2024-02-03 16:32:42.224977233+01:00','2024-02-03 16:32:42.224977233+01:00',NULL,'user013','','',NULL,NULL,''); 
-INSERT INTO users VALUES(14,'2024-05-03 10:12:38.220973042+02:00','2024-05-03 10:12:38.220973042+02:00',NULL,'user014','','',NULL,NULL,''); -INSERT INTO users VALUES(15,'2024-07-26 08:08:40.979783263+02:00','2024-07-26 08:08:40.979783263+02:00',NULL,'user015','','',NULL,NULL,''); -INSERT INTO users VALUES(16,'2024-08-05 17:32:02.878091894+02:00','2024-08-05 17:32:02.878091894+02:00',NULL,'user016','','',NULL,NULL,''); -INSERT INTO users VALUES(17,'2024-09-22 15:48:00.287392203+02:00','2024-09-22 15:48:00.287392203+02:00',NULL,'user017','','',NULL,NULL,''); -INSERT INTO users VALUES(18,'2024-12-10 13:55:11.256977421+01:00','2024-12-10 13:55:11.256977421+01:00',NULL,'user018','','',NULL,NULL,''); -INSERT INTO users VALUES(19,'2024-12-17 14:57:58.550971236+01:00','2024-12-17 14:57:58.550971236+01:00',NULL,'user019','','',NULL,NULL,''); -INSERT INTO users VALUES(20,'2024-12-17 15:02:08.053169491+01:00','2024-12-17 15:02:08.053169491+01:00',NULL,'user020','','',NULL,NULL,''); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.24.1.sql b/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.24.1.sql deleted file mode 100644 index 187fe83d..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.24.1.sql +++ /dev/null @@ -1,104 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'463a8b372963aeaca12400faa0c7ea29e9bfa3b4c59e9622',3,0,0,1,'2023-05-19 05:09:19.66636462+00:00','2023-05-19 05:14:19.664224869+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(2,'77a019cb12c0d0b9347a17ab23fa4c87983814fe36bb2fbb',14,0,0,0,'2024-05-03 08:13:55.8614948+00:00','2024-05-04 08:13:55.85782156+00:00',NULL); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` 
datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`ipv4` text,`ipv6` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes VALUES(1,'mkey:0ea01e8cd608548a171ebcedc16da73b25ec18d58861be7a6b4274eb17baf6ba','nodekey:0fcf591dc1997c1d7388a3835bd24dae48240e5b03b6a7f5f33b2e813d222bcf','discokey:ed27f0b3c04222f0d05bfbc6fb5787338275ed0e32fd42fe68c669aca0f3d7a4','desktop-54','node001',2,'cli','null',NULL,'2025-01-16 10:47:40.213920406+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b8f0:cf44:22c8:82d1:f79:c3eb:9144:c24f]:6142","[8e25:af1:e31a:8fcc:7dcf:be1:a3c9:ed6a]:38766","[37a1:215f:bf3d:3fb1:6399:f7cb:4048:3ca8]:51666","137.242.125.72:25376","86.84.184.63:26062","[c187:a8:1181:8321:28ab:fe72:b0b8:8ec6]:41236","[9089:461f:ad5b:68d4:f7a8:618a:a52a:56ac]:63162"]','2023-05-17 19:38:13.531518257+02:00','2025-01-16 10:47:40.214263246+01:00',NULL,'100.64.0.1','fd7a:115c:a1e0::1'); -INSERT INTO nodes VALUES(2,'mkey:348464565f0009e9205c9a78f381e12b9cf63ee2006b8e95d720a274347c4847','nodekey:bff28b3734e3343ded434f23bbffe562bab527165158c992e1126c94f39ea877','discokey:9c077cf7719204300ecc1ab64fb6bd3cc0cd398a4a208e37ed79e4500db3f86c','db-91','node002',1,'cli',NULL,NULL,'2025-01-30 19:01:13.901141508+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["165.5.239.20:48876","44.184.117.89:36643","90.223.136.253:7056","[e43c:be7:7f4e:8217:a009:7f9d:a510:1af6]:18872","[3a86:865b:2f4c:f3b7:88a0:4aa2:f05d:c3fd]:47680","86.50.30.30:18216","[fdbe:7908:805e:6c8f:47c2:2c87:b1c4:44]:48470","188.237.187.170:7622","[c032:e70d:dbd3:27de:85a3:28b4:8a64:155a]:48864"]','2023-05-18 10:09:21.757289398+02:00','2025-01-30 19:01:13.901463969+01:00',NULL,'100.64.0.2','fd7a:115c:a1e0::2'); -INSERT INTO nodes VALUES(3,'mkey:cd435d043e3dba7c0cd2f089c57c46abd59f84c1d05d5b9ea03d344a6dd7e61c','nodekey:5a558233f8e076e1f3a78e95e274265332c984372e8bfa91c62e748707e3d9c8','discokey:d2dbd001168ca99d38dd2b56270201056955f5d05a97934278b7a8311d809e37','laptop-91','node003',3,'authkey','[]',1,'2025-01-26 12:49:44.580858758+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d40d:59d:b450:8ad7:bc04:9836:f16d:d538]:10080","87.84.28.245:43373"]','2023-05-19 07:09:21.399903526+02:00','2025-01-26 12:49:44.582326495+01:00',NULL,'100.64.0.3','fd7a:115c:a1e0::3'); -INSERT INTO nodes VALUES(4,'mkey:4b507fc1f103042642d3c619b0cdc77a6dadb4415ccd78b5b3880f4fcacacef6','nodekey:259a4b5de585a90421714fe40040c339a26a660b1fa0511f6323e51926b88c5c','discokey:36b7d0cbd65c02c741958c3e793960fcf2b93f7d907fc4483f548043864bdbae','laptop-59','node004',4,'cli',NULL,NULL,'2025-01-30 19:08:13.209119931+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["1.79.159.39:34092","[b12e:af88:3c55:41f8:5fba:2aa3:3248:15c1]:23643"]','2023-06-10 09:31:51.940506933+02:00','2025-01-30 19:08:13.209865565+01:00',NULL,'100.64.0.4','fd7a:115c:a1e0::4'); -INSERT INTO nodes VALUES(5,'mkey:4937d77921b93317ae3fb1fb406ffd3fa2d9154afa78c3ed00a5a685ede9aaae','nodekey:3c38f956817733cb7dfb474be1a8289f90ae0cb4df5dfd42ad8fc57e3a35b702','discokey:9abee320c0000a4f325c4aaf75173a715bbd1dc88f93699df10e7bf06ba0097f','email-95','node005',4,'cli',NULL,NULL,'2025-01-30 19:08:24.678660839+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[cd19:68e0:693d:5e6:659d:9f7e:277e:21cf]:31475","86.42.90.60:13473","[14af:1283:fc98:5791:282:5e8:a450:315f]:22243","201.199.139.78:9445","121.159.150.133:23963"]','2023-06-11 13:56:42.694329408+02:00','2025-01-30 19:08:24.679099806+01:00',NULL,'100.64.0.5','fd7a:115c:a1e0::5'); -INSERT INTO nodes VALUES(6,'mkey:33535c582e158d27acdafdda3edd3777fd581ddc2adad52eaee607a82a0f9d9c','nodekey:144ba4b580e0d6d9add9a482c5f893deba28ec44622ccd056a06a9ddf80af32c','discokey:62d0e22d743d8096882b23d1bc5f5810863dc39876bbec1253ff2a4c445f423b','db-15','node006',4,'cli',NULL,NULL,'2025-01-30 19:08:06.034183297+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[698:f124:543f:b5b7:644e:5b79:4160:6e0b]:42676","165.236.227.226:53522","[e65d:59b5:f599:8596:2bc9:56b7:820a:8c12]:12277","[8ddd:4aae:2b94:4c1a:d05f:65d2:2bdb:b400]:42839","[c493:a941:58b6:7634:6221:70ba:20e6:40c4]:35486"]','2023-06-11 13:57:44.975695604+02:00','2025-01-30 19:08:06.034942035+01:00',NULL,'100.64.0.6','fd7a:115c:a1e0::6'); -INSERT INTO nodes VALUES(7,'mkey:26d1a98dc9d16df9ce9692bd330c25b5d021d11ff35b15a79f282b51c7f13888','nodekey:dd3d71574aa1270acb5176b071f1e99f512a4436ac20050df6d617865d2e47f9','discokey:def83e0a3c95bb379b8af036680ce07aa7725d1d2c4b7feb656c5a441f51fef7','laptop-38','node007',4,'cli',NULL,NULL,'2025-01-30 19:07:54.580925698+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[cef3:87e9:7ebe:e5a3:a6a0:5dc2:7a83:6a84]:43421","177.175.178.171:24266","[489b:6c59:1877:6215:ddc8:57f8:78dc:321c]:3025","77.213.161.20:47662","[adea:86d3:b312:f445:d835:6023:f1f:f928]:27483","124.104.237.237:47957"]','2023-06-11 14:16:56.951313537+02:00','2025-01-30 19:07:54.581710417+01:00',NULL,'100.64.0.7','fd7a:115c:a1e0::7'); -INSERT INTO nodes VALUES(8,'mkey:9ced3abf3a281d71b6f361a75769c571c4aff9e4cbab5b317f6cf18918f66bbc','nodekey:e533ead6c5b810764db07631d7b9b14434bf66549837d33a6187daac1ab1d7a9','discokey:571a7ba01a8a0fff2af21d0f135caf8ae2ee0e15fd5ab996d2e490cf968d7217','email-22','node008',5,'cli',NULL,NULL,'2025-01-30 19:08:26.01751405+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[5b29:3bfe:7aa9:2767:900a:8142:1ae:eb94]:59942","165.200.68.0:39862","187.166.20.165:11208"]','2023-06-11 19:59:30.401970393+02:00','2025-01-30 19:08:26.017863321+01:00',NULL,'100.64.0.8','fd7a:115c:a1e0::8'); -INSERT INTO nodes VALUES(9,'mkey:4115777754bb14ec2d04345ee2ffb97968692a59f0f66fc4aec9737616903f41','nodekey:5a0133001bc6b02d996ea2e2cae3abcfcb073d69a25aefd4077f36c564af0075','discokey:91ec1c63e9572a6742c8bf8796ce7f5ffc3d42a03ad53070a3ee56a2523317a9','lt-39','node009',6,'cli',NULL,NULL,'2025-01-26 12:49:38.323826827+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["192.120.123.140:19810","63.152.247.204:38090"]','2023-06-17 19:40:45.468789461+02:00','2025-01-26 12:49:38.324187836+01:00',NULL,'100.64.0.9','fd7a:115c:a1e0::9'); -INSERT INTO nodes VALUES(10,'mkey:94da5084f8fc1cfe7671b94a0d7ef767719bd0fed89f16bc2726728db4cd9d25','nodekey:e6527563ef7a2647b6c719e0c9c95c1ab2a2eeef7fbb3f46a108ee0fbca026f0','discokey:f76230afd20bfcce74fd0d689b35e82cab5bfdf4cd1530073e87b1acae513c2b','web-50','node010',6,'cli',NULL,NULL,'2025-01-30 05:10:56.348559998+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[bed5:a4c2:8547:c044:9956:2566:d794:c918]:56801","113.147.88.203:61138","22.207.212.228:27547","[2c3e:2260:8c34:275:2287:b721:1bdb:d462]:59540","70.115.64.117:6906"]','2023-06-20 11:18:35.905417341+02:00','2025-01-30 05:10:56.349025961+01:00',NULL,'100.64.0.10','fd7a:115c:a1e0::a'); -INSERT INTO nodes 
VALUES(11,'mkey:4b59a62af15d9350f16c9a2e03aa6174692418948d6a5ad5c272f5bb88feb061','nodekey:5004a6cb840ab527bcf897c17761fd30338b1250b3a9e0ac3e3c60a422ee09d0','discokey:e191e0039edbf786f88ab163173eb3c297e060f51cb90249f36e4867d5d89928','email-58','node011',7,'cli',NULL,NULL,'2025-01-30 16:14:19.573081099+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f99c:24c3:b1b5:8b96:abe2:3ddc:c21e:87dc]:65141","[dfe7:6f6b:a382:4fc:a3f6:eb9:9ebc:5798]:1474","56.63.78.181:15200","[314c:5aba:d6ac:63d3:4de3:2c46:5e60:903b]:30789"]','2023-06-20 11:35:15.063855316+02:00','2025-01-30 16:14:19.573230274+01:00',NULL,'100.64.0.11','fd7a:115c:a1e0::b'); -INSERT INTO nodes VALUES(12,'mkey:88ef4bda4139c7c09e33bbc866b88e069c51187346d8095238179c82502dffcc','nodekey:7ad274415b296099222e0ca334eb849e0e6595ffa54dbcc94a0fc9ffa69d1747','discokey:388c1077bdd17ce1f86b92936609756ecd3e8b21ab3f4b8a84da7727dd2b2552','web-70','node012',5,'cli',NULL,NULL,'2025-01-30 19:07:30.410870761+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["33.51.84.119:46756","[c844:704e:c3c0:e709:8f94:3564:1b4d:d2d9]:8357","[169:f098:49e9:99af:27bb:802f:fc22:bd59]:63749"]','2023-06-20 18:22:35.061914624+02:00','2025-01-30 19:07:30.411307945+01:00',NULL,'100.64.0.12','fd7a:115c:a1e0::c'); -INSERT INTO nodes VALUES(18,'mkey:c2e0286f5c59bc66c9a7c950ffc72f8241916be6e9f16861fe3680cb3cd2c172','nodekey:2c67affcede5c938b2b0aa8a0a42753fecc147bcad5f3df411aa999751a3482d','discokey:7f4e89781a7a169060fdedb120b316e88a2c66f40c41118bdc1759ff818a28bc','srv-41','node018',9,'cli',NULL,NULL,'2025-01-30 09:35:22.409611504+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["204.36.161.195:61712","[bfe5:7c2f:e9:71b:749a:5176:3138:83c6]:63942","24.173.68.87:39654","216.16.204.79:19720"]','2023-06-22 08:30:54.08720463+02:00','2025-01-30 09:35:22.410235858+01:00',NULL,'100.64.0.16','fd7a:115c:a1e0::10'); -INSERT INTO nodes VALUES(19,'mkey:28aaec5617646c94f895ba9449d924469be2345700d80131bd72c449b02346bb','nodekey:419dd05380491ade058b139b43fda78a99d52b00cb1984180c3f251c19c7bac1','discokey:0f0490d46f8794af22a08c1d6e874ee8303df7ba39095fdd78db9522849ed427','web-93','node019',10,'cli','null',NULL,'2024-08-07 09:35:12.220368767+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[7b81:52eb:f4e9:e74d:905:79d4:67b1:d359]:13795","81.17.18.155:10091"]','2023-07-03 15:18:03.829428454+02:00','2024-09-18 16:15:14.037990869+02:00',NULL,'100.64.0.13','fd7a:115c:a1e0::d'); -INSERT INTO nodes VALUES(20,'mkey:8bc474d6d0d812ab011b7748675802bd9d4af70c2edab0da6c682a0cf1ca34f8','nodekey:2cb4a0ab98e83dafb20621dcfec33cd5fa7fa99b97998546cb6796249bdba88a','discokey:e341e99f352acb61779807aea844516d7e9e220c58699f09232bcee105e88797','lt-43','node020',11,'cli',NULL,NULL,'2025-01-30 18:17:53.075807086+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["63.3.67.150:22907","[77ac:562:7c4c:438a:6ec:be48:743d:47b1]:11349","[33cd:731c:f110:9b45:66b5:de81:1daa:757f]:5997","[d618:5830:8a37:fbe4:3b37:a35c:fad9:11d1]:57112","9.144.234.157:13194"]','2023-07-10 12:40:34.838579199+02:00','2025-01-30 18:17:53.076540774+01:00',NULL,'100.64.0.14','fd7a:115c:a1e0::e'); -INSERT INTO nodes VALUES(21,'mkey:84000655b74a39e42877c1941d84a4e5b94db841c5440f19950813a5c25e1204','nodekey:bf71cc04a7b081255d32ce73fa666db0090b20af8e6e6115d279edead8b15227','discokey:b37d86213f6b344b4b24c73541b9c65e5aebcfa8d5c25114d10ef57761da6c09','lt-77','node021',11,'cli','null',NULL,'2023-11-20 07:19:19.447470862+00:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[68:80fa:7b30:3a0:d1fb:3e89:2c73:256b]:30308","[ad42:3917:5edb:3e85:732:1c71:a19e:9fe0]:11799","25.63.36.137:903","179.61.150.131:36912"]','2023-07-10 12:42:43.290469734+02:00','2024-09-18 16:15:14.03886353+02:00',NULL,'100.64.0.15','fd7a:115c:a1e0::f'); -INSERT INTO nodes VALUES(22,'mkey:5f7118e434c174dbac710b1d088db6213ca894ca3912a92688110ed5f96eb37d','nodekey:eb2dc8689a32bb671c62e8744f2c968b9ecf3a0e1e50775de39a0f2a4f4011aa','discokey:bc1e75ccfe1c7c282df689d559cc5ef08659f1ee198456bc82df99e13371dd1c','desktop-81','node022',6,'cli',NULL,NULL,'2025-01-26 12:49:36.385301301+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["146.144.83.178:46323","1.236.14.11:13477","184.41.32.72:1026","[f136:8b69:185a:7ec7:94a1:ca58:5d95:ba9f]:21119"]','2023-08-05 12:08:48.132161695+02:00','2025-01-26 12:49:36.385653182+01:00',NULL,'100.64.0.17','fd7a:115c:a1e0::11'); -INSERT INTO nodes VALUES(23,'mkey:94d0eadf801d88022019f9353b7c29cafdd060a269a0245396e01f1293953cb0','nodekey:1db769d2ca42bc0f8b4e18b5c89a31fb4595edf0592ce1a534b713ba503f07e9','discokey:912056b78049017764c78a047b65feb3c3b934fa26ac62b44f8d7d561711321a','srv-85','node023',12,'cli','null',NULL,'2024-12-07 19:25:56.935152754+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["21.107.170.69:30767","212.18.227.41:50735","[f774:e89b:980f:975d:fd3e:93f:a9b5:2034]:46054","[7225:d0ce:d29a:59d8:f596:b945:9a02:29ed]:43419","[5a76:d5b0:b51d:6b56:afaa:3968:277b:192c]:19250"]','2023-12-15 08:05:56.592241745+01:00','2024-12-07 19:25:56.935317645+01:00',NULL,'100.64.0.18','fd7a:115c:a1e0::12'); -INSERT INTO nodes VALUES(24,'mkey:247cad9ffd7d108f553e7658db14bbad992332458abb9b4bc5a0c0d3a13ead9f','nodekey:bbfb569ac8a34cf4d59d2bc13d0f03660d24aedf22c61b56510a65d20bdf0812','discokey:61f92520602549b9bc684eeed3186f608a38e13794d06d4b8d165c4efbb69a24','laptop-29','node024',12,'cli',NULL,NULL,'2025-01-29 22:13:40.47247515+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[5b7b:e424:ea77:aa8a:f835:d6bb:2e1f:52ea]:19152","[bb06:d322:7cec:426b:ef38:55d0:e7ce:95bc]:31597","[320d:6314:eaa2:a098:8594:977d:27b:8442]:12827","163.235.127.125:61230","208.233.231.156:35248"]','2023-12-15 11:14:36.765183054+01:00','2025-01-29 22:13:40.472698887+01:00',NULL,'100.64.0.19','fd7a:115c:a1e0::13'); -INSERT INTO nodes VALUES(25,'mkey:232be2297acbce00785cbdff20dad9faf2b2147e15eeab1a5282fd2a48e7fb6a','nodekey:87b7b1f5e238a246897514dc0b57eca6403f8483bed626a9400c6ebc54161544','discokey:d87d208ca22717686d41e21c9adba999965a5cbcd4336b3dc3876b10f58bfaff','web-78','node025',6,'cli',NULL,NULL,'2025-01-30 19:05:56.81412934+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["160.27.109.39:47606","[62f5:b64d:d2e4:9015:b03a:c64e:656:8c59]:5020","3.166.190.93:19852","[243b:de9c:b8c:ea69:b66d:b3ce:14a5:9671]:22747"]','2024-01-05 17:32:40.940566279+01:00','2025-01-30 19:05:56.814546683+01:00',NULL,'100.64.0.20','fd7a:115c:a1e0::14'); -INSERT INTO nodes VALUES(26,'mkey:0fe76a3827debaf6af306bc25af56334fa1e02f73245aa70c508bc3e253c84cb','nodekey:0926a8072f34e14c1131b1e3c92402a4b8cf0dca5cd36a6dcf90887ec2890d41','discokey:69cd5153481aad99ea93df0e335d8aa7d8fd36171e49e1c6d11ef5830af2f7c5','email-86','node026',6,'cli',NULL,NULL,'2025-01-30 19:00:06.767381943+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["64.137.67.1:56198","163.42.128.241:47535","49.160.22.202:42427","[1526:e34a:8857:394e:bbe0:c043:4b37:68d4]:5245","32.31.195.128:7039","[c50a:ec:e2e:e109:3a08:f56b:2928:4921]:61265","221.147.144.138:44737","136.208.142.64:41091","[537d:7a83:c891:ac8:9460:e49:e769:988b]:55062","50.23.199.223:4814","[a2b8:ea55:f9e3:2c00:f423:8c56:7fec:c617]:53793"]','2024-01-05 17:34:19.811670479+01:00','2025-01-30 19:00:06.767773381+01:00',NULL,'100.64.0.21','fd7a:115c:a1e0::15'); -INSERT INTO nodes VALUES(27,'mkey:95e3d9972a77bda2f7b243091fbd9286ebbc415c108cd984c15b419b82867a59','nodekey:4c7e3c311374ae8f7b4983bfde91b950234915f4fef2da3445702f1d474f21da','discokey:4b5e41432c037af086249f5b0f7552da9cd523ba84092ed303f09e27d294859e','web-93','node027',6,'cli','null',NULL,'2024-01-16 14:32:21.570104175+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e56d:6121:30ef:4870:5bf5:1066:8aa7:dffc]:56225","87.98.228.228:42410","172.81.188.131:23453","[cbf3:dde3:47ec:2c76:49ea:93f1:f43c:d8d4]:39412","[7267:209d:7e:6a59:8e21:b285:91d8:fc9f]:1315","217.68.74.145:28707"]','2024-01-05 17:48:25.466030859+01:00','2024-09-18 16:15:14.040766068+02:00',NULL,'100.64.0.22','fd7a:115c:a1e0::16'); -INSERT INTO nodes VALUES(28,'mkey:4a6e46330ccd4b3026337cdef37485c27592babda329107565c2a19d97253dcc','nodekey:17de47da1955a8a034788ae671c34fb10083568bf9e2f7494276cb2636065246','discokey:449f5f35a3507974cf03e3ac5c055d890de5122f57a7c2d84a0d3b324998607e','desktop-17','node028',7,'cli',NULL,NULL,'2025-01-30 17:08:57.494420371+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c2f1:82ab:922a:9b9d:f53d:a0b3:bccd:f6af]:299","[d28b:7b33:1f88:7583:7ed7:a923:4c90:9ee0]:62318","[68e3:a070:f02c:708c:a057:b579:aee9:4d25]:11747","[7224:b76a:cd04:e6d2:67fd:fec0:2f14:1837]:29002"]','2024-01-15 09:34:54.847632697+01:00','2025-01-30 17:08:57.494670314+01:00',NULL,'100.64.0.23','fd7a:115c:a1e0::17'); -INSERT INTO nodes VALUES(29,'mkey:6472928030d718e6e2802fb3bceba7c14b4716d994be1b3562a1d03104fd37a9','nodekey:78bd59aba83429486e0ed2d50873211974650b2e2ad8f79dbc07c5d13d00c5a5','discokey:86ead449e1be5a300287e690fb9522ba4da3e5bdeb593d6261e4117708fe3ade','lt-82','node029',7,'cli',NULL,NULL,'2025-01-30 16:03:46.587400142+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[fd0d:f112:f642:c343:ac14:8ede:dc04:e2d9]:58412","[8a1a:5c17:bb81:bb69:c7db:3513:f14c:ca2d]:52077","123.235.220.59:59925","155.210.184.45:60093"]','2024-01-15 15:18:12.2871978+01:00','2025-01-30 16:03:46.587530619+01:00',NULL,'100.64.0.24','fd7a:115c:a1e0::18'); -INSERT INTO nodes VALUES(30,'mkey:947247ecfc46c57c6806008c2edac71e9e88b87903b4814f1d4f2b3ae3b4824a','nodekey:d19005205b9e8a43b2727bc1f9bc44acb0ced560304f2aad60b335de457603a9','discokey:882d6a41700c79c95d228db394d4a59244a1c2f39493b658400d95bd162b957a','lt-04','node030',7,'cli','null',NULL,'2024-02-05 12:14:40.065688294+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e868:f68f:cceb:8d29:5451:c716:bdca:437d]:41081","[7016:b48d:8bcf:4ccd:8679:8c47:1c66:cd1e]:41980","22.231.20.85:41616","129.130.98.189:47833","[99c0:d769:2f38:10e9:c01a:e3d:898e:afc]:17505"]','2024-01-15 15:21:43.217136004+01:00','2024-09-18 16:15:14.041757308+02:00',NULL,'100.64.0.25','fd7a:115c:a1e0::19'); -INSERT INTO nodes VALUES(31,'mkey:c4853fdd63bed5a98baaf74df437eff743f8a968fda613fdf1f2a111e7c1b100','nodekey:9fc3f32ce92ab7f8ab987f16258b11265037a47cd9a1a68ac0e07bd3c279c183','discokey:e52f22aa8e170b6a031e682b5ed6e5f790f2d6ae1eecb752de1b362a1cce7bd4','web-89','node031',7,'cli','null',NULL,'2024-02-22 
08:35:27.098819037+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["133.177.207.78:47765","[d1b1:653:cd17:f39c:8a89:85d0:c7aa:b53]:5279","107.140.203.49:9365","[1660:741a:160:67c9:7972:5f9d:14bd:ee53]:21178"]','2024-01-29 16:05:35.338524634+01:00','2024-09-18 16:15:14.042191514+02:00',NULL,'100.64.0.26','fd7a:115c:a1e0::1a'); -INSERT INTO nodes VALUES(32,'mkey:8c468799222af319e17ec309dbbf3a87fb6d924a1cb17e5a0bd7e027fe3bfe5e','nodekey:3c7d82f4a44cfa1469b957ac911e8506a68962fb2583d2afaf3e8a621c0c4aca','discokey:ba179b51f3540e619eaac9a6d4bc67403daa7c3cf28a6b579087cd6d284501b4','lt-53','node032',7,'cli','null',NULL,'2024-04-09 13:59:43.37062537+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d21f:d7bc:f690:e3ba:82fe:8122:c33d:5a86]:227","10.67.113.92:45634"]','2024-01-30 10:41:58.31917869+01:00','2024-09-18 16:15:14.042506082+02:00',NULL,'100.64.0.27','fd7a:115c:a1e0::1b'); -INSERT INTO nodes VALUES(33,'mkey:be7f190677fd0cd1c5e78b4ae5b02a1738dd60be7d9a41017000b22ae9636af4','nodekey:a404e578d3edd3e3d879e9f1946c859b5982c59f5e528bf6b987bd55de00b25b','discokey:ad3013f116c370866fc9134b66999bb05e2dc10a8e2944230c3d9893c0a94c67','lt-80','node033',13,'cli',NULL,NULL,'2025-01-27 21:29:56.078640865+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[af8:f5e8:32f0:75ee:d351:c0e9:78e:93d9]:12624","71.151.27.31:51063","[f0d6:21dc:48f5:63f4:daff:a3c4:7d60:51ea]:17286","[38c5:9551:f6ed:b087:91ba:bef1:771c:273d]:62171"]','2024-02-03 16:33:26.706408143+01:00','2025-01-27 21:29:56.079623854+01:00',NULL,'100.64.0.28','fd7a:115c:a1e0::1c'); -INSERT INTO nodes VALUES(34,'mkey:7784df3aabe1ca1edab5270028fb3cae8b66cee2384790d06969d1e6e298520f','nodekey:fd8e939fd6db8caf6623ac3d73c12ade4da110d8c1f29e3a3ee0d2be0c526915','discokey:59aff4582e00d92260e6a90d6d0da3d818fe8f52c081c8b65e18006ef8269ebe','lt-65','node034',13,'cli',NULL,NULL,'2025-01-30 19:07:28.343976513+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["71.155.69.236:8707","125.2.67.155:50785","32.6.80.181:17295"]','2024-02-03 16:42:32.683785672+01:00','2025-01-30 19:07:28.344657417+01:00',NULL,'100.64.0.29','fd7a:115c:a1e0::1d'); -INSERT INTO nodes VALUES(35,'mkey:ad3838fa63fb120ae8d3e47d4bb6bb4009e7fc5d91483d1ba7417d0681fd7162','nodekey:b10ddbdcf826d79049d74a7d4ad036b499c86f7daf786bb1a9b295366b57afe8','discokey:618ef629d3bd261964c613ad3bc45e6278fa274d734c8cd2d3396cf8c9742d88','web-33','node035',5,'cli',NULL,NULL,'2025-01-26 12:49:34.225736739+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["169.159.71.157:18240","[bbb7:5b4d:218d:55ff:ac9b:1fb6:c933:9a55]:63943","18.156.132.228:63356"]','2024-02-03 16:51:52.010016072+01:00','2025-01-26 12:49:34.238493375+01:00',NULL,'100.64.0.30','fd7a:115c:a1e0::1e'); -INSERT INTO nodes VALUES(36,'mkey:c536a3efc460fb2da4c4775e2b51d0c84f001dc46a4d902429e74e97b2d3c6d4','nodekey:91da2acfa3d7f7c7b0c1a2d952ee527514504fddc504809423762203a16145ab','discokey:989ff5f21d485dcb6312fade38c9047bbde1fbce46bc29ac5f6afe00610ad215','laptop-46','node036',13,'cli','null',NULL,'2025-01-19 14:01:48.956567669+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[4017:a243:61d0:b302:7860:18f:7d4d:f605]:28633","[e5b8:6c78:ce2f:cf6f:13fb:401f:8223:3f04]:54121","38.137.88.247:35801","[eec2:61c0:e20a:2fa0:5e4d:56a4:5071:881b]:36628","[b8c2:c792:3329:2c93:f94b:a948:7d11:b9f2]:908","[5925:353c:bd9a:f42e:1916:45b4:c170:e515]:18400","[c2a5:cc2f:e2e8:573f:7b01:4aa:8a1a:fd71]:32146"]','2024-02-09 12:34:57.879970954+01:00','2025-01-19 14:01:48.956830267+01:00',NULL,'100.64.0.31','fd7a:115c:a1e0::1f'); -INSERT INTO nodes 
VALUES(37,'mkey:142e67963c9045164df7b33af6d2149d72f59527df44fd75221647281444cbe7','nodekey:a3a2cd2bd0c3a1f783d0391ea1951c5c9226ddca0f18130b77fccff8b874d694','discokey:bdb788f8825b9bd816f37dcea882920382c7feb143edf0a6baefc9015774d8aa','lt-24','node037',5,'cli',NULL,NULL,'2025-01-30 18:52:12.369500276+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c31d:dd18:321:589:11fa:d85b:695f:1fca]:43064","168.4.78.117:48474","90.45.218.0:29326","90.233.46.138:29846","8.137.133.59:22781","[cb5c:f8d4:ccb0:8333:8ec:170c:a2b0:941d]:26248","[5802:df5c:8853:4851:6dcf:cb83:8208:d143]:24568","48.38.104.41:43702","139.100.200.227:62546"]','2024-02-27 12:14:40.452601042+01:00','2025-01-30 18:52:12.369790328+01:00',NULL,'100.64.0.32','fd7a:115c:a1e0::20'); -INSERT INTO nodes VALUES(38,'mkey:6829f2d5523aba7a27c31e59e2341f3feb5a5a72a0268ef10d23c39811ea3b53','nodekey:10e4ff66fdd37fb54f6dcb0e42f483a15b6340610acc3a4517c24f0f350b5a9a','discokey:f20786f9c7edc967d0b415bbcce39468441af3b531ff15656c0683d3f328720a','lt-88','node038',6,'cli',NULL,NULL,'2025-01-30 16:09:13.024420431+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a393:f17f:6728:82e4:c1e7:2367:82d0:daaf]:18560","[d192:9897:4cd7:bfb5:de36:fc03:8cf4:c60d]:25096","212.199.67.213:11150"]','2024-05-22 08:08:16.045350656+02:00','2025-01-30 16:09:13.20760575+01:00',NULL,'100.64.0.33','fd7a:115c:a1e0::21'); -INSERT INTO nodes VALUES(42,'mkey:33e764ae40265089c5e7dbfc8571aa23bb390fc7e1e8bd2f3e0cd6891c308214','nodekey:c444c9ff652ba29181a5f7fdf991f91f704e6698237ee18b0649c81879ec932f','discokey:2e1db3a1eea266673acf7826aa44b5aa6fdd23c6ba41330562b37c5a523ba358','laptop-81','node042',14,'cli',NULL,NULL,'2025-01-29 16:46:30.352621883+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["190.179.215.222:61930","45.131.187.230:22916","[510f:af99:ed4d:f53a:fc21:8640:9ad9:4930]:16050"]','2024-07-03 11:12:29.418355657+02:00','2025-01-29 16:46:30.353292973+01:00',NULL,'100.64.0.37','fd7a:115c:a1e0::25'); -INSERT INTO nodes VALUES(43,'mkey:353844c4b332bdbf7f39796c522ee89d7f64fde342d3e566878f456df0d57a54','nodekey:c9f47cb50fd2b9075cda9d68018659491ac29dbbe7a04e8405039fd385966fa7','discokey:ec43638237bc43ab6a9839b3a4d54c2593e11a2ebd13bd491f08f68d7cf93ac0','desktop-90','node043',14,'cli',NULL,NULL,'2025-01-29 16:39:14.757499741+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e071:3cd7:878c:ae2d:7dcf:eab0:d0d1:decf]:37427","17.31.200.70:26794","[183d:f2d6:b80:36c5:318e:64e9:7f4a:b389]:12440","169.173.198.12:15274","[e5df:1889:6ac5:7d21:4dfd:614b:7eb9:d93c]:792"]','2024-07-03 14:48:50.263910778+02:00','2025-01-29 16:39:14.757996869+01:00',NULL,'100.64.0.34','fd7a:115c:a1e0::22'); -INSERT INTO nodes VALUES(44,'mkey:6d3e50168ff6e672ed7650abc5a1de7fec7150c6f84932c705c84964824d1d39','nodekey:ad70f51f99efeefdf0c8e0a9e818db1e5de474e67b2b0a1609a53190339938ef','discokey:c4d4ad19814f9312b454cadb85b333f80cccca1b4a42f30780f7a0f4ccfe1bff','laptop-97','node044',14,'cli',NULL,NULL,'2025-01-29 16:27:57.81216036+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b501:efaa:c73b:b492:c3cf:865:d55b:9557]:26172","162.210.141.106:52036","[84ef:a164:d1c:d20d:797c:753e:f8fb:3108]:5767","[c15a:e322:a721:c9a7:a662:ccb8:26b8:e1c4]:63192","[87c4:c36b:d96f:525b:fa4c:c5f6:7bc8:d6bb]:35894"]','2024-07-03 15:23:48.066044194+02:00','2025-01-29 16:27:57.81269748+01:00',NULL,'100.64.0.35','fd7a:115c:a1e0::23'); -INSERT INTO nodes 
VALUES(45,'mkey:e1772e6b138211afff6aa8fee08438c311b1d9d32576075c980d39591b13ae77','nodekey:92b8cc28bff0c53f95fbdd8c1363040100ae849e78aea04a3755e9a1fd03fe4e','discokey:7b368b5d64b6e4fb4f0ba9e5809bee8b769949ecdc69ec6ee6c9f12c2321fced','email-94','node045',14,'cli',NULL,NULL,'2025-01-29 16:48:59.640158179+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[caa0:946e:b130:39aa:31b0:8f9:7526:4960]:1729","120.102.247.255:41936","28.177.95.32:7841","[fb23:5998:c63d:b6c4:e86a:96b2:61c:e110]:51902","[89d4:f6c6:ccf6:b19d:3b76:642e:7a36:d229]:930"]','2024-07-03 15:54:01.706018896+02:00','2025-01-29 16:48:59.64070889+01:00',NULL,'100.64.0.36','fd7a:115c:a1e0::24'); -INSERT INTO nodes VALUES(46,'mkey:af6c1d228b675ff04d2584e2c23d6288ec312a6b44c5c0eed213423a11711787','nodekey:f5b21f25bb22b5cf0cedb1b643d50c3a6512ef300cc971eccccbcc500cc23408','discokey:72843a1b2564c177d8556eb0d76dda7cbe0d9f1408bd250fcc6149518eb6d5c0','desktop-41','node046',14,'cli',NULL,NULL,'2025-01-26 18:40:46.403363149+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[37a8:9998:e94e:e297:940e:5414:5549:87c7]:37130","27.138.122.133:37304","98.38.29.194:12660"]','2024-07-03 19:38:07.783745318+02:00','2025-01-26 18:40:46.403759275+01:00',NULL,'100.64.0.38','fd7a:115c:a1e0::26'); -INSERT INTO nodes VALUES(47,'mkey:5bbb77092f21b4835b0ae1af3871fc48445819c9b6e5ff9c46ad4a623b662ae8','nodekey:191c7cc089ee160c07b63e2b6b5ef0701d76d914009420baddad9bfce13e3f2d','discokey:712cadffd23f106abeea82ae70bbae8e06c2a58f3db617afd0d149b3012c64fb','web-61','node047',14,'cli',NULL,NULL,'2025-01-28 14:54:17.570894466+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["186.118.23.242:32373","[5bd8:a5a8:1b3:5e41:3c4d:5fc:4c0:c6cc]:24847"]','2024-07-04 10:38:08.344092869+02:00','2025-01-28 14:54:17.571166771+01:00',NULL,'100.64.0.39','fd7a:115c:a1e0::27'); -INSERT INTO nodes VALUES(48,'mkey:a1e793aa09c20360cb4ef57e7dc42171a0abbc9f562140609e930ba5b7b12e83','nodekey:3343520915898c9d90b7517eabf8660d66c37ed12aea0b5a4935458f95c801c9','discokey:6a80e23d10d2cb3edb58c9c47ee997f9063d6aed2f29049f635b6bf1da090885','web-24','node048',15,'cli','null',NULL,'2024-10-20 13:53:33.831192385+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["101.103.129.11:50383","[3aab:52a2:b9d3:1816:5627:336:6c60:57d7]:13486","22.83.180.41:14147","26.126.135.206:7777","[8706:d109:5006:f832:dca7:58e4:4451:be15]:15180","177.4.27.29:60895"]','2024-07-26 08:09:56.608302315+02:00','2024-10-20 13:53:33.831387627+02:00',NULL,'100.64.0.40','fd7a:115c:a1e0::28'); -INSERT INTO nodes VALUES(49,'mkey:13b8d4cafbdb8e9ab77928219aa57bf778878866ad8e6108b60aaab8dc8cec36','nodekey:17cefd68f8a9cc20b3d4fd93598f77ddfd6bd5a7e6e4b49861f9faad8623e0ef','discokey:939a04ddd2f9a24468e07ee7924dbb7a7662c1e8a1b67426f57fd48c7394448b','desktop-40','node049',16,'cli','null',NULL,'2024-09-19 09:07:18.28136023+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[19b0:8be2:708:d27f:4d3d:3c93:9979:e9e8]:50438","129.0.5.112:19015","[91e2:54c6:93fb:ec32:6ab4:186d:81cb:b815]:37323"]','2024-08-05 17:32:41.937626584+02:00','2024-09-19 09:07:18.281618912+02:00',NULL,'100.64.0.41','fd7a:115c:a1e0::29'); -INSERT INTO nodes VALUES(50,'mkey:03979d5446bb3f596bf0234d81ca84482eacb132e874ceda6cc2703422cd728c','nodekey:805c03cada525f14ee90d28ea02b23a3f57b470c793314872ad9daacf6c6484e','discokey:eb4135e1cc73a3c6b513d8cc04ca0c55a2aef7b0009845fa1d2036488aadf0f8','desktop-02','node050',10,'cli','null',NULL,'2024-08-07 10:10:06.595550455+00:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[9175:498d:64ac:4342:2eac:ffa1:99fd:d109]:21202","[5cba:56ef:db08:320c:333a:541f:605c:44d5]:57469","150.211.2.236:50360","203.180.122.83:13344","63.162.234.32:11653","[6b63:f552:f930:e22c:fc42:3b41:c5c6:6d5f]:63162","114.48.12.185:47837","[2964:cc5d:3a1b:7889:acc8:4d8:22c1:398a]:43515"]','2024-08-07 11:50:54.144157179+02:00','2024-09-18 16:15:14.050033969+02:00',NULL,'100.64.0.42','fd7a:115c:a1e0::2a'); -INSERT INTO nodes VALUES(51,'mkey:5877139c52c0bf3c0fd054bd046aaf4530dd8946548a7bc1a06733649d1557d8','nodekey:553d9638c8c455e8d1bc811ea3af03db1eb3c586413447c8003e7d905a61d06c','discokey:1da883c9eff402d786362aaac2f478c835308b3acec0993e6293becf4474609f','web-62','node051',14,'cli',NULL,NULL,'2025-01-26 12:49:36.351868783+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["86.215.174.228:26026","93.101.159.155:64005"]','2024-08-07 14:19:31.156780417+02:00','2025-01-26 12:49:36.353124877+01:00',NULL,'100.64.0.43','fd7a:115c:a1e0::2b'); -INSERT INTO nodes VALUES(52,'mkey:0e64072dafa5f9e49266b41fc4b21ff7d287be8f4e5f2b6c395487d632f1cf3f','nodekey:2f77bb8cbeb426bcded5ad9fc6da44b0b76b8d6759f10a8db28e634290760ee7','discokey:836558738056edd1fa8d190c7e42f120b365a48a3b1f1c96c87755bb82d7ad0f','desktop-07','node052',17,'cli','null',NULL,'2024-12-25 17:27:58.515851096+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d889:f751:710:257f:977f:5597:9908:14e0]:61162","[f83a:5ebb:911d:ef36:743c:8086:8a4c:a5c7]:23538","[dc45:365c:e426:71d7:fe23:fdd4:d2c2:9409]:57262"]','2024-09-22 15:48:41.385301399+02:00','2024-12-25 17:27:58.517153789+01:00',NULL,'100.64.0.45','fd7a:115c:a1e0::2d'); -INSERT INTO nodes VALUES(53,'mkey:a6f93e99a5019b3b6a2941d95dc7fcbfea5a59317a48b99db93b7a3f4f0a65bb','nodekey:1f78a6d76ea4a2135c0da46b202560c86ed30cd7e8239d4508088d0e02ffb961','discokey:1744062452f052aba5320e37fc08aaa37f1430db7c32c94c5626f0f3c858de3d','desktop-82','node053',17,'cli',NULL,NULL,'2025-01-30 18:44:17.370143702+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[6084:ee60:1697:864e:4a61:cf6e:e9b6:4c05]:63961","[fad7:2601:d286:4772:3cce:6e89:f66e:9eeb]:5430","216.60.124.11:45041","[d513:c472:d7a:f316:f610:10d0:9851:4feb]:3955","156.228.105.157:61542","102.126.185.0:50694","[156b:e934:d171:2693:8db4:f193:a58c:17b6]:24190","79.255.179.99:56057","[6369:4412:8cfc:2a5e:7cfb:8b45:e2ac:f98a]:16090","23.0.29.75:24889","[ab0:1193:7bb7:4a4a:c036:b911:994f:3aa4]:5116","[9491:7bd5:3979:414a:d198:a8b:f2e7:29b4]:62795","140.73.225.124:1710"]','2024-10-28 10:04:50.084492941+01:00','2025-01-30 18:44:17.370913494+01:00',NULL,'100.64.0.44','fd7a:115c:a1e0::2c'); -INSERT INTO nodes VALUES(54,'mkey:e47e8ca8e89ebf89a399a8206916ac7a55f855b449acf80c7d27b6b225aac678','nodekey:377793cdc9eb6f2279806b2b70d0fea24c4f4c1dfdf92771eecd9199f67d80e9','discokey:299a45d53358f6a43234df87ceff28e642433aa06e8341862e8557d1d499ddae','email-83','node054',14,'cli',NULL,NULL,'2025-01-30 10:34:07.584992479+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[7d2a:9d5:1fbb:54a7:fe7a:bc40:5ea2:ca20]:3534","[f630:8bb5:a4ff:95b1:6d95:decb:a5d2:373b]:42456"]','2024-12-09 17:10:55.363593066+01:00','2025-01-30 10:34:07.58575113+01:00',NULL,'100.64.0.46','fd7a:115c:a1e0::2e'); -INSERT INTO nodes VALUES(55,'mkey:c0b2d0e2ff266bfc53e07b246f000847c7d0c86649152c2bc6a421443627c733','nodekey:105f9b052032e510f9ab817af7df0ea561dc304223ceaf75c9116ce487bbfe86','discokey:cfb384a16ce161420b9bd42615dfa45ca547084bf61f3728ccf342d8693a9383','lt-74','node055',18,'cli',NULL,NULL,'2025-01-30 13:48:04.74697862+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["155.233.51.24:54409","191.94.7.112:26528","[1f30:3a9a:19b:da4e:a3d7:9a55:707b:d112]:54974"]','2024-12-10 13:56:39.287449662+01:00','2025-01-30 13:48:04.74733257+01:00',NULL,'100.64.0.47','fd7a:115c:a1e0::2f'); -INSERT INTO nodes VALUES(56,'mkey:5b28e1fc6b67cf4a900aeed7c413f06b83e8e9de3f5fada3cefac31ef51ac932','nodekey:8f015f994d88c5348b98c2cf40f9f224f85787941044438f4a9ee34cabbb5d4f','discokey:5b44ab87d7620b22294cae6536c69b272e9ca9ade6812bf4f41375fcb44ace7c','lt-86','node056',19,'cli',NULL,NULL,'2025-01-30 19:03:58.987501999+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["44.230.54.249:64083","29.229.203.183:13"]','2024-12-17 14:58:39.429211911+01:00','2025-01-30 19:03:58.987864718+01:00',NULL,'100.64.0.48','fd7a:115c:a1e0::30'); -INSERT INTO nodes VALUES(57,'mkey:200d4ac92d1084e22e9cdcd0a74ac2a12e33a279dd2bd9aa88a779c92c82bf35','nodekey:48c2e3c2c51da5ce63d764bb3698482e96746c543f2c728235ea8afe971202b2','discokey:2b3dec80b092c04bb25c9c39bf18e7f85d38dc666de1fff2dd7a6dd95e6af162','desktop-62','node057',20,'cli',NULL,NULL,'2025-01-28 11:01:52.058651587+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["178.193.134.41:58563","29.219.180.224:55412","[2ddd:7c1b:a237:41c8:3baf:e9cc:1f31:9057]:62916","[9c34:190d:7bf0:c940:18d6:f96c:9aca:8c0a]:8787","181.226.177.82:63703"]','2024-12-17 15:17:14.26936913+01:00','2025-01-28 11:01:52.058860063+01:00',NULL,'100.64.0.49','fd7a:115c:a1e0::31'); -INSERT INTO nodes VALUES(58,'mkey:4d690637aa5cdd6575d9dbb7fa0b55c83bece3e82bc689e73122c9120f871772','nodekey:8733914d48b0c34a0111ec5f633123ae4ec7de39e597077d659ab8f2354270b9','discokey:ff25bf692a69cc0e79101e1cc1b926ca97b6e3897e65c55e5bd3f09ca1ea6d16','lt-20','node058',12,'cli',NULL,NULL,'2025-01-25 18:41:03.881898904+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[57a6:fde1:814:36cd:f8de:668c:f5fb:4f60]:39714","[f08d:aeaa:107:fc17:5c30:ab9a:e31e:3147]:55879","80.27.167.78:21548"]','2025-01-17 10:17:23.455895657+01:00','2025-01-25 18:41:03.882180987+01:00',NULL,'100.64.0.50','fd7a:115c:a1e0::32'); -INSERT INTO nodes VALUES(59,'mkey:581824fee9a91fdacfd3529d7e28548440259284e3f3251d00981c34ff1817a4','nodekey:ef7f1fd978b01f30758074df76c5eab42279ba200d7e3386fd6545e2d2865463','discokey:1e77e9eb061b34dd246f44cf32bf818b431948fed86d4dd02de112845caad4e3','srv-53','node059',21,'cli',NULL,NULL,'2025-01-30 18:54:11.030475733+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2e30:4d2d:4925:201f:e845:2a12:503c:9524]:34552","105.198.43.87:48215","214.199.117.40:30901","58.251.220.64:42364"]','2025-01-29 11:59:27.291048957+01:00','2025-01-30 18:54:11.030888364+01:00',NULL,'100.64.0.54','fd7a:115c:a1e0::36'); -INSERT INTO nodes VALUES(60,'mkey:1fde1d60f3aacf01ce0d5faa329ebe3da2465dfc4a568e31648b06b313a690d4','nodekey:e1038e05b45f270e2d47d90898a9f599253421e9046a799d394b623766a81aae','discokey:d5e3dddd1a9422458643058ed2d8fd8e9da0cf3b4378cf4b421672cf1e13d71d','desktop-08','node060',21,'cli',NULL,NULL,'2025-01-30 19:04:25.435259221+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["15.226.94.222:41465","109.183.111.101:20875","[912a:9b6c:5845:f8d4:f476:491f:cdef:8b7d]:21677","22.203.32.217:28664"]','2025-01-29 12:01:57.48748166+01:00','2025-01-30 19:04:25.4358129+01:00',NULL,'100.64.0.55','fd7a:115c:a1e0::37'); -INSERT INTO nodes 
VALUES(61,'mkey:17196701e0a4b870bae52dd1fc0eeb26843f2ceb0745ad263e69ef737d9a5403','nodekey:a8c47bf5faf5110c298862a919bc7308eb616c92c110f7dfd38194a74b3ffe7a','discokey:4dcc87f0698a1e669485874eaacf9438090ac69deb20fb9cc693d7df75b8265f','web-53','node061',21,'cli',NULL,NULL,'2025-01-29 23:58:55.106877823+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[1ad1:4e50:7001:e922:d6b0:2b1e:c718:f3f7]:31291","13.175.9.68:14464","[e9e:965b:7b7a:3219:44cc:f0f3:e3a0:a504]:34840","[2476:14e8:ab74:a221:cfca:bd5e:64aa:83d1]:48008"]','2025-01-29 12:03:01.464646336+01:00','2025-01-29 23:58:55.107564914+01:00',NULL,'100.64.0.56','fd7a:115c:a1e0::38'); -INSERT INTO nodes VALUES(62,'mkey:e8ecee3b3c1e75bea1ee3d00a759ab31864c33f6c8df4a15c352d1b420450d88','nodekey:820bacc3e7d49bb90be56bb3297c98aed4c777bc1aed02b3b0b7b4301c8e445f','discokey:d16e93e7d7b6fbc451ec3576211f7c19398ca18f684fa285f5d6bd84be50286c','laptop-57','node062',21,'cli',NULL,NULL,'2025-01-30 17:23:37.246636502+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["69.125.5.213:11546","[68bb:e8d1:72d8:8b02:608b:2284:c9b3:184d]:20876"]','2025-01-29 19:23:14.092804852+01:00','2025-01-30 17:23:37.246792226+01:00',NULL,'100.64.0.57','fd7a:115c:a1e0::39'); -INSERT INTO nodes VALUES(63,'mkey:3e0dcb286f6e19b1e3a0b5330978aa8a8d710934cc754e811d0f690046e04469','nodekey:cd2d301b9b163b806fe8d63cdd07004f26559706bf7dc782645f05e12ebe0ec3','discokey:14675666cf2b55697f0032897d1c6742413d7c24bada8cfdc6246bf150d06fc7','db-76','node063',21,'cli',NULL,NULL,'2025-01-30 19:07:01.231463482+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["74.32.222.198:59615","145.91.193.5:37126","[2fb6:ea0a:3639:a1cd:9075:a258:6a79:4f5c]:13944"]','2025-01-29 19:41:40.535299057+01:00','2025-01-30 19:07:01.23207245+01:00',NULL,'100.64.0.58','fd7a:115c:a1e0::3a'); -INSERT INTO nodes VALUES(64,'mkey:1adfdd6ba590feffc711c40506ed5ecb16609cd6e90a536d50c5f54cec223508','nodekey:0d58cee32d99cde76b3d8769ae4605cb4bfc9cbb01fc13069c8b59faca320e48','discokey:cba00753c508df0b5cdea1e59eaa829c6cfda4f8399121307f931dbaa3730296','desktop-90','node064',21,'cli',NULL,NULL,'2025-01-30 19:04:27.551570027+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["172.210.49.58:58821","[23f9:9c6d:5069:a14f:33d2:c0b1:5ab5:f83b]:46288","[b341:3259:c5dd:9f91:fd70:c616:3033:2b3b]:59081"]','2025-01-30 18:18:57.519126133+01:00','2025-01-30 19:04:27.552291406+01:00',NULL,'100.64.0.59','fd7a:115c:a1e0::3b'); -INSERT INTO nodes VALUES(65,'mkey:547a8b97c8c8227d9ba40174ce4d28d0e64f10fae3f39797c60e29742ed59e62','nodekey:cb4a9d1dee6f581036b7c262348cf2c4a96be8b2eea78f7805097e3f7b1a9fcf','discokey:c233a42a4afb5fbf921bf7b3fb4330e6c46273d18570aaf6b7254dbf3c67614f','desktop-82','node065',21,'cli',NULL,NULL,'2025-01-30 19:01:43.552616838+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["21.197.131.68:13997","6.193.99.72:20874","181.248.237.50:21695"]','2025-01-30 18:19:40.354692307+01:00','2025-01-30 19:01:43.553264821+01:00',NULL,'100.64.0.60','fd7a:115c:a1e0::3c'); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2023-05-17 19:36:55.859473496+02:00','2023-05-17 19:36:55.859473496+02:00',NULL,'user001','','',NULL,NULL,''); -INSERT INTO users 
VALUES(2,'2023-05-17 19:36:57.059073465+02:00','2023-05-17 19:36:57.059073465+02:00',NULL,'user002','','',NULL,NULL,''); -INSERT INTO users VALUES(3,'2023-05-18 10:10:36.248939077+02:00','2023-05-18 10:10:36.248939077+02:00',NULL,'user003','','',NULL,NULL,''); -INSERT INTO users VALUES(4,'2023-06-10 09:06:13.920718561+02:00','2023-06-10 09:06:13.920718561+02:00',NULL,'user004','','',NULL,NULL,''); -INSERT INTO users VALUES(5,'2023-06-11 19:58:32.371218434+02:00','2023-06-11 19:58:32.371218434+02:00',NULL,'user005','','',NULL,NULL,''); -INSERT INTO users VALUES(6,'2023-06-17 19:39:53.031565686+02:00','2023-06-17 19:39:53.031565686+02:00',NULL,'user006','','',NULL,NULL,''); -INSERT INTO users VALUES(7,'2023-06-20 11:35:09.325846831+02:00','2023-06-20 11:35:09.325846831+02:00',NULL,'user007','','',NULL,NULL,''); -INSERT INTO users VALUES(8,'2023-06-21 22:47:48.196234382+02:00','2023-06-21 22:47:48.196234382+02:00',NULL,'user008','','',NULL,NULL,''); -INSERT INTO users VALUES(9,'2023-06-22 08:30:35.068995572+02:00','2023-06-22 08:30:35.068995572+02:00',NULL,'user009','','',NULL,NULL,''); -INSERT INTO users VALUES(10,'2023-07-03 10:18:32.123226+02:00','2023-07-03 10:18:32.123226+02:00',NULL,'user010','','',NULL,NULL,''); -INSERT INTO users VALUES(11,'2023-07-03 10:18:37.130387602+02:00','2023-07-03 10:18:37.130387602+02:00',NULL,'user011','','',NULL,NULL,''); -INSERT INTO users VALUES(12,'2023-12-15 08:05:06.013615212+01:00','2023-12-15 08:05:06.013615212+01:00',NULL,'user012','','',NULL,NULL,''); -INSERT INTO users VALUES(13,'2024-02-03 16:32:42.224977233+01:00','2024-02-03 16:32:42.224977233+01:00',NULL,'user013','','',NULL,NULL,''); -INSERT INTO users VALUES(14,'2024-05-03 10:12:38.220973042+02:00','2024-05-03 10:12:38.220973042+02:00',NULL,'user014','','',NULL,NULL,''); -INSERT INTO users VALUES(15,'2024-07-26 08:08:40.979783263+02:00','2024-07-26 08:08:40.979783263+02:00',NULL,'user015','','',NULL,NULL,''); -INSERT INTO users VALUES(16,'2024-08-05 17:32:02.878091894+02:00','2024-08-05 17:32:02.878091894+02:00',NULL,'user016','','',NULL,NULL,''); -INSERT INTO users VALUES(17,'2024-09-22 15:48:00.287392203+02:00','2024-09-22 15:48:00.287392203+02:00',NULL,'user017','','',NULL,NULL,''); -INSERT INTO users VALUES(18,'2024-12-10 13:55:11.256977421+01:00','2024-12-10 13:55:11.256977421+01:00',NULL,'user018','','',NULL,NULL,''); -INSERT INTO users VALUES(19,'2024-12-17 14:57:58.550971236+01:00','2024-12-17 14:57:58.550971236+01:00',NULL,'user019','','',NULL,NULL,''); -INSERT INTO users VALUES(20,'2024-12-17 15:02:08.053169491+01:00','2024-12-17 15:02:08.053169491+01:00',NULL,'user020','','',NULL,NULL,''); -INSERT INTO users VALUES(21,'2025-01-28 15:57:32.774456057+01:00','2025-01-28 15:57:32.774456057+01:00',NULL,'user021','','',NULL,'',''); -CREATE TABLE IF NOT EXISTS "routes" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -INSERT INTO routes VALUES(1,'2023-05-19 07:09:23.387641743+02:00','2023-05-22 09:48:18.908103256+02:00',NULL,3,'192.168.224.0/21',1,0,0); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -COMMIT; diff --git 
a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.24.2.sql b/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.24.2.sql deleted file mode 100644 index 9f45b84d..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.24.2.sql +++ /dev/null @@ -1,113 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'463a8b372963aeaca12400faa0c7ea29e9bfa3b4c59e9622',3,0,0,1,'2023-05-19 05:09:19.66636462+00:00','2023-05-19 05:14:19.664224869+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(2,'77a019cb12c0d0b9347a17ab23fa4c87983814fe36bb2fbb',14,0,0,0,'2024-05-03 08:13:55.8614948+00:00','2024-05-04 08:13:55.85782156+00:00',NULL); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`ipv4` text,`ipv6` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes VALUES(1,'mkey:0ea01e8cd608548a171ebcedc16da73b25ec18d58861be7a6b4274eb17baf6ba','nodekey:0fcf591dc1997c1d7388a3835bd24dae48240e5b03b6a7f5f33b2e813d222bcf','discokey:ed27f0b3c04222f0d05bfbc6fb5787338275ed0e32fd42fe68c669aca0f3d7a4','desktop-54','node001',2,'cli',NULL,NULL,'2025-01-30 19:50:45.306207411+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b8f0:cf44:22c8:82d1:f79:c3eb:9144:c24f]:6142","[8e25:af1:e31a:8fcc:7dcf:be1:a3c9:ed6a]:38766","[37a1:215f:bf3d:3fb1:6399:f7cb:4048:3ca8]:51666","137.242.125.72:25376"]','2023-05-17 19:38:13.531518257+02:00','2025-01-30 19:50:45.306340015+01:00',NULL,'100.64.0.1','fd7a:115c:a1e0::1'); -INSERT INTO nodes VALUES(2,'mkey:1e274f40556282a5bbab04b6754985233ccdf971b663e5b38fc027035e833e9a','nodekey:5b16c7e14191ca2e57eccd86ec636db605fa5dc2c8856be5fa7f9b087cf24890','discokey:5fd2f116700fd48ce06e35e8dd11c847ccd7694e01ec4b3f6ef25f67c14ee197','email-21','node002',1,'cli',NULL,NULL,'2025-02-05 17:24:25.870524261+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[5aae:55d1:7b5d:628d:bc07:b3e0:eca2:836d]:13912","[5583:8a0e:1564:ac5b:98ab:59a3:126:cba2]:35537","84.23.188.83:48876","44.184.117.89:36643","90.223.136.253:7056","[e43c:be7:7f4e:8217:a009:7f9d:a510:1af6]:18872","[3a86:865b:2f4c:f3b7:88a0:4aa2:f05d:c3fd]:47680","86.50.30.30:18216"]','2023-05-18 10:09:21.757289398+02:00','2025-02-05 17:24:25.870620505+01:00',NULL,'100.64.0.2','fd7a:115c:a1e0::2'); -INSERT INTO nodes VALUES(3,'mkey:4405716e66f6a07a8fea90183cb078a8b87108aef23965b1c65f51375f3882d3','nodekey:96f979ded4cf1a02655d0b9069fb14e04d2da09c182624313acccf8c7f6b67a4','discokey:f0f4a89d7893303e1a336c9838c9a55b36a5215da006fccb72c237a794f4c5ea','email-51','node003',3,'authkey','[]',1,'2025-02-01 06:03:00.282713364+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[7cd1:a579:7ddd:ed37:5f70:5783:8e09:1968]:5323","110.83.0.14:9543"]','2023-05-19 07:09:21.399903526+02:00','2025-02-01 06:03:00.284151738+01:00',NULL,'100.64.0.3','fd7a:115c:a1e0::3'); -INSERT INTO nodes VALUES(4,'mkey:9cd192ff56bc94ec9439083852bb541469d6725e6cda7a2aa624df6e8c8c380d','nodekey:ac53dbdd1bb1c2a36a1dde021e8f5f33d2cf01ef2892fe74853e945b9a08fda3','discokey:57cae29ceafb73857b478955a77927ff5bd8f38c313a869be093f92c6f6303da','email-15','node004',4,'cli',NULL,NULL,'2025-02-05 18:08:25.40794648+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2421:d8be:14:6b5f:7fe:56a8:54b6:8d58]:25207","[20b0:e4f:96fd:1cf3:5d39:ad75:6b8e:d0ce]:51455"]','2023-06-10 09:31:51.940506933+02:00','2025-02-05 18:08:25.408568806+01:00',NULL,'100.64.0.4','fd7a:115c:a1e0::4'); -INSERT INTO nodes VALUES(5,'mkey:7f9c690aab22d9db060086ea375d86924359210816ad562dab2db804ad3434c0','nodekey:88fe0bcd504fd98c35a71484820dbf7b91c1704b9d29c0c7950ce5ab06ef2d17','discokey:fbaef49f1a2c61469168a9e81ee6c5bd49a777048288975a22c0f253aa14e2e3','web-24','node005',4,'cli',NULL,NULL,'2025-02-05 18:06:41.054161543+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["186.84.191.82:2621","[c9d0:4e15:513a:171b:9911:f9a0:cccf:cd83]:24151","66.84.158.199:63451","47.94.115.240:6677","19.74.30.92:1284"]','2023-06-11 13:56:42.694329408+02:00','2025-02-05 18:06:41.054829684+01:00',NULL,'100.64.0.5','fd7a:115c:a1e0::5'); -INSERT INTO nodes VALUES(6,'mkey:00e154160e4bd8db73f2707c7a5b2f0bae04f3e26427a1aca5b0dcf2dcc1872f','nodekey:4f61b1414dd622304d407a7b4812640f774247f0c3d369a2292e3f4e88383c17','discokey:1028826beface553dd6ea7e553cc63ca59fd529716cb639981d1aa2965ed8885','laptop-48','node006',4,'cli',NULL,NULL,'2025-02-05 18:07:48.852563038+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["220.204.45.94:59135","187.96.173.136:34092","[fece:76ed:4c23:1e89:f225:97a1:94be:b5c4]:33472","[e1e6:a14:535a:668b:698:f125:543f:b5b7]:31740","[b8e3:18de:5ece:3e24:8bbf:1cbd:d27c:7de]:53522"]','2023-06-11 13:57:44.975695604+02:00','2025-02-05 18:07:48.869613078+01:00',NULL,'100.64.0.6','fd7a:115c:a1e0::6'); -INSERT INTO nodes VALUES(7,'mkey:69a5389410061b4c336c1f337ae3ccca903f729ff249b6f1e556dcb9d9a3b7eb','nodekey:54c0d87c263afc07b7a3bb5569f8d6b67c58fecdcabefcbe7b8057e91e573a31','discokey:765930463b973ddead698512fe882cbe5e13885d14d44808596600efbcf5303f','db-22','node007',4,'cli',NULL,NULL,'2025-02-05 18:08:07.927513966+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[9b54:61a6:c678:edac:956f:741e:5e2c:9ec9]:17930","[cd3f:af1c:fcb4:bcb3:806e:adf4:3f3f:ab36]:58513","[7a83:6a84:9f15:ba1a:9671:3d54:2e31:d5e9]:64893","167.223.65.194:6907","[debd:ad2f:ae50:cd14:9cd2:b297:bb94:d03c]:24266","[489b:6c59:1877:6215:ddc8:57f8:78dc:321c]:3025"]','2023-06-11 14:16:56.951313537+02:00','2025-02-05 18:08:07.927964234+01:00',NULL,'100.64.0.7','fd7a:115c:a1e0::7'); -INSERT INTO nodes VALUES(8,'mkey:3fa0a7da3e9f0eecb11b442d911620b629981f5eb3f4ee851256a55b7252a1e7','nodekey:1912a3b843ba973c55b449fb0e1e01faff114b440e7bd1ca8b9af3be0cc56762','discokey:0d4f62fb22a9ab554a8983123cc118ec17b9d47c97d1d84bf082c90f82fe267b','lt-35','node008',5,'cli',NULL,NULL,'2025-02-05 18:07:59.389906976+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["124.104.237.237:47957","[b9f9:eec0:d656:cd83:8595:46fc:b677:6ade]:17494","126.251.133.86:44347"]','2023-06-11 19:59:30.401970393+02:00','2025-02-05 18:07:59.390393378+01:00',NULL,'100.64.0.8','fd7a:115c:a1e0::8'); -INSERT INTO nodes VALUES(9,'mkey:dc84e5b1cf58ec8f7200ec0a655c1d5843c520ea59aa7ddd111ad5c9beba9bd8','nodekey:3c06f5a6d7f267023044ab632892b51f5af1ce4d84f0739e88823bc22cda6c57','discokey:3c1aa008a7811012038cd6658b6f290b3d252074c47d23cf1766f878340a30bf','desktop-75','node009',6,'cli',NULL,NULL,'2025-01-31 13:55:16.93378169+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["187.166.20.165:11208","78.47.168.253:34957"]','2023-06-17 19:40:45.468789461+02:00','2025-01-31 13:55:16.976650183+01:00',NULL,'100.64.0.9','fd7a:115c:a1e0::9'); -INSERT INTO nodes VALUES(10,'mkey:2325e928fb41b3a3815b7635223411100ba10eba8ea1b442838b40da35379ec9','nodekey:8654f75dc553fbb381f57fd3f8669b5c2572f65d97989d0a69c3b294bb8cc685','discokey:24037160babf3208537006968c9f47ec3e689b1873796a787eaa44fa95afd986','desktop-46','node010',6,'cli',NULL,NULL,'2025-02-05 15:46:53.720788609+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[97a8:bbb7:9bb9:aa2b:e449:4d57:4a65:3561]:17392","151.97.159.85:8120","[69bc:fe73:f6c7:31b2:bc4f:1a95:4676:201d]:4806","[d794:c918:b0e8:6023:d7cd:709:79eb:23f7]:56801","113.147.88.203:61138"]','2023-06-20 11:18:35.905417341+02:00','2025-02-05 15:46:53.721181098+01:00',NULL,'100.64.0.10','fd7a:115c:a1e0::a'); -INSERT INTO nodes VALUES(11,'mkey:4fb69f4bf77b8159e4814764398e6cd3bf2067b3fd4eda0eacc443ec299fcde2','nodekey:cebae8af02150eeb54b67f2cadc33dda38a98c94cfec05c1f378bf130007e946','discokey:fd5c9fefc960538a070c2bddafd53c3f890c561e02b62218e4aa8e6ae38cbe4f','email-12','node011',7,'cli',NULL,NULL,'2025-02-05 15:58:04.837157467+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d86:4301:d7d:7706:8b4a:86bf:19cd:1d6]:55529","151.169.147.218:56573","[9e9e:2802:ebb9:ba03:a564:7999:7137:65e3]:52919","[ede2:462a:f99c:24c3:b1b5:8b97:abe2:3ddb]:65141","[dfe7:6f6b:a382:4fc:a3f6:eb9:9ebc:5798]:1474"]','2023-06-20 11:35:15.063855316+02:00','2025-02-05 15:58:04.83726028+01:00',NULL,'100.64.0.11','fd7a:115c:a1e0::b'); -INSERT INTO nodes VALUES(12,'mkey:5e192b1b323439fcc3b921c4f523ce9b7cce867220bd092d23be2451ce93a4a5','nodekey:1452cb2c8f26c8988cd8d8ed42957c78e3ab922bdf9773978c853e4254e23fbb','discokey:b7b41dc5de1fdcc4bd609dc72f79d27269367dbf72ab73ab61e998aa31eac6a2','web-19','node012',5,'cli',NULL,NULL,'2025-02-05 18:07:24.629305293+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["60.5.210.145:1193","[5698:e318:b38c:4331:5152:c2df:743f:8ead]:34839","33.51.84.119:46756"]','2023-06-20 18:22:35.061914624+02:00','2025-02-05 18:07:24.630116588+01:00',NULL,'100.64.0.12','fd7a:115c:a1e0::c'); -INSERT INTO 
nodes VALUES(18,'mkey:f11332335d9f1f9d4bfc8cf9bf4edc0c958d4e4364b7dcde3c67d847f7d05edb','nodekey:e97d97a2751ef3533818907c9f24acf3270e3824f2294bb9a36d60ca196e161e','discokey:412b0eee00b68721d9169fc81039b9cda6cf57143e4cae8f7c417db15b83411a','web-18','node018',9,'cli',NULL,NULL,'2025-02-05 10:23:02.664643604+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2acf:3f32:1763:5d21:88a:4af:8779:1b1d]:36283","84.177.251.38:32968","[3138:83c6:ae09:a073:7888:71e0:494:3863]:59700","191.156.96.176:5489"]','2023-06-22 08:30:54.08720463+02:00','2025-02-05 10:23:02.664955259+01:00',NULL,'100.64.0.16','fd7a:115c:a1e0::10'); -INSERT INTO nodes VALUES(19,'mkey:157706346bfe3502b6e24a84efa59b1a151be08f5a260ff96c72a734c1da1390','nodekey:139a99de286f52c55bdd61e932827c9d7cb84101a698df3576bdffbeedbf654d','discokey:293abc1a1cf2921e7961f5753cc39097956648bab47879791afb12ea03a4e6dd','lt-42','node019',10,'cli','null',NULL,'2024-08-07 09:35:12.220368767+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[ce0f:9f05:96dc:a47d:edcd:1c25:f578:7551]:30363","[cc06:9c56:919a:2341:4ef7:7f28:c8ca:2a5c]:30788"]','2023-07-03 15:18:03.829428454+02:00','2024-09-18 16:15:14.037990869+02:00',NULL,'100.64.0.13','fd7a:115c:a1e0::d'); -INSERT INTO nodes VALUES(20,'mkey:e4713082f1a4d5e2f5ca126b95eedc7ce673782a871afde980a74aa98e31273a','nodekey:3957decaff55af98415799c81e6cec28edc377d6e2a9ddca3c909c26bcffd45b','discokey:9e8360b263f67dc8d5dea9d37f023ebbbb8882f265f6ca43f5923194d7931609','db-08','node020',11,'cli',NULL,NULL,'2025-02-05 08:11:36.462419884+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[4692:f8a0:842b:c26a:e549:1f3b:6795:964e]:31064","207.187.74.99:12216","63.3.67.150:22907","[77ac:562:7c4c:438a:6ec:be48:743d:47b1]:11349","[33cd:731c:f110:9b45:66b5:de81:1daa:757f]:5997"]','2023-07-10 12:40:34.838579199+02:00','2025-02-05 08:11:36.462522474+01:00',NULL,'100.64.0.14','fd7a:115c:a1e0::e'); -INSERT INTO nodes VALUES(21,'mkey:2894a7babdc030bac1e4ec0e14aa4b6b67ebe382a1773e0ff4aa8b16ecd443a3','nodekey:1e219f3cc03eaa868018b9f6870d108317c31760fdc38ad4bec7200d58379f26','discokey:a81e998383baa23adb3e1e6757ef0c4b960095022f00b500b57f53c6147c71e4','laptop-48','node021',11,'cli','null',NULL,'2023-11-20 07:19:19.447470862+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[736d:a0a:e343:6b2c:b48b:7510:ace2:edc]:60377","82.193.154.129:23713","[68:80fa:7b30:3a0:d1fb:3e89:2c73:256b]:30308","[ad42:3917:5edb:3e85:732:1c71:a19e:9fe0]:11799"]','2023-07-10 12:42:43.290469734+02:00','2024-09-18 16:15:14.03886353+02:00',NULL,'100.64.0.15','fd7a:115c:a1e0::f'); -INSERT INTO nodes VALUES(22,'mkey:9ab69494a274cba5b22cbcf35b3ada57d4b3d2ee03930a456f80fb34dc69064b','nodekey:0345912b4e39cd9d03dec55edec6fba7f8aca631312884b49538550475332182','discokey:a2c1be2ae50af07e5b96b0a3ee2d3a1b0148fbfc5c1a283bc787e31410d6f74c','web-51','node022',6,'cli',NULL,NULL,'2025-01-31 13:55:08.655562681+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["9.237.225.227:64622","[46a1:e1e6:6b82:6599:acc7:2273:afea:f09f]:38018","93.129.193.97:13477","184.41.32.72:1026"]','2023-08-05 12:08:48.132161695+02:00','2025-01-31 13:55:08.655891626+01:00',NULL,'100.64.0.17','fd7a:115c:a1e0::11'); -INSERT INTO nodes VALUES(23,'mkey:d1b90c1c72090d082067c2dfbf287f26510b58519bbc5113f1ca63ad3ef20b96','nodekey:2d0d6f2300d295c8f91d0b954d733a2f8c4106595070d3efc6eb709df09591f6','discokey:87ef4543866a337716d7d5021346b82bbbd1aab14c36deadf96b4c38718e8cd1','srv-45','node023',12,'cli','null',NULL,'2024-12-07 19:25:56.935152754+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[79e6:8519:803b:6054:5c13:9069:8dfa:1dcd]:45055","30.154.162.65:53795","21.107.170.69:30767","212.18.227.41:50735","[f774:e89b:980f:975d:fd3e:93f:a9b5:2034]:46054"]','2023-12-15 08:05:56.592241745+01:00','2024-12-07 19:25:56.935317645+01:00',NULL,'100.64.0.18','fd7a:115c:a1e0::12'); -INSERT INTO nodes VALUES(24,'mkey:f10b7f9f385e48dc99ec0944b8a04ec4d3e5613587419c134440121fa747ee75','nodekey:9559349d25dd3fa88b9595cb4ccebeeb4712338757a209e936afe37ec2643dbc','discokey:b9a07315c1eb2e074871feebe899bd2ecae64d98a8edf08883785130ef386728','laptop-25','node024',12,'cli',NULL,NULL,'2025-02-04 22:10:50.44476731+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9e60:1bb7:b542:2f50:269f:5ef:72b6:f4ba]:34677","159.52.2.200:7205","[5b7b:e424:ea77:aa8a:f835:d6bb:2e1f:52ea]:19152","[bb06:d322:7cec:426b:ef38:55d0:e7ce:95bc]:31597","[320d:6314:eaa2:a098:8594:977d:27b:8442]:12827"]','2023-12-15 11:14:36.765183054+01:00','2025-02-04 22:10:50.444923245+01:00',NULL,'100.64.0.19','fd7a:115c:a1e0::13'); -INSERT INTO nodes VALUES(25,'mkey:84784f348b4e879e87986ef13b6f02ddef91639cd7de65540903cd846f93e1f2','nodekey:c15dc98ef47cf6f5dd52f59c22b3b6b0d8536ea5d248787a0145b45e6598cff6','discokey:c14e1be497f1423b4f4aa799f8ee907307eec987a1c3f7e32d4adf215fa731c6','srv-70','node025',6,'cli',NULL,NULL,'2025-02-05 18:05:52.512447937+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[23d7:1ab8:8bd1:10f3:a4f7:bd70:7803:773d]:50718","[3200:76c9:dca8:d60:e5d6:868e:a264:3efa]:33138","[975a:978:5cfb:5e5a:1b6:d272:a1c1:50dd]:3245","[e6e4:7cfc:9ce:b47:62f5:b64e:d2e4:9015]:42369"]','2024-01-05 17:32:40.940566279+01:00','2025-02-05 18:05:52.557755672+01:00',NULL,'100.64.0.20','fd7a:115c:a1e0::14'); -INSERT INTO nodes VALUES(26,'mkey:dba47da985e4770a27031261b989e24ddd437983652bd1f96cc520ed78c5fc0c','nodekey:3c36d35f3e9ec98b36741e5973d8f5542fd82503a1752ad9e20545fb6be984bc','discokey:b30567e26d2fae861e566c4081179d8c72f9e659521dd7e6afecb3fbaaeae63a','db-02','node026',6,'cli',NULL,NULL,'2025-02-05 18:04:25.764764072+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["192.175.239.121:921","98.153.121.140:5003","208.68.161.128:56198","163.42.128.241:47535","49.160.22.202:42427","[1526:e34a:8857:394e:bbe0:c043:4b37:68d4]:5245","32.31.195.128:7039","[c50a:ec:e2e:e109:3a08:f56b:2928:4921]:61265","221.147.144.138:44737","136.208.142.64:41091"]','2024-01-05 17:34:19.811670479+01:00','2025-02-05 18:04:25.765166307+01:00',NULL,'100.64.0.21','fd7a:115c:a1e0::15'); -INSERT INTO nodes VALUES(27,'mkey:73868242348c7d709b77f52963c32eb4b95c8b6f89abb5db76bd95cdaaa538b7','nodekey:2de3f421f78de3f5cfd824ffaa80cf3fde3060bc42f51c2ee2f55b49fd5fbe20','discokey:5ff3b0ad430ee5530c972ecf0d3e69f25eecdadf6ef8a309b3841d7aaa6f70d4','web-56','node027',6,'cli','null',NULL,'2024-01-16 14:32:21.570104175+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a2b8:ea55:f9e3:2c00:f423:8c56:7fec:c617]:53793","[ed88:2277:3c9a:4f64:a25e:5b41:a3ca:e904]:27852","[d203:40a2:5efc:4a42:8910:81f8:59c6:4d8d]:23389","187.126.162.12:25054","[5d8b:9391:47f1:1bbc:e01:92de:4a90:7614]:42410","172.81.188.131:23453"]','2024-01-05 17:48:25.466030859+01:00','2024-09-18 16:15:14.040766068+02:00',NULL,'100.64.0.22','fd7a:115c:a1e0::16'); -INSERT INTO nodes VALUES(28,'mkey:300a566a21fcaab2665dc091300e2c5110dcaee25e2fcc4cc23b011fdae471e6','nodekey:8e36531b406e0b7b9c1b9abadc00626ccc8c6e7a0c37f70f42ccf06cd2dd03fa','discokey:a4134738054f6fc0c57b6eb47913e68a3f47ffb81af8a3496754921c2fcb02f8','laptop-98','node028',7,'cli',NULL,NULL,'2025-02-05 
17:14:09.968775535+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["113.143.9.36:55432","[e266:66fe:4929:3831:7830:6f10:1572:20df]:9977","[afe5:7d57:fac5:a1c5:78d4:ee33:d474:1e56]:52223","7.208.95.117:299"]','2024-01-15 09:34:54.847632697+01:00','2025-02-05 17:14:09.968981413+01:00',NULL,'100.64.0.23','fd7a:115c:a1e0::17'); -INSERT INTO nodes VALUES(29,'mkey:6d3edcfd40971c25174fa55eeddf76d05447d7d28f216c6118d8d6e35aa1ae89','nodekey:4b7d7ef564535f114a410c384aa3eb95103b5895cfd74962c7d37d3a8b0bb6ed','discokey:569e08cefcaec4d1312b70c41c12d82b26712965cbab41ac3ca5db857274ba46','web-76','node029',7,'cli',NULL,NULL,'2025-02-05 16:04:20.309612735+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[24c5:f1ec:38a5:5a2d:7224:b76b:cd04:e6d2]:63943","162.176.178.140:46577","[4f52:1708:c0a:7991:9c16:bd7d:4045:195d]:50362","[fd0d:f112:f642:c343:ac14:8ede:dc04:e2d9]:58412"]','2024-01-15 15:18:12.2871978+01:00','2025-02-05 16:04:20.309824144+01:00',NULL,'100.64.0.24','fd7a:115c:a1e0::18'); -INSERT INTO nodes VALUES(30,'mkey:edae4836dee6e165a4201e0d106fe5db539aaefd4fa95e6f660cb16457bab645','nodekey:59e8b5e492cdce7b05498db07e621fd6f326f8a6141751bfda409b2e1e5e56de','discokey:bb6fff0c0615c5aeb083d7175b6838f6c2cd22634ff3717cd227063709e243c6','laptop-44','node030',7,'cli','null',NULL,'2024-02-05 12:14:40.065688294+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c204:d59b:29a8:ac18:75bc:1a4a:755e:a8f2]:26579","35.41.214.64:38835","143.202.189.84:43171","[8679:8c46:1c66:cd1e:95f8:2b0a:503c:819b]:57389","107.115.138.42:41616"]','2024-01-15 15:21:43.217136004+01:00','2024-09-18 16:15:14.041757308+02:00',NULL,'100.64.0.25','fd7a:115c:a1e0::19'); -INSERT INTO nodes VALUES(31,'mkey:fbebb2c69f263a519c8902463ef9c7db0597109fae5730a222f1c6567278bc86','nodekey:d83666c0acac1a9cd306cb884a1ce24adaabbc2102eb553ef5514ac5ea1714cf','discokey:6f4280bcb38ac995d22543db32230d8f22f2c0a0dc5efe2d86c7361c897ae812','lt-82','node031',7,'cli','null',NULL,'2024-02-22 08:35:27.098819037+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[4327:29dd:a02:fb00:9645:d497:8ffc:4a68]:26342","[5d4a:a2f2:16c7:3d3a:a2b8:2962:30f9:3407]:5279","107.140.203.49:9365","[1660:741a:160:67c9:7972:5f9d:14bd:ee53]:21178"]','2024-01-29 16:05:35.338524634+01:00','2024-09-18 16:15:14.042191514+02:00',NULL,'100.64.0.26','fd7a:115c:a1e0::1a'); -INSERT INTO nodes VALUES(32,'mkey:8c468799222af319e17ec309dbbf3a87fb6d924a1cb17e5a0bd7e027fe3bfe5e','nodekey:3c7d82f4a44cfa1469b957ac911e8506a68962fb2583d2afaf3e8a621c0c4aca','discokey:ba179b51f3540e619eaac9a6d4bc67403daa7c3cf28a6b579087cd6d284501b4','lt-53','node032',7,'cli','null',NULL,'2024-04-09 13:59:43.37062537+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d21f:d7bc:f690:e3ba:82fe:8122:c33d:5a86]:227","10.67.113.92:45634"]','2024-01-30 10:41:58.31917869+01:00','2024-09-18 16:15:14.042506082+02:00',NULL,'100.64.0.27','fd7a:115c:a1e0::1b'); -INSERT INTO nodes VALUES(33,'mkey:be7f190677fd0cd1c5e78b4ae5b02a1738dd60be7d9a41017000b22ae9636af4','nodekey:a404e578d3edd3e3d879e9f1946c859b5982c59f5e528bf6b987bd55de00b25b','discokey:ad3013f116c370866fc9134b66999bb05e2dc10a8e2944230c3d9893c0a94c67','lt-80','node033',13,'cli',NULL,NULL,'2025-02-03 21:25:19.620235627+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[af8:f5e8:32f0:75ee:d351:c0e9:78e:93d9]:12624","71.151.27.31:51063","[f0d6:21dc:48f5:63f4:daff:a3c4:7d60:51ea]:17286","[38c5:9551:f6ed:b087:91ba:bef1:771c:273d]:62171"]','2024-02-03 16:33:26.706408143+01:00','2025-02-03 
21:25:19.6205373+01:00',NULL,'100.64.0.28','fd7a:115c:a1e0::1c'); -INSERT INTO nodes VALUES(34,'mkey:7784df3aabe1ca1edab5270028fb3cae8b66cee2384790d06969d1e6e298520f','nodekey:fd8e939fd6db8caf6623ac3d73c12ade4da110d8c1f29e3a3ee0d2be0c526915','discokey:59aff4582e00d92260e6a90d6d0da3d818fe8f52c081c8b65e18006ef8269ebe','lt-65','node034',13,'cli',NULL,NULL,'2025-02-05 17:49:20.10044606+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["71.155.69.236:8707","125.2.67.155:50785","32.6.80.181:17295"]','2024-02-03 16:42:32.683785672+01:00','2025-02-05 17:49:20.100741067+01:00',NULL,'100.64.0.29','fd7a:115c:a1e0::1d'); -INSERT INTO nodes VALUES(35,'mkey:ad3838fa63fb120ae8d3e47d4bb6bb4009e7fc5d91483d1ba7417d0681fd7162','nodekey:b10ddbdcf826d79049d74a7d4ad036b499c86f7daf786bb1a9b295366b57afe8','discokey:618ef629d3bd261964c613ad3bc45e6278fa274d734c8cd2d3396cf8c9742d88','web-33','node035',5,'cli',NULL,NULL,'2025-02-04 17:50:49.267594119+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["169.159.71.157:18240","[bbb7:5b4d:218d:55ff:ac9b:1fb6:c933:9a55]:63943","18.156.132.228:63356"]','2024-02-03 16:51:52.010016072+01:00','2025-02-04 17:50:49.26844864+01:00',NULL,'100.64.0.30','fd7a:115c:a1e0::1e'); -INSERT INTO nodes VALUES(36,'mkey:c536a3efc460fb2da4c4775e2b51d0c84f001dc46a4d902429e74e97b2d3c6d4','nodekey:91da2acfa3d7f7c7b0c1a2d952ee527514504fddc504809423762203a16145ab','discokey:989ff5f21d485dcb6312fade38c9047bbde1fbce46bc29ac5f6afe00610ad215','laptop-46','node036',13,'cli','null',NULL,'2025-01-19 14:01:48.956567669+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[4017:a243:61d0:b302:7860:18f:7d4d:f605]:28633","[e5b8:6c78:ce2f:cf6f:13fb:401f:8223:3f04]:54121","38.137.88.247:35801","[eec2:61c0:e20a:2fa0:5e4d:56a4:5071:881b]:36628","[b8c2:c792:3329:2c93:f94b:a948:7d11:b9f2]:908","[5925:353c:bd9a:f42e:1916:45b4:c170:e515]:18400","[c2a5:cc2f:e2e8:573f:7b01:4aa:8a1a:fd71]:32146"]','2024-02-09 12:34:57.879970954+01:00','2025-01-19 14:01:48.956830267+01:00',NULL,'100.64.0.31','fd7a:115c:a1e0::1f'); -INSERT INTO nodes VALUES(37,'mkey:142e67963c9045164df7b33af6d2149d72f59527df44fd75221647281444cbe7','nodekey:a3a2cd2bd0c3a1f783d0391ea1951c5c9226ddca0f18130b77fccff8b874d694','discokey:bdb788f8825b9bd816f37dcea882920382c7feb143edf0a6baefc9015774d8aa','lt-24','node037',5,'cli',NULL,NULL,'2025-02-05 18:06:12.650485457+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c31d:dd18:321:589:11fa:d85b:695f:1fca]:43064","168.4.78.117:48474","90.45.218.0:29326","90.233.46.138:29846","8.137.133.59:22781","[cb5c:f8d4:ccb0:8333:8ec:170c:a2b0:941d]:26248","[5802:df5c:8853:4851:6dcf:cb83:8208:d143]:24568","48.38.104.41:43702","139.100.200.227:62546"]','2024-02-27 12:14:40.452601042+01:00','2025-02-05 18:06:12.651261596+01:00',NULL,'100.64.0.32','fd7a:115c:a1e0::20'); -INSERT INTO nodes VALUES(38,'mkey:6829f2d5523aba7a27c31e59e2341f3feb5a5a72a0268ef10d23c39811ea3b53','nodekey:10e4ff66fdd37fb54f6dcb0e42f483a15b6340610acc3a4517c24f0f350b5a9a','discokey:f20786f9c7edc967d0b415bbcce39468441af3b531ff15656c0683d3f328720a','lt-88','node038',6,'cli',NULL,NULL,'2025-02-05 16:57:29.735032128+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a393:f17f:6728:82e4:c1e7:2367:82d0:daaf]:18560","[d192:9897:4cd7:bfb5:de36:fc03:8cf4:c60d]:25096","212.199.67.213:11150","64.251.26.199:56246"]','2024-05-22 08:08:16.045350656+02:00','2025-02-05 16:57:29.735397461+01:00',NULL,'100.64.0.33','fd7a:115c:a1e0::21'); -INSERT INTO nodes 
VALUES(42,'mkey:374ff35e7639239dbd6f70c03d04025febb0a7050cfa9d065dfb17233d200b06','nodekey:46ee597a17ca7641b7e450e55eefbfe243413ca76dc9c8e1f9d5df6e00e5abb6','discokey:f258814fe446b9599b1e601a6d51f85ba03cb6ee8154c64c5df9c7101cd34291','db-40','node042',14,'cli',NULL,NULL,'2025-02-05 06:03:02.17883855+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["109.16.250.249:16050","[5ea4:3410:eaa2:9da:b8d:b61a:2d9d:edaa]:3526","209.107.206.184:46974"]','2024-07-03 11:12:29.418355657+02:00','2025-02-05 06:03:02.372767209+01:00',NULL,'100.64.0.37','fd7a:115c:a1e0::25'); -INSERT INTO nodes VALUES(43,'mkey:1099debb0ba187ef1f43f001c75abc118eaf017bf3bd6b48ca333fb046b98de9','nodekey:d49ef9ad4a7ecbdabcd24cae92d9ac32fbb765467cbc20abe61b4bda6ef15f1f','discokey:9f2faf39514eccb7f3d7d66ebcaf804474f75284d8b1aedbf97aee6ccdf47ed5','laptop-84','node043',14,'cli',NULL,NULL,'2025-02-05 06:03:09.741784038+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[183d:f2d6:b80:36c5:318e:64e9:7f4a:b389]:12440","169.173.198.12:15274","[e5df:1889:6ac5:7d21:4dfd:614b:7eb9:d93c]:792","119.123.40.237:58615"]','2024-07-03 14:48:50.263910778+02:00','2025-02-05 06:03:09.743853396+01:00',NULL,'100.64.0.34','fd7a:115c:a1e0::22'); -INSERT INTO nodes VALUES(44,'mkey:bbcd4d386de509eaa162b9e6b7ecb5896bd2af74d6dade72463343b98ecba03e','nodekey:9b6dc646c9a4595a7e073a409a79d76b2624d3516d42eb36a5a2d34f5f8fdb69','discokey:109a9fae4a89679b17a8604d4dc8a07c172187683402c999927585ea28a40f4d','web-17','node044',14,'cli',NULL,NULL,'2025-02-05 06:03:08.642240566+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[797c:753d:f8fb:3108:8b9a:2bca:65a2:459d]:6713","[699e:74ba:cb1c:93f0:cad7:784f:b43:a4b7]:19825","[acd8:3779:7b6c:3a71:c15a:e323:a721:c9a7]:63377","[6816:3377:edb0:3f8e:a65b:bd60:461b:ceb]:24642"]','2024-07-03 15:23:48.066044194+02:00','2025-02-05 06:03:08.823204997+01:00',NULL,'100.64.0.35','fd7a:115c:a1e0::23'); -INSERT INTO nodes VALUES(45,'mkey:f8ddd3a46ff292d9d1c00ad78526306ae1182b38e7cfdef477c4024297683495','nodekey:37bd7c1295c38de87a979ade273598b47023217e57ce45af4b02429bcab81db4','discokey:9b89e8e84da2d01590108b6f83f4f0479abcdfac8791b5da44dd2543299de31b','desktop-11','node045',14,'cli',NULL,NULL,'2025-02-05 06:03:08.197696258+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["120.102.247.255:41936","28.177.95.32:7841","[fb23:5998:c63d:b6c4:e86a:96b2:61c:e110]:51902","[89d4:f6c6:ccf6:b19d:3b76:642e:7a36:d229]:930"]','2024-07-03 15:54:01.706018896+02:00','2025-02-05 06:03:08.241135037+01:00',NULL,'100.64.0.36','fd7a:115c:a1e0::24'); -INSERT INTO nodes VALUES(46,'mkey:af6c1d228b675ff04d2584e2c23d6288ec312a6b44c5c0eed213423a11711787','nodekey:f5b21f25bb22b5cf0cedb1b643d50c3a6512ef300cc971eccccbcc500cc23408','discokey:72843a1b2564c177d8556eb0d76dda7cbe0d9f1408bd250fcc6149518eb6d5c0','desktop-41','node046',14,'cli',NULL,NULL,'2025-02-05 06:02:55.262487942+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[37a8:9998:e94e:e297:940e:5414:5549:87c7]:37130","27.138.122.133:37304"]','2024-07-03 19:38:07.783745318+02:00','2025-02-05 06:02:55.262915688+01:00',NULL,'100.64.0.38','fd7a:115c:a1e0::26'); -INSERT INTO nodes VALUES(47,'mkey:c3ea2fb606807d06b1ed3e10219fee70456d1db0208d2e73a081a258992e4316','nodekey:a246b6e4b6a19ab09836e54c6fe7d5da442c2f9097ec6eea01490500872805c6','discokey:e16c22490ad1b9cbbafc0dbb4d4596f19593bf528d69bba3bb37e643aa82023b','email-63','node047',14,'cli',NULL,NULL,'2025-02-05 06:03:02.549111852+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["140.167.197.208:6850","[b895:b3f0:c9b8:845d:3f3a:e534:53b0:bf90]:2433"]','2024-07-04 10:38:08.344092869+02:00','2025-02-05 06:03:02.55637113+01:00',NULL,'100.64.0.39','fd7a:115c:a1e0::27'); -INSERT INTO nodes VALUES(48,'mkey:8fede09a569258fc0fd7ebfbd12b2b4e8c2f4ce95c43ab0d89e41ab3998e6d24','nodekey:63af6498d4fb0e1e8312e3ec6a1639b178c051a271f33f213324e3ac1af53e17','discokey:a1e793aa09c20360cb4ef57e7dc42171a0abbc9f562140609e930ba5b7b12e83','email-92','node048',15,'cli','null',NULL,'2024-10-20 13:53:33.831192385+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[33c0:859c:4a78:8bdb:359e:31e4:1e71:7e39]:50383","[3aab:52a2:b9d3:1816:5627:336:6c60:57d7]:13486","22.83.180.41:14147","26.126.135.206:7777","[8706:d109:5006:f832:dca7:58e4:4451:be15]:15180","177.4.27.29:60895"]','2024-07-26 08:09:56.608302315+02:00','2024-10-20 13:53:33.831387627+02:00',NULL,'100.64.0.40','fd7a:115c:a1e0::28'); -INSERT INTO nodes VALUES(49,'mkey:13b8d4cafbdb8e9ab77928219aa57bf778878866ad8e6108b60aaab8dc8cec36','nodekey:17cefd68f8a9cc20b3d4fd93598f77ddfd6bd5a7e6e4b49861f9faad8623e0ef','discokey:939a04ddd2f9a24468e07ee7924dbb7a7662c1e8a1b67426f57fd48c7394448b','desktop-40','node049',16,'cli','null',NULL,'2024-09-19 09:07:18.28136023+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[19b0:8be2:708:d27f:4d3d:3c93:9979:e9e8]:50438","129.0.5.112:19015","[91e2:54c6:93fb:ec32:6ab4:186d:81cb:b815]:37323"]','2024-08-05 17:32:41.937626584+02:00','2024-09-19 09:07:18.281618912+02:00',NULL,'100.64.0.41','fd7a:115c:a1e0::29'); -INSERT INTO nodes VALUES(50,'mkey:03979d5446bb3f596bf0234d81ca84482eacb132e874ceda6cc2703422cd728c','nodekey:805c03cada525f14ee90d28ea02b23a3f57b470c793314872ad9daacf6c6484e','discokey:eb4135e1cc73a3c6b513d8cc04ca0c55a2aef7b0009845fa1d2036488aadf0f8','desktop-02','node050',10,'cli','null',NULL,'2024-08-07 10:10:06.595550455+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9175:498d:64ac:4342:2eac:ffa1:99fd:d109]:21202","[5cba:56ef:db08:320c:333a:541f:605c:44d5]:57469","150.211.2.236:50360","203.180.122.83:13344","63.162.234.32:11653","[6b63:f552:f930:e22c:fc42:3b41:c5c6:6d5f]:63162","114.48.12.185:47837","[2964:cc5d:3a1b:7889:acc8:4d8:22c1:398a]:43515"]','2024-08-07 11:50:54.144157179+02:00','2024-09-18 16:15:14.050033969+02:00',NULL,'100.64.0.42','fd7a:115c:a1e0::2a'); -INSERT INTO nodes VALUES(51,'mkey:5877139c52c0bf3c0fd054bd046aaf4530dd8946548a7bc1a06733649d1557d8','nodekey:553d9638c8c455e8d1bc811ea3af03db1eb3c586413447c8003e7d905a61d06c','discokey:1da883c9eff402d786362aaac2f478c835308b3acec0993e6293becf4474609f','web-62','node051',14,'cli',NULL,NULL,'2025-02-05 06:02:31.63336842+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["86.215.174.228:26026","93.101.159.155:64005"]','2024-08-07 14:19:31.156780417+02:00','2025-02-05 06:02:31.633461601+01:00',NULL,'100.64.0.43','fd7a:115c:a1e0::2b'); -INSERT INTO nodes VALUES(52,'mkey:0e64072dafa5f9e49266b41fc4b21ff7d287be8f4e5f2b6c395487d632f1cf3f','nodekey:2f77bb8cbeb426bcded5ad9fc6da44b0b76b8d6759f10a8db28e634290760ee7','discokey:836558738056edd1fa8d190c7e42f120b365a48a3b1f1c96c87755bb82d7ad0f','desktop-07','node052',17,'cli','null',NULL,'2024-12-25 17:27:58.515851096+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d889:f751:710:257f:977f:5597:9908:14e0]:61162","[f83a:5ebb:911d:ef36:743c:8086:8a4c:a5c7]:23538","[dc45:365c:e426:71d7:fe23:fdd4:d2c2:9409]:57262"]','2024-09-22 15:48:41.385301399+02:00','2024-12-25 17:27:58.517153789+01:00',NULL,'100.64.0.45','fd7a:115c:a1e0::2d'); -INSERT INTO 
nodes VALUES(53,'mkey:a6f93e99a5019b3b6a2941d95dc7fcbfea5a59317a48b99db93b7a3f4f0a65bb','nodekey:1f78a6d76ea4a2135c0da46b202560c86ed30cd7e8239d4508088d0e02ffb961','discokey:1744062452f052aba5320e37fc08aaa37f1430db7c32c94c5626f0f3c858de3d','desktop-82','node053',17,'cli',NULL,NULL,'2025-02-05 18:04:29.018269861+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[6084:ee60:1697:864e:4a61:cf6e:e9b6:4c05]:63961","[fad7:2601:d286:4772:3cce:6e89:f66e:9eeb]:5430","216.60.124.11:45041","[d513:c472:d7a:f316:f610:10d0:9851:4feb]:3955","156.228.105.157:61542","102.126.185.0:50694"]','2024-10-28 10:04:50.084492941+01:00','2025-02-05 18:04:29.01861254+01:00',NULL,'100.64.0.44','fd7a:115c:a1e0::2c'); -INSERT INTO nodes VALUES(54,'mkey:7ad9c5af6111cc3286256127e6b6a939f1fb87e08c9018bd7bd8671c4b9cefc2','nodekey:84fec35781c2032d05c7e8524bada9f7242165e9ab5df5d66f993266cc6f090d','discokey:ad9158862b9c58848c37b519543415166938b6be3dfe3a8463842ebbd09eb00e','srv-14','node054',14,'cli',NULL,NULL,'2025-02-05 06:03:07.205609421+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e669:9684:b8bf:9a64:8ec7:bec:6369:4412]:16090","23.0.29.75:24889"]','2024-12-09 17:10:55.363593066+01:00','2025-02-05 06:03:07.20590574+01:00',NULL,'100.64.0.46','fd7a:115c:a1e0::2e'); -INSERT INTO nodes VALUES(55,'mkey:ae8c967065558f9f03a95979d2181b3aa2b3537c25a894046a65554beaf553e7','nodekey:bc9cbaeb5cd70e708b5a23565ffb1fb6e1c18f35b1861083196209dcc1e0e20c','discokey:f8647d1a8ab876931f49a8abf785d429f7f21bd1fe6317528bfbf78e38d7aafe','db-73','node055',18,'cli',NULL,NULL,'2025-02-01 12:07:16.769615426+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[caa3:7372:5199:2416:357:7b48:3127:85f1]:21026","[e2a5:db96:e7e3:6335:8bdc:90b4:40ff:920d]:28128"]','2024-12-10 13:56:39.287449662+01:00','2025-02-01 12:07:16.770515926+01:00',NULL,'100.64.0.47','fd7a:115c:a1e0::2f'); -INSERT INTO nodes VALUES(56,'mkey:b3558cd3cf69119c2015a9cb0a87678b00231eb52b4a2b9ad93b92d7019c54da','nodekey:08f90ff45aa02d7e6b1c8bdca99e8560b70ffc7e3baafeffc3fee5477696602c','discokey:48a4765cbbf46b7f990fe03b6ceb2aefbd78dc18c462935109c51e2a3ba309a9','lt-51','node056',19,'cli',NULL,NULL,'2025-02-05 18:07:11.261451892+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f0ea:121a:661e:b1e0:52ec:5313:f630:8bb5]:55696","[bfb6:c553:e032:b418:f3c9:c0cd:9e44:f135]:60965"]','2024-12-17 14:58:39.429211911+01:00','2025-02-05 18:07:11.274103693+01:00',NULL,'100.64.0.48','fd7a:115c:a1e0::30'); -INSERT INTO nodes VALUES(57,'mkey:5bb86c2f730c0b247b01634f73a4f67a19bf271de3ee39cf13f879db671da5b1','nodekey:64f016a04093cc91f7d2ca3639a07a17ae8f95f246a7ad759141d598136bd060','discokey:066b4845ec135e6251d6ef6119c832b46e2bc310332d3ea836c5f39896538b38','lt-49','node057',20,'cli',NULL,NULL,'2025-02-05 14:17:30.564397966+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["128.102.246.147:15968","[d0d3:beb7:361b:71d1:af88:4105:6b5f:343e]:14598","155.200.222.111:64410","[b554:3604:41e8:60a4:229:a09c:efd3:c73e]:39533","44.230.54.249:64083","29.229.203.183:13","153.68.228.171:36559"]','2024-12-17 15:17:14.26936913+01:00','2025-02-05 14:17:30.564974331+01:00',NULL,'100.64.0.49','fd7a:115c:a1e0::31'); -INSERT INTO nodes VALUES(58,'mkey:557cba9878553b95245e4880d662200c70ab60ec53f2a47a48a71b5a3455bf39','nodekey:cf788f8a13f031bb446fc3f12a0120a4b0b89d8918d0ccd7a1b089c600e75b65','discokey:2ca62c3163c1aa5ef0585d1fe67ead83578f835a9ccbc86f64fc13102df0b4c5','db-41','node058',12,'cli',NULL,NULL,'2025-01-25 18:41:03.881898904+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["29.219.180.224:55412","[2ddd:7c1b:a237:41c8:3baf:e9cc:1f31:9057]:62916","[9c34:190d:7bf0:c940:18d6:f96c:9aca:8c0a]:8787"]','2025-01-17 10:17:23.455895657+01:00','2025-01-25 18:41:03.882180987+01:00',NULL,'100.64.0.50','fd7a:115c:a1e0::32'); -INSERT INTO nodes VALUES(59,'mkey:63163f3f5e4b511affa1046328d92add5e5c1e8f883bc851f6853517acc11e8f','nodekey:71f9f812a4b177da766a6a93ada6de6bf41a7f143e4c1c5073704c1938b53931','discokey:93079aca00038320a7f125193e7e2b4658efd88ce24d04ba15a3da1491b8d71d','srv-82','node059',21,'cli',NULL,NULL,'2025-02-04 15:03:47.081255506+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[82b6:6c3:f2d4:e056:dc1f:57e4:7842:53ba]:4136","55.12.42.230:527","[406e:9d3a:4fca:33e7:176:4fca:4450:3e0c]:21548","[536b:bf65:da86:c7ca:ecc4:c84a:1811:334e]:24087"]','2025-01-29 11:59:27.291048957+01:00','2025-02-04 15:03:47.081635111+01:00',NULL,'100.64.0.54','fd7a:115c:a1e0::36'); -INSERT INTO nodes VALUES(60,'mkey:552ca0e4c611df54a38bd150d7f6c4ef77c66625435073b3751718e44884b92a','nodekey:7b4b25d90935b79af2efc7f0499d3f16a49d5078997314de57b65fb5ce01cf1c','discokey:2118ebb5a0b1c12de379802db2b247fcbf516de56216c4d0ee7a0489dde05071','laptop-14','node060',21,'cli',NULL,NULL,'2025-02-05 00:58:59.940225737+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["67.131.191.86:14533","45.142.234.80:30901","58.251.220.64:42364","[d9a7:5288:3427:e52b:4eb5:794f:11d2:5d6b]:33898"]','2025-01-29 12:01:57.48748166+01:00','2025-02-05 00:58:59.940897467+01:00',NULL,'100.64.0.55','fd7a:115c:a1e0::37'); -INSERT INTO nodes VALUES(61,'mkey:f5eaa64ce3d0dbd7539fd9c3f9a494688a4a64b9a3d1fc0b9f40dedc14192ba8','nodekey:ccd6aadf68fdcc95d0d15f6654a0c6bfd5ebbf282491539ee63e5b0848de9ef5','discokey:a492c6d1d0b9de44f6adcf7ac919d49c269058d301b4be21fbb5bf4203b2fa8b','db-97','node061',21,'cli',NULL,NULL,'2025-01-31 13:55:16.761715226+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d002:74d:2b8b:87bb:25cf:5512:50fc:e379]:46829","206.194.47.198:21677","22.203.32.217:28664","112.239.138.73:43954"]','2025-01-29 12:03:01.464646336+01:00','2025-01-31 13:55:16.762309571+01:00',NULL,'100.64.0.56','fd7a:115c:a1e0::38'); -INSERT INTO nodes VALUES(62,'mkey:4cbd491b0977f3a8cc92ef9f20a7726b3984f3c0699429ae46a0bceec5febee4','nodekey:66a7e77700fb0f4bae5b282413bdb5ff7c375549dd85c4300c73ff3d8029b4a4','discokey:2e9034fc43993eaa0cf2bc21118d378bb02afb3d197a33cf9f0dd201d176b23c','web-06','node062',21,'cli',NULL,NULL,'2025-02-04 18:18:13.742833978+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["81.51.60.60:63220","12.64.198.119:51541"]','2025-01-29 19:23:14.092804852+01:00','2025-02-04 18:18:13.743206432+01:00',NULL,'100.64.0.57','fd7a:115c:a1e0::39'); -INSERT INTO nodes VALUES(63,'mkey:4f53e8f17a458333f6c3baa9c54246a6bdf6b9b431fda11cb4982aaec8da44e3','nodekey:cfa7416a7c22869f6875a73cb68c5e71afa3efdba93aa52d97d0d5ba546a3e43','discokey:66b691bca43b231ad1ebfed80944bb4a78150b16e99a6bab48c530ad2573556c','desktop-44','node063',21,'cli',NULL,NULL,'2025-02-05 18:07:10.372444037+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[62ed:996a:ea86:6865:c4ee:753b:e387:5fee]:19029","105.95.65.117:11546","[68bb:e8d1:72d8:8b02:608b:2284:c9b3:184d]:20876"]','2025-01-29 19:41:40.535299057+01:00','2025-02-05 18:07:10.372725767+01:00',NULL,'100.64.0.58','fd7a:115c:a1e0::3a'); -INSERT INTO nodes 
VALUES(64,'mkey:3e0dcb286f6e19b1e3a0b5330978aa8a8d710934cc754e811d0f690046e04469','nodekey:cd2d301b9b163b806fe8d63cdd07004f26559706bf7dc782645f05e12ebe0ec3','discokey:14675666cf2b55697f0032897d1c6742413d7c24bada8cfdc6246bf150d06fc7','db-76','node064',21,'cli',NULL,NULL,'2025-02-05 18:06:48.587173709+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["74.32.222.198:59615","145.91.193.5:37126","[2fb6:ea0a:3639:a1cd:9075:a258:6a79:4f5c]:13944"]','2025-01-30 18:18:57.519126133+01:00','2025-02-05 18:06:48.587847966+01:00',NULL,'100.64.0.59','fd7a:115c:a1e0::3b'); -INSERT INTO nodes VALUES(65,'mkey:1adfdd6ba590feffc711c40506ed5ecb16609cd6e90a536d50c5f54cec223508','nodekey:0d58cee32d99cde76b3d8769ae4605cb4bfc9cbb01fc13069c8b59faca320e48','discokey:cba00753c508df0b5cdea1e59eaa829c6cfda4f8399121307f931dbaa3730296','desktop-90','node065',21,'cli',NULL,NULL,'2025-02-05 18:05:58.789731794+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["172.210.49.58:58821","[23f9:9c6d:5069:a14f:33d2:c0b1:5ab5:f83b]:46288","[b341:3259:c5dd:9f91:fd70:c616:3033:2b3b]:59081"]','2025-01-30 18:19:40.354692307+01:00','2025-02-05 18:05:58.790429401+01:00',NULL,'100.64.0.60','fd7a:115c:a1e0::3c'); -INSERT INTO nodes VALUES(66,'mkey:547a8b97c8c8227d9ba40174ce4d28d0e64f10fae3f39797c60e29742ed59e62','nodekey:cb4a9d1dee6f581036b7c262348cf2c4a96be8b2eea78f7805097e3f7b1a9fcf','discokey:c233a42a4afb5fbf921bf7b3fb4330e6c46273d18570aaf6b7254dbf3c67614f','desktop-82','node066',21,'cli',NULL,NULL,'2025-02-05 18:06:09.975856526+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["21.197.131.68:13997","6.193.99.72:20874","181.248.237.50:21695"]','2025-01-31 12:05:28.65297301+01:00','2025-02-05 18:06:09.976571645+01:00',NULL,'100.64.0.61','fd7a:115c:a1e0::3d'); -INSERT INTO nodes VALUES(67,'mkey:a8c1493c4f14cc5f41d2a37ee172d4871795f15df1d694e3e60cb00e85753133','nodekey:330d0406c86db5dad6e8c8f3aeea4a9d2e400f5979148c8dc0362c99a732359a','discokey:48d90798a39a3f18baf394f4c5470f6130e4e71b9b3537ab794729a69cb39fed','email-36','node067',21,'cli',NULL,NULL,'2025-02-05 18:07:12.217604914+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f54c:f310:c771:8240:30ba:d473:9e15:10c7]:14619","[b271:6307:9d44:59d0:49bd:4f2e:f1:367c]:65399","140.8.191.17:10998"]','2025-01-31 12:06:30.121464114+01:00','2025-02-05 18:07:12.218026563+01:00',NULL,'100.64.0.62','fd7a:115c:a1e0::3e'); -INSERT INTO nodes VALUES(68,'mkey:45e8c4a1edae8a0cdcabff6034ce05d4a52d3409257ab29296f6cf3a264e83c8','nodekey:b3059cba99de1e33977250c037645ba9d16cd632437994b34684b0907fe3ceac','discokey:3c55aa60782b081dc35d241058173ee1b33b787c275cece979f7e42a5b18342f','email-71','node068',22,'cli',NULL,NULL,'2025-02-05 17:22:26.298576706+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[5162:12d4:c733:e842:4621:e95e:6576:9cce]:59605","207.56.208.145:2966"]','2025-02-03 14:16:55.56431345+01:00','2025-02-05 17:22:26.298939503+01:00',NULL,'100.64.0.51','fd7a:115c:a1e0::33'); -INSERT INTO nodes VALUES(69,'mkey:bac278a5ad2db26a926acc26fd63948ba6bd59f251f7cb63189686ce6ab708a1','nodekey:24a9b3c24f38b690ca357a9659d7694a2a4fc85da77fb1412eae653c644b3bd2','discokey:1146959bad3ded51876532d5bd19cb803a242cd31ec4139dbcff8253c0ad3688','desktop-73','node069',22,'cli',NULL,NULL,'2025-02-05 18:04:17.831085877+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["54.50.198.66:11070","29.172.28.99:15557"]','2025-02-03 15:23:16.312084161+01:00','2025-02-05 18:04:17.83122503+01:00',NULL,'100.64.0.52','fd7a:115c:a1e0::34'); -INSERT INTO nodes 
VALUES(70,'mkey:70c4138967c4f507d775370eb51027481476532f60551ad6a15d7d7f409bed0e','nodekey:f7a1ee8db2e3b1d46d312fa05852ca855dc4fb1c08c661f9d6ae72cf158312c4','discokey:2ef78cf519914d9d37837930ec28165ac6ed347ea0704dc38143c6763d5e0da9','web-04','node070',21,'cli',NULL,NULL,'2025-02-05 18:07:47.934048862+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["74.194.16.233:59572","182.2.129.47:16383","[b20b:7689:a05d:fa7b:596f:af42:cc23:5379]:28028"]','2025-02-03 18:09:35.161109801+01:00','2025-02-05 18:07:47.989204771+01:00',NULL,'100.64.0.53','fd7a:115c:a1e0::35'); -INSERT INTO nodes VALUES(71,'mkey:e50a34b03da2a75cc4317bd6499e8d3acc04c79a7fed05b4fc0fd9f4d039c457','nodekey:d0a2d496173e1283a1bedfb91ed66eb6c329939cdd49186fff95dbbd8793d453','discokey:e5263b334eadce0e0ec392ada8e3c2e9365c3d84c2b66e12159298a3ac1bdec9','laptop-52','node071',21,'cli',NULL,NULL,'2025-02-05 14:26:52.974125098+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a9c8:980f:e64c:96b3:b612:ecf5:c8c9:44cd]:35202","52.120.18.3:13372"]','2025-02-04 12:03:50.32663805+01:00','2025-02-05 14:26:52.974521366+01:00',NULL,'100.64.0.63','fd7a:115c:a1e0::3f'); -INSERT INTO nodes VALUES(72,'mkey:4c796b98359112a863206aac0c0227cb52ff020ab4ae85f4bfe0a6d607605b48','nodekey:e7e7abfb764bef835912f27af2c65edb3c5d01f2d2ac01943b3a6f5e67cd0c6a','discokey:7890936e6fadf5482ac24097a8c38c09b767fd9bfc2e83c05501524120f869a1','email-48','node072',21,'cli',NULL,NULL,'2025-02-05 17:10:05.65587839+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["126.19.171.233:19526","[91e1:a8d7:5c43:b695:4f5:b147:b5f7:9f75]:62218"]','2025-02-04 12:07:36.231437299+01:00','2025-02-05 17:10:05.656193493+01:00',NULL,'100.64.0.64','fd7a:115c:a1e0::40'); -INSERT INTO nodes VALUES(73,'mkey:a46b92cea88a9bac37dcb612a4a50dee75684f736c8c7f2aab812fa5fb9d463f','nodekey:1bbfdddf7b2a929a3da1e69236aa8b8fbd4b25d1839e57aaecf2ab910592bc78','discokey:c2188df6632aded7862de4b5d07c3993b3e908c59ce38f9d481f4840d85448c7','srv-06','node073',21,'cli',NULL,NULL,'2025-02-04 18:18:08.553472495+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[99f:9853:60cd:a4c5:e9d7:2a40:f86:742b]:782","[99cd:c956:e0f9:b633:5f8b:5c6a:50b1:571]:18474"]','2025-02-04 12:10:26.50545127+01:00','2025-02-04 18:18:08.559366076+01:00',NULL,'100.64.0.65','fd7a:115c:a1e0::41'); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2023-05-17 19:36:55.859473496+02:00','2023-05-17 19:36:55.859473496+02:00',NULL,'user001','','',NULL,NULL,''); -INSERT INTO users VALUES(2,'2023-05-17 19:36:57.059073465+02:00','2023-05-17 19:36:57.059073465+02:00',NULL,'user002','','',NULL,NULL,''); -INSERT INTO users VALUES(3,'2023-05-18 10:10:36.248939077+02:00','2023-05-18 10:10:36.248939077+02:00',NULL,'user003','','',NULL,NULL,''); -INSERT INTO users VALUES(4,'2023-06-10 09:06:13.920718561+02:00','2023-06-10 09:06:13.920718561+02:00',NULL,'user004','','',NULL,NULL,''); -INSERT INTO users VALUES(5,'2023-06-11 19:58:32.371218434+02:00','2023-06-11 19:58:32.371218434+02:00',NULL,'user005','','',NULL,NULL,''); -INSERT INTO users VALUES(6,'2023-06-17 19:39:53.031565686+02:00','2023-06-17 19:39:53.031565686+02:00',NULL,'user006','','',NULL,NULL,''); -INSERT INTO 
users VALUES(7,'2023-06-20 11:35:09.325846831+02:00','2023-06-20 11:35:09.325846831+02:00',NULL,'user007','','',NULL,NULL,''); -INSERT INTO users VALUES(8,'2023-06-21 22:47:48.196234382+02:00','2023-06-21 22:47:48.196234382+02:00',NULL,'user008','','',NULL,NULL,''); -INSERT INTO users VALUES(9,'2023-06-22 08:30:35.068995572+02:00','2023-06-22 08:30:35.068995572+02:00',NULL,'user009','','',NULL,NULL,''); -INSERT INTO users VALUES(10,'2023-07-03 10:18:32.123226+02:00','2023-07-03 10:18:32.123226+02:00',NULL,'user010','','',NULL,NULL,''); -INSERT INTO users VALUES(11,'2023-07-03 10:18:37.130387602+02:00','2023-07-03 10:18:37.130387602+02:00',NULL,'user011','','',NULL,NULL,''); -INSERT INTO users VALUES(12,'2023-12-15 08:05:06.013615212+01:00','2023-12-15 08:05:06.013615212+01:00',NULL,'user012','','',NULL,NULL,''); -INSERT INTO users VALUES(13,'2024-02-03 16:32:42.224977233+01:00','2024-02-03 16:32:42.224977233+01:00',NULL,'user013','','',NULL,NULL,''); -INSERT INTO users VALUES(14,'2024-05-03 10:12:38.220973042+02:00','2024-05-03 10:12:38.220973042+02:00',NULL,'user014','','',NULL,NULL,''); -INSERT INTO users VALUES(15,'2024-07-26 08:08:40.979783263+02:00','2024-07-26 08:08:40.979783263+02:00',NULL,'user015','','',NULL,NULL,''); -INSERT INTO users VALUES(16,'2024-08-05 17:32:02.878091894+02:00','2024-08-05 17:32:02.878091894+02:00',NULL,'user016','','',NULL,NULL,''); -INSERT INTO users VALUES(17,'2024-09-22 15:48:00.287392203+02:00','2024-09-22 15:48:00.287392203+02:00',NULL,'user017','','',NULL,NULL,''); -INSERT INTO users VALUES(18,'2024-12-10 13:55:11.256977421+01:00','2024-12-10 13:55:11.256977421+01:00',NULL,'user018','','',NULL,NULL,''); -INSERT INTO users VALUES(19,'2024-12-17 14:57:58.550971236+01:00','2024-12-17 14:57:58.550971236+01:00',NULL,'user019','','',NULL,NULL,''); -INSERT INTO users VALUES(20,'2024-12-17 15:02:08.053169491+01:00','2024-12-17 15:02:08.053169491+01:00',NULL,'user020','','',NULL,NULL,''); -INSERT INTO users VALUES(21,'2025-01-28 15:57:32.774456057+01:00','2025-02-04 14:53:57.282402538+01:00',NULL,'user021','','',NULL,'',''); -INSERT INTO users VALUES(22,'2025-02-03 14:10:50.491924701+01:00','2025-02-03 14:10:50.491924701+01:00',NULL,'user022','','',NULL,'',''); -CREATE TABLE IF NOT EXISTS "routes" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -INSERT INTO routes VALUES(1,'2023-05-19 07:09:23.387641743+02:00','2023-05-22 09:48:18.908103256+02:00',NULL,3,'192.168.224.0/21',1,0,0); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.24.3.sql b/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.24.3.sql deleted file mode 100644 index 4d767ccc..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.24.3.sql +++ /dev/null @@ -1,117 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations 
VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -INSERT INTO migrations VALUES('202501311657'); -INSERT INTO migrations VALUES('202502070949'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'463a8b372963aeaca12400faa0c7ea29e9bfa3b4c59e9622',3,0,0,1,'2023-05-19 05:09:19.66636462+00:00','2023-05-19 05:14:19.664224869+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(2,'77a019cb12c0d0b9347a17ab23fa4c87983814fe36bb2fbb',14,0,0,0,'2024-05-03 08:13:55.8614948+00:00','2024-05-04 08:13:55.85782156+00:00',NULL); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`ipv4` text,`ipv6` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes VALUES(1,'mkey:0ea01e8cd608548a171ebcedc16da73b25ec18d58861be7a6b4274eb17baf6ba','nodekey:0fcf591dc1997c1d7388a3835bd24dae48240e5b03b6a7f5f33b2e813d222bcf','discokey:ed27f0b3c04222f0d05bfbc6fb5787338275ed0e32fd42fe68c669aca0f3d7a4','desktop-54','node001',2,'cli',NULL,NULL,'2025-01-30 19:50:45.306207411+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b8f0:cf44:22c8:82d1:f79:c3eb:9144:c24f]:6142","[8e25:af1:e31a:8fcc:7dcf:be1:a3c9:ed6a]:38766","[37a1:215f:bf3d:3fb1:6399:f7cb:4048:3ca8]:51666","137.242.125.72:25376"]','2023-05-17 19:38:13.531518257+02:00','2025-01-30 19:50:45.306340015+01:00',NULL,'100.64.0.1','fd7a:115c:a1e0::1'); -INSERT INTO nodes VALUES(2,'mkey:1e274f40556282a5bbab04b6754985233ccdf971b663e5b38fc027035e833e9a','nodekey:5b16c7e14191ca2e57eccd86ec636db605fa5dc2c8856be5fa7f9b087cf24890','discokey:5fd2f116700fd48ce06e35e8dd11c847ccd7694e01ec4b3f6ef25f67c14ee197','email-21','node002',1,'cli',NULL,NULL,'2025-02-07 13:37:43.531171201+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[5aae:55d1:7b5d:628d:bc07:b3e0:eca2:836d]:13912","[5583:8a0e:1564:ac5b:98ab:59a3:126:cba2]:35537","84.23.188.83:48876","44.184.117.89:36643","90.223.136.253:7056","[e43c:be7:7f4e:8217:a009:7f9d:a510:1af6]:18872","[3a86:865b:2f4c:f3b7:88a0:4aa2:f05d:c3fd]:47680","86.50.30.30:18216"]','2023-05-18 10:09:21.757289398+02:00','2025-02-07 13:37:43.531627524+01:00',NULL,'100.64.0.2','fd7a:115c:a1e0::2'); -INSERT INTO nodes 
VALUES(3,'mkey:4405716e66f6a07a8fea90183cb078a8b87108aef23965b1c65f51375f3882d3','nodekey:96f979ded4cf1a02655d0b9069fb14e04d2da09c182624313acccf8c7f6b67a4','discokey:f0f4a89d7893303e1a336c9838c9a55b36a5215da006fccb72c237a794f4c5ea','email-51','node003',3,'authkey','[]',1,'2025-02-07 13:37:15.112124641+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[7cd1:a579:7ddd:ed37:5f70:5783:8e09:1968]:5323","110.83.0.14:9543"]','2023-05-19 07:09:21.399903526+02:00','2025-02-07 13:37:15.122168073+01:00',NULL,'100.64.0.3','fd7a:115c:a1e0::3'); -INSERT INTO nodes VALUES(4,'mkey:9cd192ff56bc94ec9439083852bb541469d6725e6cda7a2aa624df6e8c8c380d','nodekey:ac53dbdd1bb1c2a36a1dde021e8f5f33d2cf01ef2892fe74853e945b9a08fda3','discokey:57cae29ceafb73857b478955a77927ff5bd8f38c313a869be093f92c6f6303da','email-15','node004',4,'cli',NULL,NULL,'2025-02-07 13:43:11.76403969+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2421:d8be:14:6b5f:7fe:56a8:54b6:8d58]:25207","[20b0:e4f:96fd:1cf3:5d39:ad75:6b8e:d0ce]:51455"]','2023-06-10 09:31:51.940506933+02:00','2025-02-07 13:43:11.765088576+01:00',NULL,'100.64.0.4','fd7a:115c:a1e0::4'); -INSERT INTO nodes VALUES(5,'mkey:7f9c690aab22d9db060086ea375d86924359210816ad562dab2db804ad3434c0','nodekey:88fe0bcd504fd98c35a71484820dbf7b91c1704b9d29c0c7950ce5ab06ef2d17','discokey:fbaef49f1a2c61469168a9e81ee6c5bd49a777048288975a22c0f253aa14e2e3','web-24','node005',4,'cli',NULL,NULL,'2025-02-07 13:41:53.142598056+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["186.84.191.82:2621","[c9d0:4e15:513a:171b:9911:f9a0:cccf:cd83]:24151","66.84.158.199:63451","47.94.115.240:6677","19.74.30.92:1284"]','2023-06-11 13:56:42.694329408+02:00','2025-02-07 13:41:53.143294241+01:00',NULL,'100.64.0.5','fd7a:115c:a1e0::5'); -INSERT INTO nodes VALUES(6,'mkey:00e154160e4bd8db73f2707c7a5b2f0bae04f3e26427a1aca5b0dcf2dcc1872f','nodekey:4f61b1414dd622304d407a7b4812640f774247f0c3d369a2292e3f4e88383c17','discokey:1028826beface553dd6ea7e553cc63ca59fd529716cb639981d1aa2965ed8885','laptop-48','node006',4,'cli',NULL,NULL,'2025-02-07 13:43:59.675348504+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["220.204.45.94:59135","187.96.173.136:34092","[fece:76ed:4c23:1e89:f225:97a1:94be:b5c4]:33472","[e1e6:a14:535a:668b:698:f125:543f:b5b7]:31740","[b8e3:18de:5ece:3e24:8bbf:1cbd:d27c:7de]:53522"]','2023-06-11 13:57:44.975695604+02:00','2025-02-07 13:43:59.676029923+01:00',NULL,'100.64.0.6','fd7a:115c:a1e0::6'); -INSERT INTO nodes VALUES(7,'mkey:69a5389410061b4c336c1f337ae3ccca903f729ff249b6f1e556dcb9d9a3b7eb','nodekey:54c0d87c263afc07b7a3bb5569f8d6b67c58fecdcabefcbe7b8057e91e573a31','discokey:765930463b973ddead698512fe882cbe5e13885d14d44808596600efbcf5303f','db-22','node007',4,'cli',NULL,NULL,'2025-02-07 13:43:32.833347079+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9b54:61a6:c678:edac:956f:741e:5e2c:9ec9]:17930","[cd3f:af1c:fcb4:bcb3:806e:adf4:3f3f:ab36]:58513","[7a83:6a84:9f15:ba1a:9671:3d54:2e31:d5e9]:64893","167.223.65.194:6907","[debd:ad2f:ae50:cd14:9cd2:b297:bb94:d03c]:24266","[489b:6c59:1877:6215:ddc8:57f8:78dc:321c]:3025"]','2023-06-11 14:16:56.951313537+02:00','2025-02-07 13:43:32.833827225+01:00',NULL,'100.64.0.7','fd7a:115c:a1e0::7'); -INSERT INTO nodes VALUES(8,'mkey:3fa0a7da3e9f0eecb11b442d911620b629981f5eb3f4ee851256a55b7252a1e7','nodekey:1912a3b843ba973c55b449fb0e1e01faff114b440e7bd1ca8b9af3be0cc56762','discokey:0d4f62fb22a9ab554a8983123cc118ec17b9d47c97d1d84bf082c90f82fe267b','lt-35','node008',5,'cli',NULL,NULL,'2025-02-07 13:44:16.543907578+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["124.104.237.237:47957","[b9f9:eec0:d656:cd83:8595:46fc:b677:6ade]:17494","126.251.133.86:44347"]','2023-06-11 19:59:30.401970393+02:00','2025-02-07 13:44:16.544521726+01:00',NULL,'100.64.0.8','fd7a:115c:a1e0::8'); -INSERT INTO nodes VALUES(9,'mkey:dc84e5b1cf58ec8f7200ec0a655c1d5843c520ea59aa7ddd111ad5c9beba9bd8','nodekey:3c06f5a6d7f267023044ab632892b51f5af1ce4d84f0739e88823bc22cda6c57','discokey:3c1aa008a7811012038cd6658b6f290b3d252074c47d23cf1766f878340a30bf','desktop-75','node009',6,'cli',NULL,NULL,'2025-02-07 13:37:15.10720621+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["187.166.20.165:11208","78.47.168.253:34957"]','2023-06-17 19:40:45.468789461+02:00','2025-02-07 13:37:15.127284919+01:00',NULL,'100.64.0.9','fd7a:115c:a1e0::9'); -INSERT INTO nodes VALUES(10,'mkey:2325e928fb41b3a3815b7635223411100ba10eba8ea1b442838b40da35379ec9','nodekey:8654f75dc553fbb381f57fd3f8669b5c2572f65d97989d0a69c3b294bb8cc685','discokey:24037160babf3208537006968c9f47ec3e689b1873796a787eaa44fa95afd986','desktop-46','node010',6,'cli',NULL,NULL,'2025-02-07 13:37:36.447602378+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[97a8:bbb7:9bb9:aa2b:e449:4d57:4a65:3561]:17392","151.97.159.85:8120","[69bc:fe73:f6c7:31b2:bc4f:1a95:4676:201d]:4806","[d794:c918:b0e8:6023:d7cd:709:79eb:23f7]:56801","113.147.88.203:61138"]','2023-06-20 11:18:35.905417341+02:00','2025-02-07 13:37:36.47098382+01:00',NULL,'100.64.0.10','fd7a:115c:a1e0::a'); -INSERT INTO nodes VALUES(11,'mkey:4fb69f4bf77b8159e4814764398e6cd3bf2067b3fd4eda0eacc443ec299fcde2','nodekey:cebae8af02150eeb54b67f2cadc33dda38a98c94cfec05c1f378bf130007e946','discokey:fd5c9fefc960538a070c2bddafd53c3f890c561e02b62218e4aa8e6ae38cbe4f','email-12','node011',7,'cli',NULL,NULL,'2025-02-07 13:38:08.624346736+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d86:4301:d7d:7706:8b4a:86bf:19cd:1d6]:55529","151.169.147.218:56573","[9e9e:2802:ebb9:ba03:a564:7999:7137:65e3]:52919","[ede2:462a:f99c:24c3:b1b5:8b97:abe2:3ddb]:65141","[dfe7:6f6b:a382:4fc:a3f6:eb9:9ebc:5798]:1474"]','2023-06-20 11:35:15.063855316+02:00','2025-02-07 13:38:08.624847369+01:00',NULL,'100.64.0.11','fd7a:115c:a1e0::b'); -INSERT INTO nodes VALUES(12,'mkey:5e192b1b323439fcc3b921c4f523ce9b7cce867220bd092d23be2451ce93a4a5','nodekey:1452cb2c8f26c8988cd8d8ed42957c78e3ab922bdf9773978c853e4254e23fbb','discokey:b7b41dc5de1fdcc4bd609dc72f79d27269367dbf72ab73ab61e998aa31eac6a2','web-19','node012',5,'cli',NULL,NULL,'2025-02-07 13:44:08.729267096+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["60.5.210.145:1193","[5698:e318:b38c:4331:5152:c2df:743f:8ead]:34839","33.51.84.119:46756"]','2023-06-20 18:22:35.061914624+02:00','2025-02-07 13:44:08.729820984+01:00',NULL,'100.64.0.12','fd7a:115c:a1e0::c'); -INSERT INTO nodes VALUES(18,'mkey:f11332335d9f1f9d4bfc8cf9bf4edc0c958d4e4364b7dcde3c67d847f7d05edb','nodekey:e97d97a2751ef3533818907c9f24acf3270e3824f2294bb9a36d60ca196e161e','discokey:412b0eee00b68721d9169fc81039b9cda6cf57143e4cae8f7c417db15b83411a','web-18','node018',9,'cli',NULL,NULL,'2025-02-07 13:37:16.15950606+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2acf:3f32:1763:5d21:88a:4af:8779:1b1d]:36283","84.177.251.38:32968","[3138:83c6:ae09:a073:7888:71e0:494:3863]:59700","191.156.96.176:5489"]','2023-06-22 08:30:54.08720463+02:00','2025-02-07 13:37:16.721306872+01:00',NULL,'100.64.0.16','fd7a:115c:a1e0::10'); -INSERT INTO nodes 
VALUES(19,'mkey:157706346bfe3502b6e24a84efa59b1a151be08f5a260ff96c72a734c1da1390','nodekey:139a99de286f52c55bdd61e932827c9d7cb84101a698df3576bdffbeedbf654d','discokey:293abc1a1cf2921e7961f5753cc39097956648bab47879791afb12ea03a4e6dd','lt-42','node019',10,'cli','null',NULL,'2024-08-07 09:35:12.220368767+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[ce0f:9f05:96dc:a47d:edcd:1c25:f578:7551]:30363","[cc06:9c56:919a:2341:4ef7:7f28:c8ca:2a5c]:30788"]','2023-07-03 15:18:03.829428454+02:00','2024-09-18 16:15:14.037990869+02:00',NULL,'100.64.0.13','fd7a:115c:a1e0::d'); -INSERT INTO nodes VALUES(20,'mkey:e4713082f1a4d5e2f5ca126b95eedc7ce673782a871afde980a74aa98e31273a','nodekey:3957decaff55af98415799c81e6cec28edc377d6e2a9ddca3c909c26bcffd45b','discokey:9e8360b263f67dc8d5dea9d37f023ebbbb8882f265f6ca43f5923194d7931609','db-08','node020',11,'cli',NULL,NULL,'2025-02-07 07:28:20.585544641+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[4692:f8a0:842b:c26a:e549:1f3b:6795:964e]:31064","207.187.74.99:12216","63.3.67.150:22907","[77ac:562:7c4c:438a:6ec:be48:743d:47b1]:11349","[33cd:731c:f110:9b45:66b5:de81:1daa:757f]:5997"]','2023-07-10 12:40:34.838579199+02:00','2025-02-07 07:28:20.585679087+01:00',NULL,'100.64.0.14','fd7a:115c:a1e0::e'); -INSERT INTO nodes VALUES(21,'mkey:2894a7babdc030bac1e4ec0e14aa4b6b67ebe382a1773e0ff4aa8b16ecd443a3','nodekey:1e219f3cc03eaa868018b9f6870d108317c31760fdc38ad4bec7200d58379f26','discokey:a81e998383baa23adb3e1e6757ef0c4b960095022f00b500b57f53c6147c71e4','laptop-48','node021',11,'cli','null',NULL,'2023-11-20 07:19:19.447470862+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[736d:a0a:e343:6b2c:b48b:7510:ace2:edc]:60377","82.193.154.129:23713","[68:80fa:7b30:3a0:d1fb:3e89:2c73:256b]:30308","[ad42:3917:5edb:3e85:732:1c71:a19e:9fe0]:11799"]','2023-07-10 12:42:43.290469734+02:00','2024-09-18 16:15:14.03886353+02:00',NULL,'100.64.0.15','fd7a:115c:a1e0::f'); -INSERT INTO nodes VALUES(22,'mkey:9ab69494a274cba5b22cbcf35b3ada57d4b3d2ee03930a456f80fb34dc69064b','nodekey:0345912b4e39cd9d03dec55edec6fba7f8aca631312884b49538550475332182','discokey:a2c1be2ae50af07e5b96b0a3ee2d3a1b0148fbfc5c1a283bc787e31410d6f74c','web-51','node022',6,'cli',NULL,NULL,'2025-02-07 13:37:15.266244078+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["9.237.225.227:64622","[46a1:e1e6:6b82:6599:acc7:2273:afea:f09f]:38018","93.129.193.97:13477","184.41.32.72:1026"]','2023-08-05 12:08:48.132161695+02:00','2025-02-07 13:37:15.266464476+01:00',NULL,'100.64.0.17','fd7a:115c:a1e0::11'); -INSERT INTO nodes VALUES(23,'mkey:d1b90c1c72090d082067c2dfbf287f26510b58519bbc5113f1ca63ad3ef20b96','nodekey:2d0d6f2300d295c8f91d0b954d733a2f8c4106595070d3efc6eb709df09591f6','discokey:87ef4543866a337716d7d5021346b82bbbd1aab14c36deadf96b4c38718e8cd1','srv-45','node023',12,'cli','null',NULL,'2024-12-07 19:25:56.935152754+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[79e6:8519:803b:6054:5c13:9069:8dfa:1dcd]:45055","30.154.162.65:53795","21.107.170.69:30767","212.18.227.41:50735","[f774:e89b:980f:975d:fd3e:93f:a9b5:2034]:46054"]','2023-12-15 08:05:56.592241745+01:00','2024-12-07 19:25:56.935317645+01:00',NULL,'100.64.0.18','fd7a:115c:a1e0::12'); -INSERT INTO nodes VALUES(24,'mkey:f10b7f9f385e48dc99ec0944b8a04ec4d3e5613587419c134440121fa747ee75','nodekey:9559349d25dd3fa88b9595cb4ccebeeb4712338757a209e936afe37ec2643dbc','discokey:b9a07315c1eb2e074871feebe899bd2ecae64d98a8edf08883785130ef386728','laptop-25','node024',12,'cli',NULL,NULL,'2025-02-06 16:36:01.609321017+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[9e60:1bb7:b542:2f50:269f:5ef:72b6:f4ba]:34677","159.52.2.200:7205","[5b7b:e424:ea77:aa8a:f835:d6bb:2e1f:52ea]:19152","[bb06:d322:7cec:426b:ef38:55d0:e7ce:95bc]:31597","[320d:6314:eaa2:a098:8594:977d:27b:8442]:12827"]','2023-12-15 11:14:36.765183054+01:00','2025-02-06 16:36:01.609441418+01:00',NULL,'100.64.0.19','fd7a:115c:a1e0::13'); -INSERT INTO nodes VALUES(25,'mkey:84784f348b4e879e87986ef13b6f02ddef91639cd7de65540903cd846f93e1f2','nodekey:c15dc98ef47cf6f5dd52f59c22b3b6b0d8536ea5d248787a0145b45e6598cff6','discokey:c14e1be497f1423b4f4aa799f8ee907307eec987a1c3f7e32d4adf215fa731c6','srv-70','node025',6,'cli',NULL,NULL,'2025-02-07 13:42:34.03052815+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[23d7:1ab8:8bd1:10f3:a4f7:bd70:7803:773d]:50718","[3200:76c9:dca8:d60:e5d6:868e:a264:3efa]:33138","[975a:978:5cfb:5e5a:1b6:d272:a1c1:50dd]:3245","[e6e4:7cfc:9ce:b47:62f5:b64e:d2e4:9015]:42369"]','2024-01-05 17:32:40.940566279+01:00','2025-02-07 13:42:34.031054525+01:00',NULL,'100.64.0.20','fd7a:115c:a1e0::14'); -INSERT INTO nodes VALUES(26,'mkey:dba47da985e4770a27031261b989e24ddd437983652bd1f96cc520ed78c5fc0c','nodekey:3c36d35f3e9ec98b36741e5973d8f5542fd82503a1752ad9e20545fb6be984bc','discokey:b30567e26d2fae861e566c4081179d8c72f9e659521dd7e6afecb3fbaaeae63a','db-02','node026',6,'cli',NULL,NULL,'2025-02-07 13:37:36.216420089+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["192.175.239.121:921","98.153.121.140:5003","208.68.161.128:56198","163.42.128.241:47535","49.160.22.202:42427","[1526:e34a:8857:394e:bbe0:c043:4b37:68d4]:5245","32.31.195.128:7039","[c50a:ec:e2e:e109:3a08:f56b:2928:4921]:61265","221.147.144.138:44737","136.208.142.64:41091"]','2024-01-05 17:34:19.811670479+01:00','2025-02-07 13:37:36.272729094+01:00',NULL,'100.64.0.21','fd7a:115c:a1e0::15'); -INSERT INTO nodes VALUES(27,'mkey:73868242348c7d709b77f52963c32eb4b95c8b6f89abb5db76bd95cdaaa538b7','nodekey:2de3f421f78de3f5cfd824ffaa80cf3fde3060bc42f51c2ee2f55b49fd5fbe20','discokey:5ff3b0ad430ee5530c972ecf0d3e69f25eecdadf6ef8a309b3841d7aaa6f70d4','web-56','node027',6,'cli','null',NULL,'2024-01-16 14:32:21.570104175+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a2b8:ea55:f9e3:2c00:f423:8c56:7fec:c617]:53793","[ed88:2277:3c9a:4f64:a25e:5b41:a3ca:e904]:27852","[d203:40a2:5efc:4a42:8910:81f8:59c6:4d8d]:23389","187.126.162.12:25054","[5d8b:9391:47f1:1bbc:e01:92de:4a90:7614]:42410","172.81.188.131:23453"]','2024-01-05 17:48:25.466030859+01:00','2024-09-18 16:15:14.040766068+02:00',NULL,'100.64.0.22','fd7a:115c:a1e0::16'); -INSERT INTO nodes VALUES(28,'mkey:300a566a21fcaab2665dc091300e2c5110dcaee25e2fcc4cc23b011fdae471e6','nodekey:8e36531b406e0b7b9c1b9abadc00626ccc8c6e7a0c37f70f42ccf06cd2dd03fa','discokey:a4134738054f6fc0c57b6eb47913e68a3f47ffb81af8a3496754921c2fcb02f8','laptop-98','node028',7,'cli',NULL,NULL,'2025-02-07 13:37:15.697245044+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["113.143.9.36:55432","[e266:66fe:4929:3831:7830:6f10:1572:20df]:9977","[afe5:7d57:fac5:a1c5:78d4:ee33:d474:1e56]:52223","7.208.95.117:299","[d28b:7b33:1f88:7583:7ed7:a923:4c90:9ee0]:62318"]','2024-01-15 09:34:54.847632697+01:00','2025-02-07 13:37:15.743059169+01:00',NULL,'100.64.0.23','fd7a:115c:a1e0::17'); -INSERT INTO nodes 
VALUES(29,'mkey:8e49fb1776eb7ce9cf27cd557a05f6d1868e6ac5a06eab48baa953a0a0b17381','nodekey:3ff07a931e2725e7f96c378dc8b4a531818fa64e853879df8c51d5f2bbff8f9d','discokey:bcf9648f2249fbc34d8fe61593c8ffcfd4205593b9c6525196148b1f280228d2','email-91','node029',7,'cli',NULL,NULL,'2025-02-07 13:37:14.081795989+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["162.176.178.140:46577","[4f52:1708:c0a:7991:9c16:bd7d:4045:195d]:50362","[fd0d:f112:f642:c343:ac14:8ede:dc04:e2d9]:58412","[8a1a:5c17:bb81:bb69:c7db:3513:f14c:ca2d]:52077","123.235.220.59:59925"]','2024-01-15 15:18:12.2871978+01:00','2025-02-07 13:37:14.172137472+01:00',NULL,'100.64.0.24','fd7a:115c:a1e0::18'); -INSERT INTO nodes VALUES(30,'mkey:15fa512b5b78a7714edf3477babba0d325e0b121f5b81b63dafa434a9eb71c32','nodekey:b61427ebfd1dd6801f5bccb3fb42c30e1ace79441daa9d5d0cadcdd9ff199370','discokey:6a09d51cbcafc8c1eb9983a744f2df3f288d43a3b8ba06f088eb7fe1c375b341','laptop-77','node030',7,'cli','null',NULL,'2024-02-05 12:14:40.065688294+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e84c:35c4:4bd9:a100:c271:8921:ca7:5901]:21828","[fab9:f181:95a4:a769:df9b:b7ae:9799:83dd]:32341","[e868:f68f:cceb:8d29:5451:c716:bdca:437d]:41081","[7016:b48d:8bcf:4ccd:8679:8c47:1c66:cd1e]:41980","22.231.20.85:41616"]','2024-01-15 15:21:43.217136004+01:00','2024-09-18 16:15:14.041757308+02:00',NULL,'100.64.0.25','fd7a:115c:a1e0::19'); -INSERT INTO nodes VALUES(31,'mkey:fbebb2c69f263a519c8902463ef9c7db0597109fae5730a222f1c6567278bc86','nodekey:d83666c0acac1a9cd306cb884a1ce24adaabbc2102eb553ef5514ac5ea1714cf','discokey:6f4280bcb38ac995d22543db32230d8f22f2c0a0dc5efe2d86c7361c897ae812','lt-82','node031',7,'cli','null',NULL,'2024-02-22 08:35:27.098819037+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[4327:29dd:a02:fb00:9645:d497:8ffc:4a68]:26342","[5d4a:a2f2:16c7:3d3a:a2b8:2962:30f9:3407]:5279","107.140.203.49:9365","[1660:741a:160:67c9:7972:5f9d:14bd:ee53]:21178"]','2024-01-29 16:05:35.338524634+01:00','2024-09-18 16:15:14.042191514+02:00',NULL,'100.64.0.26','fd7a:115c:a1e0::1a'); -INSERT INTO nodes VALUES(32,'mkey:8c468799222af319e17ec309dbbf3a87fb6d924a1cb17e5a0bd7e027fe3bfe5e','nodekey:3c7d82f4a44cfa1469b957ac911e8506a68962fb2583d2afaf3e8a621c0c4aca','discokey:ba179b51f3540e619eaac9a6d4bc67403daa7c3cf28a6b579087cd6d284501b4','lt-53','node032',7,'cli','null',NULL,'2024-04-09 13:59:43.37062537+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d21f:d7bc:f690:e3ba:82fe:8122:c33d:5a86]:227","10.67.113.92:45634"]','2024-01-30 10:41:58.31917869+01:00','2024-09-18 16:15:14.042506082+02:00',NULL,'100.64.0.27','fd7a:115c:a1e0::1b'); -INSERT INTO nodes VALUES(33,'mkey:be7f190677fd0cd1c5e78b4ae5b02a1738dd60be7d9a41017000b22ae9636af4','nodekey:a404e578d3edd3e3d879e9f1946c859b5982c59f5e528bf6b987bd55de00b25b','discokey:ad3013f116c370866fc9134b66999bb05e2dc10a8e2944230c3d9893c0a94c67','lt-80','node033',13,'cli',NULL,NULL,'2025-02-07 13:37:15.862018576+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[af8:f5e8:32f0:75ee:d351:c0e9:78e:93d9]:12624","71.151.27.31:51063","[f0d6:21dc:48f5:63f4:daff:a3c4:7d60:51ea]:17286","[38c5:9551:f6ed:b087:91ba:bef1:771c:273d]:62171"]','2024-02-03 16:33:26.706408143+01:00','2025-02-07 13:37:16.21569294+01:00',NULL,'100.64.0.28','fd7a:115c:a1e0::1c'); -INSERT INTO nodes 
VALUES(34,'mkey:7784df3aabe1ca1edab5270028fb3cae8b66cee2384790d06969d1e6e298520f','nodekey:fd8e939fd6db8caf6623ac3d73c12ade4da110d8c1f29e3a3ee0d2be0c526915','discokey:59aff4582e00d92260e6a90d6d0da3d818fe8f52c081c8b65e18006ef8269ebe','lt-65','node034',13,'cli',NULL,NULL,'2025-02-07 13:37:13.91519638+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["71.155.69.236:8707","125.2.67.155:50785","32.6.80.181:17295"]','2024-02-03 16:42:32.683785672+01:00','2025-02-07 13:37:14.346290035+01:00',NULL,'100.64.0.29','fd7a:115c:a1e0::1d'); -INSERT INTO nodes VALUES(35,'mkey:ad3838fa63fb120ae8d3e47d4bb6bb4009e7fc5d91483d1ba7417d0681fd7162','nodekey:b10ddbdcf826d79049d74a7d4ad036b499c86f7daf786bb1a9b295366b57afe8','discokey:618ef629d3bd261964c613ad3bc45e6278fa274d734c8cd2d3396cf8c9742d88','web-33','node035',5,'cli',NULL,NULL,'2025-02-04 17:50:49.267594119+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["169.159.71.157:18240","[bbb7:5b4d:218d:55ff:ac9b:1fb6:c933:9a55]:63943","18.156.132.228:63356"]','2024-02-03 16:51:52.010016072+01:00','2025-02-04 17:50:49.26844864+01:00',NULL,'100.64.0.30','fd7a:115c:a1e0::1e'); -INSERT INTO nodes VALUES(36,'mkey:c536a3efc460fb2da4c4775e2b51d0c84f001dc46a4d902429e74e97b2d3c6d4','nodekey:91da2acfa3d7f7c7b0c1a2d952ee527514504fddc504809423762203a16145ab','discokey:989ff5f21d485dcb6312fade38c9047bbde1fbce46bc29ac5f6afe00610ad215','laptop-46','node036',13,'cli','null',NULL,'2025-01-19 14:01:48.956567669+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[4017:a243:61d0:b302:7860:18f:7d4d:f605]:28633","[e5b8:6c78:ce2f:cf6f:13fb:401f:8223:3f04]:54121","38.137.88.247:35801","[eec2:61c0:e20a:2fa0:5e4d:56a4:5071:881b]:36628","[b8c2:c792:3329:2c93:f94b:a948:7d11:b9f2]:908","[5925:353c:bd9a:f42e:1916:45b4:c170:e515]:18400","[c2a5:cc2f:e2e8:573f:7b01:4aa:8a1a:fd71]:32146"]','2024-02-09 12:34:57.879970954+01:00','2025-01-19 14:01:48.956830267+01:00',NULL,'100.64.0.31','fd7a:115c:a1e0::1f'); -INSERT INTO nodes VALUES(37,'mkey:142e67963c9045164df7b33af6d2149d72f59527df44fd75221647281444cbe7','nodekey:a3a2cd2bd0c3a1f783d0391ea1951c5c9226ddca0f18130b77fccff8b874d694','discokey:bdb788f8825b9bd816f37dcea882920382c7feb143edf0a6baefc9015774d8aa','lt-24','node037',5,'cli',NULL,NULL,'2025-02-07 13:40:58.763825796+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c31d:dd18:321:589:11fa:d85b:695f:1fca]:43064","168.4.78.117:48474","90.45.218.0:29326","90.233.46.138:29846","8.137.133.59:22781","[cb5c:f8d4:ccb0:8333:8ec:170c:a2b0:941d]:26248","[5802:df5c:8853:4851:6dcf:cb83:8208:d143]:24568","48.38.104.41:43702","139.100.200.227:62546"]','2024-02-27 12:14:40.452601042+01:00','2025-02-07 13:40:58.764224794+01:00',NULL,'100.64.0.32','fd7a:115c:a1e0::20'); -INSERT INTO nodes VALUES(38,'mkey:6829f2d5523aba7a27c31e59e2341f3feb5a5a72a0268ef10d23c39811ea3b53','nodekey:10e4ff66fdd37fb54f6dcb0e42f483a15b6340610acc3a4517c24f0f350b5a9a','discokey:f20786f9c7edc967d0b415bbcce39468441af3b531ff15656c0683d3f328720a','lt-88','node038',6,'cli',NULL,NULL,'2025-02-07 13:42:43.738264414+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a393:f17f:6728:82e4:c1e7:2367:82d0:daaf]:18560","[d192:9897:4cd7:bfb5:de36:fc03:8cf4:c60d]:25096","212.199.67.213:11150"]','2024-05-22 08:08:16.045350656+02:00','2025-02-07 13:42:43.772029617+01:00',NULL,'100.64.0.33','fd7a:115c:a1e0::21'); -INSERT INTO nodes 
VALUES(42,'mkey:33e764ae40265089c5e7dbfc8571aa23bb390fc7e1e8bd2f3e0cd6891c308214','nodekey:c444c9ff652ba29181a5f7fdf991f91f704e6698237ee18b0649c81879ec932f','discokey:2e1db3a1eea266673acf7826aa44b5aa6fdd23c6ba41330562b37c5a523ba358','laptop-81','node042',14,'cli',NULL,NULL,'2025-02-07 13:37:15.183056803+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["190.179.215.222:61930","45.131.187.230:22916","[510f:af99:ed4d:f53a:fc21:8640:9ad9:4930]:16050"]','2024-07-03 11:12:29.418355657+02:00','2025-02-07 13:37:15.184237123+01:00',NULL,'100.64.0.37','fd7a:115c:a1e0::25'); -INSERT INTO nodes VALUES(43,'mkey:353844c4b332bdbf7f39796c522ee89d7f64fde342d3e566878f456df0d57a54','nodekey:c9f47cb50fd2b9075cda9d68018659491ac29dbbe7a04e8405039fd385966fa7','discokey:ec43638237bc43ab6a9839b3a4d54c2593e11a2ebd13bd491f08f68d7cf93ac0','desktop-90','node043',14,'cli',NULL,NULL,'2025-02-07 13:37:15.256444976+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e071:3cd7:878c:ae2d:7dcf:eab0:d0d1:decf]:37427","17.31.200.70:26794","[183d:f2d6:b80:36c5:318e:64e9:7f4a:b389]:12440","169.173.198.12:15274","[e5df:1889:6ac5:7d21:4dfd:614b:7eb9:d93c]:792"]','2024-07-03 14:48:50.263910778+02:00','2025-02-07 13:37:15.258418656+01:00',NULL,'100.64.0.34','fd7a:115c:a1e0::22'); -INSERT INTO nodes VALUES(44,'mkey:6d3e50168ff6e672ed7650abc5a1de7fec7150c6f84932c705c84964824d1d39','nodekey:ad70f51f99efeefdf0c8e0a9e818db1e5de474e67b2b0a1609a53190339938ef','discokey:c4d4ad19814f9312b454cadb85b333f80cccca1b4a42f30780f7a0f4ccfe1bff','laptop-97','node044',14,'cli',NULL,NULL,'2025-02-07 13:37:15.264418779+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b501:efaa:c73b:b492:c3cf:865:d55b:9557]:26172","162.210.141.106:52036","[84ef:a164:d1c:d20d:797c:753e:f8fb:3108]:5767","[c15a:e322:a721:c9a7:a662:ccb8:26b8:e1c4]:63192","[87c4:c36b:d96f:525b:fa4c:c5f6:7bc8:d6bb]:35894"]','2024-07-03 15:23:48.066044194+02:00','2025-02-07 13:37:15.265313019+01:00',NULL,'100.64.0.35','fd7a:115c:a1e0::23'); -INSERT INTO nodes VALUES(45,'mkey:e1772e6b138211afff6aa8fee08438c311b1d9d32576075c980d39591b13ae77','nodekey:92b8cc28bff0c53f95fbdd8c1363040100ae849e78aea04a3755e9a1fd03fe4e','discokey:7b368b5d64b6e4fb4f0ba9e5809bee8b769949ecdc69ec6ee6c9f12c2321fced','email-94','node045',14,'cli',NULL,NULL,'2025-02-07 13:37:14.65277645+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[caa0:946e:b130:39aa:31b0:8f9:7526:4960]:1729","120.102.247.255:41936","28.177.95.32:7841","[fb23:5998:c63d:b6c4:e86a:96b2:61c:e110]:51902","[89d4:f6c6:ccf6:b19d:3b76:642e:7a36:d229]:930"]','2024-07-03 15:54:01.706018896+02:00','2025-02-07 13:37:14.741317691+01:00',NULL,'100.64.0.36','fd7a:115c:a1e0::24'); -INSERT INTO nodes VALUES(46,'mkey:af6c1d228b675ff04d2584e2c23d6288ec312a6b44c5c0eed213423a11711787','nodekey:f5b21f25bb22b5cf0cedb1b643d50c3a6512ef300cc971eccccbcc500cc23408','discokey:72843a1b2564c177d8556eb0d76dda7cbe0d9f1408bd250fcc6149518eb6d5c0','desktop-41','node046',14,'cli',NULL,NULL,'2025-02-07 13:37:15.182378567+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[37a8:9998:e94e:e297:940e:5414:5549:87c7]:37130","27.138.122.133:37304","98.38.29.194:12660"]','2024-07-03 19:38:07.783745318+02:00','2025-02-07 13:37:15.183590138+01:00',NULL,'100.64.0.38','fd7a:115c:a1e0::26'); -INSERT INTO nodes 
VALUES(47,'mkey:5bbb77092f21b4835b0ae1af3871fc48445819c9b6e5ff9c46ad4a623b662ae8','nodekey:191c7cc089ee160c07b63e2b6b5ef0701d76d914009420baddad9bfce13e3f2d','discokey:712cadffd23f106abeea82ae70bbae8e06c2a58f3db617afd0d149b3012c64fb','web-61','node047',14,'cli',NULL,NULL,'2025-02-07 13:37:15.16567498+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["186.118.23.242:32373","[5bd8:a5a8:1b3:5e41:3c4d:5fc:4c0:c6cc]:24847"]','2024-07-04 10:38:08.344092869+02:00','2025-02-07 13:37:15.167818666+01:00',NULL,'100.64.0.39','fd7a:115c:a1e0::27'); -INSERT INTO nodes VALUES(48,'mkey:a1e793aa09c20360cb4ef57e7dc42171a0abbc9f562140609e930ba5b7b12e83','nodekey:3343520915898c9d90b7517eabf8660d66c37ed12aea0b5a4935458f95c801c9','discokey:6a80e23d10d2cb3edb58c9c47ee997f9063d6aed2f29049f635b6bf1da090885','web-24','node048',15,'cli','null',NULL,'2024-10-20 13:53:33.831192385+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["101.103.129.11:50383","[3aab:52a2:b9d3:1816:5627:336:6c60:57d7]:13486","22.83.180.41:14147","26.126.135.206:7777","[8706:d109:5006:f832:dca7:58e4:4451:be15]:15180","177.4.27.29:60895"]','2024-07-26 08:09:56.608302315+02:00','2024-10-20 13:53:33.831387627+02:00',NULL,'100.64.0.40','fd7a:115c:a1e0::28'); -INSERT INTO nodes VALUES(49,'mkey:13b8d4cafbdb8e9ab77928219aa57bf778878866ad8e6108b60aaab8dc8cec36','nodekey:17cefd68f8a9cc20b3d4fd93598f77ddfd6bd5a7e6e4b49861f9faad8623e0ef','discokey:939a04ddd2f9a24468e07ee7924dbb7a7662c1e8a1b67426f57fd48c7394448b','desktop-40','node049',16,'cli','null',NULL,'2024-09-19 09:07:18.28136023+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[19b0:8be2:708:d27f:4d3d:3c93:9979:e9e8]:50438","129.0.5.112:19015","[91e2:54c6:93fb:ec32:6ab4:186d:81cb:b815]:37323"]','2024-08-05 17:32:41.937626584+02:00','2024-09-19 09:07:18.281618912+02:00',NULL,'100.64.0.41','fd7a:115c:a1e0::29'); -INSERT INTO nodes VALUES(50,'mkey:03979d5446bb3f596bf0234d81ca84482eacb132e874ceda6cc2703422cd728c','nodekey:805c03cada525f14ee90d28ea02b23a3f57b470c793314872ad9daacf6c6484e','discokey:eb4135e1cc73a3c6b513d8cc04ca0c55a2aef7b0009845fa1d2036488aadf0f8','desktop-02','node050',10,'cli','null',NULL,'2024-08-07 10:10:06.595550455+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9175:498d:64ac:4342:2eac:ffa1:99fd:d109]:21202","[5cba:56ef:db08:320c:333a:541f:605c:44d5]:57469","150.211.2.236:50360","203.180.122.83:13344","63.162.234.32:11653","[6b63:f552:f930:e22c:fc42:3b41:c5c6:6d5f]:63162","114.48.12.185:47837","[2964:cc5d:3a1b:7889:acc8:4d8:22c1:398a]:43515"]','2024-08-07 11:50:54.144157179+02:00','2024-09-18 16:15:14.050033969+02:00',NULL,'100.64.0.42','fd7a:115c:a1e0::2a'); -INSERT INTO nodes VALUES(51,'mkey:5877139c52c0bf3c0fd054bd046aaf4530dd8946548a7bc1a06733649d1557d8','nodekey:553d9638c8c455e8d1bc811ea3af03db1eb3c586413447c8003e7d905a61d06c','discokey:1da883c9eff402d786362aaac2f478c835308b3acec0993e6293becf4474609f','web-62','node051',14,'cli',NULL,NULL,'2025-02-07 13:37:38.925095255+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["86.215.174.228:26026","93.101.159.155:64005"]','2024-08-07 14:19:31.156780417+02:00','2025-02-07 13:37:38.92600407+01:00',NULL,'100.64.0.43','fd7a:115c:a1e0::2b'); -INSERT INTO nodes VALUES(52,'mkey:0e64072dafa5f9e49266b41fc4b21ff7d287be8f4e5f2b6c395487d632f1cf3f','nodekey:2f77bb8cbeb426bcded5ad9fc6da44b0b76b8d6759f10a8db28e634290760ee7','discokey:836558738056edd1fa8d190c7e42f120b365a48a3b1f1c96c87755bb82d7ad0f','desktop-07','node052',17,'cli','null',NULL,'2024-12-25 17:27:58.515851096+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[d889:f751:710:257f:977f:5597:9908:14e0]:61162","[f83a:5ebb:911d:ef36:743c:8086:8a4c:a5c7]:23538","[dc45:365c:e426:71d7:fe23:fdd4:d2c2:9409]:57262"]','2024-09-22 15:48:41.385301399+02:00','2024-12-25 17:27:58.517153789+01:00',NULL,'100.64.0.45','fd7a:115c:a1e0::2d'); -INSERT INTO nodes VALUES(53,'mkey:a6f93e99a5019b3b6a2941d95dc7fcbfea5a59317a48b99db93b7a3f4f0a65bb','nodekey:1f78a6d76ea4a2135c0da46b202560c86ed30cd7e8239d4508088d0e02ffb961','discokey:1744062452f052aba5320e37fc08aaa37f1430db7c32c94c5626f0f3c858de3d','desktop-82','node053',17,'cli',NULL,NULL,'2025-02-07 13:37:13.842388728+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[6084:ee60:1697:864e:4a61:cf6e:e9b6:4c05]:63961","[fad7:2601:d286:4772:3cce:6e89:f66e:9eeb]:5430","216.60.124.11:45041","[d513:c472:d7a:f316:f610:10d0:9851:4feb]:3955","156.228.105.157:61542","102.126.185.0:50694","[156b:e934:d171:2693:8db4:f193:a58c:17b6]:24190","79.255.179.99:56057"]','2024-10-28 10:04:50.084492941+01:00','2025-02-07 13:37:14.011393166+01:00',NULL,'100.64.0.44','fd7a:115c:a1e0::2c'); -INSERT INTO nodes VALUES(54,'mkey:4eee40f0117b3b8580f4cf3ec7be063bb4d82173cfa3c015c8932431940d9c92','nodekey:4241ffb9768a0e31f38cc0c5b2f240c1de72b8810e7ff3705964316e63c1a055','discokey:3ad8f2ebea95d15109a0adfc69b3a6b566ce02ce82df7a32e68f278538bc05a2','lt-15','node054',14,'cli',NULL,NULL,'2025-02-07 13:37:15.059804316+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9fe:6ddd:ab0:1193:7bb7:4a4b:c036:b910]:63516","158.169.112.249:5480"]','2024-12-09 17:10:55.363593066+01:00','2025-02-07 13:37:15.113575346+01:00',NULL,'100.64.0.46','fd7a:115c:a1e0::2e'); -INSERT INTO nodes VALUES(55,'mkey:22e4ac4e25b05db4e94b5211e08ed14d983972f92e192f15e0231ff355fd46bf','nodekey:225f907deaffccb674416dc4ba532f0b20304e86d57808eef2d96f15211f2309','discokey:e695b849380f698178e7457311dca3cc10ae13a79b0a71d22130e7c401c4f246','web-26','node055',18,'cli',NULL,NULL,'2025-02-07 13:37:15.324226266+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[ecbc:a9b6:cbb7:695b:62ad:27cb:36f0:2380]:27992","135.238.213.41:64084","[a5d2:373b:9599:8abc:6e7:2efe:f921:3a09]:56107"]','2024-12-10 13:56:39.287449662+01:00','2025-02-07 13:37:15.355747009+01:00',NULL,'100.64.0.47','fd7a:115c:a1e0::2f'); -INSERT INTO nodes VALUES(56,'mkey:6b8c199041c26e52d594b6e9be9d4d9bb695d32e9fe247c84314b6809b8f38fc','nodekey:26b392e9f02b3910c5974f133282119f99e67ed084b4327aa818333e04010d40','discokey:8312bc333f55d0d6d7fc3e28dcb6300364c3008dff0cc64d2b480d5cba8260f4','web-69','node056',19,'cli',NULL,NULL,'2025-02-07 13:42:18.972640971+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[5437:6f4:94c1:c6c1:43ce:7a41:e5c2:65d0]:57161","[7af0:3b83:a2ec:716d:bc46:deca:1685:e863]:26528"]','2024-12-17 14:58:39.429211911+01:00','2025-02-07 13:42:18.973110667+01:00',NULL,'100.64.0.48','fd7a:115c:a1e0::30'); -INSERT INTO nodes VALUES(57,'mkey:edabd08e316b276318382fb96ca0ed40cf36f64c0a0de538295c0a69b0035243','nodekey:6576807f6b92bca841c85256b56daef25ab7113770f1db8ebc903d8b9de41308','discokey:a42e3f0759628634516ca1b4d4fa87450026af9101040401a23532a48ee07c76','laptop-31','node057',20,'cli',NULL,NULL,'2025-02-07 13:42:22.378213299+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[229:a09b:efd3:c73e:b885:d236:7dcd:20c9]:33744","[2de9:ec02:ffc4:1715:4d36:94f9:d726:fb8b]:55312","[72a1:a0b2:624a:827d:7d29:bd5a:3398:dbe4]:56924","[9875:c11c:289c:3b14:dc40:d7af:6:8372]:51751","125.170.153.154:21718","68.3.248.154:42390","[f621:b605:bd04:a964:160c:314a:141a:2e8d]:58563"]','2024-12-17 
15:17:14.26936913+01:00','2025-02-07 13:42:22.378739434+01:00',NULL,'100.64.0.49','fd7a:115c:a1e0::31'); -INSERT INTO nodes VALUES(58,'mkey:06bd440cb68a2856dcab4ae09ba4c4e7969b78a99a1d06bbacebc9d1fba037b7','nodekey:c0a2b88f5f4b9aee6a50df28b6ff5e5c495053c26d969100ef23f13a51349bec','discokey:4023ff7bfd754262fe36745731a9d1175cc0b49d7acac56123cfbf0c633fa0c6','db-32','node058',12,'cli',NULL,NULL,'2025-01-25 18:41:03.881898904+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["50.133.88.187:24107","[ed61:9d51:b21f:7096:32fa:e473:f6d4:9f69]:17106","110.235.245.179:61572"]','2025-01-17 10:17:23.455895657+01:00','2025-01-25 18:41:03.882180987+01:00',NULL,'100.64.0.50','fd7a:115c:a1e0::32'); -INSERT INTO nodes VALUES(59,'mkey:837b21b2de65f931fa922fbc8fae0552beb9e5d5f3f089caa3b23e05e21afc9c','nodekey:2966f862ddc948ac6bec836f818b6af9717a5972128c20cadb0756dca94421a1','discokey:81f0e2ccd5b3292e6129c66588862e399a2e2685592f7e8b9e45254932532bf7','desktop-45','node059',21,'cli',NULL,NULL,'2025-02-07 13:37:15.162862529+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["121.80.178.149:12322","148.31.205.9:26021","51.210.182.192:52299","[2e30:4d2d:4925:201f:e845:2a12:503c:9524]:34552"]','2025-01-29 11:59:27.291048957+01:00','2025-02-07 13:37:15.166959448+01:00',NULL,'100.64.0.54','fd7a:115c:a1e0::36'); -INSERT INTO nodes VALUES(60,'mkey:a8258e2d7302528bdad3321e5251beac7b920c0fd2be9a31ac541ceba25f868f','nodekey:f6dcffb992bfd5f22fd31752d1f450d79e68fd9d3a5d8c6002846d7c7e7ca780','discokey:5ce092b58afaaeac550606cb9ca49b9a82d8ec71c5bba6cea2727f3a6c51ed58','desktop-83','node060',21,'cli',NULL,NULL,'2025-02-07 13:37:14.448391309+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9bc9:59d8:52be:2e78:c641:a523:6bef:7101]:9124","103.8.213.76:348","[ac:5c65:da8d:d68c:eb0d:3692:d441:5363]:63612","107.250.166.141:64587"]','2025-01-29 12:01:57.48748166+01:00','2025-02-07 13:37:14.48001932+01:00',NULL,'100.64.0.55','fd7a:115c:a1e0::37'); -INSERT INTO nodes VALUES(61,'mkey:274b3c292aff1b069c84dcb1708adbf93c479a88eff35023edd1d42ff9d2f3a1','nodekey:ff576e5a5d91943d1b95cc0bd0d02edf9ce965b04f4d55cf44cbc328a46e81e4','discokey:da967cf6cf6122fb4c3381342341fe2013bbb02592871a6a4875c0f2ed088497','web-95','node061',21,'cli',NULL,NULL,'2025-02-07 13:37:18.210665072+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["97.178.200.54:28664","112.239.138.73:43954","174.168.204.29:5611","190.0.61.36:13730"]','2025-01-29 12:03:01.464646336+01:00','2025-02-07 13:37:18.211304165+01:00',NULL,'100.64.0.56','fd7a:115c:a1e0::38'); -INSERT INTO nodes VALUES(62,'mkey:55fead8ba87f06051dbc7eb41fc9263832e89a23587dc5161f4caa73e52b86c2','nodekey:ce19e593db9037b1ca155c2eee3628bdadf18183e7bb2c2e8452c490a4a70fc6','discokey:3fe7586ae2c3942956fa5929c628519dd776326bd41941bf9c4ca67c20b39d39','web-47','node062',21,'cli',NULL,NULL,'2025-02-07 13:37:15.119660489+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2476:14e8:ab74:a221:cfca:bd5e:64aa:83d1]:48008","203.192.235.192:35390","30.40.23.45:22912"]','2025-01-29 19:23:14.092804852+01:00','2025-02-07 13:37:15.125730738+01:00',NULL,'100.64.0.57','fd7a:115c:a1e0::39'); -INSERT INTO nodes VALUES(63,'mkey:375546b6250c3cb141b036b348e902de244a8572dd7572422e95e36a6f6e4db8','nodekey:4fc64d645299720f14481c889ecaa1f08e48a3b83f19ece638b7d4d9d00bf75c','discokey:404b99d3c51100cd92dcf7caaee677b9c3f5371070504f0848759bf792ce1ed3','lt-21','node063',21,'cli',NULL,NULL,'2025-02-07 13:44:32.21661957+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[68bb:e8d1:72d8:8b02:608b:2284:c9b3:184d]:20876","[5c17:4014:ea6a:5528:b1a:61b7:5cda:5c5e]:21597","22.244.181.59:21123"]','2025-01-29 19:41:40.535299057+01:00','2025-02-07 13:44:32.217372195+01:00',NULL,'100.64.0.58','fd7a:115c:a1e0::3a'); -INSERT INTO nodes VALUES(64,'mkey:ea810305395687d011a783dcd058cf511363eb4d5638eeeb6886b7f5e690482e','nodekey:c5b9b10d988bc237815f7c98830abfb86981972949898f2ce9a1451a7bc617bc','discokey:1e0c9544833ff8f89685b81f2795c330500d8d414f856e0e9f4ec439e3c76dad','web-27','node064',21,'cli',NULL,NULL,'2025-02-07 13:40:53.575919211+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2fb6:ea0a:3639:a1cd:9075:a258:6a79:4f5c]:13944","[2384:3000:4d47:a9e7:54ef:99:52c0:e920]:39358","130.195.116.24:53838"]','2025-01-30 18:18:57.519126133+01:00','2025-02-07 13:40:53.576418189+01:00',NULL,'100.64.0.59','fd7a:115c:a1e0::3b'); -INSERT INTO nodes VALUES(65,'mkey:fe494a29de4149025e3c41744cabfaaa2ae9a95795bdb6425e3beee27048e0f6','nodekey:2b078ccb547dc8b01f19749735f5fc43e65e4d468357a08a880fa54dc0a1e00d','discokey:cb991d6c0a14b0213e6f807e9c37a837b181c889a1e2d25e10d65d4a7e8b2b05','laptop-33','node065',21,'cli',NULL,NULL,'2025-02-07 13:44:09.41583007+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["54.154.28.86:64790","197.205.147.181:38523","[5e11:9ab9:6805:dec0:99aa:65e6:d1fe:6e58]:45247"]','2025-01-30 18:19:40.354692307+01:00','2025-02-07 13:44:09.416328169+01:00',NULL,'100.64.0.60','fd7a:115c:a1e0::3c'); -INSERT INTO nodes VALUES(66,'mkey:6c97e6102fbb96b98090402e9414b0d8e2e5ded0aa8cded7ebae64da5c2518d9','nodekey:f2d5b51602df1f8c57ab59973c100919b7f84bdc09386d43ad49fa30d6595e38','discokey:15d3ca06419e377b06e2e0a3b36d53f5b448087f9609c73a128429e17c88102c','lt-62','node066',21,'cli',NULL,NULL,'2025-02-07 13:41:35.878248155+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["6.193.99.72:20874","181.248.237.50:21695","171.59.96.217:25726"]','2025-01-31 12:05:28.65297301+01:00','2025-02-07 13:41:35.879388156+01:00',NULL,'100.64.0.61','fd7a:115c:a1e0::3d'); -INSERT INTO nodes VALUES(67,'mkey:eb72ad359958c98e28f93485b720b6edef80d1bc33c4f99d772fc17c766039e9','nodekey:9122bd587ae90984e3231e0f48e15aa8abcfbc1b3324becd1f3622bbbe534da1','discokey:7610d45f8d2d3d4d7774c43288a6d50cd798a832732613647caa15e96621a3e3','laptop-93','node067',21,'cli',NULL,NULL,'2025-02-07 13:42:35.535333727+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[3022:fc47:6534:7fe3:9d1e:8d49:75b:984b]:10998","60.241.100.13:15037","[9356:40f8:a98c:a1a9:18e6:e781:f169:4277]:63375"]','2025-01-31 12:06:30.121464114+01:00','2025-02-07 13:42:35.535828442+01:00',NULL,'100.64.0.62','fd7a:115c:a1e0::3e'); -INSERT INTO nodes VALUES(68,'mkey:b1f44437d8d3550b23e7d3a79e6fb1c2649a48c2566d95dc308b78e56a1ce2f5','nodekey:17d7b53b408bcc2459fce9db6e689099b7a99de8ca26be948f2a0eebf71970c7','discokey:2af8a6603509410d88c13a6d39763e0109e8f422f27fc8da08d2e256dd7569b3','desktop-87','node068',22,'cli',NULL,NULL,'2025-02-07 13:37:14.673907509+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[8ac8:a469:4cc:90b5:c84b:f4f:5cb:49e1]:35646","[d152:3a18:3826:ee28:a323:74d6:7140:ca5]:40749"]','2025-02-03 14:16:55.56431345+01:00','2025-02-07 13:37:15.115578974+01:00',NULL,'100.64.0.51','fd7a:115c:a1e0::33'); -INSERT INTO nodes 
VALUES(69,'mkey:0a0aec355a4e4e478a9e73ac6141c7882ab29d1d14da893ba7442acc11bf6cbe','nodekey:5c3cbe65b07b0676fdc9eef42acb6f0a56196a20fa82d34429b6adc878aa4d0e','discokey:f813981e0cdb97880468aa23f9745b4b3ca367e3e9eac72c6631b8e2530a822b','desktop-75','node069',22,'cli',NULL,NULL,'2025-02-07 13:30:51.951476674+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[443d:d8da:884:82e4:de49:13e:8449:d06d]:43135","152.191.25.57:54747"]','2025-02-03 15:23:16.312084161+01:00','2025-02-07 13:30:51.951819286+01:00',NULL,'100.64.0.52','fd7a:115c:a1e0::34'); -INSERT INTO nodes VALUES(70,'mkey:fa142bd90a16d891af17e0732fc3060af64e41d0ad7f0024afacbd941a538f88','nodekey:2ad7938dba6ec89f4d73dd9d8558d15efd3ef3b5b752a90e40a064835c073fc2','discokey:31457c9296771d81dd053d962b86845adc3ab09ae53f9ed42be57037a0ea2368','email-13','node070',21,'cli',NULL,NULL,'2025-02-07 13:37:15.228318848+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["7.34.191.227:52597","200.133.152.206:14119","28.101.69.72:35202"]','2025-02-03 18:09:35.161109801+01:00','2025-02-07 13:37:15.245827564+01:00',NULL,'100.64.0.53','fd7a:115c:a1e0::35'); -INSERT INTO nodes VALUES(71,'mkey:fc9d5a1c4658644beeb3130cf62b19042698652298ef36899d41f588bec371ab','nodekey:d44ef585e416773d29e627cf9a6e590af9b676d23a0a816865d05d775517ab4b','discokey:0c87d487f7a4f4ec5e045ff0c83c1b6c54453f7afe0e5f13c7ce90746e18da49','lt-48','node071',21,'cli',NULL,NULL,'2025-02-07 13:37:36.853568516+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["126.19.171.233:19526","[91e1:a8d7:5c43:b695:4f5:b147:b5f7:9f75]:62218"]','2025-02-04 12:03:50.32663805+01:00','2025-02-07 13:37:36.85423361+01:00',NULL,'100.64.0.63','fd7a:115c:a1e0::3f'); -INSERT INTO nodes VALUES(72,'mkey:a46b92cea88a9bac37dcb612a4a50dee75684f736c8c7f2aab812fa5fb9d463f','nodekey:1bbfdddf7b2a929a3da1e69236aa8b8fbd4b25d1839e57aaecf2ab910592bc78','discokey:c2188df6632aded7862de4b5d07c3993b3e908c59ce38f9d481f4840d85448c7','srv-06','node072',21,'cli',NULL,NULL,'2025-02-07 13:37:15.252280835+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[99f:9853:60cd:a4c5:e9d7:2a40:f86:742b]:782","[99cd:c956:e0f9:b633:5f8b:5c6a:50b1:571]:18474"]','2025-02-04 12:07:36.231437299+01:00','2025-02-07 13:37:15.256879188+01:00',NULL,'100.64.0.64','fd7a:115c:a1e0::40'); -INSERT INTO nodes VALUES(73,'mkey:2c018031737bb172064fe31674acafe1b8b4d62d2b82db835e2c38061c5bfdbb','nodekey:575f781bb36f7c93cde4d9808e3dedef2c8999864ea1d4e5ee73fdd3d53ac217','discokey:00e3275941e7a0a73a1e5c44cea638a411c60bb2e257812e0f39152ec0c5a21b','lt-01','node073',21,'cli',NULL,NULL,'2025-02-07 13:37:46.354359607+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["56.80.126.49:13907","95.196.223.110:47343"]','2025-02-04 12:10:26.50545127+01:00','2025-02-07 13:37:46.354928354+01:00',NULL,'100.64.0.65','fd7a:115c:a1e0::41'); -INSERT INTO nodes VALUES(74,'mkey:a5a8194cef984a1cc2081212405c10c8cc1061d3d00e6aa7e1415c854e3ded72','nodekey:ca37a131e58c14ebec3a6e179549d7c8742a31b42ed0e6e52aed138ecc176907','discokey:8a4990c19c547da50875ad00613118d5382717652b4ef9897a1d1c060c40a976','laptop-71','node074',21,'cli',NULL,NULL,'2025-02-07 13:37:15.235404491+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["114.141.127.86:11858","[d05b:3c4f:8402:823f:409f:87b2:4dab:4959]:58420"]','2025-02-06 17:33:12.557302525+01:00','2025-02-07 13:37:15.238509319+01:00',NULL,'100.64.0.67','fd7a:115c:a1e0::43'); -INSERT INTO nodes 
VALUES(75,'mkey:fb413a3831a1db1639197bf6eb887a913ff044346d9fc07eeadc50aa96ae196f','nodekey:763f78cdd841a2859c2a7e1b00129ed1a527e97afc193da93d864263fb048c0e','discokey:1c658f44f2e30f60cd35d6d730625710bb86ca1873f0cc55f6a08f0fc62bd814','srv-20','node075',9,'cli',NULL,NULL,'2025-02-07 13:37:39.952195811+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["147.221.39.105:29051","146.211.135.172:35153"]','2025-02-06 18:23:55.709687186+01:00','2025-02-07 13:37:39.95290257+01:00',NULL,'100.64.0.66','fd7a:115c:a1e0::42'); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "routes" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -INSERT INTO routes VALUES(1,'2023-05-19 07:09:23.387641743+02:00','2023-05-22 09:48:18.908103256+02:00',NULL,3,'192.168.224.0/21',1,0,0); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2023-05-17 19:36:55.859473496+02:00','2023-05-17 19:36:55.859473496+02:00',NULL,'user001','','',NULL,NULL,''); -INSERT INTO users VALUES(2,'2023-05-17 19:36:57.059073465+02:00','2023-05-17 19:36:57.059073465+02:00',NULL,'user002','','',NULL,NULL,''); -INSERT INTO users VALUES(3,'2023-05-18 10:10:36.248939077+02:00','2023-05-18 10:10:36.248939077+02:00',NULL,'user003','','',NULL,NULL,''); -INSERT INTO users VALUES(4,'2023-06-10 09:06:13.920718561+02:00','2023-06-10 09:06:13.920718561+02:00',NULL,'user004','','',NULL,NULL,''); -INSERT INTO users VALUES(5,'2023-06-11 19:58:32.371218434+02:00','2023-06-11 19:58:32.371218434+02:00',NULL,'user005','','',NULL,NULL,''); -INSERT INTO users VALUES(6,'2023-06-17 19:39:53.031565686+02:00','2023-06-17 19:39:53.031565686+02:00',NULL,'user006','','',NULL,NULL,''); -INSERT INTO users VALUES(7,'2023-06-20 11:35:09.325846831+02:00','2023-06-20 11:35:09.325846831+02:00',NULL,'user007','','',NULL,NULL,''); -INSERT INTO users VALUES(8,'2023-06-21 22:47:48.196234382+02:00','2023-06-21 22:47:48.196234382+02:00',NULL,'user008','','',NULL,NULL,''); -INSERT INTO users VALUES(9,'2023-06-22 08:30:35.068995572+02:00','2023-06-22 08:30:35.068995572+02:00',NULL,'user009','','',NULL,NULL,''); -INSERT INTO users VALUES(10,'2023-07-03 10:18:32.123226+02:00','2023-07-03 10:18:32.123226+02:00',NULL,'user010','','',NULL,NULL,''); -INSERT INTO users VALUES(11,'2023-07-03 10:18:37.130387602+02:00','2023-07-03 10:18:37.130387602+02:00',NULL,'user011','','',NULL,NULL,''); -INSERT INTO users VALUES(12,'2023-12-15 08:05:06.013615212+01:00','2023-12-15 08:05:06.013615212+01:00',NULL,'user012','','',NULL,NULL,''); -INSERT INTO users VALUES(13,'2024-02-03 16:32:42.224977233+01:00','2024-02-03 16:32:42.224977233+01:00',NULL,'user013','','',NULL,NULL,''); -INSERT INTO users VALUES(14,'2024-05-03 10:12:38.220973042+02:00','2024-05-03 10:12:38.220973042+02:00',NULL,'user014','','',NULL,NULL,''); -INSERT INTO users VALUES(15,'2024-07-26 08:08:40.979783263+02:00','2024-07-26 08:08:40.979783263+02:00',NULL,'user015','','',NULL,NULL,''); -INSERT INTO users VALUES(16,'2024-08-05 
17:32:02.878091894+02:00','2024-08-05 17:32:02.878091894+02:00',NULL,'user016','','',NULL,NULL,''); -INSERT INTO users VALUES(17,'2024-09-22 15:48:00.287392203+02:00','2024-09-22 15:48:00.287392203+02:00',NULL,'user017','','',NULL,NULL,''); -INSERT INTO users VALUES(18,'2024-12-10 13:55:11.256977421+01:00','2024-12-10 13:55:11.256977421+01:00',NULL,'user018','','',NULL,NULL,''); -INSERT INTO users VALUES(19,'2024-12-17 14:57:58.550971236+01:00','2024-12-17 14:57:58.550971236+01:00',NULL,'user019','','',NULL,NULL,''); -INSERT INTO users VALUES(20,'2024-12-17 15:02:08.053169491+01:00','2024-12-17 15:02:08.053169491+01:00',NULL,'user020','','',NULL,NULL,''); -INSERT INTO users VALUES(21,'2025-01-28 15:57:32.774456057+01:00','2025-02-06 17:43:41.935399542+01:00',NULL,'user021','','',NULL,'',''); -INSERT INTO users VALUES(22,'2025-02-03 14:10:50.491924701+01:00','2025-02-03 14:10:50.491924701+01:00',NULL,'user022','','',NULL,'',''); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.25.0.sql b/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.25.0.sql deleted file mode 100644 index a30beb4d..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.25.0.sql +++ /dev/null @@ -1,124 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -INSERT INTO migrations VALUES('202501311657'); -INSERT INTO migrations VALUES('202502070949'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'463a8b372963aeaca12400faa0c7ea29e9bfa3b4c59e9622',3,0,0,1,'2023-05-19 05:09:19.66636462+00:00','2023-05-19 05:14:19.664224869+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(2,'77a019cb12c0d0b9347a17ab23fa4c87983814fe36bb2fbb',14,0,0,0,'2024-05-03 08:13:55.8614948+00:00','2024-05-04 08:13:55.85782156+00:00',NULL); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`ipv4` text,`ipv6` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT 
`fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes VALUES(1,'mkey:0ea01e8cd608548a171ebcedc16da73b25ec18d58861be7a6b4274eb17baf6ba','nodekey:0fcf591dc1997c1d7388a3835bd24dae48240e5b03b6a7f5f33b2e813d222bcf','discokey:ed27f0b3c04222f0d05bfbc6fb5787338275ed0e32fd42fe68c669aca0f3d7a4','desktop-54','node001',2,'cli',NULL,NULL,'2025-02-21 16:47:31.967030967+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b8f0:cf44:22c8:82d1:f79:c3eb:9144:c24f]:6142","[8e25:af1:e31a:8fcc:7dcf:be1:a3c9:ed6a]:38766","[37a1:215f:bf3d:3fb1:6399:f7cb:4048:3ca8]:51666","137.242.125.72:25376","86.84.184.63:26062","[c187:a8:1181:8321:28ab:fe72:b0b8:8ec6]:41236"]','2023-05-17 19:38:13.531518257+02:00','2025-02-21 16:47:31.967134366+01:00',NULL,'100.64.0.1','fd7a:115c:a1e0::1'); -INSERT INTO nodes VALUES(2,'mkey:77e0289c8c37caae9d3d7b6c851db0beb9e47fc8854dbe2d5bb0d7a33d377c56','nodekey:7b550e852203d47984bce22eaca3a6a738e711869c79932b087b848cf33c8065','discokey:d1a6b93cedd0e850997eeb0ff84be2d169d15b8b29192dffbf49dea812d50874','web-87','node002',1,'cli',NULL,NULL,'2025-02-25 18:07:33.299494262+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["149.96.226.131:35537","84.23.188.83:48876","44.184.117.89:36643","90.223.136.253:7056","[e43c:be7:7f4e:8217:a009:7f9d:a510:1af6]:18872","[3a86:865b:2f4c:f3b7:88a0:4aa2:f05d:c3fd]:47680","86.50.30.30:18216","[fdbe:7908:805e:6c8f:47c2:2c87:b1c4:44]:48470","188.237.187.170:7622"]','2023-05-18 10:09:21.757289398+02:00','2025-02-25 18:07:33.299970499+01:00',NULL,'100.64.0.2','fd7a:115c:a1e0::2'); -INSERT INTO nodes VALUES(3,'mkey:538f91fe9e485697ce13f9444ee0dadf2aaaed712fdfa03ba102113c6505db37','nodekey:1936bdc567d6758b11b2cc06ea7ff74bb933a44421fb17323965d5a402e39783','discokey:e8b762a384d644175f3b4e4fd36867b2c435e03eb0bf6b9e7ebd4f0ce83c13ef','lt-57','node003',3,'authkey','[]',1,'2025-02-14 06:23:29.821940467+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f6d5:e904:28e7:a5a4:12a3:96d8:9391:b22c]:37336","113.59.7.205:48432"]','2023-05-19 07:09:21.399903526+02:00','2025-02-14 06:23:29.828817276+01:00',NULL,'100.64.0.3','fd7a:115c:a1e0::3'); -INSERT INTO nodes VALUES(4,'mkey:12afa721b0310fd9cf7501a19e5f23a2d33c87f989a9b0233bc09e8eb9e5eaa2','nodekey:83e8f99465cfd0ef79190bfb341bb742b39d9ddd7cfeab237153b58858a2fed2','discokey:d4fdc314c7b73e21589efef171f10158379ade1adbdc281482a62c6c20fc2ed8','lt-50','node004',4,'cli',NULL,NULL,'2025-02-25 18:57:43.626901249+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[8105:c5c2:647f:e97e:85b7:577b:20b0:e4f]:16947","1.79.159.39:34092"]','2023-06-10 09:31:51.940506933+02:00','2025-02-25 18:57:43.62738857+01:00',NULL,'100.64.0.4','fd7a:115c:a1e0::4'); -INSERT INTO nodes VALUES(5,'mkey:517d332699122618020754ebd6b7aa342136392cbb3186a181c32bad87cfa666','nodekey:ab36644fc8e56ce3816124f4e51be0a2e74505b0b6669ec7dac0001ec3774bf5','discokey:f1413ef26f1c11540e6d0b3f8ae7124e8b2eeffc9a330016cab23036f5b4fcf6','srv-15','node005',4,'cli',NULL,NULL,'2025-02-25 18:54:08.745255864+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["66.84.158.199:63451","47.94.115.240:6677","19.74.30.92:1284","213.110.62.40:49373","222.60.90.114:9445"]','2023-06-11 13:56:42.694329408+02:00','2025-02-25 18:54:08.745544477+01:00',NULL,'100.64.0.5','fd7a:115c:a1e0::5'); -INSERT INTO nodes 
VALUES(6,'mkey:cc260b25fa19bf6d04469bdd96eab67578ec80bb0f2c4e3c4e188050951f8ff4','nodekey:d2730f52d9efa38cc46e37ba658c8bc6df42a4dac4b4fa3a9cc7f284cc1a17a0','discokey:665b19ab4efaa1bdbe9cf27d21a57be739c2713ac442f6342cfc988e410d66e3','web-56','node006',4,'cli',NULL,NULL,'2025-02-25 18:56:58.143735821+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[fece:76ed:4c23:1e89:f225:97a1:94be:b5c4]:33472","[e1e6:a14:535a:668b:698:f125:543f:b5b7]:31740","[b8e3:18de:5ece:3e24:8bbf:1cbd:d27c:7de]:53522","[e65d:59b5:f599:8596:2bc9:56b7:820a:8c12]:12277","[8ddd:4aae:2b94:4c1a:d05f:65d2:2bdb:b400]:42839"]','2023-06-11 13:57:44.975695604+02:00','2025-02-25 18:56:58.144115128+01:00',NULL,'100.64.0.6','fd7a:115c:a1e0::6'); -INSERT INTO nodes VALUES(7,'mkey:b1d6228a1bad37f4e8491772fe83a3e7e4832fb93f8a7aeef1fe97a1a1ec6ee0','nodekey:f352383d311b4f323d0a5ae82c06ac24230a6a98671f00b09a2300c6d7413b36','discokey:92304c75e1c7d328f4445ac6f278f66244f11b66ac6e89060b498b0fe122f096','desktop-85','node007',4,'cli',NULL,NULL,'2025-02-25 18:57:34.843267809+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2e31:d5e9:a2fb:d204:f840:3746:7248:ba38]:62726","[cbb6:53d4:54ce:cd07:cef3:87ea:7ebe:e5a3]:12262","[bb94:d03c:901b:a7d6:d7d:955d:7df4:1c2a]:24266","[489b:6c59:1877:6215:ddc8:57f8:78dc:321c]:3025","77.213.161.20:47662","[adea:86d3:b312:f445:d835:6023:f1f:f928]:27483"]','2023-06-11 14:16:56.951313537+02:00','2025-02-25 18:57:34.843605919+01:00',NULL,'100.64.0.7','fd7a:115c:a1e0::7'); -INSERT INTO nodes VALUES(8,'mkey:1ec07ddb5eefc8bf45fc902699a9b656dc6c4be81224aff81ef4b14990f02b76','nodekey:f2ad6282c57e4ac666137b69e68805d137e0d002eb05f6173a76b38965b5a877','discokey:6bcc988157ed256c25d0487285b8bd02379ece8f0292608f80f4e4ccfaeca269','desktop-82','node008',5,'cli',NULL,NULL,'2025-02-25 18:58:08.246285374+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["31.85.36.236:46674","[80ab:9956:c1bc:b242:cd97:6bb:52e3:10fb]:47368","[4ddb:3188:c8f2:6122:e983:bf33:d2d4:74bf]:25781"]','2023-06-11 19:59:30.401970393+02:00','2025-02-25 18:58:08.24660639+01:00',NULL,'100.64.0.8','fd7a:115c:a1e0::8'); -INSERT INTO nodes VALUES(9,'mkey:f436d5d56dca10e95dac89d2d2b10629a7d1888543378d56d6a66c4ed5e8acbd','nodekey:200327faa3d157c1ab859e71a27bfd52642b863352a219146e7fcae245b0b35f','discokey:08d0b384d256d12f19297672b41d62b87174af0b8332602760dd6615f8ceb796','db-78','node009',6,'cli',NULL,NULL,'2025-02-11 14:57:51.798862802+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f78c:8abc:854e:670e:1d14:fb13:d0a0:ad7a]:33635","41.172.94.101:32111"]','2023-06-17 19:40:45.468789461+02:00','2025-02-11 14:57:53.417438918+01:00',NULL,'100.64.0.9','fd7a:115c:a1e0::9'); -INSERT INTO nodes VALUES(10,'mkey:628fa7eba9bc1474eeb62eedf4e6b7e5b970a3acaeb2e020f4de7bd2aad547ef','nodekey:c105625e67654c70809a72beddc9ab34e7da931e4e7b72784c5d2781acbfb9c6','discokey:0d7060f2779a56c357749e61ba7b1cbd03a0d0d8397c9989c54087e1a80b5224','web-49','node010',6,'cli',NULL,NULL,'2025-02-25 18:16:38.447549074+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[69bc:fe73:f6c7:31b2:bc4f:1a95:4676:201d]:4806","[d794:c918:b0e8:6023:d7cd:709:79eb:23f7]:56801","113.147.88.203:61138","22.207.212.228:27547","[2c3e:2260:8c34:275:2287:b721:1bdb:d462]:59540"]','2023-06-20 11:18:35.905417341+02:00','2025-02-25 18:16:38.448089273+01:00',NULL,'100.64.0.10','fd7a:115c:a1e0::a'); -INSERT INTO nodes 
VALUES(11,'mkey:4d2ff8526b10e056a5d073d5640fd28002ca21b561c55407175712a98e37ada7','nodekey:a1835798e4fefbece3000abb9afdd3cb01e5a4eebafc1218ed627cde2175c1f1','discokey:24b8f6da3ef9e024e273918de8869fdfebc45f789d14c4a50a592cc27259f1c1','db-64','node011',7,'cli',NULL,NULL,'2025-02-25 16:12:02.791603536+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f1ba:2aea:9e9e:2802:ebb9:ba04:a564:7998]:52919","[ede2:462a:f99c:24c3:b1b5:8b97:abe2:3ddb]:65141","[dfe7:6f6b:a382:4fc:a3f6:eb9:9ebc:5798]:1474","56.63.78.181:15200"]','2023-06-20 11:35:15.063855316+02:00','2025-02-25 16:12:02.791828881+01:00',NULL,'100.64.0.11','fd7a:115c:a1e0::b'); -INSERT INTO nodes VALUES(12,'mkey:80fb6f55cbac10695c455ecd74fdd59ae5e804ef27a53085450e713eb6afc4ad','nodekey:8b94382fa21050586620d8a5ee69649c9e854b3dfe06f1341e85768a0f6a1185','discokey:27c408f43b0d0857a1897d995513d907107d8e6f04c6a3bc5826d066324f2303','email-92','node012',5,'cli',NULL,NULL,'2025-02-25 18:57:33.682368055+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[743f:8ead:64df:d533:ccdd:f29a:254:97ee]:41637","[f780:17d4:440b:f400:cb10:dc8d:5698:e318]:37997","33.51.84.119:46756"]','2023-06-20 18:22:35.061914624+02:00','2025-02-25 18:57:33.682775689+01:00',NULL,'100.64.0.12','fd7a:115c:a1e0::c'); -INSERT INTO nodes VALUES(18,'mkey:f11332335d9f1f9d4bfc8cf9bf4edc0c958d4e4364b7dcde3c67d847f7d05edb','nodekey:e97d97a2751ef3533818907c9f24acf3270e3824f2294bb9a36d60ca196e161e','discokey:412b0eee00b68721d9169fc81039b9cda6cf57143e4cae8f7c417db15b83411a','web-18','node018',9,'cli',NULL,NULL,'2025-02-25 14:45:50.465939403+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2acf:3f32:1763:5d21:88a:4af:8779:1b1d]:36283","84.177.251.38:32968","[3138:83c6:ae09:a073:7888:71e0:494:3863]:59700","191.156.96.176:5489"]','2023-06-22 08:30:54.08720463+02:00','2025-02-25 14:45:50.466242342+01:00',NULL,'100.64.0.16','fd7a:115c:a1e0::10'); -INSERT INTO nodes VALUES(20,'mkey:157706346bfe3502b6e24a84efa59b1a151be08f5a260ff96c72a734c1da1390','nodekey:139a99de286f52c55bdd61e932827c9d7cb84101a698df3576bdffbeedbf654d','discokey:293abc1a1cf2921e7961f5753cc39097956648bab47879791afb12ea03a4e6dd','lt-42','node020',11,'cli',NULL,NULL,'2025-02-24 19:54:14.700663055+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[ce0f:9f05:96dc:a47d:edcd:1c25:f578:7551]:30363","[cc06:9c56:919a:2341:4ef7:7f28:c8ca:2a5c]:30788","[7b81:52eb:f4e9:e74d:905:79d4:67b1:d359]:13795","81.17.18.155:10091"]','2023-07-10 12:40:34.838579199+02:00','2025-02-24 19:54:14.700825212+01:00',NULL,'100.64.0.14','fd7a:115c:a1e0::e'); -INSERT INTO nodes VALUES(21,'mkey:8bc474d6d0d812ab011b7748675802bd9d4af70c2edab0da6c682a0cf1ca34f8','nodekey:2cb4a0ab98e83dafb20621dcfec33cd5fa7fa99b97998546cb6796249bdba88a','discokey:e341e99f352acb61779807aea844516d7e9e220c58699f09232bcee105e88797','lt-43','node021',11,'cli','null',NULL,'2023-11-20 07:19:19.447470862+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["63.3.67.150:22907","[77ac:562:7c4c:438a:6ec:be48:743d:47b1]:11349","[33cd:731c:f110:9b45:66b5:de81:1daa:757f]:5997","[d618:5830:8a37:fbe4:3b37:a35c:fad9:11d1]:57112"]','2023-07-10 12:42:43.290469734+02:00','2024-09-18 16:15:14.03886353+02:00',NULL,'100.64.0.15','fd7a:115c:a1e0::f'); -INSERT INTO nodes VALUES(22,'mkey:170eb5e825f19f37d8e8747d888a79962d96acab073459e889d64d906e9a4a7f','nodekey:722321c00084d0152c13e34375b006f3ff2b95507685d13aa2df04093561df8b','discokey:c0463a2785b1741b3ce35422310ccd59f55edee22f5b64fdada2a95abfd0ae74','laptop-89','node022',6,'cli',NULL,NULL,'2025-02-13 
18:41:19.151025389+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["194.185.67.128:22758","[722e:3c42:d444:e150:3b32:135c:68:80fa]:3684","132.110.176.16:63801","168.165.127.136:13273"]','2023-08-05 12:08:48.132161695+02:00','2025-02-13 18:41:19.151860126+01:00',NULL,'100.64.0.17','fd7a:115c:a1e0::11'); -INSERT INTO nodes VALUES(23,'mkey:0ac434671e1b07546381d5dd36d4e04dbf7a098f226d7173e4b7e766c18283f2','nodekey:dc35ef6da42fbb0f4b90e5cf935b0012ab62fa61016b17d8a3a24240004fa3b4','discokey:5f7118e434c174dbac710b1d088db6213ca894ca3912a92688110ed5f96eb37d','laptop-54','node023',12,'cli','null',NULL,'2024-12-07 19:25:56.935152754+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["146.144.83.178:46323","1.236.14.11:13477","184.41.32.72:1026","[f136:8b69:185a:7ec7:94a1:ca58:5d95:ba9f]:21119","51.158.48.68:23441"]','2023-12-15 08:05:56.592241745+01:00','2024-12-07 19:25:56.935317645+01:00',NULL,'100.64.0.18','fd7a:115c:a1e0::12'); -INSERT INTO nodes VALUES(24,'mkey:27effcc5ab9b8bc295f0a76a09e753e7bb6b0c4c33343f59635494e1d717c82c','nodekey:c663007fc1b770065de8303e6bcd9ad7184c345795625591c3e0fe8304b6c29f','discokey:0de43076bf8295b5281d848e1515c304e3f541d5b10b9160fea430a6272ab160','desktop-77','node024',12,'cli',NULL,NULL,'2025-02-25 14:31:30.623705545+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2b5d:522b:8ced:49a8:d0dc:bc0a:81ba:7f84]:30767","212.18.227.41:50735","[f774:e89b:980f:975d:fd3e:93f:a9b5:2034]:46054","[7225:d0ce:d29a:59d8:f596:b945:9a02:29ed]:43419","[5a76:d5b0:b51d:6b56:afaa:3968:277b:192c]:19250"]','2023-12-15 11:14:36.765183054+01:00','2025-02-25 14:31:30.623800675+01:00',NULL,'100.64.0.19','fd7a:115c:a1e0::13'); -INSERT INTO nodes VALUES(25,'mkey:247cad9ffd7d108f553e7658db14bbad992332458abb9b4bc5a0c0d3a13ead9f','nodekey:bbfb569ac8a34cf4d59d2bc13d0f03660d24aedf22c61b56510a65d20bdf0812','discokey:61f92520602549b9bc684eeed3186f608a38e13794d06d4b8d165c4efbb69a24','laptop-29','node025',6,'cli',NULL,NULL,'2025-02-25 18:56:13.569853699+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[5b7b:e424:ea77:aa8a:f835:d6bb:2e1f:52ea]:19152","[bb06:d322:7cec:426b:ef38:55d0:e7ce:95bc]:31597","[320d:6314:eaa2:a098:8594:977d:27b:8442]:12827","163.235.127.125:61230"]','2024-01-05 17:32:40.940566279+01:00','2025-02-25 18:56:13.57010639+01:00',NULL,'100.64.0.20','fd7a:115c:a1e0::14'); -INSERT INTO nodes VALUES(26,'mkey:d623495ccdc12a3f219322cf53458a85c29b8795ceff5cef5338bb9c26e329dc','nodekey:781ad44c939f14168ec1c22c3fa2c5422f5ff1a5b0fc163fa80e6ace4e20c4a2','discokey:c474b4d7b9883e70466b6240b885a2a71dbe6efb0e8c84b85277beb584f25690','email-47','node026',6,'cli',NULL,NULL,'2025-02-25 18:57:44.753081033+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["173.144.3.182:33138","[975a:978:5cfb:5e5a:1b6:d272:a1c1:50dd]:3245","[e6e4:7cfc:9ce:b47:62f5:b64e:d2e4:9015]:42369","[26c6:541b:69af:974a:8ea8:6bd0:7c4d:5a82]:10571","[c898:1525:ebe8:704a:243b:de9d:b8c:ea69]:22747","32.167.123.100:57075","[2c67:a9f3:3163:c675:3386:2447:1cc:ed3a]:42590","64.137.67.1:56198","163.42.128.241:47535","49.160.22.202:42427"]','2024-01-05 17:34:19.811670479+01:00','2025-02-25 18:57:44.753480656+01:00',NULL,'100.64.0.21','fd7a:115c:a1e0::15'); -INSERT INTO nodes VALUES(27,'mkey:dc2bfab4553c0294e57f034001e3d575413ee8b0622e788050341c29b702f8e0','nodekey:9fd699021ab41ed4911faade849d0d702debfa1ef735a60eff8c1cae87b820c3','discokey:dc9c5f140c9ab0afaf55756f956a5fb53466ac0480c395aed0a4b481cc775679','srv-61','node027',6,'cli','null',NULL,'2024-01-16 14:32:21.570104175+00:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[a542:b648:e2d9:5461:2ac4:6b34:77a8:ff65]:55609","106.52.35.144:41091","[537d:7a83:c891:ac8:9460:e49:e769:988b]:55062","50.23.199.223:4814","[a2b8:ea55:f9e3:2c00:f423:8c56:7fec:c617]:53793","[ed88:2277:3c9a:4f64:a25e:5b41:a3ca:e904]:27852"]','2024-01-05 17:48:25.466030859+01:00','2024-09-18 16:15:14.040766068+02:00',NULL,'100.64.0.22','fd7a:115c:a1e0::16'); -INSERT INTO nodes VALUES(28,'mkey:08347db9fbe8e5e7a968c2ab9ddd1ff530ea98163c0bab7769316307545f4d46','nodekey:b6b03efd7deb52dbe0298708ea37400510ab7589b0b1787eac1a5f9b50aa7c80','discokey:6a0ab2f3329d9fe5f50c76f9f3f1d2e797574af93d823a511df57f45414f647e','srv-42','node028',7,'cli',NULL,NULL,'2025-02-25 17:11:29.962245449+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["42.99.149.47:5231","146.122.164.252:36824","16.15.205.75:58574","82.136.149.34:28707"]','2024-01-15 09:34:54.847632697+01:00','2025-02-25 17:11:29.962361262+01:00',NULL,'100.64.0.23','fd7a:115c:a1e0::17'); -INSERT INTO nodes VALUES(29,'mkey:4a6e46330ccd4b3026337cdef37485c27592babda329107565c2a19d97253dcc','nodekey:17de47da1955a8a034788ae671c34fb10083568bf9e2f7494276cb2636065246','discokey:449f5f35a3507974cf03e3ac5c055d890de5122f57a7c2d84a0d3b324998607e','desktop-17','node029',7,'cli',NULL,NULL,'2025-02-11 14:54:59.936098654+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c2f1:82ab:922a:9b9d:f53d:a0b3:bccd:f6af]:299","[d28b:7b33:1f88:7583:7ed7:a923:4c90:9ee0]:62318","[68e3:a070:f02c:708c:a057:b579:aee9:4d25]:11747","[7224:b76a:cd04:e6d2:67fd:fec0:2f14:1837]:29002","173.82.71.57:3159"]','2024-01-15 15:18:12.2871978+01:00','2025-02-11 14:54:59.936147608+01:00',NULL,'100.64.0.24','fd7a:115c:a1e0::18'); -INSERT INTO nodes VALUES(30,'mkey:26f1059c45c12a8e56cd57ba6db33dfe104eac76103327af49622cb7810fab7d','nodekey:b157bbdc37aa4634b8d69d6fbc7670780fff69337713176a11f893afafa2a9bc','discokey:4c8c175d0e61ca8f7b2de92a1033cb9be95eaeaf09e9395c767afa5afdf2d9b5','desktop-64','node030',7,'cli','null',NULL,'2024-02-05 12:14:40.065688294+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[8a1a:5c17:bb81:bb69:c7db:3513:f14c:ca2d]:52077","123.235.220.59:59925","155.210.184.45:60093","[963:536a:33e9:99fb:c204:d59c:29a8:ac18]:39005","[e84c:35c4:4bd9:a100:c271:8921:ca7:5901]:21828"]','2024-01-15 15:21:43.217136004+01:00','2024-09-18 16:15:14.041757308+02:00',NULL,'100.64.0.25','fd7a:115c:a1e0::19'); -INSERT INTO nodes VALUES(31,'mkey:ef048095dd3e937a8617f18797ed42d2c4513d07126dc419a466f257982113d1','nodekey:1c6c367df9b85ab5e1618c70c0a4f8b22e2eeea77523b40059c3ebfef0f4057f','discokey:80fb63a98202437504c700e582eddf800b02dc654577bb95796c60cdf1176793','lt-55','node031',7,'cli','null',NULL,'2024-02-22 08:35:27.098819037+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[cc02:5592:3738:a2ab:8c55:97bf:a5b8:24e6]:41616","129.130.98.189:47833","[99c0:d769:2f38:10e9:c01a:e3d:898e:afc]:17505","[3423:afbf:aa2:e156:81df:bbd3:412a:df65]:5125"]','2024-01-29 16:05:35.338524634+01:00','2024-09-18 16:15:14.042191514+02:00',NULL,'100.64.0.26','fd7a:115c:a1e0::1a'); -INSERT INTO nodes VALUES(32,'mkey:5b0e391e12914bcbcc6b415d9dd72937d74f9ce92a22c6ec938eb749962ee6ce','nodekey:4fa73acfb54cb140dad6b02bc853f6360f5ed160f819dc57701ad9a4caae9782','discokey:d3e90723a3b90eea513e7710ccaf25f68c0a2d60a85e3d4b9e4385335fddf8e9','srv-15','node032',7,'cli','null',NULL,'2024-04-09 13:59:43.37062537+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[14bd:ee53:6776:c1c0:124a:efbf:38cc:b312]:62180","85.234.248.186:20429"]','2024-01-30 10:41:58.31917869+01:00','2024-09-18 
16:15:14.042506082+02:00',NULL,'100.64.0.27','fd7a:115c:a1e0::1b'); -INSERT INTO nodes VALUES(33,'mkey:ba179b51f3540e619eaac9a6d4bc67403daa7c3cf28a6b579087cd6d284501b4','nodekey:246fadfe2a3606f043aca1fc1fa31b79e9eef9c21ec67e72425e05576707b3c6','discokey:5c64c1a08b88425b6e1e2971f950708d38f951d2d5eed2b85df8dd7d1bb88a44','desktop-32','node033',13,'cli',NULL,NULL,'2025-02-25 00:23:14.381890489+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["106.67.113.92:45634","85.199.78.110:6716","[bc29:715c:cdce:f1f6:8630:4929:ea1e:5c03]:10918","[af8:f5e8:32f0:75ee:d351:c0e9:78e:93d9]:12624"]','2024-02-03 16:33:26.706408143+01:00','2025-02-25 00:23:14.38239676+01:00',NULL,'100.64.0.28','fd7a:115c:a1e0::1c'); -INSERT INTO nodes VALUES(34,'mkey:6fe02bbbdd40bb09e8aa74cf66b40b5b94f227690fd804a33242f56e18bee64e','nodekey:acc333e66d45e660ff4a31a152340d391547ebef1056f5f046624b629389bfd3','discokey:cf4a1217f85585d74fc5138af6ed9b78a29a9d4f12ad2e1beac660816d8c7b3f','lt-74','node034',13,'cli',NULL,NULL,'2025-02-25 16:49:04.386505728+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[1092:57ce:a60b:141d:ba79:79a3:91ed:66b8]:11037","[f766:14e5:6acc:ba40:8b28:59b7:ef7f:cd51]:46293","[c:e6fc:18ea:2e45:6bef:c1c1:f84e:b1e2]:62296"]','2024-02-03 16:42:32.683785672+01:00','2025-02-25 16:49:04.386771772+01:00',NULL,'100.64.0.29','fd7a:115c:a1e0::1d'); -INSERT INTO nodes VALUES(35,'mkey:f95240eb788522b37f5513f0253f8bdab2ccf7594ffbf9bb77945fd67e598c05','nodekey:f5a6f47f5a34b4f4c6883e46fb52c0a5f7ca2ce17397cbba751efa8736e0066d','discokey:4dc728a0c0de0ce87530a47dda426d5bf6ad1b2f6cdefcc819c29c1015c3131e','web-44','node035',5,'cli',NULL,NULL,'2025-02-18 17:59:56.043915212+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["136.113.252.147:325","139.180.210.231:32286","171.29.58.211:2336"]','2024-02-03 16:51:52.010016072+01:00','2025-02-18 17:59:56.044175137+01:00',NULL,'100.64.0.30','fd7a:115c:a1e0::1e'); -INSERT INTO nodes VALUES(36,'mkey:8ac09adfe44b8e60665601726c99627216e71b49ef7e8cc58ae6ba4f997c8652','nodekey:62d8af3310a7f27d84b931eaaa7ee110fe7fe46ea3ca8c817e47b9ce456c9b4a','discokey:bd31ae97729ac879d9b57866a69abb521248fb7fd5663d2149857d4dbf023d85','web-93','node036',13,'cli','null',NULL,'2025-01-19 14:01:48.956567669+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[7e9e:219a:507d:d3d:cc77:6195:7ce3:b30e]:25018","203.251.190.122:19081","49.144.40.103:7674","[a641:3f06:63bf:b716:4658:1e5b:ac6f:e14]:13490","197.206.147.31:58506","[b1e8:68a0:4017:a243:61d0:b303:7860:18e]:28633","[e5b8:6c78:ce2f:cf6f:13fb:401f:8223:3f04]:54121"]','2024-02-09 12:34:57.879970954+01:00','2025-01-19 14:01:48.956830267+01:00',NULL,'100.64.0.31','fd7a:115c:a1e0::1f'); -INSERT INTO nodes VALUES(37,'mkey:c371756901cab84ae432a7d201e47232b008ef55effcf90341f24de7dfc55bb0','nodekey:0f858ae8a6492e1cc63fb502c6453ae90e8723f8b4ea8eba8c67f9f5114dec39','discokey:b6fc59b2a9ec7ddf1ae9573245cd3271259f964cdd730b811fabb019eccb3c81','db-47','node037',5,'cli',NULL,NULL,'2025-02-25 18:57:09.0895949+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b8c2:c792:3329:2c93:f94b:a948:7d11:b9f2]:908","[5925:353c:bd9a:f42e:1916:45b4:c170:e515]:18400","[c2a5:cc2f:e2e8:573f:7b01:4aa:8a1a:fd71]:32146","193.82.125.68:38601","[e9da:1552:e478:f920:c3d6:ff41:9e86:e27e]:48481","[c31d:dd18:321:589:11fa:d85b:695f:1fca]:43064","168.4.78.117:48474","90.45.218.0:29326","90.233.46.138:29846"]','2024-02-27 12:14:40.452601042+01:00','2025-02-25 18:57:09.090070104+01:00',NULL,'100.64.0.32','fd7a:115c:a1e0::20'); -INSERT INTO nodes 
VALUES(38,'mkey:86d359e5681ac6a642788f26d2dfa71c65883af4a1af67a290d46fc3861b221d','nodekey:33aa0fc0fb105997472a1e1024710288ec481753149da6c50d79c9d3c59bef0a','discokey:79d066251321d58c1359edfc9b774b33b2dc46797aeb5b70ee3dc9f1be415a42','web-20','node038',6,'cli',NULL,NULL,'2025-02-25 16:11:28.263125427+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["99.62.182.69:45368","41.96.84.44:45622","[a81d:f0ad:7a29:65e6:2d93:238e:99a2:eb30]:50585"]','2024-05-22 08:08:16.045350656+02:00','2025-02-25 16:11:28.291976461+01:00',NULL,'100.64.0.33','fd7a:115c:a1e0::21'); -INSERT INTO nodes VALUES(42,'mkey:10e4ff66fdd37fb54f6dcb0e42f483a15b6340610acc3a4517c24f0f350b5a9a','nodekey:f20786f9c7edc967d0b415bbcce39468441af3b531ff15656c0683d3f328720a','discokey:4e1247022b7ddf4b30d00043fded8b0132f598feeb77dd34d2fc32f97a1d2c2a','lt-05','node042',14,'cli',NULL,NULL,'2025-02-14 10:43:03.67095554+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a393:f17f:6728:82e4:c1e7:2367:82d0:daaf]:18560","[d192:9897:4cd7:bfb5:de36:fc03:8cf4:c60d]:25096","212.199.67.213:11150"]','2024-07-03 11:12:29.418355657+02:00','2025-02-14 10:43:03.671476921+01:00',NULL,'100.64.0.37','fd7a:115c:a1e0::25'); -INSERT INTO nodes VALUES(43,'mkey:33e764ae40265089c5e7dbfc8571aa23bb390fc7e1e8bd2f3e0cd6891c308214','nodekey:c444c9ff652ba29181a5f7fdf991f91f704e6698237ee18b0649c81879ec932f','discokey:2e1db3a1eea266673acf7826aa44b5aa6fdd23c6ba41330562b37c5a523ba358','laptop-81','node043',14,'cli',NULL,NULL,'2025-02-23 16:43:04.441516493+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["190.179.215.222:61930","45.131.187.230:22916","[510f:af99:ed4d:f53a:fc21:8640:9ad9:4930]:16050","[5ea4:3410:eaa2:9da:b8d:b61a:2d9d:edaa]:3526","209.107.206.184:46974"]','2024-07-03 14:48:50.263910778+02:00','2025-02-23 16:43:04.441841303+01:00',NULL,'100.64.0.34','fd7a:115c:a1e0::22'); -INSERT INTO nodes VALUES(44,'mkey:1099debb0ba187ef1f43f001c75abc118eaf017bf3bd6b48ca333fb046b98de9','nodekey:d49ef9ad4a7ecbdabcd24cae92d9ac32fbb765467cbc20abe61b4bda6ef15f1f','discokey:9f2faf39514eccb7f3d7d66ebcaf804474f75284d8b1aedbf97aee6ccdf47ed5','laptop-84','node044',14,'cli',NULL,NULL,'2025-02-11 14:57:51.787233089+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[183d:f2d6:b80:36c5:318e:64e9:7f4a:b389]:12440","169.173.198.12:15274","[e5df:1889:6ac5:7d21:4dfd:614b:7eb9:d93c]:792","119.123.40.237:58615","[415:97d8:4ecb:2a7d:14ae:64f3:7a01:471f]:15771"]','2024-07-03 15:23:48.066044194+02:00','2025-02-11 14:57:51.796185225+01:00',NULL,'100.64.0.35','fd7a:115c:a1e0::23'); -INSERT INTO nodes VALUES(45,'mkey:29eedea8083105a21470a51112b2486ae4c79221e2b6b17e4c3c84f965013513','nodekey:b3249157ebc705f134a366e28beb2d7646a40bc4171bb6d6fece1e750c85379b','discokey:64cb6eaee39eca7e3ad7f1225c216bf2826a8a951b91a631cfacf3b5ec99c2db','web-31','node045',14,'cli',NULL,NULL,'2025-02-11 14:57:51.802536554+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["14.153.231.75:19825","[acd8:3779:7b6c:3a71:c15a:e323:a721:c9a7]:63377","[6816:3377:edb0:3f8e:a65b:bd60:461b:ceb]:24642","147.3.7.139:46940","[7526:4960:73a9:911a:1fbf:92bf:1219:a6bf]:25440","[360:d312:e59c:79e0:a7e4:2295:caa0:946e]:17493"]','2024-07-03 15:54:01.706018896+02:00','2025-02-11 14:57:51.804667794+01:00',NULL,'100.64.0.36','fd7a:115c:a1e0::24'); -INSERT INTO nodes 
VALUES(46,'mkey:05f627f2614a4b7cf260a96650b0c5c62fcbfa993480253fd89771ad3b16ca2f','nodekey:678b922f6004223f9163155c62b8877ca663b1ecb3327b2c64453527f9c0f6a1','discokey:b77917efffd1020f1b03202e74265e308bc18a12e863bb04b94aa814cbc8e9bc','web-60','node046',14,'cli',NULL,NULL,'2025-02-11 14:57:51.964197644+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[89d4:f6c6:ccf6:b19d:3b76:642e:7a36:d229]:930","[ea0e:83e6:309f:c748:3314:72ba:996c:bfdd]:40599","136.29.193.173:47997","6.170.76.62:28497","[5cff:a345:4885:7b07:98a4:bcfb:f622:b94d]:58641"]','2024-07-03 19:38:07.783745318+02:00','2025-02-11 14:57:52.805260039+01:00',NULL,'100.64.0.38','fd7a:115c:a1e0::26'); -INSERT INTO nodes VALUES(47,'mkey:bed87b73eea00bf3d74184d911253df54cc7697f2dd023cfd836b801efd82518','nodekey:2146154fd3fb48377f6bf4cbf08f555008471b493a90a0614e5ecf76c338bc99','discokey:c3ea2fb606807d06b1ed3e10219fee70456d1db0208d2e73a081a258992e4316','lt-41','node047',14,'cli',NULL,NULL,'2025-02-11 14:57:51.975754077+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["83.69.120.197:25918","[3f3a:e533:53b0:bf90:95bb:fe47:d61:516b]:2433"]','2024-07-04 10:38:08.344092869+02:00','2025-02-11 14:57:52.006716436+01:00',NULL,'100.64.0.39','fd7a:115c:a1e0::27'); -INSERT INTO nodes VALUES(48,'mkey:8fede09a569258fc0fd7ebfbd12b2b4e8c2f4ce95c43ab0d89e41ab3998e6d24','nodekey:63af6498d4fb0e1e8312e3ec6a1639b178c051a271f33f213324e3ac1af53e17','discokey:a1e793aa09c20360cb4ef57e7dc42171a0abbc9f562140609e930ba5b7b12e83','email-92','node048',15,'cli','null',NULL,'2024-10-20 13:53:33.831192385+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[33c0:859c:4a78:8bdb:359e:31e4:1e71:7e39]:50383","[3aab:52a2:b9d3:1816:5627:336:6c60:57d7]:13486","22.83.180.41:14147","26.126.135.206:7777","[8706:d109:5006:f832:dca7:58e4:4451:be15]:15180","177.4.27.29:60895"]','2024-07-26 08:09:56.608302315+02:00','2024-10-20 13:53:33.831387627+02:00',NULL,'100.64.0.40','fd7a:115c:a1e0::28'); -INSERT INTO nodes VALUES(49,'mkey:13b8d4cafbdb8e9ab77928219aa57bf778878866ad8e6108b60aaab8dc8cec36','nodekey:17cefd68f8a9cc20b3d4fd93598f77ddfd6bd5a7e6e4b49861f9faad8623e0ef','discokey:939a04ddd2f9a24468e07ee7924dbb7a7662c1e8a1b67426f57fd48c7394448b','desktop-40','node049',16,'cli','null',NULL,'2024-09-19 09:07:18.28136023+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[19b0:8be2:708:d27f:4d3d:3c93:9979:e9e8]:50438","129.0.5.112:19015","[91e2:54c6:93fb:ec32:6ab4:186d:81cb:b815]:37323"]','2024-08-05 17:32:41.937626584+02:00','2024-09-19 09:07:18.281618912+02:00',NULL,'100.64.0.41','fd7a:115c:a1e0::29'); -INSERT INTO nodes VALUES(50,'mkey:03979d5446bb3f596bf0234d81ca84482eacb132e874ceda6cc2703422cd728c','nodekey:805c03cada525f14ee90d28ea02b23a3f57b470c793314872ad9daacf6c6484e','discokey:eb4135e1cc73a3c6b513d8cc04ca0c55a2aef7b0009845fa1d2036488aadf0f8','desktop-02','node050',10,'cli','null',NULL,'2024-08-07 10:10:06.595550455+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9175:498d:64ac:4342:2eac:ffa1:99fd:d109]:21202","[5cba:56ef:db08:320c:333a:541f:605c:44d5]:57469","150.211.2.236:50360","203.180.122.83:13344","63.162.234.32:11653","[6b63:f552:f930:e22c:fc42:3b41:c5c6:6d5f]:63162","114.48.12.185:47837","[2964:cc5d:3a1b:7889:acc8:4d8:22c1:398a]:43515"]','2024-08-07 11:50:54.144157179+02:00','2024-09-18 16:15:14.050033969+02:00',NULL,'100.64.0.42','fd7a:115c:a1e0::2a'); -INSERT INTO nodes 
VALUES(51,'mkey:5877139c52c0bf3c0fd054bd046aaf4530dd8946548a7bc1a06733649d1557d8','nodekey:553d9638c8c455e8d1bc811ea3af03db1eb3c586413447c8003e7d905a61d06c','discokey:1da883c9eff402d786362aaac2f478c835308b3acec0993e6293becf4474609f','web-62','node051',14,'cli',NULL,NULL,'2025-02-11 14:57:46.210281474+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["86.215.174.228:26026","93.101.159.155:64005"]','2024-08-07 14:19:31.156780417+02:00','2025-02-11 14:57:46.210762678+01:00',NULL,'100.64.0.43','fd7a:115c:a1e0::2b'); -INSERT INTO nodes VALUES(52,'mkey:0e64072dafa5f9e49266b41fc4b21ff7d287be8f4e5f2b6c395487d632f1cf3f','nodekey:2f77bb8cbeb426bcded5ad9fc6da44b0b76b8d6759f10a8db28e634290760ee7','discokey:836558738056edd1fa8d190c7e42f120b365a48a3b1f1c96c87755bb82d7ad0f','desktop-07','node052',17,'cli','null',NULL,'2024-12-25 17:27:58.515851096+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d889:f751:710:257f:977f:5597:9908:14e0]:61162","[f83a:5ebb:911d:ef36:743c:8086:8a4c:a5c7]:23538","[dc45:365c:e426:71d7:fe23:fdd4:d2c2:9409]:57262"]','2024-09-22 15:48:41.385301399+02:00','2024-12-25 17:27:58.517153789+01:00',NULL,'100.64.0.45','fd7a:115c:a1e0::2d'); -INSERT INTO nodes VALUES(53,'mkey:a6f93e99a5019b3b6a2941d95dc7fcbfea5a59317a48b99db93b7a3f4f0a65bb','nodekey:1f78a6d76ea4a2135c0da46b202560c86ed30cd7e8239d4508088d0e02ffb961','discokey:1744062452f052aba5320e37fc08aaa37f1430db7c32c94c5626f0f3c858de3d','desktop-82','node053',17,'cli',NULL,NULL,'2025-02-25 17:34:21.764621261+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[6084:ee60:1697:864e:4a61:cf6e:e9b6:4c05]:63961","[fad7:2601:d286:4772:3cce:6e89:f66e:9eeb]:5430","216.60.124.11:45041","[d513:c472:d7a:f316:f610:10d0:9851:4feb]:3955","156.228.105.157:61542","102.126.185.0:50694","[156b:e934:d171:2693:8db4:f193:a58c:17b6]:24190","79.255.179.99:56057","[6369:4412:8cfc:2a5e:7cfb:8b45:e2ac:f98a]:16090","23.0.29.75:24889","[ab0:1193:7bb7:4a4a:c036:b911:994f:3aa4]:5116","[9491:7bd5:3979:414a:d198:a8b:f2e7:29b4]:62795"]','2024-10-28 10:04:50.084492941+01:00','2025-02-25 17:34:21.856390854+01:00',NULL,'100.64.0.44','fd7a:115c:a1e0::2c'); -INSERT INTO nodes VALUES(54,'mkey:a6e3c45c81dbefdd09389920767df65c30bc7ff1396806623e7f8e1938b6fcac','nodekey:22e4ac4e25b05db4e94b5211e08ed14d983972f92e192f15e0231ff355fd46bf','discokey:225f907deaffccb674416dc4ba532f0b20304e86d57808eef2d96f15211f2309','db-52','node054',14,'cli',NULL,NULL,'2025-02-11 14:57:51.780134436+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[ecbc:a9b6:cbb7:695b:62ad:27cb:36f0:2380]:27992","135.238.213.41:64084"]','2024-12-09 17:10:55.363593066+01:00','2025-02-11 14:57:51.783335437+01:00',NULL,'100.64.0.46','fd7a:115c:a1e0::2e'); -INSERT INTO nodes VALUES(55,'mkey:ad967c824d2c235ed17b1437f6756d4354ad76a2d27b575c0257ecac9a3a5fd2','nodekey:6196332e325e3f1d0c58acb182b48b070340dd673247c616a28d60a7669f5a4a','discokey:3a67fc6fdb9288679886c2d45dba4d1e1ba014af2c48f0f942acd39ef0e0ee64','email-56','node055',18,'cli',NULL,NULL,'2025-02-18 13:56:36.97652271+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["122.41.198.69:28824","[5437:6f4:94c1:c6c1:43ce:7a41:e5c2:65d0]:57161"]','2024-12-10 13:56:39.287449662+01:00','2025-02-18 13:56:36.978076326+01:00',NULL,'100.64.0.47','fd7a:115c:a1e0::2f'); -INSERT INTO nodes 
VALUES(56,'mkey:3d9e1a7fc884de338b8391bbc5d3398c7d0ec4c36388b756df5b2304b727fa11','nodekey:9cd5d41c04ae83ade43363dd165c792cbc5386e0d3e005311ca7c79ae7f7acaf','discokey:2d6fad1f43bc97a6d48bcd400c829db2d4f47291b12a59dcbe669d8b1d341c1b','srv-70','node056',19,'cli',NULL,NULL,'2025-02-25 18:57:39.407869706+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a34b:36c0:9f18:1062:2269:92c:1c83:6ec6]:56902","[41e8:60a4:229:a09b:efd3:c73f:b885:d235]:39533"]','2024-12-17 14:58:39.429211911+01:00','2025-02-25 18:57:39.408317173+01:00',NULL,'100.64.0.48','fd7a:115c:a1e0::30'); -INSERT INTO nodes VALUES(57,'mkey:1025548b29dc9733e1099fbb6516778e9d8f07eeaa214cd1ef33438e731333eb','nodekey:3f21adc328130ab468ac28ee25b4002974610ddd7b11f586752dec5a151114b2','discokey:605115f1238609b148dff010ca6e04d7b337df4e2b88de7a1f900a2f1de7db42','laptop-79','node057',20,'cli',NULL,NULL,'2025-02-25 10:30:02.978978512+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a329:52fe:45d1:996e:4767:d50c:6513:92ac]:54604","80.29.45.84:8223","178.193.134.41:58563","29.219.180.224:55412","[2ddd:7c1b:a237:41c8:3baf:e9cc:1f31:9057]:62916"]','2024-12-17 15:17:14.26936913+01:00','2025-02-25 10:30:02.979071466+01:00',NULL,'100.64.0.49','fd7a:115c:a1e0::31'); -INSERT INTO nodes VALUES(58,'mkey:e2795aaa6a9a19aa8efe47496bd930d8ed32d5feacd49e903ebce90db20101dc','nodekey:1066fbdf7648099fbf06cd5b69e20e80c3422790caee0354e8465520329f7754','discokey:d00b560ef82f566c3171c1a313f25e65b7bc7a2eeb5146cb4c513bfbfe40fdf9','desktop-48','node058',12,'cli',NULL,NULL,'2025-01-25 18:41:03.881898904+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["168.76.222.22:27595","[dc1f:57e3:7842:53ba:b427:8e40:6ebf:5b36]:4136","55.12.42.230:527"]','2025-01-17 10:17:23.455895657+01:00','2025-01-25 18:41:03.882180987+01:00',NULL,'100.64.0.50','fd7a:115c:a1e0::32'); -INSERT INTO nodes VALUES(59,'mkey:4de34c0ddff8819b97b39c2f31a2568db15e7c0ac6c1d75879b6c5247a5dd9a5','nodekey:13565579616a60409e411d5406b743f5dcf878da4aa82031350537d41038e09f','discokey:788fe7ffbb2b564852b1b9ec170a4267449c98b17ed51dcdce65942de712918b','lt-29','node059',21,'cli',NULL,NULL,'2025-02-16 03:40:04.156549179+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[32d2:d2f7:d5d2:8e10:507f:3428:b2a0:12b7]:22008","[c493:2a8f:6625:98c3:4f4a:db04:19cb:c7df]:41081","96.25.93.176:21890","219.197.124.83:46550"]','2025-01-29 11:59:27.291048957+01:00','2025-02-16 03:40:04.156814266+01:00',NULL,'100.64.0.54','fd7a:115c:a1e0::36'); -INSERT INTO nodes VALUES(60,'mkey:f04e27d229384590995331a3723f7a668443d5e638956102ab37df84529fff40','nodekey:49394e4605a507fd3a121abf6406645648b79c5f07f9f594fd8edc25eb94b04a','discokey:16256cfb6fdbae68d3ee7de3124887892d557512d5f86b5e9f3b4af036c1eb9a','db-54','node060',21,'cli',NULL,NULL,'2025-02-14 16:39:11.521346869+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["168.132.106.166:348","[ac:5c65:da8d:d68c:eb0d:3692:d441:5363]:63612","107.250.166.141:64587","[25cf:5511:50fc:e379:b64c:3a3:ff6f:104f]:22295"]','2025-01-29 12:01:57.48748166+01:00','2025-02-14 16:39:11.521680749+01:00',NULL,'100.64.0.55','fd7a:115c:a1e0::37'); -INSERT INTO nodes VALUES(61,'mkey:bdbef72c5f0e479a26353e2fe037cf6dd20719cb4d501f04a88a08571914307a','nodekey:fa24bcb6e340f6b8ea272a0d17ac7b5401e48d2fda056465326f2e1ac4695412','discokey:94ce18576693a7a51d39574aa5c81060355b8ae3f912c78515e525b5e31e5dae','web-59','node061',21,'cli',NULL,NULL,'2025-02-13 11:30:13.272216377+01:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["112.239.138.73:43954","174.168.204.29:5611","190.0.61.36:13730","33.70.223.181:46561"]','2025-01-29 12:03:01.464646336+01:00','2025-02-13 11:30:13.272621771+01:00',NULL,'100.64.0.56','fd7a:115c:a1e0::38'); -INSERT INTO nodes VALUES(62,'mkey:7d5ae8f0478c9a247bd1bb17d30ca742093c245c62ee324252337a170c6693a5','nodekey:9eff4972c9c5ed636f469b7869e5b5a9ffb7ec419d5316958631a72dd0da765b','discokey:124c71c581460affbc538c1c3efef93fa40e2170b4a4e3e08d81847bac150bed','laptop-08','node062',21,'cli',NULL,NULL,'2025-02-21 21:09:17.619738167+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[54b0:31b1:f5d1:bd45:5e99:3951:5dc4:45be]:33239","55.225.129.62:57985","28.93.179.45:19029"]','2025-01-29 19:23:14.092804852+01:00','2025-02-21 21:09:17.620347693+01:00',NULL,'100.64.0.57','fd7a:115c:a1e0::39'); -INSERT INTO nodes VALUES(63,'mkey:baeda24b92a15048f28b063ac4e3aa9ffa67b0bfd2a9ceb3b3005f6167e7c846','nodekey:9da3c4c71920a17c6476a51cc658c2d1e8ab167079cd0fab2215fa2e98675e03','discokey:c37813a5ff804ebfbd17ba3c3e030f97e5bacf4bf37c1829f1193b354657974c','web-17','node063',21,'cli',NULL,NULL,'2025-02-25 18:57:46.306411004+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["213.69.213.51:7048","[37a5:a9dd:8b9d:5014:b243:4f4f:6888:713e]:21123","223.252.160.61:32625"]','2025-01-29 19:41:40.535299057+01:00','2025-02-25 18:57:46.306748231+01:00',NULL,'100.64.0.58','fd7a:115c:a1e0::3a'); -INSERT INTO nodes VALUES(64,'mkey:194758915fa3a0f4dc51d5c8809c8d34811a2ba691f8286e1281c8733b908ca0','nodekey:3edcebd1bbc810d5df8fe3217eebd41e209d8b62beb09912bd60fe3682006a2c','discokey:89d4a743b38f61254b01454fdaa89e7855966b1beffe363e86c28ce8ada66454','lt-49','node064',21,'cli',NULL,NULL,'2025-02-25 17:53:18.185882601+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["26.157.224.19:39567","54.120.201.209:5659","[1b62:c927:40b:7b50:6285:2eff:280f:4c8c]:58979"]','2025-01-30 18:18:57.519126133+01:00','2025-02-25 17:53:18.186491242+01:00',NULL,'100.64.0.59','fd7a:115c:a1e0::3b'); -INSERT INTO nodes VALUES(65,'mkey:270533b0e4234eb9e697ee9e69403551dae192d4b3269a6810c4a0d2a6a55b6e','nodekey:d54cc8d3a12b77589e471e30b6ae4d803e58392b82aff1953974d722f8f1368d','discokey:e6c009756cf69ac7cdaea6c4a725c8766ec8e5a67be080a6ba2c6ed0fc5d7900','laptop-75','node065',21,'cli',NULL,NULL,'2025-02-25 18:58:05.70384565+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["197.205.147.181:38523","[5e11:9ab9:6805:dec0:99aa:65e6:d1fe:6e58]:45247","[34f8:19dc:cc84:1e6a:d280:ac0e:2dd9:6fe5]:39241"]','2025-01-30 18:19:40.354692307+01:00','2025-02-25 18:58:05.704297047+01:00',NULL,'100.64.0.60','fd7a:115c:a1e0::3c'); -INSERT INTO nodes VALUES(66,'mkey:de86679045cb8624f1a83852679a58cffd9066814aa8994f4ac0a9213b83175f','nodekey:b62e575e38a00afe79418224f7b954210702ac0a6d450eb8dbf7b5720b1eb5a5','discokey:6492f53782aab503155db83aa2d3a9e7698382da589b8c078d7850373332734b','desktop-65','node066',21,'cli',NULL,NULL,'2025-02-25 18:24:57.136610701+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b84:67b1:28c5:14f:aab8:916c:582c:6917]:24462","51.182.13.145:25726","[3095:7c2f:3784:9955:ebbc:4382:662b:a218]:48812","[c771:8240:30ba:d472:9e15:10c8:b026:2654]:14619"]','2025-01-31 12:05:28.65297301+01:00','2025-02-25 18:24:57.136876179+01:00',NULL,'100.64.0.61','fd7a:115c:a1e0::3d'); -INSERT INTO nodes 
VALUES(67,'mkey:259ea817a2c4d8a4c072db370e8c15ad9667331a63187285aaa4987b023b3e5f','nodekey:f743d054050f294bc1141b11924d8813401a650cc454e68dbfb20067f7d013af','discokey:aeb45bb93e68891487622a4274d9d2cfdbe795681e6f431075ad1aabb7ec1171','lt-70','node067',21,'cli',NULL,NULL,'2025-02-25 18:49:11.272283063+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b3d0:4706:73c5:9035:dd98:af39:b7f5:1311]:15037","[9356:40f8:a98c:a1a9:18e6:e781:f169:4277]:63375","[c733:e842:4621:e95d:6576:9ccf:b2c8:1582]:41668","[8f2:4995:746a:bc2c:84e9:95fa:b236:3c1f]:52788"]','2025-01-31 12:06:30.121464114+01:00','2025-02-25 18:49:11.272545005+01:00',NULL,'100.64.0.62','fd7a:115c:a1e0::3e'); -INSERT INTO nodes VALUES(68,'mkey:87dacd4a5bad58d9676d8242fe304668201101da8843faf4991afa597adeeb2c','nodekey:bac278a5ad2db26a926acc26fd63948ba6bd59f251f7cb63189686ce6ab708a1','discokey:24a9b3c24f38b690ca357a9659d7694a2a4fc85da77fb1412eae653c644b3bd2','web-47','node068',22,'cli',NULL,NULL,'2025-02-25 17:56:50.63061982+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["171.134.254.145:45887","[29e:44c9:159f:6692:bdc0:de17:58cb:1908]:58971"]','2025-02-03 14:16:55.56431345+01:00','2025-02-25 17:56:50.630814975+01:00',NULL,'100.64.0.51','fd7a:115c:a1e0::33'); -INSERT INTO nodes VALUES(69,'mkey:a3f039c16511f2b68bc8ab68a8d844c2f942aef9a277a1e2b1c3182c9f95b41d','nodekey:4cfd59f8b4bb7c82f7f7fba87ee21d2a0f32ce138d680591e8e0e47df3328ed5','discokey:38bae7e836bfd18b54cd4323709ae713fbd37d12c774cf19a8568bff1a410cb3','srv-80','node069',22,'cli',NULL,NULL,'2025-02-25 18:45:26.655373144+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f2d6:a8f1:727:d9e1:759:3a6c:6aed:d7ea]:22032","[3014:978:1762:1290:b92f:47f5:1dca:d0bf]:16383"]','2025-02-03 15:23:16.312084161+01:00','2025-02-25 18:45:26.655510978+01:00',NULL,'100.64.0.52','fd7a:115c:a1e0::34'); -INSERT INTO nodes VALUES(70,'mkey:0b6c7fc5eea73e731daabee34180efedfe9feea3171ba9f64c45e913bd068910','nodekey:6c63bd3c0336e50cb895df6e1fb177d2e6ab7f8e59baac6f4d54a1708324248b','discokey:b594b1893b68cef8d9e8496797bad67eee207093fdb12a9dd08e0b53cb915791','db-86','node070',21,'cli',NULL,NULL,'2025-02-25 18:34:37.1756356+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["99.25.81.82:35202","52.120.18.3:13372","[dc9a:ed1e:f97d:b356:d5f5:5d44:c917:3077]:33994","[dac7:80d0:6b4:2620:3ae4:2936:23a2:eba9]:31156"]','2025-02-03 18:09:35.161109801+01:00','2025-02-25 18:34:37.176092039+01:00',NULL,'100.64.0.53','fd7a:115c:a1e0::35'); -INSERT INTO nodes VALUES(71,'mkey:2251ba3534d7e20a28b1b2820ffd1a52a7cdcf60b1779a07d6c760aed6b3fc5b','nodekey:772af8901804b0da90941644bef50afbf67d0414226835865b7078383fc25272','discokey:fe85bbd953fe9294941fdb62ef965ff83bf7b646adec696db3bf889316cfc226','web-09','node071',21,'cli',NULL,NULL,'2025-02-25 01:29:06.614357209+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[4d9e:563b:9e58:71a5:24ff:2612:7985:3f81]:20272","[2967:f7fc:b02a:7e75:6842:a146:56:dca1]:14923"]','2025-02-04 12:03:50.32663805+01:00','2025-02-25 01:29:06.674397902+01:00',NULL,'100.64.0.63','fd7a:115c:a1e0::3f'); -INSERT INTO nodes VALUES(72,'mkey:b070570c4361b0e9ce2f3afc15c0280c1ccdbdb3cc77f46df24238984e0ec527','nodekey:1b7b5f34a954359091fb9c9f81ded16fb83c96aa793aa08dbfc8e043e9f5378b','discokey:3f7d2271e073c6d51bcaa49e6b040ab689dbf9cde108e1a82d0b3f30c4f2a6a5','db-60','node072',21,'cli',NULL,NULL,'2025-02-24 22:26:45.69683022+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["170.145.88.37:51085","[4556:deae:aed1:6442:1456:5897:f4:12b1]:18138"]','2025-02-04 
12:07:36.231437299+01:00','2025-02-24 22:26:45.697080608+01:00',NULL,'100.64.0.64','fd7a:115c:a1e0::40'); -INSERT INTO nodes VALUES(73,'mkey:0ec65946fc090054aa5164b3f5cd6f548dfc27b490025b1a190b93d69068c2e3','nodekey:37bda01462c321bf5c22d9f475325184aaff6d156964ef36d3f4638772db608e','discokey:9c15df2e2016b42d1c57c2ce1e3607786ecfcb4c3271991ea8e6abf28e1a1064','web-61','node073',21,'cli',NULL,NULL,'2025-02-24 20:16:59.20339999+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["20.112.75.120:15833","158.67.245.119:11948"]','2025-02-04 12:10:26.50545127+01:00','2025-02-24 20:16:59.203598225+01:00',NULL,'100.64.0.65','fd7a:115c:a1e0::41'); -INSERT INTO nodes VALUES(74,'mkey:7352aa9afe2d3372cacf883c2fa2f9faee561f8dd567184909c6d1206156ac7c','nodekey:88b163165d7438eb768e100a617a3850f050f4b7f1a9d26f6a1c6c2cbab2a6df','discokey:a32aac3d9bad0a0557274da524523b2831914a5329307114c9c3abffa0899790','db-43','node074',21,'cli',NULL,NULL,'2025-02-25 17:40:44.888267238+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d05b:3c4f:8402:823f:409f:87b2:4dab:4959]:58420","130.243.34.21:50806","104.255.105.25:5817"]','2025-02-06 17:33:12.557302525+01:00','2025-02-25 17:40:44.88889439+01:00',NULL,'100.64.0.67','fd7a:115c:a1e0::43'); -INSERT INTO nodes VALUES(75,'mkey:51f0f5c2b892846f926abf826f5ff56a52fc4a466be7a1073f16028ef2c6aabb','nodekey:0138c27376e8289255c1d505ba8e8d0e9e8e6785198b87984763531273d0c540','discokey:5bbb476ee437efaa55c10931313e35c4c8c6e30015e7f3eb015f4736897abb12','desktop-50','node075',9,'cli',NULL,NULL,'2025-02-25 08:20:55.11942837+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["57.247.45.203:47518","[1709:1d2d:9e92:a2c5:f5a0:fbca:6d9:dcbb]:31824"]','2025-02-06 18:23:55.709687186+01:00','2025-02-25 08:20:55.119891701+01:00',NULL,'100.64.0.66','fd7a:115c:a1e0::42'); -INSERT INTO nodes VALUES(76,'mkey:e3ec4d021ea80bdca94d97bc70443b58b6f7c9fa382e4a2cd61b9397cdbefbb7','nodekey:60862d1848548d0816874d83c3d819dd8359869f24e4e0ca0b10ba98fae9aacd','discokey:5265333600c360986c74f93bda288a7adf25d8abb6b5174694dbfbc56e03d488','laptop-97','node076',21,'cli',NULL,NULL,'2025-02-24 22:26:45.695195612+01:00',NULL,'{"fake":"data"}','["140.129.39.90:29397","163.61.27.204:21704","208.72.71.10:23065","[5400:ad99:9af3:c69f:c044:7959:1cbd:d52d]:62818"]','2025-02-14 15:29:45.220999928+01:00','2025-02-24 22:26:45.695391477+01:00',NULL,'100.64.0.13','fd7a:115c:a1e0::d'); -INSERT INTO nodes VALUES(77,'mkey:eacc877df3ab3cdbbe20b1784ebeda63fa5058b8dcf69215b7993219e4a9ea94','nodekey:0e46ba129c4fb18195868f1e3b7bb7ad91e882b5e5aeab040692b69036457b67','discokey:6b42741054b0235ac27e649163acff4f6f90e6cdd6c4e587c25cd2a6b9f35aec','lt-44','node077',23,'cli',NULL,NULL,'2025-02-23 16:44:05.007011804+01:00',NULL,'{"fake":"data"}','["[f8c:6efe:aade:2619:9aa8:378e:2365:1ab7]:58269","[de0b:ae5e:4986:c7a0:3d3a:e4f5:afe1:d7d5]:61468","174.113.53.229:41982","[de3:22e3:e882:aeaf:2f2:ca2d:a179:de48]:23754"]','2025-02-14 17:00:54.226657615+01:00','2025-02-23 16:44:05.007327971+01:00',NULL,'100.64.0.68','fd7a:115c:a1e0::44'); -INSERT INTO nodes VALUES(78,'mkey:db77190826c814e219de0cd88d82298e7bf6d4fbb67ce8f00c0fd5c060fff85c','nodekey:73f0e887839e84b765d6186f97221d612d79595c119ff885682797991a02652c','discokey:74401b0dd1e1bfbbda82bf25fba18e15a89a30f4c1868cfb8ea81eb35ef4ae98','lt-27','node078',23,'cli',NULL,NULL,'2025-02-17 
18:30:50.582229691+01:00',NULL,'{"fake":"data"}','["[5132:fd57:f2d8:8b61:470a:b25:60a6:48a6]:13595","208.34.222.79:35431","[e66d:c35:81d9:5ffd:4ad7:25c6:3ea2:a41f]:49279","[89f2:dc57:80ea:c56a:73b3:2563:1fd3:df17]:57048"]','2025-02-14 18:03:28.401774063+01:00','2025-02-17 18:30:50.635233842+01:00',NULL,'100.64.0.70','fd7a:115c:a1e0::46'); -INSERT INTO nodes VALUES(79,'mkey:60cd68f2d9965be36441f2f639e5245d845b7fea90a9325938813fdedf751679','nodekey:783e5ff1eac899d0cb25f2539244aef4c4a46198a4ec8753afe89f00670e29e4','discokey:688d1fb7063cc3e59a7041e64de8d416ab6390f2739a50dcf06d529fd263d28f','desktop-70','node079',25,'cli',NULL,NULL,'2025-02-24 18:37:03.584752527+01:00',NULL,'{"fake":"data"}','["[1e10:11cc:fd29:4681:aaaa:fd21:a79c:bc3d]:57634","78.215.20.31:28892","4.11.131.129:50580","61.47.47.240:57553","107.46.106.255:64756","7.81.60.25:36846"]','2025-02-15 12:49:11.523904469+01:00','2025-02-24 18:37:03.584913585+01:00',NULL,'100.64.0.71','fd7a:115c:a1e0::47'); -INSERT INTO nodes VALUES(80,'mkey:1de9a40ca190e9c6da65eecb16d3bb0574bf7757189115dc466299e08915c514','nodekey:de98e4358158b9231331934496db7fab148a8aac11248e4a17df0c4843c3a46a','discokey:cba115a9da4fa05b24b13e1180999612993a0eafb9728c3be13d50dd4458b95f','email-70','node080',23,'cli',NULL,NULL,'2025-02-21 18:58:38.982571563+01:00',NULL,'{"fake":"data"}','["156.169.142.128:42443","178.75.210.129:64679"]','2025-02-15 21:12:05.434304787+01:00','2025-02-21 18:58:38.983040117+01:00',NULL,'100.64.0.72','fd7a:115c:a1e0::48'); -INSERT INTO nodes VALUES(81,'mkey:58cf6b62ae7bec2c13af7be13bca5edc03bd7a55d56ebdcb4a43b35adce5ff7c','nodekey:31d2dcb989bc02a1943ea12ad376f2f58cbab06e9761cb1bf4ef4024d310def9','discokey:68b0066f49c936d164f9595ddde2676815501a14bc6df2914cb34de8476acd57','srv-02','node081',23,'cli',NULL,NULL,'2025-02-17 16:28:36.502444963+01:00',NULL,'{"fake":"data"}','["42.226.40.62:27764","[bd66:349c:a29a:4c05:b3ab:dd92:749c:434e]:27566","69.37.34.188:32687","172.43.25.248:19211"]','2025-02-17 16:05:05.921277876+01:00','2025-02-17 16:28:36.502658351+01:00',NULL,'100.64.0.73','fd7a:115c:a1e0::49'); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "routes" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -INSERT INTO routes VALUES(1,'2023-05-19 07:09:23.387641743+02:00','2023-05-22 09:48:18.908103256+02:00',NULL,3,'192.168.224.0/21',1,0,0); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2023-05-17 19:36:55.859473496+02:00','2023-05-17 19:36:55.859473496+02:00',NULL,'user001','','',NULL,NULL,''); -INSERT INTO users VALUES(2,'2023-05-17 19:36:57.059073465+02:00','2023-05-17 19:36:57.059073465+02:00',NULL,'user002','','',NULL,NULL,''); -INSERT INTO users VALUES(3,'2023-05-18 10:10:36.248939077+02:00','2023-05-18 10:10:36.248939077+02:00',NULL,'user003','','',NULL,NULL,''); -INSERT INTO users VALUES(4,'2023-06-10 09:06:13.920718561+02:00','2023-06-10 09:06:13.920718561+02:00',NULL,'user004','','',NULL,NULL,''); -INSERT INTO users 
VALUES(5,'2023-06-11 19:58:32.371218434+02:00','2023-06-11 19:58:32.371218434+02:00',NULL,'user005','','',NULL,NULL,''); -INSERT INTO users VALUES(6,'2023-06-17 19:39:53.031565686+02:00','2023-06-17 19:39:53.031565686+02:00',NULL,'user006','','',NULL,NULL,''); -INSERT INTO users VALUES(7,'2023-06-20 11:35:09.325846831+02:00','2023-06-20 11:35:09.325846831+02:00',NULL,'user007','','',NULL,NULL,''); -INSERT INTO users VALUES(8,'2023-06-21 22:47:48.196234382+02:00','2023-06-21 22:47:48.196234382+02:00',NULL,'user008','','',NULL,NULL,''); -INSERT INTO users VALUES(9,'2023-06-22 08:30:35.068995572+02:00','2023-06-22 08:30:35.068995572+02:00',NULL,'user009','','',NULL,NULL,''); -INSERT INTO users VALUES(10,'2023-07-03 10:18:32.123226+02:00','2023-07-03 10:18:32.123226+02:00',NULL,'user010','','',NULL,NULL,''); -INSERT INTO users VALUES(11,'2023-07-03 10:18:37.130387602+02:00','2023-07-03 10:18:37.130387602+02:00',NULL,'user011','','',NULL,NULL,''); -INSERT INTO users VALUES(12,'2023-12-15 08:05:06.013615212+01:00','2023-12-15 08:05:06.013615212+01:00',NULL,'user012','','',NULL,NULL,''); -INSERT INTO users VALUES(13,'2024-02-03 16:32:42.224977233+01:00','2024-02-03 16:32:42.224977233+01:00',NULL,'user013','','',NULL,NULL,''); -INSERT INTO users VALUES(14,'2024-05-03 10:12:38.220973042+02:00','2024-05-03 10:12:38.220973042+02:00',NULL,'user014','','',NULL,NULL,''); -INSERT INTO users VALUES(15,'2024-07-26 08:08:40.979783263+02:00','2024-07-26 08:08:40.979783263+02:00',NULL,'user015','','',NULL,NULL,''); -INSERT INTO users VALUES(16,'2024-08-05 17:32:02.878091894+02:00','2024-08-05 17:32:02.878091894+02:00',NULL,'user016','','',NULL,NULL,''); -INSERT INTO users VALUES(17,'2024-09-22 15:48:00.287392203+02:00','2024-09-22 15:48:00.287392203+02:00',NULL,'user017','','',NULL,NULL,''); -INSERT INTO users VALUES(18,'2024-12-10 13:55:11.256977421+01:00','2024-12-10 13:55:11.256977421+01:00',NULL,'user018','','',NULL,NULL,''); -INSERT INTO users VALUES(19,'2024-12-17 14:57:58.550971236+01:00','2024-12-17 14:57:58.550971236+01:00',NULL,'user019','','',NULL,NULL,''); -INSERT INTO users VALUES(20,'2024-12-17 15:02:08.053169491+01:00','2024-12-17 15:02:08.053169491+01:00',NULL,'user020','','',NULL,NULL,''); -INSERT INTO users VALUES(21,'2025-01-28 15:57:32.774456057+01:00','2025-02-06 17:43:41.935399542+01:00',NULL,'user021','','',NULL,'',''); -INSERT INTO users VALUES(22,'2025-02-03 14:10:50.491924701+01:00','2025-02-03 14:10:50.491924701+01:00',NULL,'user022','','',NULL,'',''); -INSERT INTO users VALUES(23,'2025-02-14 16:58:30.250289644+01:00','2025-02-14 16:58:30.250289644+01:00',NULL,'user023','','',NULL,'',''); -INSERT INTO users VALUES(25,'2025-02-15 12:48:14.650995528+01:00','2025-02-15 12:48:14.650995528+01:00',NULL,'user025','','',NULL,'',''); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.25.1.sql b/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.25.1.sql deleted file mode 100644 index 6ef21914..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.25.1.sql +++ /dev/null @@ -1,136 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations 
VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -INSERT INTO migrations VALUES('202501311657'); -INSERT INTO migrations VALUES('202502070949'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'463a8b372963aeaca12400faa0c7ea29e9bfa3b4c59e9622',3,0,0,1,'2023-05-19 05:09:19.66636462+00:00','2023-05-19 05:14:19.664224869+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(2,'77a019cb12c0d0b9347a17ab23fa4c87983814fe36bb2fbb',14,0,0,0,'2024-05-03 08:13:55.8614948+00:00','2024-05-04 08:13:55.85782156+00:00',NULL); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`ipv4` text,`ipv6` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes VALUES(1,'mkey:0ea01e8cd608548a171ebcedc16da73b25ec18d58861be7a6b4274eb17baf6ba','nodekey:0fcf591dc1997c1d7388a3835bd24dae48240e5b03b6a7f5f33b2e813d222bcf','discokey:ed27f0b3c04222f0d05bfbc6fb5787338275ed0e32fd42fe68c669aca0f3d7a4','desktop-54','node001',2,'cli',NULL,NULL,'2025-04-01 16:27:58.397462433+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b8f0:cf44:22c8:82d1:f79:c3eb:9144:c24f]:6142","[8e25:af1:e31a:8fcc:7dcf:be1:a3c9:ed6a]:38766","[37a1:215f:bf3d:3fb1:6399:f7cb:4048:3ca8]:51666","137.242.125.72:25376","86.84.184.63:26062"]','2023-05-17 19:38:13.531518257+02:00','2025-04-01 16:27:58.397628099+02:00',NULL,'100.64.0.1','fd7a:115c:a1e0::1'); -INSERT INTO nodes VALUES(2,'mkey:fa994293c516f7166a803edf260ec091b1638547f4ba78aaf0625a4d0cace0a5','nodekey:5b5bb489a68285591c97034325637a7767cea0051fcf19934dc694331137f459','discokey:1743557d49de33b01698e6eea7c6ce383010f3c6b20a56c39defdaccee9c5873','lt-98','node002',1,'cli',NULL,NULL,'2025-04-05 07:56:58.676183973+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["149.96.226.131:35537","84.23.188.83:48876","44.184.117.89:36643","90.223.136.253:7056","[e43c:be7:7f4e:8217:a009:7f9d:a510:1af6]:18872","[3a86:865b:2f4c:f3b7:88a0:4aa2:f05d:c3fd]:47680"]','2023-05-18 10:09:21.757289398+02:00','2025-04-05 07:56:58.749180421+02:00',NULL,'100.64.0.2','fd7a:115c:a1e0::2'); -INSERT INTO nodes 
VALUES(3,'mkey:e44e45bd1656b97e8b328dc927ccab6cbce6c869992f70fb9c6031b980a1bf2b','nodekey:1497e461383f07b4f7cd7c76036b6fcabf23f46ac9f3651e22fd1f67eee22804','discokey:d84cef5408f6a40a7d789abbfdabd99edcd794e066894632f3cbb20d0e90a100','lt-24','node003',3,'authkey','[]',1,'2025-03-31 18:38:46.659409582+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["219.238.10.240:64443","[5e3d:ce1d:284e:9f89:155c:bff2:a65:d667]:51808"]','2023-05-19 07:09:21.399903526+02:00','2025-03-31 18:38:48.354115247+02:00',NULL,'100.64.0.3','fd7a:115c:a1e0::3'); -INSERT INTO nodes VALUES(4,'mkey:a2be538ff79403d577686c0ff675e1ec1b0da10902211d25ccf8adb58deaa6be','nodekey:69977d7f034a8ca09f0f08dc2d6410eb2b6f5fd343b6c88124ec37d6bcddb4d9','discokey:2a3fadbffba90bd0d74d432626a92d1ce1cf0297d4ad51cfd2011c4de7e198fa','db-36','node004',4,'cli',NULL,NULL,'2025-04-05 07:55:13.242501536+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[8a81:30b8:5d50:73d7:9039:f2ce:c1be:a099]:43373","76.78.238.163:55069"]','2023-06-10 09:31:51.940506933+02:00','2025-04-05 07:55:13.243195188+02:00',NULL,'100.64.0.4','fd7a:115c:a1e0::4'); -INSERT INTO nodes VALUES(5,'mkey:d1533206380a3d91fe09ba9aca808b8caada2cb469e9065acdebab13f22a215f','nodekey:961c8e15bdb7e261c0d359f72256c218447c9c007032f102a418d07b0b829b1b','discokey:d6e4dec2b26039771350ddf132dadf1a5d8615608bd99461d267271e7b53269c','srv-09','node005',4,'cli',NULL,NULL,'2025-04-05 07:57:42.002738555+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b12e:af88:3c55:41f8:5fba:2aa3:3248:15c1]:23643","186.84.191.82:2621","[c9d0:4e15:513a:171b:9911:f9a0:cccf:cd83]:24151","66.84.158.199:63451","47.94.115.240:6677"]','2023-06-11 13:56:42.694329408+02:00','2025-04-05 07:57:42.003378098+02:00',NULL,'100.64.0.5','fd7a:115c:a1e0::5'); -INSERT INTO nodes VALUES(6,'mkey:4cf09b5d948c6492d0706b7972a9f46f03261dc73244cbd098653461c1426dab','nodekey:dd207345a8cdda46d50803658ab26c0fc7f28283217c6aa616a945885e92fca3','discokey:a0e6dfe46d6380aa0d131816dc0ea3287aad89196690b9bd91224f904816aa06','email-01','node006',4,'cli',NULL,NULL,'2025-04-05 07:56:59.056128983+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f739:43a6:4363:817e:34d9:3cc:1272:dba1]:26597","220.204.45.94:59135","187.96.173.136:34092","[fece:76ed:4c23:1e89:f225:97a1:94be:b5c4]:33472","[e1e6:a14:535a:668b:698:f125:543f:b5b7]:31740"]','2023-06-11 13:57:44.975695604+02:00','2025-04-05 07:56:59.0920878+02:00',NULL,'100.64.0.6','fd7a:115c:a1e0::6'); -INSERT INTO nodes VALUES(7,'mkey:5bbbd75488079eda04771e44ddf21c5866640024a947e66cfd3c3bafb19d3e11','nodekey:022306df92cac976efdd1dd5d4454d0e20eb46ac6f7b847dbcc382a399d1c97e','discokey:4980d19a6814405d26c40dee31a93a5f43e18b41f6f6f4908ee59066e04f8179','lt-67','node007',4,'cli',NULL,NULL,'2025-04-05 07:57:44.198576857+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[8ddd:4aae:2b94:4c1a:d05f:65d2:2bdb:b400]:42839","[c493:a941:58b6:7634:6221:70ba:20e6:40c4]:35486","[9b54:61a6:c678:edac:956f:741e:5e2c:9ec9]:17930","[cd3f:af1c:fcb4:bcb3:806e:adf4:3f3f:ab36]:58513","[7a83:6a84:9f15:ba1a:9671:3d54:2e31:d5e9]:64893","167.223.65.194:6907"]','2023-06-11 14:16:56.951313537+02:00','2025-04-05 07:57:44.198934821+02:00',NULL,'100.64.0.7','fd7a:115c:a1e0::7'); -INSERT INTO nodes VALUES(8,'mkey:1abadbff7c8bdbe7a3104396c70068fa0a7aa30e405ef7d088738a8510877b39','nodekey:1251afbe69b3212ef1137c4bf2ef842409b8440d9ce32470ce4e8fb245480ccb','discokey:4a7b4155015d4cc2418ccb4005d3408b610c37b9f7bffdc625e2823f3e44514b','srv-71','node008',5,'cli',NULL,NULL,'2025-04-05 07:57:27.984224703+02:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["107.117.104.69:47662","[adea:86d3:b312:f445:d835:6023:f1f:f928]:27483","124.104.237.237:47957"]','2023-06-11 19:59:30.401970393+02:00','2025-04-05 07:57:27.985100444+02:00',NULL,'100.64.0.8','fd7a:115c:a1e0::8'); -INSERT INTO nodes VALUES(9,'mkey:9ced3abf3a281d71b6f361a75769c571c4aff9e4cbab5b317f6cf18918f66bbc','nodekey:e533ead6c5b810764db07631d7b9b14434bf66549837d33a6187daac1ab1d7a9','discokey:571a7ba01a8a0fff2af21d0f135caf8ae2ee0e15fd5ab996d2e490cf968d7217','email-22','node009',6,'cli',NULL,NULL,'2025-03-31 18:38:47.329086514+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[5b29:3bfe:7aa9:2767:900a:8142:1ae:eb94]:59942","165.200.68.0:39862"]','2023-06-17 19:40:45.468789461+02:00','2025-03-31 18:38:48.935281615+02:00',NULL,'100.64.0.9','fd7a:115c:a1e0::9'); -INSERT INTO nodes VALUES(10,'mkey:6cb022f8457bd430cd8ca6e88f1d4e3678965fd0af659ef29c956fbaa81c870c','nodekey:b4dc05758407f598a2cb77f98d5c4fca9dc3820e31976887942b54a4d44e43bf','discokey:fb9dba09bf8b64d2ef3b35d917d41d5208aafb2b013a53da0d2196a152a3cf9f','desktop-07','node010',6,'cli',NULL,NULL,'2025-04-05 06:30:41.564651006+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["2.116.83.236:33635","41.172.94.101:32111","223.204.123.230:38090","[9394:b33d:67e:fde5:21f8:77e8:97a8:bbb7]:47884","14.155.207.231:4806"]','2023-06-20 11:18:35.905417341+02:00','2025-04-05 06:30:41.565251484+02:00',NULL,'100.64.0.10','fd7a:115c:a1e0::a'); -INSERT INTO nodes VALUES(11,'mkey:e76a4d43284c0d19fbcf84b38d4bbdc5f978f8922f2ef6dcbe15170a3860a7d8','nodekey:85652faac3a65cb620073e7402506930daa194a7a3555c335741ee65a920530b','discokey:3a6aac7afdc916c9bc84b77b90c521c0e08f3fbf06ce89cfb2281271ec070b1a','lt-86','node011',7,'cli',NULL,NULL,'2025-04-04 21:59:20.16340839+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["22.207.212.228:27547","[2c3e:2260:8c34:275:2287:b721:1bdb:d462]:59540","70.115.64.117:6906","147.25.71.84:48460"]','2023-06-20 11:35:15.063855316+02:00','2025-04-04 21:59:20.295244825+02:00',NULL,'100.64.0.11','fd7a:115c:a1e0::b'); -INSERT INTO nodes VALUES(12,'mkey:6dacfe30ad1cab87eb1ef96c1d574cdb59d71aa0695483c7f17aef6b26f446b3','nodekey:b1f007ca3a9d6637fa1388c71397a537d4cc3b94918c280dd9068c3871861c02','discokey:3cf2b60ecdfbf0b633afb7c89de5c05274933adb6a71fdfc28fdd79c0814ee51','lt-86','node012',5,'cli',NULL,NULL,'2025-04-05 07:56:06.898009216+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2e1:2c69:85a7:8894:dfe7:6f6c:a382:4fc]:6426","[e758:c726:ced4:43ea:60fd:3ad8:1b8b:b514]:15200","[314c:5aba:d6ac:63d3:4de3:2c46:5e60:903b]:30789"]','2023-06-20 18:22:35.061914624+02:00','2025-04-05 07:56:06.898660893+02:00',NULL,'100.64.0.12','fd7a:115c:a1e0::c'); -INSERT INTO nodes VALUES(18,'mkey:88ef4bda4139c7c09e33bbc866b88e069c51187346d8095238179c82502dffcc','nodekey:7ad274415b296099222e0ca334eb849e0e6595ffa54dbcc94a0fc9ffa69d1747','discokey:388c1077bdd17ce1f86b92936609756ecd3e8b21ab3f4b8a84da7727dd2b2552','web-70','node018',9,'cli',NULL,NULL,'2025-04-04 21:32:36.521819531+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["33.51.84.119:46756","[c844:704e:c3c0:e709:8f94:3564:1b4d:d2d9]:8357","[169:f098:49e9:99af:27bb:802f:fc22:bd59]:63749","[2acf:3f32:1763:5d21:88a:4af:8779:1b1d]:36283"]','2023-06-22 08:30:54.08720463+02:00','2025-04-04 21:32:36.56813156+02:00',NULL,'100.64.0.16','fd7a:115c:a1e0::10'); -INSERT INTO nodes 
VALUES(20,'mkey:338d7e5ca6c5a8c65d7174eb1af312107408a16b7769b7088c8acf64134d9722','nodekey:440bd6fd6eea65489be99a6a95f4b6315ea55fa31fa1d6905a53ba4c046e0817','discokey:80890f521819e60c3778df342a98a9adfe2f46c15b0f55aa14f17df002aaa398','desktop-48','node020',11,'cli',NULL,NULL,'2025-04-04 22:24:36.219266862+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["191.156.96.176:5489","[166e:c30:4d73:440f:456a:22be:71e1:142b]:33036","[6d05:6bb8:3bec:c76c:1890:ca9c:2684:1bbb]:31223","173.218.108.179:51272","143.8.135.130:62709","[1af1:db8a:7b81:52eb:f4e9:e74e:905:79d3]:7418"]','2023-07-10 12:40:34.838579199+02:00','2025-04-04 22:24:36.219367472+02:00',NULL,'100.64.0.14','fd7a:115c:a1e0::e'); -INSERT INTO nodes VALUES(21,'mkey:ee8b7cc5b7ece1e1a03fdf355a2356928cc272be6f9570866e8ab7d43c09036c','nodekey:2249e3a64297951858ce623341c49d6ebcfe94a8fb5e4edde4889b8f50deea48','discokey:555c31b79c570eafb2f3be56f01cc73913c4886081fc75b868d14340380e5a42','desktop-87','node021',11,'cli','null',NULL,'2023-11-20 07:19:19.447470862+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["207.187.74.99:12216","63.3.67.150:22907","[77ac:562:7c4c:438a:6ec:be48:743d:47b1]:11349","[33cd:731c:f110:9b45:66b5:de81:1daa:757f]:5997"]','2023-07-10 12:42:43.290469734+02:00','2024-09-18 16:15:14.03886353+02:00',NULL,'100.64.0.15','fd7a:115c:a1e0::f'); -INSERT INTO nodes VALUES(22,'mkey:2894a7babdc030bac1e4ec0e14aa4b6b67ebe382a1773e0ff4aa8b16ecd443a3','nodekey:1e219f3cc03eaa868018b9f6870d108317c31760fdc38ad4bec7200d58379f26','discokey:a81e998383baa23adb3e1e6757ef0c4b960095022f00b500b57f53c6147c71e4','laptop-48','node022',6,'cli',NULL,NULL,'2025-03-31 18:38:46.693334487+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[736d:a0a:e343:6b2c:b48b:7510:ace2:edc]:60377","82.193.154.129:23713","[68:80fa:7b30:3a0:d1fb:3e89:2c73:256b]:30308","[ad42:3917:5edb:3e85:732:1c71:a19e:9fe0]:11799"]','2023-08-05 12:08:48.132161695+02:00','2025-03-31 18:38:46.801242352+02:00',NULL,'100.64.0.17','fd7a:115c:a1e0::11'); -INSERT INTO nodes VALUES(23,'mkey:9ab69494a274cba5b22cbcf35b3ada57d4b3d2ee03930a456f80fb34dc69064b','nodekey:0345912b4e39cd9d03dec55edec6fba7f8aca631312884b49538550475332182','discokey:a2c1be2ae50af07e5b96b0a3ee2d3a1b0148fbfc5c1a283bc787e31410d6f74c','web-51','node023',12,'cli','null',NULL,'2024-12-07 19:25:56.935152754+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["9.237.225.227:64622","[46a1:e1e6:6b82:6599:acc7:2273:afea:f09f]:38018","93.129.193.97:13477","184.41.32.72:1026","[f136:8b69:185a:7ec7:94a1:ca58:5d95:ba9f]:21119"]','2023-12-15 08:05:56.592241745+01:00','2024-12-07 19:25:56.935317645+01:00',NULL,'100.64.0.18','fd7a:115c:a1e0::12'); -INSERT INTO nodes VALUES(24,'mkey:94d0eadf801d88022019f9353b7c29cafdd060a269a0245396e01f1293953cb0','nodekey:1db769d2ca42bc0f8b4e18b5c89a31fb4595edf0592ce1a534b713ba503f07e9','discokey:912056b78049017764c78a047b65feb3c3b934fa26ac62b44f8d7d561711321a','srv-85','node024',12,'cli',NULL,NULL,'2025-04-02 22:03:44.023075093+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["21.107.170.69:30767","212.18.227.41:50735","[f774:e89b:980f:975d:fd3e:93f:a9b5:2034]:46054","[7225:d0ce:d29a:59d8:f596:b945:9a02:29ed]:43419","[5a76:d5b0:b51d:6b56:afaa:3968:277b:192c]:19250"]','2023-12-15 11:14:36.765183054+01:00','2025-04-02 22:03:44.02329052+02:00',NULL,'100.64.0.19','fd7a:115c:a1e0::13'); -INSERT INTO nodes 
VALUES(25,'mkey:247cad9ffd7d108f553e7658db14bbad992332458abb9b4bc5a0c0d3a13ead9f','nodekey:bbfb569ac8a34cf4d59d2bc13d0f03660d24aedf22c61b56510a65d20bdf0812','discokey:61f92520602549b9bc684eeed3186f608a38e13794d06d4b8d165c4efbb69a24','laptop-29','node025',6,'cli',NULL,NULL,'2025-04-05 07:56:00.717501781+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[5b7b:e424:ea77:aa8a:f835:d6bb:2e1f:52ea]:19152","[bb06:d322:7cec:426b:ef38:55d0:e7ce:95bc]:31597","[320d:6314:eaa2:a098:8594:977d:27b:8442]:12827","163.235.127.125:61230"]','2024-01-05 17:32:40.940566279+01:00','2025-04-05 07:56:00.718576252+02:00',NULL,'100.64.0.20','fd7a:115c:a1e0::14'); -INSERT INTO nodes VALUES(26,'mkey:d623495ccdc12a3f219322cf53458a85c29b8795ceff5cef5338bb9c26e329dc','nodekey:781ad44c939f14168ec1c22c3fa2c5422f5ff1a5b0fc163fa80e6ace4e20c4a2','discokey:c474b4d7b9883e70466b6240b885a2a71dbe6efb0e8c84b85277beb584f25690','email-47','node026',6,'cli',NULL,NULL,'2025-04-05 07:53:33.912069529+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["173.144.3.182:33138","[975a:978:5cfb:5e5a:1b6:d272:a1c1:50dd]:3245","[e6e4:7cfc:9ce:b47:62f5:b64e:d2e4:9015]:42369","[26c6:541b:69af:974a:8ea8:6bd0:7c4d:5a82]:10571","[c898:1525:ebe8:704a:243b:de9d:b8c:ea69]:22747","32.167.123.100:57075","[2c67:a9f3:3163:c675:3386:2447:1cc:ed3a]:42590","64.137.67.1:56198","163.42.128.241:47535","49.160.22.202:42427","[1526:e34a:8857:394e:bbe0:c043:4b37:68d4]:5245"]','2024-01-05 17:34:19.811670479+01:00','2025-04-05 07:53:34.448650057+02:00',NULL,'100.64.0.21','fd7a:115c:a1e0::15'); -INSERT INTO nodes VALUES(27,'mkey:d95c13394bdb912ef51f3123fdb0495feef567dc13724efd8dea17b33ce7aa1d','nodekey:fd8be528ea41d74697d46084c8cf560fff1c21b3193bec752ba2d60457396b19','discokey:9ec066ac46492f546a67be34b417eb469dfc95aff8b7be28be3bd8b36bf98653','db-72','node027',6,'cli','null',NULL,'2024-01-16 14:32:21.570104175+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[5760:dc4e:eac8:5c38:6c9c:8451:98d6:4e9c]:21573","[e350:bf94:9128:bca7:8c74:ab49:c419:4c50]:18052","222.155.102.116:42746","[e32f:122e:8a:e473:4b83:8ec9:61db:60ce]:37054","154.68.57.163:54048","[c5a0:942c:ed88:2277:3c9a:4f65:a25e:5b40]:27852"]','2024-01-05 17:48:25.466030859+01:00','2024-09-18 16:15:14.040766068+02:00',NULL,'100.64.0.22','fd7a:115c:a1e0::16'); -INSERT INTO nodes VALUES(28,'mkey:08347db9fbe8e5e7a968c2ab9ddd1ff530ea98163c0bab7769316307545f4d46','nodekey:b6b03efd7deb52dbe0298708ea37400510ab7589b0b1787eac1a5f9b50aa7c80','discokey:6a0ab2f3329d9fe5f50c76f9f3f1d2e797574af93d823a511df57f45414f647e','srv-42','node028',7,'cli',NULL,NULL,'2025-04-04 14:08:44.348585699+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["42.99.149.47:5231","146.122.164.252:36824","16.15.205.75:58574","82.136.149.34:28707"]','2024-01-15 09:34:54.847632697+01:00','2025-04-04 14:08:44.348711737+02:00',NULL,'100.64.0.23','fd7a:115c:a1e0::17'); -INSERT INTO nodes VALUES(29,'mkey:4a6e46330ccd4b3026337cdef37485c27592babda329107565c2a19d97253dcc','nodekey:17de47da1955a8a034788ae671c34fb10083568bf9e2f7494276cb2636065246','discokey:449f5f35a3507974cf03e3ac5c055d890de5122f57a7c2d84a0d3b324998607e','desktop-17','node029',7,'cli',NULL,NULL,'2025-04-04 21:46:12.712212086+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c2f1:82ab:922a:9b9d:f53d:a0b3:bccd:f6af]:299","[d28b:7b33:1f88:7583:7ed7:a923:4c90:9ee0]:62318","[68e3:a070:f02c:708c:a057:b579:aee9:4d25]:11747","[7224:b76a:cd04:e6d2:67fd:fec0:2f14:1837]:29002","173.82.71.57:3159"]','2024-01-15 15:18:12.2871978+01:00','2025-04-04 
21:46:12.807613421+02:00',NULL,'100.64.0.24','fd7a:115c:a1e0::18'); -INSERT INTO nodes VALUES(30,'mkey:26f1059c45c12a8e56cd57ba6db33dfe104eac76103327af49622cb7810fab7d','nodekey:b157bbdc37aa4634b8d69d6fbc7670780fff69337713176a11f893afafa2a9bc','discokey:4c8c175d0e61ca8f7b2de92a1033cb9be95eaeaf09e9395c767afa5afdf2d9b5','desktop-64','node030',7,'cli','null',NULL,'2024-02-05 12:14:40.065688294+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[8a1a:5c17:bb81:bb69:c7db:3513:f14c:ca2d]:52077","123.235.220.59:59925","155.210.184.45:60093","[963:536a:33e9:99fb:c204:d59c:29a8:ac18]:39005","[e84c:35c4:4bd9:a100:c271:8921:ca7:5901]:21828"]','2024-01-15 15:21:43.217136004+01:00','2024-09-18 16:15:14.041757308+02:00',NULL,'100.64.0.25','fd7a:115c:a1e0::19'); -INSERT INTO nodes VALUES(31,'mkey:ef048095dd3e937a8617f18797ed42d2c4513d07126dc419a466f257982113d1','nodekey:1c6c367df9b85ab5e1618c70c0a4f8b22e2eeea77523b40059c3ebfef0f4057f','discokey:80fb63a98202437504c700e582eddf800b02dc654577bb95796c60cdf1176793','lt-55','node031',7,'cli','null',NULL,'2024-02-22 08:35:27.098819037+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[cc02:5592:3738:a2ab:8c55:97bf:a5b8:24e6]:41616","129.130.98.189:47833","[99c0:d769:2f38:10e9:c01a:e3d:898e:afc]:17505","[3423:afbf:aa2:e156:81df:bbd3:412a:df65]:5125"]','2024-01-29 16:05:35.338524634+01:00','2024-09-18 16:15:14.042191514+02:00',NULL,'100.64.0.26','fd7a:115c:a1e0::1a'); -INSERT INTO nodes VALUES(32,'mkey:5b0e391e12914bcbcc6b415d9dd72937d74f9ce92a22c6ec938eb749962ee6ce','nodekey:4fa73acfb54cb140dad6b02bc853f6360f5ed160f819dc57701ad9a4caae9782','discokey:d3e90723a3b90eea513e7710ccaf25f68c0a2d60a85e3d4b9e4385335fddf8e9','srv-15','node032',7,'cli','null',NULL,'2024-04-09 13:59:43.37062537+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[14bd:ee53:6776:c1c0:124a:efbf:38cc:b312]:62180","85.234.248.186:20429"]','2024-01-30 10:41:58.31917869+01:00','2024-09-18 16:15:14.042506082+02:00',NULL,'100.64.0.27','fd7a:115c:a1e0::1b'); -INSERT INTO nodes VALUES(33,'mkey:ba179b51f3540e619eaac9a6d4bc67403daa7c3cf28a6b579087cd6d284501b4','nodekey:246fadfe2a3606f043aca1fc1fa31b79e9eef9c21ec67e72425e05576707b3c6','discokey:5c64c1a08b88425b6e1e2971f950708d38f951d2d5eed2b85df8dd7d1bb88a44','desktop-32','node033',13,'cli',NULL,NULL,'2025-04-04 21:35:50.079412269+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["106.67.113.92:45634","85.199.78.110:6716","[bc29:715c:cdce:f1f6:8630:4929:ea1e:5c03]:10918","[af8:f5e8:32f0:75ee:d351:c0e9:78e:93d9]:12624"]','2024-02-03 16:33:26.706408143+01:00','2025-04-04 21:35:50.07986885+02:00',NULL,'100.64.0.28','fd7a:115c:a1e0::1c'); -INSERT INTO nodes VALUES(34,'mkey:6fe02bbbdd40bb09e8aa74cf66b40b5b94f227690fd804a33242f56e18bee64e','nodekey:acc333e66d45e660ff4a31a152340d391547ebef1056f5f046624b629389bfd3','discokey:cf4a1217f85585d74fc5138af6ed9b78a29a9d4f12ad2e1beac660816d8c7b3f','lt-74','node034',13,'cli',NULL,NULL,'2025-04-05 07:35:41.145084059+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[1092:57ce:a60b:141d:ba79:79a3:91ed:66b8]:11037","[f766:14e5:6acc:ba40:8b28:59b7:ef7f:cd51]:46293","[c:e6fc:18ea:2e45:6bef:c1c1:f84e:b1e2]:62296"]','2024-02-03 16:42:32.683785672+01:00','2025-04-05 07:35:41.16521778+02:00',NULL,'100.64.0.29','fd7a:115c:a1e0::1d'); -INSERT INTO nodes 
VALUES(35,'mkey:f95240eb788522b37f5513f0253f8bdab2ccf7594ffbf9bb77945fd67e598c05','nodekey:f5a6f47f5a34b4f4c6883e46fb52c0a5f7ca2ce17397cbba751efa8736e0066d','discokey:4dc728a0c0de0ce87530a47dda426d5bf6ad1b2f6cdefcc819c29c1015c3131e','web-44','node035',5,'cli',NULL,NULL,'2025-04-01 19:27:16.8887994+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["136.113.252.147:325","139.180.210.231:32286","171.29.58.211:2336"]','2024-02-03 16:51:52.010016072+01:00','2025-04-01 19:27:16.890057487+02:00',NULL,'100.64.0.30','fd7a:115c:a1e0::1e'); -INSERT INTO nodes VALUES(36,'mkey:8ac09adfe44b8e60665601726c99627216e71b49ef7e8cc58ae6ba4f997c8652','nodekey:62d8af3310a7f27d84b931eaaa7ee110fe7fe46ea3ca8c817e47b9ce456c9b4a','discokey:bd31ae97729ac879d9b57866a69abb521248fb7fd5663d2149857d4dbf023d85','web-93','node036',13,'cli','null',NULL,'2025-01-19 14:01:48.956567669+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[7e9e:219a:507d:d3d:cc77:6195:7ce3:b30e]:25018","203.251.190.122:19081","49.144.40.103:7674","[a641:3f06:63bf:b716:4658:1e5b:ac6f:e14]:13490","197.206.147.31:58506","[b1e8:68a0:4017:a243:61d0:b303:7860:18e]:28633","[e5b8:6c78:ce2f:cf6f:13fb:401f:8223:3f04]:54121"]','2024-02-09 12:34:57.879970954+01:00','2025-01-19 14:01:48.956830267+01:00',NULL,'100.64.0.31','fd7a:115c:a1e0::1f'); -INSERT INTO nodes VALUES(37,'mkey:c371756901cab84ae432a7d201e47232b008ef55effcf90341f24de7dfc55bb0','nodekey:0f858ae8a6492e1cc63fb502c6453ae90e8723f8b4ea8eba8c67f9f5114dec39','discokey:b6fc59b2a9ec7ddf1ae9573245cd3271259f964cdd730b811fabb019eccb3c81','db-47','node037',5,'cli',NULL,NULL,'2025-04-05 07:56:56.882618261+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b8c2:c792:3329:2c93:f94b:a948:7d11:b9f2]:908","[5925:353c:bd9a:f42e:1916:45b4:c170:e515]:18400","[c2a5:cc2f:e2e8:573f:7b01:4aa:8a1a:fd71]:32146","193.82.125.68:38601","[e9da:1552:e478:f920:c3d6:ff41:9e86:e27e]:48481","[c31d:dd18:321:589:11fa:d85b:695f:1fca]:43064","168.4.78.117:48474","90.45.218.0:29326","90.233.46.138:29846","8.137.133.59:22781","[cb5c:f8d4:ccb0:8333:8ec:170c:a2b0:941d]:26248","[5802:df5c:8853:4851:6dcf:cb83:8208:d143]:24568"]','2024-02-27 12:14:40.452601042+01:00','2025-04-05 07:56:56.896138732+02:00',NULL,'100.64.0.32','fd7a:115c:a1e0::20'); -INSERT INTO nodes VALUES(38,'mkey:465de79eadf29db3136d0b230c481876edbe7e05a803281a523618c586fd3c01','nodekey:01614d4aa1bb0be1f6b41e10aa9ea1c7af206ef9a2571bb68c24e85bea697216','discokey:d3eea5725d8408db2ffc560d74c70da0a70c3f3dc40a0d6a6fad9afaaadee9f2','srv-59','node038',6,'cli',NULL,NULL,'2025-04-04 10:17:46.472491162+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[8125:ebe4:83e1:4806:f061:5bd9:a08a:8ccc]:6467","114.199.17.195:52817","[de36:fc02:8cf4:c60d:824a:9963:2440:5b5d]:39343"]','2024-05-22 08:08:16.045350656+02:00','2025-04-04 10:17:46.473325907+02:00',NULL,'100.64.0.33','fd7a:115c:a1e0::21'); -INSERT INTO nodes VALUES(42,'mkey:ff2ce8eb04d99e45d7babacacc758ec8e031474fcf494b4c0349c7084cc62365','nodekey:660bae997d6da1efccc0135ed712176a379f59ec9ca7a02eddea607a8855cebe','discokey:66ca4169c6be68079a1244ec506d8c922af4133187e0117a1800ed4791ed9a67','web-21','node042',14,'cli',NULL,NULL,'2025-04-03 14:18:40.656342991+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[3a3f:717c:1ea3:7c82:81e4:5d9d:765f:7da4]:6540","[bc95:459c:aab:499d:4354:d239:ef14:dcb]:14429","[f673:7912:6d5e:7244:62ca:9500:78f5:31c0]:27677"]','2024-07-03 11:12:29.418355657+02:00','2025-04-03 14:18:40.709062776+02:00',NULL,'100.64.0.37','fd7a:115c:a1e0::25'); -INSERT INTO nodes 
VALUES(43,'mkey:df2afe1643d44e7b5d2b4fe1b78e8b0c8c7871b50b1b580048cf6b184cc99143','nodekey:4512e240f89db2671db1ad974c151afad41491d7d11486b644aa2cf66bc3410e','discokey:f4b10325d44499f510cf0bd61d7c9d116b10c14434b8fab4559f0797f033936a','lt-12','node043',14,'cli',NULL,NULL,'2025-03-31 18:38:47.400103948+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c91c:4366:d3bd:a993:b5e:75c7:ef8e:8166]:46974","191.185.253.85:37427","17.31.200.70:26794","[183d:f2d6:b80:36c5:318e:64e9:7f4a:b389]:12440","169.173.198.12:15274"]','2024-07-03 14:48:50.263910778+02:00','2025-03-31 18:38:48.06470771+02:00',NULL,'100.64.0.34','fd7a:115c:a1e0::22'); -INSERT INTO nodes VALUES(44,'mkey:7d71974b3095c1ae476074cf0e7e5628088873768b2a24b827c50699dc9d61d5','nodekey:bf967004b398b7634581159dae489f4f6f9ab3a56b35d5c5bbf5751b2ac73bc8','discokey:5be6e1d98febe4e50f1b907d23635c2bd342e9d5ec9524df24c4349c752ac923','db-53','node044',14,'cli',NULL,NULL,'2025-03-31 18:38:49.196820572+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[8dab:7bad:415:97d8:4ecb:2a7e:14ae:64f2]:15771","[b501:efaa:c73b:b492:c3cf:865:d55b:9557]:26172","162.210.141.106:52036","[84ef:a164:d1c:d20d:797c:753e:f8fb:3108]:5767","[c15a:e322:a721:c9a7:a662:ccb8:26b8:e1c4]:63192"]','2024-07-03 15:23:48.066044194+02:00','2025-03-31 18:38:49.528314036+02:00',NULL,'100.64.0.35','fd7a:115c:a1e0::23'); -INSERT INTO nodes VALUES(45,'mkey:3f3fa2155a168125fd4d22d58f30d1d101796d67dfbe317a3808721019282ce0','nodekey:edccaae68a3381b0f736db6a23a9d45a8747caf1a8427ac5d680205cc4fed415','discokey:7a91312730081d203379ecdf264707a4325fdf272e1338b0abf770f0050e33ae','email-63','node045',14,'cli',NULL,NULL,'2025-03-31 18:38:48.272123393+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["161.33.154.107:16255","[caa0:946e:b130:39aa:31b0:8f9:7526:4960]:1729","120.102.247.255:41936","28.177.95.32:7841","[fb23:5998:c63d:b6c4:e86a:96b2:61c:e110]:51902"]','2024-07-03 15:54:01.706018896+02:00','2025-03-31 18:38:48.790175394+02:00',NULL,'100.64.0.36','fd7a:115c:a1e0::24'); -INSERT INTO nodes VALUES(46,'mkey:d084c15ae8c10c1de1d625c7292df7da48e4fcfd1ad860ca140ce9259254a1fa','nodekey:6ea87964a36dfb6e359a0721fee4e988461cbf585410fd2a49321d49e215eecc','discokey:bc27f1168519c55a34986345caeb0f666d18990faa74a8b84b317d33058987b5','lt-93','node046',14,'cli',NULL,NULL,'2025-03-31 18:38:46.808546326+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2077:6b7:927e:5e5c:a0f0:3352:1490:e1e1]:47997","6.170.76.62:28497"]','2024-07-03 19:38:07.783745318+02:00','2025-03-31 18:38:47.865608242+02:00',NULL,'100.64.0.38','fd7a:115c:a1e0::26'); -INSERT INTO nodes VALUES(47,'mkey:6a001dd5eed681d035dc3235ead186e6722d92e80695430ec542664659a82b68','nodekey:4a2155f4fac513f7382187a6bced77ab0a1a1c8fc4bfa93f31eb3236bdeb0308','discokey:76377c4b4a67de97ad3ec8af288ef68b7c917da1845351f9e49f7b588bed2b3d','email-54','node047',14,'cli',NULL,NULL,'2025-03-31 18:38:47.867863816+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d722:f972:c192:4cff:3916:e1e7:6d02:236b]:55695","101.58.151.244:39467"]','2024-07-04 10:38:08.344092869+02:00','2025-03-31 18:38:48.178140996+02:00',NULL,'100.64.0.39','fd7a:115c:a1e0::27'); -INSERT INTO nodes VALUES(48,'mkey:479cd91a3b16f2636c609f67817778a3ba3f35a33c5f3a6fad9ef86166ccae38','nodekey:5bf0038573184bd6031b88e9412f42eda5a8ddd2ec5761e1f22b9125b3c349c3','discokey:c7e977fadfc1bbd63183a2a2d82f1fb1f1700aa1b36c03b8c1754d46560fdfd0','lt-53','node048',15,'cli','null',NULL,'2024-10-20 13:53:33.831192385+02:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["60.228.0.149:54791","45.103.140.120:38129","59.24.21.245:44110","[def3:2f58:1a57:4620:9775:7f2c:3aab:52a2]:50978","[9236:c0c2:b9b4:e60b:99ff:582e:50ee:92b9]:25915","84.253.15.156:7777"]','2024-07-26 08:09:56.608302315+02:00','2024-10-20 13:53:33.831387627+02:00',NULL,'100.64.0.40','fd7a:115c:a1e0::28'); -INSERT INTO nodes VALUES(49,'mkey:4d5f9a38fce205b2dfb62f42464172a1408e00221317b01332714a4db55e6ebb','nodekey:415009b46e40247f7a60d9e0f53c938b6a032fd4c11ae70d824e2d95cf170957','discokey:280e5896d458bd63044265484d38ad711ef52a68981f83281ff693e837421a13','web-11','node049',16,'cli','null',NULL,'2024-09-19 09:07:18.28136023+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[475b:3641:5abe:b0a8:8a68:dd51:8e80:3efa]:574","153.151.233.195:41734","[19b0:8be2:708:d27f:4d3d:3c93:9979:e9e8]:50438"]','2024-08-05 17:32:41.937626584+02:00','2024-09-19 09:07:18.281618912+02:00',NULL,'100.64.0.41','fd7a:115c:a1e0::29'); -INSERT INTO nodes VALUES(50,'mkey:a35a00c69e2fa10d442ae03a4d9312ee61e56caaa030d1303a4ebab2b26c27eb','nodekey:39bf208ba118345630aecd3c6ed042d40b1e6b4897c9548a21a473c28352ba95','discokey:437f4dda6cfd2d269543f32dfaac1333b8660cec03b279728e296f6cbc67ca69','db-28','node050',10,'cli','null',NULL,'2024-08-07 10:10:06.595550455+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["150.170.146.70:3311","139.117.239.209:33788","[2eac:ffa0:99fd:d109:c120:a35d:ed48:eea3]:51544","[7644:c348:2969:b90:e84f:94d4:b629:f266]:49336","169.247.3.239:36225","[5b4c:bb2:d43f:6ff4:d494:8616:a66a:d059]:50360","203.180.122.83:13344","63.162.234.32:11653"]','2024-08-07 11:50:54.144157179+02:00','2024-09-18 16:15:14.050033969+02:00',NULL,'100.64.0.42','fd7a:115c:a1e0::2a'); -INSERT INTO nodes VALUES(51,'mkey:38d8adcfb8f6217b6dce62c89e816da260f1b0747a9a980e0ede5de9412aab4b','nodekey:98cb6a87bfa7cd614051008583e5f0904552b44d98854a0fb5d8e0e3c3115949','discokey:104eda6ea8f8fbeae20617662e601bd0acf102511d17ffc829c08db6a7f78872','laptop-32','node051',14,'cli',NULL,NULL,'2025-03-31 18:38:45.515173008+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["33.182.22.166:13283","181.10.243.176:14596"]','2024-08-07 14:19:31.156780417+02:00','2025-03-31 18:38:46.360795928+02:00',NULL,'100.64.0.43','fd7a:115c:a1e0::2b'); -INSERT INTO nodes VALUES(52,'mkey:3a83699597e3bee8b15df3733dfb54b1d7af3ff9ccc7a4b65df4b0feb4a3f25c','nodekey:86cf80f0bd8bef4888116f2bab1b5ac4b11bd6fa637f7ef0886cd0aa265b2bec','discokey:28e2d745a5216bdaaeb83b4483b19d1c2ae0c149b97c08c06a4b90ca2c742a00','srv-53','node052',17,'cli','null',NULL,'2024-12-25 17:27:58.515851096+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["72.183.239.155:49891","167.46.143.131:51781","[9e9f:93d:a8cf:c86f:17e3:9e2b:dc42:efa6]:3616"]','2024-09-22 15:48:41.385301399+02:00','2024-12-25 17:27:58.517153789+01:00',NULL,'100.64.0.45','fd7a:115c:a1e0::2d'); -INSERT INTO nodes VALUES(53,'mkey:d1fda0c437a79b121817c1c24e88bce18fdab566d5e74678b8730cb48c1dd574','nodekey:1463eab453b78867448baaae38b88547334df5070c30e77a08cd3410bd721d8a','discokey:4688405d194600e9c1a783e71e274c3e42976a5021d81db4ba4a0506a768cbca','srv-86','node053',17,'cli',NULL,NULL,'2025-04-05 07:53:50.501049438+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["69.219.0.37:43476","[6084:ee60:1697:864e:4a61:cf6e:e9b6:4c05]:63961","[fad7:2601:d286:4772:3cce:6e89:f66e:9eeb]:5430","216.60.124.11:45041","[d513:c472:d7a:f316:f610:10d0:9851:4feb]:3955","156.228.105.157:61542","102.126.185.0:50694","[156b:e934:d171:2693:8db4:f193:a58c:17b6]:24190","79.255.179.99:56057"]','2024-10-28 
10:04:50.084492941+01:00','2025-04-05 07:53:50.501509322+02:00',NULL,'100.64.0.44','fd7a:115c:a1e0::2c'); -INSERT INTO nodes VALUES(54,'mkey:4eee40f0117b3b8580f4cf3ec7be063bb4d82173cfa3c015c8932431940d9c92','nodekey:4241ffb9768a0e31f38cc0c5b2f240c1de72b8810e7ff3705964316e63c1a055','discokey:3ad8f2ebea95d15109a0adfc69b3a6b566ce02ce82df7a32e68f278538bc05a2','lt-15','node054',14,'cli',NULL,NULL,'2025-03-31 18:38:45.793020142+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9fe:6ddd:ab0:1193:7bb7:4a4b:c036:b910]:63516","158.169.112.249:5480"]','2024-12-09 17:10:55.363593066+01:00','2025-03-31 18:38:45.816484332+02:00',NULL,'100.64.0.46','fd7a:115c:a1e0::2e'); -INSERT INTO nodes VALUES(55,'mkey:22e4ac4e25b05db4e94b5211e08ed14d983972f92e192f15e0231ff355fd46bf','nodekey:225f907deaffccb674416dc4ba532f0b20304e86d57808eef2d96f15211f2309','discokey:e695b849380f698178e7457311dca3cc10ae13a79b0a71d22130e7c401c4f246','web-26','node055',18,'cli',NULL,NULL,'2025-04-04 15:19:04.171978054+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[ecbc:a9b6:cbb7:695b:62ad:27cb:36f0:2380]:27992","135.238.213.41:64084"]','2024-12-10 13:56:39.287449662+01:00','2025-04-04 15:19:04.174839842+02:00',NULL,'100.64.0.47','fd7a:115c:a1e0::2f'); -INSERT INTO nodes VALUES(56,'mkey:ad967c824d2c235ed17b1437f6756d4354ad76a2d27b575c0257ecac9a3a5fd2','nodekey:6196332e325e3f1d0c58acb182b48b070340dd673247c616a28d60a7669f5a4a','discokey:3a67fc6fdb9288679886c2d45dba4d1e1ba014af2c48f0f942acd39ef0e0ee64','email-56','node056',19,'cli',NULL,NULL,'2025-04-05 07:53:56.457113205+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["122.41.198.69:28824","[5437:6f4:94c1:c6c1:43ce:7a41:e5c2:65d0]:57161"]','2024-12-17 14:58:39.429211911+01:00','2025-04-05 07:53:56.457486062+02:00',NULL,'100.64.0.48','fd7a:115c:a1e0::30'); -INSERT INTO nodes VALUES(57,'mkey:3d9e1a7fc884de338b8391bbc5d3398c7d0ec4c36388b756df5b2304b727fa11','nodekey:9cd5d41c04ae83ade43363dd165c792cbc5386e0d3e005311ca7c79ae7f7acaf','discokey:2d6fad1f43bc97a6d48bcd400c829db2d4f47291b12a59dcbe669d8b1d341c1b','srv-70','node057',20,'cli',NULL,NULL,'2025-03-29 11:27:16.332745436+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a34b:36c0:9f18:1062:2269:92c:1c83:6ec6]:56902","[41e8:60a4:229:a09b:efd3:c73f:b885:d235]:39533","44.230.54.249:64083","29.229.203.183:13","153.68.228.171:36559","138.154.192.66:772"]','2024-12-17 15:17:14.26936913+01:00','2025-03-29 11:27:16.556356179+01:00',NULL,'100.64.0.49','fd7a:115c:a1e0::31'); -INSERT INTO nodes VALUES(58,'mkey:97bc215b57ea30368b04727189d5a09f60e682171b2b05355d046f86cd4aa30d','nodekey:33c0e1c36d9762397b6406cba0f71c77dc7d11b8838911c7ce1a1cb632751e1a','discokey:f8394e6be895b3c2261fab8cd62d1f28124d377df6b4b17dac4b9f18c54a3e00','desktop-11','node058',12,'cli',NULL,NULL,'2025-01-25 18:41:03.881898904+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[2ddd:7c1b:a237:41c8:3baf:e9cc:1f31:9057]:62916","[9c34:190d:7bf0:c940:18d6:f96c:9aca:8c0a]:8787","181.226.177.82:63703"]','2025-01-17 10:17:23.455895657+01:00','2025-01-25 18:41:03.882180987+01:00',NULL,'100.64.0.50','fd7a:115c:a1e0::32'); -INSERT INTO nodes VALUES(59,'mkey:4d690637aa5cdd6575d9dbb7fa0b55c83bece3e82bc689e73122c9120f871772','nodekey:8733914d48b0c34a0111ec5f633123ae4ec7de39e597077d659ab8f2354270b9','discokey:ff25bf692a69cc0e79101e1cc1b926ca97b6e3897e65c55e5bd3f09ca1ea6d16','lt-20','node059',21,'cli',NULL,NULL,'2025-04-05 02:21:31.304172899+02:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[57a6:fde1:814:36cd:f8de:668c:f5fb:4f60]:39714","[f08d:aeaa:107:fc17:5c30:ab9a:e31e:3147]:55879","80.27.167.78:21548","[536b:bf65:da86:c7ca:ecc4:c84a:1811:334e]:24087"]','2025-01-29 11:59:27.291048957+01:00','2025-04-05 02:21:31.304988854+02:00',NULL,'100.64.0.54','fd7a:115c:a1e0::36'); -INSERT INTO nodes VALUES(60,'mkey:552ca0e4c611df54a38bd150d7f6c4ef77c66625435073b3751718e44884b92a','nodekey:7b4b25d90935b79af2efc7f0499d3f16a49d5078997314de57b65fb5ce01cf1c','discokey:2118ebb5a0b1c12de379802db2b247fcbf516de56216c4d0ee7a0489dde05071','laptop-14','node060',21,'cli',NULL,NULL,'2025-03-31 18:38:47.60389811+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["67.131.191.86:14533","45.142.234.80:30901","58.251.220.64:42364","[d9a7:5288:3427:e52b:4eb5:794f:11d2:5d6b]:33898"]','2025-01-29 12:01:57.48748166+01:00','2025-03-31 18:38:48.228181117+02:00',NULL,'100.64.0.55','fd7a:115c:a1e0::37'); -INSERT INTO nodes VALUES(61,'mkey:f5eaa64ce3d0dbd7539fd9c3f9a494688a4a64b9a3d1fc0b9f40dedc14192ba8','nodekey:ccd6aadf68fdcc95d0d15f6654a0c6bfd5ebbf282491539ee63e5b0848de9ef5','discokey:a492c6d1d0b9de44f6adcf7ac919d49c269058d301b4be21fbb5bf4203b2fa8b','db-97','node061',21,'cli',NULL,NULL,'2025-03-31 18:38:47.847511392+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d002:74d:2b8b:87bb:25cf:5512:50fc:e379]:46829","206.194.47.198:21677","22.203.32.217:28664","112.239.138.73:43954"]','2025-01-29 12:03:01.464646336+01:00','2025-03-31 18:38:48.152116326+02:00',NULL,'100.64.0.56','fd7a:115c:a1e0::38'); -INSERT INTO nodes VALUES(62,'mkey:4cbd491b0977f3a8cc92ef9f20a7726b3984f3c0699429ae46a0bceec5febee4','nodekey:66a7e77700fb0f4bae5b282413bdb5ff7c375549dd85c4300c73ff3d8029b4a4','discokey:2e9034fc43993eaa0cf2bc21118d378bb02afb3d197a33cf9f0dd201d176b23c','web-06','node062',21,'cli',NULL,NULL,'2025-03-31 18:38:48.069494509+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["81.51.60.60:63220","12.64.198.119:51541","[5e99:3950:5dc4:45be:bab7:e7ba:2476:14e8]:43360"]','2025-01-29 19:23:14.092804852+01:00','2025-03-31 18:38:48.161442612+02:00',NULL,'100.64.0.57','fd7a:115c:a1e0::39'); -INSERT INTO nodes VALUES(63,'mkey:70cc01dc48021a01c44ab1a4bed313bb8270640ef7317fd983019c4b5e593cf3','nodekey:bfff30bca701176ec3b45b2991810cc437e3995207d924c896c4250e0efdb782','discokey:91eb7d9fa7688e3ed641adfae83be298c1ab261fd00dc2b4ad9cb4c269b623fa','laptop-27','node063',21,'cli',NULL,NULL,'2025-04-05 07:46:04.800277542+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["105.95.65.117:11546","[68bb:e8d1:72d8:8b02:608b:2284:c9b3:184d]:20876","[5c17:4014:ea6a:5528:b1a:61b7:5cda:5c5e]:21597"]','2025-01-29 19:41:40.535299057+01:00','2025-04-05 07:46:04.80130455+02:00',NULL,'100.64.0.58','fd7a:115c:a1e0::3a'); -INSERT INTO nodes VALUES(64,'mkey:ac14ce6b1ecd1a0d79bbf60d279f19d09baf0ebd409f099a401816516953eb30','nodekey:a711e6a5750ec7cceac6a771396b000766d9af8a34327bf5efec3216316e8afc','discokey:0b370e73e9970987dad3fd00317f51c68e4273fbc11b754ef7e84d4d6880002a','srv-33','node064',21,'cli',NULL,NULL,'2025-04-05 07:53:41.648931109+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["145.91.193.5:37126","[2fb6:ea0a:3639:a1cd:9075:a258:6a79:4f5c]:13944","[2384:3000:4d47:a9e7:54ef:99:52c0:e920]:39358"]','2025-01-30 18:18:57.519126133+01:00','2025-04-05 07:53:41.649324054+02:00',NULL,'100.64.0.59','fd7a:115c:a1e0::3b'); -INSERT INTO nodes 
VALUES(65,'mkey:899a5048a327c15cade8ca854986909ed306f37b2a7655785bbf838c99285ded','nodekey:11b61191a3fc660e078ddc10a65c29c51459b5d67351d99967a504ac6c791f23','discokey:124857d5d0233a30c595fab8e1e5ca8659c9c3c5164fbd084582e7d697c5f580','srv-60','node065',21,'cli',NULL,NULL,'2025-04-05 07:56:55.874050704+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["167.46.40.4:60291","102.143.230.113:46288","[b341:3259:c5dd:9f91:fd70:c616:3033:2b3b]:59081"]','2025-01-30 18:19:40.354692307+01:00','2025-04-05 07:56:55.875106342+02:00',NULL,'100.64.0.60','fd7a:115c:a1e0::3c'); -INSERT INTO nodes VALUES(66,'mkey:547a8b97c8c8227d9ba40174ce4d28d0e64f10fae3f39797c60e29742ed59e62','nodekey:cb4a9d1dee6f581036b7c262348cf2c4a96be8b2eea78f7805097e3f7b1a9fcf','discokey:c233a42a4afb5fbf921bf7b3fb4330e6c46273d18570aaf6b7254dbf3c67614f','desktop-82','node066',21,'cli',NULL,NULL,'2025-04-05 07:57:45.102165038+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["21.197.131.68:13997","6.193.99.72:20874","181.248.237.50:21695","171.59.96.217:25726"]','2025-01-31 12:05:28.65297301+01:00','2025-04-05 07:57:45.169550808+02:00',NULL,'100.64.0.61','fd7a:115c:a1e0::3d'); -INSERT INTO nodes VALUES(67,'mkey:eb72ad359958c98e28f93485b720b6edef80d1bc33c4f99d772fc17c766039e9','nodekey:9122bd587ae90984e3231e0f48e15aa8abcfbc1b3324becd1f3622bbbe534da1','discokey:7610d45f8d2d3d4d7774c43288a6d50cd798a832732613647caa15e96621a3e3','laptop-93','node067',21,'cli',NULL,NULL,'2025-04-05 07:53:21.399420055+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[3022:fc47:6534:7fe3:9d1e:8d49:75b:984b]:10998","60.241.100.13:15037","[9356:40f8:a98c:a1a9:18e6:e781:f169:4277]:63375","[c733:e842:4621:e95d:6576:9ccf:b2c8:1582]:41668"]','2025-01-31 12:06:30.121464114+01:00','2025-04-05 07:53:21.744746916+02:00',NULL,'100.64.0.62','fd7a:115c:a1e0::3e'); -INSERT INTO nodes VALUES(68,'mkey:18580a8353c2c11c364849a9b45141ad99c22e2052de3b07c0a3e095ef6ee9cc','nodekey:d375fbb50d508e31b73b4d960272b0d886df1973e1c00b71723d5dcd7c658b5d','discokey:905bd1003c6e9822d22177c87108107c8a84c9f4f99d0f6e452457b7fdc2f23f','email-60','node068',22,'cli',NULL,NULL,'2025-04-05 07:52:45.084976072+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[d89b:6894:a590:e3f7:d152:3a19:3826:ee28]:40749","99.13.253.35:45887"]','2025-02-03 14:16:55.56431345+01:00','2025-04-05 07:52:45.085842656+02:00',NULL,'100.64.0.51','fd7a:115c:a1e0::33'); -INSERT INTO nodes VALUES(69,'mkey:9562166c366f1968a5f82aca83bd8152d893541eed4d8682da0a540825cd44e9','nodekey:93ef8b713afeffc4324b0f9a99e185be61dbff43d6e5769efac6d82481b425c9','discokey:9aa3fe84188c317d17132a26604380ec4aaf7aed94b4b759bb6b643dbd53f8d9','desktop-61','node069',22,'cli',NULL,NULL,'2025-04-05 07:49:17.983292316+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["56.191.25.57:54747","74.194.16.233:59572"]','2025-02-03 15:23:16.312084161+01:00','2025-04-05 07:49:17.983430933+02:00',NULL,'100.64.0.52','fd7a:115c:a1e0::34'); -INSERT INTO nodes VALUES(70,'mkey:50329891edd2948b53a0905245386567c98b46bc0d566e39e1763768d51cd638','nodekey:e4ebfcbd1e1d653bda175a62b765c6a646b7844b3d6bd49a9955f96a9b284c6a','discokey:f59cb75bac53aef739bbd5be676808891ebb55550aa0560396dc0532658ed893','web-53','node070',21,'cli',NULL,NULL,'2025-04-05 07:51:00.943197398+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["200.133.152.206:14119","28.101.69.72:35202","52.120.18.3:13372","[dc9a:ed1e:f97d:b356:d5f5:5d44:c917:3077]:33994"]','2025-02-03 18:09:35.161109801+01:00','2025-04-05 07:51:00.943819644+02:00',NULL,'100.64.0.53','fd7a:115c:a1e0::35'); 
-INSERT INTO nodes VALUES(71,'mkey:9845e32e5bdac3fde9276125068f931ea329c65dd136081ddad62a644f41a795','nodekey:9a927389db6eb337e83a82882b5d4dc56667d1f1b2b774122f7e907a5511a95b','discokey:9dd163c600f6a522255812bce48ff492eb56771f74a9e40af7514375b221b86f','email-32','node071',21,'cli',NULL,NULL,'2025-04-05 02:20:31.545698826+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a57e:a4c2:c889:ff09:e17e:a0a0:2623:282f]:2539","137.63.201.132:39740"]','2025-02-04 12:03:50.32663805+01:00','2025-04-05 02:20:31.578007599+02:00',NULL,'100.64.0.63','fd7a:115c:a1e0::3f'); -INSERT INTO nodes VALUES(72,'mkey:e91a507e388c22494ec286192de1d6f5943c88a7084fbe1f0274f1d1f6169c06','nodekey:341a90749469a79de31683b37038f0bfad8aa8dceb80ad6850ed148f8fdef655','discokey:a2143d31409f3645542e4ec1c2982e6cb4cfaacbe7cb99111ef112afff1907da','laptop-21','node072',21,'cli',NULL,NULL,'2025-04-03 11:30:24.373825692+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["93.35.202.238:41314","212.130.175.245:30054"]','2025-02-04 12:07:36.231437299+01:00','2025-04-03 11:30:24.376835622+02:00',NULL,'100.64.0.64','fd7a:115c:a1e0::40'); -INSERT INTO nodes VALUES(73,'mkey:575f781bb36f7c93cde4d9808e3dedef2c8999864ea1d4e5ee73fdd3d53ac217','nodekey:00e3275941e7a0a73a1e5c44cea638a411c60bb2e257812e0f39152ec0c5a21b','discokey:69a8791ea090c8867078eba807d456ce8363aab46f0c998b327f4c7aaed091a7','db-48','node073',21,'cli',NULL,NULL,'2025-03-31 18:38:46.80095927+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["74.253.132.228:65062","20.112.75.120:15833"]','2025-02-04 12:10:26.50545127+01:00','2025-03-31 18:38:47.215533873+02:00',NULL,'100.64.0.65','fd7a:115c:a1e0::41'); -INSERT INTO nodes VALUES(74,'mkey:8a4990c19c547da50875ad00613118d5382717652b4ef9897a1d1c060c40a976','nodekey:4894fef93c5af5094e3f9d18b0c7c822b66ade3221829502c95d2e0ae88ae3c4','discokey:fbe18283aa4bc599c8dd8c82c5fe7b236f1f059bf0344ce9a28e1196258811a1','srv-48','node074',21,'cli',NULL,NULL,'2025-03-31 18:39:34.600462031+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["114.141.127.86:11858","[d05b:3c4f:8402:823f:409f:87b2:4dab:4959]:58420"]','2025-02-06 17:33:12.557302525+01:00','2025-03-31 18:39:34.632063451+02:00',NULL,'100.64.0.67','fd7a:115c:a1e0::43'); -INSERT INTO nodes VALUES(75,'mkey:fb413a3831a1db1639197bf6eb887a913ff044346d9fc07eeadc50aa96ae196f','nodekey:763f78cdd841a2859c2a7e1b00129ed1a527e97afc193da93d864263fb048c0e','discokey:1c658f44f2e30f60cd35d6d730625710bb86ca1873f0cc55f6a08f0fc62bd814','srv-20','node075',9,'cli',NULL,NULL,'2025-04-05 07:55:19.031501504+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["147.221.39.105:29051","146.211.135.172:35153"]','2025-02-06 18:23:55.709687186+01:00','2025-04-05 07:55:19.031893003+02:00',NULL,'100.64.0.66','fd7a:115c:a1e0::42'); -INSERT INTO nodes VALUES(76,'mkey:7a392b19fffc82a338a4a9e640b63b474c918db933a38bb72dc8957dc95fffb8','nodekey:29a892ef6ca08c4691db5272b310b98a0abd3c2f40372702323afc30909517b9','discokey:4ae873b73a87e7282a9275c3d5e9c68227934f68c6601567d92ddfe202d2cd31','laptop-09','node076',21,'cli',NULL,NULL,'2025-04-03 03:19:15.701019816+02:00',NULL,'{"fake":"data"}','["[9768:8d0:63fc:993c:bab0:4be6:571e:9579]:64245","205.167.26.141:5100","140.129.39.90:29397","163.61.27.204:21704"]','2025-02-14 15:29:45.220999928+01:00','2025-04-03 03:19:15.701454834+02:00',NULL,'100.64.0.13','fd7a:115c:a1e0::d'); -INSERT INTO nodes 
VALUES(77,'mkey:3c49826dc55bc0e0e4643777d4950888452791b9f41b4cdcbce633551c7e4586','nodekey:8c5b96984a77dfba9d9ab0da01c4834e27b4c5e8add2aa9e22be59fa675b58e9','discokey:3f25a8ba8632b6e6dc3148ae42fd90ec433e9c47a98dfcf2f65d326fd030a47c','srv-81','node077',23,'cli',NULL,NULL,'2025-03-31 18:38:48.332406555+02:00',NULL,'{"fake":"data"}','["[734f:219f:40f0:ea33:47c3:ab48:b4f6:a239]:37955","[aade:2619:9aa8:378d:2365:1ab8:8920:a86]:7960","[71ce:8ddc:ec8a:58eb:d01e:2951:a8a7:dff]:31349","209.201.158.153:40013"]','2025-02-14 17:00:54.226657615+01:00','2025-03-31 18:38:48.558712405+02:00',NULL,'100.64.0.68','fd7a:115c:a1e0::44'); -INSERT INTO nodes VALUES(78,'mkey:a89496b608c3a9a479a8adcbeac38dd6a856ac638d3ce35c715abb10ef0c278a','nodekey:04e6629fb5fbadb92b0ab1d2c2fa9f77f0b212890b491ddd7913b0d540a346b8','discokey:cd9e6272edc39d50afc27080316aae91de105df7e4f09b27a4301f219217b460','srv-30','node078',23,'cli',NULL,NULL,'2025-03-31 18:38:46.90455425+02:00',NULL,'{"fake":"data"}','["[8dc1:48a1:2e5e:8c45:131e:9d7a:2d85:61ef]:1654","[dbf5:5aac:6a5a:f73c:a62e:69bc:c227:1180]:49484","200.106.54.69:19053","145.76.224.52:32069"]','2025-02-14 18:03:28.401774063+01:00','2025-03-31 18:38:47.149620162+02:00',NULL,'100.64.0.70','fd7a:115c:a1e0::46'); -INSERT INTO nodes VALUES(79,'mkey:31e2f2e88d77ea01fc3f7869f672d015dc76b0a12b46829aa1b344fdc0827aae','nodekey:124fde6d53a012efc7626136b2c32df231f87c92aff2c4fe9596c4d79a0c0db1','discokey:796b737943327a0b25ce400e70a628eca55747e61546e6a8a3c84ae8d0a7b491','laptop-17','node079',25,'cli',NULL,NULL,'2025-02-24 18:37:03.584752527+01:00',NULL,'{"fake":"data"}','["61.30.74.244:1071","[4587:514f:e01d:e13b:768f:6402:862c:f470]:52999","105.225.1.28:57634","78.215.20.31:28892","4.11.131.129:50580","61.47.47.240:57553"]','2025-02-15 12:49:11.523904469+01:00','2025-02-24 18:37:03.584913585+01:00',NULL,'100.64.0.71','fd7a:115c:a1e0::47'); -INSERT INTO nodes VALUES(80,'mkey:f9c7fccb40e5f809fce3149d8643d3b5459b882837487e19da72f0ec8e9b36c4','nodekey:1fff3539a6acf0e9b8cbb0427824179491ac293b7c0092cd320ff48071eff4a5','discokey:7067ea828bcc4e1e5ee41152c79ea50cdf583a5aca0115cdfe9e2161271e9807','desktop-82','node080',23,'cli',NULL,NULL,'2025-03-31 18:39:08.617771904+02:00',NULL,'{"fake":"data"}','["207.204.17.116:6826","121.94.132.177:17160","[35d1:7781:52e5:8550:72a6:3a02:104e:9380]:34711"]','2025-02-15 21:12:05.434304787+01:00','2025-03-31 18:39:08.727340814+02:00',NULL,'100.64.0.72','fd7a:115c:a1e0::48'); -INSERT INTO nodes VALUES(81,'mkey:d089a5917a4b469009a451f834bd93870888f18afd170ef9d7558abcbf7f56e6','nodekey:5c55b563d9796cf68da579cf8fb195a3bfb0d1e80f81d74359ca65b5367b1638','discokey:41a15fa481076c161c6b7bf8120e90ad4e73f3300d711eb81e31b7301d512b83','web-42','node081',23,'cli',NULL,NULL,'2025-03-31 18:38:46.169280974+02:00',NULL,'{"fake":"data"}','["[2b88:a0fa:7305:9a44:6c1f:65d9:5ece:8f1c]:27764","[bd66:349c:a29a:4c05:b3ab:dd92:749c:434e]:27566","69.37.34.188:32687","172.43.25.248:19211"]','2025-02-17 16:05:05.921277876+01:00','2025-03-31 18:38:47.081362641+02:00',NULL,'100.64.0.73','fd7a:115c:a1e0::49'); -INSERT INTO nodes VALUES(82,'mkey:9c853f04d1bfe9d438a8719160628329d7bbe56df8dcdb74fbdc4a6570af701b','nodekey:b4a22d5a8c9fad2e1fcc6b5f1693528974a38859285fd67b64bab7e6d3122710','discokey:8cc24fcbe0e6a7fdf36a0cedcbbdc4295194c821cac4a30d5c449cdc4d8723ee','srv-08','node082',23,'cli',NULL,NULL,'2025-04-05 
07:04:44.68803226+02:00',NULL,'{"fake":"data"}','["[3c46:906c:ac3:7713:f36c:eef7:3764:7844]:57048","59.217.231.119:10496","[be71:9d1a:fa93:361b:4d94:fbdf:bb45:5421]:15076","[3238:dbe9:f89d:91e1:6338:993d:3e76:8561]:32795"]','2025-02-28 09:21:23.143225002+01:00','2025-04-05 07:04:44.688660094+02:00',NULL,'100.64.0.69','fd7a:115c:a1e0::45'); -INSERT INTO nodes VALUES(83,'mkey:0ec54480d16ae9c954b21852540a7e669caf0b1df2ebc3192353058b191d4732','nodekey:981dfbabf258ac252cb4470040b425696d40c15ae9bd70cdbb6541ff47a4f99a','discokey:f77ff65615d1ec4afefd1accd883bfdcad5daf3810a72604fc0a95df2214a1e3','srv-36','node083',26,'cli',NULL,NULL,'2025-04-05 07:53:06.561831453+02:00',NULL,'{"fake":"data"}','["195.8.172.201:53059","[ed13:96e3:daf2:3df4:c6f:3b8d:af9f:eb5b]:10689"]','2025-03-18 09:09:50.849674955+01:00','2025-04-05 07:53:06.562851054+02:00',NULL,'100.64.0.74','fd7a:115c:a1e0::4a'); -INSERT INTO nodes VALUES(84,'mkey:911d47d2e7518087e6e43d47074cb58739880e9dfd3912467b8b7d618458f93f','nodekey:87e6089fc13207cdb034bb83d3361be25faca6d35a4a12c1d195d7b5607d726a','discokey:172772172f52d6a3033debe69654f8c9fbd823a7876e36cd890a5a2990eda884','laptop-89','node084',23,'cli',NULL,NULL,'2025-04-01 19:26:08.705667279+02:00',NULL,'{"fake":"data"}','["172.116.24.79:64278","26.140.212.195:18107","[8ea5:bf8c:6205:76f0:a23e:8ac8:f856:f091]:26017","25.10.147.197:36132"]','2025-03-25 10:04:37.147174393+01:00','2025-04-01 19:26:08.706193488+02:00',NULL,'100.64.0.75','fd7a:115c:a1e0::4b'); -INSERT INTO nodes VALUES(85,'mkey:0d5a6cbee59cef0dfb24e410466777da40ac48b308d5387c7118371da4dd66c0','nodekey:a3c2435fbaddbe915301fd294f6f7d787f18ea7938b468d0dbc2373585c6e153','discokey:3f4fad81a23a9b13308d6b2e84a217c00a73088d2a31ce5eb15be9192dc7aebb','laptop-35','node085',23,'cli',NULL,NULL,'2025-04-01 23:16:37.04412104+02:00',NULL,'{"fake":"data"}','["[f54b:3eec:1a3d:2bec:d025:cfbb:b951:6e47]:10751","169.51.234.97:16861","16.142.13.19:36771","[ad8a:a3fa:8d4d:c659:5e32:45dd:d95f:abc8]:43118"]','2025-03-25 10:08:52.471661925+01:00','2025-04-01 23:16:37.044513666+02:00',NULL,'100.64.0.76','fd7a:115c:a1e0::4c'); -INSERT INTO nodes VALUES(86,'mkey:6e91fa8345d4795f579da2e0c8672463bfdc4035eb5f31fd0a511722e6986ca4','nodekey:0064d45bc62c00ddf68241741547493ef6a6d0f4ead224d963500ea42fb90ae4','discokey:3d63df2a68aa3ac1fd33e0bb8eabc59a5f7f03eec4c8d1e2e3693f69f2fe90ee','laptop-84','node086',27,'cli',NULL,NULL,'2025-04-03 21:56:30.423789526+02:00',NULL,'{"fake":"data"}','["36.19.57.186:49366","[cf88:94d3:5835:2356:c48a:6f91:dafc:9c5d]:24503","[83c:82b4:9941:dec8:30fa:4c7f:de12:fd5c]:60355"]','2025-03-26 10:24:31.473595709+01:00','2025-04-03 21:56:30.424301457+02:00',NULL,'100.64.0.77','fd7a:115c:a1e0::4d'); -INSERT INTO nodes VALUES(87,'mkey:ecdf9b138473ce9d2b2b4861bf2a358bfda62032572e1474a9726f45f24e3702','nodekey:69ad63020724d64042e91204bbd382fe4f8e838d2eca106709dc2b61592462a9','discokey:f574c20bb2f8558f787798b4b89918e5bbfc8d3001ef41ab155d8adb4ceda4e0','lt-76','node087',27,'cli',NULL,NULL,'2025-04-05 06:05:53.807723942+02:00',NULL,'{"fake":"data"}','["8.73.3.103:62847","[fd18:384b:ef76:f4b7:6983:bd4e:70af:ce35]:42618","7.111.70.18:4634","85.240.97.0:21327"]','2025-03-26 14:30:57.179694234+01:00','2025-04-05 06:05:53.808074341+02:00',NULL,'100.64.0.78','fd7a:115c:a1e0::4e'); -INSERT INTO nodes 
VALUES(88,'mkey:a4a50964463b8a2bb0778291c014c4ffc1de4b45140278723879e3afd7ad3b53','nodekey:7befe9b0bd2e08bc976c08e2b199069d0fc42d0ea8dbb346ccd706f1771c6d74','discokey:fc3a7797bd83d5c4109b33fb7c52cd4fca772e91d31787f7bb09ddf9d5d8df17','db-62','node088',27,'cli',NULL,NULL,'2025-04-05 06:45:42.584225511+02:00',NULL,'{"fake":"data"}','["48.12.29.120:59556","146.247.211.102:6983","212.18.134.205:41528","190.153.32.51:23708","33.50.209.62:29699"]','2025-03-26 14:37:55.198097806+01:00','2025-04-05 06:45:42.585458518+02:00',NULL,'100.64.0.79','fd7a:115c:a1e0::4f'); -INSERT INTO nodes VALUES(89,'mkey:7c3f129f0eb0f541a270bd8d4236991cbcfc984d5fbc422e0f227401b3e33fcb','nodekey:9e8eac7d23ec974e4170fb5437c80c773c8f93b5d99fa479f0a7a92660423f4d','discokey:436409c6a1b9a7e50e2e7b7a07861f6d29f1d1a1f0934f8145d2d7a959b0daeb','srv-64','node089',28,'cli',NULL,NULL,'2025-04-03 21:12:36.172240418+02:00',NULL,'{"fake":"data"}','["165.160.173.41:62002","[d35d:30be:3c0a:c5f2:40ca:40f4:9b64:de92]:54562","[e40f:e89b:a9f5:d5cd:d553:b6:17f3:3d88]:21519","102.217.166.61:13721"]','2025-03-31 18:34:22.574460539+02:00','2025-04-03 21:12:36.172550446+02:00',NULL,'100.64.0.80','fd7a:115c:a1e0::50'); -INSERT INTO nodes VALUES(90,'mkey:3aedbb5b71c6de73eeb59138b920c4c5a62debd4f738d4856115412e5f8f9318','nodekey:d7d143dc5680d770fe0ba2273eb3b01004593c2b187edafde8f89709de3d0078','discokey:7c45bed52e4d175be90c88784a39adcd765627238691aa9963191e0acef41624','srv-73','node090',27,'cli',NULL,NULL,'2025-04-05 07:31:24.445741627+02:00',NULL,'{"fake":"data"}','["[bf19:5560:9254:d556:e834:573e:63a8:92f1]:57176","[48b1:5eba:7dda:918d:19ac:f3da:239a:72f8]:6265","104.240.254.212:44730"]','2025-04-03 17:52:02.130025013+02:00','2025-04-05 07:31:24.473509723+02:00',NULL,'100.64.0.81','fd7a:115c:a1e0::51'); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "routes" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -INSERT INTO routes VALUES(1,'2023-05-19 07:09:23.387641743+02:00','2023-05-22 09:48:18.908103256+02:00',NULL,3,'192.168.224.0/21',1,0,0); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2023-05-17 19:36:55.859473496+02:00','2023-05-17 19:36:55.859473496+02:00',NULL,'user001','','',NULL,NULL,''); -INSERT INTO users VALUES(2,'2023-05-17 19:36:57.059073465+02:00','2023-05-17 19:36:57.059073465+02:00',NULL,'user002','','',NULL,NULL,''); -INSERT INTO users VALUES(3,'2023-05-18 10:10:36.248939077+02:00','2023-05-18 10:10:36.248939077+02:00',NULL,'user003','','',NULL,NULL,''); -INSERT INTO users VALUES(4,'2023-06-10 09:06:13.920718561+02:00','2023-06-10 09:06:13.920718561+02:00',NULL,'user004','','',NULL,NULL,''); -INSERT INTO users VALUES(5,'2023-06-11 19:58:32.371218434+02:00','2023-06-11 19:58:32.371218434+02:00',NULL,'user005','','',NULL,NULL,''); -INSERT INTO users VALUES(6,'2023-06-17 19:39:53.031565686+02:00','2023-06-17 19:39:53.031565686+02:00',NULL,'user006','','',NULL,NULL,''); -INSERT INTO users VALUES(7,'2023-06-20 
11:35:09.325846831+02:00','2023-06-20 11:35:09.325846831+02:00',NULL,'user007','','',NULL,NULL,''); -INSERT INTO users VALUES(8,'2023-06-21 22:47:48.196234382+02:00','2023-06-21 22:47:48.196234382+02:00',NULL,'user008','','',NULL,NULL,''); -INSERT INTO users VALUES(9,'2023-06-22 08:30:35.068995572+02:00','2023-06-22 08:30:35.068995572+02:00',NULL,'user009','','',NULL,NULL,''); -INSERT INTO users VALUES(10,'2023-07-03 10:18:32.123226+02:00','2023-07-03 10:18:32.123226+02:00',NULL,'user010','','',NULL,NULL,''); -INSERT INTO users VALUES(11,'2023-07-03 10:18:37.130387602+02:00','2023-07-03 10:18:37.130387602+02:00',NULL,'user011','','',NULL,NULL,''); -INSERT INTO users VALUES(12,'2023-12-15 08:05:06.013615212+01:00','2023-12-15 08:05:06.013615212+01:00',NULL,'user012','','',NULL,NULL,''); -INSERT INTO users VALUES(13,'2024-02-03 16:32:42.224977233+01:00','2024-02-03 16:32:42.224977233+01:00',NULL,'user013','','',NULL,NULL,''); -INSERT INTO users VALUES(14,'2024-05-03 10:12:38.220973042+02:00','2024-05-03 10:12:38.220973042+02:00',NULL,'user014','','',NULL,NULL,''); -INSERT INTO users VALUES(15,'2024-07-26 08:08:40.979783263+02:00','2024-07-26 08:08:40.979783263+02:00',NULL,'user015','','',NULL,NULL,''); -INSERT INTO users VALUES(16,'2024-08-05 17:32:02.878091894+02:00','2024-08-05 17:32:02.878091894+02:00',NULL,'user016','','',NULL,NULL,''); -INSERT INTO users VALUES(17,'2024-09-22 15:48:00.287392203+02:00','2024-09-22 15:48:00.287392203+02:00',NULL,'user017','','',NULL,NULL,''); -INSERT INTO users VALUES(18,'2024-12-10 13:55:11.256977421+01:00','2024-12-10 13:55:11.256977421+01:00',NULL,'user018','','',NULL,NULL,''); -INSERT INTO users VALUES(19,'2024-12-17 14:57:58.550971236+01:00','2024-12-17 14:57:58.550971236+01:00',NULL,'user019','','',NULL,NULL,''); -INSERT INTO users VALUES(20,'2024-12-17 15:02:08.053169491+01:00','2024-12-17 15:02:08.053169491+01:00',NULL,'user020','','',NULL,NULL,''); -INSERT INTO users VALUES(21,'2025-01-28 15:57:32.774456057+01:00','2025-02-06 17:43:41.935399542+01:00',NULL,'user021','','',NULL,'',''); -INSERT INTO users VALUES(22,'2025-02-03 14:10:50.491924701+01:00','2025-02-03 14:10:50.491924701+01:00',NULL,'user022','','',NULL,'',''); -INSERT INTO users VALUES(23,'2025-02-14 16:58:30.250289644+01:00','2025-02-14 16:58:30.250289644+01:00',NULL,'user023','','',NULL,'',''); -INSERT INTO users VALUES(25,'2025-02-15 12:48:14.650995528+01:00','2025-02-15 12:48:14.650995528+01:00',NULL,'user025','','',NULL,'',''); -INSERT INTO users VALUES(26,'2025-03-18 09:09:00.456523573+01:00','2025-03-18 09:09:00.456523573+01:00',NULL,'user026','','',NULL,'',''); -INSERT INTO users VALUES(27,'2025-03-26 10:23:51.960113834+01:00','2025-03-26 10:23:51.960113834+01:00',NULL,'user027','','',NULL,'',''); -INSERT INTO users VALUES(28,'2025-03-31 18:25:26.535133091+02:00','2025-03-31 18:25:26.535133091+02:00',NULL,'user028','','',NULL,'',''); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.26.1.sql b/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.26.1.sql deleted file mode 100644 index 34591430..00000000 --- a/hscontrol/db/testdata/sqlite/from_nblock_db02__0.22.1__0.26.1.sql +++ /dev/null @@ -1,146 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` 
text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -INSERT INTO migrations VALUES('202501311657'); -INSERT INTO migrations VALUES('202502070949'); -INSERT INTO migrations VALUES('202502131714'); -INSERT INTO migrations VALUES('202502171819'); -INSERT INTO migrations VALUES('202505091439'); -INSERT INTO migrations VALUES('202505141324'); -CREATE TABLE IF NOT EXISTS "pre_auth_keys" (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime, `tags` text,PRIMARY KEY (`id`),CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -INSERT INTO pre_auth_keys VALUES(1,'463a8b372963aeaca12400faa0c7ea29e9bfa3b4c59e9622',3,0,0,1,'2023-05-19 05:09:19.66636462+00:00','2023-05-19 05:14:19.664224869+00:00',NULL); -INSERT INTO pre_auth_keys VALUES(2,'77a019cb12c0d0b9347a17ab23fa4c87983814fe36bb2fbb',14,0,0,0,'2024-05-03 08:13:55.8614948+00:00','2024-05-04 08:13:55.85782156+00:00',NULL); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer,`machine_key` text,`node_key` text,`disco_key` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`ipv4` text,`ipv6` text, `approved_routes` text,PRIMARY KEY (`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -INSERT INTO nodes VALUES(1,'mkey:38315e39aa0f0ac09b49850ace14a9bffab27d4258daf52bedfcca5d3045cd0c','nodekey:5b55d8314717137fc190e4b42811f5eff1241cf32cd24d768c1e710962884db4','discokey:ac65232fe17445944990c9532faec80c27081443997d02b1d3586e7b481261b3','laptop-74','node001',2,'cli',NULL,NULL,'2025-06-22 07:21:58.271483603+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[bab2:a8b9:e00d:9655:f397:9a5e:c6bb:2300]:9564","[b8f0:cf44:22c8:82d1:f79:c3eb:9144:c24f]:6142","[8e25:af1:e31a:8fcc:7dcf:be1:a3c9:ed6a]:38766","[37a1:215f:bf3d:3fb1:6399:f7cb:4048:3ca8]:51666","137.242.125.72:25376"]','2023-05-17 19:38:13.531518257+02:00','2025-06-22 07:21:58.271647152+02:00',NULL,'100.64.0.1','fd7a:115c:a1e0::1',NULL); -INSERT INTO nodes VALUES(2,'mkey:1e274f40556282a5bbab04b6754985233ccdf971b663e5b38fc027035e833e9a','nodekey:5b16c7e14191ca2e57eccd86ec636db605fa5dc2c8856be5fa7f9b087cf24890','discokey:5fd2f116700fd48ce06e35e8dd11c847ccd7694e01ec4b3f6ef25f67c14ee197','email-21','node002',1,'cli',NULL,NULL,'2025-04-17 20:26:48.332385788+02:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[5aae:55d1:7b5d:628d:bc07:b3e0:eca2:836d]:13912","[5583:8a0e:1564:ac5b:98ab:59a3:126:cba2]:35537","84.23.188.83:48876","44.184.117.89:36643","90.223.136.253:7056","[e43c:be7:7f4e:8217:a009:7f9d:a510:1af6]:18872"]','2023-05-18 10:09:21.757289398+02:00','2025-04-17 20:26:49.018604197+02:00',NULL,'100.64.0.2','fd7a:115c:a1e0::2',NULL); -INSERT INTO nodes VALUES(3,'mkey:5b57ab9712ce9587cd887a650ba629d2aeceb57d83a0bcadf9fa19a33e141a07','nodekey:788dce6f52985afca7b36ad6961dd93b7cd6d102574e243c1f39b14804b370a9','discokey:8b7c85ead8a9e7225cda4c5d1975dc426aefe919b4153f4653228e325c015bfb','desktop-51','node003',3,'authkey','[]',1,'2025-06-20 23:54:25.334614243+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[fdbe:7908:805e:6c8f:47c2:2c87:b1c4:44]:48470","188.237.187.170:7622"]','2023-05-19 07:09:21.399903526+02:00','2025-06-20 23:54:25.335360998+02:00',NULL,'100.64.0.3','fd7a:115c:a1e0::3',NULL); -INSERT INTO nodes VALUES(4,'mkey:538f91fe9e485697ce13f9444ee0dadf2aaaed712fdfa03ba102113c6505db37','nodekey:1936bdc567d6758b11b2cc06ea7ff74bb933a44421fb17323965d5a402e39783','discokey:e8b762a384d644175f3b4e4fd36867b2c435e03eb0bf6b9e7ebd4f0ce83c13ef','lt-57','node004',4,'cli',NULL,NULL,'2025-06-28 12:23:20.734405211+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f6d5:e904:28e7:a5a4:12a3:96d8:9391:b22c]:37336","113.59.7.205:48432"]','2023-06-10 09:31:51.940506933+02:00','2025-06-28 12:23:20.735270748+02:00',NULL,'100.64.0.4','fd7a:115c:a1e0::4',NULL); -INSERT INTO nodes VALUES(5,'mkey:12afa721b0310fd9cf7501a19e5f23a2d33c87f989a9b0233bc09e8eb9e5eaa2','nodekey:83e8f99465cfd0ef79190bfb341bb742b39d9ddd7cfeab237153b58858a2fed2','discokey:d4fdc314c7b73e21589efef171f10158379ade1adbdc281482a62c6c20fc2ed8','lt-50','node005',4,'cli',NULL,NULL,'2025-06-28 12:20:57.298589096+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[8105:c5c2:647f:e97e:85b7:577b:20b0:e4f]:16947","1.79.159.39:34092","[b12e:af88:3c55:41f8:5fba:2aa3:3248:15c1]:23643","186.84.191.82:2621","[c9d0:4e15:513a:171b:9911:f9a0:cccf:cd83]:24151"]','2023-06-11 13:56:42.694329408+02:00','2025-06-28 12:20:57.298996156+02:00',NULL,'100.64.0.5','fd7a:115c:a1e0::5',NULL); -INSERT INTO nodes VALUES(6,'mkey:229f561cff2994f922e02f0e286f6afacdb603245fa1a9c234ddb81f79674fd2','nodekey:edde606fce52c6196c994a72eb0f24ee4d4b8047d22e16dfef3f6531a8eaeb4f','discokey:2814289eb6e7bbd6093798f51410a2babab3f6a0d9ca105136d1ed387cb2ceb6','srv-66','node006',4,'cli',NULL,NULL,'2025-06-28 12:22:36.34542317+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["128.160.129.121:10590","48.173.178.104:58309","102.207.203.66:23963","153.152.90.188:59135","187.96.173.136:34092"]','2023-06-11 13:57:44.975695604+02:00','2025-06-28 12:22:36.364369698+02:00',NULL,'100.64.0.6','fd7a:115c:a1e0::6',NULL); -INSERT INTO nodes VALUES(7,'mkey:1634b8df4138ea0986578531f3be5bc398d11c83b226674d9b4b0d6daade5268','nodekey:9fa6e75dec8ed3e22a1f201fe661e67fe03de2583bb2e16ec1e97ec8b34a1ede','discokey:d4408013dd173b024ca49301592fd89d89260615f83253780b6d2c9af652f75d','web-46','node007',4,'cli',NULL,NULL,'2025-06-28 12:23:01.214171138+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["189.17.47.14:59833","[fa71:f70b:e65d:59b5:f599:8597:2bc9:56b6]:12277","[8ddd:4aae:2b94:4c1a:d05f:65d2:2bdb:b400]:42839","[c493:a941:58b6:7634:6221:70ba:20e6:40c4]:35486","[9b54:61a6:c678:edac:956f:741e:5e2c:9ec9]:17930","[cd3f:af1c:fcb4:bcb3:806e:adf4:3f3f:ab36]:58513"]','2023-06-11 14:16:56.951313537+02:00','2025-06-28 
12:23:01.214662148+02:00',NULL,'100.64.0.7','fd7a:115c:a1e0::7',NULL); -INSERT INTO nodes VALUES(8,'mkey:c841efdb3731e5e826ebfecbcb50460016e7020a2b73ac202aaffc69f1260ca4','nodekey:9bbd9e2ed99a94a3284391ef3e949806578fe58f5c7e5f444cfd72e1068c7252','discokey:c763a94031000944026c3243e175a7dde5ec85ad92dffadf2ba365b8d6b4fdf4','email-75','node008',5,'cli',NULL,NULL,'2025-06-28 12:23:04.274230621+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[dd24:c2dc:debd:ad2f:ae50:cd15:9cd2:b296]:24266","[489b:6c59:1877:6215:ddc8:57f8:78dc:321c]:3025","77.213.161.20:47662"]','2023-06-11 19:59:30.401970393+02:00','2025-06-28 12:23:04.275091119+02:00',NULL,'100.64.0.8','fd7a:115c:a1e0::8',NULL); -INSERT INTO nodes VALUES(9,'mkey:b1e7fb42e35b0219c318fee0717c988ab851714c6f487bb26a184b8340920d87','nodekey:da9ddc2fadd4a47eefa8a541e9c3e316353a5650a26d30d258d60ee9fce9c899','discokey:c52edf1d688d73febbc73c977c723a638da546afcb3813de4e716c1e013404ba','lt-35','node009',6,'cli',NULL,NULL,'2025-06-22 06:38:00.485835142+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b9f9:eec0:d656:cd83:8595:46fc:b677:6ade]:17494","126.251.133.86:44347"]','2023-06-17 19:40:45.468789461+02:00','2025-06-22 06:38:00.486219076+02:00',NULL,'100.64.0.9','fd7a:115c:a1e0::9',NULL); -INSERT INTO nodes VALUES(10,'mkey:dc84e5b1cf58ec8f7200ec0a655c1d5843c520ea59aa7ddd111ad5c9beba9bd8','nodekey:3c06f5a6d7f267023044ab632892b51f5af1ce4d84f0739e88823bc22cda6c57','discokey:3c1aa008a7811012038cd6658b6f290b3d252074c47d23cf1766f878340a30bf','desktop-75','node010',6,'cli',NULL,NULL,'2025-06-24 15:38:08.972653514+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["187.166.20.165:11208","78.47.168.253:34957","[1d14:fb12:d0a0:ad7a:ad7d:496b:154a:7042]:33635","41.172.94.101:32111","223.204.123.230:38090"]','2023-06-20 11:18:35.905417341+02:00','2025-06-24 15:38:08.973364357+02:00',NULL,'100.64.0.10','fd7a:115c:a1e0::a',NULL); -INSERT INTO nodes VALUES(11,'mkey:94da5084f8fc1cfe7671b94a0d7ef767719bd0fed89f16bc2726728db4cd9d25','nodekey:e6527563ef7a2647b6c719e0c9c95c1ab2a2eeef7fbb3f46a108ee0fbca026f0','discokey:f76230afd20bfcce74fd0d689b35e82cab5bfdf4cd1530073e87b1acae513c2b','web-50','node011',7,'cli',NULL,NULL,'2025-06-27 16:44:35.818823257+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[bed5:a4c2:8547:c044:9956:2566:d794:c918]:56801","113.147.88.203:61138","22.207.212.228:27547","[2c3e:2260:8c34:275:2287:b721:1bdb:d462]:59540"]','2023-06-20 11:35:15.063855316+02:00','2025-06-27 16:44:35.81914126+02:00',NULL,'100.64.0.11','fd7a:115c:a1e0::b',NULL); -INSERT INTO nodes VALUES(12,'mkey:4d2ff8526b10e056a5d073d5640fd28002ca21b561c55407175712a98e37ada7','nodekey:a1835798e4fefbece3000abb9afdd3cb01e5a4eebafc1218ed627cde2175c1f1','discokey:24b8f6da3ef9e024e273918de8869fdfebc45f789d14c4a50a592cc27259f1c1','db-64','node012',5,'cli',NULL,NULL,'2025-06-28 12:22:57.399153574+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f1ba:2aea:9e9e:2802:ebb9:ba04:a564:7998]:52919","[ede2:462a:f99c:24c3:b1b5:8b97:abe2:3ddb]:65141","[dfe7:6f6b:a382:4fc:a3f6:eb9:9ebc:5798]:1474"]','2023-06-20 18:22:35.061914624+02:00','2025-06-28 12:22:57.400249031+02:00',NULL,'100.64.0.12','fd7a:115c:a1e0::c',NULL); -INSERT INTO nodes VALUES(18,'mkey:5e192b1b323439fcc3b921c4f523ce9b7cce867220bd092d23be2451ce93a4a5','nodekey:1452cb2c8f26c8988cd8d8ed42957c78e3ab922bdf9773978c853e4254e23fbb','discokey:b7b41dc5de1fdcc4bd609dc72f79d27269367dbf72ab73ab61e998aa31eac6a2','web-19','node018',9,'cli',NULL,NULL,'2025-06-27 21:23:45.610301586+02:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["60.5.210.145:1193","[5698:e318:b38c:4331:5152:c2df:743f:8ead]:34839","33.51.84.119:46756","[c844:704e:c3c0:e709:8f94:3564:1b4d:d2d9]:8357"]','2023-06-22 08:30:54.08720463+02:00','2025-06-27 21:23:45.610590171+02:00',NULL,'100.64.0.16','fd7a:115c:a1e0::10',NULL); -INSERT INTO nodes VALUES(20,'mkey:29574231af6530b1386ebe61550e3b14263500b7a70395e7b540a54ddc3fc777','nodekey:c1d6f11f1bdc2756be5836e556e2a8c10f50aed84c969662b67ebd942db8b8ce','discokey:dd5d8c17120e8560f2698b44d571ca1b0bb90e012dd34be9d31ae169d26ca344','laptop-98','node020',11,'cli',NULL,NULL,'2025-06-27 12:57:39.754859479+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["26.88.253.147:32968","[3138:83c6:ae09:a073:7888:71e0:494:3863]:59700","191.156.96.176:5489","[166e:c30:4d73:440f:456a:22be:71e1:142b]:33036","[6d05:6bb8:3bec:c76c:1890:ca9c:2684:1bbb]:31223","173.218.108.179:51272"]','2023-07-10 12:40:34.838579199+02:00','2025-06-27 12:57:39.755061411+02:00',NULL,'100.64.0.14','fd7a:115c:a1e0::e',NULL); -INSERT INTO nodes VALUES(21,'mkey:94ebb2a00b9bbfe15d9a19a85409c5af01aa1b518319fbb394421888fa9fb7f9','nodekey:9109c5a733073fe487b72a328742f98f4e6e71748f1380562c46b1aaa2aa3f7e','discokey:1f11ce0393834879b2b9bd10b5fe034c85615ce94b30f67fe6153747b65653dc','laptop-07','node021',11,'cli','null',NULL,'2023-11-20 07:19:19.447470862+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["85.50.19.51:63246","[cff3:24ee:4692:f8a0:842b:c26b:e549:1f3a]:31064","207.187.74.99:12216","63.3.67.150:22907"]','2023-07-10 12:42:43.290469734+02:00','2024-09-18 16:15:14.03886353+02:00',NULL,'100.64.0.15','fd7a:115c:a1e0::f',NULL); -INSERT INTO nodes VALUES(22,'mkey:ec629722edab9383e9f816b50c2221486d52a55dd0735be575d20441b6e0c0cd','nodekey:8fc1644973c6614621da30be00b52f277c5e57f80032c681ac084354e7974665','discokey:286e95bbaca154ac900767ecd1f028764528a176cdc7c6e0bd3e0d2fff71207c','laptop-16','node022',6,'cli',NULL,NULL,'2025-06-26 08:55:27.018436146+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[fc33:9337:6f8c:253d:d87e:2d45:d618:5830]:33733","[6f37:bdc3:19c5:7d22:190e:a9d4:1ca1:e1cc]:59098","[4b06:6a05:94bd:4bca:676:93a4:2e20:b923]:23713","[68:80fa:7b30:3a0:d1fb:3e89:2c73:256b]:30308"]','2023-08-05 12:08:48.132161695+02:00','2025-06-26 08:55:27.028290638+02:00',NULL,'100.64.0.17','fd7a:115c:a1e0::11',NULL); -INSERT INTO nodes VALUES(23,'mkey:0323ca3e9858de1b53d0e3750e441d4da9a63f7e526be63b98430dbb51c66ebb','nodekey:8cd1023d1cdebd18f0f05d30852cde076e87cda3d6bc8a2785ef8f7ef37df099','discokey:0dd0075a712979849441f180da7e62bc73069050a4d25789e6f6afbcd4acb072','db-17','node023',12,'cli',NULL,NULL,'2025-06-08 16:50:57.344745353+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[f371:f639:cf31:c836:19ec:b41f:f0cd:4487]:36912","27.211.232.205:36366","[e83e:63c5:7e37:2195:1ede:1e3b:f549:85d0]:56150","[92ea:991a:46a1:e1e6:6b82:659a:acc7:2272]:38018","93.129.193.97:13477"]','2023-12-15 08:05:56.592241745+01:00','2025-06-08 16:50:57.344868501+02:00',NULL,'100.64.0.18','fd7a:115c:a1e0::12',NULL); -INSERT INTO nodes VALUES(24,'mkey:f24720736968f27b5c78d9c7734a083464275b63b50662adfce0d0ad5304e4d3','nodekey:b254e68666635d7fdae5c391f6ce81b802e018cf431408646f604c5ca8dbdb8c','discokey:498b9f4eca5fb3725cec3570d70c768660e124d3e88a85949706bc7384109de6','db-59','node024',12,'cli',NULL,NULL,'2025-06-27 21:59:35.663631021+02:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["43.114.58.145:47143","[758f:df49:401f:abcb:57ff:c04b:79e6:8519]:62313","[2458:76c5:7d48:c26f:34e:1663:6911:ff6e]:22202","66.66.46.14:16686","[fd3e:93e:a9b5:2034:a98e:4ab1:6317:f13a]:46054"]','2023-12-15 11:14:36.765183054+01:00','2025-06-27 21:59:35.663899525+02:00',NULL,'100.64.0.19','fd7a:115c:a1e0::13',NULL); -INSERT INTO nodes VALUES(25,'mkey:f10b7f9f385e48dc99ec0944b8a04ec4d3e5613587419c134440121fa747ee75','nodekey:9559349d25dd3fa88b9595cb4ccebeeb4712338757a209e936afe37ec2643dbc','discokey:b9a07315c1eb2e074871feebe899bd2ecae64d98a8edf08883785130ef386728','laptop-25','node025',6,'cli',NULL,NULL,'2025-06-28 12:22:37.905312423+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9e60:1bb7:b542:2f50:269f:5ef:72b6:f4ba]:34677","159.52.2.200:7205","[5b7b:e424:ea77:aa8a:f835:d6bb:2e1f:52ea]:19152","[bb06:d322:7cec:426b:ef38:55d0:e7ce:95bc]:31597"]','2024-01-05 17:32:40.940566279+01:00','2025-06-28 12:22:37.906446459+02:00',NULL,'100.64.0.20','fd7a:115c:a1e0::14',NULL); -INSERT INTO nodes VALUES(26,'mkey:b07183e67f21f2d1b4f7f4eccb393c57257683629a59758f393c47b759f5da81','nodekey:b246eeb06f7aa31c0f92aa63a6836e6ecde5cccb73d0c03e103601a7cded51b6','discokey:30fbedab81413cac0736dcd7b473511905658726861a326a556bb2df80bf15b6','db-13','node026',6,'cli',NULL,NULL,'2025-06-28 12:23:07.762159769+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[769:97e6:7060:f428:ee55:f77:43a7:8d18]:61446","[630f:7c1f:d48b:ea66:23d7:1ab9:8bd1:10f3]:25600","160.27.109.39:47606","[62f5:b64d:d2e4:9015:b03a:c64e:656:8c59]:5020","3.166.190.93:19852","[243b:de9c:b8c:ea69:b66d:b3ce:14a5:9671]:22747","32.167.123.100:57075","[2c67:a9f3:3163:c675:3386:2447:1cc:ed3a]:42590","64.137.67.1:56198","163.42.128.241:47535","49.160.22.202:42427","[1526:e34a:8857:394e:bbe0:c043:4b37:68d4]:5245"]','2024-01-05 17:34:19.811670479+01:00','2025-06-28 12:23:07.762465037+02:00',NULL,'100.64.0.21','fd7a:115c:a1e0::15',NULL); -INSERT INTO nodes VALUES(27,'mkey:d95c13394bdb912ef51f3123fdb0495feef567dc13724efd8dea17b33ce7aa1d','nodekey:fd8be528ea41d74697d46084c8cf560fff1c21b3193bec752ba2d60457396b19','discokey:9ec066ac46492f546a67be34b417eb469dfc95aff8b7be28be3bd8b36bf98653','db-72','node027',6,'cli','null',NULL,'2024-01-16 14:32:21.570104175+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[5760:dc4e:eac8:5c38:6c9c:8451:98d6:4e9c]:21573","[e350:bf94:9128:bca7:8c74:ab49:c419:4c50]:18052","222.155.102.116:42746","[e32f:122e:8a:e473:4b83:8ec9:61db:60ce]:37054","154.68.57.163:54048","[c5a0:942c:ed88:2277:3c9a:4f65:a25e:5b40]:27852"]','2024-01-05 17:48:25.466030859+01:00','2024-09-18 16:15:14.040766068+02:00',NULL,'100.64.0.22','fd7a:115c:a1e0::16',NULL); -INSERT INTO nodes VALUES(28,'mkey:08347db9fbe8e5e7a968c2ab9ddd1ff530ea98163c0bab7769316307545f4d46','nodekey:b6b03efd7deb52dbe0298708ea37400510ab7589b0b1787eac1a5f9b50aa7c80','discokey:6a0ab2f3329d9fe5f50c76f9f3f1d2e797574af93d823a511df57f45414f647e','srv-42','node028',7,'cli',NULL,NULL,'2025-06-23 17:14:19.693135279+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["42.99.149.47:5231","146.122.164.252:36824","16.15.205.75:58574","82.136.149.34:28707"]','2024-01-15 09:34:54.847632697+01:00','2025-06-23 17:14:19.693240007+02:00',NULL,'100.64.0.23','fd7a:115c:a1e0::17',NULL); -INSERT INTO nodes 
VALUES(29,'mkey:4a6e46330ccd4b3026337cdef37485c27592babda329107565c2a19d97253dcc','nodekey:17de47da1955a8a034788ae671c34fb10083568bf9e2f7494276cb2636065246','discokey:449f5f35a3507974cf03e3ac5c055d890de5122f57a7c2d84a0d3b324998607e','desktop-17','node029',7,'cli',NULL,NULL,'2025-06-27 14:00:54.026931428+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c2f1:82ab:922a:9b9d:f53d:a0b3:bccd:f6af]:299","[d28b:7b33:1f88:7583:7ed7:a923:4c90:9ee0]:62318","[68e3:a070:f02c:708c:a057:b579:aee9:4d25]:11747","[7224:b76a:cd04:e6d2:67fd:fec0:2f14:1837]:29002"]','2024-01-15 15:18:12.2871978+01:00','2025-06-27 14:00:54.02708773+02:00',NULL,'100.64.0.24','fd7a:115c:a1e0::18',NULL); -INSERT INTO nodes VALUES(30,'mkey:6472928030d718e6e2802fb3bceba7c14b4716d994be1b3562a1d03104fd37a9','nodekey:78bd59aba83429486e0ed2d50873211974650b2e2ad8f79dbc07c5d13d00c5a5','discokey:86ead449e1be5a300287e690fb9522ba4da3e5bdeb593d6261e4117708fe3ade','lt-82','node030',7,'cli','null',NULL,'2024-02-05 12:14:40.065688294+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[fd0d:f112:f642:c343:ac14:8ede:dc04:e2d9]:58412","[8a1a:5c17:bb81:bb69:c7db:3513:f14c:ca2d]:52077","123.235.220.59:59925","155.210.184.45:60093","[963:536a:33e9:99fb:c204:d59c:29a8:ac18]:39005"]','2024-01-15 15:21:43.217136004+01:00','2024-09-18 16:15:14.041757308+02:00',NULL,'100.64.0.25','fd7a:115c:a1e0::19',NULL); -INSERT INTO nodes VALUES(31,'mkey:212b1ffd20a790e4f3fccc87a3f2d76a0c6149d4f2472f4eba76b40671f500af','nodekey:7b88563747cd8ad8526dd795e1dc858d84b4a9fabf3b009ad2ec5f6e7cecdfd6','discokey:660bda5c0e80a6bb18ed7d69dc80de790fc3f4a64bf5a6bebb69ce73ff3d6755','email-99','node031',7,'cli','null',NULL,'2024-02-22 08:35:27.098819037+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[7016:b48d:8bcf:4ccd:8679:8c47:1c66:cd1e]:41980","22.231.20.85:41616","129.130.98.189:47833","[99c0:d769:2f38:10e9:c01a:e3d:898e:afc]:17505"]','2024-01-29 16:05:35.338524634+01:00','2024-09-18 16:15:14.042191514+02:00',NULL,'100.64.0.26','fd7a:115c:a1e0::1a',NULL); -INSERT INTO nodes VALUES(32,'mkey:c4853fdd63bed5a98baaf74df437eff743f8a968fda613fdf1f2a111e7c1b100','nodekey:9fc3f32ce92ab7f8ab987f16258b11265037a47cd9a1a68ac0e07bd3c279c183','discokey:e52f22aa8e170b6a031e682b5ed6e5f790f2d6ae1eecb752de1b362a1cce7bd4','web-89','node032',7,'cli','null',NULL,'2024-04-09 13:59:43.37062537+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["133.177.207.78:47765","[d1b1:653:cd17:f39c:8a89:85d0:c7aa:b53]:5279"]','2024-01-30 10:41:58.31917869+01:00','2024-09-18 16:15:14.042506082+02:00',NULL,'100.64.0.27','fd7a:115c:a1e0::1b',NULL); -INSERT INTO nodes VALUES(33,'mkey:805d05918a94047740c914235c3c614042b05d18458f331ba0a46ee22d2187e6','nodekey:ebc009009bc015bc45f6499b04b7af9ff2ceff22c0f9676749b10c92f422a8ad','discokey:b6d83e1f6f0e86c3808cffb227cb6649bffcf5d4ba1596f6fd58f41b8b727133','db-70','node033',13,'cli',NULL,NULL,'2025-06-27 21:25:43.528971537+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[a1ed:3568:9b48:6735:e553:a273:ea02:7271]:46613","90.225.231.207:5490","176.14.56.16:22127","[6224:9e4:5921:7fda:2437:15c6:a1b1:adf4]:15557"]','2024-02-03 16:33:26.706408143+01:00','2025-06-27 21:25:43.529881502+02:00',NULL,'100.64.0.28','fd7a:115c:a1e0::1c',NULL); -INSERT INTO nodes VALUES(34,'mkey:a404e578d3edd3e3d879e9f1946c859b5982c59f5e528bf6b987bd55de00b25b','nodekey:ad3013f116c370866fc9134b66999bb05e2dc10a8e2944230c3d9893c0a94c67','discokey:4fc6b881a4f170158e94336444d2cf0dea6a387b06dfc5fde08ed15d0f391c90','lt-08','node034',13,'cli',NULL,NULL,'2025-06-28 
11:18:54.787303405+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["172.87.199.175:12624","71.151.27.31:51063","[f0d6:21dc:48f5:63f4:daff:a3c4:7d60:51ea]:17286"]','2024-02-03 16:42:32.683785672+01:00','2025-06-28 11:18:54.78775904+02:00',NULL,'100.64.0.29','fd7a:115c:a1e0::1d',NULL); -INSERT INTO nodes VALUES(35,'mkey:06f7b17a75e55b445cc41ad693565fd93da6f4d690a305f8e207d69b028bf859','nodekey:0f4a7ade77ba260d9b318ac18c6c1f2f0606af7edc29128a33984558b6a047b1','discokey:e3cc18eacc630c61f2a2ad6a4152ee2af4b93f0ff767122676a6e02f58f5f01e','lt-11','node035',5,'cli',NULL,NULL,'2025-06-28 09:59:51.80761163+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["94.107.22.1:45906","68.64.112.147:37449","175.25.134.28:16830"]','2024-02-03 16:51:52.010016072+01:00','2025-06-28 09:59:51.808121221+02:00',NULL,'100.64.0.30','fd7a:115c:a1e0::1e',NULL); -INSERT INTO nodes VALUES(36,'mkey:37a68d32b77a701b08f9aab0e407525cad61a79aae25512a47e834e67070caa4','nodekey:ac40d87508d60cd5ffa0c21bc569b85685667e418434e5d83cee138ef1552ac0','discokey:dc90acbf7b156f6a5dd8e2f0a36638a99994148f16138230269ced79ff89fb8b','db-19','node036',13,'cli','null',NULL,'2025-01-19 14:01:48.956567669+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["68.209.122.217:6048","[edcb:62c0:3e8f:3a8c:93dd:1672:b477:ce1a]:18240","[bbb7:5b4d:218d:55ff:ac9b:1fb6:c933:9a55]:63943","18.156.132.228:63356","122.50.5.12:7674","[a641:3f06:63bf:b716:4658:1e5b:ac6f:e14]:13490","197.206.147.31:58506"]','2024-02-09 12:34:57.879970954+01:00','2025-01-19 14:01:48.956830267+01:00',NULL,'100.64.0.31','fd7a:115c:a1e0::1f',NULL); -INSERT INTO nodes VALUES(37,'mkey:2a302116eb66f119bc13369abf20be603cc433e83e4607480a6731e38ba9e8bf','nodekey:9f97afa1f234c491e12dbf3e613e4f4525d64aaa90cd40dba2919e8d315cbd27','discokey:7470aa5ff096df380fc115e26326342355093e00a5b1c7d9e6a6e91228bd2e19','srv-81','node037',5,'cli',NULL,NULL,'2025-06-28 12:21:12.129795624+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[5071:881b:75ae:d129:45ec:cfba:1a25:63df]:48282","[f94b:a947:7d11:b9f2:7758:640d:478a:1da8]:26194","[6b8d:8778:d67a:4cb3:1c6:1986:f223:15a9]:12844","[e1b5:d8f1:23f0:93f:bf72:7b6a:5925:353c]:52615","[3ec9:71d2:c2a5:cc2f:e2e8:5740:7b01:4a9]:29794","[4b64:fce2:293e:a24a:9995:b0fa:ff27:5401]:58695","[5eb0:a5cb:e9da:1552:e478:f921:c3d6:ff40]:57647","[8c40:dd59:c31d:dd18:321:58a:11fa:d85a]:43064","168.4.78.117:48474","90.45.218.0:29326"]','2024-02-27 12:14:40.452601042+01:00','2025-06-28 12:21:12.130426228+02:00',NULL,'100.64.0.32','fd7a:115c:a1e0::20',NULL); -INSERT INTO nodes VALUES(38,'mkey:0cb7a61e162c150a2397b17feaa98b86ecd8eb2cb899f406f9cffc5f94bed186','nodekey:aaf8f2337e0809d347caec27b07a8a5ffbb41110569b50a386ed852b61a4b1d1','discokey:b8c6a5277eeb27573ce8647557540773904cd4559a9b2fc38c8ed0fe0c9d3b01','srv-58','node038',6,'cli',NULL,NULL,'2025-06-28 12:22:38.364224551+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["189.185.249.112:45061","117.137.197.148:33075","165.145.180.67:23334"]','2024-05-22 08:08:16.045350656+02:00','2025-06-28 12:22:38.364913686+02:00',NULL,'100.64.0.33','fd7a:115c:a1e0::21',NULL); -INSERT INTO nodes VALUES(42,'mkey:08032558a6f70b2a3125f92860e4e3261cb87d9fa5507ad75b00d14d506e42ed','nodekey:d7a05dda88f80df694090cfdd8c5e0fccead6014378741fdc8166bce304eac7b','discokey:adecd0e23ec16e2ad50c7df1d855cc7d2ac016adf103f4d9373ab4d9da6d38dc','lt-88','node042',14,'cli',NULL,NULL,'2025-06-06 20:55:32.708129231+02:00','0001-01-01 
00:00:00+00:00','{"fake":"data"}','["[a393:f17f:6728:82e4:c1e7:2367:82d0:daaf]:18560","[d192:9897:4cd7:bfb5:de36:fc03:8cf4:c60d]:25096","212.199.67.213:11150"]','2024-07-03 11:12:29.418355657+02:00','2025-06-06 20:55:32.710128529+02:00',NULL,'100.64.0.37','fd7a:115c:a1e0::25',NULL); -INSERT INTO nodes VALUES(43,'mkey:33e764ae40265089c5e7dbfc8571aa23bb390fc7e1e8bd2f3e0cd6891c308214','nodekey:c444c9ff652ba29181a5f7fdf991f91f704e6698237ee18b0649c81879ec932f','discokey:2e1db3a1eea266673acf7826aa44b5aa6fdd23c6ba41330562b37c5a523ba358','laptop-81','node043',14,'cli',NULL,NULL,'2025-06-06 20:55:32.888930994+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["190.179.215.222:61930","45.131.187.230:22916","[510f:af99:ed4d:f53a:fc21:8640:9ad9:4930]:16050","[5ea4:3410:eaa2:9da:b8d:b61a:2d9d:edaa]:3526","209.107.206.184:46974"]','2024-07-03 14:48:50.263910778+02:00','2025-06-06 20:55:32.892202297+02:00',NULL,'100.64.0.34','fd7a:115c:a1e0::22',NULL); -INSERT INTO nodes VALUES(44,'mkey:1099debb0ba187ef1f43f001c75abc118eaf017bf3bd6b48ca333fb046b98de9','nodekey:d49ef9ad4a7ecbdabcd24cae92d9ac32fbb765467cbc20abe61b4bda6ef15f1f','discokey:9f2faf39514eccb7f3d7d66ebcaf804474f75284d8b1aedbf97aee6ccdf47ed5','laptop-84','node044',14,'cli',NULL,NULL,'2025-06-06 20:55:32.852055717+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[183d:f2d6:b80:36c5:318e:64e9:7f4a:b389]:12440","169.173.198.12:15274","[e5df:1889:6ac5:7d21:4dfd:614b:7eb9:d93c]:792","119.123.40.237:58615","[415:97d8:4ecb:2a7d:14ae:64f3:7a01:471f]:15771"]','2024-07-03 15:23:48.066044194+02:00','2025-06-06 20:55:32.876004334+02:00',NULL,'100.64.0.35','fd7a:115c:a1e0::23',NULL); -INSERT INTO nodes VALUES(45,'mkey:29eedea8083105a21470a51112b2486ae4c79221e2b6b17e4c3c84f965013513','nodekey:b3249157ebc705f134a366e28beb2d7646a40bc4171bb6d6fece1e750c85379b','discokey:64cb6eaee39eca7e3ad7f1225c216bf2826a8a951b91a631cfacf3b5ec99c2db','web-31','node045',14,'cli',NULL,NULL,'2025-06-06 20:55:32.853701621+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["14.153.231.75:19825","[acd8:3779:7b6c:3a71:c15a:e323:a721:c9a7]:63377","[6816:3377:edb0:3f8e:a65b:bd60:461b:ceb]:24642","147.3.7.139:46940","[7526:4960:73a9:911a:1fbf:92bf:1219:a6bf]:25440"]','2024-07-03 15:54:01.706018896+02:00','2025-06-06 20:55:32.862645337+02:00',NULL,'100.64.0.36','fd7a:115c:a1e0::24',NULL); -INSERT INTO nodes VALUES(46,'mkey:8c948629fd6a1b66977db7f1c6f11c30f1ef99da371f2f88f6bb77d79a3fd157','nodekey:af7e300b7119ffd77488c76a756660fec6ce7b4036c9e6c5159e1dca503dbb3a','discokey:a266b60014f0798c0e31f9e11ad5eb09082bd1178e3a2a0d5791801c1f2fc415','srv-47','node046',14,'cli',NULL,NULL,'2025-06-06 20:55:32.644737833+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["57.87.213.13:62573","160.29.16.50:26152","217.233.117.213:34230"]','2024-07-03 19:38:07.783745318+02:00','2025-06-06 20:55:32.659793776+02:00',NULL,'100.64.0.38','fd7a:115c:a1e0::26',NULL); -INSERT INTO nodes VALUES(47,'mkey:bfd4f59c7df7a9f7c3ed7736b35aff1895318a2ba2de7dbeb8c5911a9fdab6c2','nodekey:18f4f5a2905ba4ddd20ab6fc31945f78cc312cf8b453a544cc6c5d7456a1f8e0','discokey:dc5f0a3318228e929379c1b82984b0b955988131f3cf1522f547680420acf39d','web-50','node047',14,'cli',NULL,NULL,'2025-06-06 20:55:32.624765906+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["55.20.245.10:37304","98.38.29.194:12660"]','2024-07-04 10:38:08.344092869+02:00','2025-06-06 20:55:32.642819932+02:00',NULL,'100.64.0.39','fd7a:115c:a1e0::27',NULL); -INSERT INTO nodes 
VALUES(48,'mkey:5bbb77092f21b4835b0ae1af3871fc48445819c9b6e5ff9c46ad4a623b662ae8','nodekey:191c7cc089ee160c07b63e2b6b5ef0701d76d914009420baddad9bfce13e3f2d','discokey:712cadffd23f106abeea82ae70bbae8e06c2a58f3db617afd0d149b3012c64fb','web-61','node048',15,'cli','null',NULL,'2024-10-20 13:53:33.831192385+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["186.118.23.242:32373","[5bd8:a5a8:1b3:5e41:3c4d:5fc:4c0:c6cc]:24847","193.95.63.195:40587","166.176.60.55:23674","[33c0:859c:4a78:8bdb:359e:31e4:1e71:7e39]:50383","[3aab:52a2:b9d3:1816:5627:336:6c60:57d7]:13486"]','2024-07-26 08:09:56.608302315+02:00','2024-10-20 13:53:33.831387627+02:00',NULL,'100.64.0.40','fd7a:115c:a1e0::28',NULL); -INSERT INTO nodes VALUES(49,'mkey:0aeb63f409c65f5e3fb88049d610c3b58476d4d9ffa41502e928b526888f937b','nodekey:39c2ec4b5e9e4ffabfbf2b0e77e12d71b281e75945e80ab2318b600cb7a2794a','discokey:36e41b77ca844d02104807b6ad0eabe3b14252fe06b0e4c3ee6d0ecb51abbbeb','lt-93','node049',16,'cli','null',NULL,'2024-09-19 09:07:18.28136023+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[4451:be15:8bc9:696f:fced:a453:f30:99db]:40973","[ff9c:d2f4:d4ae:92b3:1da6:15b:cb40:b9a7]:10893","[76ef:e160:820:d8ed:de32:25f4:7874:de94]:23150"]','2024-08-05 17:32:41.937626584+02:00','2024-09-19 09:07:18.281618912+02:00',NULL,'100.64.0.41','fd7a:115c:a1e0::29',NULL); -INSERT INTO nodes VALUES(50,'mkey:f368047f0ade97d2e85be32da245e474ebdceb640b35947a5403ea4ec17dba20','nodekey:fb33e7e55c17bc7ddfc809cf37470a044cbdae049633f8e2f77009205685a318','discokey:17b46a4dd1b090347f47ff0f980521faf4cab279f97133bc03cbe0f1750187e3','desktop-00','node050',10,'cli','null',NULL,'2024-08-07 10:10:06.595550455+00:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["220.80.110.172:44100","[2523:fe65:400:15c0:a5a1:b955:1454:70c7]:54632","[a8a7:bda:7d93:b992:48e5:f2c1:91e2:54c6]:19368","150.170.146.70:3311","139.117.239.209:33788","[2eac:ffa0:99fd:d109:c120:a35d:ed48:eea3]:51544","[7644:c348:2969:b90:e84f:94d4:b629:f266]:49336","169.247.3.239:36225"]','2024-08-07 11:50:54.144157179+02:00','2024-09-18 16:15:14.050033969+02:00',NULL,'100.64.0.42','fd7a:115c:a1e0::2a',NULL); -INSERT INTO nodes VALUES(51,'mkey:735ea22d4d96ae83e77aa73b97a50a1b34636cd05384bad6a9b9256ca20d7290','nodekey:7fa08d819358ea2114540aca2eb712f6a85f7610e888e9e34d1e40e2900114cc','discokey:330eb26ff5d9403bd755091b57a42aedb2c3b46583576ca6d979f9a703234cd2','web-65','node051',14,'cli',NULL,NULL,'2025-06-06 20:55:30.776282094+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[c5c6:6d5f:96bc:5d51:16c2:9fb8:d390:bea8]:54983","[2300:cb9f:25a7:8450:3dd7:1ed5:6362:63d0]:47837"]','2024-08-07 14:19:31.156780417+02:00','2025-06-06 20:55:30.792088043+02:00',NULL,'100.64.0.43','fd7a:115c:a1e0::2b',NULL); -INSERT INTO nodes VALUES(52,'mkey:a1c649017ddbb4dcd4554acf5286329de306c9053b6ba103684683bc4b825b9c','nodekey:2676428ddde9161b1de90dc62ac99f436c3ac780e43719bed6313c680743ac06','discokey:f16bd3d12b57ee652bf20e358a184111d2be864bee16195053b68d21b5357384','email-01','node052',17,'cli','null',NULL,'2024-12-25 17:27:58.515851096+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[1c82:2ff1:f85f:9c4c:9bef:6197:b9b0:5e38]:29991","[2fd4:63f:f9f:ec1d:1ad9:621a:4155:8a19]:46781","[32:2239:56ed:558:32d5:7b03:dcdb:e5d0]:53865"]','2024-09-22 15:48:41.385301399+02:00','2024-12-25 17:27:58.517153789+01:00',NULL,'100.64.0.45','fd7a:115c:a1e0::2d',NULL); -INSERT INTO nodes 
VALUES(53,'mkey:3b2b622721b0c64d215283349a2df3b86a2a4e107b92e6ca4450204061ce53d4','nodekey:0e64072dafa5f9e49266b41fc4b21ff7d287be8f4e5f2b6c395487d632f1cf3f','discokey:2f77bb8cbeb426bcded5ad9fc6da44b0b76b8d6759f10a8db28e634290760ee7','srv-65','node053',17,'cli',NULL,NULL,'2025-06-28 12:20:35.189820653+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[977f:5596:9908:14e0:8983:810e:9e9f:93d]:3616","[911d:ef36:743c:8085:8a4c:a5c8:9f35:27fc]:23538"]','2024-10-28 10:04:50.084492941+01:00','2025-06-28 12:20:35.190401013+02:00',NULL,'100.64.0.44','fd7a:115c:a1e0::2c',NULL); -INSERT INTO nodes VALUES(54,'mkey:a996b2df08089ba5a36bf00f4f0a337aa876a58f3214114862f76cf570c317f1','nodekey:241d83b165073dfebc00ef12480032b008e7d305957fe3e7b14938125b7fd788','discokey:1c36d23ccdc5d359c1f4b0626c27e0df4be0b8165553005771b638effc36249a','lt-94','node054',14,'cli',NULL,NULL,'2025-06-06 20:55:32.691473004+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["69.219.0.37:43476","[6084:ee60:1697:864e:4a61:cf6e:e9b6:4c05]:63961"]','2024-12-09 17:10:55.363593066+01:00','2025-06-06 20:55:32.700659203+02:00',NULL,'100.64.0.46','fd7a:115c:a1e0::2e',NULL); -INSERT INTO nodes VALUES(55,'mkey:ad8d2836fc3e281ea45c14de854da61abc9ab333e94ebe3de2435e9c4c7f5ea0','nodekey:30b87abf366b736d7e782d74536382f1853f9de785d2bd532b63c5efb3d7c6cc','discokey:2ecc9071b240d067246a96ba329af5d3385cf430708f93a8aaa553e49ae225ac','web-20','node055',18,'cli',NULL,NULL,'2025-06-23 17:13:34.971389477+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["172.87.149.191:59171","[1fae:4025:aab6:7bd4:33bb:9306:1e8f:991b]:50694"]','2024-12-10 13:56:39.287449662+01:00','2025-06-23 17:16:24.577468282+02:00',NULL,'100.64.0.47','fd7a:115c:a1e0::2f',NULL); -INSERT INTO nodes VALUES(56,'mkey:7ad9c5af6111cc3286256127e6b6a939f1fb87e08c9018bd7bd8671c4b9cefc2','nodekey:84fec35781c2032d05c7e8524bada9f7242165e9ab5df5d66f993266cc6f090d','discokey:ad9158862b9c58848c37b519543415166938b6be3dfe3a8463842ebbd09eb00e','srv-14','node056',19,'cli',NULL,NULL,'2025-06-28 12:22:44.430647515+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[e669:9684:b8bf:9a64:8ec7:bec:6369:4412]:16090","23.0.29.75:24889"]','2024-12-17 14:58:39.429211911+01:00','2025-06-28 12:22:44.431119935+02:00',NULL,'100.64.0.48','fd7a:115c:a1e0::30',NULL); -INSERT INTO nodes VALUES(57,'mkey:ae8c967065558f9f03a95979d2181b3aa2b3537c25a894046a65554beaf553e7','nodekey:bc9cbaeb5cd70e708b5a23565ffb1fb6e1c18f35b1861083196209dcc1e0e20c','discokey:f8647d1a8ab876931f49a8abf785d429f7f21bd1fe6317528bfbf78e38d7aafe','db-73','node057',20,'cli',NULL,NULL,'2025-06-28 12:22:11.086950088+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[caa3:7372:5199:2416:357:7b48:3127:85f1]:21026","[e2a5:db96:e7e3:6335:8bdc:90b4:40ff:920d]:28128","163.106.192.242:30271","[7d2a:9d5:1fbb:54a7:fe7a:bc40:5ea2:ca20]:3534","[f630:8bb5:a4ff:95b1:6d95:decb:a5d2:373b]:42456","[6cc8:4fca:c46e:ed25:e9ef:43e:f0ea:121a]:60965"]','2024-12-17 15:17:14.26936913+01:00','2025-06-28 12:22:11.087909892+02:00',NULL,'100.64.0.49','fd7a:115c:a1e0::31',NULL); -INSERT INTO nodes VALUES(58,'mkey:5bb86c2f730c0b247b01634f73a4f67a19bf271de3ee39cf13f879db671da5b1','nodekey:64f016a04093cc91f7d2ca3639a07a17ae8f95f246a7ad759141d598136bd060','discokey:066b4845ec135e6251d6ef6119c832b46e2bc310332d3ea836c5f39896538b38','lt-49','node058',12,'cli',NULL,NULL,'2025-01-25 18:41:03.881898904+01:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["128.102.246.147:15968","[d0d3:beb7:361b:71d1:af88:4105:6b5f:343e]:14598","155.200.222.111:64410"]','2025-01-17 
10:17:23.455895657+01:00','2025-01-25 18:41:03.882180987+01:00',NULL,'100.64.0.50','fd7a:115c:a1e0::32',NULL); -INSERT INTO nodes VALUES(59,'mkey:86289561abe2eff47dd9c23c0bd1e076c0ff7b4306569d6c2ab63aa2ae5096a1','nodekey:1ac8d1434fe581284ae297cc9063685fc2afd854c45ed3cf0809c95b6ecbc037','discokey:44567ca0878c3505bb078f76a993fe3b381f17325a3c0f3df1125bba5eca1a92','desktop-26','node059',21,'cli',NULL,NULL,'2025-06-06 20:55:38.816110199+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[9875:c11c:289c:3b14:dc40:d7af:6:8372]:51751","125.170.153.154:21718","68.3.248.154:42390","[f621:b605:bd04:a964:160c:314a:141a:2e8d]:58563"]','2025-01-29 11:59:27.291048957+01:00','2025-06-06 20:55:38.890138267+02:00',NULL,'100.64.0.54','fd7a:115c:a1e0::36',NULL); -INSERT INTO nodes VALUES(60,'mkey:06bd440cb68a2856dcab4ae09ba4c4e7969b78a99a1d06bbacebc9d1fba037b7','nodekey:c0a2b88f5f4b9aee6a50df28b6ff5e5c495053c26d969100ef23f13a51349bec','discokey:4023ff7bfd754262fe36745731a9d1175cc0b49d7acac56123cfbf0c633fa0c6','db-32','node060',21,'cli',NULL,NULL,'2025-06-06 20:55:32.690648751+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["50.133.88.187:24107","[ed61:9d51:b21f:7096:32fa:e473:f6d4:9f69]:17106","110.235.245.179:61572","[57a6:fde1:814:36cd:f8de:668c:f5fb:4f60]:39714"]','2025-01-29 12:01:57.48748166+01:00','2025-06-06 20:55:32.699638826+02:00',NULL,'100.64.0.55','fd7a:115c:a1e0::37',NULL); -INSERT INTO nodes VALUES(61,'mkey:6841756817cebda15bcacc15dc8e13134d3189a709a4b01cfe7f140a0ff928f4','nodekey:c9ad34579c8f83c989958c188d366b58bddfcfc8623f0ed478e86ae9ad9001c6','discokey:4de34c0ddff8819b97b39c2f31a2568db15e7c0ac6c1d75879b6c5247a5dd9a5','desktop-04','node061',21,'cli',NULL,NULL,'2025-06-06 20:55:32.705457818+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[507f:3427:b2a0:12b7:abaf:dd7d:2f0b:9a7b]:26021","51.210.182.192:52299","[2e30:4d2d:4925:201f:e845:2a12:503c:9524]:34552","105.198.43.87:48215"]','2025-01-29 12:03:01.464646336+01:00','2025-06-06 20:55:32.706102582+02:00',NULL,'100.64.0.56','fd7a:115c:a1e0::38',NULL); -INSERT INTO nodes VALUES(62,'mkey:f5a8d012556138cec4b6a6dbd5f36669d16e025962edf3367789b24a01cd54ab','nodekey:4ed3e288b9820dab0f68f070d6b36cb71e969d35804fdca8692c66b68538e8f0','discokey:0d6aa1b3942712032ff7936507a5eb7df01e40d068092e717d1469c3e683e8d7','web-15','node062',21,'cli',NULL,NULL,'2025-06-27 21:55:15.517052697+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["48.141.84.207:348","[ac:5c65:da8d:d68c:eb0d:3692:d441:5363]:63612"]','2025-01-29 19:23:14.092804852+01:00','2025-06-27 21:55:15.518034318+02:00',NULL,'100.64.0.57','fd7a:115c:a1e0::39','[]'); -INSERT INTO nodes VALUES(63,'mkey:4be52a5e6b73a19d10d6cb424f776651633802f78caff3e2f6a45cf23eb529ae','nodekey:d4881065396d3d9b479137f2fa50504da55869d3b03120f991dcf3a9d7c733ab','discokey:77289acf81d715492dcbf00468f6297621efc9abb9d7934426babfde0c41206e','lt-52','node063',21,'cli',NULL,NULL,'2025-06-28 12:17:33.005602929+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[912a:9b6c:5845:f8d4:f476:491f:cdef:8b7d]:21677","22.203.32.217:28664","112.239.138.73:43954"]','2025-01-29 19:41:40.535299057+01:00','2025-06-28 12:17:33.174275791+02:00',NULL,'100.64.0.58','fd7a:115c:a1e0::3a',NULL); -INSERT INTO nodes VALUES(64,'mkey:4cbd491b0977f3a8cc92ef9f20a7726b3984f3c0699429ae46a0bceec5febee4','nodekey:66a7e77700fb0f4bae5b282413bdb5ff7c375549dd85c4300c73ff3d8029b4a4','discokey:2e9034fc43993eaa0cf2bc21118d378bb02afb3d197a33cf9f0dd201d176b23c','web-06','node064',21,'cli',NULL,NULL,'2025-06-28 
12:00:16.139095833+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["81.51.60.60:63220","12.64.198.119:51541","[5e99:3950:5dc4:45be:bab7:e7ba:2476:14e8]:43360"]','2025-01-30 18:18:57.519126133+01:00','2025-06-28 12:00:16.139426752+02:00',NULL,'100.64.0.59','fd7a:115c:a1e0::3b',NULL); -INSERT INTO nodes VALUES(65,'mkey:70cc01dc48021a01c44ab1a4bed313bb8270640ef7317fd983019c4b5e593cf3','nodekey:bfff30bca701176ec3b45b2991810cc437e3995207d924c896c4250e0efdb782','discokey:91eb7d9fa7688e3ed641adfae83be298c1ab261fd00dc2b4ad9cb4c269b623fa','laptop-27','node065',21,'cli',NULL,NULL,'2025-06-28 12:20:11.981310509+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["105.95.65.117:11546","[68bb:e8d1:72d8:8b02:608b:2284:c9b3:184d]:20876","[5c17:4014:ea6a:5528:b1a:61b7:5cda:5c5e]:21597"]','2025-01-30 18:19:40.354692307+01:00','2025-06-28 12:20:11.982409032+02:00',NULL,'100.64.0.60','fd7a:115c:a1e0::3c',NULL); -INSERT INTO nodes VALUES(66,'mkey:ac14ce6b1ecd1a0d79bbf60d279f19d09baf0ebd409f099a401816516953eb30','nodekey:a711e6a5750ec7cceac6a771396b000766d9af8a34327bf5efec3216316e8afc','discokey:0b370e73e9970987dad3fd00317f51c68e4273fbc11b754ef7e84d4d6880002a','srv-33','node066',21,'cli',NULL,NULL,'2025-06-28 11:59:14.080155658+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["145.91.193.5:37126","[2fb6:ea0a:3639:a1cd:9075:a258:6a79:4f5c]:13944","[2384:3000:4d47:a9e7:54ef:99:52c0:e920]:39358","130.195.116.24:53838"]','2025-01-31 12:05:28.65297301+01:00','2025-06-28 11:59:14.080969357+02:00',NULL,'100.64.0.61','fd7a:115c:a1e0::3d',NULL); -INSERT INTO nodes VALUES(67,'mkey:fe494a29de4149025e3c41744cabfaaa2ae9a95795bdb6425e3beee27048e0f6','nodekey:2b078ccb547dc8b01f19749735f5fc43e65e4d468357a08a880fa54dc0a1e00d','discokey:cb991d6c0a14b0213e6f807e9c37a837b181c889a1e2d25e10d65d4a7e8b2b05','laptop-33','node067',21,'cli',NULL,NULL,'2025-06-28 12:16:35.347217701+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["54.154.28.86:64790","197.205.147.181:38523","[5e11:9ab9:6805:dec0:99aa:65e6:d1fe:6e58]:45247","[34f8:19dc:cc84:1e6a:d280:ac0e:2dd9:6fe5]:39241"]','2025-01-31 12:06:30.121464114+01:00','2025-06-28 12:16:35.34788428+02:00',NULL,'100.64.0.62','fd7a:115c:a1e0::3e',NULL); -INSERT INTO nodes VALUES(68,'mkey:de86679045cb8624f1a83852679a58cffd9066814aa8994f4ac0a9213b83175f','nodekey:b62e575e38a00afe79418224f7b954210702ac0a6d450eb8dbf7b5720b1eb5a5','discokey:6492f53782aab503155db83aa2d3a9e7698382da589b8c078d7850373332734b','desktop-65','node068',22,'cli',NULL,NULL,'2025-06-28 12:23:23.107075566+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[b84:67b1:28c5:14f:aab8:916c:582c:6917]:24462","51.182.13.145:25726"]','2025-02-03 14:16:55.56431345+01:00','2025-06-28 12:23:23.107666016+02:00',NULL,'100.64.0.51','fd7a:115c:a1e0::33',NULL); -INSERT INTO nodes VALUES(69,'mkey:eb72ad359958c98e28f93485b720b6edef80d1bc33c4f99d772fc17c766039e9','nodekey:9122bd587ae90984e3231e0f48e15aa8abcfbc1b3324becd1f3622bbbe534da1','discokey:7610d45f8d2d3d4d7774c43288a6d50cd798a832732613647caa15e96621a3e3','laptop-93','node069',22,'cli',NULL,NULL,'2025-05-22 19:56:30.861469961+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[3022:fc47:6534:7fe3:9d1e:8d49:75b:984b]:10998","60.241.100.13:15037"]','2025-02-03 15:23:16.312084161+01:00','2025-05-22 19:56:30.892062591+02:00',NULL,'100.64.0.52','fd7a:115c:a1e0::34',NULL); -INSERT INTO nodes 
VALUES(70,'mkey:8c5837a59e69539df974f244ee29ea29968a617072d4fdc6dfef564eea6a4061','nodekey:05ebbe5cdc96a19feb16211d4a62a2318e9448e15e1a3a90f0aa556fccba8142','discokey:cf7b3319a18ac04f588e2c5a95de9a24a37f5545338c3532d9703b861a43e3fc','laptop-86','node070',21,'cli',NULL,NULL,'2025-06-28 11:26:01.538093141+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[abf1:9c06:f1de:eef4:f14a:d431:8f2:4995]:52788","[8ac8:a469:4cc:90b5:c84b:f4f:5cb:49e1]:35646","[d152:3a18:3826:ee28:a323:74d6:7140:ca5]:40749","99.13.253.35:45887"]','2025-02-03 18:09:35.161109801+01:00','2025-06-28 11:26:01.539046342+02:00',NULL,'100.64.0.53','fd7a:115c:a1e0::35',NULL); -INSERT INTO nodes VALUES(71,'mkey:9562166c366f1968a5f82aca83bd8152d893541eed4d8682da0a540825cd44e9','nodekey:93ef8b713afeffc4324b0f9a99e185be61dbff43d6e5769efac6d82481b425c9','discokey:9aa3fe84188c317d17132a26604380ec4aaf7aed94b4b759bb6b643dbd53f8d9','desktop-61','node071',21,'cli',NULL,NULL,'2025-06-27 07:45:53.640560103+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["56.191.25.57:54747","74.194.16.233:59572"]','2025-02-04 12:03:50.32663805+01:00','2025-06-27 07:45:53.64117544+02:00',NULL,'100.64.0.63','fd7a:115c:a1e0::3f',NULL); -INSERT INTO nodes VALUES(72,'mkey:50329891edd2948b53a0905245386567c98b46bc0d566e39e1763768d51cd638','nodekey:e4ebfcbd1e1d653bda175a62b765c6a646b7844b3d6bd49a9955f96a9b284c6a','discokey:f59cb75bac53aef739bbd5be676808891ebb55550aa0560396dc0532658ed893','web-53','node072',21,'cli',NULL,NULL,'2025-06-06 20:55:32.885806537+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["200.133.152.206:14119","28.101.69.72:35202"]','2025-02-04 12:07:36.231437299+01:00','2025-06-06 20:55:32.902608355+02:00',NULL,'100.64.0.64','fd7a:115c:a1e0::40',NULL); -INSERT INTO nodes VALUES(73,'mkey:fc9d5a1c4658644beeb3130cf62b19042698652298ef36899d41f588bec371ab','nodekey:d44ef585e416773d29e627cf9a6e590af9b676d23a0a816865d05d775517ab4b','discokey:0c87d487f7a4f4ec5e045ff0c83c1b6c54453f7afe0e5f13c7ce90746e18da49','lt-48','node073',21,'cli',NULL,NULL,'2025-06-06 20:55:32.909812195+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["126.19.171.233:19526","[91e1:a8d7:5c43:b695:4f5:b147:b5f7:9f75]:62218"]','2025-02-04 12:10:26.50545127+01:00','2025-06-06 20:55:32.911837947+02:00',NULL,'100.64.0.65','fd7a:115c:a1e0::41',NULL); -INSERT INTO nodes VALUES(74,'mkey:a46b92cea88a9bac37dcb612a4a50dee75684f736c8c7f2aab812fa5fb9d463f','nodekey:1bbfdddf7b2a929a3da1e69236aa8b8fbd4b25d1839e57aaecf2ab910592bc78','discokey:c2188df6632aded7862de4b5d07c3993b3e908c59ce38f9d481f4840d85448c7','srv-06','node074',21,'cli',NULL,NULL,'2025-06-27 12:16:51.183868148+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["[99f:9853:60cd:a4c5:e9d7:2a40:f86:742b]:782","[99cd:c956:e0f9:b633:5f8b:5c6a:50b1:571]:18474"]','2025-02-06 17:33:12.557302525+01:00','2025-06-27 12:16:51.184524169+02:00',NULL,'100.64.0.67','fd7a:115c:a1e0::43',NULL); -INSERT INTO nodes VALUES(75,'mkey:2c018031737bb172064fe31674acafe1b8b4d62d2b82db835e2c38061c5bfdbb','nodekey:575f781bb36f7c93cde4d9808e3dedef2c8999864ea1d4e5ee73fdd3d53ac217','discokey:00e3275941e7a0a73a1e5c44cea638a411c60bb2e257812e0f39152ec0c5a21b','lt-01','node075',9,'cli',NULL,NULL,'2025-06-28 12:12:54.498870123+02:00','0001-01-01 00:00:00+00:00','{"fake":"data"}','["56.80.126.49:13907","95.196.223.110:47343"]','2025-02-06 18:23:55.709687186+01:00','2025-06-28 12:12:54.499413173+02:00',NULL,'100.64.0.66','fd7a:115c:a1e0::42',NULL); -INSERT INTO nodes 
VALUES(76,'mkey:a5a8194cef984a1cc2081212405c10c8cc1061d3d00e6aa7e1415c854e3ded72','nodekey:ca37a131e58c14ebec3a6e179549d7c8742a31b42ed0e6e52aed138ecc176907','discokey:8a4990c19c547da50875ad00613118d5382717652b4ef9897a1d1c060c40a976','laptop-71','node076',21,'cli',NULL,NULL,'2025-06-21 21:24:09.453652526+02:00',NULL,'{"fake":"data"}','["114.141.127.86:11858","[d05b:3c4f:8402:823f:409f:87b2:4dab:4959]:58420","130.243.34.21:50806","104.255.105.25:5817"]','2025-02-14 15:29:45.220999928+01:00','2025-06-21 21:24:09.453999154+02:00',NULL,'100.64.0.13','fd7a:115c:a1e0::d',NULL); -INSERT INTO nodes VALUES(77,'mkey:51f0f5c2b892846f926abf826f5ff56a52fc4a466be7a1073f16028ef2c6aabb','nodekey:0138c27376e8289255c1d505ba8e8d0e9e8e6785198b87984763531273d0c540','discokey:5bbb476ee437efaa55c10931313e35c4c8c6e30015e7f3eb015f4736897abb12','desktop-50','node077',23,'cli',NULL,NULL,'2025-06-25 09:52:57.106038118+02:00',NULL,'{"fake":"data"}','["57.247.45.203:47518","[1709:1d2d:9e92:a2c5:f5a0:fbca:6d9:dcbb]:31824","59.75.44.185:44605","[1294:31c8:7d7a:efc5:9768:8d1:63fc:993c]:62533"]','2025-02-14 17:00:54.226657615+01:00','2025-06-25 09:52:57.123618657+02:00',NULL,'100.64.0.68','fd7a:115c:a1e0::44',NULL); -INSERT INTO nodes VALUES(78,'mkey:138104bf35231052caaaf0f55c18b5c3beb552669360dbdc4a5246a25d3ef8f5','nodekey:c4ae5fe468012696b6a85e95b23e1391f472b17ca834efe8d61578c186efb9d9','discokey:32d79e04d128c17209805e7641c0c4f3415690430afb344699f4cdaabb2ab28e','lt-38','node078',23,'cli',NULL,NULL,'2025-06-26 08:56:45.225612029+02:00',NULL,'{"fake":"data"}','["107.221.8.103:1156","[1cbd:d52d:8b7f:a698:2d0c:98f5:bbef:4d38]:43009","[6d9d:7cf0:440e:2ee6:8dc8:fdfe:3e87:f2f9]:6993","[734f:219f:40f0:ea33:47c3:ab48:b4f6:a239]:37955"]','2025-02-14 18:03:28.401774063+01:00','2025-06-26 08:56:45.232919473+02:00',NULL,'100.64.0.70','fd7a:115c:a1e0::46',NULL); -INSERT INTO nodes VALUES(79,'mkey:8dda457abd80238812f15bbc816248c5477f1c7f2a31b180e6d1917bb4cda3dd','nodekey:bee17b81a1b62eb551d672bbc6ab5c7900cc39951a3eb1be98b05e813c7a5671','discokey:aafde40e00db7577b135254c85e37e76348876996bd5b5e5b93cd9cf994cabff','email-79','node079',25,'cli',NULL,NULL,'2025-02-24 18:37:03.584752527+01:00',NULL,'{"fake":"data"}','["32.188.178.139:7110","177.149.150.247:56583","[e188:6623:d389:230b:a467:ab3b:2883:5196]:59813","[8dc1:48a1:2e5e:8c45:131e:9d7a:2d85:61ef]:1654","[dbf5:5aac:6a5a:f73c:a62e:69bc:c227:1180]:49484","200.106.54.69:19053"]','2025-02-15 12:49:11.523904469+01:00','2025-02-24 18:37:03.584913585+01:00',NULL,'100.64.0.71','fd7a:115c:a1e0::47',NULL); -INSERT INTO nodes VALUES(80,'mkey:94a77d61756e11d471ee47d55a1cb2eae0c0faacbe5b94ed8256ee68efabf715','nodekey:0b13795cf68e3a5adb091b3b97e0c2ac4884f8295245011f60c074693fe1d3f7','discokey:5ee7570d8f70a4078d7b9afe0cadc5441cb77ef9e2f849301e1082411e04c0f7','srv-48','node080',23,'cli',NULL,NULL,'2025-06-24 13:47:09.058273693+02:00',NULL,'{"fake":"data"}','["[231a:c601:4e16:b5a3:aafd:6b2a:8d59:33b6]:51062","194.8.95.111:60702","42.58.107.175:62253","[90eb:ca6c:1e10:11cc:fd29:4682:aaaa:fd20]:57634"]','2025-02-15 21:12:05.434304787+01:00','2025-06-24 13:47:09.058652899+02:00',NULL,'100.64.0.72','fd7a:115c:a1e0::48',NULL); -INSERT INTO nodes VALUES(81,'mkey:c63e8e0bd19c1d1433056fbffb0d8b66387fec012a75060189fe896296834a81','nodekey:fa2e09d0c11508bfef7f3b63cf2f0aa752a092565cc28a1a7007cfe1d44addae','discokey:37c9fd82db6d47f8dfeec0ed6d764f8b604fb06ddf8f2cdf44eea45a840b16ff','srv-28','node081',23,'cli',NULL,NULL,'2025-06-06 
20:55:32.713163573+02:00',NULL,'{"fake":"data"}','["175.165.229.254:57553","107.46.106.255:64756","7.81.60.25:36846","[16f5:5847:5615:9f6f:b38a:e4eb:d340:c800]:62212"]','2025-02-17 16:05:05.921277876+01:00','2025-06-26 08:56:46.063988329+02:00',NULL,'100.64.0.73','fd7a:115c:a1e0::49',NULL); -INSERT INTO nodes VALUES(82,'mkey:9cff47309a04474303f70c020f06f8bb795bccfc71b2ebee64b5d2c5fa862f97','nodekey:538480654e054ac4e41a1249cf3a7780a48681ac2ed5c7f758fb5d1abe750389','discokey:9511b469afd6ac8c4644f6464873d440c6c4e4fc706e53d799bf7819556962e4','web-22','node082',23,'cli',NULL,NULL,'2025-06-28 11:30:16.497956346+02:00',NULL,'{"fake":"data"}','["[8c86:53d3:80d7:bd12:aa96:5eb6:7e53:dfbe]:39629","177.106.74.193:23401","153.144.178.85:48541","[a7b3:18ce:8c59:ff04:2b88:a0fb:7305:9a44]:27764"]','2025-02-28 09:21:23.143225002+01:00','2025-06-28 11:30:16.498254852+02:00',NULL,'100.64.0.69','fd7a:115c:a1e0::45',NULL); -INSERT INTO nodes VALUES(83,'mkey:619cc5464a91d8f554d47b641eb33921721d77cc277d1d4e662cb9a7e9a52b48','nodekey:8494fd062128da2a0b6cde389888fffe2e7384b5ea4f254c23392e62f7dc6962','discokey:4efe7a6440876724d36fa7380d9c73c43eb5d11a2b7c244628f878406fb166b3','db-00','node083',26,'cli',NULL,NULL,'2025-06-28 11:33:20.425827891+02:00',NULL,'{"fake":"data"}','["[f57a:5d73:f1b4:e8ff:6b9:ae85:1d9:244]:19001","3.20.49.173:50146"]','2025-03-18 09:09:50.849674955+01:00','2025-06-28 11:33:20.426570852+02:00',NULL,'100.64.0.74','fd7a:115c:a1e0::4a',NULL); -INSERT INTO nodes VALUES(84,'mkey:57b63829d09076ea08e0bacc6c66e99112804d4d05804fcd142550188928efc3','nodekey:f4506c8002304d533a91c84d4b8b450f0ead78a853dd1d962421d09f7fe323b5','discokey:e8b549a18c4ad009efe91b011ec3992d86cadb5fdc5838668835035d9442add7','srv-03','node084',23,'cli',NULL,NULL,'2025-06-06 20:55:32.855833618+02:00',NULL,'{"fake":"data"}','["195.54.83.239:15076","[3238:dbe9:f89d:91e1:6338:993d:3e76:8561]:32795","[b501:f57f:99da:e5f1:65fa:1d84:7d10:7d66]:24427","186.27.91.14:37325"]','2025-03-25 10:04:37.147174393+01:00','2025-06-06 20:55:32.866553475+02:00',NULL,'100.64.0.75','fd7a:115c:a1e0::4b',NULL); -INSERT INTO nodes VALUES(85,'mkey:569dd1b12fb2b9920f0366c57bdbf359f0e0239e6ef62cf7a1f29554f3e3ffe5','nodekey:94d32c5d5da99b19bd320ec4b7fe883e2d5bf137ede9262d035880f36610a39f','discokey:d25d5f42aee8b5d8e22b9ba1fa40e7295ad0f2b00c466d03c267ac9c0e00d152','email-49','node085',23,'cli',NULL,NULL,'2025-06-06 20:55:32.134989319+02:00',NULL,'{"fake":"data"}','["[b0bf:223c:e0af:4993:14e0:acb1:936e:4db2]:39578","222.45.50.133:6908","223.57.18.237:24236","172.116.24.79:64278"]','2025-03-25 10:08:52.471661925+01:00','2025-06-06 20:55:32.135474025+02:00',NULL,'100.64.0.76','fd7a:115c:a1e0::4c',NULL); -INSERT INTO nodes VALUES(86,'mkey:41dcb7bf690ffde927829637a6fc7c98b2600a504561ce7b0667f0f2aa1f3ab2','nodekey:633596e036705ce9ba03ce044cb7b5cd008f7045caee57fa594803a3b95c564a','discokey:9d96b830729e50d763debfefcd3af6f91ee9bb9daf53d6890652a25f3263f823','srv-96','node086',27,'cli',NULL,NULL,'2025-06-20 08:34:16.969649646+02:00',NULL,'{"fake":"data"}','["[a09b:5970:b127:fdfc:e68e:e496:bdee:6f12]:36132","51.12.68.23:57633","33.175.40.132:2790"]','2025-03-26 10:24:31.473595709+01:00','2025-06-20 08:34:16.970146755+02:00',NULL,'100.64.0.77','fd7a:115c:a1e0::4d',NULL); -INSERT INTO nodes VALUES(87,'mkey:3944556a65c99ee2890a8bd572dc3001f71f1eeebc580a026146eaa8def09009','nodekey:11966e544db141f610b5a1a864bd4be18373952b3a8ced40214105ef7371127f','discokey:a77a4b657a9ac80ad760d1e7348065578555373d8843162326e6366b0c54e444','web-11','node087',27,'cli',NULL,NULL,'2025-06-20 
08:29:06.936731549+02:00',NULL,'{"fake":"data"}','["16.142.13.19:36771","[ad8a:a3fa:8d4d:c659:5e32:45dd:d95f:abc8]:43118","[797c:a12f:a521:102b:e357:ad81:247:fdcf]:61479","[5f8:a29a:bcab:9a90:4bdc:7d00:46bc:a69e]:16041"]','2025-03-26 14:30:57.179694234+01:00','2025-06-20 08:29:10.787309705+02:00',NULL,'100.64.0.78','fd7a:115c:a1e0::4e',NULL); -INSERT INTO nodes VALUES(88,'mkey:8ba458a88b40b125982bf763462a8f252fc3e80fdf8f472ab1545842af53494f','nodekey:09e6884692ec3f7f53bcc16bea60e1d4dfab44dd459af7dc42ab263e1d78920b','discokey:a09ed63f785315ae2ab524c3b34aad6f12d23a96771def70a3e0fb2de7ae7416','laptop-52','node088',27,'cli',NULL,NULL,'2025-06-19 08:39:43.92787588+02:00',NULL,'{"fake":"data"}','["54.165.39.252:59066","[127c:b334:8ae3:358c:4178:fae4:9f31:521c]:33026","104.73.3.103:62847","[fd18:384b:ef76:f4b7:6983:bd4e:70af:ce35]:42618","7.111.70.18:4634"]','2025-03-26 14:37:55.198097806+01:00','2025-06-19 08:39:48.020920495+02:00',NULL,'100.64.0.79','fd7a:115c:a1e0::4f',NULL); -INSERT INTO nodes VALUES(89,'mkey:3a17d7f9a8e51885b842e63fffad7374b06e1804f3a48e9362c2cc3e46df0af7','nodekey:4c771b227735fcb0f7e6cef27350852bffa406ad446fd37cf60f98ecbacc04a7','discokey:6a2091fd8a448626ad02024b66216b22ed02926807c2b7a18877f300e1d1d632','srv-52','node089',28,'cli',NULL,NULL,'2025-06-28 07:45:05.917078332+02:00',NULL,'{"fake":"data"}','["[3769:2186:2b73:8e38:b9bb:54c6:e48a:6e65]:33253","[460c:9b85:4ea7:ea85:e205:aa7:210e:c098]:44172","222.138.65.141:16368","[4bdf:4d9a:f0d3:2bea:deb9:33d4:c163:513]:6983","212.18.134.205:41528","190.153.32.51:23708"]','2025-03-31 18:34:22.574460539+02:00','2025-06-28 07:45:05.917342923+02:00',NULL,'100.64.0.80','fd7a:115c:a1e0::50',NULL); -INSERT INTO nodes VALUES(90,'mkey:3b2445a4e3d1c6cf3d909493530eecf8bff85ef8a440ba467d32bf6f1b64d24b','nodekey:298e7d97f1d1289dff2249401d706813c0d89121fab70e2cc514f3a53dbf14c7','discokey:3cca67ae4c540ceb6a8a7b552fdd8eddb54db6a1c082e290a4c0aa638c5c5efe','srv-72','node090',27,'cli',NULL,NULL,'2025-06-20 23:20:42.928064133+02:00',NULL,'{"fake":"data"}','["[262b:3b74:f0ee:1ea9:9099:5c3d:f9a2:22]:46101","[40ca:40f3:9b64:de92:c1af:a19d:f7e0:1dbc]:30741","[a9f5:d5cd:d553:b5:17f3:3d89:8617:46a5]:21519","102.217.166.61:13721","46.95.71.249:58007","168.186.124.250:51025"]','2025-04-03 17:52:02.130025013+02:00','2025-06-20 23:20:42.928708477+02:00',NULL,'100.64.0.81','fd7a:115c:a1e0::51',NULL); -INSERT INTO nodes VALUES(91,'mkey:2eb9cfef2ed45380bbb4043e4aa79257b50ed4575489fb5c789469283f31610a','nodekey:b09c411a584f487513bdbebd8fb50f91ca57c0f544c0649b155ba892bc9173b3','discokey:e95823238298c4f5554dd2c69e1f62ef05cae05dcd1bd95045b582f2136a7757','db-74','node091',27,'cli',NULL,NULL,'2025-06-28 03:30:33.869801917+02:00',NULL,'{"fake":"data"}','["[575d:4099:ec56:8209:f0f:ed49:b6d9:f75b]:27401","[a35b:63a2:51a7:81a6:eb0b:5e72:f82e:bdec]:54393","84.160.88.244:21712"]','2025-04-05 18:49:13.742268632+02:00','2025-06-28 03:30:33.870261795+02:00',NULL,'100.64.0.82','fd7a:115c:a1e0::52',NULL); -INSERT INTO nodes VALUES(92,'mkey:51551f7bcfb7e10f052439502541e9c00c37a8f3de91ed0fe01e5c3c47dea5a3','nodekey:6975b5ed6c0cfa85e0cdf6152f2d6c0c273109bfcaa76860637de5f1dc9e40ac','discokey:dba4f8059d89090a2e198367bbd04d6c5475abca7ae3731b7aa29c382131af27','lt-17','node092',23,'cli',NULL,NULL,'2025-06-28 12:21:17.063796663+02:00',NULL,'{"fake":"data"}','["[8052:8707:2eea:c767:5493:dd06:2d75:55a1]:60873","[8c83:88e1:5427:d2e:8604:ff8b:3c0a:6587]:6368","182.184.190.162:26611","[c45e:f0f9:8bf6:6beb:9170:ae3e:bc81:1842]:3750"]','2025-04-10 09:33:19.75758384+02:00','2025-06-28 
12:21:17.064223203+02:00',NULL,'100.64.0.83','fd7a:115c:a1e0::54',NULL); -INSERT INTO nodes VALUES(93,'mkey:df4d9b0848620e7033c8cbecbe7cd076475433baa6a8233289a8207f538969c1','nodekey:2d50983fa2d5735e306a8c05630a41b897419528e2c9e7143efc29f2517777c6','discokey:7d62636f14b697c8d9666535fccc89cacb009088200957070d2ffb7108e79694','desktop-28','node093',23,'cli',NULL,NULL,'2025-06-28 12:23:07.890521061+02:00',NULL,'{"fake":"data"}','["[9497:e689:56ef:65ff:40ba:5bec:198:cbaa]:6478","165.179.14.165:11130","59.185.174.72:31766","[b430:987d:5157:7cdf:582b:c6c:801b:19d3]:8744"]','2025-04-10 11:00:40.243041051+02:00','2025-06-28 12:23:07.891010158+02:00',NULL,'100.64.0.84','fd7a:115c:a1e0::55',NULL); -INSERT INTO nodes VALUES(94,'mkey:0457d727c81909cd14e5edc1f8fd43c685bb5bca802c26df60fc27b90b66ff04','nodekey:316a01774fde64ebcfd3192f99ef9f83ff9c250b916090c1829d690d7ef5291e','discokey:de047a08a4c07142790d6919eb61e5d1c6bfebda7547778ba6e13b4166dd9100','email-53','node094',23,'cli',NULL,NULL,'2025-06-28 12:18:55.75713159+02:00',NULL,'{"fake":"data"}','["95.217.54.187:12234","2.119.104.181:58132","[7e19:7d5d:ce33:64aa:d9a0:5:ec9e:56c3]:40537","111.114.18.75:28011"]','2025-04-10 11:41:58.339188608+02:00','2025-06-28 12:18:55.757420045+02:00',NULL,'100.64.0.85','fd7a:115c:a1e0::56',NULL); -INSERT INTO nodes VALUES(95,'mkey:a0aad10c0947eabadb07e0a064b118abc6317194a20a35eefbf5f00497cc86b4','nodekey:f508bb43820174bbfbff0ddaa441477c6a4b1ad16e2d6a6c8bd8e7d579cc9512','discokey:7f3ea4389df4edfa3c5091be7ff8dab835b2584d453d8df7727d82828e9eb598','db-56','node095',1,'cli',NULL,NULL,'2025-06-28 10:23:24.075008579+02:00',NULL,'{"fake":"data"}','["24.36.243.82:39056","[100e:6264:4572:885e:777a:94d9:d296:29e5]:56545","[85c:f9c7:4780:41ba:b873:e740:7a65:aaf6]:46931","117.173.93.251:32036","3.127.130.51:31048","[b2b6:54b1:547b:3b76:a977:8ef0:9263:1d95]:10788"]','2025-04-18 07:31:25.494609012+02:00','2025-06-28 10:23:24.075270399+02:00',NULL,'100.64.0.86','fd7a:115c:a1e0::57',NULL); -INSERT INTO nodes VALUES(96,'mkey:9c90f9a2cf141fbf87bb7a713b11363c7700648c247b2be55880be65b097b32e','nodekey:a891e03b7e3f92ee155c51f966ff579e1b59affcdd7ffe2be0b17617fc5d2a4f','discokey:2d47647673be16002087672c8c3c333af739c0429b4ca3199a0b37c775f188ec','srv-31','node096',27,'cli',NULL,NULL,'2025-06-21 19:42:23.023186161+02:00',NULL,'{"fake":"data"}','["99.229.28.236:22736","[e5ea:7d6c:d0e6:153c:b9f6:17da:3b8e:a7dd]:11150","116.109.151.5:63986","[4167:adfc:1fcb:624c:e8d1:7469:aab7:e6a3]:29364","151.32.217.201:35528","129.148.62.154:38346","[5bcc:3f9b:413a:f79c:9785:4537:4acb:9123]:12439"]','2025-04-25 19:06:18.720283603+02:00','2025-06-21 19:42:23.023317999+02:00',NULL,'100.64.0.87','fd7a:115c:a1e0::58',NULL); -INSERT INTO nodes VALUES(97,'mkey:e403a9d40dd5cc7b3c34bb8e883069e46ea691bd901d80a18255b2e978ebbca3','nodekey:9fb4a4edd2f095f6d3ffc2783d13c1c1c9397edc7926a57a2b91d292dc6d9539','discokey:e3ddfd156fab89c7572aba20e9606ec10cdf6f8f60bf4a0e4b0ee9986b9fe875','desktop-17','node097',23,'cli',NULL,NULL,'2025-06-06 20:55:32.914042482+02:00',NULL,'{"fake":"data"}','["[5088:bc4f:e27b:708c:e75:6278:90c:92b5]:37686","[f352:c3ab:52a4:2b1c:d1b5:9ee4:a15e:d832]:39715"]','2025-05-14 09:57:07.359717602+02:00','2025-06-06 20:55:32.915628496+02:00',NULL,'100.64.0.88','fd7a:115c:a1e0::59',NULL); -INSERT INTO nodes 
VALUES(98,'mkey:076f21957e297a47c7e893b37dc38ccbc10d16041c45e733755f63a9adf27209','nodekey:23be3c688a8b508b5e09708e2a2623ece5b9ec426a776787ff163bea80104fa6','discokey:443acdc2d4a9c9f5c316b728916259e58eea2d0dc7e438181335306964c22121','desktop-72','node098',27,'cli',NULL,NULL,'2025-06-12 20:47:06.114210533+02:00',NULL,'{"fake":"data"}','["34.11.54.132:20475","[8104:4565:fd92:f07b:5ce7:8ad9:590e:349b]:64085","[fa37:1f6:5256:186d:2ce6:ed1f:b9b8:5d73]:42737","[6e79:cebf:1df1:83d1:d0de:38ca:4eaa:7bd3]:25953","[56df:3d29:e870:353a:bb28:ffcd:b225:5231]:57718","170.226.226.154:61176","[d122:a66a:2153:741a:b188:5260:a325:6c68]:17261","[67ca:e4a8:6a48:8880:1183:c56:7416:9af0]:57017","72.3.174.188:14460"]','2025-05-14 11:56:48.415832658+02:00','2025-06-12 20:47:06.114995503+02:00',NULL,'100.64.0.89','fd7a:115c:a1e0::5a',NULL); -INSERT INTO nodes VALUES(99,'mkey:24908f72d8e598ce86ab51aedbe19f7db7d459b33d53e0383231e3d54239680d','nodekey:37a38d1af4997a3a37406cfefc83240035cf5fbad122d46534e62ac9c9b23821','discokey:19926fa875306f0b304e898eee83f1711db11b5930a3016c18d0d503b37f0841','laptop-14','node099',14,'cli',NULL,NULL,'2025-06-06 20:55:32.627376311+02:00',NULL,'{"fake":"data"}','["[6228:aa08:b376:ded6:55d1:b081:66fa:6299]:59385"]','2025-05-22 13:59:12.642339275+02:00','2025-06-06 20:55:32.632840843+02:00',NULL,'100.64.0.90','fd7a:115c:a1e0::5b',NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE TABLE IF NOT EXISTS "users" (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text,PRIMARY KEY (`id`)); -INSERT INTO users VALUES(1,'2023-05-17 19:36:55.859473496+02:00','2025-05-14 19:39:49.446674051+02:00',NULL,'user001','','',NULL,'',''); -INSERT INTO users VALUES(2,'2023-05-17 19:36:57.059073465+02:00','2025-05-14 19:39:49.446939883+02:00',NULL,'user002','','',NULL,'',''); -INSERT INTO users VALUES(3,'2023-05-18 10:10:36.248939077+02:00','2025-05-14 19:39:49.447156233+02:00',NULL,'user003','','',NULL,'',''); -INSERT INTO users VALUES(4,'2023-06-10 09:06:13.920718561+02:00','2025-05-14 19:39:49.447397385+02:00',NULL,'user004','','',NULL,'',''); -INSERT INTO users VALUES(5,'2023-06-11 19:58:32.371218434+02:00','2025-05-14 19:39:49.447602838+02:00',NULL,'user005','','',NULL,'',''); -INSERT INTO users VALUES(6,'2023-06-17 19:39:53.031565686+02:00','2025-05-14 19:39:49.447803872+02:00',NULL,'user006','','',NULL,'',''); -INSERT INTO users VALUES(7,'2023-06-20 11:35:09.325846831+02:00','2025-05-14 19:39:49.447987808+02:00',NULL,'user007','','',NULL,'',''); -INSERT INTO users VALUES(8,'2023-06-21 22:47:48.196234382+02:00','2025-05-14 19:39:49.44820155+02:00',NULL,'user008','','',NULL,'',''); -INSERT INTO users VALUES(9,'2023-06-22 08:30:35.068995572+02:00','2025-05-14 19:39:49.448483514+02:00',NULL,'user009','','',NULL,'',''); -INSERT INTO users VALUES(10,'2023-07-03 10:18:32.123226+02:00','2025-05-14 19:39:49.448692002+02:00',NULL,'user010','','',NULL,'',''); -INSERT INTO users VALUES(11,'2023-07-03 10:18:37.130387602+02:00','2025-05-14 19:39:49.448883814+02:00',NULL,'user011','','',NULL,'',''); -INSERT INTO users VALUES(12,'2023-12-15 08:05:06.013615212+01:00','2025-05-14 19:39:49.449111509+02:00',NULL,'user012','','',NULL,'',''); -INSERT INTO users VALUES(13,'2024-02-03 16:32:42.224977233+01:00','2025-05-14 19:39:49.449357341+02:00',NULL,'user013','','',NULL,'',''); 
-INSERT INTO users VALUES(14,'2024-05-03 10:12:38.220973042+02:00','2025-05-14 19:39:49.449556253+02:00',NULL,'user014','','',NULL,'',''); -INSERT INTO users VALUES(15,'2024-07-26 08:08:40.979783263+02:00','2025-05-14 19:39:49.449755129+02:00',NULL,'user015','','',NULL,'',''); -INSERT INTO users VALUES(16,'2024-08-05 17:32:02.878091894+02:00','2025-05-14 19:39:49.449974677+02:00',NULL,'user016','','',NULL,'',''); -INSERT INTO users VALUES(17,'2024-09-22 15:48:00.287392203+02:00','2025-05-14 19:39:49.450181734+02:00',NULL,'user017','','',NULL,'',''); -INSERT INTO users VALUES(18,'2024-12-10 13:55:11.256977421+01:00','2025-05-14 19:39:49.450421161+02:00',NULL,'user018','','',NULL,'',''); -INSERT INTO users VALUES(19,'2024-12-17 14:57:58.550971236+01:00','2025-05-14 19:39:49.450633287+02:00',NULL,'user019','','',NULL,'',''); -INSERT INTO users VALUES(20,'2024-12-17 15:02:08.053169491+01:00','2025-05-14 19:39:49.450837357+02:00',NULL,'user020','','',NULL,'',''); -INSERT INTO users VALUES(21,'2025-01-28 15:57:32.774456057+01:00','2025-05-14 19:39:49.451012461+02:00',NULL,'user021','','',NULL,'',''); -INSERT INTO users VALUES(22,'2025-02-03 14:10:50.491924701+01:00','2025-05-14 19:39:49.451214878+02:00',NULL,'user022','','',NULL,'',''); -INSERT INTO users VALUES(23,'2025-02-14 16:58:30.250289644+01:00','2025-05-14 19:39:49.451415294+02:00',NULL,'user023','','',NULL,'',''); -INSERT INTO users VALUES(25,'2025-02-15 12:48:14.650995528+01:00','2025-05-14 19:39:49.451605671+02:00',NULL,'user025','','',NULL,'',''); -INSERT INTO users VALUES(26,'2025-03-18 09:09:00.456523573+01:00','2025-05-14 19:39:49.451854829+02:00',NULL,'user026','','',NULL,'',''); -INSERT INTO users VALUES(27,'2025-03-26 10:23:51.960113834+01:00','2025-05-14 19:39:49.452164228+02:00',NULL,'user027','','',NULL,'',''); -INSERT INTO users VALUES(28,'2025-03-31 18:25:26.535133091+02:00','2025-05-14 19:39:49.452536759+02:00',NULL,'user028','','',NULL,'',''); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.0_dump.sql deleted file mode 100644 index 110ae306..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.0_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` 
ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.0_schema.sql deleted file mode 100644 index 6e806a5f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.0_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.1_dump.sql deleted file mode 100644 index 110ae306..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.1_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.1_schema.sql deleted file mode 100644 index 6e806a5f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.1_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON 
`namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.2_dump.sql deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.2_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.2_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.2_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` 
JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.3_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.3_dump.sql deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.3_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.3_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.3_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.3_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.4_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.4_dump.sql 
deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.4_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.4_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.4_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.4_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.5_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.5_dump.sql deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.5_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY 
KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.5_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.5_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.5_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.6_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.6_dump.sql deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.6_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE 
TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.6_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.6_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.6_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.7_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.7_dump.sql deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.7_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.7_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.7_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- 
a/hscontrol/db/testdata/sqlite/headscale_0.10.7_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.8_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.8_dump.sql deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.8_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.10.8_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.10.8_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.10.8_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE 
`machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.11.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.11.0_dump.sql deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.11.0_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.11.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.11.0_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.11.0_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` 
datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.12.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.12.1_dump.sql deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.12.1_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.12.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.12.1_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.12.1_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.12.2-beta1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.12.2-beta1_dump.sql deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.12.2-beta1_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` 
integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.12.2-beta1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.12.2-beta1_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.12.2-beta1_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.12.2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.12.2_dump.sql deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.12.2_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` 
text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.12.2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.12.2_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.12.2_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.12.3_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.12.3_dump.sql deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.12.3_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX 
`idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.12.3_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.12.3_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.12.3_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.12.4_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.12.4_dump.sql deleted file mode 100644 index ea8d152f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.12.4_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.12.4_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.12.4_schema.sql deleted file mode 100644 index b2b5c649..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.12.4_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text 
UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.13.0-beta1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.13.0-beta1_dump.sql deleted file mode 100644 index 4c570225..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.13.0-beta1_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.13.0-beta1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.13.0-beta1_schema.sql deleted file mode 100644 index e8f2c4b8..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.13.0-beta1_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`registered` 
numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.13.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.13.0_dump.sql deleted file mode 100644 index 5e23c4b6..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.13.0_dump.sql +++ /dev/null @@ -1,13 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.13.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.13.0_schema.sql deleted file mode 100644 index 2d3095c3..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.13.0_schema.sql +++ /dev/null @@ -1,9 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE 
TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.14.0-beta1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.14.0-beta1_dump.sql deleted file mode 100644 index 5e23c4b6..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.14.0-beta1_dump.sql +++ /dev/null @@ -1,13 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.14.0-beta1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.14.0-beta1_schema.sql deleted file mode 100644 index 2d3095c3..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.14.0-beta1_schema.sql +++ /dev/null @@ -1,9 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` 
datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.14.0-beta2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.14.0-beta2_dump.sql deleted file mode 100644 index 5e23c4b6..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.14.0-beta2_dump.sql +++ /dev/null @@ -1,13 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.14.0-beta2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.14.0-beta2_schema.sql deleted file mode 100644 index 2d3095c3..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.14.0-beta2_schema.sql +++ /dev/null @@ -1,9 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` 
integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.14.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.14.0_dump.sql deleted file mode 100644 index 5e23c4b6..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.14.0_dump.sql +++ /dev/null @@ -1,13 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.14.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.14.0_schema.sql deleted file mode 100644 index 2d3095c3..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.14.0_schema.sql +++ /dev/null @@ -1,9 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON 
`shared_machines`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta1_dump.sql deleted file mode 100644 index 434e3b56..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta1_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta1_schema.sql deleted file mode 100644 index 5bf2de15..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta1_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta2_dump.sql deleted file mode 100644 index 194ee4cf..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta2_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; 
-CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta2_schema.sql deleted file mode 100644 index 31d2dedb..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta2_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta3_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta3_dump.sql deleted file mode 100644 index 194ee4cf..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta3_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` 
datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta3_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta3_schema.sql deleted file mode 100644 index 31d2dedb..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta3_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta4_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta4_dump.sql deleted file mode 100644 index 194ee4cf..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta4_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git 
a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta4_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta4_schema.sql deleted file mode 100644 index 31d2dedb..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta4_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta5_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta5_dump.sql deleted file mode 100644 index 194ee4cf..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta5_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta5_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta5_schema.sql deleted file mode 100644 index 31d2dedb..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta5_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT 
false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta6_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta6_dump.sql deleted file mode 100644 index 194ee4cf..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta6_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta6_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta6_schema.sql deleted file mode 100644 index 31d2dedb..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0-beta6_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` 
blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0_dump.sql deleted file mode 100644 index 194ee4cf..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.15.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.15.0_schema.sql deleted file mode 100644 index 31d2dedb..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.15.0_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`name` text,`namespace_id` integer,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.16.0-beta1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.16.0-beta1_dump.sql deleted file mode 100644 index 08b6d859..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.16.0-beta1_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` 
(`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.16.0-beta1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.16.0-beta1_schema.sql deleted file mode 100644 index f1fc3f5c..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.16.0-beta1_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.16.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.16.0_dump.sql deleted file mode 100644 index 08b6d859..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.16.0_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` 
text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.16.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.16.0_schema.sql deleted file mode 100644 index f1fc3f5c..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.16.0_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.16.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.16.1_dump.sql deleted file mode 100644 index 08b6d859..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.16.1_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git 
a/hscontrol/db/testdata/sqlite/headscale_0.16.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.16.1_schema.sql deleted file mode 100644 index f1fc3f5c..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.16.1_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.16.2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.16.2_dump.sql deleted file mode 100644 index 08b6d859..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.16.2_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.16.2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.16.2_schema.sql deleted file mode 100644 index f1fc3f5c..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.16.2_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` 
numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.16.3_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.16.3_dump.sql deleted file mode 100644 index 08b6d859..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.16.3_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.16.3_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.16.3_schema.sql deleted file mode 100644 index f1fc3f5c..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.16.3_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` 
datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.16.4_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.16.4_dump.sql deleted file mode 100644 index 08b6d859..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.16.4_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.16.4_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.16.4_schema.sql deleted file mode 100644 index f1fc3f5c..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.16.4_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha1_dump.sql deleted file mode 100644 index 08b6d859..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha1_dump.sql +++ /dev/null @@ 
-1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha1_schema.sql deleted file mode 100644 index f1fc3f5c..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha1_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha2_dump.sql deleted file mode 100644 index 08b6d859..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha2_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` 
text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha2_schema.sql deleted file mode 100644 index f1fc3f5c..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha2_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha3_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha3_dump.sql deleted file mode 100644 index 08b6d859..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha3_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` 
blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha3_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha3_schema.sql deleted file mode 100644 index f1fc3f5c..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha3_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha4_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha4_dump.sql deleted file mode 100644 index 033cf2f3..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha4_dump.sql +++ /dev/null @@ -1,12 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha4_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha4_schema.sql deleted file mode 100644 index 
d57975e8..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0-alpha4_schema.sql +++ /dev/null @@ -1,8 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0-beta1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0-beta1_dump.sql deleted file mode 100644 index 033cf2f3..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0-beta1_dump.sql +++ /dev/null @@ -1,12 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0-beta1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0-beta1_schema.sql deleted file mode 100644 index d57975e8..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0-beta1_schema.sql +++ /dev/null @@ -1,8 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON 
`namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0-beta5_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0-beta5_dump.sql deleted file mode 100644 index 033cf2f3..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0-beta5_dump.sql +++ /dev/null @@ -1,12 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0-beta5_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0-beta5_schema.sql deleted file mode 100644 index d57975e8..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0-beta5_schema.sql +++ /dev/null @@ -1,8 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` 
varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0_dump.sql deleted file mode 100644 index 033cf2f3..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0_dump.sql +++ /dev/null @@ -1,12 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.0_schema.sql deleted file mode 100644 index d57975e8..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.0_schema.sql +++ /dev/null @@ -1,8 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` 
datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.1_dump.sql deleted file mode 100644 index 033cf2f3..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.1_dump.sql +++ /dev/null @@ -1,12 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.17.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.17.1_schema.sql deleted file mode 100644 index d57975e8..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.17.1_schema.sql +++ /dev/null @@ -1,8 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`enabled_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` 
datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta1_dump.sql deleted file mode 100644 index a048c4e4..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta1_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta1_schema.sql deleted file mode 100644 index 33781c3d..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta1_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE 
`pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta2_dump.sql deleted file mode 100644 index a048c4e4..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta2_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta2_schema.sql deleted file mode 100644 index 33781c3d..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta2_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` 
text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta3_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta3_dump.sql deleted file mode 100644 index a048c4e4..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta3_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta3_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta3_schema.sql deleted file mode 100644 index 33781c3d..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta3_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` 
text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta4_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta4_dump.sql deleted file mode 100644 index a048c4e4..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta4_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta4_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta4_schema.sql deleted file mode 100644 index 33781c3d..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.18.0-beta4_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` 
text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.18.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.18.0_dump.sql deleted file mode 100644 index a048c4e4..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.18.0_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.18.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.18.0_schema.sql deleted file mode 100644 index 33781c3d..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.18.0_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` 
datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`namespace_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.19.0-beta1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.19.0-beta1_dump.sql deleted file mode 100644 index 75b5745f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.19.0-beta1_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.19.0-beta1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.19.0-beta1_schema.sql deleted file mode 100644 index 0a111497..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.19.0-beta1_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); 
-CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.19.0-beta2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.19.0-beta2_dump.sql deleted file mode 100644 index 75b5745f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.19.0-beta2_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.19.0-beta2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.19.0-beta2_schema.sql deleted file mode 100644 index 0a111497..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.19.0-beta2_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` 
integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.19.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.19.0_dump.sql deleted file mode 100644 index 75b5745f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.19.0_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.19.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.19.0_schema.sql deleted file mode 100644 index 
0a111497..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.19.0_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.2.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.2.0_dump.sql deleted file mode 100644 index 881209d8..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.2.0_dump.sql +++ /dev/null @@ -1,9 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.2.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.2.0_schema.sql deleted file mode 100644 index b806b014..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.2.0_schema.sql +++ /dev/null @@ -1,5 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE 
`machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.20.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.20.0_dump.sql deleted file mode 100644 index 75b5745f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.20.0_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.20.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.20.0_schema.sql deleted file mode 100644 index 0a111497..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.20.0_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` 
datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.21.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.21.0_dump.sql deleted file mode 100644 index 75b5745f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.21.0_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.21.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.21.0_schema.sql deleted file mode 100644 index 0a111497..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.21.0_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` 
datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha1_dump.sql deleted file mode 100644 index 75b5745f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha1_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha1_schema.sql deleted file mode 100644 index 0a111497..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha1_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` 
integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha2_dump.sql deleted file mode 100644 index 75b5745f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha2_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha2_schema.sql deleted file mode 100644 index 0a111497..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha2_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` 
varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha3_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha3_dump.sql deleted file mode 100644 index 75b5745f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha3_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha3_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha3_schema.sql deleted file mode 100644 index 0a111497..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.0-alpha3_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` 
numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.0_dump.sql deleted file mode 100644 index 75b5745f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.0_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.0_schema.sql deleted file mode 100644 index 0a111497..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.0_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON 
`users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.1_dump.sql deleted file mode 100644 index 75b5745f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.1_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.1_schema.sql deleted file mode 100644 index 0a111497..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.1_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` integer,`created_at` 
datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.2_dump.sql deleted file mode 100644 index 75b5745f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.2_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.2_schema.sql deleted file mode 100644 index 0a111497..00000000 --- 
a/hscontrol/db/testdata/sqlite/headscale_0.22.2_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.22.3_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.3_dump.sql deleted file mode 100644 index 4ded3308..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.3_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git 
a/hscontrol/db/testdata/sqlite/headscale_0.22.3_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.22.3_schema.sql deleted file mode 100644 index 89946ee4..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.22.3_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE IF NOT EXISTS "api_keys" (`id` integer,`prefix` text UNIQUE,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha10_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha10_dump.sql deleted file mode 100644 index 2752ad0a..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha10_dump.sql +++ /dev/null @@ -1,19 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` 
text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha10_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha10_schema.sql deleted file mode 100644 index 22eb3376..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha10_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha11_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha11_dump.sql deleted file mode 100644 index 2752ad0a..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha11_dump.sql +++ /dev/null @@ -1,19 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` 
integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha11_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha11_schema.sql deleted file mode 100644 index 22eb3376..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha11_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha12_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha12_dump.sql deleted file mode 100644 index 18d4af75..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha12_dump.sql +++ /dev/null @@ -1,19 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY 
AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha12_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha12_schema.sql deleted file mode 100644 index 47199e0e..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha12_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` 
datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha1_dump.sql deleted file mode 100644 index f96cd2fe..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha1_dump.sql +++ /dev/null @@ -1,14 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `nodes` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha1_schema.sql deleted file mode 100644 index 87f9ddbb..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha1_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `users` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `nodes` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` 
text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` text,`endpoints` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `routes` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,PRIMARY KEY (`id`)); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer,`pre_auth_key_id` integer,`tag` text,PRIMARY KEY (`id`)); -CREATE TABLE `api_keys` (`id` integer,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime,PRIMARY KEY (`id`)); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha2_dump.sql deleted file mode 100644 index 381f54f6..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha2_dump.sql +++ /dev/null @@ -1,16 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `nodes` (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -DELETE FROM sqlite_sequence; -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha2_schema.sql deleted file mode 100644 index d4f5f077..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha2_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT 
false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `nodes` (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha3_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha3_dump.sql deleted file mode 100644 index 381f54f6..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha3_dump.sql +++ /dev/null @@ -1,16 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `nodes` (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -DELETE FROM sqlite_sequence; -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha3_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha3_schema.sql deleted file mode 100644 index d4f5f077..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha3_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` 
datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `nodes` (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha4_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha4_dump.sql deleted file mode 100644 index ce973b6c..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha4_dump.sql +++ /dev/null @@ -1,16 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `nodes` (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -DELETE FROM sqlite_sequence; -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha4_schema.sql 
b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha4_schema.sql deleted file mode 100644 index fa581182..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha4_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `nodes` (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha5_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha5_dump.sql deleted file mode 100644 index 055dc79f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha5_dump.sql +++ /dev/null @@ -1,18 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` 
datetime,`updated_at` datetime,`deleted_at` datetime); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha5_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha5_schema.sql deleted file mode 100644 index e621195c..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha5_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ip_addresses` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha7_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha7_dump.sql deleted file mode 100644 index 2752ad0a..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha7_dump.sql +++ /dev/null @@ -1,19 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` 
text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha7_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha7_schema.sql deleted file mode 100644 index 22eb3376..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha7_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha8_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha8_dump.sql deleted file mode 100644 index 2752ad0a..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha8_dump.sql +++ /dev/null @@ -1,19 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` 
datetime,`expiration` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha8_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha8_schema.sql deleted file mode 100644 index 22eb3376..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha8_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha9_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha9_dump.sql deleted file mode 100644 index 2752ad0a..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha9_dump.sql +++ /dev/null @@ -1,19 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations 
VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha9_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha9_schema.sql deleted file mode 100644 index 22eb3376..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-alpha9_schema.sql +++ /dev/null @@ -1,10 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime); diff --git 
a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta.4_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta.4_dump.sql deleted file mode 100644 index c17123fa..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta.4_dump.sql +++ /dev/null @@ -1,22 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta.4_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta.4_schema.sql deleted file mode 100644 index f7245f3e..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta.4_schema.sql +++ /dev/null @@ -1,12 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE INDEX `idx_users_deleted_at` ON 
`users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta.5_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta.5_dump.sql deleted file mode 100644 index c17123fa..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta.5_dump.sql +++ /dev/null @@ -1,22 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `pre_auth_key_acl_tags` (`id` 
integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta.5_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta.5_schema.sql deleted file mode 100644 index f7245f3e..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta.5_schema.sql +++ /dev/null @@ -1,12 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` 
integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta1_dump.sql deleted file mode 100644 index c17123fa..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta1_dump.sql +++ /dev/null @@ -1,22 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -COMMIT; diff --git 
a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta1_schema.sql deleted file mode 100644 index f7245f3e..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta1_schema.sql +++ /dev/null @@ -1,12 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta2_dump.sql deleted file mode 100644 index c17123fa..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta2_dump.sql +++ /dev/null @@ -1,22 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` 
numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta2_schema.sql deleted file mode 100644 index f7245f3e..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta2_schema.sql +++ /dev/null @@ -1,12 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES 
`pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta3_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta3_dump.sql deleted file mode 100644 index c17123fa..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta3_dump.sql +++ /dev/null @@ -1,22 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES 
`pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta3_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta3_schema.sql deleted file mode 100644 index f7245f3e..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-beta3_schema.sql +++ /dev/null @@ -1,12 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-rc.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-rc.1_dump.sql deleted file mode 100644 index c17123fa..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-rc.1_dump.sql +++ /dev/null @@ -1,22 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO 
migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0-rc.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0-rc.1_schema.sql deleted file mode 100644 index f7245f3e..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0-rc.1_schema.sql +++ /dev/null @@ -1,12 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY 
AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0_dump.sql deleted file mode 100644 index c17123fa..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0_dump.sql +++ /dev/null @@ -1,22 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY 
AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.23.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.23.0_schema.sql deleted file mode 100644 index f7245f3e..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.23.0_schema.sql +++ /dev/null @@ -1,12 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,CONSTRAINT `uni_users_name` UNIQUE (`name`)); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `pre_auth_key_acl_tags` (`id` integer PRIMARY KEY AUTOINCREMENT,`pre_auth_key_id` integer,`tag` text,CONSTRAINT `fk_pre_auth_keys_acl_tags` FOREIGN KEY (`pre_auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` 
datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.24.0-beta.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.24.0-beta.1_dump.sql deleted file mode 100644 index 82483273..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.24.0-beta.1_dump.sql +++ /dev/null @@ -1,27 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.24.0-beta.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.24.0-beta.1_schema.sql deleted file mode 100644 index 
d950f080..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.24.0-beta.1_schema.sql +++ /dev/null @@ -1,14 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.24.0-beta.2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.24.0-beta.2_dump.sql deleted file mode 100644 index 82483273..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.24.0-beta.2_dump.sql +++ /dev/null @@ -1,27 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` 
text,`provider` text,`profile_pic_url` text); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.24.0-beta.2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.24.0-beta.2_schema.sql deleted file mode 100644 index d950f080..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.24.0-beta.2_schema.sql +++ /dev/null @@ -1,14 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY 
(`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.24.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.24.0_dump.sql deleted file mode 100644 index 82483273..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.24.0_dump.sql +++ /dev/null @@ -1,27 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` 
integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.24.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.24.0_schema.sql deleted file mode 100644 index d950f080..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.24.0_schema.sql +++ /dev/null @@ -1,14 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX 
`idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.24.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.24.1_dump.sql deleted file mode 100644 index 36caa47a..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.24.1_dump.sql +++ /dev/null @@ -1,28 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE 
UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.24.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.24.1_schema.sql deleted file mode 100644 index 5d10fff4..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.24.1_schema.sql +++ /dev/null @@ -1,14 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.24.2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.24.2_dump.sql deleted file mode 100644 index 36caa47a..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.24.2_dump.sql +++ /dev/null @@ -1,28 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); 
-INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.24.2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.24.2_schema.sql deleted file mode 100644 index 5d10fff4..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.24.2_schema.sql +++ /dev/null @@ -1,14 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE 
CASCADE); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.24.3_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.24.3_dump.sql deleted file mode 100644 index e1d96376..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.24.3_dump.sql +++ /dev/null @@ -1,30 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -INSERT INTO migrations VALUES('202501311657'); -INSERT INTO migrations VALUES('202502070949'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES 
`nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`)); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.24.3_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.24.3_schema.sql deleted file mode 100644 index a98f7d34..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.24.3_schema.sql +++ /dev/null @@ -1,14 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` 
datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`)); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.25.0-beta.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.25.0-beta.1_dump.sql deleted file mode 100644 index 248815f6..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.25.0-beta.1_dump.sql +++ /dev/null @@ -1,29 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -INSERT INTO migrations VALUES('202501311657'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`)); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX 
`idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.25.0-beta.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.25.0-beta.1_schema.sql deleted file mode 100644 index a98f7d34..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.25.0-beta.1_schema.sql +++ /dev/null @@ -1,14 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`)); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.25.0-beta.2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.25.0-beta.2_schema.sql deleted file mode 100644 index 83a1d0ba..00000000 --- 
a/hscontrol/db/testdata/sqlite/headscale_0.25.0-beta.2_schema.sql +++ /dev/null @@ -1,14 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.25.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.25.0_dump.sql deleted file mode 100644 index e1d96376..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.25.0_dump.sql +++ /dev/null @@ -1,30 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -INSERT INTO migrations VALUES('202501311657'); -INSERT INTO migrations VALUES('202502070949'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` 
datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`)); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.25.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.25.0_schema.sql deleted file mode 100644 index a98f7d34..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.25.0_schema.sql +++ /dev/null @@ -1,14 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` 
numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`)); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.25.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.25.1_dump.sql deleted file mode 100644 index e1d96376..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.25.1_dump.sql +++ /dev/null @@ -1,30 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -INSERT INTO migrations VALUES('202312101416'); -INSERT INTO migrations VALUES('202312101430'); -INSERT INTO migrations VALUES('202402151347'); -INSERT INTO migrations VALUES('2024041121742'); -INSERT INTO migrations VALUES('202406021630'); -INSERT INTO migrations VALUES('202409271400'); -INSERT INTO migrations VALUES('202407191627'); -INSERT INTO migrations VALUES('202408181235'); -INSERT INTO migrations VALUES('202501221827'); -INSERT INTO migrations VALUES('202501311657'); -INSERT INTO migrations VALUES('202502070949'); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE TABLE IF NOT 
EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`)); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -DELETE FROM sqlite_sequence; -INSERT INTO sqlite_sequence VALUES('nodes',0); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.25.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.25.1_schema.sql deleted file mode 100644 index a98f7d34..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.25.1_schema.sql +++ /dev/null @@ -1,14 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) 
REFERENCES `pre_auth_keys`(`id`)); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.26.0-beta.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.26.0-beta.1_schema.sql deleted file mode 100644 index 9cdab563..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.26.0-beta.1_schema.sql +++ /dev/null @@ -1,12 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.26.0-beta.2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.26.0-beta.2_schema.sql deleted file mode 100644 index 2a5c360d..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.26.0-beta.2_schema.sql +++ /dev/null @@ -1,12 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON 
`users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`)); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.26.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.26.0_schema.sql deleted file mode 100644 index 2a5c360d..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.26.0_schema.sql +++ /dev/null @@ -1,12 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES 
`pre_auth_keys`(`id`));
-CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
-CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`);
-CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;
-CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier);
-CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;
diff --git a/hscontrol/db/testdata/sqlite/headscale_0.25.0-beta.2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.26.1_dump-litestream.sql
similarity index 76%
rename from hscontrol/db/testdata/sqlite/headscale_0.25.0-beta.2_dump.sql
rename to hscontrol/db/testdata/sqlite/headscale_0.26.1_dump-litestream.sql
index d538b878..c8c05755 100644
--- a/hscontrol/db/testdata/sqlite/headscale_0.25.0-beta.2_dump.sql
+++ b/hscontrol/db/testdata/sqlite/headscale_0.26.1_dump-litestream.sql
@@ -12,19 +12,23 @@ INSERT INTO migrations VALUES('202408181235');
 INSERT INTO migrations VALUES('202501221827');
 INSERT INTO migrations VALUES('202501311657');
 INSERT INTO migrations VALUES('202502070949');
+INSERT INTO migrations VALUES('202502131714');
+INSERT INTO migrations VALUES('202502171819');
+INSERT INTO migrations VALUES('202505091439');
+INSERT INTO migrations VALUES('202505141324');
 CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text);
 CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL);
-CREATE TABLE `routes` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`node_id` integer NOT NULL,`prefix` text,`advertised` numeric,`enabled` numeric,`is_primary` numeric,CONSTRAINT `fk_nodes_routes` FOREIGN KEY (`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE);
 CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime);
-CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`),CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE);
+CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`));
 CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
 DELETE FROM sqlite_sequence;
 INSERT INTO sqlite_sequence VALUES('nodes',0);
 CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`);
-CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`);
 CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`);
 CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`);
 CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;
 CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier);
 CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;
+CREATE TABLE _litestream_seq (id INTEGER PRIMARY KEY, seq INTEGER);
+CREATE TABLE _litestream_lock (id INTEGER);
 COMMIT;
diff --git a/hscontrol/db/testdata/sqlite/headscale_0.26.1_schema-litestream.sql b/hscontrol/db/testdata/sqlite/headscale_0.26.1_schema-litestream.sql
deleted file mode 100644
index 3fc2b319..00000000
--- a/hscontrol/db/testdata/sqlite/headscale_0.26.1_schema-litestream.sql
+++ /dev/null
@@ -1,14 +0,0 @@
-CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`));
-CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text);
-CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`);
-CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL);
-CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime);
-CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`);
-CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`));
-CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
-CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`);
-CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;
-CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier);
-CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;
-CREATE TABLE _litestream_seq (id INTEGER PRIMARY KEY, seq INTEGER);
-CREATE TABLE
_litestream_lock (id INTEGER); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.26.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.26.1_schema.sql deleted file mode 100644 index 2a5c360d..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.26.1_schema.sql +++ /dev/null @@ -1,12 +0,0 @@ -CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); -CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); -CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`)); -CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL; -CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier); -CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.3.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.3.0_dump.sql deleted file mode 100644 index 881209d8..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.3.0_dump.sql +++ /dev/null @@ -1,9 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE INDEX 
`idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.3.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.3.0_schema.sql deleted file mode 100644 index b806b014..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.3.0_schema.sql +++ /dev/null @@ -1,5 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.4.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.4.0_dump.sql deleted file mode 100644 index 881209d8..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.4.0_dump.sql +++ /dev/null @@ -1,9 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.4.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.4.0_schema.sql deleted file mode 100644 index b806b014..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.4.0_schema.sql +++ /dev/null @@ -1,5 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); 
diff --git a/hscontrol/db/testdata/sqlite/headscale_0.5.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.5.0_dump.sql deleted file mode 100644 index 881209d8..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.5.0_dump.sql +++ /dev/null @@ -1,9 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.5.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.5.0_schema.sql deleted file mode 100644 index b806b014..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.5.0_schema.sql +++ /dev/null @@ -1,5 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.6.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.6.0_dump.sql deleted file mode 100644 index 881209d8..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.6.0_dump.sql +++ /dev/null @@ -1,9 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE INDEX `idx_namespaces_deleted_at` ON 
`namespaces`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.6.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.6.0_schema.sql deleted file mode 100644 index b806b014..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.6.0_schema.sql +++ /dev/null @@ -1,5 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.7.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.7.0_dump.sql deleted file mode 100644 index 2599cdda..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.7.0_dump.sql +++ /dev/null @@ -1,9 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.7.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.7.0_schema.sql deleted file mode 100644 index 43d5bf08..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.7.0_schema.sql +++ /dev/null @@ -1,5 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE 
TABLE `kvs` (`key` text,`value` text); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.7.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.7.1_dump.sql deleted file mode 100644 index 2599cdda..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.7.1_dump.sql +++ /dev/null @@ -1,9 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.7.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.7.1_schema.sql deleted file mode 100644 index 43d5bf08..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.7.1_schema.sql +++ /dev/null @@ -1,5 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.8.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.8.0_dump.sql deleted file mode 100644 index 110ae306..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.8.0_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE 
TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.8.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.8.0_schema.sql deleted file mode 100644 index 6e806a5f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.8.0_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.8.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.8.1_dump.sql deleted file mode 100644 index 110ae306..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.8.1_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.8.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.8.1_schema.sql deleted file mode 100644 index 6e806a5f..00000000 --- 
a/hscontrol/db/testdata/sqlite/headscale_0.8.1_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.9.0_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.9.0_dump.sql deleted file mode 100644 index 110ae306..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.9.0_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.9.0_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.9.0_schema.sql deleted file mode 100644 index 6e806a5f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.9.0_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` 
text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.9.1_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.9.1_dump.sql deleted file mode 100644 index 110ae306..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.9.1_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.9.1_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.9.1_schema.sql deleted file mode 100644 index 6e806a5f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.9.1_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` 
ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.9.2_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.9.2_dump.sql deleted file mode 100644 index 110ae306..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.9.2_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.9.2_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.9.2_schema.sql deleted file mode 100644 index 6e806a5f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.9.2_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/headscale_0.9.3_dump.sql b/hscontrol/db/testdata/sqlite/headscale_0.9.3_dump.sql deleted file mode 100644 index 110ae306..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.9.3_dump.sql +++ /dev/null @@ -1,11 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric 
DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -INSERT INTO kvs VALUES('db_version','1'); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/testdata/sqlite/headscale_0.9.3_schema.sql b/hscontrol/db/testdata/sqlite/headscale_0.9.3_schema.sql deleted file mode 100644 index 6e806a5f..00000000 --- a/hscontrol/db/testdata/sqlite/headscale_0.9.3_schema.sql +++ /dev/null @@ -1,7 +0,0 @@ -CREATE TABLE `namespaces` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text UNIQUE,PRIMARY KEY (`id`)); -CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`); -CREATE TABLE `pre_auth_keys` (`id` integer,`key` text,`namespace_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`created_at` datetime,`expiration` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `machines` (`id` integer,`machine_key` varchar(64),`node_key` text,`disco_key` text,`ip_address` text,`name` text,`namespace_id` integer,`registered` numeric,`register_method` text,`auth_key_id` integer,`last_seen` datetime,`last_successful_update` datetime,`expiry` datetime,`host_info` JSON,`endpoints` JSON,`enabled_routes` JSON,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,PRIMARY KEY (`id`)); -CREATE TABLE `kvs` (`key` text,`value` text); -CREATE TABLE `shared_machines` (`id` integer,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`machine_id` integer,`namespace_id` integer,PRIMARY KEY (`id`)); -CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`); diff --git a/hscontrol/db/testdata/sqlite/request_tags_migration_test.sql b/hscontrol/db/testdata/sqlite/request_tags_migration_test.sql new file mode 100644 index 00000000..6a6c1568 --- /dev/null +++ b/hscontrol/db/testdata/sqlite/request_tags_migration_test.sql @@ -0,0 +1,119 @@ +-- Test SQL dump for RequestTags migration (202601121700-migrate-hostinfo-request-tags) +-- and forced_tags->tags rename migration (202511131445-node-forced-tags-to-tags) +-- +-- This dump simulates a 0.27.x database where: +-- - Tags from --advertise-tags were stored only in host_info.RequestTags +-- - The tags column is still named forced_tags +-- +-- Test scenarios: +-- 1. Node with RequestTags that user is authorized for (should be migrated) +-- 2. Node with RequestTags that user is NOT authorized for (should be rejected) +-- 3. Node with existing forced_tags that should be preserved +-- 4. Node with RequestTags that overlap with existing tags (no duplicates) +-- 5. Node without RequestTags (should be unchanged) +-- 6. 
Node with RequestTags via group membership (should be migrated) + +PRAGMA foreign_keys=OFF; +BEGIN TRANSACTION; + +-- Migrations table - includes all migrations BEFORE the two tag migrations +CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`)); +INSERT INTO migrations VALUES('202312101416'); +INSERT INTO migrations VALUES('202312101430'); +INSERT INTO migrations VALUES('202402151347'); +INSERT INTO migrations VALUES('2024041121742'); +INSERT INTO migrations VALUES('202406021630'); +INSERT INTO migrations VALUES('202409271400'); +INSERT INTO migrations VALUES('202407191627'); +INSERT INTO migrations VALUES('202408181235'); +INSERT INTO migrations VALUES('202501221827'); +INSERT INTO migrations VALUES('202501311657'); +INSERT INTO migrations VALUES('202502070949'); +INSERT INTO migrations VALUES('202502131714'); +INSERT INTO migrations VALUES('202502171819'); +INSERT INTO migrations VALUES('202505091439'); +INSERT INTO migrations VALUES('202505141324'); +INSERT INTO migrations VALUES('202507021200'); +INSERT INTO migrations VALUES('202510311551'); +INSERT INTO migrations VALUES('202511101554-drop-old-idx'); +INSERT INTO migrations VALUES('202511011637-preauthkey-bcrypt'); +INSERT INTO migrations VALUES('202511122344-remove-newline-index'); +-- Note: 202511131445-node-forced-tags-to-tags is NOT included - it will run +-- Note: 202601121700-migrate-hostinfo-request-tags is NOT included - it will run + +-- Users table +-- Note: User names must match the usernames in the policy (with @) +CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text); +INSERT INTO users VALUES(1,'2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL,'user1@example.com','User One','user1@example.com',NULL,NULL,NULL); +INSERT INTO users VALUES(2,'2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL,'user2@example.com','User Two','user2@example.com',NULL,NULL,NULL); +INSERT INTO users VALUES(3,'2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL,'admin1@example.com','Admin One','admin1@example.com',NULL,NULL,NULL); + +-- Pre-auth keys table +CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,`prefix` text,`hash` blob,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL); + +-- API keys table +CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime); + +-- Nodes table - using OLD schema with forced_tags (not tags) +CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`)); + +-- Node 1: user1 owns it, has RequestTags for tag:server 
(user1 is authorized for this tag) +-- Expected: tag:server should be added to tags +INSERT INTO nodes VALUES(1,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e01','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605501','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57701','[]','{"RequestTags":["tag:server"]}','100.64.0.1','fd7a:115c:a1e0::1','node1','node1',1,'oidc','[]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL); + +-- Node 2: user1 owns it, has RequestTags for tag:unauthorized (user1 is NOT authorized for this tag) +-- Expected: tag:unauthorized should be rejected, tags stays empty +INSERT INTO nodes VALUES(2,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e02','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605502','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57702','[]','{"RequestTags":["tag:unauthorized"]}','100.64.0.2','fd7a:115c:a1e0::2','node2','node2',1,'oidc','[]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL); + +-- Node 3: user2 owns it, has RequestTags for tag:client (user2 is authorized) +-- Also has existing forced_tags that should be preserved +-- Expected: tag:client added, tag:existing preserved +INSERT INTO nodes VALUES(3,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e03','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605503','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57703','[]','{"RequestTags":["tag:client"]}','100.64.0.3','fd7a:115c:a1e0::3','node3','node3',2,'oidc','["tag:existing"]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL); + +-- Node 4: user1 owns it, has RequestTags for tag:server which already exists in forced_tags +-- Expected: no duplicates, tags should be ["tag:server"] +INSERT INTO nodes VALUES(4,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e04','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605504','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57704','[]','{"RequestTags":["tag:server"]}','100.64.0.4','fd7a:115c:a1e0::4','node4','node4',1,'oidc','["tag:server"]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL); + +-- Node 5: user2 owns it, no RequestTags in host_info +-- Expected: tags unchanged (empty) +INSERT INTO nodes VALUES(5,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e05','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605505','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57705','[]','{}','100.64.0.5','fd7a:115c:a1e0::5','node5','node5',2,'oidc','[]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL); + +-- Node 6: admin1 owns it, has RequestTags for tag:admin (admin1 is in group:admins which owns tag:admin) +-- Expected: tag:admin should be added via group membership +INSERT INTO nodes 
VALUES(6,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e06','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605506','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57706','[]','{"RequestTags":["tag:admin"]}','100.64.0.6','fd7a:115c:a1e0::6','node6','node6',3,'oidc','[]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL); + +-- Node 7: user1 owns it, has multiple RequestTags (tag:server authorized, tag:forbidden not authorized) +-- Expected: tag:server added, tag:forbidden rejected +INSERT INTO nodes VALUES(7,'mkey:a0ab77456320823945ae0331823e3c0d516fae9585bd42698dfa1ac3d7679e07','nodekey:7c84167ab68f494942de14deb83587fd841843de2bac105b6c670048c1605507','discokey:53075b3c6cad3b62a2a29caea61beeb93f66b8c75cb89dac465236a5bbf57707','[]','{"RequestTags":["tag:server","tag:forbidden"]}','100.64.0.7','fd7a:115c:a1e0::7','node7','node7',1,'oidc','[]',NULL,'0001-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00','[]','2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL); + +-- Policies table with tagOwners defining who can use which tags +-- Note: Usernames in policy must contain @ (e.g., user1@example.com or just user1@) +CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text); +INSERT INTO policies VALUES(1,'2024-01-01 00:00:00+00:00','2024-01-01 00:00:00+00:00',NULL,'{ + "groups": { + "group:admins": ["admin1@example.com"] + }, + "tagOwners": { + "tag:server": ["user1@example.com"], + "tag:client": ["user1@example.com", "user2@example.com"], + "tag:admin": ["group:admins"] + }, + "acls": [ + {"action": "accept", "src": ["*"], "dst": ["*:*"]} + ] +}'); + +-- Indexes (using exact format expected by schema validation) +DELETE FROM sqlite_sequence; +INSERT INTO sqlite_sequence VALUES('users',3); +INSERT INTO sqlite_sequence VALUES('nodes',7); +INSERT INTO sqlite_sequence VALUES('policies',1); +CREATE INDEX idx_users_deleted_at ON users(deleted_at); +CREATE UNIQUE INDEX idx_api_keys_prefix ON api_keys(prefix); +CREATE INDEX idx_policies_deleted_at ON policies(deleted_at); +CREATE UNIQUE INDEX idx_provider_identifier ON users(provider_identifier) WHERE provider_identifier IS NOT NULL; +CREATE UNIQUE INDEX idx_name_provider_identifier ON users(name, provider_identifier); +CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users(name) WHERE provider_identifier IS NULL; +CREATE UNIQUE INDEX IF NOT EXISTS idx_pre_auth_keys_prefix ON pre_auth_keys(prefix) WHERE prefix IS NOT NULL AND prefix != ''; + +COMMIT; diff --git a/hscontrol/db/testdata/sqlite/wrongly-migrated-schema-0.25.1_dump.sql b/hscontrol/db/testdata/sqlite/wrongly-migrated-schema-0.25.1_dump.sql deleted file mode 100644 index af55266f..00000000 --- a/hscontrol/db/testdata/sqlite/wrongly-migrated-schema-0.25.1_dump.sql +++ /dev/null @@ -1,101 +0,0 @@ -PRAGMA foreign_keys=OFF; -BEGIN TRANSACTION; -CREATE TABLE `migrations`(`id` text,PRIMARY KEY(`id`)); -INSERT INTO migrations VALUES('202505141324'); -CREATE TABLE IF NOT EXISTS "users"( - `id` integer, - `created_at` datetime, - `updated_at` datetime, - `deleted_at` datetime, - `name` text UNIQUE, - `display_name` text, - `email` text, - `provider_identifier` text, - `provider` text, - `profile_pic_url` text, - PRIMARY KEY(`id`) -); -INSERT INTO users VALUES(1,'2024-09-27 14:26:08.573622915+00:00','2024-09-27 
14:26:08.573622915+00:00',NULL,'user2',NULL,NULL,NULL,NULL,NULL); -INSERT INTO users VALUES(2,'2024-09-27 14:26:17.094350688+00:00','2024-09-27 14:26:17.094350688+00:00',NULL,'user1',NULL,NULL,NULL,NULL,NULL); -CREATE TABLE IF NOT EXISTS "pre_auth_keys"( - `id` integer, - `key` text, - `user_id` integer, - `reusable` numeric, - `ephemeral` numeric DEFAULT false, - `used` numeric DEFAULT false, - `tags` text, - `created_at` datetime, - `expiration` datetime, - PRIMARY KEY(`id`), - CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY(`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE -); -INSERT INTO pre_auth_keys VALUES(1,'3d133ec953e31fd41edbd935371234f762b4bae300cea618',1,1,0,1,NULL,'2024-09-27 14:26:14.737869796+00:00','2024-09-28 14:26:14.736601748+00:00'); -INSERT INTO pre_auth_keys VALUES(2,'9813cc1df1832259fb6322dad788bb9bec89d8a01eef683a',2,1,0,1,NULL,'2024-09-27 14:26:23.181049239+00:00','2024-09-28 14:26:23.179903567+00:00'); -CREATE TABLE IF NOT EXISTS "routes"( - `id` integer, - `created_at` datetime, - `updated_at` datetime, - `deleted_at` datetime, - `node_id` integer NOT NULL, - `prefix` text, - `advertised` numeric, - `enabled` numeric, - `is_primary` numeric, - PRIMARY KEY(`id`), - CONSTRAINT `fk_nodes_routes` FOREIGN KEY(`node_id`) REFERENCES `nodes`(`id`) ON DELETE CASCADE -); -INSERT INTO routes VALUES(1,'2025-06-18 10:00:00','2025-06-18 10:00:00',NULL,1,'0.0.0.0/0',1,1,0); -INSERT INTO routes VALUES(2,'2025-06-18 10:00:00','2025-06-18 10:00:00',NULL,1,'::/0',1,1,0); -INSERT INTO routes VALUES(3,'2025-06-18 10:00:00','2025-06-18 10:00:00',NULL,2,'192.168.100.0/24',1,1,1); -INSERT INTO routes VALUES(4,'2025-06-18 10:00:00','2025-06-18 10:00:00',NULL,3,'10.0.0.0/8',1,0,0); -CREATE TABLE IF NOT EXISTS "api_keys"( - `id` integer, - `prefix` text UNIQUE, - `hash` blob, - `created_at` datetime, - `expiration` datetime, - `last_seen` datetime, - PRIMARY KEY(`id`) -); -INSERT INTO api_keys VALUES(1,'ak_test',X'deadbeef','2025-12-31 23:59:59','2025-06-18 12:00:00','2025-06-18 10:00:00'); -CREATE TABLE IF NOT EXISTS "nodes"( - `id` integer, - `machine_key` text, - `node_key` text, - `disco_key` text, - `endpoints` text, - `host_info` text, - `ipv4` text, - `ipv6` text, - `hostname` text, - `given_name` varchar(63), - `user_id` integer, - `register_method` text, - `forced_tags` text, - `auth_key_id` integer, - `last_seen` datetime, - `expiry` datetime, - `created_at` datetime, - `updated_at` datetime, - `deleted_at` datetime, - PRIMARY KEY(`id`), - CONSTRAINT `fk_nodes_user` FOREIGN KEY(`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE, - CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY(`auth_key_id`) REFERENCES `pre_auth_keys`(`id`) ON DELETE SET NULL -); -INSERT INTO nodes VALUES(1,'mkey:1efe4388236c1c83fe0a19d3ce7c321ab81e138a4da57917c231ce4c01944409','nodekey:4091de8ee569b46a0cf322ae7350e80f3af4ccfd6d83a27ad4ce455982bd0f52','discokey:0ec0a701b7596a230fff993483c12019951899920fbc1eefa90f73f05147ea20','["192.168.1.100:41641"]','{"OS":"linux"}','100.64.0.1','fd7a:115c:a1e0::1','node1','node1',1,'authkey',NULL,NULL,NULL,NULL,'2025-06-18 10:00:00','2025-06-18 10:00:00',NULL); -INSERT INTO nodes VALUES(2,'mkey:779766343bd0311dd043e61f4e5ab13b43dbd9fef3c243aad406aac43146f566','nodekey:ae80297e118d23f00e029c89c82c53cf575803c40e0dfab5bf3f34213b265731','discokey:591540881c8a783dcfeeb1dbe049ce9a9b74347b6a96c0f17452735cb1de6c2f','["192.168.1.101:41641"]','{"OS":"darwin"}','100.64.0.2','fd7a:115c:a1e0::2','node2','node2',1,'authkey',NULL,NULL,NULL,NULL,'2025-06-18 10:00:00','2025-06-18 
10:00:00',NULL); -INSERT INTO nodes VALUES(3,'mkey:233ecd117c36c1e5a635b1658fd54369fddf38b5312adf8aae38dfe6506fdf47','nodekey:2a53f1bbefae24a4724201379a05d32c84fc8c86fb2c856334a904ac53a3b827','discokey:acda4e99407eed3b807b81649998d69f93e9c28ce6e4dc1032686b45a70bca09','["192.168.1.102:41641"]','{"OS":"windows"}','100.64.0.3','fd7a:115c:a1e0::3','node3','node3',2,'authkey',NULL,NULL,NULL,NULL,'2025-06-18 10:00:00','2025-06-18 10:00:00',NULL); -CREATE TABLE `policies`( - `id` integer PRIMARY KEY AUTOINCREMENT, - `created_at` datetime, - `updated_at` datetime, - `deleted_at` datetime, - `data` text -); -DELETE FROM sqlite_sequence; -CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`); -CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`); -CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`); -CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`); -COMMIT; diff --git a/hscontrol/db/text_serialiser.go b/hscontrol/db/text_serialiser.go index 1652901f..6172e7e0 100644 --- a/hscontrol/db/text_serialiser.go +++ b/hscontrol/db/text_serialiser.go @@ -31,7 +31,7 @@ func decodingError(name string, err error) error { // have a type that implements encoding.TextUnmarshaler. type TextSerialiser struct{} -func (TextSerialiser) Scan(ctx context.Context, field *schema.Field, dst reflect.Value, dbValue interface{}) (err error) { +func (TextSerialiser) Scan(ctx context.Context, field *schema.Field, dst reflect.Value, dbValue any) error { fieldValue := reflect.New(field.FieldType) // If the field is a pointer, we need to dereference it to get the actual type @@ -77,10 +77,10 @@ func (TextSerialiser) Scan(ctx context.Context, field *schema.Field, dst reflect } } - return err + return nil } -func (TextSerialiser) Value(ctx context.Context, field *schema.Field, dst reflect.Value, fieldValue interface{}) (interface{}, error) { +func (TextSerialiser) Value(ctx context.Context, field *schema.Field, dst reflect.Value, fieldValue any) (any, error) { switch v := fieldValue.(type) { case encoding.TextMarshaler: // If the value is nil, we return nil, however, go nil values are not diff --git a/hscontrol/db/users.go b/hscontrol/db/users.go index 039933c7..6aff9ed1 100644 --- a/hscontrol/db/users.go +++ b/hscontrol/db/users.go @@ -58,12 +58,12 @@ func DestroyUser(tx *gorm.DB, uid types.UserID) error { return ErrUserStillHasNodes } - keys, err := ListPreAuthKeysByUser(tx, uid) + keys, err := ListPreAuthKeys(tx) if err != nil { return err } for _, key := range keys { - err = DestroyPreAuthKey(tx, key) + err = DestroyPreAuthKey(tx, key.ID) if err != nil { return err } @@ -189,33 +189,17 @@ func (hsdb *HSDatabase) GetUserByName(name string) (*types.User, error) { // ListNodesByUser gets all the nodes in a given user. func ListNodesByUser(tx *gorm.DB, uid types.UserID) (types.Nodes, error) { nodes := types.Nodes{} - if err := tx.Preload("AuthKey").Preload("AuthKey.User").Preload("User").Where(&types.Node{UserID: uint(uid)}).Find(&nodes).Error; err != nil { + + uidPtr := uint(uid) + + err := tx.Preload("AuthKey").Preload("AuthKey.User").Preload("User").Where(&types.Node{UserID: &uidPtr}).Find(&nodes).Error + if err != nil { return nil, err } return nodes, nil } -// AssignNodeToUser assigns a Node to a user. -// Note: Validation should be done in the state layer before calling this function. 
-func AssignNodeToUser(tx *gorm.DB, nodeID types.NodeID, uid types.UserID) error { - // Check if the user exists - var userExists bool - if err := tx.Model(&types.User{}).Select("count(*) > 0").Where("id = ?", uid).Find(&userExists).Error; err != nil { - return fmt.Errorf("failed to check if user exists: %w", err) - } - - if !userExists { - return ErrUserNotFound - } - - if err := tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("user_id", uid).Error; err != nil { - return fmt.Errorf("failed to assign node to user: %w", err) - } - - return nil -} - func (hsdb *HSDatabase) CreateUserForTest(name ...string) *types.User { if !testing.Testing() { panic("CreateUserForTest can only be called during tests") diff --git a/hscontrol/db/users_test.go b/hscontrol/db/users_test.go index 5b2f0c4b..a3fd49b3 100644 --- a/hscontrol/db/users_test.go +++ b/hscontrol/db/users_test.go @@ -1,138 +1,167 @@ package db import ( - "strings" + "testing" "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/util" - "gopkg.in/check.v1" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" "gorm.io/gorm" "tailscale.com/types/ptr" ) -func (s *Suite) TestCreateAndDestroyUser(c *check.C) { +func TestCreateAndDestroyUser(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) + user := db.CreateUserForTest("test") - c.Assert(user.Name, check.Equals, "test") + assert.Equal(t, "test", user.Name) users, err := db.ListUsers() - c.Assert(err, check.IsNil) - c.Assert(len(users), check.Equals, 1) + require.NoError(t, err) + assert.Len(t, users, 1) err = db.DestroyUser(types.UserID(user.ID)) - c.Assert(err, check.IsNil) + require.NoError(t, err) _, err = db.GetUserByID(types.UserID(user.ID)) - c.Assert(err, check.NotNil) + assert.Error(t, err) } -func (s *Suite) TestDestroyUserErrors(c *check.C) { - err := db.DestroyUser(9998) - c.Assert(err, check.Equals, ErrUserNotFound) +func TestDestroyUserErrors(t *testing.T) { + tests := []struct { + name string + test func(*testing.T, *HSDatabase) + }{ + { + name: "error_user_not_found", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() - user := db.CreateUserForTest("test") + err := db.DestroyUser(9998) + assert.ErrorIs(t, err, ErrUserNotFound) + }, + }, + { + name: "success_deletes_preauthkeys", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() - pak, err := db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil) - c.Assert(err, check.IsNil) + user := db.CreateUserForTest("test") - err = db.DestroyUser(types.UserID(user.ID)) - c.Assert(err, check.IsNil) + pak, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) + require.NoError(t, err) - result := db.DB.Preload("User").First(&pak, "key = ?", pak.Key) - // destroying a user also deletes all associated preauthkeys - c.Assert(result.Error, check.Equals, gorm.ErrRecordNotFound) + err = db.DestroyUser(types.UserID(user.ID)) + require.NoError(t, err) - user, err = db.CreateUser(types.User{Name: "test"}) - c.Assert(err, check.IsNil) + // Verify preauth key was deleted (need to search by prefix for new keys) + var foundPak types.PreAuthKey - pak, err = db.CreatePreAuthKey(types.UserID(user.ID), false, false, nil, nil) - c.Assert(err, check.IsNil) + result := db.DB.First(&foundPak, "id = ?", pak.ID) + assert.ErrorIs(t, result.Error, gorm.ErrRecordNotFound) + }, + }, + { + name: "error_user_has_nodes", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() - node := types.Node{ - ID: 0, - Hostname: "testnode", - UserID: 
user.ID, - RegisterMethod: util.RegisterMethodAuthKey, - AuthKeyID: ptr.To(pak.ID), + user, err := db.CreateUser(types.User{Name: "test"}) + require.NoError(t, err) + + pak, err := db.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) + require.NoError(t, err) + + node := types.Node{ + ID: 0, + Hostname: "testnode", + UserID: &user.ID, + RegisterMethod: util.RegisterMethodAuthKey, + AuthKeyID: ptr.To(pak.ID), + } + trx := db.DB.Save(&node) + require.NoError(t, trx.Error) + + err = db.DestroyUser(types.UserID(user.ID)) + assert.ErrorIs(t, err, ErrUserStillHasNodes) + }, + }, } - trx := db.DB.Save(&node) - c.Assert(trx.Error, check.IsNil) - err = db.DestroyUser(types.UserID(user.ID)) - c.Assert(err, check.Equals, ErrUserStillHasNodes) -} + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) -func (s *Suite) TestRenameUser(c *check.C) { - userTest := db.CreateUserForTest("test") - c.Assert(userTest.Name, check.Equals, "test") - - users, err := db.ListUsers() - c.Assert(err, check.IsNil) - c.Assert(len(users), check.Equals, 1) - - err = db.RenameUser(types.UserID(userTest.ID), "test-renamed") - c.Assert(err, check.IsNil) - - users, err = db.ListUsers(&types.User{Name: "test"}) - c.Assert(err, check.Equals, nil) - c.Assert(len(users), check.Equals, 0) - - users, err = db.ListUsers(&types.User{Name: "test-renamed"}) - c.Assert(err, check.IsNil) - c.Assert(len(users), check.Equals, 1) - - err = db.RenameUser(99988, "test") - c.Assert(err, check.Equals, ErrUserNotFound) - - userTest2 := db.CreateUserForTest("test2") - c.Assert(userTest2.Name, check.Equals, "test2") - - want := "UNIQUE constraint failed" - err = db.RenameUser(types.UserID(userTest2.ID), "test-renamed") - if err == nil || !strings.Contains(err.Error(), want) { - c.Fatalf("expected failure with unique constraint, want: %q got: %q", want, err) + tt.test(t, db) + }) } } -func (s *Suite) TestSetMachineUser(c *check.C) { - oldUser := db.CreateUserForTest("old") - newUser := db.CreateUserForTest("new") +func TestRenameUser(t *testing.T) { + tests := []struct { + name string + test func(*testing.T, *HSDatabase) + }{ + { + name: "success_rename", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() - pak, err := db.CreatePreAuthKey(types.UserID(oldUser.ID), false, false, nil, nil) - c.Assert(err, check.IsNil) + userTest := db.CreateUserForTest("test") + assert.Equal(t, "test", userTest.Name) - node := types.Node{ - ID: 12, - Hostname: "testnode", - UserID: oldUser.ID, - RegisterMethod: util.RegisterMethodAuthKey, - AuthKeyID: ptr.To(pak.ID), + users, err := db.ListUsers() + require.NoError(t, err) + assert.Len(t, users, 1) + + err = db.RenameUser(types.UserID(userTest.ID), "test-renamed") + require.NoError(t, err) + + users, err = db.ListUsers(&types.User{Name: "test"}) + require.NoError(t, err) + assert.Empty(t, users) + + users, err = db.ListUsers(&types.User{Name: "test-renamed"}) + require.NoError(t, err) + assert.Len(t, users, 1) + }, + }, + { + name: "error_user_not_found", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() + + err := db.RenameUser(99988, "test") + assert.ErrorIs(t, err, ErrUserNotFound) + }, + }, + { + name: "error_duplicate_name", + test: func(t *testing.T, db *HSDatabase) { + t.Helper() + + userTest := db.CreateUserForTest("test") + userTest2 := db.CreateUserForTest("test2") + + assert.Equal(t, "test", userTest.Name) + assert.Equal(t, "test2", userTest2.Name) + + err := db.RenameUser(types.UserID(userTest2.ID), "test") + 
require.Error(t, err) + assert.Contains(t, err.Error(), "UNIQUE constraint failed") + }, + }, } - trx := db.DB.Save(&node) - c.Assert(trx.Error, check.IsNil) - c.Assert(node.UserID, check.Equals, oldUser.ID) - err = db.Write(func(tx *gorm.DB) error { - return AssignNodeToUser(tx, 12, types.UserID(newUser.ID)) - }) - c.Assert(err, check.IsNil) - // Reload node from database to see updated values - updatedNode, err := db.GetNodeByID(12) - c.Assert(err, check.IsNil) - c.Assert(updatedNode.UserID, check.Equals, newUser.ID) - c.Assert(updatedNode.User.Name, check.Equals, newUser.Name) + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + db, err := newSQLiteTestDB() + require.NoError(t, err) - err = db.Write(func(tx *gorm.DB) error { - return AssignNodeToUser(tx, 12, 9584849) - }) - c.Assert(err, check.Equals, ErrUserNotFound) - - err = db.Write(func(tx *gorm.DB) error { - return AssignNodeToUser(tx, 12, types.UserID(newUser.ID)) - }) - c.Assert(err, check.IsNil) - // Reload node from database again to see updated values - finalNode, err := db.GetNodeByID(12) - c.Assert(err, check.IsNil) - c.Assert(finalNode.UserID, check.Equals, newUser.ID) - c.Assert(finalNode.User.Name, check.Equals, newUser.Name) + tt.test(t, db) + }) + } } diff --git a/hscontrol/derp/server/derp_server.go b/hscontrol/derp/server/derp_server.go index da261304..474306e5 100644 --- a/hscontrol/derp/server/derp_server.go +++ b/hscontrol/derp/server/derp_server.go @@ -20,6 +20,7 @@ import ( "github.com/juanfont/headscale/hscontrol/util" "github.com/rs/zerolog/log" "tailscale.com/derp" + "tailscale.com/derp/derpserver" "tailscale.com/envknob" "tailscale.com/net/stun" "tailscale.com/net/wsconn" @@ -45,7 +46,7 @@ type DERPServer struct { serverURL string key key.NodePrivate cfg *types.DERPConfig - tailscaleDERP *derp.Server + tailscaleDERP *derpserver.Server } func NewDERPServer( @@ -54,7 +55,7 @@ func NewDERPServer( cfg *types.DERPConfig, ) (*DERPServer, error) { log.Trace().Caller().Msg("Creating new embedded DERP server") - server := derp.NewServer(derpKey, util.TSLogfWrapper()) // nolint // zerolinter complains + server := derpserver.New(derpKey, util.TSLogfWrapper()) // nolint // zerolinter complains if cfg.ServerVerifyClients { server.SetVerifyClientURL(DerpVerifyScheme + "://verify") @@ -364,7 +365,13 @@ func serverSTUNListener(ctx context.Context, packetConn *net.UDPConn) { return } log.Error().Caller().Err(err).Msgf("STUN ReadFrom") - time.Sleep(time.Second) + + // Rate limit error logging - wait before retrying, but respect context cancellation + select { + case <-ctx.Done(): + return + case <-time.After(time.Second): + } continue } diff --git a/hscontrol/grpcv1.go b/hscontrol/grpcv1.go index 6d5189b8..a35a73af 100644 --- a/hscontrol/grpcv1.go +++ b/hscontrol/grpcv1.go @@ -16,7 +16,6 @@ import ( "time" "github.com/rs/zerolog/log" - "github.com/samber/lo" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" "google.golang.org/protobuf/types/known/timestamppb" @@ -29,7 +28,6 @@ import ( v1 "github.com/juanfont/headscale/gen/go/headscale/v1" "github.com/juanfont/headscale/hscontrol/state" "github.com/juanfont/headscale/hscontrol/types" - "github.com/juanfont/headscale/hscontrol/types/change" "github.com/juanfont/headscale/hscontrol/util" ) @@ -59,14 +57,9 @@ func (api headscaleV1APIServer) CreateUser( return nil, status.Errorf(codes.Internal, "failed to create user: %s", err) } - c := change.UserAdded(types.UserID(user.ID)) - - // TODO(kradalby): Both of these might be policy changes, find a better 
way to merge. - if !policyChanged.Empty() { - c.Change = change.Policy - } - - api.h.Change(c) + // CreateUser returns a policy change response if the user creation affected policy. + // This triggers a full policy re-evaluation for all connected nodes. + api.h.Change(policyChanged) return &v1.CreateUserResponse{User: user.Proto()}, nil } @@ -105,12 +98,13 @@ func (api headscaleV1APIServer) DeleteUser( return nil, err } - err = api.h.state.DeleteUser(types.UserID(user.ID)) + policyChanged, err := api.h.state.DeleteUser(types.UserID(user.ID)) if err != nil { return nil, err } - api.h.Change(change.UserRemoved(types.UserID(user.ID))) + // Use the change returned from DeleteUser which includes proper policy updates + api.h.Change(policyChanged) return &v1.DeleteUserResponse{}, nil } @@ -166,13 +160,17 @@ func (api headscaleV1APIServer) CreatePreAuthKey( } } - user, err := api.h.state.GetUserByID(types.UserID(request.GetUser())) - if err != nil { - return nil, err + var userID *types.UserID + if request.GetUser() != 0 { + user, err := api.h.state.GetUserByID(types.UserID(request.GetUser())) + if err != nil { + return nil, err + } + userID = user.TypedID() } preAuthKey, err := api.h.state.CreatePreAuthKey( - types.UserID(user.ID), + userID, request.GetReusable(), request.GetEphemeral(), &expiration, @@ -189,16 +187,7 @@ func (api headscaleV1APIServer) ExpirePreAuthKey( ctx context.Context, request *v1.ExpirePreAuthKeyRequest, ) (*v1.ExpirePreAuthKeyResponse, error) { - preAuthKey, err := api.h.state.GetPreAuthKey(request.Key) - if err != nil { - return nil, err - } - - if uint64(preAuthKey.User.ID) != request.GetUser() { - return nil, fmt.Errorf("preauth key does not belong to user") - } - - err = api.h.state.ExpirePreAuthKey(preAuthKey) + err := api.h.state.ExpirePreAuthKey(request.GetId()) if err != nil { return nil, err } @@ -206,16 +195,23 @@ func (api headscaleV1APIServer) ExpirePreAuthKey( return &v1.ExpirePreAuthKeyResponse{}, nil } -func (api headscaleV1APIServer) ListPreAuthKeys( +func (api headscaleV1APIServer) DeletePreAuthKey( ctx context.Context, - request *v1.ListPreAuthKeysRequest, -) (*v1.ListPreAuthKeysResponse, error) { - user, err := api.h.state.GetUserByID(types.UserID(request.GetUser())) + request *v1.DeletePreAuthKeyRequest, +) (*v1.DeletePreAuthKeyResponse, error) { + err := api.h.state.DeletePreAuthKey(request.GetId()) if err != nil { return nil, err } - preAuthKeys, err := api.h.state.ListPreAuthKeys(types.UserID(user.ID)) + return &v1.DeletePreAuthKeyResponse{}, nil +} + +func (api headscaleV1APIServer) ListPreAuthKeys( + ctx context.Context, + request *v1.ListPreAuthKeysRequest, +) (*v1.ListPreAuthKeysResponse, error) { + preAuthKeys, err := api.h.state.ListPreAuthKeys() if err != nil { return nil, err } @@ -236,10 +232,18 @@ func (api headscaleV1APIServer) RegisterNode( ctx context.Context, request *v1.RegisterNodeRequest, ) (*v1.RegisterNodeResponse, error) { + // Generate ephemeral registration key for tracking this registration flow in logs + registrationKey, err := util.GenerateRegistrationKey() + if err != nil { + log.Warn().Err(err).Msg("Failed to generate registration key") + registrationKey = "" // Continue without key if generation fails + } + log.Trace(). Caller(). Str("user", request.GetUser()). Str("registration_id", request.GetKey()). + Str("registration_key", registrationKey). 
Msg("Registering node") registrationId, err := types.RegistrationIDFromString(request.GetKey()) @@ -259,9 +263,19 @@ func (api headscaleV1APIServer) RegisterNode( util.RegisterMethodCLI, ) if err != nil { + log.Error(). + Str("registration_key", registrationKey). + Err(err). + Msg("Failed to register node") return nil, err } + log.Info(). + Str("registration_key", registrationKey). + Str("node_id", fmt.Sprintf("%d", node.ID())). + Str("hostname", node.Hostname()). + Msg("Node registered successfully") + // This is a bit of a back and forth, but we have a bit of a chicken and egg // dependency here. // Because the way the policy manager works, we need to have the node @@ -273,13 +287,13 @@ func (api headscaleV1APIServer) RegisterNode( // ensure we send an update. // This works, but might be another good candidate for doing some sort of // eventbus. - _ = api.h.state.AutoApproveRoutes(node) - _, _, err = api.h.state.SaveNode(node) + routeChange, err := api.h.state.AutoApproveRoutes(node) if err != nil { - return nil, fmt.Errorf("saving auto approved routes to node: %w", err) + return nil, fmt.Errorf("auto approving routes: %w", err) } - api.h.Change(nodeChange) + // Send both changes. Empty changes are ignored by Change(). + api.h.Change(nodeChange, routeChange) return &v1.RegisterNodeResponse{Node: node.Proto()}, nil } @@ -302,6 +316,17 @@ func (api headscaleV1APIServer) SetTags( ctx context.Context, request *v1.SetTagsRequest, ) (*v1.SetTagsResponse, error) { + // Validate tags not empty - tagged nodes must have at least one tag + if len(request.GetTags()) == 0 { + return &v1.SetTagsResponse{ + Node: nil, + }, status.Error( + codes.InvalidArgument, + "cannot remove all tags from a node - tagged nodes must have at least one tag", + ) + } + + // Validate tag format for _, tag := range request.GetTags() { err := validateTag(tag) if err != nil { @@ -309,6 +334,16 @@ func (api headscaleV1APIServer) SetTags( } } + // User XOR Tags: nodes are either tagged or user-owned, never both. + // Setting tags on a user-owned node converts it to a tagged node. + // Once tagged, a node cannot be converted back to user-owned. 
+ _, found := api.h.state.GetNodeByID(types.NodeID(request.GetNodeId())) + if !found { + return &v1.SetTagsResponse{ + Node: nil, + }, status.Error(codes.NotFound, "node not found") + } + node, nodeChange, err := api.h.state.SetNodeTags(types.NodeID(request.GetNodeId()), request.GetTags()) if err != nil { return &v1.SetTagsResponse{ @@ -490,13 +525,11 @@ func nodesToProto(state *state.State, nodes views.Slice[types.NodeView]) []*v1.N for index, node := range nodes.All() { resp := node.Proto() - var tags []string - for _, tag := range node.RequestTags() { - if state.NodeCanHaveTag(node, tag) { - tags = append(tags, tag) - } + // Tags-as-identity: tagged nodes show as TaggedDevices user in API responses + // (UserID may be set internally for "created by" tracking) + if node.IsTagged() { + resp.User = types.TaggedDevices.Proto() } - resp.ValidTags = lo.Uniq(append(tags, node.ForcedTags().AsSlice()...)) resp.SubnetRoutes = util.PrefixesToString(append(state.GetNodePrimaryRoutes(node.ID()), node.ExitRoutes()...)) response[index] = resp @@ -509,22 +542,6 @@ func nodesToProto(state *state.State, nodes views.Slice[types.NodeView]) []*v1.N return response } -func (api headscaleV1APIServer) MoveNode( - ctx context.Context, - request *v1.MoveNodeRequest, -) (*v1.MoveNodeResponse, error) { - node, nodeChange, err := api.h.state.AssignNodeToUser(types.NodeID(request.GetNodeId()), types.UserID(request.GetUser())) - if err != nil { - return nil, err - } - - // TODO(kradalby): Ensure the policy is also sent - // TODO(kradalby): ensure that both the selfupdate and peer updates are sent - api.h.Change(nodeChange) - - return &v1.MoveNodeResponse{Node: node.Proto()}, nil -} - func (api headscaleV1APIServer) BackfillNodeIPs( ctx context.Context, request *v1.BackfillNodeIPsRequest, @@ -560,14 +577,35 @@ func (api headscaleV1APIServer) CreateApiKey( return &v1.CreateApiKeyResponse{ApiKey: apiKey}, nil } +// apiKeyIdentifier is implemented by requests that identify an API key. +type apiKeyIdentifier interface { + GetId() uint64 + GetPrefix() string +} + +// getAPIKey retrieves an API key by ID or prefix from the request. +// Returns InvalidArgument if neither or both are provided. 
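For quick reference, the request shapes this ID-or-prefix lookup accepts and rejects (a sketch; the prefix values are made up):

```go
package main

import (
	"fmt"

	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
)

func main() {
	requests := map[string]*v1.ExpireApiKeyRequest{
		"by ID (accepted)":          {Id: 42},
		"by prefix (accepted)":      {Prefix: "GXtZzA"}, // illustrative prefix value
		"neither (InvalidArgument)": {},
		"both (InvalidArgument)":    {Id: 42, Prefix: "GXtZzA"},
	}

	for name, req := range requests {
		fmt.Printf("%s: id=%d prefix=%q\n", name, req.GetId(), req.GetPrefix())
	}
}
```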
+func (api headscaleV1APIServer) getAPIKey(req apiKeyIdentifier) (*types.APIKey, error) { + hasID := req.GetId() != 0 + hasPrefix := req.GetPrefix() != "" + + switch { + case hasID && hasPrefix: + return nil, status.Error(codes.InvalidArgument, "provide either id or prefix, not both") + case hasID: + return api.h.state.GetAPIKeyByID(req.GetId()) + case hasPrefix: + return api.h.state.GetAPIKey(req.GetPrefix()) + default: + return nil, status.Error(codes.InvalidArgument, "must provide id or prefix") + } +} + func (api headscaleV1APIServer) ExpireApiKey( ctx context.Context, request *v1.ExpireApiKeyRequest, ) (*v1.ExpireApiKeyResponse, error) { - var apiKey *types.APIKey - var err error - - apiKey, err = api.h.state.GetAPIKey(request.Prefix) + apiKey, err := api.getAPIKey(request) if err != nil { return nil, err } @@ -605,12 +643,7 @@ func (api headscaleV1APIServer) DeleteApiKey( ctx context.Context, request *v1.DeleteApiKeyRequest, ) (*v1.DeleteApiKeyResponse, error) { - var ( - apiKey *types.APIKey - err error - ) - - apiKey, err = api.h.state.GetAPIKey(request.Prefix) + apiKey, err := api.getAPIKey(request) if err != nil { return nil, err } @@ -757,7 +790,7 @@ func (api headscaleV1APIServer) DebugCreateNode( NodeKey: key.NewNode().Public(), MachineKey: key.NewMachine().Public(), Hostname: request.GetName(), - User: *user, + User: user, Expiry: &time.Time{}, LastSeen: &time.Time{}, diff --git a/hscontrol/grpcv1_test.go b/hscontrol/grpcv1_test.go index 1d87bfe0..4cf5b7d4 100644 --- a/hscontrol/grpcv1_test.go +++ b/hscontrol/grpcv1_test.go @@ -1,6 +1,17 @@ package hscontrol -import "testing" +import ( + "context" + "testing" + + v1 "github.com/juanfont/headscale/gen/go/headscale/v1" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "google.golang.org/grpc/codes" + "google.golang.org/grpc/status" + "tailscale.com/tailcfg" + "tailscale.com/types/key" +) func Test_validateTag(t *testing.T) { type args struct { @@ -40,3 +51,418 @@ func Test_validateTag(t *testing.T) { }) } } + +// TestSetTags_Conversion tests the conversion of user-owned nodes to tagged nodes. +// The tags-as-identity model allows one-way conversion from user-owned to tagged. +// Tag authorization is checked via the policy manager - unauthorized tags are rejected. 
+func TestSetTags_Conversion(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + + // Create test user and nodes + user := app.state.CreateUserForTest("test-user") + + // Create a pre-auth key WITHOUT tags for user-owned node + pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil) + require.NoError(t, err) + + machineKey1 := key.NewMachine() + nodeKey1 := key.NewNode() + + // Register a user-owned node (via untagged PreAuthKey) + userOwnedReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey1.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "user-owned-node", + }, + } + _, err = app.handleRegisterWithAuthKey(userOwnedReq, machineKey1.Public()) + require.NoError(t, err) + + // Get the created node + userOwnedNode, found := app.state.GetNodeByNodeKey(nodeKey1.Public()) + require.True(t, found) + + // Create API server instance + apiServer := newHeadscaleV1APIServer(app) + + tests := []struct { + name string + nodeID uint64 + tags []string + wantErr bool + wantCode codes.Code + wantErrMessage string + }{ + { + // Conversion is allowed, but tag authorization fails without tagOwners + name: "reject unauthorized tags on user-owned node", + nodeID: uint64(userOwnedNode.ID()), + tags: []string{"tag:server"}, + wantErr: true, + wantCode: codes.InvalidArgument, + wantErrMessage: "requested tags", + }, + { + // Conversion is allowed, but tag authorization fails without tagOwners + name: "reject multiple unauthorized tags", + nodeID: uint64(userOwnedNode.ID()), + tags: []string{"tag:server", "tag:database"}, + wantErr: true, + wantCode: codes.InvalidArgument, + wantErrMessage: "requested tags", + }, + { + name: "reject non-existent node", + nodeID: 99999, + tags: []string{"tag:server"}, + wantErr: true, + wantCode: codes.NotFound, + wantErrMessage: "node not found", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + resp, err := apiServer.SetTags(context.Background(), &v1.SetTagsRequest{ + NodeId: tt.nodeID, + Tags: tt.tags, + }) + + if tt.wantErr { + require.Error(t, err) + st, ok := status.FromError(err) + require.True(t, ok, "error should be a gRPC status error") + assert.Equal(t, tt.wantCode, st.Code()) + assert.Contains(t, st.Message(), tt.wantErrMessage) + assert.Nil(t, resp.GetNode()) + } else { + require.NoError(t, err) + assert.NotNil(t, resp) + assert.NotNil(t, resp.GetNode()) + } + }) + } +} + +// TestSetTags_TaggedNode tests that SetTags correctly identifies tagged nodes +// and doesn't reject them with the "user-owned nodes" error. +// Note: This test doesn't validate ACL tag authorization - that's tested elsewhere. 
+func TestSetTags_TaggedNode(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + + // Create test user and tagged pre-auth key + user := app.state.CreateUserForTest("test-user") + pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, []string{"tag:initial"}) + require.NoError(t, err) + + machineKey := key.NewMachine() + nodeKey := key.NewNode() + + // Register a tagged node (via tagged PreAuthKey) + taggedReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-node", + }, + } + _, err = app.handleRegisterWithAuthKey(taggedReq, machineKey.Public()) + require.NoError(t, err) + + // Get the created node + taggedNode, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + assert.True(t, taggedNode.IsTagged(), "Node should be tagged") + assert.True(t, taggedNode.UserID().Valid(), "Tagged node should have UserID for tracking") + + // Create API server instance + apiServer := newHeadscaleV1APIServer(app) + + // Test: SetTags should NOT reject tagged nodes with "user-owned" error + // (Even though they have UserID set, IsTagged() identifies them correctly) + resp, err := apiServer.SetTags(context.Background(), &v1.SetTagsRequest{ + NodeId: uint64(taggedNode.ID()), + Tags: []string{"tag:initial"}, // Keep existing tag to avoid ACL validation issues + }) + + // The call should NOT fail with "cannot set tags on user-owned nodes" + if err != nil { + st, ok := status.FromError(err) + require.True(t, ok) + // If error is about unauthorized tags, that's fine - ACL validation is working + // If error is about user-owned nodes, that's the bug we're testing for + assert.NotContains(t, st.Message(), "user-owned nodes", "Should not reject tagged nodes as user-owned") + } else { + // Success is also fine + assert.NotNil(t, resp) + } +} + +// TestSetTags_CannotRemoveAllTags tests that SetTags rejects attempts to remove +// all tags from a tagged node, enforcing Tailscale's requirement that tagged +// nodes must have at least one tag. 
+func TestSetTags_CannotRemoveAllTags(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + + // Create test user and tagged pre-auth key + user := app.state.CreateUserForTest("test-user") + pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, []string{"tag:server"}) + require.NoError(t, err) + + machineKey := key.NewMachine() + nodeKey := key.NewNode() + + // Register a tagged node + taggedReq := tailcfg.RegisterRequest{ + Auth: &tailcfg.RegisterResponseAuth{ + AuthKey: pak.Key, + }, + NodeKey: nodeKey.Public(), + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "tagged-node", + }, + } + _, err = app.handleRegisterWithAuthKey(taggedReq, machineKey.Public()) + require.NoError(t, err) + + // Get the created node + taggedNode, found := app.state.GetNodeByNodeKey(nodeKey.Public()) + require.True(t, found) + assert.True(t, taggedNode.IsTagged()) + + // Create API server instance + apiServer := newHeadscaleV1APIServer(app) + + // Attempt to remove all tags (empty array) + resp, err := apiServer.SetTags(context.Background(), &v1.SetTagsRequest{ + NodeId: uint64(taggedNode.ID()), + Tags: []string{}, // Empty - attempting to remove all tags + }) + + // Should fail with InvalidArgument error + require.Error(t, err) + st, ok := status.FromError(err) + require.True(t, ok, "error should be a gRPC status error") + assert.Equal(t, codes.InvalidArgument, st.Code()) + assert.Contains(t, st.Message(), "cannot remove all tags") + assert.Nil(t, resp.GetNode()) +} + +// TestDeleteUser_ReturnsProperChangeSignal tests issue #2967 fix: +// When a user is deleted, the state should return a non-empty change signal +// to ensure policy manager is updated and clients are notified immediately. +func TestDeleteUser_ReturnsProperChangeSignal(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + + // Create a user + user := app.state.CreateUserForTest("test-user-to-delete") + require.NotNil(t, user) + + // Delete the user and verify a non-empty change is returned + // Issue #2967: Without the fix, DeleteUser returned an empty change, + // causing stale policy state until another user operation triggered an update. + changeSignal, err := app.state.DeleteUser(*user.TypedID()) + require.NoError(t, err, "DeleteUser should succeed") + assert.False(t, changeSignal.IsEmpty(), "DeleteUser should return a non-empty change signal (issue #2967)") +} + +// TestExpireApiKey_ByID tests that API keys can be expired by ID. +func TestExpireApiKey_ByID(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + apiServer := newHeadscaleV1APIServer(app) + + // Create an API key + createResp, err := apiServer.CreateApiKey(context.Background(), &v1.CreateApiKeyRequest{}) + require.NoError(t, err) + require.NotEmpty(t, createResp.GetApiKey()) + + // List keys to get the ID + listResp, err := apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{}) + require.NoError(t, err) + require.Len(t, listResp.GetApiKeys(), 1) + + keyID := listResp.GetApiKeys()[0].GetId() + + // Expire by ID + _, err = apiServer.ExpireApiKey(context.Background(), &v1.ExpireApiKeyRequest{ + Id: keyID, + }) + require.NoError(t, err) + + // Verify key is expired (expiration is set to now or in the past) + listResp, err = apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{}) + require.NoError(t, err) + require.Len(t, listResp.GetApiKeys(), 1) + assert.NotNil(t, listResp.GetApiKeys()[0].GetExpiration(), "expiration should be set") +} + +// TestExpireApiKey_ByPrefix tests that API keys can still be expired by prefix. 
+func TestExpireApiKey_ByPrefix(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + apiServer := newHeadscaleV1APIServer(app) + + // Create an API key + createResp, err := apiServer.CreateApiKey(context.Background(), &v1.CreateApiKeyRequest{}) + require.NoError(t, err) + require.NotEmpty(t, createResp.GetApiKey()) + + // List keys to get the prefix + listResp, err := apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{}) + require.NoError(t, err) + require.Len(t, listResp.GetApiKeys(), 1) + + keyPrefix := listResp.GetApiKeys()[0].GetPrefix() + + // Expire by prefix + _, err = apiServer.ExpireApiKey(context.Background(), &v1.ExpireApiKeyRequest{ + Prefix: keyPrefix, + }) + require.NoError(t, err) +} + +// TestDeleteApiKey_ByID tests that API keys can be deleted by ID. +func TestDeleteApiKey_ByID(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + apiServer := newHeadscaleV1APIServer(app) + + // Create an API key + createResp, err := apiServer.CreateApiKey(context.Background(), &v1.CreateApiKeyRequest{}) + require.NoError(t, err) + require.NotEmpty(t, createResp.GetApiKey()) + + // List keys to get the ID + listResp, err := apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{}) + require.NoError(t, err) + require.Len(t, listResp.GetApiKeys(), 1) + + keyID := listResp.GetApiKeys()[0].GetId() + + // Delete by ID + _, err = apiServer.DeleteApiKey(context.Background(), &v1.DeleteApiKeyRequest{ + Id: keyID, + }) + require.NoError(t, err) + + // Verify key is deleted + listResp, err = apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{}) + require.NoError(t, err) + assert.Empty(t, listResp.GetApiKeys()) +} + +// TestDeleteApiKey_ByPrefix tests that API keys can still be deleted by prefix. +func TestDeleteApiKey_ByPrefix(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + apiServer := newHeadscaleV1APIServer(app) + + // Create an API key + createResp, err := apiServer.CreateApiKey(context.Background(), &v1.CreateApiKeyRequest{}) + require.NoError(t, err) + require.NotEmpty(t, createResp.GetApiKey()) + + // List keys to get the prefix + listResp, err := apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{}) + require.NoError(t, err) + require.Len(t, listResp.GetApiKeys(), 1) + + keyPrefix := listResp.GetApiKeys()[0].GetPrefix() + + // Delete by prefix + _, err = apiServer.DeleteApiKey(context.Background(), &v1.DeleteApiKeyRequest{ + Prefix: keyPrefix, + }) + require.NoError(t, err) + + // Verify key is deleted + listResp, err = apiServer.ListApiKeys(context.Background(), &v1.ListApiKeysRequest{}) + require.NoError(t, err) + assert.Empty(t, listResp.GetApiKeys()) +} + +// TestExpireApiKey_NoIdentifier tests that an error is returned when neither ID nor prefix is provided. +func TestExpireApiKey_NoIdentifier(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + apiServer := newHeadscaleV1APIServer(app) + + _, err := apiServer.ExpireApiKey(context.Background(), &v1.ExpireApiKeyRequest{}) + require.Error(t, err) + st, ok := status.FromError(err) + require.True(t, ok, "error should be a gRPC status error") + assert.Equal(t, codes.InvalidArgument, st.Code()) + assert.Contains(t, st.Message(), "must provide id or prefix") +} + +// TestDeleteApiKey_NoIdentifier tests that an error is returned when neither ID nor prefix is provided. 
+func TestDeleteApiKey_NoIdentifier(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + apiServer := newHeadscaleV1APIServer(app) + + _, err := apiServer.DeleteApiKey(context.Background(), &v1.DeleteApiKeyRequest{}) + require.Error(t, err) + st, ok := status.FromError(err) + require.True(t, ok, "error should be a gRPC status error") + assert.Equal(t, codes.InvalidArgument, st.Code()) + assert.Contains(t, st.Message(), "must provide id or prefix") +} + +// TestExpireApiKey_BothIdentifiers tests that an error is returned when both ID and prefix are provided. +func TestExpireApiKey_BothIdentifiers(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + apiServer := newHeadscaleV1APIServer(app) + + _, err := apiServer.ExpireApiKey(context.Background(), &v1.ExpireApiKeyRequest{ + Id: 1, + Prefix: "test", + }) + require.Error(t, err) + st, ok := status.FromError(err) + require.True(t, ok, "error should be a gRPC status error") + assert.Equal(t, codes.InvalidArgument, st.Code()) + assert.Contains(t, st.Message(), "provide either id or prefix, not both") +} + +// TestDeleteApiKey_BothIdentifiers tests that an error is returned when both ID and prefix are provided. +func TestDeleteApiKey_BothIdentifiers(t *testing.T) { + t.Parallel() + + app := createTestApp(t) + apiServer := newHeadscaleV1APIServer(app) + + _, err := apiServer.DeleteApiKey(context.Background(), &v1.DeleteApiKeyRequest{ + Id: 1, + Prefix: "test", + }) + require.Error(t, err) + st, ok := status.FromError(err) + require.True(t, ok, "error should be a gRPC status error") + assert.Equal(t, codes.InvalidArgument, st.Code()) + assert.Contains(t, st.Message(), "provide either id or prefix, not both") +} diff --git a/hscontrol/handlers.go b/hscontrol/handlers.go index 0cc5bd36..dc693dae 100644 --- a/hscontrol/handlers.go +++ b/hscontrol/handlers.go @@ -1,6 +1,7 @@ package hscontrol import ( + "bytes" "encoding/json" "errors" "fmt" @@ -8,9 +9,10 @@ import ( "net/http" "strconv" "strings" + "time" - "github.com/chasefleming/elem-go/styles" "github.com/gorilla/mux" + "github.com/juanfont/headscale/hscontrol/assets" "github.com/juanfont/headscale/hscontrol/templates" "github.com/juanfont/headscale/hscontrol/types" "github.com/rs/zerolog/log" @@ -98,6 +100,7 @@ func (h *Headscale) handleVerifyRequest( // Check if any node has the requested NodeKey var nodeKeyFound bool + for _, node := range nodes.All() { if node.NodeKey() == derpAdmitClientRequest.NodePublic { nodeKeyFound = true @@ -128,6 +131,7 @@ func (h *Headscale) VerifyHandler( httpError(writer, err) return } + writer.Header().Set("Content-Type", "application/json") } @@ -149,6 +153,7 @@ func (h *Headscale) KeyHandler( resp := tailcfg.OverTLSPublicKeyResponse{ PublicKey: h.noisePrivateKey.Public(), } + writer.Header().Set("Content-Type", "application/json") json.NewEncoder(writer).Encode(resp) @@ -171,13 +176,14 @@ func (h *Headscale) HealthHandler( if err != nil { writer.WriteHeader(http.StatusInternalServerError) + res.Status = "fail" } json.NewEncoder(writer).Encode(res) } - - if err := h.state.PingDB(req.Context()); err != nil { + err := h.state.PingDB(req.Context()) + if err != nil { respond(err) return @@ -192,6 +198,7 @@ func (h *Headscale) RobotsHandler( ) { writer.Header().Set("Content-Type", "text/plain") writer.WriteHeader(http.StatusOK) + _, err := writer.Write([]byte("User-agent: *\nDisallow: /")) if err != nil { log.Error(). 
@@ -211,7 +218,8 @@ func (h *Headscale) VersionHandler( writer.WriteHeader(http.StatusOK) versionInfo := types.GetVersionInfo() - if err := json.NewEncoder(writer).Encode(versionInfo); err != nil { + err := json.NewEncoder(writer).Encode(versionInfo) + if err != nil { log.Error(). Caller(). Err(err). @@ -219,13 +227,6 @@ func (h *Headscale) VersionHandler( } } -var codeStyleRegisterWebAPI = styles.Props{ - styles.Display: "block", - styles.Padding: "20px", - styles.Border: "1px solid #bbb", - styles.BackgroundColor: "#eee", -} - type AuthProviderWeb struct { serverURL string } @@ -268,3 +269,22 @@ func (a *AuthProviderWeb) RegisterHandler( writer.WriteHeader(http.StatusOK) writer.Write([]byte(templates.RegisterWeb(registrationId).Render())) } + +func FaviconHandler(writer http.ResponseWriter, req *http.Request) { + writer.Header().Set("Content-Type", "image/png") + http.ServeContent(writer, req, "favicon.ico", time.Unix(0, 0), bytes.NewReader(assets.Favicon)) +} + +// BlankHandler returns a blank page with favicon linked. +func BlankHandler(writer http.ResponseWriter, res *http.Request) { + writer.Header().Set("Content-Type", "text/html; charset=utf-8") + writer.WriteHeader(http.StatusOK) + + _, err := writer.Write([]byte(templates.BlankPage().Render())) + if err != nil { + log.Error(). + Caller(). + Err(err). + Msg("Failed to write HTTP response") + } +} diff --git a/hscontrol/mapper/batcher.go b/hscontrol/mapper/batcher.go index b56bca08..0a1e30d0 100644 --- a/hscontrol/mapper/batcher.go +++ b/hscontrol/mapper/batcher.go @@ -8,12 +8,19 @@ import ( "github.com/juanfont/headscale/hscontrol/state" "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/types/change" + "github.com/prometheus/client_golang/prometheus" + "github.com/prometheus/client_golang/prometheus/promauto" "github.com/puzpuzpuz/xsync/v4" "github.com/rs/zerolog/log" "tailscale.com/tailcfg" - "tailscale.com/types/ptr" ) +var mapResponseGenerated = promauto.NewCounterVec(prometheus.CounterOpts{ + Namespace: "headscale", + Name: "mapresponse_generated_total", + Help: "total count of mapresponses generated by response type", +}, []string{"response_type"}) + type batcherFunc func(cfg *types.Config, state *state.State) Batcher // Batcher defines the common interface for all batcher implementations. @@ -24,8 +31,8 @@ type Batcher interface { RemoveNode(id types.NodeID, c chan<- *tailcfg.MapResponse) bool IsConnected(id types.NodeID) bool ConnectedMap() *xsync.Map[types.NodeID, bool] - AddWork(c ...change.ChangeSet) - MapResponseFromChange(id types.NodeID, c change.ChangeSet) (*tailcfg.MapResponse, error) + AddWork(r ...change.Change) + MapResponseFromChange(id types.NodeID, r change.Change) (*tailcfg.MapResponse, error) DebugMapResponses() (map[types.NodeID][]tailcfg.MapResponse, error) } @@ -39,7 +46,7 @@ func NewBatcher(batchTime time.Duration, workers int, mapper *mapper) *LockFreeB workCh: make(chan work, workers*200), nodes: xsync.NewMap[types.NodeID, *multiChannelNodeConn](), connected: xsync.NewMap[types.NodeID, *time.Time](), - pendingChanges: xsync.NewMap[types.NodeID, []change.ChangeSet](), + pendingChanges: xsync.NewMap[types.NodeID, []change.Change](), } } @@ -57,15 +64,21 @@ type nodeConnection interface { nodeID() types.NodeID version() tailcfg.CapabilityVersion send(data *tailcfg.MapResponse) error + // computePeerDiff returns peers that were previously sent but are no longer in the current list. 
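Earlier in this file the batcher registers a per-response-type counter (`mapresponse_generated_total`); a hedged sketch of reading it from a unit test with `prometheus/testutil` — the "full" label value is an assumption about what `Change.Type()` reports:

```go
package mapper

import (
	"testing"

	"github.com/prometheus/client_golang/prometheus/testutil"
)

// TestMapResponseCounterSketch shows how the per-type counter can be read in
// tests. The "full" label value is illustrative; real values come from Change.Type().
func TestMapResponseCounterSketch(t *testing.T) {
	before := testutil.ToFloat64(mapResponseGenerated.WithLabelValues("full"))

	// ... exercise the batcher so it generates at least one full map response ...

	after := testutil.ToFloat64(mapResponseGenerated.WithLabelValues("full"))
	if after == before {
		t.Logf("no full map responses generated in this sketch (before=%v, after=%v)", before, after)
	}
}
```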
+ computePeerDiff(currentPeers []tailcfg.NodeID) (removed []tailcfg.NodeID) + // updateSentPeers updates the tracking of which peers have been sent to this node. + updateSentPeers(resp *tailcfg.MapResponse) } -// generateMapResponse generates a [tailcfg.MapResponse] for the given NodeID that is based on the provided [change.ChangeSet]. -func generateMapResponse(nodeID types.NodeID, version tailcfg.CapabilityVersion, mapper *mapper, c change.ChangeSet) (*tailcfg.MapResponse, error) { - if c.Empty() { - return nil, nil +// generateMapResponse generates a [tailcfg.MapResponse] for the given NodeID based on the provided [change.Change]. +func generateMapResponse(nc nodeConnection, mapper *mapper, r change.Change) (*tailcfg.MapResponse, error) { + nodeID := nc.nodeID() + version := nc.version() + + if r.IsEmpty() { + return nil, nil //nolint:nilnil // Empty response means nothing to send } - // Validate inputs before processing if nodeID == 0 { return nil, fmt.Errorf("invalid nodeID: %d", nodeID) } @@ -74,99 +87,68 @@ func generateMapResponse(nodeID types.NodeID, version tailcfg.CapabilityVersion, return nil, fmt.Errorf("mapper is nil for nodeID %d", nodeID) } + // Handle self-only responses + if r.IsSelfOnly() && r.TargetNode != nodeID { + return nil, nil //nolint:nilnil // No response needed for other nodes when self-only + } + + // Check if this is a self-update (the changed node is the receiving node). + // When true, ensure the response includes the node's self info so it sees + // its own attribute changes (e.g., tags changed via admin API). + isSelfUpdate := r.OriginNode != 0 && r.OriginNode == nodeID + var ( mapResp *tailcfg.MapResponse err error ) - switch c.Change { - case change.DERP: - mapResp, err = mapper.derpMapResponse(nodeID) + // Track metric using categorized type, not free-form reason + mapResponseGenerated.WithLabelValues(r.Type()).Inc() - case change.NodeCameOnline, change.NodeWentOffline: - if c.IsSubnetRouter { - // TODO(kradalby): This can potentially be a peer update of the old and new subnet router. - mapResp, err = mapper.fullMapResponse(nodeID, version) - } else { - // Trust the change type for online/offline status to avoid race conditions - // between NodeStore updates and change processing - onlineStatus := c.Change == change.NodeCameOnline + // Check if this requires runtime peer visibility computation (e.g., policy changes) + if r.RequiresRuntimePeerComputation { + currentPeers := mapper.state.ListPeers(nodeID) - mapResp, err = mapper.peerChangedPatchResponse(nodeID, []*tailcfg.PeerChange{ - { - NodeID: c.NodeID.NodeID(), - Online: ptr.To(onlineStatus), - }, - }) + currentPeerIDs := make([]tailcfg.NodeID, 0, currentPeers.Len()) + for _, peer := range currentPeers.All() { + currentPeerIDs = append(currentPeerIDs, peer.ID().NodeID()) } - case change.NodeNewOrUpdate: - // If the node is the one being updated, we send a self update that preserves peer information - // to ensure the node sees changes to its own properties (e.g., hostname/DNS name changes) - // without losing its view of peer status during rapid reconnection cycles - if c.IsSelfUpdate(nodeID) { - mapResp, err = mapper.selfMapResponse(nodeID, version) - } else { - mapResp, err = mapper.peerChangeResponse(nodeID, version, c.NodeID) - } - - case change.NodeRemove: - mapResp, err = mapper.peerRemovedResponse(nodeID, c.NodeID) - - case change.NodeKeyExpiry: - // If the node is the one whose key is expiring, we send a "full" self update - // as nodes will ignore patch updates about themselves (?). 
- if c.IsSelfUpdate(nodeID) { - mapResp, err = mapper.selfMapResponse(nodeID, version) - // mapResp, err = mapper.fullMapResponse(nodeID, version) - } else { - mapResp, err = mapper.peerChangedPatchResponse(nodeID, []*tailcfg.PeerChange{ - { - NodeID: c.NodeID.NodeID(), - KeyExpiry: c.NodeExpiry, - }, - }) - } - - default: - // The following will always hit this: - // change.Full, change.Policy - mapResp, err = mapper.fullMapResponse(nodeID, version) + removedPeers := nc.computePeerDiff(currentPeerIDs) + // Include self node when this is a self-update (e.g., node's own tags changed) + // so the node sees its updated self info along with new packet filters. + mapResp, err = mapper.policyChangeResponse(nodeID, version, removedPeers, currentPeers, isSelfUpdate) + } else if isSelfUpdate { + // Non-policy self-update: just send the self node info + mapResp, err = mapper.selfMapResponse(nodeID, version) + } else { + mapResp, err = mapper.buildFromChange(nodeID, version, &r) } if err != nil { return nil, fmt.Errorf("generating map response for nodeID %d: %w", nodeID, err) } - // TODO(kradalby): Is this necessary? - // Validate the generated map response - only check for nil response - // Note: mapResp.Node can be nil for peer updates, which is valid - if mapResp == nil && c.Change != change.DERP && c.Change != change.NodeRemove { - return nil, fmt.Errorf("generated nil map response for nodeID %d change %s", nodeID, c.Change.String()) - } - return mapResp, nil } -// handleNodeChange generates and sends a [tailcfg.MapResponse] for a given node and [change.ChangeSet]. -func handleNodeChange(nc nodeConnection, mapper *mapper, c change.ChangeSet) error { +// handleNodeChange generates and sends a [tailcfg.MapResponse] for a given node and [change.Change]. +func handleNodeChange(nc nodeConnection, mapper *mapper, r change.Change) error { if nc == nil { return errors.New("nodeConnection is nil") } nodeID := nc.nodeID() - log.Debug().Caller().Uint64("node.id", nodeID.Uint64()).Str("change.type", c.Change.String()).Msg("Node change processing started because change notification received") + log.Debug().Caller().Uint64("node.id", nodeID.Uint64()).Str("reason", r.Reason).Msg("Node change processing started because change notification received") - var data *tailcfg.MapResponse - var err error - data, err = generateMapResponse(nodeID, nc.version(), mapper, c) + data, err := generateMapResponse(nc, mapper, r) if err != nil { return fmt.Errorf("generating map response for node %d: %w", nodeID, err) } if data == nil { - // No data to send is valid for some change types + // No data to send is valid for some response types return nil } @@ -176,6 +158,9 @@ func handleNodeChange(nc nodeConnection, mapper *mapper, c change.ChangeSet) err return fmt.Errorf("sending map response to node %d: %w", nodeID, err) } + // Update peer tracking after successful send + nc.updateSentPeers(data) + return nil } @@ -187,7 +172,7 @@ type workResult struct { // work represents a unit of work to be processed by workers. 
type work struct { - c change.ChangeSet + c change.Change nodeID types.NodeID resultCh chan<- workResult // optional channel for synchronous operations } diff --git a/hscontrol/mapper/batcher_lockfree.go b/hscontrol/mapper/batcher_lockfree.go index d40b36b0..e00512b6 100644 --- a/hscontrol/mapper/batcher_lockfree.go +++ b/hscontrol/mapper/batcher_lockfree.go @@ -1,8 +1,8 @@ package mapper import ( - "context" "crypto/rand" + "errors" "fmt" "sync" "sync/atomic" @@ -16,6 +16,8 @@ import ( "tailscale.com/types/ptr" ) +var errConnectionClosed = errors.New("connection channel already closed") + // LockFreeBatcher uses atomic operations and concurrent maps to eliminate mutex contention. type LockFreeBatcher struct { tick *time.Ticker @@ -26,16 +28,16 @@ type LockFreeBatcher struct { connected *xsync.Map[types.NodeID, *time.Time] // Work queue channel - workCh chan work - ctx context.Context - cancel context.CancelFunc + workCh chan work + workChOnce sync.Once // Ensures workCh is only closed once + done chan struct{} + doneOnce sync.Once // Ensures done is only closed once // Batching state - pendingChanges *xsync.Map[types.NodeID, []change.ChangeSet] + pendingChanges *xsync.Map[types.NodeID, []change.Change] // Metrics totalNodes atomic.Int64 - totalUpdates atomic.Int64 workQueuedCount atomic.Int64 workProcessed atomic.Int64 workErrors atomic.Int64 @@ -140,28 +142,27 @@ func (b *LockFreeBatcher) RemoveNode(id types.NodeID, c chan<- *tailcfg.MapRespo } // AddWork queues a change to be processed by the batcher. -func (b *LockFreeBatcher) AddWork(c ...change.ChangeSet) { - b.addWork(c...) +func (b *LockFreeBatcher) AddWork(r ...change.Change) { + b.addWork(r...) } func (b *LockFreeBatcher) Start() { - b.ctx, b.cancel = context.WithCancel(context.Background()) + b.done = make(chan struct{}) go b.doWork() } func (b *LockFreeBatcher) Close() { - if b.cancel != nil { - b.cancel() - b.cancel = nil - } + // Signal shutdown to all goroutines, only once + b.doneOnce.Do(func() { + if b.done != nil { + close(b.done) + } + }) - // Only close workCh once - select { - case <-b.workCh: - // Channel is already closed - default: + // Only close workCh once using sync.Once to prevent races + b.workChOnce.Do(func() { close(b.workCh) - } + }) // Close the underlying channels supplying the data to the clients. b.nodes.Range(func(nodeID types.NodeID, conn *multiChannelNodeConn) bool { @@ -187,8 +188,8 @@ func (b *LockFreeBatcher) doWork() { case <-cleanupTicker.C: // Clean up nodes that have been offline for too long b.cleanupOfflineNodes() - case <-b.ctx.Done(): - log.Info().Msg("batcher context done, stopping to feed workers") + case <-b.done: + log.Info().Msg("batcher done channel closed, stopping to feed workers") return } } @@ -213,15 +214,19 @@ func (b *LockFreeBatcher) worker(workerID int) { var result workResult if nc, exists := b.nodes.Load(w.nodeID); exists { var err error - result.mapResponse, err = generateMapResponse(nc.nodeID(), nc.version(), b.mapper, w.c) + + result.mapResponse, err = generateMapResponse(nc, b.mapper, w.c) result.err = err if result.err != nil { b.workErrors.Add(1) log.Error().Err(result.err). Int("worker.id", workerID). Uint64("node.id", w.nodeID.Uint64()). - Str("change", w.c.Change.String()). + Str("reason", w.c.Reason). 
Msg("failed to generate map response for synchronous work") + } else if result.mapResponse != nil { + // Update peer tracking for synchronous responses too + nc.updateSentPeers(result.mapResponse) } } else { result.err = fmt.Errorf("node %d not found", w.nodeID) @@ -236,7 +241,7 @@ func (b *LockFreeBatcher) worker(workerID int) { // Send result select { case w.resultCh <- result: - case <-b.ctx.Done(): + case <-b.done: return } @@ -254,20 +259,20 @@ func (b *LockFreeBatcher) worker(workerID int) { b.workErrors.Add(1) log.Error().Err(err). Int("worker.id", workerID). - Uint64("node.id", w.c.NodeID.Uint64()). - Str("change", w.c.Change.String()). + Uint64("node.id", w.nodeID.Uint64()). + Str("reason", w.c.Reason). Msg("failed to apply change") } } - case <-b.ctx.Done(): - log.Debug().Int("workder.id", workerID).Msg("batcher context is done, exiting worker") + case <-b.done: + log.Debug().Int("worker.id", workerID).Msg("batcher shutting down, exiting worker") return } } } -func (b *LockFreeBatcher) addWork(c ...change.ChangeSet) { - b.addToBatch(c...) +func (b *LockFreeBatcher) addWork(r ...change.Change) { + b.addToBatch(r...) } // queueWork safely queues work. @@ -277,44 +282,78 @@ func (b *LockFreeBatcher) queueWork(w work) { select { case b.workCh <- w: // Successfully queued - case <-b.ctx.Done(): + case <-b.done: // Batcher is shutting down return } } -// addToBatch adds a change to the pending batch. -func (b *LockFreeBatcher) addToBatch(c ...change.ChangeSet) { +// addToBatch adds changes to the pending batch. +func (b *LockFreeBatcher) addToBatch(changes ...change.Change) { + // Clean up any nodes being permanently removed from the system. + // + // This handles the case where a node is deleted from state but the batcher + // still has it registered. By cleaning up here, we prevent "node not found" + // errors when workers try to generate map responses for deleted nodes. + // + // Safety: change.Change.PeersRemoved is ONLY populated when nodes are actually + // deleted from the system (via change.NodeRemoved in state.DeleteNode). Policy + // changes that affect peer visibility do NOT use this field - they set + // RequiresRuntimePeerComputation=true and compute removed peers at runtime, + // putting them in tailcfg.MapResponse.PeersRemoved (a different struct). + // Therefore, this cleanup only removes nodes that are truly being deleted, + // not nodes that are still connected but have lost visibility of certain peers. + // + // See: https://github.com/juanfont/headscale/issues/2924 + for _, ch := range changes { + for _, removedID := range ch.PeersRemoved { + if _, existed := b.nodes.LoadAndDelete(removedID); existed { + b.totalNodes.Add(-1) + log.Debug(). + Uint64("node.id", removedID.Uint64()). + Msg("Removed deleted node from batcher") + } + + b.connected.Delete(removedID) + b.pendingChanges.Delete(removedID) + } + } + // Short circuit if any of the changes is a full update, which // means we can skip sending individual changes. 
- if change.HasFull(c) { + if change.HasFull(changes) { b.nodes.Range(func(nodeID types.NodeID, _ *multiChannelNodeConn) bool { - b.pendingChanges.Store(nodeID, []change.ChangeSet{{Change: change.Full}}) + b.pendingChanges.Store(nodeID, []change.Change{change.FullUpdate()}) return true }) - return - } - - all, self := change.SplitAllAndSelf(c) - - for _, changeSet := range self { - changes, _ := b.pendingChanges.LoadOrStore(changeSet.NodeID, []change.ChangeSet{}) - changes = append(changes, changeSet) - b.pendingChanges.Store(changeSet.NodeID, changes) return } - b.nodes.Range(func(nodeID types.NodeID, _ *multiChannelNodeConn) bool { - rel := change.RemoveUpdatesForSelf(nodeID, all) + broadcast, targeted := change.SplitTargetedAndBroadcast(changes) - changes, _ := b.pendingChanges.LoadOrStore(nodeID, []change.ChangeSet{}) - changes = append(changes, rel...) - b.pendingChanges.Store(nodeID, changes) + // Handle targeted changes - send only to the specific node + for _, ch := range targeted { + pending, _ := b.pendingChanges.LoadOrStore(ch.TargetNode, []change.Change{}) + pending = append(pending, ch) + b.pendingChanges.Store(ch.TargetNode, pending) + } - return true - }) + // Handle broadcast changes - send to all nodes, filtering as needed + if len(broadcast) > 0 { + b.nodes.Range(func(nodeID types.NodeID, _ *multiChannelNodeConn) bool { + filtered := change.FilterForNode(nodeID, broadcast) + + if len(filtered) > 0 { + pending, _ := b.pendingChanges.LoadOrStore(nodeID, []change.Change{}) + pending = append(pending, filtered...) + b.pendingChanges.Store(nodeID, pending) + } + + return true + }) + } } // processBatchedChanges processes all pending batched changes. @@ -324,14 +363,14 @@ func (b *LockFreeBatcher) processBatchedChanges() { } // Process all pending changes - b.pendingChanges.Range(func(nodeID types.NodeID, changes []change.ChangeSet) bool { - if len(changes) == 0 { + b.pendingChanges.Range(func(nodeID types.NodeID, pending []change.Change) bool { + if len(pending) == 0 { return true } // Send all batched changes for this node - for _, c := range changes { - b.queueWork(work{c: c, nodeID: nodeID, resultCh: nil}) + for _, ch := range pending { + b.queueWork(work{c: ch, nodeID: nodeID, resultCh: nil}) } // Clear the pending changes for this node @@ -434,17 +473,17 @@ func (b *LockFreeBatcher) ConnectedMap() *xsync.Map[types.NodeID, bool] { // MapResponseFromChange queues work to generate a map response and waits for the result. // This allows synchronous map generation using the same worker pool. 
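A stripped-down sketch of this request/response pattern with hypothetical job and result types: callers attach a buffered result channel to the queued work and block on it, while a closed done channel unblocks both queueing and waiting during shutdown:

```go
package main

import (
	"errors"
	"fmt"
)

type job struct {
	id       int
	resultCh chan<- string // nil for fire-and-forget work
}

// submitAndWait queues a job and blocks for its result, giving up if the
// worker pool is shutting down (signalled by closing done).
func submitAndWait(workCh chan<- job, done <-chan struct{}, id int) (string, error) {
	resultCh := make(chan string, 1)

	select {
	case workCh <- job{id: id, resultCh: resultCh}:
	case <-done:
		return "", errors.New("shutting down before job could be queued")
	}

	select {
	case res := <-resultCh:
		return res, nil
	case <-done:
		return "", errors.New("shutting down while waiting for result")
	}
}

func main() {
	workCh := make(chan job, 8)
	done := make(chan struct{})

	// A single worker: replies on resultCh when one is attached.
	go func() {
		for j := range workCh {
			if j.resultCh != nil {
				j.resultCh <- fmt.Sprintf("result for job %d", j.id)
			}
		}
	}()

	res, err := submitAndWait(workCh, done, 1)
	fmt.Println(res, err)

	close(done)
}
```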
-func (b *LockFreeBatcher) MapResponseFromChange(id types.NodeID, c change.ChangeSet) (*tailcfg.MapResponse, error) { +func (b *LockFreeBatcher) MapResponseFromChange(id types.NodeID, ch change.Change) (*tailcfg.MapResponse, error) { resultCh := make(chan workResult, 1) // Queue the work with a result channel using the safe queueing method - b.queueWork(work{c: c, nodeID: id, resultCh: resultCh}) + b.queueWork(work{c: ch, nodeID: id, resultCh: resultCh}) // Wait for the result select { case result := <-resultCh: return result.mapResponse, result.err - case <-b.ctx.Done(): + case <-b.done: return nil, fmt.Errorf("batcher shutting down while generating map response for node %d", id) } } @@ -456,6 +495,7 @@ type connectionEntry struct { version tailcfg.CapabilityVersion created time.Time lastUsed atomic.Int64 // Unix timestamp of last successful send + closed atomic.Bool // Indicates if this connection has been closed } // multiChannelNodeConn manages multiple concurrent connections for a single node. @@ -467,6 +507,12 @@ type multiChannelNodeConn struct { connections []*connectionEntry updateCount atomic.Int64 + + // lastSentPeers tracks which peers were last sent to this node. + // This enables computing diffs for policy changes instead of sending + // full peer lists (which clients interpret as "no change" when empty). + // Using xsync.Map for lock-free concurrent access. + lastSentPeers *xsync.Map[tailcfg.NodeID, struct{}] } // generateConnectionID generates a unique connection identifier. @@ -479,8 +525,9 @@ func generateConnectionID() string { // newMultiChannelNodeConn creates a new multi-channel node connection. func newMultiChannelNodeConn(id types.NodeID, mapper *mapper) *multiChannelNodeConn { return &multiChannelNodeConn{ - id: id, - mapper: mapper, + id: id, + mapper: mapper, + lastSentPeers: xsync.NewMap[tailcfg.NodeID, struct{}](), } } @@ -489,6 +536,9 @@ func (mc *multiChannelNodeConn) close() { defer mc.mutex.Unlock() for _, conn := range mc.connections { + // Mark as closed before closing the channel to prevent + // send on closed channel panics from concurrent workers + conn.closed.Store(true) close(conn.c) } } @@ -621,6 +671,12 @@ func (entry *connectionEntry) send(data *tailcfg.MapResponse) error { return nil } + // Check if the connection has been closed to prevent send on closed channel panic. + // This can happen during shutdown when Close() is called while workers are still processing. + if entry.closed.Load() { + return fmt.Errorf("connection %s: %w", entry.id, errConnectionClosed) + } + // Use a short timeout to detect stale connections where the client isn't reading the channel. // This is critical for detecting Docker containers that are forcefully terminated // but still have channels that appear open. @@ -654,9 +710,59 @@ func (mc *multiChannelNodeConn) version() tailcfg.CapabilityVersion { return mc.connections[0].version } +// updateSentPeers updates the tracked peer state based on a sent MapResponse. +// This must be called after successfully sending a response to keep track of +// what the client knows about, enabling accurate diffs for future updates. 
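A self-contained sketch of the tracking lifecycle implemented by updateSentPeers and computePeerDiff below, using plain int64 IDs instead of tailcfg types: a full peer list resets the tracked set, incremental changes adjust it, and diffing against the current view yields the removals to send:

```go
package main

import "fmt"

// sentPeers mimics lastSentPeers: the set of peer IDs the client has been told about.
type sentPeers map[int64]struct{}

// applyFull replaces the tracked set with a full peer list.
func (s sentPeers) applyFull(peers []int64) {
	for id := range s {
		delete(s, id)
	}
	for _, id := range peers {
		s[id] = struct{}{}
	}
}

// applyIncremental records added and removed peers from a partial update.
func (s sentPeers) applyIncremental(added, removed []int64) {
	for _, id := range added {
		s[id] = struct{}{}
	}
	for _, id := range removed {
		delete(s, id)
	}
}

// diff returns IDs that were sent before but are absent from the current view,
// i.e. the peers the client must now be told to remove.
func (s sentPeers) diff(current []int64) []int64 {
	currentSet := make(map[int64]struct{}, len(current))
	for _, id := range current {
		currentSet[id] = struct{}{}
	}

	var removed []int64
	for id := range s {
		if _, ok := currentSet[id]; !ok {
			removed = append(removed, id)
		}
	}
	return removed
}

func main() {
	s := sentPeers{}
	s.applyFull([]int64{1, 2, 3})       // full MapResponse: client knows 1, 2, 3
	s.applyIncremental([]int64{4}, nil) // PeersChanged: client also knows 4

	// Policy change: this node may now only see peers 1 and 4.
	fmt.Println(s.diff([]int64{1, 4})) // -> [2 3] (order may vary)
}
```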
+func (mc *multiChannelNodeConn) updateSentPeers(resp *tailcfg.MapResponse) { + if resp == nil { + return + } + + // Full peer list replaces tracked state entirely + if resp.Peers != nil { + mc.lastSentPeers.Clear() + + for _, peer := range resp.Peers { + mc.lastSentPeers.Store(peer.ID, struct{}{}) + } + } + + // Incremental additions + for _, peer := range resp.PeersChanged { + mc.lastSentPeers.Store(peer.ID, struct{}{}) + } + + // Incremental removals + for _, id := range resp.PeersRemoved { + mc.lastSentPeers.Delete(id) + } +} + +// computePeerDiff compares the current peer list against what was last sent +// and returns the peers that were removed (in lastSentPeers but not in current). +func (mc *multiChannelNodeConn) computePeerDiff(currentPeers []tailcfg.NodeID) []tailcfg.NodeID { + currentSet := make(map[tailcfg.NodeID]struct{}, len(currentPeers)) + for _, id := range currentPeers { + currentSet[id] = struct{}{} + } + + var removed []tailcfg.NodeID + + // Find removed: in lastSentPeers but not in current + mc.lastSentPeers.Range(func(id tailcfg.NodeID, _ struct{}) bool { + if _, exists := currentSet[id]; !exists { + removed = append(removed, id) + } + + return true + }) + + return removed +} + // change applies a change to all active connections for the node. -func (mc *multiChannelNodeConn) change(c change.ChangeSet) error { - return handleNodeChange(mc, mc.mapper, c) +func (mc *multiChannelNodeConn) change(r change.Change) error { + return handleNodeChange(mc, mc.mapper, r) } // DebugNodeInfo contains debug information about a node's connections. @@ -715,3 +821,9 @@ func (b *LockFreeBatcher) Debug() map[types.NodeID]DebugNodeInfo { func (b *LockFreeBatcher) DebugMapResponses() (map[types.NodeID][]tailcfg.MapResponse, error) { return b.mapper.debugMapResponses() } + +// WorkErrors returns the count of work errors encountered. +// This is primarily useful for testing and debugging. +func (b *LockFreeBatcher) WorkErrors() int64 { + return b.workErrors.Load() +} diff --git a/hscontrol/mapper/batcher_test.go b/hscontrol/mapper/batcher_test.go index 30e75f48..70d5e377 100644 --- a/hscontrol/mapper/batcher_test.go +++ b/hscontrol/mapper/batcher_test.go @@ -1,8 +1,10 @@ package mapper import ( + "errors" "fmt" "net/netip" + "runtime" "strings" "sync" "sync/atomic" @@ -21,6 +23,8 @@ import ( "zgo.at/zcache/v2" ) +var errNodeNotFoundAfterAdd = errors.New("node not found after adding to batcher") + // batcherTestCase defines a batcher function with a descriptive name for testing. 
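Further down, the test harness tracks the maximum peer count per node with a compare-and-swap loop; a standalone sketch of that atomic-max idiom:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// storeMax raises cur to v if v is larger, without taking a lock. The loop
// retries when another goroutine updates the value between Load and CAS.
func storeMax(cur *atomic.Int64, v int64) {
	for {
		old := cur.Load()
		if v <= old {
			return
		}
		if cur.CompareAndSwap(old, v) {
			return
		}
	}
}

func main() {
	var maxPeers atomic.Int64

	for _, seen := range []int64{3, 7, 5} {
		storeMax(&maxPeers, seen)
	}

	fmt.Println(maxPeers.Load()) // 7
}
```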
type batcherTestCase struct { name string @@ -50,7 +54,12 @@ func (t *testBatcherWrapper) AddNode(id types.NodeID, c chan<- *tailcfg.MapRespo // Send the online notification that poll.go would normally send // This ensures other nodes get notified about this node coming online - t.AddWork(change.NodeOnline(id)) + node, ok := t.state.GetNodeByID(id) + if !ok { + return fmt.Errorf("%w: %d", errNodeNotFoundAfterAdd, id) + } + + t.AddWork(change.NodeOnlineFor(node)) return nil } @@ -65,7 +74,10 @@ func (t *testBatcherWrapper) RemoveNode(id types.NodeID, c chan<- *tailcfg.MapRe // Send the offline notification that poll.go would normally send // Do this BEFORE removing from batcher so the change can be processed - t.AddWork(change.NodeOffline(id)) + node, ok := t.state.GetNodeByID(id) + if ok { + t.AddWork(change.NodeOfflineFor(node)) + } // Finally remove from the real batcher removed := t.Batcher.RemoveNode(id, c) @@ -135,12 +147,12 @@ type node struct { n *types.Node ch chan *tailcfg.MapResponse - // Update tracking + // Update tracking (all accessed atomically for thread safety) updateCount int64 patchCount int64 fullCount int64 - maxPeersCount int - lastPeerCount int + maxPeersCount atomic.Int64 + lastPeerCount atomic.Int64 stop chan struct{} stopped chan struct{} } @@ -192,15 +204,16 @@ func setupBatcherWithTestData( }, }, Tuning: types.Tuning{ - BatchChangeDelay: 10 * time.Millisecond, - BatcherWorkers: types.DefaultBatcherWorkers(), // Use same logic as config.go + BatchChangeDelay: 10 * time.Millisecond, + BatcherWorkers: types.DefaultBatcherWorkers(), // Use same logic as config.go + NodeStoreBatchSize: state.TestBatchSize, + NodeStoreBatchTimeout: state.TestBatchTimeout, }, } // Create database and populate it with test data database, err := db.NewHeadscaleDatabase( - cfg.Database, - "", + cfg, emptyCache(), ) if err != nil { @@ -408,18 +421,32 @@ func (n *node) start() { // Track update types if info.IsFull { atomic.AddInt64(&n.fullCount, 1) - n.lastPeerCount = info.PeerCount - // Update max peers seen - if info.PeerCount > n.maxPeersCount { - n.maxPeersCount = info.PeerCount + n.lastPeerCount.Store(int64(info.PeerCount)) + // Update max peers seen using compare-and-swap for thread safety + for { + current := n.maxPeersCount.Load() + if int64(info.PeerCount) <= current { + break + } + + if n.maxPeersCount.CompareAndSwap(current, int64(info.PeerCount)) { + break + } } } if info.IsPatch { atomic.AddInt64(&n.patchCount, 1) - // For patches, we track how many patch items - if info.PatchCount > n.maxPeersCount { - n.maxPeersCount = info.PatchCount + // For patches, we track how many patch items using compare-and-swap + for { + current := n.maxPeersCount.Load() + if int64(info.PatchCount) <= current { + break + } + + if n.maxPeersCount.CompareAndSwap(current, int64(info.PatchCount)) { + break + } } } } @@ -451,8 +478,8 @@ func (n *node) cleanup() NodeStats { TotalUpdates: atomic.LoadInt64(&n.updateCount), PatchUpdates: atomic.LoadInt64(&n.patchCount), FullUpdates: atomic.LoadInt64(&n.fullCount), - MaxPeersSeen: n.maxPeersCount, - LastPeerCount: n.lastPeerCount, + MaxPeersSeen: int(n.maxPeersCount.Load()), + LastPeerCount: int(n.lastPeerCount.Load()), } } @@ -489,8 +516,10 @@ func TestEnhancedNodeTracking(t *testing.T) { // Send the data to the node's channel testNode.ch <- &resp - // Give it time to process - time.Sleep(100 * time.Millisecond) + // Wait for tracking goroutine to process the update + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assert.GreaterOrEqual(c, 
atomic.LoadInt64(&testNode.updateCount), int64(1), "should have processed the update") + }, time.Second, 10*time.Millisecond, "waiting for update to be processed") // Check stats stats := testNode.cleanup() @@ -520,17 +549,21 @@ func TestEnhancedTrackingWithBatcher(t *testing.T) { // Connect the node to the batcher batcher.AddNode(testNode.n.ID, testNode.ch, tailcfg.CapabilityVersion(100)) - time.Sleep(100 * time.Millisecond) // Let connection settle - // Generate some work - batcher.AddWork(change.FullSet) - time.Sleep(100 * time.Millisecond) // Let work be processed + // Wait for connection to be established + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assert.True(c, batcher.IsConnected(testNode.n.ID), "node should be connected") + }, time.Second, 10*time.Millisecond, "waiting for node connection") - batcher.AddWork(change.PolicySet) - time.Sleep(100 * time.Millisecond) + // Generate work and wait for updates to be processed + batcher.AddWork(change.FullUpdate()) + batcher.AddWork(change.PolicyChange()) + batcher.AddWork(change.DERPMap()) - batcher.AddWork(change.DERPSet) - time.Sleep(100 * time.Millisecond) + // Wait for updates to be processed (at least 1 update received) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assert.GreaterOrEqual(c, atomic.LoadInt64(&testNode.updateCount), int64(1), "should have received updates") + }, time.Second, 10*time.Millisecond, "waiting for updates to be processed") // Check stats stats := testNode.cleanup() @@ -561,14 +594,12 @@ func TestBatcherScalabilityAllToAll(t *testing.T) { name string nodeCount int }{ - {"10_nodes", 10}, - {"50_nodes", 50}, - {"100_nodes", 100}, - // Grinds to a halt because of Database bottleneck - // {"250_nodes", 250}, - // {"500_nodes", 500}, - // {"1000_nodes", 1000}, - // {"5000_nodes", 5000}, + {"10_nodes", 10}, // Quick baseline test + {"100_nodes", 100}, // Full scalability test ~2 minutes + // Large-scale tests commented out - uncomment for scalability testing + // {"1000_nodes", 1000}, // ~12 minutes + // {"2000_nodes", 2000}, // ~60+ minutes + // {"5000_nodes", 5000}, // Not recommended - database bottleneck } for _, batcherFunc := range allBatcherFunctions { @@ -589,7 +620,8 @@ func TestBatcherScalabilityAllToAll(t *testing.T) { // Use large buffer to avoid blocking during rapid joins // Buffer needs to handle nodeCount * average_updates_per_node // Estimate: each node receives ~2*nodeCount updates during all-to-all - bufferSize := max(1000, tc.nodeCount*2) + // For very large tests (>1000 nodes), limit buffer to avoid excessive memory + bufferSize := max(1000, min(tc.nodeCount*2, 10000)) testData, cleanup := setupBatcherWithTestData( t, @@ -615,8 +647,8 @@ func TestBatcherScalabilityAllToAll(t *testing.T) { allNodes[i].start() } - // Give time for tracking goroutines to start - time.Sleep(100 * time.Millisecond) + // Yield to allow tracking goroutines to start + runtime.Gosched() startTime := time.Now() @@ -628,31 +660,26 @@ func TestBatcherScalabilityAllToAll(t *testing.T) { batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100)) // Issue full update after each join to ensure connectivity - batcher.AddWork(change.FullSet) + batcher.AddWork(change.FullUpdate()) - // Add tiny delay for large node counts to prevent overwhelming + // Yield to scheduler for large node counts to prevent overwhelming the work queue if tc.nodeCount > 100 && i%50 == 49 { - time.Sleep(10 * time.Millisecond) + runtime.Gosched() } } joinTime := time.Since(startTime) t.Logf("All nodes joined in %v, waiting 
for full connectivity...", joinTime) - // Wait for all updates to propagate - no timeout, continue until all nodes achieve connectivity - checkInterval := 5 * time.Second + // Wait for all updates to propagate until all nodes achieve connectivity expectedPeers := tc.nodeCount - 1 // Each node should see all others except itself - for { - time.Sleep(checkInterval) - - // Check if all nodes have seen the expected number of peers + assert.EventuallyWithT(t, func(c *assert.CollectT) { connectedCount := 0 - for i := range allNodes { node := &allNodes[i] - // Check current stats without stopping the tracking - currentMaxPeers := node.maxPeersCount + + currentMaxPeers := int(node.maxPeersCount.Load()) if currentMaxPeers >= expectedPeers { connectedCount++ } @@ -662,12 +689,10 @@ func TestBatcherScalabilityAllToAll(t *testing.T) { t.Logf("Progress: %d/%d nodes (%.1f%%) have seen %d+ peers", connectedCount, len(allNodes), progress, expectedPeers) - if connectedCount == len(allNodes) { - t.Logf("✅ All nodes achieved full connectivity!") - break - } - } + assert.Equal(c, len(allNodes), connectedCount, "all nodes should achieve full connectivity") + }, 5*time.Minute, 5*time.Second, "waiting for full connectivity") + t.Logf("✅ All nodes achieved full connectivity!") totalTime := time.Since(startTime) // Disconnect all nodes @@ -676,8 +701,12 @@ func TestBatcherScalabilityAllToAll(t *testing.T) { batcher.RemoveNode(node.n.ID, node.ch) } - // Give time for final updates to process - time.Sleep(500 * time.Millisecond) + // Wait for all nodes to be disconnected + assert.EventuallyWithT(t, func(c *assert.CollectT) { + for i := range allNodes { + assert.False(c, batcher.IsConnected(allNodes[i].n.ID), "node should be disconnected") + } + }, 5*time.Second, 50*time.Millisecond, "waiting for nodes to disconnect") // Collect final statistics totalUpdates := int64(0) @@ -802,7 +831,7 @@ func TestBatcherBasicOperations(t *testing.T) { } // Test work processing with DERP change - batcher.AddWork(change.DERPChange()) + batcher.AddWork(change.DERPMap()) // Wait for update and validate content select { @@ -929,31 +958,31 @@ func drainChannelTimeout(ch <-chan *tailcfg.MapResponse, name string, timeout ti // }{ // { // name: "DERP change", -// changeSet: change.DERPSet, +// changeSet: change.DERPMapResponse(), // expectData: true, // description: "DERP changes should generate map updates", // }, // { // name: "Node key expiry", -// changeSet: change.KeyExpiry(testNodes[1].n.ID), +// changeSet: change.KeyExpiryFor(testNodes[1].n.ID), // expectData: true, // description: "Node key expiry with real node data", // }, // { // name: "Node new registration", -// changeSet: change.NodeAdded(testNodes[1].n.ID), +// changeSet: change.NodeAddedResponse(testNodes[1].n.ID), // expectData: true, // description: "New node registration with real data", // }, // { // name: "Full update", -// changeSet: change.FullSet, +// changeSet: change.FullUpdateResponse(), // expectData: true, // description: "Full updates with real node data", // }, // { // name: "Policy change", -// changeSet: change.PolicySet, +// changeSet: change.PolicyChangeResponse(), // expectData: true, // description: "Policy updates with real node data", // }, @@ -1027,13 +1056,13 @@ func TestBatcherWorkQueueBatching(t *testing.T) { var receivedUpdates []*tailcfg.MapResponse // Add multiple changes rapidly to test batching - batcher.AddWork(change.DERPSet) + batcher.AddWork(change.DERPMap()) // Use a valid expiry time for testing since test nodes don't have expiry set 
testExpiry := time.Now().Add(24 * time.Hour) - batcher.AddWork(change.KeyExpiry(testNodes[1].n.ID, testExpiry)) - batcher.AddWork(change.DERPSet) + batcher.AddWork(change.KeyExpiryFor(testNodes[1].n.ID, testExpiry)) + batcher.AddWork(change.DERPMap()) batcher.AddWork(change.NodeAdded(testNodes[1].n.ID)) - batcher.AddWork(change.DERPSet) + batcher.AddWork(change.DERPMap()) // Collect updates with timeout updateCount := 0 @@ -1057,8 +1086,8 @@ func TestBatcherWorkQueueBatching(t *testing.T) { t.Logf("Update %d: nil update", updateCount) } case <-timeout: - // Expected: 5 changes should generate 6 updates (no batching in current implementation) - expectedUpdates := 6 + // Expected: 5 explicit changes + 1 initial from AddNode + 1 NodeOnline from wrapper = 7 updates + expectedUpdates := 7 t.Logf("Received %d updates from %d changes (expected %d)", updateCount, 5, expectedUpdates) @@ -1124,40 +1153,30 @@ func XTestBatcherChannelClosingRace(t *testing.T) { // First connection ch1 := make(chan *tailcfg.MapResponse, 1) - wg.Add(1) - - go func() { - defer wg.Done() - + wg.Go(func() { batcher.AddNode(testNode.n.ID, ch1, tailcfg.CapabilityVersion(100)) - }() + }) // Add real work during connection chaos if i%10 == 0 { - batcher.AddWork(change.DERPSet) + batcher.AddWork(change.DERPMap()) } // Rapid second connection - should replace ch1 ch2 := make(chan *tailcfg.MapResponse, 1) - wg.Add(1) - - go func() { - defer wg.Done() - - time.Sleep(1 * time.Microsecond) + wg.Go(func() { + runtime.Gosched() // Yield to introduce timing variability batcher.AddNode(testNode.n.ID, ch2, tailcfg.CapabilityVersion(100)) - }() + }) // Remove second connection - wg.Add(1) - go func() { - defer wg.Done() - - time.Sleep(2 * time.Microsecond) + wg.Go(func() { + runtime.Gosched() // Yield to introduce timing variability + runtime.Gosched() // Extra yield to offset from AddNode batcher.RemoveNode(testNode.n.ID, ch2) - }() + }) wg.Wait() @@ -1240,7 +1259,7 @@ func TestBatcherWorkerChannelSafety(t *testing.T) { // Add node and immediately queue real work batcher.AddNode(testNode.n.ID, ch, tailcfg.CapabilityVersion(100)) - batcher.AddWork(change.DERPSet) + batcher.AddWork(change.DERPMap()) // Consumer goroutine to validate data and detect channel issues go func() { @@ -1282,15 +1301,17 @@ func TestBatcherWorkerChannelSafety(t *testing.T) { if i%10 == 0 { // Use a valid expiry time for testing since test nodes don't have expiry set testExpiry := time.Now().Add(24 * time.Hour) - batcher.AddWork(change.KeyExpiry(testNode.n.ID, testExpiry)) + batcher.AddWork(change.KeyExpiryFor(testNode.n.ID, testExpiry)) } // Rapid removal creates race between worker and removal - time.Sleep(time.Duration(i%3) * 100 * time.Microsecond) + for range i % 3 { + runtime.Gosched() // Introduce timing variability + } batcher.RemoveNode(testNode.n.ID, ch) - // Give workers time to process and close channels - time.Sleep(5 * time.Millisecond) + // Yield to allow workers to process and close channels + runtime.Gosched() }() } @@ -1470,7 +1491,9 @@ func TestBatcherConcurrentClients(t *testing.T) { wg.Done() }() - time.Sleep(time.Duration(i%5) * time.Millisecond) + for range i % 5 { + runtime.Gosched() // Introduce timing variability + } churningChannelsMutex.Lock() ch, exists := churningChannels[nodeID] @@ -1486,12 +1509,12 @@ func TestBatcherConcurrentClients(t *testing.T) { // Generate various types of work during racing if i%3 == 0 { // DERP changes - batcher.AddWork(change.DERPSet) + batcher.AddWork(change.DERPMap()) } if i%5 == 0 { // Full updates using 
real node data - batcher.AddWork(change.FullSet) + batcher.AddWork(change.FullUpdate()) } if i%7 == 0 && len(allNodes) > 0 { @@ -1499,11 +1522,11 @@ func TestBatcherConcurrentClients(t *testing.T) { node := allNodes[i%len(allNodes)] // Use a valid expiry time for testing since test nodes don't have expiry set testExpiry := time.Now().Add(24 * time.Hour) - batcher.AddWork(change.KeyExpiry(node.n.ID, testExpiry)) + batcher.AddWork(change.KeyExpiryFor(node.n.ID, testExpiry)) } - // Small delay to allow some batching - time.Sleep(2 * time.Millisecond) + // Yield to allow some batching + runtime.Gosched() } wg.Wait() @@ -1518,8 +1541,8 @@ func TestBatcherConcurrentClients(t *testing.T) { return } - // Allow final updates to be processed - time.Sleep(100 * time.Millisecond) + // Yield to allow any in-flight updates to complete + runtime.Gosched() // Validate results panicMutex.Lock() @@ -1729,8 +1752,8 @@ func XTestBatcherScalability(t *testing.T) { testNodes[i].start() } - // Give time for all tracking goroutines to start - time.Sleep(100 * time.Millisecond) + // Yield to allow tracking goroutines to start + runtime.Gosched() // Connect all nodes first so they can see each other as peers connectedNodes := make(map[types.NodeID]bool) @@ -1747,10 +1770,21 @@ func XTestBatcherScalability(t *testing.T) { connectedNodesMutex.Unlock() } - // Give more time for all connections to be established - time.Sleep(500 * time.Millisecond) - batcher.AddWork(change.FullSet) - time.Sleep(500 * time.Millisecond) // Allow initial update to propagate + // Wait for all connections to be established + assert.EventuallyWithT(t, func(c *assert.CollectT) { + for i := range testNodes { + assert.True(c, batcher.IsConnected(testNodes[i].n.ID), "node should be connected") + } + }, 5*time.Second, 50*time.Millisecond, "waiting for nodes to connect") + + batcher.AddWork(change.FullUpdate()) + + // Wait for initial update to propagate + assert.EventuallyWithT(t, func(c *assert.CollectT) { + for i := range testNodes { + assert.GreaterOrEqual(c, atomic.LoadInt64(&testNodes[i].updateCount), int64(1), "should have received initial update") + } + }, 5*time.Second, 50*time.Millisecond, "waiting for initial update") go func() { defer close(done) @@ -1768,19 +1802,16 @@ func XTestBatcherScalability(t *testing.T) { if cycle%10 == 0 { t.Logf("Cycle %d/%d completed", cycle, tc.cycles) } - // Add delays for mixed chaos + // Yield for mixed chaos to introduce timing variability if tc.chaosType == "mixed" && cycle%10 == 0 { - time.Sleep(time.Duration(cycle%2) * time.Microsecond) + runtime.Gosched() } // For chaos testing, only disconnect/reconnect a subset of nodes // This ensures some nodes stay connected to continue receiving updates startIdx := cycle % len(testNodes) - endIdx := startIdx + len(testNodes)/4 - if endIdx > len(testNodes) { - endIdx = len(testNodes) - } + endIdx := min(startIdx+len(testNodes)/4, len(testNodes)) if startIdx >= endIdx { startIdx = 0 @@ -1837,9 +1868,12 @@ func XTestBatcherScalability(t *testing.T) { wg.Done() }() - // Small delay before reconnecting - time.Sleep(time.Duration(index%3) * time.Millisecond) - batcher.AddNode( + // Yield before reconnecting to introduce timing variability + for range index % 3 { + runtime.Gosched() + } + + _ = batcher.AddNode( nodeID, channel, tailcfg.CapabilityVersion(100), @@ -1852,7 +1886,7 @@ func XTestBatcherScalability(t *testing.T) { // Add work to create load if index%5 == 0 { - batcher.AddWork(change.FullSet) + batcher.AddWork(change.FullUpdate()) } }( node.n.ID, @@ 
-1879,11 +1913,11 @@ func XTestBatcherScalability(t *testing.T) { // Generate different types of work to ensure updates are sent switch index % 4 { case 0: - batcher.AddWork(change.FullSet) + batcher.AddWork(change.FullUpdate()) case 1: - batcher.AddWork(change.PolicySet) + batcher.AddWork(change.PolicyChange()) case 2: - batcher.AddWork(change.DERPSet) + batcher.AddWork(change.DERPMap()) default: // Pick a random node and generate a node change if len(testNodes) > 0 { @@ -1892,7 +1926,7 @@ func XTestBatcherScalability(t *testing.T) { change.NodeAdded(testNodes[nodeIdx].n.ID), ) } else { - batcher.AddWork(change.FullSet) + batcher.AddWork(change.FullUpdate()) } } }(i) @@ -1943,9 +1977,17 @@ func XTestBatcherScalability(t *testing.T) { } } - // Give time for batcher workers to process all the work and send updates - // BEFORE disconnecting nodes - time.Sleep(1 * time.Second) + // Wait for batcher workers to process all work and send updates + // before disconnecting nodes + assert.EventuallyWithT(t, func(c *assert.CollectT) { + // Check that at least some updates were processed + var totalUpdates int64 + for i := range testNodes { + totalUpdates += atomic.LoadInt64(&testNodes[i].updateCount) + } + + assert.Positive(c, totalUpdates, "should have processed some updates") + }, 5*time.Second, 50*time.Millisecond, "waiting for updates to be processed") // Now disconnect all nodes from batcher to stop new updates for i := range testNodes { @@ -1953,8 +1995,12 @@ func XTestBatcherScalability(t *testing.T) { batcher.RemoveNode(node.n.ID, node.ch) } - // Give time for enhanced tracking goroutines to process any remaining data in channels - time.Sleep(200 * time.Millisecond) + // Wait for nodes to be disconnected + assert.EventuallyWithT(t, func(c *assert.CollectT) { + for i := range testNodes { + assert.False(c, batcher.IsConnected(testNodes[i].n.ID), "node should be disconnected") + } + }, 5*time.Second, 50*time.Millisecond, "waiting for nodes to disconnect") // Cleanup nodes and get their final stats totalUpdates := int64(0) @@ -2091,17 +2137,24 @@ func TestBatcherFullPeerUpdates(t *testing.T) { t.Logf("Created %d nodes in database", len(allNodes)) - // Connect nodes one at a time to avoid overwhelming the work queue + // Connect nodes one at a time and wait for each to be connected for i, node := range allNodes { batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100)) t.Logf("Connected node %d (ID: %d)", i, node.n.ID) - // Small delay between connections to allow NodeCameOnline processing - time.Sleep(50 * time.Millisecond) + + // Wait for node to be connected + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assert.True(c, batcher.IsConnected(node.n.ID), "node should be connected") + }, time.Second, 10*time.Millisecond, "waiting for node connection") } - // Give additional time for all NodeCameOnline events to be processed + // Wait for all NodeCameOnline events to be processed t.Logf("Waiting for NodeCameOnline events to settle...") - time.Sleep(500 * time.Millisecond) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + for i := range allNodes { + assert.True(c, batcher.IsConnected(allNodes[i].n.ID), "all nodes should be connected") + } + }, 5*time.Second, 50*time.Millisecond, "waiting for all nodes to connect") // Check how many peers each node should see for i, node := range allNodes { @@ -2111,11 +2164,23 @@ func TestBatcherFullPeerUpdates(t *testing.T) { // Send a full update - this should generate full peer lists t.Logf("Sending FullSet update...") - 
batcher.AddWork(change.FullSet) + batcher.AddWork(change.FullUpdate()) - // Give much more time for workers to process the FullSet work items + // Wait for FullSet work items to be processed t.Logf("Waiting for FullSet to be processed...") - time.Sleep(1 * time.Second) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + // Check that some data is available in at least one channel + found := false + + for i := range allNodes { + if len(allNodes[i].ch) > 0 { + found = true + break + } + } + + assert.True(c, found, "no updates received yet") + }, 5*time.Second, 50*time.Millisecond, "waiting for FullSet updates") // Check what each node receives - read multiple updates totalUpdates := 0 @@ -2195,7 +2260,7 @@ func TestBatcherFullPeerUpdates(t *testing.T) { t.Logf("Total updates received across all nodes: %d", totalUpdates) if !foundFullUpdate { - t.Errorf("CRITICAL: No FULL updates received despite sending change.FullSet!") + t.Errorf("CRITICAL: No FULL updates received despite sending change.FullUpdateResponse()!") t.Errorf( "This confirms the bug - FullSet updates are not generating full peer responses", ) @@ -2228,7 +2293,12 @@ func TestBatcherRapidReconnection(t *testing.T) { } } - time.Sleep(100 * time.Millisecond) // Let connections settle + // Wait for all connections to settle + assert.EventuallyWithT(t, func(c *assert.CollectT) { + for i := range allNodes { + assert.True(c, batcher.IsConnected(allNodes[i].n.ID), "node should be connected") + } + }, 5*time.Second, 50*time.Millisecond, "waiting for connections to settle") // Phase 2: Rapid disconnect ALL nodes (simulating nodes going down) t.Logf("Phase 2: Rapid disconnect all nodes...") @@ -2248,7 +2318,12 @@ func TestBatcherRapidReconnection(t *testing.T) { } } - time.Sleep(100 * time.Millisecond) // Let reconnections settle + // Wait for all reconnections to settle + assert.EventuallyWithT(t, func(c *assert.CollectT) { + for i := range allNodes { + assert.True(c, batcher.IsConnected(allNodes[i].n.ID), "node should be reconnected") + } + }, 5*time.Second, 50*time.Millisecond, "waiting for reconnections to settle") // Phase 4: Check debug status - THIS IS WHERE THE BUG SHOULD APPEAR t.Logf("Phase 4: Checking debug status...") @@ -2296,12 +2371,12 @@ func TestBatcherRapidReconnection(t *testing.T) { t.Logf("Phase 5: Testing if nodes can receive updates despite debug status...") // Send a change that should reach all nodes - batcher.AddWork(change.DERPChange()) + batcher.AddWork(change.DERPMap()) receivedCount := 0 timeout := time.After(500 * time.Millisecond) - for i := 0; i < len(allNodes); i++ { + for i := range allNodes { select { case update := <-newChannels[i]: if update != nil { @@ -2349,7 +2424,11 @@ func TestBatcherMultiConnection(t *testing.T) { t.Fatalf("Failed to add node2: %v", err) } - time.Sleep(50 * time.Millisecond) + // Wait for initial connections + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assert.True(c, batcher.IsConnected(node1.n.ID), "node1 should be connected") + assert.True(c, batcher.IsConnected(node2.n.ID), "node2 should be connected") + }, time.Second, 10*time.Millisecond, "waiting for initial connections") // Phase 2: Add second connection for node1 (multi-connection scenario) t.Logf("Phase 2: Adding second connection for node 1...") @@ -2359,7 +2438,8 @@ func TestBatcherMultiConnection(t *testing.T) { t.Fatalf("Failed to add second connection for node1: %v", err) } - time.Sleep(50 * time.Millisecond) + // Yield to allow connection to be processed + runtime.Gosched() // Phase 3: Add third 
connection for node1 t.Logf("Phase 3: Adding third connection for node 1...") @@ -2369,7 +2449,8 @@ func TestBatcherMultiConnection(t *testing.T) { t.Fatalf("Failed to add third connection for node1: %v", err) } - time.Sleep(50 * time.Millisecond) + // Yield to allow connection to be processed + runtime.Gosched() // Phase 4: Verify debug status shows correct connection count t.Logf("Phase 4: Verifying debug status shows multiple connections...") @@ -2426,15 +2507,14 @@ func TestBatcherMultiConnection(t *testing.T) { clearChannel(node2.ch) // Send a change notification from node2 (so node1 should receive it on all connections) - testChangeSet := change.ChangeSet{ - NodeID: node2.n.ID, - Change: change.NodeNewOrUpdate, - SelfUpdateOnly: false, - } + testChangeSet := change.NodeAdded(node2.n.ID) batcher.AddWork(testChangeSet) - time.Sleep(100 * time.Millisecond) // Let updates propagate + // Wait for updates to propagate to at least one channel + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assert.Positive(c, len(node1.ch)+len(secondChannel)+len(thirdChannel), "should have received updates") + }, 5*time.Second, 50*time.Millisecond, "waiting for updates to propagate") // Verify all three connections for node1 receive the update connection1Received := false @@ -2481,7 +2561,8 @@ func TestBatcherMultiConnection(t *testing.T) { t.Errorf("Failed to remove second connection for node1") } - time.Sleep(50 * time.Millisecond) + // Yield to allow removal to be processed + runtime.Gosched() // Verify debug status shows 2 connections now if debugBatcher, ok := batcher.(interface { @@ -2505,14 +2586,14 @@ func TestBatcherMultiConnection(t *testing.T) { clearChannel(node1.ch) clearChannel(thirdChannel) - testChangeSet2 := change.ChangeSet{ - NodeID: node2.n.ID, - Change: change.NodeNewOrUpdate, - SelfUpdateOnly: false, - } + testChangeSet2 := change.NodeAdded(node2.n.ID) batcher.AddWork(testChangeSet2) - time.Sleep(100 * time.Millisecond) + + // Wait for updates to propagate to remaining channels + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assert.Positive(c, len(node1.ch)+len(thirdChannel), "should have received updates") + }, 5*time.Second, 50*time.Millisecond, "waiting for updates to propagate") // Verify remaining connections still receive updates remaining1Received := false @@ -2539,7 +2620,11 @@ func TestBatcherMultiConnection(t *testing.T) { remaining1Received, remaining3Received) } - // Verify second channel no longer receives updates (should be closed/removed) + // Drain secondChannel of any messages received before removal + // (the test wrapper sends NodeOffline before removal, which may have reached this channel) + clearChannel(secondChannel) + + // Verify second channel no longer receives new updates after being removed select { case <-secondChannel: t.Errorf("Removed connection still received update - this should not happen") @@ -2549,3 +2634,140 @@ func TestBatcherMultiConnection(t *testing.T) { }) } } + +// TestNodeDeletedWhileChangesPending reproduces issue #2924 where deleting a node +// from state while there are pending changes for that node in the batcher causes +// "node not found" errors. The race condition occurs when: +// 1. Node is connected and changes are queued for it +// 2. Node is deleted from state (NodeStore) but not from batcher +// 3. Batcher worker tries to generate map response for deleted node +// 4. Mapper fails to find node in state, causing repeated "node not found" errors. 
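Throughout these test changes, fixed `time.Sleep` settling periods are replaced by polling assertions. As a reference for the hunks above and the regression test that follows, here is a minimal, self-contained sketch of the `assert.EventuallyWithT` pattern they rely on. It assumes testify v1.8.4 or newer (which provides `EventuallyWithT` and `CollectT`); the `connected` callback is a placeholder standing in for a real check such as `batcher.IsConnected(id)`.

```go
package batcher_test

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

// waitForConnected polls until the condition holds or the deadline expires,
// mirroring how the tests above wait for batcher state instead of sleeping.
// connected is a placeholder for e.g. func() bool { return batcher.IsConnected(id) }.
func waitForConnected(t *testing.T, connected func() bool) {
	t.Helper()

	assert.EventuallyWithT(t, func(c *assert.CollectT) {
		assert.True(c, connected(), "node should be connected")
	}, 5*time.Second, 50*time.Millisecond, "waiting for node connection")
}
```

The same shape is used for disconnection and update-count checks; only the asserted condition, the deadline, and the tick interval change.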
+func TestNodeDeletedWhileChangesPending(t *testing.T) { + for _, batcherFunc := range allBatcherFunctions { + t.Run(batcherFunc.name, func(t *testing.T) { + // Create test environment with 3 nodes + testData, cleanup := setupBatcherWithTestData(t, batcherFunc.fn, 1, 3, NORMAL_BUFFER_SIZE) + defer cleanup() + + batcher := testData.Batcher + st := testData.State + node1 := &testData.Nodes[0] + node2 := &testData.Nodes[1] + node3 := &testData.Nodes[2] + + t.Logf("Testing issue #2924: Node1=%d, Node2=%d, Node3=%d", + node1.n.ID, node2.n.ID, node3.n.ID) + + // Helper to drain channels + drainCh := func(ch chan *tailcfg.MapResponse) { + for { + select { + case <-ch: + // drain + default: + return + } + } + } + + // Start update consumers for all nodes + node1.start() + node2.start() + node3.start() + + defer node1.cleanup() + defer node2.cleanup() + defer node3.cleanup() + + // Connect all nodes to the batcher + require.NoError(t, batcher.AddNode(node1.n.ID, node1.ch, tailcfg.CapabilityVersion(100))) + require.NoError(t, batcher.AddNode(node2.n.ID, node2.ch, tailcfg.CapabilityVersion(100))) + require.NoError(t, batcher.AddNode(node3.n.ID, node3.ch, tailcfg.CapabilityVersion(100))) + + // Wait for all nodes to be connected + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assert.True(c, batcher.IsConnected(node1.n.ID), "node1 should be connected") + assert.True(c, batcher.IsConnected(node2.n.ID), "node2 should be connected") + assert.True(c, batcher.IsConnected(node3.n.ID), "node3 should be connected") + }, 5*time.Second, 50*time.Millisecond, "waiting for nodes to connect") + + // Get initial work errors count + var initialWorkErrors int64 + if lfb, ok := unwrapBatcher(batcher).(*LockFreeBatcher); ok { + initialWorkErrors = lfb.WorkErrors() + t.Logf("Initial work errors: %d", initialWorkErrors) + } + + // Clear channels to prepare for the test + drainCh(node1.ch) + drainCh(node2.ch) + drainCh(node3.ch) + + // Get node view for deletion + nodeToDelete, ok := st.GetNodeByID(node3.n.ID) + require.True(t, ok, "node3 should exist in state") + + // Delete the node from state - this returns a NodeRemoved change + // In production, this change is sent to batcher via app.Change() + nodeChange, err := st.DeleteNode(nodeToDelete) + require.NoError(t, err, "should be able to delete node from state") + t.Logf("Deleted node %d from state, change: %s", node3.n.ID, nodeChange.Reason) + + // Verify node is deleted from state + _, exists := st.GetNodeByID(node3.n.ID) + require.False(t, exists, "node3 should be deleted from state") + + // Send the NodeRemoved change to batcher (this is what app.Change() does) + // With the fix, this should clean up node3 from batcher's internal state + batcher.AddWork(nodeChange) + + // Wait for the batcher to process the removal and clean up the node + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assert.False(c, batcher.IsConnected(node3.n.ID), "node3 should be disconnected from batcher") + }, 5*time.Second, 50*time.Millisecond, "waiting for node removal to be processed") + + t.Logf("Node %d connected in batcher after NodeRemoved: %v", node3.n.ID, batcher.IsConnected(node3.n.ID)) + + // Now queue changes that would have caused errors before the fix + // With the fix, these should NOT cause "node not found" errors + // because node3 was cleaned up when NodeRemoved was processed + batcher.AddWork(change.FullUpdate()) + batcher.AddWork(change.PolicyChange()) + + // Wait for work to be processed and verify no errors occurred + // With the fix, no new errors should 
occur because the deleted node + // was cleaned up from batcher state when NodeRemoved was processed + assert.EventuallyWithT(t, func(c *assert.CollectT) { + var finalWorkErrors int64 + if lfb, ok := unwrapBatcher(batcher).(*LockFreeBatcher); ok { + finalWorkErrors = lfb.WorkErrors() + } + + newErrors := finalWorkErrors - initialWorkErrors + assert.Zero(c, newErrors, "Fix for #2924: should have no work errors after node deletion") + }, 5*time.Second, 100*time.Millisecond, "waiting for work processing to complete without errors") + + // Verify remaining nodes still work correctly + drainCh(node1.ch) + drainCh(node2.ch) + batcher.AddWork(change.NodeAdded(node1.n.ID)) + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + // Node 1 and 2 should receive updates + stats1 := NodeStats{TotalUpdates: atomic.LoadInt64(&node1.updateCount)} + stats2 := NodeStats{TotalUpdates: atomic.LoadInt64(&node2.updateCount)} + assert.Positive(c, stats1.TotalUpdates, "node1 should have received updates") + assert.Positive(c, stats2.TotalUpdates, "node2 should have received updates") + }, 5*time.Second, 100*time.Millisecond, "waiting for remaining nodes to receive updates") + }) + } +} + +// unwrapBatcher extracts the underlying batcher from wrapper types. +func unwrapBatcher(b Batcher) Batcher { + if wrapper, ok := b.(*testBatcherWrapper); ok { + return unwrapBatcher(wrapper.Batcher) + } + + return b +} diff --git a/hscontrol/mapper/builder.go b/hscontrol/mapper/builder.go index b85eb908..c666ff24 100644 --- a/hscontrol/mapper/builder.go +++ b/hscontrol/mapper/builder.go @@ -29,10 +29,8 @@ type debugType string const ( fullResponseDebug debugType = "full" selfResponseDebug debugType = "self" - patchResponseDebug debugType = "patch" - removeResponseDebug debugType = "remove" changeResponseDebug debugType = "change" - derpResponseDebug debugType = "derp" + policyResponseDebug debugType = "policy" ) // NewMapResponseBuilder creates a new builder with basic fields set. @@ -76,8 +74,9 @@ func (b *MapResponseBuilder) WithSelfNode() *MapResponseBuilder { } _, matchers := b.mapper.state.Filter() - tailnode, err := tailNode( - nv, b.capVer, b.mapper.state, + + tailnode, err := nv.TailNode( + b.capVer, func(id types.NodeID) []netip.Prefix { return policy.ReduceRoutes(nv, b.mapper.state.GetNodePrimaryRoutes(id), matchers) }, @@ -251,8 +250,8 @@ func (b *MapResponseBuilder) buildTailPeers(peers views.Slice[types.NodeView]) ( changedViews = peers } - tailPeers, err := tailNodes( - changedViews, b.capVer, b.mapper.state, + tailPeers, err := types.TailNodes( + changedViews, b.capVer, func(id types.NodeID) []netip.Prefix { return policy.ReduceRoutes(node, b.mapper.state.GetNodePrimaryRoutes(id), matchers) }, diff --git a/hscontrol/mapper/mapper.go b/hscontrol/mapper/mapper.go index 372bb557..616d470f 100644 --- a/hscontrol/mapper/mapper.go +++ b/hscontrol/mapper/mapper.go @@ -4,7 +4,6 @@ import ( "encoding/json" "fmt" "io/fs" - "net/netip" "net/url" "os" "path" @@ -15,6 +14,7 @@ import ( "github.com/juanfont/headscale/hscontrol/state" "github.com/juanfont/headscale/hscontrol/types" + "github.com/juanfont/headscale/hscontrol/types/change" "github.com/rs/zerolog/log" "tailscale.com/envknob" "tailscale.com/tailcfg" @@ -69,19 +69,22 @@ func newMapper( } } +// generateUserProfiles creates user profiles for MapResponse. 
func generateUserProfiles( node types.NodeView, peers views.Slice[types.NodeView], ) []tailcfg.UserProfile { - userMap := make(map[uint]*types.User) + userMap := make(map[uint]*types.UserView) ids := make([]uint, 0, len(userMap)) - user := node.User() - userMap[user.ID] = &user - ids = append(ids, user.ID) + user := node.Owner() + userID := user.Model().ID + userMap[userID] = &user + ids = append(ids, userID) for _, peer := range peers.All() { - peerUser := peer.User() - userMap[peerUser.ID] = &peerUser - ids = append(ids, peerUser.ID) + peerUser := peer.Owner() + peerUserID := peerUser.Model().ID + userMap[peerUserID] = &peerUser + ids = append(ids, peerUserID) } slices.Sort(ids) @@ -178,52 +181,117 @@ func (m *mapper) selfMapResponse( return ma, err } -func (m *mapper) derpMapResponse( - nodeID types.NodeID, -) (*tailcfg.MapResponse, error) { - return m.NewMapResponseBuilder(nodeID). - WithDebugType(derpResponseDebug). - WithDERPMap(). - Build() -} - -// PeerChangedPatchResponse creates a patch MapResponse with -// incoming update from a state change. -func (m *mapper) peerChangedPatchResponse( - nodeID types.NodeID, - changed []*tailcfg.PeerChange, -) (*tailcfg.MapResponse, error) { - return m.NewMapResponseBuilder(nodeID). - WithDebugType(patchResponseDebug). - WithPeerChangedPatch(changed). - Build() -} - -// peerChangeResponse returns a MapResponse with changed or added nodes. -func (m *mapper) peerChangeResponse( +// policyChangeResponse creates a MapResponse for policy changes. +// It sends: +// - PeersRemoved for peers that are no longer visible after the policy change +// - PeersChanged for remaining peers (their AllowedIPs may have changed due to policy) +// - Updated PacketFilters +// - Updated SSHPolicy (SSH rules may reference users/groups that changed) +// - Optionally, the node's own self info (when includeSelf is true) +// This avoids the issue where an empty Peers slice is interpreted by Tailscale +// clients as "no change" rather than "no peers". +// When includeSelf is true, the node's self info is included so that a node +// whose own attributes changed (e.g., tags via admin API) sees its updated +// self info along with the new packet filters. +func (m *mapper) policyChangeResponse( nodeID types.NodeID, capVer tailcfg.CapabilityVersion, - changedNodeID types.NodeID, + removedPeers []tailcfg.NodeID, + currentPeers views.Slice[types.NodeView], + includeSelf bool, ) (*tailcfg.MapResponse, error) { - peers := m.state.ListPeers(nodeID, changedNodeID) - - return m.NewMapResponseBuilder(nodeID). - WithDebugType(changeResponseDebug). + builder := m.NewMapResponseBuilder(nodeID). + WithDebugType(policyResponseDebug). WithCapabilityVersion(capVer). - WithUserProfiles(peers). - WithPeerChanges(peers). - Build() + WithPacketFilters(). + WithSSHPolicy() + + if includeSelf { + builder = builder.WithSelfNode() + } + + if len(removedPeers) > 0 { + // Convert tailcfg.NodeID to types.NodeID for WithPeersRemoved + removedIDs := make([]types.NodeID, len(removedPeers)) + for i, id := range removedPeers { + removedIDs[i] = types.NodeID(id) //nolint:gosec // NodeID types are equivalent + } + + builder.WithPeersRemoved(removedIDs...) + } + + // Send remaining peers in PeersChanged - their AllowedIPs may have + // changed due to the policy update (e.g., different routes allowed). + if currentPeers.Len() > 0 { + builder.WithPeerChanges(currentPeers) + } + + return builder.Build() } -// peerRemovedResponse creates a MapResponse indicating that a peer has been removed. 
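The `policyChangeResponse` comment above hinges on a protocol detail: Tailscale clients treat an empty `Peers` slice as "no change", not "no peers", so peers that disappear after a policy change must be reported explicitly. A minimal sketch of that distinction, using the real `tailcfg.MapResponse` fields but a hypothetical `removalResponse` helper, not headscale's actual builder:

```go
package mapper

import "tailscale.com/tailcfg"

// removalResponse illustrates why removed peers are listed explicitly:
// leaving Peers empty/nil means "peer list unchanged" to the client,
// whereas PeersRemoved names the nodes that must be dropped.
func removalResponse(removed []tailcfg.NodeID) *tailcfg.MapResponse {
	return &tailcfg.MapResponse{
		// Peers: nil => "no change"; do NOT rely on it to clear peers.
		PeersRemoved: removed,
	}
}
```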
-func (m *mapper) peerRemovedResponse( +// buildFromChange builds a MapResponse from a change.Change specification. +// This provides fine-grained control over what gets included in the response. +func (m *mapper) buildFromChange( nodeID types.NodeID, - removedNodeID types.NodeID, + capVer tailcfg.CapabilityVersion, + resp *change.Change, ) (*tailcfg.MapResponse, error) { - return m.NewMapResponseBuilder(nodeID). - WithDebugType(removeResponseDebug). - WithPeersRemoved(removedNodeID). - Build() + if resp.IsEmpty() { + return nil, nil //nolint:nilnil // Empty response means nothing to send, not an error + } + + // If this is a self-update (the changed node is the receiving node), + // send a self-update response to ensure the node sees its own changes. + if resp.OriginNode != 0 && resp.OriginNode == nodeID { + return m.selfMapResponse(nodeID, capVer) + } + + builder := m.NewMapResponseBuilder(nodeID). + WithCapabilityVersion(capVer). + WithDebugType(changeResponseDebug) + + if resp.IncludeSelf { + builder.WithSelfNode() + } + + if resp.IncludeDERPMap { + builder.WithDERPMap() + } + + if resp.IncludeDNS { + builder.WithDNSConfig() + } + + if resp.IncludeDomain { + builder.WithDomain() + } + + if resp.IncludePolicy { + builder.WithPacketFilters() + builder.WithSSHPolicy() + } + + if resp.SendAllPeers { + peers := m.state.ListPeers(nodeID) + builder.WithUserProfiles(peers) + builder.WithPeers(peers) + } else { + if len(resp.PeersChanged) > 0 { + peers := m.state.ListPeers(nodeID, resp.PeersChanged...) + builder.WithUserProfiles(peers) + builder.WithPeerChanges(peers) + } + + if len(resp.PeersRemoved) > 0 { + builder.WithPeersRemoved(resp.PeersRemoved...) + } + } + + if len(resp.PeerPatches) > 0 { + builder.WithPeerChangedPatch(resp.PeerPatches) + } + + return builder.Build() } func writeDebugMapResponse( @@ -257,11 +325,6 @@ func writeDebugMapResponse( } } -// routeFilterFunc is a function that takes a node ID and returns a list of -// netip.Prefixes that are allowed for that node. It is used to filter routes -// from the primary route manager to the node. 
-type routeFilterFunc func(id types.NodeID) []netip.Prefix - func (m *mapper) debugMapResponses() (map[types.NodeID][]tailcfg.MapResponse, error) { if debugDumpMapResponsePath == "" { return nil, nil diff --git a/hscontrol/mapper/mapper_test.go b/hscontrol/mapper/mapper_test.go index b801f7dd..1bafd135 100644 --- a/hscontrol/mapper/mapper_test.go +++ b/hscontrol/mapper/mapper_test.go @@ -14,6 +14,7 @@ import ( "github.com/juanfont/headscale/hscontrol/types" "tailscale.com/tailcfg" "tailscale.com/types/dnstype" + "tailscale.com/types/ptr" ) var iap = func(ipStr string) *netip.Addr { @@ -50,8 +51,8 @@ func TestDNSConfigMapResponse(t *testing.T) { mach := func(hostname, username string, userid uint) *types.Node { return &types.Node{ Hostname: hostname, - UserID: userid, - User: types.User{ + UserID: ptr.To(userid), + User: &types.User{ Name: username, }, } diff --git a/hscontrol/mapper/suite_test.go b/hscontrol/mapper/suite_test.go deleted file mode 100644 index c9b1a580..00000000 --- a/hscontrol/mapper/suite_test.go +++ /dev/null @@ -1,15 +0,0 @@ -package mapper - -import ( - "testing" - - "gopkg.in/check.v1" -) - -func Test(t *testing.T) { - check.TestingT(t) -} - -var _ = check.Suite(&Suite{}) - -type Suite struct{} diff --git a/hscontrol/mapper/tail.go b/hscontrol/mapper/tail.go deleted file mode 100644 index 3a518d94..00000000 --- a/hscontrol/mapper/tail.go +++ /dev/null @@ -1,145 +0,0 @@ -package mapper - -import ( - "fmt" - "time" - - "github.com/juanfont/headscale/hscontrol/types" - "github.com/samber/lo" - "tailscale.com/net/tsaddr" - "tailscale.com/tailcfg" - "tailscale.com/types/views" -) - -// NodeCanHaveTagChecker is an interface for checking if a node can have a tag. -type NodeCanHaveTagChecker interface { - NodeCanHaveTag(node types.NodeView, tag string) bool -} - -func tailNodes( - nodes views.Slice[types.NodeView], - capVer tailcfg.CapabilityVersion, - checker NodeCanHaveTagChecker, - primaryRouteFunc routeFilterFunc, - cfg *types.Config, -) ([]*tailcfg.Node, error) { - tNodes := make([]*tailcfg.Node, 0, nodes.Len()) - - for _, node := range nodes.All() { - tNode, err := tailNode( - node, - capVer, - checker, - primaryRouteFunc, - cfg, - ) - if err != nil { - return nil, err - } - - tNodes = append(tNodes, tNode) - } - - return tNodes, nil -} - -// tailNode converts a Node into a Tailscale Node. -func tailNode( - node types.NodeView, - capVer tailcfg.CapabilityVersion, - checker NodeCanHaveTagChecker, - primaryRouteFunc routeFilterFunc, - cfg *types.Config, -) (*tailcfg.Node, error) { - addrs := node.Prefixes() - - var derp int - - // TODO(kradalby): legacyDERP was removed in tailscale/tailscale@2fc4455e6dd9ab7f879d4e2f7cffc2be81f14077 - // and should be removed after 111 is the minimum capver. - var legacyDERP string - if node.Hostinfo().Valid() && node.Hostinfo().NetInfo().Valid() { - legacyDERP = fmt.Sprintf("127.3.3.40:%d", node.Hostinfo().NetInfo().PreferredDERP()) - derp = node.Hostinfo().NetInfo().PreferredDERP() - } else { - legacyDERP = "127.3.3.40:0" // Zero means disconnected or unknown. 
- } - - var keyExpiry time.Time - if node.Expiry().Valid() { - keyExpiry = node.Expiry().Get() - } else { - keyExpiry = time.Time{} - } - - hostname, err := node.GetFQDN(cfg.BaseDomain) - if err != nil { - return nil, err - } - - var tags []string - for _, tag := range node.RequestTagsSlice().All() { - if checker.NodeCanHaveTag(node, tag) { - tags = append(tags, tag) - } - } - for _, tag := range node.ForcedTags().All() { - tags = append(tags, tag) - } - tags = lo.Uniq(tags) - - routes := primaryRouteFunc(node.ID()) - allowed := append(addrs, routes...) - allowed = append(allowed, node.ExitRoutes()...) - tsaddr.SortPrefixes(allowed) - - tNode := tailcfg.Node{ - ID: tailcfg.NodeID(node.ID()), // this is the actual ID - StableID: node.ID().StableID(), - Name: hostname, - Cap: capVer, - - User: tailcfg.UserID(node.UserID()), - - Key: node.NodeKey(), - KeyExpiry: keyExpiry.UTC(), - - Machine: node.MachineKey(), - DiscoKey: node.DiscoKey(), - Addresses: addrs, - PrimaryRoutes: routes, - AllowedIPs: allowed, - Endpoints: node.Endpoints().AsSlice(), - HomeDERP: derp, - LegacyDERPString: legacyDERP, - Hostinfo: node.Hostinfo(), - Created: node.CreatedAt().UTC(), - - Online: node.IsOnline().Clone(), - - Tags: tags, - - MachineAuthorized: !node.IsExpired(), - Expired: node.IsExpired(), - } - - tNode.CapMap = tailcfg.NodeCapMap{ - tailcfg.CapabilityFileSharing: []tailcfg.RawMessage{}, - tailcfg.CapabilityAdmin: []tailcfg.RawMessage{}, - tailcfg.CapabilitySSH: []tailcfg.RawMessage{}, - } - - if cfg.RandomizeClientPort { - tNode.CapMap[tailcfg.NodeAttrRandomizeClientPort] = []tailcfg.RawMessage{} - } - - // Set LastSeen only for offline nodes to avoid confusing Tailscale clients - // during rapid reconnection cycles. Online nodes should not have LastSeen set - // as this can make clients interpret them as "not online" despite Online=true. 
- if node.LastSeen().Valid() && node.IsOnline().Valid() && !node.IsOnline().Get() { - lastSeen := node.LastSeen().Get() - tNode.LastSeen = &lastSeen - } - - return &tNode, nil -} diff --git a/hscontrol/mapper/tail_test.go b/hscontrol/mapper/tail_test.go index 3a3b39d1..5b7030de 100644 --- a/hscontrol/mapper/tail_test.go +++ b/hscontrol/mapper/tail_test.go @@ -8,13 +8,12 @@ import ( "github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp/cmpopts" - "github.com/juanfont/headscale/hscontrol/policy" "github.com/juanfont/headscale/hscontrol/routes" "github.com/juanfont/headscale/hscontrol/types" - "github.com/stretchr/testify/require" "tailscale.com/net/tsaddr" "tailscale.com/tailcfg" "tailscale.com/types/key" + "tailscale.com/types/ptr" ) func TestTailNode(t *testing.T) { @@ -70,7 +69,6 @@ func TestTailNode(t *testing.T) { HomeDERP: 0, LegacyDERPString: "127.3.3.40:0", Hostinfo: hiview(tailcfg.Hostinfo{}), - Tags: []string{}, MachineAuthorized: true, CapMap: tailcfg.NodeCapMap{ @@ -97,14 +95,14 @@ func TestTailNode(t *testing.T) { IPv4: iap("100.64.0.1"), Hostname: "mini", GivenName: "mini", - UserID: 0, - User: types.User{ + UserID: ptr.To(uint(0)), + User: &types.User{ Name: "mini", }, - ForcedTags: []string{}, - AuthKey: &types.PreAuthKey{}, - LastSeen: &lastSeen, - Expiry: &expire, + Tags: []string{}, + AuthKey: &types.PreAuthKey{}, + LastSeen: &lastSeen, + Expiry: &expire, Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: []netip.Prefix{ tsaddr.AllIPv4(), @@ -185,7 +183,6 @@ func TestTailNode(t *testing.T) { HomeDERP: 0, LegacyDERPString: "127.3.3.40:0", Hostinfo: hiview(tailcfg.Hostinfo{}), - Tags: []string{}, MachineAuthorized: true, CapMap: tailcfg.NodeCapMap{ @@ -203,23 +200,20 @@ func TestTailNode(t *testing.T) { for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { - polMan, err := policy.NewPolicyManager(tt.pol, []types.User{}, types.Nodes{tt.node}.ViewSlice()) - require.NoError(t, err) primary := routes.New() cfg := &types.Config{ BaseDomain: tt.baseDomain, TailcfgDNSConfig: tt.dnsConfig, RandomizeClientPort: false, + Taildrop: types.TaildropConfig{Enabled: true}, } _ = primary.SetRoutes(tt.node.ID, tt.node.SubnetRoutes()...) // This is a hack to avoid having a second node to test the primary route. // This should be baked into the test case proper if it is extended in the future. 
_ = primary.SetRoutes(2, netip.MustParsePrefix("192.168.0.0/24")) - got, err := tailNode( - tt.node.View(), + got, err := tt.node.View().TailNode( 0, - polMan, func(id types.NodeID) []netip.Prefix { return primary.PrimaryRoutes(id) }, @@ -227,13 +221,13 @@ func TestTailNode(t *testing.T) { ) if (err != nil) != tt.wantErr { - t.Errorf("tailNode() error = %v, wantErr %v", err, tt.wantErr) + t.Errorf("TailNode() error = %v, wantErr %v", err, tt.wantErr) return } if diff := cmp.Diff(tt.want, got, cmpopts.EquateEmpty()); diff != "" { - t.Errorf("tailNode() unexpected result (-want +got):\n%s", diff) + t.Errorf("TailNode() unexpected result (-want +got):\n%s", diff) } }) } @@ -273,17 +267,13 @@ func TestNodeExpiry(t *testing.T) { GivenName: "test", Expiry: tt.exp, } - polMan, err := policy.NewPolicyManager(nil, nil, types.Nodes{}.ViewSlice()) - require.NoError(t, err) - tn, err := tailNode( - node.View(), + tn, err := node.View().TailNode( 0, - polMan, func(id types.NodeID) []netip.Prefix { return []netip.Prefix{} }, - &types.Config{}, + &types.Config{Taildrop: types.TaildropConfig{Enabled: true}}, ) if err != nil { t.Fatalf("nodeExpiry() error = %v", err) diff --git a/hscontrol/mapper/utils.go b/hscontrol/mapper/utils.go deleted file mode 100644 index c1dce1f7..00000000 --- a/hscontrol/mapper/utils.go +++ /dev/null @@ -1,47 +0,0 @@ -package mapper - -import "tailscale.com/tailcfg" - -// mergePatch takes the current patch and a newer patch -// and override any field that has changed. -func mergePatch(currPatch, newPatch *tailcfg.PeerChange) { - if newPatch.DERPRegion != 0 { - currPatch.DERPRegion = newPatch.DERPRegion - } - - if newPatch.Cap != 0 { - currPatch.Cap = newPatch.Cap - } - - if newPatch.CapMap != nil { - currPatch.CapMap = newPatch.CapMap - } - - if newPatch.Endpoints != nil { - currPatch.Endpoints = newPatch.Endpoints - } - - if newPatch.Key != nil { - currPatch.Key = newPatch.Key - } - - if newPatch.KeySignature != nil { - currPatch.KeySignature = newPatch.KeySignature - } - - if newPatch.DiscoKey != nil { - currPatch.DiscoKey = newPatch.DiscoKey - } - - if newPatch.Online != nil { - currPatch.Online = newPatch.Online - } - - if newPatch.LastSeen != nil { - currPatch.LastSeen = newPatch.LastSeen - } - - if newPatch.KeyExpiry != nil { - currPatch.KeyExpiry = newPatch.KeyExpiry - } -} diff --git a/hscontrol/metrics.go b/hscontrol/metrics.go index ef427afb..749d651e 100644 --- a/hscontrol/metrics.go +++ b/hscontrol/metrics.go @@ -32,31 +32,16 @@ var ( Name: "mapresponse_sent_total", Help: "total count of mapresponses sent to clients", }, []string{"status", "type"}) - mapResponseUpdateReceived = promauto.NewCounterVec(prometheus.CounterOpts{ - Namespace: prometheusNamespace, - Name: "mapresponse_updates_received_total", - Help: "total count of mapresponse updates received on update channel", - }, []string{"type"}) mapResponseEndpointUpdates = promauto.NewCounterVec(prometheus.CounterOpts{ Namespace: prometheusNamespace, Name: "mapresponse_endpoint_updates_total", Help: "total count of endpoint updates received", }, []string{"status"}) - mapResponseReadOnly = promauto.NewCounterVec(prometheus.CounterOpts{ - Namespace: prometheusNamespace, - Name: "mapresponse_readonly_requests_total", - Help: "total count of readonly requests received", - }, []string{"status"}) mapResponseEnded = promauto.NewCounterVec(prometheus.CounterOpts{ Namespace: prometheusNamespace, Name: "mapresponse_ended_total", Help: "total count of new mapsessions ended", }, []string{"reason"}) - mapResponseClosed = 
promauto.NewCounterVec(prometheus.CounterOpts{ - Namespace: prometheusNamespace, - Name: "mapresponse_closed_total", - Help: "total count of calls to mapresponse close", - }, []string{"return"}) httpDuration = promauto.NewHistogramVec(prometheus.HistogramOpts{ Namespace: prometheusNamespace, Name: "http_duration_seconds", diff --git a/hscontrol/noise.go b/hscontrol/noise.go index fa5eb1dd..a667cd1f 100644 --- a/hscontrol/noise.go +++ b/hscontrol/noise.go @@ -29,9 +29,6 @@ const ( // of length. Then that many bytes of JSON-encoded tailcfg.EarlyNoise. // The early payload is optional. Some servers may not send it... But we do! earlyPayloadMagic = "\xff\xff\xffTS" - - // EarlyNoise was added in protocol version 49. - earlyNoiseCapabilityVersion = 49 ) type noiseServer struct { diff --git a/hscontrol/oidc.go b/hscontrol/oidc.go index 7c7895c6..7013b8ed 100644 --- a/hscontrol/oidc.go +++ b/hscontrol/oidc.go @@ -4,10 +4,8 @@ import ( "bytes" "cmp" "context" - _ "embed" "errors" "fmt" - "html/template" "net/http" "slices" "strings" @@ -16,6 +14,7 @@ import ( "github.com/coreos/go-oidc/v3/oidc" "github.com/gorilla/mux" "github.com/juanfont/headscale/hscontrol/db" + "github.com/juanfont/headscale/hscontrol/templates" "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/types/change" "github.com/juanfont/headscale/hscontrol/util" @@ -42,10 +41,7 @@ var ( errOIDCAllowedUsers = errors.New( "authenticated principal does not match any allowed user", ) - errOIDCInvalidNodeState = errors.New( - "requested node state key expired before authorisation completed", - ) - errOIDCNodeKeyMissing = errors.New("could not get node key from cache") + errOIDCUnverifiedEmail = errors.New("authenticated principal has an unverified email") ) // RegistrationInfo contains both machine key and verifier information for OIDC validation. @@ -108,16 +104,8 @@ func (a *AuthProviderOIDC) AuthURL(registrationID types.RegistrationID) string { registrationID.String()) } -func (a *AuthProviderOIDC) determineNodeExpiry(idTokenExpiration time.Time) time.Time { - if a.cfg.UseExpiryFromToken { - return idTokenExpiration - } - - return time.Now().Add(a.cfg.Expiry) -} - -// RegisterOIDC redirects to the OIDC provider for authentication -// Puts NodeKey in cache so the callback can retrieve it using the oidc state param +// RegisterHandler registers the OIDC callback handler with the given router. +// It puts NodeKey in cache so the callback can retrieve it using the oidc state param. // Listens in /register/:registration_id. func (a *AuthProviderOIDC) RegisterHandler( writer http.ResponseWriter, @@ -186,18 +174,6 @@ func (a *AuthProviderOIDC) RegisterHandler( http.Redirect(writer, req, authURL, http.StatusFound) } -type oidcCallbackTemplateConfig struct { - User string - Verb string -} - -//go:embed assets/oidc_callback_template.html -var oidcCallbackTemplateContent string - -var oidcCallbackTemplate = template.Must( - template.New("oidccallback").Parse(oidcCallbackTemplateContent), -) - // OIDCCallbackHandler handles the callback from the OIDC endpoint // Retrieves the nkey from the state cache and adds the node to the users email user // TODO: A confirmation page for new nodes should be added to avoid phishing vulnerabilities @@ -289,22 +265,13 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler( // The user claims are now updated from the userinfo endpoint so we can verify the user // against allowed emails, email domains, and groups. 
- if err := validateOIDCAllowedDomains(a.cfg.AllowedDomains, &claims); err != nil { + err = doOIDCAuthorization(a.cfg, &claims) + if err != nil { httpError(writer, err) return } - if err := validateOIDCAllowedGroups(a.cfg.AllowedGroups, &claims); err != nil { - httpError(writer, err) - return - } - - if err := validateOIDCAllowedUsers(a.cfg.AllowedUsers, &claims); err != nil { - httpError(writer, err) - return - } - - user, c, err := a.createOrUpdateUserFromClaim(&claims) + user, _, err := a.createOrUpdateUserFromClaim(&claims) if err != nil { log.Error(). Err(err). @@ -323,9 +290,6 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler( return } - // Send policy update notifications if needed - a.h.Change(c) - // TODO(kradalby): Is this comment right? // If the node exists, then the node should be reauthenticated, // if the node does not exist, and the machine key exists, then @@ -372,6 +336,14 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler( httpError(writer, NewHTTPError(http.StatusGone, "login session expired, try again", nil)) } +func (a *AuthProviderOIDC) determineNodeExpiry(idTokenExpiration time.Time) time.Time { + if a.cfg.UseExpiryFromToken { + return idTokenExpiration + } + + return time.Now().Add(a.cfg.Expiry) +} + func extractCodeAndStateParamFromRequest( req *http.Request, ) (string, string, error) { @@ -454,17 +426,13 @@ func validateOIDCAllowedGroups( allowedGroups []string, claims *types.OIDCClaims, ) error { - if len(allowedGroups) > 0 { - for _, group := range allowedGroups { - if slices.Contains(claims.Groups, group) { - return nil - } + for _, group := range allowedGroups { + if slices.Contains(claims.Groups, group) { + return nil } - - return NewHTTPError(http.StatusUnauthorized, "unauthorised group", errOIDCAllowedGroups) } - return nil + return NewHTTPError(http.StatusUnauthorized, "unauthorised group", errOIDCAllowedGroups) } // validateOIDCAllowedUsers checks that if AllowedUsers is provided, @@ -473,14 +441,62 @@ func validateOIDCAllowedUsers( allowedUsers []string, claims *types.OIDCClaims, ) error { - if len(allowedUsers) > 0 && - !slices.Contains(allowedUsers, claims.Email) { + if !slices.Contains(allowedUsers, claims.Email) { return NewHTTPError(http.StatusUnauthorized, "unauthorised user", errOIDCAllowedUsers) } return nil } +// doOIDCAuthorization applies authorization tests to claims. +// +// The following tests are always applied: +// +// - validateOIDCAllowedGroups +// +// The following tests are applied if cfg.EmailVerifiedRequired=false +// or claims.email_verified=true: +// +// - validateOIDCAllowedDomains +// - validateOIDCAllowedUsers +// +// NOTE that, contrary to the function name, validateOIDCAllowedUsers +// only checks the email address -- not the username. 
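The ordering documented above boils down to: group membership is always checked, while email-based checks (allowed domains, allowed users) only run when the email can be trusted, i.e. email verification is not required or the provider reports a verified email. A standalone sketch of that gate using plain local types (the real implementation on `types.OIDCConfig` and `types.OIDCClaims` follows below; the struct and field names here are placeholders):

```go
package main

import (
	"errors"
	"fmt"
)

// emailGate captures only the verified-email behaviour added here; the group
// check and the actual domain/user matching are elided.
type emailGate struct {
	allowedDomains, allowedUsers []string
	emailVerifiedRequired        bool
}

func (g emailGate) check(emailVerified bool) error {
	trustEmail := !g.emailVerifiedRequired || emailVerified
	hasEmailTests := len(g.allowedDomains) > 0 || len(g.allowedUsers) > 0

	if hasEmailTests && !trustEmail {
		return errors.New("unverified email")
	}

	// Domain and user checks would run here, as in doOIDCAuthorization below.
	return nil
}

func main() {
	g := emailGate{allowedUsers: []string{"alice@example.com"}, emailVerifiedRequired: true}
	fmt.Println(g.check(false)) // unverified email -> rejected before any email test
	fmt.Println(g.check(true))  // verified email   -> proceeds to email tests
}
```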
+func doOIDCAuthorization( + cfg *types.OIDCConfig, + claims *types.OIDCClaims, +) error { + if len(cfg.AllowedGroups) > 0 { + err := validateOIDCAllowedGroups(cfg.AllowedGroups, claims) + if err != nil { + return err + } + } + + trustEmail := !cfg.EmailVerifiedRequired || bool(claims.EmailVerified) + + hasEmailTests := len(cfg.AllowedDomains) > 0 || len(cfg.AllowedUsers) > 0 + if !trustEmail && hasEmailTests { + return NewHTTPError(http.StatusUnauthorized, "unverified email", errOIDCUnverifiedEmail) + } + + if len(cfg.AllowedDomains) > 0 { + err := validateOIDCAllowedDomains(cfg.AllowedDomains, claims) + if err != nil { + return err + } + } + + if len(cfg.AllowedUsers) > 0 { + err := validateOIDCAllowedUsers(cfg.AllowedUsers, claims) + if err != nil { + return err + } + } + + return nil +} + // getRegistrationIDFromState retrieves the registration ID from the state. func (a *AuthProviderOIDC) getRegistrationIDFromState(state string) *types.RegistrationID { regInfo, ok := a.registrationCache.Get(state) @@ -493,30 +509,32 @@ func (a *AuthProviderOIDC) getRegistrationIDFromState(state string) *types.Regis func (a *AuthProviderOIDC) createOrUpdateUserFromClaim( claims *types.OIDCClaims, -) (*types.User, change.ChangeSet, error) { - var user *types.User - var err error - var newUser bool - var c change.ChangeSet +) (*types.User, change.Change, error) { + var ( + user *types.User + err error + newUser bool + c change.Change + ) user, err = a.h.state.GetUserByOIDCIdentifier(claims.Identifier()) if err != nil && !errors.Is(err, db.ErrUserNotFound) { - return nil, change.EmptySet, fmt.Errorf("creating or updating user: %w", err) + return nil, change.Change{}, fmt.Errorf("creating or updating user: %w", err) } // if the user is still not found, create a new empty user. - // TODO(kradalby): This might cause us to not have an ID below which - // is a problem. + // TODO(kradalby): This context is not inherited from the request, which is probably not ideal. + // However, we need a context to use the OIDC provider. if user == nil { newUser = true user = &types.User{} } - user.FromClaim(claims) + user.FromClaim(claims, a.cfg.EmailVerifiedRequired) if newUser { user, c, err = a.h.state.CreateUser(*user) if err != nil { - return nil, change.EmptySet, fmt.Errorf("creating user: %w", err) + return nil, change.Change{}, fmt.Errorf("creating user: %w", err) } } else { _, c, err = a.h.state.UpdateUser(types.UserID(user.ID), func(u *types.User) error { @@ -524,7 +542,7 @@ func (a *AuthProviderOIDC) createOrUpdateUserFromClaim( return nil }) if err != nil { - return nil, change.EmptySet, fmt.Errorf("updating user: %w", err) + return nil, change.Change{}, fmt.Errorf("updating user: %w", err) } } @@ -557,37 +575,23 @@ func (a *AuthProviderOIDC) handleRegistration( // ensure we send an update. // This works, but might be another good candidate for doing some sort of // eventbus. - _ = a.h.state.AutoApproveRoutes(node) - _, policyChange, err := a.h.state.SaveNode(node) + routesChange, err := a.h.state.AutoApproveRoutes(node) if err != nil { - return false, fmt.Errorf("saving auto approved routes to node: %w", err) + return false, fmt.Errorf("auto approving routes: %w", err) } - // Policy updates are full and take precedence over node changes. - if !policyChange.Empty() { - a.h.Change(policyChange) - } else { - a.h.Change(nodeChange) - } + // Send both changes. Empty changes are ignored by Change(). 
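The comment here ("Empty changes are ignored by Change()") is what makes it safe to pass every change unconditionally in the call that follows. A small sketch of that dispatch behaviour under the assumption of a variadic `Change(...)` that skips empty values, as the call sites in this diff suggest; `changeSpec` and `dispatch` are placeholders, not headscale's actual types:

```go
package main

import "fmt"

// changeSpec stands in for types/change.Change; only IsEmpty matters here.
type changeSpec struct{ reason string }

func (c changeSpec) IsEmpty() bool { return c.reason == "" }

// dispatch shows why callers can hand over every change unconditionally,
// as in a.h.Change(nodeChange, routesChange): empty values are skipped.
func dispatch(changes ...changeSpec) {
	for _, c := range changes {
		if c.IsEmpty() {
			continue // nothing to fan out for this change
		}
		fmt.Println("fan out:", c.reason)
	}
}

func main() {
	dispatch(changeSpec{reason: "node-registered"}, changeSpec{}) // empty change is dropped
}
```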
+ a.h.Change(nodeChange, routesChange) - return !nodeChange.Empty(), nil + return !nodeChange.IsEmpty(), nil } -// TODO(kradalby): -// Rewrite in elem-go. func renderOIDCCallbackTemplate( user *types.User, verb string, ) (*bytes.Buffer, error) { - var content bytes.Buffer - if err := oidcCallbackTemplate.Execute(&content, oidcCallbackTemplateConfig{ - User: user.Display(), - Verb: verb, - }); err != nil { - return nil, fmt.Errorf("rendering OIDC callback template: %w", err) - } - - return &content, nil + html := templates.OIDCCallback(user.Display(), verb).Render() + return bytes.NewBufferString(html), nil } // getCookieName generates a unique cookie name based on a cookie value. diff --git a/hscontrol/oidc_template_test.go b/hscontrol/oidc_template_test.go new file mode 100644 index 00000000..367451b1 --- /dev/null +++ b/hscontrol/oidc_template_test.go @@ -0,0 +1,51 @@ +package hscontrol + +import ( + "testing" + + "github.com/juanfont/headscale/hscontrol/templates" + "github.com/stretchr/testify/assert" +) + +func TestOIDCCallbackTemplate(t *testing.T) { + tests := []struct { + name string + userName string + verb string + }{ + { + name: "logged_in_user", + userName: "test@example.com", + verb: "Logged in", + }, + { + name: "registered_user", + userName: "newuser@example.com", + verb: "Registered", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + // Render using the elem-go template + html := templates.OIDCCallback(tt.userName, tt.verb).Render() + + // Verify the HTML contains expected elements + assert.Contains(t, html, "") + assert.Contains(t, html, "Headscale Authentication Succeeded") + assert.Contains(t, html, tt.verb) + assert.Contains(t, html, tt.userName) + assert.Contains(t, html, "You can now close this window") + + // Verify Material for MkDocs design system CSS is present + assert.Contains(t, html, "Material for MkDocs") + assert.Contains(t, html, "Roboto") + assert.Contains(t, html, ".md-typeset") + + // Verify SVG elements are present + assert.Contains(t, html, " want=%v | got=%v", tC.name, tC.wantErr, err) + } + }) + } +} diff --git a/hscontrol/policy/pm.go b/hscontrol/policy/pm.go index 910eb4a2..f4db88a4 100644 --- a/hscontrol/policy/pm.go +++ b/hscontrol/policy/pm.go @@ -26,6 +26,9 @@ type PolicyManager interface { // NodeCanHaveTag reports whether the given node can have the given tag. NodeCanHaveTag(types.NodeView, string) bool + // TagExists reports whether the given tag is defined in the policy. + TagExists(tag string) bool + // NodeCanApproveRoute reports whether the given node can approve the given route. 
NodeCanApproveRoute(types.NodeView, netip.Prefix) bool diff --git a/hscontrol/policy/policy_autoapprove_test.go b/hscontrol/policy/policy_autoapprove_test.go index 6c0908b9..61c69067 100644 --- a/hscontrol/policy/policy_autoapprove_test.go +++ b/hscontrol/policy/policy_autoapprove_test.go @@ -32,11 +32,11 @@ func TestApproveRoutesWithPolicy_NeverRemovesApprovedRoutes(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "test-node", - UserID: user1.ID, - User: user1, + UserID: ptr.To(user1.ID), + User: ptr.To(user1), RegisterMethod: util.RegisterMethodAuthKey, IPv4: ptr.To(netip.MustParseAddr("100.64.0.1")), - ForcedTags: []string{"tag:test"}, + Tags: []string{"tag:test"}, } node2 := &types.Node{ @@ -44,8 +44,8 @@ func TestApproveRoutesWithPolicy_NeverRemovesApprovedRoutes(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "other-node", - UserID: user2.ID, - User: user2, + UserID: ptr.To(user2.ID), + User: ptr.To(user2), RegisterMethod: util.RegisterMethodAuthKey, IPv4: ptr.To(netip.MustParseAddr("100.64.0.2")), } @@ -304,8 +304,8 @@ func TestApproveRoutesWithPolicy_NilAndEmptyCases(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "testnode", - UserID: user.ID, - User: user, + UserID: ptr.To(user.ID), + User: ptr.To(user), RegisterMethod: util.RegisterMethodAuthKey, IPv4: ptr.To(netip.MustParseAddr("100.64.0.1")), ApprovedRoutes: tt.currentApproved, diff --git a/hscontrol/policy/policy_route_approval_test.go b/hscontrol/policy/policy_route_approval_test.go index 610ce7b1..70aa6a21 100644 --- a/hscontrol/policy/policy_route_approval_test.go +++ b/hscontrol/policy/policy_route_approval_test.go @@ -168,15 +168,15 @@ func TestApproveRoutesWithPolicy_NeverRemovesRoutes(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: tt.nodeHostname, - UserID: user.ID, - User: user, + UserID: ptr.To(user.ID), + User: ptr.To(user), RegisterMethod: util.RegisterMethodAuthKey, Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: tt.announcedRoutes, }, IPv4: ptr.To(netip.MustParseAddr("100.64.0.1")), ApprovedRoutes: tt.currentApproved, - ForcedTags: tt.nodeTags, + Tags: tt.nodeTags, } nodes := types.Nodes{&node} @@ -294,8 +294,8 @@ func TestApproveRoutesWithPolicy_EdgeCases(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "testnode", - UserID: user.ID, - User: user, + UserID: ptr.To(user.ID), + User: ptr.To(user), RegisterMethod: util.RegisterMethodAuthKey, Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: tt.announcedRoutes, @@ -343,8 +343,8 @@ func TestApproveRoutesWithPolicy_NilPolicyManagerCase(t *testing.T) { MachineKey: key.NewMachine().Public(), NodeKey: key.NewNode().Public(), Hostname: "testnode", - UserID: user.ID, - User: user, + UserID: ptr.To(user.ID), + User: ptr.To(user), RegisterMethod: util.RegisterMethodAuthKey, Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: announcedRoutes, diff --git a/hscontrol/policy/policy_test.go b/hscontrol/policy/policy_test.go index 10f6bf0a..da212605 100644 --- a/hscontrol/policy/policy_test.go +++ b/hscontrol/policy/policy_test.go @@ -14,6 +14,7 @@ import ( "github.com/stretchr/testify/require" "gorm.io/gorm" "tailscale.com/tailcfg" + "tailscale.com/types/ptr" ) var ap = func(ipStr string) *netip.Addr { @@ -44,17 +45,17 @@ func TestReduceNodes(t *testing.T) { &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "joe"}, + User: &types.User{Name: 
"joe"}, }, &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, &types.Node{ ID: 3, IPv4: ap("100.64.0.3"), - User: types.User{Name: "mickael"}, + User: &types.User{Name: "mickael"}, }, }, rules: []tailcfg.FilterRule{ @@ -68,19 +69,19 @@ func TestReduceNodes(t *testing.T) { node: &types.Node{ // current nodes ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "joe"}, + User: &types.User{Name: "joe"}, }, }, want: types.Nodes{ &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, &types.Node{ ID: 3, IPv4: ap("100.64.0.3"), - User: types.User{Name: "mickael"}, + User: &types.User{Name: "mickael"}, }, }, }, @@ -91,17 +92,17 @@ func TestReduceNodes(t *testing.T) { &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "joe"}, + User: &types.User{Name: "joe"}, }, &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, &types.Node{ ID: 3, IPv4: ap("100.64.0.3"), - User: types.User{Name: "mickael"}, + User: &types.User{Name: "mickael"}, }, }, rules: []tailcfg.FilterRule{ // list of all ACLRules registered @@ -115,14 +116,14 @@ func TestReduceNodes(t *testing.T) { node: &types.Node{ // current nodes ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "joe"}, + User: &types.User{Name: "joe"}, }, }, want: types.Nodes{ &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, }, }, @@ -133,17 +134,17 @@ func TestReduceNodes(t *testing.T) { &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "joe"}, + User: &types.User{Name: "joe"}, }, &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, &types.Node{ ID: 3, IPv4: ap("100.64.0.3"), - User: types.User{Name: "mickael"}, + User: &types.User{Name: "mickael"}, }, }, rules: []tailcfg.FilterRule{ // list of all ACLRules registered @@ -157,14 +158,14 @@ func TestReduceNodes(t *testing.T) { node: &types.Node{ // current nodes ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, }, want: types.Nodes{ &types.Node{ ID: 3, IPv4: ap("100.64.0.3"), - User: types.User{Name: "mickael"}, + User: &types.User{Name: "mickael"}, }, }, }, @@ -175,17 +176,17 @@ func TestReduceNodes(t *testing.T) { &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "joe"}, + User: &types.User{Name: "joe"}, }, &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, &types.Node{ ID: 3, IPv4: ap("100.64.0.3"), - User: types.User{Name: "mickael"}, + User: &types.User{Name: "mickael"}, }, }, rules: []tailcfg.FilterRule{ // list of all ACLRules registered @@ -199,14 +200,14 @@ func TestReduceNodes(t *testing.T) { node: &types.Node{ // current nodes ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "joe"}, + User: &types.User{Name: "joe"}, }, }, want: types.Nodes{ &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, }, }, @@ -217,17 +218,17 @@ func TestReduceNodes(t *testing.T) { &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "joe"}, + User: &types.User{Name: "joe"}, }, &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, &types.Node{ ID: 3, IPv4: ap("100.64.0.3"), - User: 
types.User{Name: "mickael"}, + User: &types.User{Name: "mickael"}, }, }, rules: []tailcfg.FilterRule{ // list of all ACLRules registered @@ -241,19 +242,19 @@ func TestReduceNodes(t *testing.T) { node: &types.Node{ // current nodes ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, }, want: types.Nodes{ &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "joe"}, + User: &types.User{Name: "joe"}, }, &types.Node{ ID: 3, IPv4: ap("100.64.0.3"), - User: types.User{Name: "mickael"}, + User: &types.User{Name: "mickael"}, }, }, }, @@ -264,17 +265,17 @@ func TestReduceNodes(t *testing.T) { &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "joe"}, + User: &types.User{Name: "joe"}, }, &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, &types.Node{ ID: 3, IPv4: ap("100.64.0.3"), - User: types.User{Name: "mickael"}, + User: &types.User{Name: "mickael"}, }, }, rules: []tailcfg.FilterRule{ // list of all ACLRules registered @@ -288,19 +289,19 @@ func TestReduceNodes(t *testing.T) { node: &types.Node{ // current nodes ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, }, want: types.Nodes{ &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "joe"}, + User: &types.User{Name: "joe"}, }, &types.Node{ ID: 3, IPv4: ap("100.64.0.3"), - User: types.User{Name: "mickael"}, + User: &types.User{Name: "mickael"}, }, }, }, @@ -311,17 +312,17 @@ func TestReduceNodes(t *testing.T) { &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "joe"}, + User: &types.User{Name: "joe"}, }, &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, &types.Node{ ID: 3, IPv4: ap("100.64.0.3"), - User: types.User{Name: "mickael"}, + User: &types.User{Name: "mickael"}, }, }, rules: []tailcfg.FilterRule{ // list of all ACLRules registered @@ -329,7 +330,7 @@ func TestReduceNodes(t *testing.T) { node: &types.Node{ // current nodes ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "marc"}, + User: &types.User{Name: "marc"}, }, }, want: nil, @@ -347,28 +348,28 @@ func TestReduceNodes(t *testing.T) { Hostname: "ts-head-upcrmb", IPv4: ap("100.64.0.3"), IPv6: ap("fd7a:115c:a1e0::3"), - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, &types.Node{ ID: 2, Hostname: "ts-unstable-rlwpvr", IPv4: ap("100.64.0.4"), IPv6: ap("fd7a:115c:a1e0::4"), - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, &types.Node{ ID: 3, Hostname: "ts-head-8w6paa", IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0::1"), - User: types.User{Name: "user2"}, + User: &types.User{Name: "user2"}, }, &types.Node{ ID: 4, Hostname: "ts-unstable-lys2ib", IPv4: ap("100.64.0.2"), IPv6: ap("fd7a:115c:a1e0::2"), - User: types.User{Name: "user2"}, + User: &types.User{Name: "user2"}, }, }, rules: []tailcfg.FilterRule{ // list of all ACLRules registered @@ -390,7 +391,7 @@ func TestReduceNodes(t *testing.T) { Hostname: "ts-head-8w6paa", IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0::1"), - User: types.User{Name: "user2"}, + User: &types.User{Name: "user2"}, }, }, want: types.Nodes{ @@ -399,14 +400,14 @@ func TestReduceNodes(t *testing.T) { Hostname: "ts-head-upcrmb", IPv4: ap("100.64.0.3"), IPv6: ap("fd7a:115c:a1e0::3"), - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, &types.Node{ ID: 2, Hostname: "ts-unstable-rlwpvr", 
IPv4: ap("100.64.0.4"), IPv6: ap("fd7a:115c:a1e0::4"), - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, }, }, @@ -418,13 +419,13 @@ func TestReduceNodes(t *testing.T) { ID: 1, IPv4: ap("100.64.0.2"), Hostname: "peer1", - User: types.User{Name: "mini"}, + User: &types.User{Name: "mini"}, }, { ID: 2, IPv4: ap("100.64.0.3"), Hostname: "peer2", - User: types.User{Name: "peer2"}, + User: &types.User{Name: "peer2"}, }, }, rules: []tailcfg.FilterRule{ @@ -440,7 +441,7 @@ func TestReduceNodes(t *testing.T) { ID: 0, IPv4: ap("100.64.0.1"), Hostname: "mini", - User: types.User{Name: "mini"}, + User: &types.User{Name: "mini"}, }, }, want: []*types.Node{ @@ -448,7 +449,7 @@ func TestReduceNodes(t *testing.T) { ID: 2, IPv4: ap("100.64.0.3"), Hostname: "peer2", - User: types.User{Name: "peer2"}, + User: &types.User{Name: "peer2"}, }, }, }, @@ -460,19 +461,19 @@ func TestReduceNodes(t *testing.T) { ID: 1, IPv4: ap("100.64.0.2"), Hostname: "user1-2", - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, { ID: 0, IPv4: ap("100.64.0.1"), Hostname: "user1-1", - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, { ID: 3, IPv4: ap("100.64.0.4"), Hostname: "user2-2", - User: types.User{Name: "user2"}, + User: &types.User{Name: "user2"}, }, }, rules: []tailcfg.FilterRule{ @@ -509,7 +510,7 @@ func TestReduceNodes(t *testing.T) { ID: 2, IPv4: ap("100.64.0.3"), Hostname: "user-2-1", - User: types.User{Name: "user2"}, + User: &types.User{Name: "user2"}, }, }, want: []*types.Node{ @@ -517,19 +518,19 @@ func TestReduceNodes(t *testing.T) { ID: 1, IPv4: ap("100.64.0.2"), Hostname: "user1-2", - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, { ID: 0, IPv4: ap("100.64.0.1"), Hostname: "user1-1", - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, { ID: 3, IPv4: ap("100.64.0.4"), Hostname: "user2-2", - User: types.User{Name: "user2"}, + User: &types.User{Name: "user2"}, }, }, }, @@ -541,19 +542,19 @@ func TestReduceNodes(t *testing.T) { ID: 1, IPv4: ap("100.64.0.2"), Hostname: "user1-2", - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, { ID: 2, IPv4: ap("100.64.0.3"), Hostname: "user-2-1", - User: types.User{Name: "user2"}, + User: &types.User{Name: "user2"}, }, { ID: 3, IPv4: ap("100.64.0.4"), Hostname: "user2-2", - User: types.User{Name: "user2"}, + User: &types.User{Name: "user2"}, }, }, rules: []tailcfg.FilterRule{ @@ -590,7 +591,7 @@ func TestReduceNodes(t *testing.T) { ID: 0, IPv4: ap("100.64.0.1"), Hostname: "user1-1", - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, }, want: []*types.Node{ @@ -598,19 +599,19 @@ func TestReduceNodes(t *testing.T) { ID: 1, IPv4: ap("100.64.0.2"), Hostname: "user1-2", - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, { ID: 2, IPv4: ap("100.64.0.3"), Hostname: "user-2-1", - User: types.User{Name: "user2"}, + User: &types.User{Name: "user2"}, }, { ID: 3, IPv4: ap("100.64.0.4"), Hostname: "user2-2", - User: types.User{Name: "user2"}, + User: &types.User{Name: "user2"}, }, }, }, @@ -622,13 +623,13 @@ func TestReduceNodes(t *testing.T) { ID: 1, IPv4: ap("100.64.0.1"), Hostname: "user1", - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, { ID: 2, IPv4: ap("100.64.0.2"), Hostname: "router", - User: types.User{Name: "router"}, + User: &types.User{Name: "router"}, Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: []netip.Prefix{netip.MustParsePrefix("10.33.0.0/16")}, }, @@ -649,7 +650,7 @@ func 
TestReduceNodes(t *testing.T) { ID: 1, IPv4: ap("100.64.0.1"), Hostname: "user1", - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, }, want: []*types.Node{ @@ -657,7 +658,7 @@ func TestReduceNodes(t *testing.T) { ID: 2, IPv4: ap("100.64.0.2"), Hostname: "router", - User: types.User{Name: "router"}, + User: &types.User{Name: "router"}, Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: []netip.Prefix{netip.MustParsePrefix("10.33.0.0/16")}, }, @@ -673,7 +674,7 @@ func TestReduceNodes(t *testing.T) { ID: 1, IPv4: ap("100.64.0.1"), Hostname: "router", - User: types.User{Name: "router"}, + User: &types.User{Name: "router"}, Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: []netip.Prefix{netip.MustParsePrefix("10.99.0.0/16")}, }, @@ -683,7 +684,7 @@ func TestReduceNodes(t *testing.T) { ID: 2, IPv4: ap("100.64.0.2"), Hostname: "node", - User: types.User{Name: "node"}, + User: &types.User{Name: "node"}, }, }, rules: []tailcfg.FilterRule{ @@ -700,7 +701,7 @@ func TestReduceNodes(t *testing.T) { ID: 1, IPv4: ap("100.64.0.1"), Hostname: "router", - User: types.User{Name: "router"}, + User: &types.User{Name: "router"}, Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: []netip.Prefix{netip.MustParsePrefix("10.99.0.0/16")}, }, @@ -712,7 +713,7 @@ func TestReduceNodes(t *testing.T) { ID: 2, IPv4: ap("100.64.0.2"), Hostname: "node", - User: types.User{Name: "node"}, + User: &types.User{Name: "node"}, }, }, }, @@ -724,7 +725,7 @@ func TestReduceNodes(t *testing.T) { ID: 1, IPv4: ap("100.64.0.1"), Hostname: "router", - User: types.User{Name: "router"}, + User: &types.User{Name: "router"}, Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: []netip.Prefix{netip.MustParsePrefix("10.99.0.0/16")}, }, @@ -734,7 +735,7 @@ func TestReduceNodes(t *testing.T) { ID: 2, IPv4: ap("100.64.0.2"), Hostname: "node", - User: types.User{Name: "node"}, + User: &types.User{Name: "node"}, }, }, rules: []tailcfg.FilterRule{ @@ -751,7 +752,7 @@ func TestReduceNodes(t *testing.T) { ID: 2, IPv4: ap("100.64.0.2"), Hostname: "node", - User: types.User{Name: "node"}, + User: &types.User{Name: "node"}, }, }, want: []*types.Node{ @@ -759,7 +760,7 @@ func TestReduceNodes(t *testing.T) { ID: 1, IPv4: ap("100.64.0.1"), Hostname: "router", - User: types.User{Name: "router"}, + User: &types.User{Name: "router"}, Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: []netip.Prefix{netip.MustParsePrefix("10.99.0.0/16")}, }, @@ -804,7 +805,7 @@ func TestReduceNodesFromPolicy(t *testing.T) { ID: id, IPv4: ap(ip), Hostname: hostname, - User: types.User{Name: username}, + User: &types.User{Name: username}, Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: routes, }, @@ -812,8 +813,6 @@ func TestReduceNodesFromPolicy(t *testing.T) { } } - type args struct { - } tests := []struct { name string nodes types.Nodes @@ -1075,22 +1074,31 @@ func TestSSHPolicyRules(t *testing.T) { nodeUser1 := types.Node{ Hostname: "user1-device", IPv4: ap("100.64.0.1"), - UserID: 1, - User: users[0], + UserID: ptr.To(uint(1)), + User: ptr.To(users[0]), } nodeUser2 := types.Node{ Hostname: "user2-device", IPv4: ap("100.64.0.2"), - UserID: 2, - User: users[1], + UserID: ptr.To(uint(2)), + User: ptr.To(users[1]), } taggedClient := types.Node{ - Hostname: "tagged-client", - IPv4: ap("100.64.0.4"), - UserID: 2, - User: users[1], - ForcedTags: []string{"tag:client"}, + Hostname: "tagged-client", + IPv4: ap("100.64.0.4"), + UserID: ptr.To(uint(2)), + User: ptr.To(users[1]), + Tags: []string{"tag:client"}, + } + + // Create a tagged server node for valid SSH patterns + nodeTaggedServer := types.Node{ + 
Hostname: "tagged-server", + IPv4: ap("100.64.0.5"), + UserID: ptr.To(uint(1)), + User: ptr.To(users[0]), + Tags: []string{"tag:server"}, } tests := []struct { @@ -1103,10 +1111,13 @@ func TestSSHPolicyRules(t *testing.T) { errorMessage string }{ { - name: "group-to-user", - targetNode: nodeUser1, + name: "group-to-tag", + targetNode: nodeTaggedServer, peers: types.Nodes{&nodeUser2}, policy: `{ + "tagOwners": { + "tag:server": ["user1@"] + }, "groups": { "group:admins": ["user2@"] }, @@ -1114,7 +1125,7 @@ func TestSSHPolicyRules(t *testing.T) { { "action": "accept", "src": ["group:admins"], - "dst": ["user1@"], + "dst": ["tag:server"], "users": ["autogroup:nonroot"] } ] @@ -1139,18 +1150,21 @@ func TestSSHPolicyRules(t *testing.T) { }, { name: "check-period-specified", - targetNode: nodeUser1, - peers: types.Nodes{&taggedClient}, + targetNode: taggedClient, + peers: types.Nodes{&nodeUser2}, policy: `{ "tagOwners": { - "tag:client": ["user1@"], + "tag:client": ["user1@"] + }, + "groups": { + "group:admins": ["user2@"] }, "ssh": [ { "action": "check", "checkPeriod": "24h", - "src": ["tag:client"], - "dst": ["user1@"], + "src": ["group:admins"], + "dst": ["tag:client"], "users": ["autogroup:nonroot"] } ] @@ -1158,7 +1172,7 @@ func TestSSHPolicyRules(t *testing.T) { wantSSH: &tailcfg.SSHPolicy{Rules: []*tailcfg.SSHRule{ { Principals: []*tailcfg.SSHPrincipal{ - {NodeIP: "100.64.0.4"}, + {NodeIP: "100.64.0.2"}, }, SSHUsers: map[string]string{ "*": "=", @@ -1177,16 +1191,19 @@ func TestSSHPolicyRules(t *testing.T) { { name: "no-matching-rules", targetNode: nodeUser2, - peers: types.Nodes{&nodeUser1}, + peers: types.Nodes{&nodeUser1, &nodeTaggedServer}, policy: `{ "tagOwners": { - "tag:client": ["user1@"], + "tag:server": ["user1@"] }, + "groups": { + "group:admins": ["user1@"] + }, "ssh": [ { "action": "accept", - "src": ["tag:client"], - "dst": ["user1@"], + "src": ["group:admins"], + "dst": ["tag:server"], "users": ["autogroup:nonroot"] } ] @@ -1195,14 +1212,20 @@ func TestSSHPolicyRules(t *testing.T) { }, { name: "invalid-action", - targetNode: nodeUser1, + targetNode: nodeTaggedServer, peers: types.Nodes{&nodeUser2}, policy: `{ + "tagOwners": { + "tag:server": ["user1@"] + }, + "groups": { + "group:admins": ["user2@"] + }, "ssh": [ { "action": "invalid", "src": ["group:admins"], - "dst": ["user1@"], + "dst": ["tag:server"], "users": ["autogroup:nonroot"] } ] @@ -1212,15 +1235,21 @@ func TestSSHPolicyRules(t *testing.T) { }, { name: "invalid-check-period", - targetNode: nodeUser1, + targetNode: nodeTaggedServer, peers: types.Nodes{&nodeUser2}, policy: `{ + "tagOwners": { + "tag:server": ["user1@"] + }, + "groups": { + "group:admins": ["user2@"] + }, "ssh": [ { "action": "check", "checkPeriod": "invalid", "src": ["group:admins"], - "dst": ["user1@"], + "dst": ["tag:server"], "users": ["autogroup:nonroot"] } ] @@ -1230,26 +1259,12 @@ func TestSSHPolicyRules(t *testing.T) { }, { name: "unsupported-autogroup", - targetNode: nodeUser1, - peers: types.Nodes{&taggedClient}, - policy: `{ - "ssh": [ - { - "action": "accept", - "src": ["tag:client"], - "dst": ["user1@"], - "users": ["autogroup:invalid"] - } - ] - }`, - expectErr: true, - errorMessage: "autogroup \"autogroup:invalid\" is not supported", - }, - { - name: "autogroup-nonroot-should-use-wildcard-with-root-excluded", - targetNode: nodeUser1, + targetNode: taggedClient, peers: types.Nodes{&nodeUser2}, policy: `{ + "tagOwners": { + "tag:client": ["user1@"] + }, "groups": { "group:admins": ["user2@"] }, @@ -1257,7 +1272,30 @@ func 
TestSSHPolicyRules(t *testing.T) { { "action": "accept", "src": ["group:admins"], - "dst": ["user1@"], + "dst": ["tag:client"], + "users": ["autogroup:invalid"] + } + ] + }`, + expectErr: true, + errorMessage: "autogroup \"autogroup:invalid\" is not supported", + }, + { + name: "autogroup-nonroot-should-use-wildcard-with-root-excluded", + targetNode: nodeTaggedServer, + peers: types.Nodes{&nodeUser2}, + policy: `{ + "tagOwners": { + "tag:server": ["user1@"] + }, + "groups": { + "group:admins": ["user2@"] + }, + "ssh": [ + { + "action": "accept", + "src": ["group:admins"], + "dst": ["tag:server"], "users": ["autogroup:nonroot"] } ] @@ -1283,9 +1321,12 @@ func TestSSHPolicyRules(t *testing.T) { }, { name: "autogroup-nonroot-plus-root-should-use-wildcard-with-root-mapped", - targetNode: nodeUser1, + targetNode: nodeTaggedServer, peers: types.Nodes{&nodeUser2}, policy: `{ + "tagOwners": { + "tag:server": ["user1@"] + }, "groups": { "group:admins": ["user2@"] }, @@ -1293,7 +1334,7 @@ func TestSSHPolicyRules(t *testing.T) { { "action": "accept", "src": ["group:admins"], - "dst": ["user1@"], + "dst": ["tag:server"], "users": ["autogroup:nonroot", "root"] } ] @@ -1319,9 +1360,12 @@ func TestSSHPolicyRules(t *testing.T) { }, { name: "specific-users-should-map-to-themselves-not-equals", - targetNode: nodeUser1, + targetNode: nodeTaggedServer, peers: types.Nodes{&nodeUser2}, policy: `{ + "tagOwners": { + "tag:server": ["user1@"] + }, "groups": { "group:admins": ["user2@"] }, @@ -1329,7 +1373,7 @@ func TestSSHPolicyRules(t *testing.T) { { "action": "accept", "src": ["group:admins"], - "dst": ["user1@"], + "dst": ["tag:server"], "users": ["ubuntu", "root"] } ] @@ -1447,7 +1491,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.0.0.0/24"), @@ -1475,7 +1519,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.0.0.0/24"), @@ -1501,7 +1545,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.0.0.0/24"), @@ -1529,7 +1573,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.0.0.0/24"), @@ -1556,7 +1600,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.0.0.0/24"), @@ -1581,7 +1625,7 @@ func TestReduceRoutes(t *testing.T) { ID: 1, IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0::1"), - User: types.User{Name: "user1"}, + User: &types.User{Name: "user1"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.0.0.0/24"), @@ -1614,7 +1658,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), // Node IP - User: types.User{Name: "node"}, + User: &types.User{Name: "node"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.10.10.0/24"), @@ -1646,7 +1690,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: 
types.User{Name: "node"}, + User: &types.User{Name: "node"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.10.10.0/24"), @@ -1673,7 +1717,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "node"}, + User: &types.User{Name: "node"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.10.10.0/24"), @@ -1701,7 +1745,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), - User: types.User{Name: "node"}, + User: &types.User{Name: "node"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.10.10.0/24"), @@ -1739,7 +1783,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), // node with IP 100.64.0.2 - User: types.User{Name: "node"}, + User: &types.User{Name: "node"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.10.10.0/24"), @@ -1774,7 +1818,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 1, IPv4: ap("100.64.0.1"), // router with IP 100.64.0.1 - User: types.User{Name: "router"}, + User: &types.User{Name: "router"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.10.10.0/24"), @@ -1816,7 +1860,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), // node - User: types.User{Name: "node"}, + User: &types.User{Name: "node"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.10.10.0/24"), @@ -1850,7 +1894,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 2, IPv4: ap("100.64.0.2"), // node - User: types.User{Name: "node"}, + User: &types.User{Name: "node"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("10.10.10.0/24"), @@ -1887,7 +1931,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 2, IPv4: ap("100.123.45.89"), // Node B - regular node - User: types.User{Name: "node-b"}, + User: &types.User{Name: "node-b"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("192.168.1.0/24"), // Subnet connected to Node A @@ -1917,7 +1961,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 1, IPv4: ap("100.123.45.67"), // Node A - router node - User: types.User{Name: "router"}, + User: &types.User{Name: "router"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("192.168.1.0/24"), // Subnet connected to this router @@ -1946,7 +1990,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 2, IPv4: ap("100.123.45.89"), // Node B - regular node that should be reachable - User: types.User{Name: "node-b"}, + User: &types.User{Name: "node-b"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("192.168.1.0/24"), // Subnet behind router @@ -1984,7 +2028,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 3, IPv4: ap("100.123.45.99"), // Node C - isolated node - User: types.User{Name: "isolated-node"}, + User: &types.User{Name: "isolated-node"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("192.168.1.0/24"), // Subnet behind router @@ -2027,7 +2071,7 @@ func TestReduceRoutes(t *testing.T) { node: &types.Node{ ID: 2, IPv4: ap("100.123.45.89"), // Node B - regular node - User: types.User{Name: "node-b"}, + User: &types.User{Name: "node-b"}, }, routes: []netip.Prefix{ netip.MustParsePrefix("192.168.1.0/14"), // Network 192.168.1.0/14 as mentioned in original issue diff --git a/hscontrol/policy/policyutil/reduce_test.go b/hscontrol/policy/policyutil/reduce_test.go index 973d149c..35f5b472 100644 --- a/hscontrol/policy/policyutil/reduce_test.go +++ b/hscontrol/policy/policyutil/reduce_test.go @@ -16,6 +16,7 @@ import ( 
"gorm.io/gorm" "tailscale.com/net/tsaddr" "tailscale.com/tailcfg" + "tailscale.com/types/ptr" "tailscale.com/util/must" ) @@ -143,13 +144,13 @@ func TestReduceFilterRules(t *testing.T) { node: &types.Node{ IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0:ab12:4843:2222:6273:2221"), - User: users[0], + User: ptr.To(users[0]), }, peers: types.Nodes{ &types.Node{ IPv4: ap("100.64.0.2"), IPv6: ap("fd7a:115c:a1e0:ab12:4843:2222:6273:2222"), - User: users[0], + User: ptr.To(users[0]), }, }, want: []tailcfg.FilterRule{}, @@ -190,7 +191,7 @@ func TestReduceFilterRules(t *testing.T) { node: &types.Node{ IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0::1"), - User: users[1], + User: ptr.To(users[1]), Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: []netip.Prefix{ netip.MustParsePrefix("10.33.0.0/16"), @@ -201,7 +202,7 @@ func TestReduceFilterRules(t *testing.T) { &types.Node{ IPv4: ap("100.64.0.2"), IPv6: ap("fd7a:115c:a1e0::2"), - User: users[1], + User: ptr.To(users[1]), }, }, want: []tailcfg.FilterRule{ @@ -282,19 +283,19 @@ func TestReduceFilterRules(t *testing.T) { node: &types.Node{ IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0::1"), - User: users[1], + User: ptr.To(users[1]), }, peers: types.Nodes{ &types.Node{ IPv4: ap("100.64.0.2"), IPv6: ap("fd7a:115c:a1e0::2"), - User: users[2], + User: ptr.To(users[2]), }, // "internal" exit node &types.Node{ IPv4: ap("100.64.0.100"), IPv6: ap("fd7a:115c:a1e0::100"), - User: users[3], + User: ptr.To(users[3]), Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: tsaddr.ExitRoutes(), }, @@ -343,7 +344,7 @@ func TestReduceFilterRules(t *testing.T) { node: &types.Node{ IPv4: ap("100.64.0.100"), IPv6: ap("fd7a:115c:a1e0::100"), - User: users[3], + User: ptr.To(users[3]), Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: tsaddr.ExitRoutes(), }, @@ -352,12 +353,12 @@ func TestReduceFilterRules(t *testing.T) { &types.Node{ IPv4: ap("100.64.0.2"), IPv6: ap("fd7a:115c:a1e0::2"), - User: users[2], + User: ptr.To(users[2]), }, &types.Node{ IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0::1"), - User: users[1], + User: ptr.To(users[1]), }, }, want: []tailcfg.FilterRule{ @@ -452,7 +453,7 @@ func TestReduceFilterRules(t *testing.T) { node: &types.Node{ IPv4: ap("100.64.0.100"), IPv6: ap("fd7a:115c:a1e0::100"), - User: users[3], + User: ptr.To(users[3]), Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: tsaddr.ExitRoutes(), }, @@ -461,12 +462,12 @@ func TestReduceFilterRules(t *testing.T) { &types.Node{ IPv4: ap("100.64.0.2"), IPv6: ap("fd7a:115c:a1e0::2"), - User: users[2], + User: ptr.To(users[2]), }, &types.Node{ IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0::1"), - User: users[1], + User: ptr.To(users[1]), }, }, want: []tailcfg.FilterRule{ @@ -564,7 +565,7 @@ func TestReduceFilterRules(t *testing.T) { node: &types.Node{ IPv4: ap("100.64.0.100"), IPv6: ap("fd7a:115c:a1e0::100"), - User: users[3], + User: ptr.To(users[3]), Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: []netip.Prefix{netip.MustParsePrefix("8.0.0.0/16"), netip.MustParsePrefix("16.0.0.0/16")}, }, @@ -573,12 +574,12 @@ func TestReduceFilterRules(t *testing.T) { &types.Node{ IPv4: ap("100.64.0.2"), IPv6: ap("fd7a:115c:a1e0::2"), - User: users[2], + User: ptr.To(users[2]), }, &types.Node{ IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0::1"), - User: users[1], + User: ptr.To(users[1]), }, }, want: []tailcfg.FilterRule{ @@ -654,7 +655,7 @@ func TestReduceFilterRules(t *testing.T) { node: &types.Node{ IPv4: ap("100.64.0.100"), IPv6: ap("fd7a:115c:a1e0::100"), - User: users[3], + User: ptr.To(users[3]), Hostinfo: &tailcfg.Hostinfo{ 
RoutableIPs: []netip.Prefix{netip.MustParsePrefix("8.0.0.0/8"), netip.MustParsePrefix("16.0.0.0/8")}, }, @@ -663,12 +664,12 @@ func TestReduceFilterRules(t *testing.T) { &types.Node{ IPv4: ap("100.64.0.2"), IPv6: ap("fd7a:115c:a1e0::2"), - User: users[2], + User: ptr.To(users[2]), }, &types.Node{ IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0::1"), - User: users[1], + User: ptr.To(users[1]), }, }, want: []tailcfg.FilterRule{ @@ -736,17 +737,17 @@ func TestReduceFilterRules(t *testing.T) { node: &types.Node{ IPv4: ap("100.64.0.100"), IPv6: ap("fd7a:115c:a1e0::100"), - User: users[3], + User: ptr.To(users[3]), Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: []netip.Prefix{netip.MustParsePrefix("172.16.0.0/24")}, }, - ForcedTags: []string{"tag:access-servers"}, + Tags: []string{"tag:access-servers"}, }, peers: types.Nodes{ &types.Node{ IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0::1"), - User: users[1], + User: ptr.To(users[1]), }, }, want: []tailcfg.FilterRule{ @@ -803,13 +804,13 @@ func TestReduceFilterRules(t *testing.T) { node: &types.Node{ IPv4: ap("100.64.0.2"), IPv6: ap("fd7a:115c:a1e0::2"), - User: users[3], + User: ptr.To(users[3]), }, peers: types.Nodes{ &types.Node{ IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0::1"), - User: users[1], + User: ptr.To(users[1]), Hostinfo: &tailcfg.Hostinfo{ RoutableIPs: []netip.Prefix{p("172.16.0.0/24"), p("10.10.11.0/24"), p("10.10.12.0/24")}, }, diff --git a/hscontrol/policy/route_approval_test.go b/hscontrol/policy/route_approval_test.go index 1e6fabf3..39b15cee 100644 --- a/hscontrol/policy/route_approval_test.go +++ b/hscontrol/policy/route_approval_test.go @@ -10,6 +10,7 @@ import ( "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "gorm.io/gorm" + "tailscale.com/types/ptr" ) func TestNodeCanApproveRoute(t *testing.T) { @@ -24,34 +25,34 @@ func TestNodeCanApproveRoute(t *testing.T) { ID: 1, Hostname: "user1-device", IPv4: ap("100.64.0.1"), - UserID: 1, - User: users[0], + UserID: ptr.To(uint(1)), + User: ptr.To(users[0]), } exitNode := types.Node{ ID: 2, Hostname: "user2-device", IPv4: ap("100.64.0.2"), - UserID: 2, - User: users[1], + UserID: ptr.To(uint(2)), + User: ptr.To(users[1]), } taggedNode := types.Node{ - ID: 3, - Hostname: "tagged-server", - IPv4: ap("100.64.0.3"), - UserID: 3, - User: users[2], - ForcedTags: []string{"tag:router"}, + ID: 3, + Hostname: "tagged-server", + IPv4: ap("100.64.0.3"), + UserID: ptr.To(uint(3)), + User: ptr.To(users[2]), + Tags: []string{"tag:router"}, } multiTagNode := types.Node{ - ID: 4, - Hostname: "multi-tag-node", - IPv4: ap("100.64.0.4"), - UserID: 2, - User: users[1], - ForcedTags: []string{"tag:router", "tag:server"}, + ID: 4, + Hostname: "multi-tag-node", + IPv4: ap("100.64.0.4"), + UserID: ptr.To(uint(2)), + User: ptr.To(users[1]), + Tags: []string{"tag:router", "tag:server"}, } tests := []struct { @@ -747,6 +748,32 @@ func TestNodeCanApproveRoute(t *testing.T) { }`, canApprove: true, }, + { + // Tags-as-identity: Tagged nodes are identified by their tags, not by the + // user who created them. Group membership of the creator is irrelevant. + // A tagged node can only be auto-approved via tag-based autoApprovers, + // not group-based ones (even if the creator is in the group). 
+ name: "tagged-node-with-group-autoapprover-not-approved", + node: taggedNode, // Has tag:router, owned by user3 + route: p("10.30.0.0/16"), + policy: `{ + "tagOwners": { + "tag:router": ["user3@"] + }, + "groups": { + "group:ops": ["user3@"] + }, + "acls": [ + {"action": "accept", "src": ["*"], "dst": ["*:*"]} + ], + "autoApprovers": { + "routes": { + "10.30.0.0/16": ["group:ops"] + } + } + }`, + canApprove: false, // Tagged nodes don't inherit group membership for auto-approval + }, { name: "small-subnet-with-exitnode-only-approval", node: normalNode, diff --git a/hscontrol/policy/v2/filter.go b/hscontrol/policy/v2/filter.go index dd8e70c5..78c6ebc5 100644 --- a/hscontrol/policy/v2/filter.go +++ b/hscontrol/policy/v2/filter.go @@ -3,6 +3,7 @@ package v2 import ( "errors" "fmt" + "slices" "time" "github.com/juanfont/headscale/hscontrol/types" @@ -167,7 +168,7 @@ func (pol *Policy) compileACLWithAutogroupSelf( // Pre-filter to same-user untagged devices once - reuse for both sources and destinations sameUserNodes := make([]types.NodeView, 0) for _, n := range nodes.All() { - if n.User().ID == node.User().ID && !n.IsTagged() { + if n.User().ID() == node.User().ID() && !n.IsTagged() { sameUserNodes = append(sameUserNodes, n) } } @@ -178,11 +179,8 @@ func (pol *Policy) compileACLWithAutogroupSelf( for _, ips := range resolvedSrcIPs { for _, n := range sameUserNodes { // Check if any of this node's IPs are in the source set - for _, nodeIP := range n.IPs() { - if ips.Contains(nodeIP) { - n.AppendToIPSet(&srcIPs) - break - } + if slices.ContainsFunc(n.IPs(), ips.Contains) { + n.AppendToIPSet(&srcIPs) } } } @@ -351,7 +349,7 @@ func (pol *Policy) compileSSHPolicy( // Build destination set for autogroup:self (same-user untagged devices only) var dest netipx.IPSetBuilder for _, n := range nodes.All() { - if n.User().ID == node.User().ID && !n.IsTagged() { + if n.User().ID() == node.User().ID() && !n.IsTagged() { n.AppendToIPSet(&dest) } } @@ -367,7 +365,7 @@ func (pol *Policy) compileSSHPolicy( // Pre-filter to same-user untagged devices for efficiency sameUserNodes := make([]types.NodeView, 0) for _, n := range nodes.All() { - if n.User().ID == node.User().ID && !n.IsTagged() { + if n.User().ID() == node.User().ID() && !n.IsTagged() { sameUserNodes = append(sameUserNodes, n) } } @@ -375,11 +373,8 @@ func (pol *Policy) compileSSHPolicy( var filteredSrcIPs netipx.IPSetBuilder for _, n := range sameUserNodes { // Check if any of this node's IPs are in the source set - for _, nodeIP := range n.IPs() { - if srcIPs.Contains(nodeIP) { - n.AppendToIPSet(&filteredSrcIPs) - break // Found this node, move to next - } + if slices.ContainsFunc(n.IPs(), srcIPs.Contains) { + n.AppendToIPSet(&filteredSrcIPs) // Found this node, move to next } } diff --git a/hscontrol/policy/v2/filter_test.go b/hscontrol/policy/v2/filter_test.go index 37ff8730..0df1e147 100644 --- a/hscontrol/policy/v2/filter_test.go +++ b/hscontrol/policy/v2/filter_test.go @@ -3,6 +3,7 @@ package v2 import ( "encoding/json" "net/netip" + "slices" "strings" "testing" "time" @@ -14,6 +15,7 @@ import ( "github.com/stretchr/testify/require" "gorm.io/gorm" "tailscale.com/tailcfg" + "tailscale.com/types/ptr" ) // aliasWithPorts creates an AliasWithPorts structure from an alias and ports. 
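The `compileACLWithAutogroupSelf` and `compileSSHPolicy` hunks above collapse the manual "does any of this node's IPs fall in the set" loop into `slices.ContainsFunc`. A minimal standalone sketch of that idiom, assuming `go4.org/netipx` (the same IP-set package the hunk uses) and stand-in addresses rather than headscale's real node types:

```go
package main

import (
	"fmt"
	"net/netip"
	"slices"

	"go4.org/netipx"
)

func main() {
	// Stand-in for the resolved source IP set in the ACL/SSH compiler.
	var b netipx.IPSetBuilder
	b.AddPrefix(netip.MustParsePrefix("100.64.0.0/24"))
	srcIPs, err := b.IPSet()
	if err != nil {
		panic(err)
	}

	// Stand-in for one node's addresses (n.IPs() in the real code).
	nodeIPs := []netip.Addr{
		netip.MustParseAddr("fd7a:115c:a1e0::2"),
		netip.MustParseAddr("100.64.0.2"),
	}

	// Equivalent to the removed loop-with-break: reports true as soon as any
	// of the node's IPs is contained in the source set.
	if slices.ContainsFunc(nodeIPs, srcIPs.Contains) {
		fmt.Println("node contributes to the source set")
	}
}
```

Behavior is unchanged; the early `break` becomes implicit because `slices.ContainsFunc` stops at the first match.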
@@ -380,7 +382,7 @@ func TestParsing(t *testing.T) { }, &types.Node{ IPv4: ap("200.200.200.200"), - User: users[0], + User: &users[0], Hostinfo: &tailcfg.Hostinfo{}, }, }.ViewSlice()) @@ -404,21 +406,33 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { {Name: "user2", Model: gorm.Model{ID: 2}}, } - // Create test nodes - nodeUser1 := types.Node{ - Hostname: "user1-device", + // Create test nodes - use tagged nodes as SSH destinations + // and untagged nodes as SSH sources (since group->username destinations + // are not allowed per Tailscale security model, but groups can SSH to tags) + nodeTaggedServer := types.Node{ + Hostname: "tagged-server", IPv4: createAddr("100.64.0.1"), - UserID: 1, - User: users[0], + UserID: ptr.To(users[0].ID), + User: ptr.To(users[0]), + Tags: []string{"tag:server"}, } - nodeUser2 := types.Node{ - Hostname: "user2-device", + nodeTaggedDB := types.Node{ + Hostname: "tagged-db", IPv4: createAddr("100.64.0.2"), - UserID: 2, - User: users[1], + UserID: ptr.To(users[1].ID), + User: ptr.To(users[1]), + Tags: []string{"tag:database"}, + } + // Add untagged node for user2 - this will be the SSH source + // (group:admins contains user2, so user2's untagged node provides the source IPs) + nodeUser2Untagged := types.Node{ + Hostname: "user2-device", + IPv4: createAddr("100.64.0.3"), + UserID: ptr.To(users[1].ID), + User: ptr.To(users[1]), } - nodes := types.Nodes{&nodeUser1, &nodeUser2} + nodes := types.Nodes{&nodeTaggedServer, &nodeTaggedDB, &nodeUser2Untagged} tests := []struct { name string @@ -429,8 +443,11 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { }{ { name: "specific user mapping", - targetNode: nodeUser1, + targetNode: nodeTaggedServer, policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:server"): Owners{up("user1@")}, + }, Groups: Groups{ Group("group:admins"): []Username{Username("user2@")}, }, @@ -438,7 +455,7 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { { Action: "accept", Sources: SSHSrcAliases{gp("group:admins")}, - Destinations: SSHDstAliases{up("user1@")}, + Destinations: SSHDstAliases{tp("tag:server")}, Users: []SSHUser{"ssh-it-user"}, }, }, @@ -449,8 +466,11 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { }, { name: "multiple specific users", - targetNode: nodeUser1, + targetNode: nodeTaggedServer, policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:server"): Owners{up("user1@")}, + }, Groups: Groups{ Group("group:admins"): []Username{Username("user2@")}, }, @@ -458,7 +478,7 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { { Action: "accept", Sources: SSHSrcAliases{gp("group:admins")}, - Destinations: SSHDstAliases{up("user1@")}, + Destinations: SSHDstAliases{tp("tag:server")}, Users: []SSHUser{"ubuntu", "admin", "deploy"}, }, }, @@ -471,8 +491,11 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { }, { name: "autogroup:nonroot only", - targetNode: nodeUser1, + targetNode: nodeTaggedServer, policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:server"): Owners{up("user1@")}, + }, Groups: Groups{ Group("group:admins"): []Username{Username("user2@")}, }, @@ -480,7 +503,7 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { { Action: "accept", Sources: SSHSrcAliases{gp("group:admins")}, - Destinations: SSHDstAliases{up("user1@")}, + Destinations: SSHDstAliases{tp("tag:server")}, Users: []SSHUser{SSHUser(AutoGroupNonRoot)}, }, }, @@ -492,8 +515,11 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { }, { name: "root only", - targetNode: nodeUser1, + targetNode: nodeTaggedServer, policy: &Policy{ + 
TagOwners: TagOwners{ + Tag("tag:server"): Owners{up("user1@")}, + }, Groups: Groups{ Group("group:admins"): []Username{Username("user2@")}, }, @@ -501,7 +527,7 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { { Action: "accept", Sources: SSHSrcAliases{gp("group:admins")}, - Destinations: SSHDstAliases{up("user1@")}, + Destinations: SSHDstAliases{tp("tag:server")}, Users: []SSHUser{"root"}, }, }, @@ -512,8 +538,11 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { }, { name: "autogroup:nonroot plus root", - targetNode: nodeUser1, + targetNode: nodeTaggedServer, policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:server"): Owners{up("user1@")}, + }, Groups: Groups{ Group("group:admins"): []Username{Username("user2@")}, }, @@ -521,7 +550,7 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { { Action: "accept", Sources: SSHSrcAliases{gp("group:admins")}, - Destinations: SSHDstAliases{up("user1@")}, + Destinations: SSHDstAliases{tp("tag:server")}, Users: []SSHUser{SSHUser(AutoGroupNonRoot), "root"}, }, }, @@ -533,8 +562,11 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { }, { name: "mixed specific users and autogroups", - targetNode: nodeUser1, + targetNode: nodeTaggedServer, policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:server"): Owners{up("user1@")}, + }, Groups: Groups{ Group("group:admins"): []Username{Username("user2@")}, }, @@ -542,7 +574,7 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { { Action: "accept", Sources: SSHSrcAliases{gp("group:admins")}, - Destinations: SSHDstAliases{up("user1@")}, + Destinations: SSHDstAliases{tp("tag:server")}, Users: []SSHUser{SSHUser(AutoGroupNonRoot), "root", "ubuntu", "admin"}, }, }, @@ -556,8 +588,12 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { }, { name: "no matching destination", - targetNode: nodeUser2, // Target node2, but policy only allows user1 + targetNode: nodeTaggedDB, // Target tag:database, but policy only allows tag:server policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:server"): Owners{up("user1@")}, + Tag("tag:database"): Owners{up("user1@")}, + }, Groups: Groups{ Group("group:admins"): []Username{Username("user2@")}, }, @@ -565,7 +601,7 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { { Action: "accept", Sources: SSHSrcAliases{gp("group:admins")}, - Destinations: SSHDstAliases{up("user1@")}, // Only user1, not user2 + Destinations: SSHDstAliases{tp("tag:server")}, // Only tag:server, not tag:database Users: []SSHUser{"ssh-it-user"}, }, }, @@ -598,9 +634,9 @@ func TestCompileSSHPolicy_UserMapping(t *testing.T) { rule := sshPolicy.Rules[0] assert.Equal(t, tt.wantSSHUsers, rule.SSHUsers, "SSH users mapping should match expected") - // Verify principals are set correctly (should contain user2's IP since that's the source) + // Verify principals are set correctly (should contain user2's untagged device IP since that's the source) require.Len(t, rule.Principals, 1) - assert.Equal(t, "100.64.0.2", rule.Principals[0].NodeIP) + assert.Equal(t, "100.64.0.3", rule.Principals[0].NodeIP) // Verify action is set correctly assert.True(t, rule.Action.Accept) @@ -617,22 +653,27 @@ func TestCompileSSHPolicy_CheckAction(t *testing.T) { {Name: "user2", Model: gorm.Model{ID: 2}}, } - nodeUser1 := types.Node{ - Hostname: "user1-device", + // Use tagged nodes for SSH user mapping tests + nodeTaggedServer := types.Node{ + Hostname: "tagged-server", IPv4: createAddr("100.64.0.1"), - UserID: 1, - User: users[0], + UserID: ptr.To(users[0].ID), + User: ptr.To(users[0]), + Tags: 
[]string{"tag:server"}, } nodeUser2 := types.Node{ Hostname: "user2-device", IPv4: createAddr("100.64.0.2"), - UserID: 2, - User: users[1], + UserID: ptr.To(users[1].ID), + User: ptr.To(users[1]), } - nodes := types.Nodes{&nodeUser1, &nodeUser2} + nodes := types.Nodes{&nodeTaggedServer, &nodeUser2} policy := &Policy{ + TagOwners: TagOwners{ + Tag("tag:server"): Owners{up("user1@")}, + }, Groups: Groups{ Group("group:admins"): []Username{Username("user2@")}, }, @@ -641,7 +682,7 @@ func TestCompileSSHPolicy_CheckAction(t *testing.T) { Action: "check", CheckPeriod: model.Duration(24 * time.Hour), Sources: SSHSrcAliases{gp("group:admins")}, - Destinations: SSHDstAliases{up("user1@")}, + Destinations: SSHDstAliases{tp("tag:server")}, Users: []SSHUser{"ssh-it-user"}, }, }, @@ -650,7 +691,7 @@ func TestCompileSSHPolicy_CheckAction(t *testing.T) { err := policy.validate() require.NoError(t, err) - sshPolicy, err := policy.compileSSHPolicy(users, nodeUser1.View(), nodes.ViewSlice()) + sshPolicy, err := policy.compileSSHPolicy(users, nodeTaggedServer.View(), nodes.ViewSlice()) require.NoError(t, err) require.NotNil(t, sshPolicy) require.Len(t, sshPolicy.Rules, 1) @@ -681,30 +722,31 @@ func TestSSHIntegrationReproduction(t *testing.T) { node1 := &types.Node{ Hostname: "user1-node", IPv4: createAddr("100.64.0.1"), - UserID: 1, - User: users[0], + UserID: ptr.To(users[0].ID), + User: ptr.To(users[0]), } node2 := &types.Node{ Hostname: "user2-node", IPv4: createAddr("100.64.0.2"), - UserID: 2, - User: users[1], + UserID: ptr.To(users[1].ID), + User: ptr.To(users[1]), } nodes := types.Nodes{node1, node2} // Create a simple policy that reproduces the issue + // Updated to use autogroup:self instead of username destination (per Tailscale security model) policy := &Policy{ Groups: Groups{ - Group("group:integration-test"): []Username{Username("user1@")}, + Group("group:integration-test"): []Username{Username("user1@"), Username("user2@")}, }, SSHs: []SSH{ { Action: "accept", Sources: SSHSrcAliases{gp("group:integration-test")}, - Destinations: SSHDstAliases{up("user2@")}, // Target user2 - Users: []SSHUser{SSHUser("ssh-it-user")}, // This is the key - specific user + Destinations: SSHDstAliases{agp("autogroup:self")}, // Users can SSH to their own devices + Users: []SSHUser{SSHUser("ssh-it-user")}, // This is the key - specific user }, }, } @@ -713,7 +755,7 @@ func TestSSHIntegrationReproduction(t *testing.T) { err := policy.validate() require.NoError(t, err) - // Test SSH policy compilation for node2 (target) + // Test SSH policy compilation for node2 (owned by user2, who is in the group) sshPolicy, err := policy.compileSSHPolicy(users, node2.View(), nodes.ViewSlice()) require.NoError(t, err) require.NotNil(t, sshPolicy) @@ -740,11 +782,12 @@ func TestSSHJSONSerialization(t *testing.T) { {Name: "user1", Model: gorm.Model{ID: 1}}, } + uid := uint(1) node := &types.Node{ Hostname: "test-node", IPv4: createAddr("100.64.0.1"), - UserID: 1, - User: users[0], + UserID: &uid, + User: &users[0], } nodes := types.Nodes{node} @@ -803,32 +846,32 @@ func TestCompileFilterRulesForNodeWithAutogroupSelf(t *testing.T) { nodes := types.Nodes{ { - User: users[0], + User: ptr.To(users[0]), IPv4: ap("100.64.0.1"), }, { - User: users[0], + User: ptr.To(users[0]), IPv4: ap("100.64.0.2"), }, { - User: users[1], + User: ptr.To(users[1]), IPv4: ap("100.64.0.3"), }, { - User: users[1], + User: ptr.To(users[1]), IPv4: ap("100.64.0.4"), }, // Tagged device for user1 { - User: users[0], - IPv4: ap("100.64.0.5"), - ForcedTags: 
[]string{"tag:test"}, + User: &users[0], + IPv4: ap("100.64.0.5"), + Tags: []string{"tag:test"}, }, // Tagged device for user2 { - User: users[1], - IPv4: ap("100.64.0.6"), - ForcedTags: []string{"tag:test"}, + User: &users[1], + IPv4: ap("100.64.0.6"), + Tags: []string{"tag:test"}, }, } @@ -906,14 +949,7 @@ func TestCompileFilterRulesForNodeWithAutogroupSelf(t *testing.T) { } for _, expectedIP := range expectedDestIPs { - found := false - - for _, actualIP := range actualDestIPs { - if actualIP == expectedIP { - found = true - break - } - } + found := slices.Contains(actualDestIPs, expectedIP) if !found { t.Errorf("expected destination IP %s to be included, got: %v", expectedIP, actualDestIPs) @@ -931,6 +967,251 @@ func TestCompileFilterRulesForNodeWithAutogroupSelf(t *testing.T) { } } +// TestTagUserMutualExclusivity tests that user-owned nodes and tagged nodes +// are treated as separate identity classes and cannot inadvertently access each other. +func TestTagUserMutualExclusivity(t *testing.T) { + users := types.Users{ + {Model: gorm.Model{ID: 1}, Name: "user1"}, + {Model: gorm.Model{ID: 2}, Name: "user2"}, + } + + nodes := types.Nodes{ + // User-owned nodes + { + User: ptr.To(users[0]), + IPv4: ap("100.64.0.1"), + }, + { + User: ptr.To(users[1]), + IPv4: ap("100.64.0.2"), + }, + // Tagged nodes + { + User: &users[0], // "created by" tracking + IPv4: ap("100.64.0.10"), + Tags: []string{"tag:server"}, + }, + { + User: &users[1], // "created by" tracking + IPv4: ap("100.64.0.11"), + Tags: []string{"tag:database"}, + }, + } + + policy := &Policy{ + TagOwners: TagOwners{ + Tag("tag:server"): Owners{ptr.To(Username("user1@"))}, + Tag("tag:database"): Owners{ptr.To(Username("user2@"))}, + }, + ACLs: []ACL{ + // Rule 1: user1 (user-owned) should NOT be able to reach tagged nodes + { + Action: "accept", + Sources: []Alias{up("user1@")}, + Destinations: []AliasWithPorts{ + aliasWithPorts(tp("tag:server"), tailcfg.PortRangeAny), + }, + }, + // Rule 2: tag:server should be able to reach tag:database + { + Action: "accept", + Sources: []Alias{tp("tag:server")}, + Destinations: []AliasWithPorts{ + aliasWithPorts(tp("tag:database"), tailcfg.PortRangeAny), + }, + }, + }, + } + + err := policy.validate() + if err != nil { + t.Fatalf("policy validation failed: %v", err) + } + + // Test user1's user-owned node (100.64.0.1) + userNode := nodes[0].View() + + userRules, err := policy.compileFilterRulesForNode(users, userNode, nodes.ViewSlice()) + if err != nil { + t.Fatalf("unexpected error for user node: %v", err) + } + + // User1's user-owned node should NOT reach tag:server (100.64.0.10) + // because user1@ as a source only matches user1's user-owned devices, NOT tagged devices + for _, rule := range userRules { + for _, dst := range rule.DstPorts { + if dst.IP == "100.64.0.10" { + t.Errorf("SECURITY: user-owned node should NOT reach tagged node (got dest %s in rule)", dst.IP) + } + } + } + + // Test tag:server node (100.64.0.10) + // compileFilterRulesForNode returns rules for what the node can ACCESS (as source) + taggedNode := nodes[2].View() + + taggedRules, err := policy.compileFilterRulesForNode(users, taggedNode, nodes.ViewSlice()) + if err != nil { + t.Fatalf("unexpected error for tagged node: %v", err) + } + + // Tag:server (as source) should be able to reach tag:database (100.64.0.11) + // Check destinations in the rules for this node + foundDatabaseDest := false + + for _, rule := range taggedRules { + // Check if this rule applies to tag:server as source + if !slices.Contains(rule.SrcIPs, 
"100.64.0.10/32") { + continue + } + + // Check if tag:database is in destinations + for _, dst := range rule.DstPorts { + if dst.IP == "100.64.0.11/32" { + foundDatabaseDest = true + break + } + } + + if foundDatabaseDest { + break + } + } + + if !foundDatabaseDest { + t.Errorf("tag:server should reach tag:database but didn't find 100.64.0.11 in destinations") + } +} + +// TestAutogroupTagged tests that autogroup:tagged correctly selects all devices +// with tag-based identity (IsTagged() == true or has requested tags in tagOwners). +func TestAutogroupTagged(t *testing.T) { + t.Parallel() + + users := types.Users{ + {Model: gorm.Model{ID: 1}, Name: "user1"}, + {Model: gorm.Model{ID: 2}, Name: "user2"}, + } + + nodes := types.Nodes{ + // User-owned nodes (not tagged) + { + User: ptr.To(users[0]), + IPv4: ap("100.64.0.1"), + }, + { + User: ptr.To(users[1]), + IPv4: ap("100.64.0.2"), + }, + // Tagged nodes + { + User: &users[0], // "created by" tracking + IPv4: ap("100.64.0.10"), + Tags: []string{"tag:server"}, + }, + { + User: &users[1], // "created by" tracking + IPv4: ap("100.64.0.11"), + Tags: []string{"tag:database"}, + }, + { + User: &users[0], + IPv4: ap("100.64.0.12"), + Tags: []string{"tag:web", "tag:prod"}, + }, + } + + policy := &Policy{ + TagOwners: TagOwners{ + Tag("tag:server"): Owners{ptr.To(Username("user1@"))}, + Tag("tag:database"): Owners{ptr.To(Username("user2@"))}, + Tag("tag:web"): Owners{ptr.To(Username("user1@"))}, + Tag("tag:prod"): Owners{ptr.To(Username("user1@"))}, + }, + ACLs: []ACL{ + // Rule: autogroup:tagged can reach user-owned nodes + { + Action: "accept", + Sources: []Alias{agp("autogroup:tagged")}, + Destinations: []AliasWithPorts{ + aliasWithPorts(up("user1@"), tailcfg.PortRangeAny), + aliasWithPorts(up("user2@"), tailcfg.PortRangeAny), + }, + }, + }, + } + + err := policy.validate() + require.NoError(t, err) + + // Verify autogroup:tagged includes all tagged nodes + taggedIPs, err := AutoGroupTagged.Resolve(policy, users, nodes.ViewSlice()) + require.NoError(t, err) + require.NotNil(t, taggedIPs) + + // Should contain all tagged nodes + assert.True(t, taggedIPs.Contains(*ap("100.64.0.10")), "should include tag:server") + assert.True(t, taggedIPs.Contains(*ap("100.64.0.11")), "should include tag:database") + assert.True(t, taggedIPs.Contains(*ap("100.64.0.12")), "should include tag:web,tag:prod") + + // Should NOT contain user-owned nodes + assert.False(t, taggedIPs.Contains(*ap("100.64.0.1")), "should not include user1 node") + assert.False(t, taggedIPs.Contains(*ap("100.64.0.2")), "should not include user2 node") + + // Test ACL filtering: all tagged nodes should be able to reach user nodes + tests := []struct { + name string + sourceNode types.NodeView + shouldReach []string // IP strings for comparison + }{ + { + name: "tag:server can reach user-owned nodes", + sourceNode: nodes[2].View(), + shouldReach: []string{"100.64.0.1", "100.64.0.2"}, + }, + { + name: "tag:database can reach user-owned nodes", + sourceNode: nodes[3].View(), + shouldReach: []string{"100.64.0.1", "100.64.0.2"}, + }, + { + name: "tag:web,tag:prod can reach user-owned nodes", + sourceNode: nodes[4].View(), + shouldReach: []string{"100.64.0.1", "100.64.0.2"}, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + + rules, err := policy.compileFilterRulesForNode(users, tt.sourceNode, nodes.ViewSlice()) + require.NoError(t, err) + + // Verify all expected destinations are reachable + for _, expectedDest := range tt.shouldReach { + found := false 
+ + for _, rule := range rules { + for _, dstPort := range rule.DstPorts { + // DstPort.IP is CIDR notation like "100.64.0.1/32" + if strings.HasPrefix(dstPort.IP, expectedDest+"/") || dstPort.IP == expectedDest { + found = true + break + } + } + + if found { + break + } + } + + assert.True(t, found, "Expected to find destination %s in rules", expectedDest) + } + }) + } +} + func TestAutogroupSelfInSourceIsRejected(t *testing.T) { // Test that autogroup:self cannot be used in sources (per Tailscale spec) policy := &Policy{ @@ -965,10 +1246,10 @@ func TestAutogroupSelfWithSpecificUserSource(t *testing.T) { } nodes := types.Nodes{ - {User: users[0], IPv4: ap("100.64.0.1")}, - {User: users[0], IPv4: ap("100.64.0.2")}, - {User: users[1], IPv4: ap("100.64.0.3")}, - {User: users[1], IPv4: ap("100.64.0.4")}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.1")}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.2")}, + {User: ptr.To(users[1]), IPv4: ap("100.64.0.3")}, + {User: ptr.To(users[1]), IPv4: ap("100.64.0.4")}, } policy := &Policy{ @@ -1032,11 +1313,11 @@ func TestAutogroupSelfWithGroupSource(t *testing.T) { } nodes := types.Nodes{ - {User: users[0], IPv4: ap("100.64.0.1")}, - {User: users[0], IPv4: ap("100.64.0.2")}, - {User: users[1], IPv4: ap("100.64.0.3")}, - {User: users[1], IPv4: ap("100.64.0.4")}, - {User: users[2], IPv4: ap("100.64.0.5")}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.1")}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.2")}, + {User: ptr.To(users[1]), IPv4: ap("100.64.0.3")}, + {User: ptr.To(users[1]), IPv4: ap("100.64.0.4")}, + {User: ptr.To(users[2]), IPv4: ap("100.64.0.5")}, } policy := &Policy{ @@ -1101,13 +1382,13 @@ func TestSSHWithAutogroupSelfInDestination(t *testing.T) { nodes := types.Nodes{ // User1's nodes - {User: users[0], IPv4: ap("100.64.0.1"), Hostname: "user1-node1"}, - {User: users[0], IPv4: ap("100.64.0.2"), Hostname: "user1-node2"}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.1"), Hostname: "user1-node1"}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.2"), Hostname: "user1-node2"}, // User2's nodes - {User: users[1], IPv4: ap("100.64.0.3"), Hostname: "user2-node1"}, - {User: users[1], IPv4: ap("100.64.0.4"), Hostname: "user2-node2"}, + {User: ptr.To(users[1]), IPv4: ap("100.64.0.3"), Hostname: "user2-node1"}, + {User: ptr.To(users[1]), IPv4: ap("100.64.0.4"), Hostname: "user2-node2"}, // Tagged node for user1 (should be excluded) - {User: users[0], IPv4: ap("100.64.0.5"), Hostname: "user1-tagged", ForcedTags: []string{"tag:server"}}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.5"), Hostname: "user1-tagged", Tags: []string{"tag:server"}}, } policy := &Policy{ @@ -1179,10 +1460,10 @@ func TestSSHWithAutogroupSelfAndSpecificUser(t *testing.T) { } nodes := types.Nodes{ - {User: users[0], IPv4: ap("100.64.0.1")}, - {User: users[0], IPv4: ap("100.64.0.2")}, - {User: users[1], IPv4: ap("100.64.0.3")}, - {User: users[1], IPv4: ap("100.64.0.4")}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.1")}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.2")}, + {User: ptr.To(users[1]), IPv4: ap("100.64.0.3")}, + {User: ptr.To(users[1]), IPv4: ap("100.64.0.4")}, } policy := &Policy{ @@ -1233,11 +1514,11 @@ func TestSSHWithAutogroupSelfAndGroup(t *testing.T) { } nodes := types.Nodes{ - {User: users[0], IPv4: ap("100.64.0.1")}, - {User: users[0], IPv4: ap("100.64.0.2")}, - {User: users[1], IPv4: ap("100.64.0.3")}, - {User: users[1], IPv4: ap("100.64.0.4")}, - {User: users[2], IPv4: ap("100.64.0.5")}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.1")}, + {User: 
ptr.To(users[0]), IPv4: ap("100.64.0.2")}, + {User: ptr.To(users[1]), IPv4: ap("100.64.0.3")}, + {User: ptr.To(users[1]), IPv4: ap("100.64.0.4")}, + {User: ptr.To(users[2]), IPv4: ap("100.64.0.5")}, } policy := &Policy{ @@ -1290,10 +1571,10 @@ func TestSSHWithAutogroupSelfExcludesTaggedDevices(t *testing.T) { } nodes := types.Nodes{ - {User: users[0], IPv4: ap("100.64.0.1"), Hostname: "untagged1"}, - {User: users[0], IPv4: ap("100.64.0.2"), Hostname: "untagged2"}, - {User: users[0], IPv4: ap("100.64.0.3"), Hostname: "tagged1", ForcedTags: []string{"tag:server"}}, - {User: users[0], IPv4: ap("100.64.0.4"), Hostname: "tagged2", ForcedTags: []string{"tag:web"}}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.1"), Hostname: "untagged1"}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.2"), Hostname: "untagged2"}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.3"), Hostname: "tagged1", Tags: []string{"tag:server"}}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.4"), Hostname: "tagged2", Tags: []string{"tag:web"}}, } policy := &Policy{ @@ -1350,10 +1631,10 @@ func TestSSHWithAutogroupSelfAndMixedDestinations(t *testing.T) { } nodes := types.Nodes{ - {User: users[0], IPv4: ap("100.64.0.1"), Hostname: "user1-device"}, - {User: users[0], IPv4: ap("100.64.0.2"), Hostname: "user1-device2"}, - {User: users[1], IPv4: ap("100.64.0.3"), Hostname: "user2-device"}, - {User: users[1], IPv4: ap("100.64.0.4"), Hostname: "user2-router", ForcedTags: []string{"tag:router"}}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.1"), Hostname: "user1-device"}, + {User: ptr.To(users[0]), IPv4: ap("100.64.0.2"), Hostname: "user1-device2"}, + {User: ptr.To(users[1]), IPv4: ap("100.64.0.3"), Hostname: "user2-device"}, + {User: ptr.To(users[1]), IPv4: ap("100.64.0.4"), Hostname: "user2-router", Tags: []string{"tag:router"}}, } policy := &Policy{ diff --git a/hscontrol/policy/v2/policy.go b/hscontrol/policy/v2/policy.go index 27cf70b4..54196e6b 100644 --- a/hscontrol/policy/v2/policy.go +++ b/hscontrol/policy/v2/policy.go @@ -1,7 +1,9 @@ package v2 import ( + "cmp" "encoding/json" + "errors" "fmt" "net/netip" "slices" @@ -19,6 +21,9 @@ import ( "tailscale.com/util/deephash" ) +// ErrInvalidTagOwner is returned when a tag owner is not an Alias type. +var ErrInvalidTagOwner = errors.New("tag owner is not an Alias") + type PolicyManager struct { mu sync.Mutex pol *Policy @@ -47,6 +52,14 @@ type PolicyManager struct { usesAutogroupSelf bool } +// filterAndPolicy combines the compiled filter rules with policy content for hashing. +// This ensures filterHash changes when policy changes, even for autogroup:self where +// the compiled filter is always empty. +type filterAndPolicy struct { + Filter []tailcfg.FilterRule + Policy *Policy +} + // NewPolicyManager creates a new PolicyManager from a policy file and a list of users and nodes. // It returns an error if the policy file is invalid. // The policy manager will update the filter rules based on the users and nodes. @@ -77,14 +90,6 @@ func NewPolicyManager(b []byte, users []types.User, nodes views.Slice[types.Node // updateLocked updates the filter rules based on the current policy and nodes. // It must be called with the lock held. func (pm *PolicyManager) updateLocked() (bool, error) { - // Clear the SSH policy map to ensure it's recalculated with the new policy. - // TODO(kradalby): This could potentially be optimized by only clearing the - // policies for nodes that have changed. Particularly if the only difference is - // that nodes has been added or removed. 
- clear(pm.sshPolicyMap) - clear(pm.compiledFilterRulesMap) - clear(pm.filterRulesMap) - // Check if policy uses autogroup:self pm.usesAutogroupSelf = pm.pol.usesAutogroupSelf() @@ -98,7 +103,14 @@ func (pm *PolicyManager) updateLocked() (bool, error) { return false, fmt.Errorf("compiling filter rules: %w", err) } - filterHash := deephash.Hash(&filter) + // Hash both the compiled filter AND the policy content together. + // This ensures filterHash changes when policy changes, even for autogroup:self + // where the compiled filter is always empty. This eliminates the need for + // a separate policyHash field. + filterHash := deephash.Hash(&filterAndPolicy{ + Filter: filter, + Policy: pm.pol, + }) filterChanged := filterHash != pm.filterHash if filterChanged { log.Debug(). @@ -164,8 +176,27 @@ func (pm *PolicyManager) updateLocked() (bool, error) { pm.exitSet = exitSet pm.exitSetHash = exitSetHash - // If neither of the calculated values changed, no need to update nodes - if !filterChanged && !tagOwnerChanged && !autoApproveChanged && !exitSetChanged { + // Determine if we need to send updates to nodes + // filterChanged now includes policy content changes (via combined hash), + // so it will detect changes even for autogroup:self where compiled filter is empty + needsUpdate := filterChanged || tagOwnerChanged || autoApproveChanged || exitSetChanged + + // Only clear caches if we're actually going to send updates + // This prevents clearing caches when nothing changed, which would leave nodes + // with stale filters until they reconnect. This is critical for autogroup:self + // where even reloading the same policy would clear caches but not send updates. + if needsUpdate { + // Clear the SSH policy map to ensure it's recalculated with the new policy. + // TODO(kradalby): This could potentially be optimized by only clearing the + // policies for nodes that have changed. Particularly if the only difference is + // that nodes has been added or removed. + clear(pm.sshPolicyMap) + clear(pm.compiledFilterRulesMap) + clear(pm.filterRulesMap) + } + + // If nothing changed, no need to update nodes + if !needsUpdate { log.Trace(). Msg("Policy evaluation detected no changes - all hashes match") return false, nil @@ -292,7 +323,10 @@ func (pm *PolicyManager) BuildPeerMap(nodes views.Slice[types.NodeView]) map[typ // Check each node pair for peer relationships. // Start j at i+1 to avoid checking the same pair twice and creating duplicates. - // We check both directions (i->j and j->i) since ACLs can be asymmetric. + // We use symmetric visibility: if EITHER node can access the other, BOTH see + // each other. This matches the global filter path behavior and ensures that + // one-way access rules (e.g., admin -> tagged server) still allow both nodes + // to see each other as peers, which is required for network connectivity. for i := range nodes.Len() { nodeI := nodes.At(i) matchersI, hasFilterI := nodeMatchers[nodeI.ID()] @@ -301,13 +335,16 @@ func (pm *PolicyManager) BuildPeerMap(nodes views.Slice[types.NodeView]) map[typ nodeJ := nodes.At(j) matchersJ, hasFilterJ := nodeMatchers[nodeJ.ID()] - // Check if nodeI can access nodeJ - if hasFilterI && nodeI.CanAccess(matchersI, nodeJ) { - ret[nodeI.ID()] = append(ret[nodeI.ID()], nodeJ) - } + // If either node can access the other, both should see each other as peers. 
+ // This symmetric visibility is required for proper network operation: + // - Admin with *:* rule should see tagged servers (even if servers + // can't access admin) + // - Servers should see admin so they can respond to admin's connections + canIAccessJ := hasFilterI && nodeI.CanAccess(matchersI, nodeJ) + canJAccessI := hasFilterJ && nodeJ.CanAccess(matchersJ, nodeI) - // Check if nodeJ can access nodeI - if hasFilterJ && nodeJ.CanAccess(matchersJ, nodeI) { + if canIAccessJ || canJAccessI { + ret[nodeI.ID()] = append(ret[nodeI.ID()], nodeJ) ret[nodeJ.ID()] = append(ret[nodeJ.ID()], nodeI) } } @@ -467,8 +504,7 @@ func (pm *PolicyManager) SetNodes(nodes views.Slice[types.NodeView]) (bool, erro pm.mu.Lock() defer pm.mu.Unlock() - oldNodeCount := pm.nodes.Len() - newNodeCount := nodes.Len() + policyChanged := pm.nodesHavePolicyAffectingChanges(nodes) // Invalidate cache entries for nodes that changed. // For autogroup:self: invalidate all nodes belonging to affected users (peer changes). @@ -477,24 +513,29 @@ func (pm *PolicyManager) SetNodes(nodes views.Slice[types.NodeView]) (bool, erro pm.nodes = nodes - nodesChanged := oldNodeCount != newNodeCount - - // When nodes are added/removed, we must recompile filters because: + // When policy-affecting node properties change, we must recompile filters because: // 1. User/group aliases (like "user1@") resolve to node IPs - // 2. Filter compilation needs nodes to generate rules - // 3. Without nodes, filters compile to empty (0 rules) + // 2. Tag aliases (like "tag:server") match nodes based on their tags + // 3. Filter compilation needs nodes to generate rules // // For autogroup:self: return true when nodes change even if the global filter // hash didn't change. The global filter is empty for autogroup:self (each node // has its own filter), so the hash never changes. But peer relationships DO // change when nodes are added/removed, so we must signal this to trigger updates. // For global policies: the filter must be recompiled to include the new nodes. - if nodesChanged { + if policyChanged { // Recompile filter with the new node list - _, err := pm.updateLocked() + needsUpdate, err := pm.updateLocked() if err != nil { return false, err } + + if !needsUpdate { + // This ensures fresh filter rules are generated for all nodes + clear(pm.sshPolicyMap) + clear(pm.compiledFilterRulesMap) + clear(pm.filterRulesMap) + } // Always return true when nodes changed, even if filter hash didn't change // (can happen with autogroup:self or when nodes are added but don't affect rules) return true, nil @@ -503,23 +544,132 @@ func (pm *PolicyManager) SetNodes(nodes views.Slice[types.NodeView]) (bool, erro return false, nil } +func (pm *PolicyManager) nodesHavePolicyAffectingChanges(newNodes views.Slice[types.NodeView]) bool { + if pm.nodes.Len() != newNodes.Len() { + return true + } + + oldNodes := make(map[types.NodeID]types.NodeView, pm.nodes.Len()) + for _, node := range pm.nodes.All() { + oldNodes[node.ID()] = node + } + + for _, newNode := range newNodes.All() { + oldNode, exists := oldNodes[newNode.ID()] + if !exists { + return true + } + + if newNode.HasPolicyChange(oldNode) { + return true + } + } + + return false +} + +// NodeCanHaveTag checks if a node can have the specified tag during client-initiated +// registration or reauth flows (e.g., tailscale up --advertise-tags). 
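The `BuildPeerMap` change above switches the autogroup:self path to symmetric visibility: if either node can access the other, both are added to each other's peer list, matching the global filter path. A minimal sketch of that pairing logic follows; the `canAccess` predicate and the node IDs are hypothetical placeholders for the real matcher machinery.

```go
package main

import "fmt"

type NodeID int

// canAccess stands in for node.CanAccess(matchers, peer); it is deliberately
// asymmetric here, like an admin with a *:* rule reaching a tagged server
// that has no rule back to the admin.
func canAccess(src, dst NodeID) bool {
	return src == 1 // only node 1 (the "admin") holds an allow rule in this sketch
}

// buildPeerMap adds BOTH nodes to each other's peer list if EITHER direction
// is allowed, so the server still sees the admin and can answer its connections.
func buildPeerMap(nodes []NodeID) map[NodeID][]NodeID {
	peers := make(map[NodeID][]NodeID)
	for i := range nodes {
		for j := i + 1; j < len(nodes); j++ {
			a, b := nodes[i], nodes[j]
			if canAccess(a, b) || canAccess(b, a) {
				peers[a] = append(peers[a], b)
				peers[b] = append(peers[b], a)
			}
		}
	}
	return peers
}

func main() {
	// Node 1 = admin, node 2 = tagged server: both end up in each other's peer list.
	fmt.Println(buildPeerMap([]NodeID{1, 2}))
}
```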
+// +// This function is NOT used by the admin API's SetNodeTags - admins can set any +// existing tag on any node by calling State.SetNodeTags directly, which bypasses +// this authorization check. func (pm *PolicyManager) NodeCanHaveTag(node types.NodeView, tag string) bool { - if pm == nil { + if pm == nil || pm.pol == nil { return false } pm.mu.Lock() defer pm.mu.Unlock() + // Check if tag exists in policy + owners, exists := pm.pol.TagOwners[Tag(tag)] + if !exists { + return false + } + + // Check if node's owner can assign this tag via the pre-resolved tagOwnerMap. + // The tagOwnerMap contains IP sets built from resolving TagOwners entries + // (usernames/groups) to their nodes' IPs, so checking if the node's IP + // is in the set answers "does this node's owner own this tag?" if ips, ok := pm.tagOwnerMap[Tag(tag)]; ok { if slices.ContainsFunc(node.IPs(), ips.Contains) { return true } } + // For new nodes being registered, their IP may not yet be in the tagOwnerMap. + // Fall back to checking the node's user directly against the TagOwners. + // This handles the case where a user registers a new node with --advertise-tags. + if node.User().Valid() { + for _, owner := range owners { + if pm.userMatchesOwner(node.User(), owner) { + return true + } + } + } + return false } +// userMatchesOwner checks if a user matches a tag owner entry. +// This is used as a fallback when the node's IP is not in the tagOwnerMap. +func (pm *PolicyManager) userMatchesOwner(user types.UserView, owner Owner) bool { + switch o := owner.(type) { + case *Username: + if o == nil { + return false + } + // Resolve the username to find the user it refers to + resolvedUser, err := o.resolveUser(pm.users) + if err != nil { + return false + } + + return user.ID() == resolvedUser.ID + + case *Group: + if o == nil || pm.pol == nil { + return false + } + // Resolve the group to get usernames + usernames, ok := pm.pol.Groups[*o] + if !ok { + return false + } + // Check if the user matches any username in the group + for _, uname := range usernames { + resolvedUser, err := uname.resolveUser(pm.users) + if err != nil { + continue + } + + if user.ID() == resolvedUser.ID { + return true + } + } + + return false + + default: + return false + } +} + +// TagExists reports whether the given tag is defined in the policy. 
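`NodeCanHaveTag` above authorizes a tag in two steps: first via the pre-resolved `tagOwnerMap` (does the node's IP fall in the tag owner's IP set?), and only then, for freshly registering nodes that have no IP yet, by matching the node's user directly against the `TagOwners` entry. The sketch below shows that ordering with simplified types; the function and field names are illustrative, not the real API.

```go
package main

import "fmt"

type UserID int

type Node struct {
	IPs  []string
	User UserID
}

// canHaveTag mirrors the two-step check: a fast path over a pre-resolved owner
// "IP set" (a plain map here), then a user-based fallback that covers a brand-new
// node registering with --advertise-tags before it has been assigned an IP.
func canHaveTag(n Node, tag string, ownerIPs map[string]map[string]bool, ownerUsers map[string][]UserID) bool {
	if ips, ok := ownerIPs[tag]; ok {
		for _, ip := range n.IPs {
			if ips[ip] {
				return true
			}
		}
	}
	for _, u := range ownerUsers[tag] {
		if u == n.User {
			return true
		}
	}
	return false
}

func main() {
	ownerIPs := map[string]map[string]bool{"tag:server": {"100.64.0.1": true}}
	ownerUsers := map[string][]UserID{"tag:server": {1}}

	existing := Node{IPs: []string{"100.64.0.1"}, User: 1}
	registering := Node{User: 1} // no IP assigned yet

	fmt.Println(canHaveTag(existing, "tag:server", ownerIPs, ownerUsers))    // true via IP set
	fmt.Println(canHaveTag(registering, "tag:server", ownerIPs, ownerUsers)) // true via user fallback
}
```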
+func (pm *PolicyManager) TagExists(tag string) bool { + if pm == nil || pm.pol == nil { + return false + } + + pm.mu.Lock() + defer pm.mu.Unlock() + + _, exists := pm.pol.TagOwners[Tag(tag)] + + return exists +} + func (pm *PolicyManager) NodeCanApproveRoute(node types.NodeView, route netip.Prefix) bool { if pm == nil { return false @@ -664,14 +814,14 @@ func (pm *PolicyManager) invalidateAutogroupSelfCache(oldNodes, newNodes views.S // Check for removed nodes for nodeID, oldNode := range oldNodeMap { if _, exists := newNodeMap[nodeID]; !exists { - affectedUsers[oldNode.User().ID] = struct{}{} + affectedUsers[oldNode.User().ID()] = struct{}{} } } // Check for added nodes for nodeID, newNode := range newNodeMap { if _, exists := oldNodeMap[nodeID]; !exists { - affectedUsers[newNode.User().ID] = struct{}{} + affectedUsers[newNode.User().ID()] = struct{}{} } } @@ -679,26 +829,26 @@ func (pm *PolicyManager) invalidateAutogroupSelfCache(oldNodes, newNodes views.S for nodeID, newNode := range newNodeMap { if oldNode, exists := oldNodeMap[nodeID]; exists { // Check if user changed - if oldNode.User().ID != newNode.User().ID { - affectedUsers[oldNode.User().ID] = struct{}{} - affectedUsers[newNode.User().ID] = struct{}{} + if oldNode.User().ID() != newNode.User().ID() { + affectedUsers[oldNode.User().ID()] = struct{}{} + affectedUsers[newNode.User().ID()] = struct{}{} } // Check if tag status changed if oldNode.IsTagged() != newNode.IsTagged() { - affectedUsers[newNode.User().ID] = struct{}{} + affectedUsers[newNode.User().ID()] = struct{}{} } // Check if IPs changed (simple check - could be more sophisticated) oldIPs := oldNode.IPs() newIPs := newNode.IPs() if len(oldIPs) != len(newIPs) { - affectedUsers[newNode.User().ID] = struct{}{} + affectedUsers[newNode.User().ID()] = struct{}{} } else { // Check if any IPs are different for i, oldIP := range oldIPs { if i >= len(newIPs) || oldIP != newIPs[i] { - affectedUsers[newNode.User().ID] = struct{}{} + affectedUsers[newNode.User().ID()] = struct{}{} break } } @@ -717,7 +867,7 @@ func (pm *PolicyManager) invalidateAutogroupSelfCache(oldNodes, newNodes views.S // Check in new nodes first for _, node := range newNodes.All() { if node.ID() == nodeID { - nodeUserID = node.User().ID + nodeUserID = node.User().ID() found = true break } @@ -727,7 +877,7 @@ func (pm *PolicyManager) invalidateAutogroupSelfCache(oldNodes, newNodes views.S if !found { for _, node := range oldNodes.All() { if node.ID() == nodeID { - nodeUserID = node.User().ID + nodeUserID = node.User().ID() found = true break } @@ -801,3 +951,126 @@ func (pm *PolicyManager) invalidateGlobalPolicyCache(newNodes views.Slice[types. } } } + +// flattenTags flattens the TagOwners by resolving nested tags and detecting cycles. +// It will return a Owners list where all the Tag types have been resolved to their underlying Owners. 
+func flattenTags(tagOwners TagOwners, tag Tag, visiting map[Tag]bool, chain []Tag) (Owners, error) { + if visiting[tag] { + cycleStart := 0 + + for i, t := range chain { + if t == tag { + cycleStart = i + break + } + } + + cycleTags := make([]string, len(chain[cycleStart:])) + for i, t := range chain[cycleStart:] { + cycleTags[i] = string(t) + } + + slices.Sort(cycleTags) + + return nil, fmt.Errorf("%w: %s", ErrCircularReference, strings.Join(cycleTags, " -> ")) + } + + visiting[tag] = true + + chain = append(chain, tag) + defer delete(visiting, tag) + + var result Owners + + for _, owner := range tagOwners[tag] { + switch o := owner.(type) { + case *Tag: + if _, ok := tagOwners[*o]; !ok { + return nil, fmt.Errorf("tag %q %w %q", tag, ErrUndefinedTagReference, *o) + } + + nested, err := flattenTags(tagOwners, *o, visiting, chain) + if err != nil { + return nil, err + } + + result = append(result, nested...) + default: + result = append(result, owner) + } + } + + return result, nil +} + +// flattenTagOwners flattens all TagOwners by resolving nested tags and detecting cycles. +// It will return a new TagOwners map where all the Tag types have been resolved to their underlying Owners. +func flattenTagOwners(tagOwners TagOwners) (TagOwners, error) { + ret := make(TagOwners) + + for tag := range tagOwners { + flattened, err := flattenTags(tagOwners, tag, make(map[Tag]bool), nil) + if err != nil { + return nil, err + } + + slices.SortFunc(flattened, func(a, b Owner) int { + return cmp.Compare(a.String(), b.String()) + }) + ret[tag] = slices.CompactFunc(flattened, func(a, b Owner) bool { + return a.String() == b.String() + }) + } + + return ret, nil +} + +// resolveTagOwners resolves the TagOwners to a map of Tag to netipx.IPSet. +// The resulting map can be used to quickly look up the IPSet for a given Tag. +// It is intended for internal use in a PolicyManager. +func resolveTagOwners(p *Policy, users types.Users, nodes views.Slice[types.NodeView]) (map[Tag]*netipx.IPSet, error) { + if p == nil { + return make(map[Tag]*netipx.IPSet), nil + } + + if len(p.TagOwners) == 0 { + return make(map[Tag]*netipx.IPSet), nil + } + + ret := make(map[Tag]*netipx.IPSet) + + tagOwners, err := flattenTagOwners(p.TagOwners) + if err != nil { + return nil, err + } + + for tag, owners := range tagOwners { + var ips netipx.IPSetBuilder + + for _, owner := range owners { + switch o := owner.(type) { + case *Tag: + // After flattening, Tag types should not appear in the owners list. + // If they do, skip them as they represent already-resolved references. + + case Alias: + // If it does not resolve, that means the tag is not associated with any IP addresses. 
+ resolved, _ := o.Resolve(p, users, nodes) + ips.AddSet(resolved) + + default: + // Should never happen - after flattening, all owners should be Alias types + return nil, fmt.Errorf("%w: %v", ErrInvalidTagOwner, owner) + } + } + + ipSet, err := ips.IPSet() + if err != nil { + return nil, err + } + + ret[tag] = ipSet + } + + return ret, nil +} diff --git a/hscontrol/policy/v2/policy_test.go b/hscontrol/policy/v2/policy_test.go index bbde136e..26b0d141 100644 --- a/hscontrol/policy/v2/policy_test.go +++ b/hscontrol/policy/v2/policy_test.go @@ -11,6 +11,7 @@ import ( "github.com/stretchr/testify/require" "gorm.io/gorm" "tailscale.com/tailcfg" + "tailscale.com/types/ptr" ) func node(name, ipv4, ipv6 string, user types.User, hostinfo *tailcfg.Hostinfo) *types.Node { @@ -19,8 +20,8 @@ func node(name, ipv4, ipv6 string, user types.User, hostinfo *tailcfg.Hostinfo) Hostname: name, IPv4: ap(ipv4), IPv6: ap(ipv6), - User: user, - UserID: user.ID, + User: ptr.To(user), + UserID: ptr.To(user.ID), Hostinfo: hostinfo, } } @@ -456,21 +457,21 @@ func TestAutogroupSelfWithOtherRules(t *testing.T) { Hostname: "test-1-device", IPv4: ap("100.64.0.1"), IPv6: ap("fd7a:115c:a1e0::1"), - User: users[0], - UserID: users[0].ID, + User: ptr.To(users[0]), + UserID: ptr.To(users[0].ID), Hostinfo: &tailcfg.Hostinfo{}, } // test-2 has a router device with tag:node-router test2RouterNode := &types.Node{ - ID: 2, - Hostname: "test-2-router", - IPv4: ap("100.64.0.2"), - IPv6: ap("fd7a:115c:a1e0::2"), - User: users[1], - UserID: users[1].ID, - ForcedTags: []string{"tag:node-router"}, - Hostinfo: &tailcfg.Hostinfo{}, + ID: 2, + Hostname: "test-2-router", + IPv4: ap("100.64.0.2"), + IPv6: ap("fd7a:115c:a1e0::2"), + User: ptr.To(users[1]), + UserID: ptr.To(users[1].ID), + Tags: []string{"tag:node-router"}, + Hostinfo: &tailcfg.Hostinfo{}, } nodes := types.Nodes{test1Node, test2RouterNode} @@ -519,3 +520,371 @@ func TestAutogroupSelfWithOtherRules(t *testing.T) { require.NoError(t, err) require.NotEmpty(t, rules, "test-1 should have filter rules from both ACL rules") } + +// TestAutogroupSelfPolicyUpdateTriggersMapResponse verifies that when a policy with +// autogroup:self is updated, SetPolicy returns true to trigger MapResponse updates, +// even if the global filter hash didn't change (which is always empty for autogroup:self). +// This fixes the issue where policy updates would clear caches but not trigger updates, +// leaving nodes with stale filter rules until reconnect. 
+func TestAutogroupSelfPolicyUpdateTriggersMapResponse(t *testing.T) { + users := types.Users{ + {Model: gorm.Model{ID: 1}, Name: "test-1", Email: "test-1@example.com"}, + {Model: gorm.Model{ID: 2}, Name: "test-2", Email: "test-2@example.com"}, + } + + test1Node := &types.Node{ + ID: 1, + Hostname: "test-1-device", + IPv4: ap("100.64.0.1"), + IPv6: ap("fd7a:115c:a1e0::1"), + User: ptr.To(users[0]), + UserID: ptr.To(users[0].ID), + Hostinfo: &tailcfg.Hostinfo{}, + } + + test2Node := &types.Node{ + ID: 2, + Hostname: "test-2-device", + IPv4: ap("100.64.0.2"), + IPv6: ap("fd7a:115c:a1e0::2"), + User: ptr.To(users[1]), + UserID: ptr.To(users[1].ID), + Hostinfo: &tailcfg.Hostinfo{}, + } + + nodes := types.Nodes{test1Node, test2Node} + + // Initial policy with autogroup:self + initialPolicy := `{ + "acls": [ + { + "action": "accept", + "src": ["autogroup:member"], + "dst": ["autogroup:self:*"] + } + ] + }` + + pm, err := NewPolicyManager([]byte(initialPolicy), users, nodes.ViewSlice()) + require.NoError(t, err) + require.True(t, pm.usesAutogroupSelf, "policy should use autogroup:self") + + // Get initial filter rules for test-1 (should be cached) + rules1, err := pm.FilterForNode(test1Node.View()) + require.NoError(t, err) + require.NotEmpty(t, rules1, "test-1 should have filter rules") + + // Update policy with a different ACL that still results in empty global filter + // (only autogroup:self rules, which compile to empty global filter) + // We add a comment/description change by adding groups (which don't affect filter compilation) + updatedPolicy := `{ + "groups": { + "group:test": ["test-1@example.com"] + }, + "acls": [ + { + "action": "accept", + "src": ["autogroup:member"], + "dst": ["autogroup:self:*"] + } + ] + }` + + // SetPolicy should return true even though global filter hash didn't change + policyChanged, err := pm.SetPolicy([]byte(updatedPolicy)) + require.NoError(t, err) + require.True(t, policyChanged, "SetPolicy should return true when policy content changes, even if global filter hash unchanged (autogroup:self)") + + // Verify that caches were cleared and new rules are generated + // The cache should be empty, so FilterForNode will recompile + rules2, err := pm.FilterForNode(test1Node.View()) + require.NoError(t, err) + require.NotEmpty(t, rules2, "test-1 should have filter rules after policy update") + + // Verify that the policy hash tracking works - a second identical update should return false + policyChanged2, err := pm.SetPolicy([]byte(updatedPolicy)) + require.NoError(t, err) + require.False(t, policyChanged2, "SetPolicy should return false when policy content hasn't changed") +} + +// TestTagPropagationToPeerMap tests that when a node's tags change, +// the peer map is correctly updated. 
This is a regression test for +// https://github.com/juanfont/headscale/issues/2389 +func TestTagPropagationToPeerMap(t *testing.T) { + users := types.Users{ + {Model: gorm.Model{ID: 1}, Name: "user1", Email: "user1@headscale.net"}, + {Model: gorm.Model{ID: 2}, Name: "user2", Email: "user2@headscale.net"}, + } + + // Policy: user2 can access tag:web nodes + policy := `{ + "tagOwners": { + "tag:web": ["user1@headscale.net"], + "tag:internal": ["user1@headscale.net"] + }, + "acls": [ + { + "action": "accept", + "src": ["user2@headscale.net"], + "dst": ["user2@headscale.net:*"] + }, + { + "action": "accept", + "src": ["user2@headscale.net"], + "dst": ["tag:web:*"] + }, + { + "action": "accept", + "src": ["tag:web"], + "dst": ["user2@headscale.net:*"] + } + ] + }` + + // user1's node starts with tag:web and tag:internal + user1Node := &types.Node{ + ID: 1, + Hostname: "user1-node", + IPv4: ap("100.64.0.1"), + IPv6: ap("fd7a:115c:a1e0::1"), + User: ptr.To(users[0]), + UserID: ptr.To(users[0].ID), + Tags: []string{"tag:web", "tag:internal"}, + } + + // user2's node (no tags) + user2Node := &types.Node{ + ID: 2, + Hostname: "user2-node", + IPv4: ap("100.64.0.2"), + IPv6: ap("fd7a:115c:a1e0::2"), + User: ptr.To(users[1]), + UserID: ptr.To(users[1].ID), + } + + initialNodes := types.Nodes{user1Node, user2Node} + + pm, err := NewPolicyManager([]byte(policy), users, initialNodes.ViewSlice()) + require.NoError(t, err) + + // Initial state: user2 should see user1 as a peer (user1 has tag:web) + initialPeerMap := pm.BuildPeerMap(initialNodes.ViewSlice()) + + // Check user2's peers - should include user1 + user2Peers := initialPeerMap[user2Node.ID] + require.Len(t, user2Peers, 1, "user2 should have 1 peer initially (user1 with tag:web)") + require.Equal(t, user1Node.ID, user2Peers[0].ID(), "user2's peer should be user1") + + // Check user1's peers - should include user2 (bidirectional ACL) + user1Peers := initialPeerMap[user1Node.ID] + require.Len(t, user1Peers, 1, "user1 should have 1 peer initially (user2)") + require.Equal(t, user2Node.ID, user1Peers[0].ID(), "user1's peer should be user2") + + // Now change user1's tags: remove tag:web, keep only tag:internal + user1NodeUpdated := &types.Node{ + ID: 1, + Hostname: "user1-node", + IPv4: ap("100.64.0.1"), + IPv6: ap("fd7a:115c:a1e0::1"), + User: ptr.To(users[0]), + UserID: ptr.To(users[0].ID), + Tags: []string{"tag:internal"}, // tag:web removed! 
+ } + + updatedNodes := types.Nodes{user1NodeUpdated, user2Node} + + // SetNodes should detect the tag change + changed, err := pm.SetNodes(updatedNodes.ViewSlice()) + require.NoError(t, err) + require.True(t, changed, "SetNodes should return true when tags change") + + // After tag change: user2 should NOT see user1 as a peer anymore + // (no ACL allows user2 to access tag:internal) + updatedPeerMap := pm.BuildPeerMap(updatedNodes.ViewSlice()) + + // Check user2's peers - should be empty now + user2PeersAfter := updatedPeerMap[user2Node.ID] + require.Empty(t, user2PeersAfter, "user2 should have no peers after tag:web is removed from user1") + + // Check user1's peers - should also be empty + user1PeersAfter := updatedPeerMap[user1Node.ID] + require.Empty(t, user1PeersAfter, "user1 should have no peers after tag:web is removed") + + // Also verify MatchersForNode returns non-empty matchers and ReduceNodes filters correctly + // This simulates what buildTailPeers does in the mapper + matchersForUser2, err := pm.MatchersForNode(user2Node.View()) + require.NoError(t, err) + require.NotEmpty(t, matchersForUser2, "MatchersForNode should return non-empty matchers (at least self-access rule)") + + // Test ReduceNodes logic with the updated nodes and matchers + // This is what buildTailPeers does - it takes peers from ListPeers (which might include user1) + // and filters them using ReduceNodes with the updated matchers + // Inline the ReduceNodes logic to avoid import cycle + user2View := user2Node.View() + user1UpdatedView := user1NodeUpdated.View() + + // Check if user2 can access user1 OR user1 can access user2 + canAccess := user2View.CanAccess(matchersForUser2, user1UpdatedView) || + user1UpdatedView.CanAccess(matchersForUser2, user2View) + + require.False(t, canAccess, "user2 should NOT be able to access user1 after tag:web is removed (ReduceNodes should filter out)") +} + +// TestAutogroupSelfWithAdminOverride reproduces issue #2990: +// When autogroup:self is combined with an admin rule (group:admin -> *:*), +// tagged nodes become invisible to admins because BuildPeerMap uses asymmetric +// peer visibility in the autogroup:self path. +// +// The fix requires symmetric visibility: if admin can access tagged node, +// BOTH admin and tagged node should see each other as peers. +func TestAutogroupSelfWithAdminOverride(t *testing.T) { + users := types.Users{ + {Model: gorm.Model{ID: 1}, Name: "admin", Email: "admin@example.com"}, + {Model: gorm.Model{ID: 2}, Name: "user1", Email: "user1@example.com"}, + } + + // Admin has a regular device + adminNode := &types.Node{ + ID: 1, + Hostname: "admin-device", + IPv4: ap("100.64.0.1"), + IPv6: ap("fd7a:115c:a1e0::1"), + User: ptr.To(users[0]), + UserID: ptr.To(users[0].ID), + Hostinfo: &tailcfg.Hostinfo{}, + } + + // user1 has a tagged server + user1TaggedNode := &types.Node{ + ID: 2, + Hostname: "user1-server", + IPv4: ap("100.64.0.2"), + IPv6: ap("fd7a:115c:a1e0::2"), + User: ptr.To(users[1]), + UserID: ptr.To(users[1].ID), + Tags: []string{"tag:server"}, + Hostinfo: &tailcfg.Hostinfo{}, + } + + nodes := types.Nodes{adminNode, user1TaggedNode} + + // Policy from issue #2990: + // - group:admin has full access to everything (*:*) + // - autogroup:member -> autogroup:self (allows users to see their own devices) + // + // Bug: The tagged server becomes invisible to admin because: + // 1. Admin can access tagged server (via *:* rule) + // 2. Tagged server CANNOT access admin (no rule for that) + // 3. 
With asymmetric logic, tagged server is not added to admin's peer list + policy := `{ + "groups": { + "group:admin": ["admin@example.com"] + }, + "tagOwners": { + "tag:server": ["user1@example.com"] + }, + "acls": [ + { + "action": "accept", + "src": ["group:admin"], + "dst": ["*:*"] + }, + { + "action": "accept", + "src": ["autogroup:member"], + "dst": ["autogroup:self:*"] + } + ] + }` + + pm, err := NewPolicyManager([]byte(policy), users, nodes.ViewSlice()) + require.NoError(t, err) + + peerMap := pm.BuildPeerMap(nodes.ViewSlice()) + + // Admin should see the tagged server as a peer (via group:admin -> *:* rule) + adminPeers := peerMap[adminNode.ID] + require.True(t, slices.ContainsFunc(adminPeers, func(n types.NodeView) bool { + return n.ID() == user1TaggedNode.ID + }), "admin should see tagged server as peer via *:* rule (issue #2990)") + + // Tagged server should also see admin as a peer (symmetric visibility) + // Even though tagged server cannot ACCESS admin, it should still SEE admin + // because admin CAN access it. This is required for proper network operation. + taggedPeers := peerMap[user1TaggedNode.ID] + require.True(t, slices.ContainsFunc(taggedPeers, func(n types.NodeView) bool { + return n.ID() == adminNode.ID + }), "tagged server should see admin as peer (symmetric visibility)") +} + +// TestAutogroupSelfSymmetricVisibility verifies that peer visibility is symmetric: +// if node A can access node B, then both A and B should see each other as peers. +// This is the same behavior as the global filter path. +func TestAutogroupSelfSymmetricVisibility(t *testing.T) { + users := types.Users{ + {Model: gorm.Model{ID: 1}, Name: "user1", Email: "user1@example.com"}, + {Model: gorm.Model{ID: 2}, Name: "user2", Email: "user2@example.com"}, + } + + // user1 has device A + deviceA := &types.Node{ + ID: 1, + Hostname: "device-a", + IPv4: ap("100.64.0.1"), + IPv6: ap("fd7a:115c:a1e0::1"), + User: ptr.To(users[0]), + UserID: ptr.To(users[0].ID), + Hostinfo: &tailcfg.Hostinfo{}, + } + + // user2 has device B (tagged) + deviceB := &types.Node{ + ID: 2, + Hostname: "device-b", + IPv4: ap("100.64.0.2"), + IPv6: ap("fd7a:115c:a1e0::2"), + User: ptr.To(users[1]), + UserID: ptr.To(users[1].ID), + Tags: []string{"tag:web"}, + Hostinfo: &tailcfg.Hostinfo{}, + } + + nodes := types.Nodes{deviceA, deviceB} + + // One-way rule: user1 can access tag:web, but tag:web cannot access user1 + policy := `{ + "tagOwners": { + "tag:web": ["user2@example.com"] + }, + "acls": [ + { + "action": "accept", + "src": ["user1@example.com"], + "dst": ["tag:web:*"] + }, + { + "action": "accept", + "src": ["autogroup:member"], + "dst": ["autogroup:self:*"] + } + ] + }` + + pm, err := NewPolicyManager([]byte(policy), users, nodes.ViewSlice()) + require.NoError(t, err) + + peerMap := pm.BuildPeerMap(nodes.ViewSlice()) + + // Device A (user1) should see device B (tag:web) as peer + aPeers := peerMap[deviceA.ID] + require.True(t, slices.ContainsFunc(aPeers, func(n types.NodeView) bool { + return n.ID() == deviceB.ID + }), "device A should see device B as peer (user1 -> tag:web rule)") + + // Device B (tag:web) should ALSO see device A as peer (symmetric visibility) + // Even though B cannot ACCESS A, B should still SEE A as a peer + bPeers := peerMap[deviceB.ID] + require.True(t, slices.ContainsFunc(bPeers, func(n types.NodeView) bool { + return n.ID() == deviceA.ID + }), "device B should see device A as peer (symmetric visibility)") +} diff --git a/hscontrol/policy/v2/types.go b/hscontrol/policy/v2/types.go index 
2d2f2f19..ce968225 100644 --- a/hscontrol/policy/v2/types.go +++ b/hscontrol/policy/v2/types.go @@ -9,7 +9,6 @@ import ( "strings" "github.com/go-json-experiment/json" - "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/util" "github.com/prometheus/common/model" @@ -34,6 +33,19 @@ const Wildcard = Asterix(0) var ErrAutogroupSelfRequiresPerNodeResolution = errors.New("autogroup:self requires per-node resolution and cannot be resolved in this context") +var ErrCircularReference = errors.New("circular reference detected") + +var ErrUndefinedTagReference = errors.New("references undefined tag") + +// SSH validation errors. +var ( + ErrSSHTagSourceToUserDest = errors.New("tags in SSH source cannot access user-owned devices") + ErrSSHUserDestRequiresSameUser = errors.New("user destination requires source to contain only that same user") + ErrSSHAutogroupSelfRequiresUserSource = errors.New("autogroup:self destination requires source to contain only users or groups, not tags or autogroup:tagged") + ErrSSHTagSourceToAutogroupMember = errors.New("tags in SSH source cannot access autogroup:member (user-owned devices)") + ErrSSHWildcardDestination = errors.New("wildcard (*) is not supported as SSH destination") +) + type Asterix int func (a Asterix) Validate() error { @@ -201,12 +213,17 @@ func (u Username) Resolve(_ *Policy, users types.Users, nodes views.Slice[types. } for _, node := range nodes.All() { - // Skip tagged nodes + // Skip tagged nodes - they are identified by tags, not users if node.IsTagged() { continue } - if node.User().ID == user.ID { + // Skip nodes without a user (defensive check for tests) + if !node.User().Valid() { + continue + } + + if node.User().ID() == user.ID { node.AppendToIPSet(&ips) } } @@ -298,35 +315,11 @@ func (t *Tag) UnmarshalJSON(b []byte) error { func (t Tag) Resolve(p *Policy, users types.Users, nodes views.Slice[types.NodeView]) (*netipx.IPSet, error) { var ips netipx.IPSetBuilder - // TODO(kradalby): This is currently resolved twice, and should be resolved once. - // It is added temporary until we sort out the story on how and when we resolve tags - // from the three places they can be "approved": - // - As part of a PreAuthKey (handled in HasTag) - // - As part of ForcedTags (set via CLI) (handled in HasTag) - // - As part of HostInfo.RequestTags and approved by policy (this is happening here) - // Part of #2417 - tagMap, err := resolveTagOwners(p, users, nodes) - if err != nil { - return nil, err - } - for _, node := range nodes.All() { - // Check if node has this tag in all tags (ForcedTags + AuthKey.Tags) - if slices.Contains(node.Tags(), string(t)) { + // Check if node has this tag + if node.HasTag(string(t)) { node.AppendToIPSet(&ips) } - - // TODO(kradalby): remove as part of #2417, see comment above - if tagMap != nil { - if tagips, ok := tagMap[t]; ok && node.InIPSet(tagips) && node.Hostinfo().Valid() { - for _, tag := range node.RequestTagsSlice().All() { - if tag == string(t) { - node.AppendToIPSet(&ips) - break - } - } - } - } } return ips.IPSet() @@ -336,6 +329,10 @@ func (t Tag) CanBeAutoApprover() bool { return true } +func (t Tag) CanBeTagOwner() bool { + return true +} + func (t Tag) String() string { return string(t) } @@ -532,61 +529,26 @@ func (ag AutoGroup) Resolve(p *Policy, users types.Users, nodes views.Slice[type return util.TheInternet(), nil case AutoGroupMember: - // autogroup:member represents all untagged devices in the tailnet. 
- tagMap, err := resolveTagOwners(p, users, nodes) - if err != nil { - return nil, err - } - for _, node := range nodes.All() { // Skip if node is tagged if node.IsTagged() { continue } - // Skip if node has any allowed requested tags - hasAllowedTag := false - if node.RequestTagsSlice().Len() != 0 { - for _, tag := range node.RequestTagsSlice().All() { - if _, ok := tagMap[Tag(tag)]; ok { - hasAllowedTag = true - break - } - } - } - if hasAllowedTag { - continue - } - - // Node is a member if it has no forced tags and no allowed requested tags + // Node is a member if it is not tagged node.AppendToIPSet(&build) } return build.IPSet() case AutoGroupTagged: - // autogroup:tagged represents all devices with a tag in the tailnet. - tagMap, err := resolveTagOwners(p, users, nodes) - if err != nil { - return nil, err - } - for _, node := range nodes.All() { // Include if node is tagged - if node.IsTagged() { - node.AppendToIPSet(&build) + if !node.IsTagged() { continue } - // Include if node has any allowed requested tags - if node.RequestTagsSlice().Len() != 0 { - for _, tag := range node.RequestTagsSlice().All() { - if _, ok := tagMap[Tag(tag)]; ok { - node.AppendToIPSet(&build) - break - } - } - } + node.AppendToIPSet(&build) } return build.IPSet() @@ -910,6 +872,7 @@ func (ve *AutoApproverEnc) UnmarshalJSON(b []byte) error { type Owner interface { CanBeTagOwner() bool UnmarshalJSON([]byte) error + String() string } // OwnerEnc is used to deserialize a Owner. @@ -958,6 +921,8 @@ func (o Owners) MarshalJSON() ([]byte, error) { owners[i] = string(*v) case *Group: owners[i] = string(*v) + case *Tag: + owners[i] = string(*v) default: return nil, fmt.Errorf("unknown owner type: %T", v) } @@ -972,6 +937,8 @@ func parseOwner(s string) (Owner, error) { return ptr.To(Username(s)), nil case isGroup(s): return ptr.To(Group(s)), nil + case isTag(s): + return ptr.To(Tag(s)), nil } return nil, fmt.Errorf(`Invalid Owner %q. An alias must be one of the following types: @@ -1007,7 +974,7 @@ func (g Groups) Contains(group *Group) error { // with "group:". If any group name is invalid, an error is returned. func (g *Groups) UnmarshalJSON(b []byte) error { // First unmarshal as a generic map to validate group names first - var rawMap map[string]interface{} + var rawMap map[string]any if err := json.Unmarshal(b, &rawMap); err != nil { return err } @@ -1024,7 +991,7 @@ func (g *Groups) UnmarshalJSON(b []byte) error { rawGroups := make(map[string][]string) for key, value := range rawMap { switch v := value.(type) { - case []interface{}: + case []any: // Convert []interface{} to []string var stringSlice []string for _, item := range v { @@ -1129,6 +1096,8 @@ func (to TagOwners) MarshalJSON() ([]byte, error) { ownerStrs[i] = string(*v) case *Group: ownerStrs[i] = string(*v) + case *Tag: + ownerStrs[i] = string(*v) default: return nil, fmt.Errorf("unknown owner type: %T", v) } @@ -1157,41 +1126,6 @@ func (to TagOwners) Contains(tagOwner *Tag) error { return fmt.Errorf(`Tag %q is not defined in the Policy, please define or remove the reference to it`, tagOwner) } -// resolveTagOwners resolves the TagOwners to a map of Tag to netipx.IPSet. -// The resulting map can be used to quickly look up the IPSet for a given Tag. -// It is intended for internal use in a PolicyManager. 
-func resolveTagOwners(p *Policy, users types.Users, nodes views.Slice[types.NodeView]) (map[Tag]*netipx.IPSet, error) { - if p == nil { - return nil, nil - } - - ret := make(map[Tag]*netipx.IPSet) - - for tag, owners := range p.TagOwners { - var ips netipx.IPSetBuilder - - for _, owner := range owners { - o, ok := owner.(Alias) - if !ok { - // Should never happen - return nil, fmt.Errorf("owner %v is not an Alias", owner) - } - // If it does not resolve, that means the tag is not associated with any IP addresses. - resolved, _ := o.Resolve(p, users, nodes) - ips.AddSet(resolved) - } - - ipSet, err := ips.IPSet() - if err != nil { - return nil, err - } - - ret[tag] = ipSet - } - - return ret, nil -} - type AutoApproverPolicy struct { Routes map[netip.Prefix]AutoApprovers `json:"routes,omitempty"` ExitNode AutoApprovers `json:"exitNode,omitempty"` @@ -1688,6 +1622,63 @@ func validateAutogroupForSSHUser(user *AutoGroup) error { return nil } +// validateSSHSrcDstCombination validates that SSH source/destination combinations +// follow Tailscale's security model: +// - Destination can be: tags, autogroup:self (if source is users/groups), or same-user +// - Tags/autogroup:tagged CANNOT SSH to user destinations +// - Username destinations require the source to be that same single user only. +func validateSSHSrcDstCombination(sources SSHSrcAliases, destinations SSHDstAliases) error { + // Categorize source types + srcHasTaggedEntities := false + srcHasGroups := false + srcUsernames := make(map[string]bool) + + for _, src := range sources { + switch v := src.(type) { + case *Tag: + srcHasTaggedEntities = true + case *AutoGroup: + if v.Is(AutoGroupTagged) { + srcHasTaggedEntities = true + } else if v.Is(AutoGroupMember) { + srcHasGroups = true // autogroup:member is like a group of users + } + case *Group: + srcHasGroups = true + case *Username: + srcUsernames[string(*v)] = true + } + } + + // Check destinations against source constraints + for _, dst := range destinations { + switch v := dst.(type) { + case *Username: + // Rule: Tags/autogroup:tagged CANNOT SSH to user destinations + if srcHasTaggedEntities { + return fmt.Errorf("%w (%s); use autogroup:tagged or specific tags as destinations instead", + ErrSSHTagSourceToUserDest, *v) + } + // Rule: Username destination requires source to be that same single user only + if srcHasGroups || len(srcUsernames) != 1 || !srcUsernames[string(*v)] { + return fmt.Errorf("%w %q; use autogroup:self instead for same-user SSH access", + ErrSSHUserDestRequiresSameUser, *v) + } + case *AutoGroup: + // Rule: autogroup:self requires source to NOT contain tags + if v.Is(AutoGroupSelf) && srcHasTaggedEntities { + return ErrSSHAutogroupSelfRequiresUserSource + } + // Rule: autogroup:member (user-owned devices) cannot be accessed by tagged entities + if v.Is(AutoGroupMember) && srcHasTaggedEntities { + return ErrSSHTagSourceToAutogroupMember + } + } + } + + return nil +} + // validate reports if there are any errors in a policy after // the unmarshaling process. 
// It runs through all rules and checks if there are any inconsistencies @@ -1829,6 +1820,12 @@ func (p *Policy) validate() error { } } } + + // Validate SSH source/destination combinations follow Tailscale's security model + err := validateSSHSrcDstCombination(ssh.Sources, ssh.Destinations) + if err != nil { + errs = append(errs, err) + } } for _, tagOwners := range p.TagOwners { @@ -1839,10 +1836,23 @@ func (p *Policy) validate() error { if err := p.Groups.Contains(g); err != nil { errs = append(errs, err) } + case *Tag: + t := tagOwner + + err := p.TagOwners.Contains(t) + if err != nil { + errs = append(errs, err) + } } } } + // Validate tag ownership chains for circular references and undefined tags. + _, err := flattenTagOwners(p.TagOwners) + if err != nil { + errs = append(errs, err) + } + for _, approvers := range p.AutoApprovers.Routes { for _, approver := range approvers { switch approver := approver.(type) { @@ -1948,15 +1958,12 @@ func (a *SSHDstAliases) UnmarshalJSON(b []byte) error { *a = make([]Alias, len(aliases)) for i, alias := range aliases { switch alias.Alias.(type) { - case *Username, *Tag, *AutoGroup, *Host, - // Asterix and Group is actually not supposed to be supported, - // however we do not support autogroups at the moment - // so we will leave it in as there is no other option - // to dynamically give all access - // https://tailscale.com/kb/1193/tailscale-ssh#dst - // TODO(kradalby): remove this when we support autogroup:tagged and autogroup:member - Asterix: + case *Username, *Tag, *AutoGroup, *Host: (*a)[i] = alias.Alias + case Asterix: + return fmt.Errorf("%w; use 'autogroup:member' for user-owned devices, "+ + "'autogroup:tagged' for tagged devices, or specific tags/users", + ErrSSHWildcardDestination) default: return fmt.Errorf( "alias %T is not supported for SSH destination", @@ -1986,6 +1993,8 @@ func (a SSHDstAliases) MarshalJSON() ([]byte, error) { case *Host: aliases[i] = string(*v) case Asterix: + // Marshal wildcard as "*" so it gets rejected during unmarshal + // with a proper error message explaining alternatives aliases[i] = "*" default: return nil, fmt.Errorf("unknown SSH destination alias type: %T", v) diff --git a/hscontrol/policy/v2/types_test.go b/hscontrol/policy/v2/types_test.go index d5a8730a..542c9b2c 100644 --- a/hscontrol/policy/v2/types_test.go +++ b/hscontrol/policy/v2/types_test.go @@ -664,7 +664,8 @@ func TestUnmarshalPolicy(t *testing.T) { input: ` { "tagOwners": { - "tag:web": ["admin@example.com"] + "tag:web": ["admin@example.com"], + "tag:server": ["admin@example.com"] }, "ssh": [ { @@ -673,7 +674,7 @@ func TestUnmarshalPolicy(t *testing.T) { "tag:web" ], "dst": [ - "admin@example.com" + "tag:server" ], "users": ["*"] } @@ -682,7 +683,8 @@ func TestUnmarshalPolicy(t *testing.T) { `, want: &Policy{ TagOwners: TagOwners{ - Tag("tag:web"): Owners{ptr.To(Username("admin@example.com"))}, + Tag("tag:web"): Owners{ptr.To(Username("admin@example.com"))}, + Tag("tag:server"): Owners{ptr.To(Username("admin@example.com"))}, }, SSHs: []SSH{ { @@ -691,7 +693,7 @@ func TestUnmarshalPolicy(t *testing.T) { tp("tag:web"), }, Destinations: SSHDstAliases{ - ptr.To(Username("admin@example.com")), + tp("tag:server"), }, Users: []SSHUser{ SSHUser("*"), @@ -714,7 +716,7 @@ func TestUnmarshalPolicy(t *testing.T) { "group:admins" ], "dst": [ - "admin@example.com" + "autogroup:self" ], "users": ["root"], "checkPeriod": "24h" @@ -733,7 +735,7 @@ func TestUnmarshalPolicy(t *testing.T) { gp("group:admins"), }, Destinations: SSHDstAliases{ - 
ptr.To(Username("admin@example.com")), + agp("autogroup:self"), }, Users: []SSHUser{ SSHUser("root"), @@ -1470,6 +1472,300 @@ func TestUnmarshalPolicy(t *testing.T) { }, }, }, + { + name: "tags-can-own-other-tags", + input: ` +{ + "tagOwners": { + "tag:bigbrother": [], + "tag:smallbrother": ["tag:bigbrother"], + }, + "acls": [ + { + "action": "accept", + "proto": "tcp", + "src": ["*"], + "dst": ["tag:smallbrother:9000"] + } + ] +} +`, + want: &Policy{ + TagOwners: TagOwners{ + Tag("tag:bigbrother"): {}, + Tag("tag:smallbrother"): {ptr.To(Tag("tag:bigbrother"))}, + }, + ACLs: []ACL{ + { + Action: "accept", + Protocol: "tcp", + Sources: Aliases{ + Wildcard, + }, + Destinations: []AliasWithPorts{ + { + Alias: ptr.To(Tag("tag:smallbrother")), + Ports: []tailcfg.PortRange{{First: 9000, Last: 9000}}, + }, + }, + }, + }, + }, + }, + { + name: "tag-owner-references-undefined-tag", + input: ` +{ + "tagOwners": { + "tag:child": ["tag:nonexistent"], + }, +} +`, + wantErr: `tag "tag:child" references undefined tag "tag:nonexistent"`, + }, + // SSH source/destination validation tests (#3009, #3010) + { + name: "ssh-tag-to-user-rejected", + input: ` +{ + "tagOwners": {"tag:server": ["admin@"]}, + "ssh": [{ + "action": "accept", + "src": ["tag:server"], + "dst": ["admin@"], + "users": ["autogroup:nonroot"] + }] +} +`, + wantErr: "tags in SSH source cannot access user-owned devices", + }, + { + name: "ssh-autogroup-tagged-to-user-rejected", + input: ` +{ + "ssh": [{ + "action": "accept", + "src": ["autogroup:tagged"], + "dst": ["admin@"], + "users": ["autogroup:nonroot"] + }] +} +`, + wantErr: "tags in SSH source cannot access user-owned devices", + }, + { + name: "ssh-tag-to-autogroup-self-rejected", + input: ` +{ + "tagOwners": {"tag:server": ["admin@"]}, + "ssh": [{ + "action": "accept", + "src": ["tag:server"], + "dst": ["autogroup:self"], + "users": ["autogroup:nonroot"] + }] +} +`, + wantErr: "autogroup:self destination requires source to contain only users or groups", + }, + { + name: "ssh-group-to-user-rejected", + input: ` +{ + "groups": {"group:admins": ["admin@", "user1@"]}, + "ssh": [{ + "action": "accept", + "src": ["group:admins"], + "dst": ["admin@"], + "users": ["autogroup:nonroot"] + }] +} +`, + wantErr: `user destination requires source to contain only that same user "admin@"`, + }, + { + name: "ssh-same-user-to-user-allowed", + input: ` +{ + "ssh": [{ + "action": "accept", + "src": ["admin@"], + "dst": ["admin@"], + "users": ["autogroup:nonroot"] + }] +} +`, + want: &Policy{ + SSHs: []SSH{ + { + Action: "accept", + Sources: SSHSrcAliases{up("admin@")}, + Destinations: SSHDstAliases{up("admin@")}, + Users: []SSHUser{SSHUser(AutoGroupNonRoot)}, + }, + }, + }, + }, + { + name: "ssh-group-to-autogroup-self-allowed", + input: ` +{ + "groups": {"group:admins": ["admin@", "user1@"]}, + "ssh": [{ + "action": "accept", + "src": ["group:admins"], + "dst": ["autogroup:self"], + "users": ["autogroup:nonroot"] + }] +} +`, + want: &Policy{ + Groups: Groups{ + Group("group:admins"): []Username{Username("admin@"), Username("user1@")}, + }, + SSHs: []SSH{ + { + Action: "accept", + Sources: SSHSrcAliases{gp("group:admins")}, + Destinations: SSHDstAliases{agp("autogroup:self")}, + Users: []SSHUser{SSHUser(AutoGroupNonRoot)}, + }, + }, + }, + }, + { + name: "ssh-autogroup-tagged-to-autogroup-member-rejected", + input: ` +{ + "ssh": [{ + "action": "accept", + "src": ["autogroup:tagged"], + "dst": ["autogroup:member"], + "users": ["autogroup:nonroot"] + }] +} +`, + wantErr: "tags in SSH source cannot access 
autogroup:member", + }, + { + name: "ssh-autogroup-tagged-to-autogroup-tagged-allowed", + input: ` +{ + "ssh": [{ + "action": "accept", + "src": ["autogroup:tagged"], + "dst": ["autogroup:tagged"], + "users": ["autogroup:nonroot"] + }] +} +`, + want: &Policy{ + SSHs: []SSH{ + { + Action: "accept", + Sources: SSHSrcAliases{agp("autogroup:tagged")}, + Destinations: SSHDstAliases{agp("autogroup:tagged")}, + Users: []SSHUser{SSHUser(AutoGroupNonRoot)}, + }, + }, + }, + }, + { + name: "ssh-wildcard-destination-rejected", + input: ` +{ + "groups": {"group:admins": ["admin@"]}, + "ssh": [{ + "action": "accept", + "src": ["group:admins"], + "dst": ["*"], + "users": ["autogroup:nonroot"] + }] +} +`, + wantErr: "wildcard (*) is not supported as SSH destination", + }, + { + name: "ssh-group-to-tag-allowed", + input: ` +{ + "tagOwners": {"tag:server": ["admin@"]}, + "groups": {"group:admins": ["admin@"]}, + "ssh": [{ + "action": "accept", + "src": ["group:admins"], + "dst": ["tag:server"], + "users": ["autogroup:nonroot"] + }] +} +`, + want: &Policy{ + TagOwners: TagOwners{ + Tag("tag:server"): Owners{up("admin@")}, + }, + Groups: Groups{ + Group("group:admins"): []Username{Username("admin@")}, + }, + SSHs: []SSH{ + { + Action: "accept", + Sources: SSHSrcAliases{gp("group:admins")}, + Destinations: SSHDstAliases{tp("tag:server")}, + Users: []SSHUser{SSHUser(AutoGroupNonRoot)}, + }, + }, + }, + }, + { + name: "ssh-user-to-tag-allowed", + input: ` +{ + "tagOwners": {"tag:server": ["admin@"]}, + "ssh": [{ + "action": "accept", + "src": ["admin@"], + "dst": ["tag:server"], + "users": ["autogroup:nonroot"] + }] +} +`, + want: &Policy{ + TagOwners: TagOwners{ + Tag("tag:server"): Owners{up("admin@")}, + }, + SSHs: []SSH{ + { + Action: "accept", + Sources: SSHSrcAliases{up("admin@")}, + Destinations: SSHDstAliases{tp("tag:server")}, + Users: []SSHUser{SSHUser(AutoGroupNonRoot)}, + }, + }, + }, + }, + { + name: "ssh-autogroup-member-to-autogroup-tagged-allowed", + input: ` +{ + "ssh": [{ + "action": "accept", + "src": ["autogroup:member"], + "dst": ["autogroup:tagged"], + "users": ["autogroup:nonroot"] + }] +} +`, + want: &Policy{ + SSHs: []SSH{ + { + Action: "accept", + Sources: SSHSrcAliases{agp("autogroup:member")}, + Destinations: SSHDstAliases{agp("autogroup:tagged")}, + Users: []SSHUser{SSHUser(AutoGroupNonRoot)}, + }, + }, + }, + }, } cmps := append(util.Comparers, @@ -1549,7 +1845,17 @@ func TestResolvePolicy(t *testing.T) { "groupuser1": {Model: gorm.Model{ID: 3}, Name: "groupuser1"}, "groupuser2": {Model: gorm.Model{ID: 4}, Name: "groupuser2"}, "notme": {Model: gorm.Model{ID: 5}, Name: "notme"}, + "testuser2": {Model: gorm.Model{ID: 6}, Name: "testuser2"}, } + + // Extract users to variables so we can take their addresses + testuser := users["testuser"] + groupuser := users["groupuser"] + groupuser1 := users["groupuser1"] + groupuser2 := users["groupuser2"] + notme := users["notme"] + testuser2 := users["testuser2"] + tests := []struct { name string nodes types.Nodes @@ -1579,29 +1885,27 @@ func TestResolvePolicy(t *testing.T) { nodes: types.Nodes{ // Not matching other user { - User: users["notme"], + User: ptr.To(notme), IPv4: ap("100.100.101.1"), }, // Not matching forced tags { - User: users["testuser"], - ForcedTags: []string{"tag:anything"}, - IPv4: ap("100.100.101.2"), + User: ptr.To(testuser), + Tags: []string{"tag:anything"}, + IPv4: ap("100.100.101.2"), }, - // not matching pak tag + // not matching because it's tagged (tags copied from AuthKey) { - User: users["testuser"], - AuthKey: 
&types.PreAuthKey{ - Tags: []string{"alsotagged"}, - }, + User: ptr.To(testuser), + Tags: []string{"alsotagged"}, IPv4: ap("100.100.101.3"), }, { - User: users["testuser"], + User: ptr.To(testuser), IPv4: ap("100.100.101.103"), }, { - User: users["testuser"], + User: ptr.To(testuser), IPv4: ap("100.100.101.104"), }, }, @@ -1613,29 +1917,27 @@ func TestResolvePolicy(t *testing.T) { nodes: types.Nodes{ // Not matching other user { - User: users["notme"], + User: ptr.To(notme), IPv4: ap("100.100.101.4"), }, // Not matching forced tags { - User: users["groupuser"], - ForcedTags: []string{"tag:anything"}, - IPv4: ap("100.100.101.5"), + User: ptr.To(groupuser), + Tags: []string{"tag:anything"}, + IPv4: ap("100.100.101.5"), }, - // not matching pak tag + // not matching because it's tagged (tags copied from AuthKey) { - User: users["groupuser"], - AuthKey: &types.PreAuthKey{ - Tags: []string{"tag:alsotagged"}, - }, + User: ptr.To(groupuser), + Tags: []string{"tag:alsotagged"}, IPv4: ap("100.100.101.6"), }, { - User: users["groupuser"], + User: ptr.To(groupuser), IPv4: ap("100.100.101.203"), }, { - User: users["groupuser"], + User: ptr.To(groupuser), IPv4: ap("100.100.101.204"), }, }, @@ -1653,13 +1955,13 @@ func TestResolvePolicy(t *testing.T) { nodes: types.Nodes{ // Not matching other user { - User: users["notme"], + User: ptr.To(notme), IPv4: ap("100.100.101.9"), }, // Not matching forced tags { - ForcedTags: []string{"tag:anything"}, - IPv4: ap("100.100.101.10"), + Tags: []string{"tag:anything"}, + IPv4: ap("100.100.101.10"), }, // not matching pak tag { @@ -1670,14 +1972,12 @@ func TestResolvePolicy(t *testing.T) { }, // Not matching forced tags { - ForcedTags: []string{"tag:test"}, - IPv4: ap("100.100.101.234"), + Tags: []string{"tag:test"}, + IPv4: ap("100.100.101.234"), }, - // not matching pak tag + // matching tag (tags copied from AuthKey during registration) { - AuthKey: &types.PreAuthKey{ - Tags: []string{"tag:test"}, - }, + Tags: []string{"tag:test"}, IPv4: ap("100.100.101.239"), }, }, @@ -1685,6 +1985,52 @@ func TestResolvePolicy(t *testing.T) { pol: &Policy{}, want: []netip.Prefix{mp("100.100.101.234/32"), mp("100.100.101.239/32")}, }, + { + name: "tag-owned-by-tag-call-child", + toResolve: tp("tag:smallbrother"), + pol: &Policy{ + TagOwners: TagOwners{ + Tag("tag:bigbrother"): {}, + Tag("tag:smallbrother"): {ptr.To(Tag("tag:bigbrother"))}, + }, + }, + nodes: types.Nodes{ + // Should not match as we resolve the "child" tag. + { + Tags: []string{"tag:bigbrother"}, + IPv4: ap("100.100.101.234"), + }, + // Should match. + { + Tags: []string{"tag:smallbrother"}, + IPv4: ap("100.100.101.239"), + }, + }, + want: []netip.Prefix{mp("100.100.101.239/32")}, + }, + { + name: "tag-owned-by-tag-call-parent", + toResolve: tp("tag:bigbrother"), + pol: &Policy{ + TagOwners: TagOwners{ + Tag("tag:bigbrother"): {}, + Tag("tag:smallbrother"): {ptr.To(Tag("tag:bigbrother"))}, + }, + }, + nodes: types.Nodes{ + // Should match - we are resolving "tag:bigbrother" which this node has. + { + Tags: []string{"tag:bigbrother"}, + IPv4: ap("100.100.101.234"), + }, + // Should not match - this node has "tag:smallbrother", not the tag we're resolving. 
+ { + Tags: []string{"tag:smallbrother"}, + IPv4: ap("100.100.101.239"), + }, + }, + want: []netip.Prefix{mp("100.100.101.234/32")}, + }, { name: "empty-policy", toResolve: pp("100.100.101.101/32"), @@ -1706,11 +2052,11 @@ func TestResolvePolicy(t *testing.T) { toResolve: ptr.To(Group("group:testgroup")), nodes: types.Nodes{ { - User: users["groupuser1"], + User: ptr.To(groupuser1), IPv4: ap("100.100.101.203"), }, { - User: users["groupuser2"], + User: ptr.To(groupuser2), IPv4: ap("100.100.101.204"), }, }, @@ -1731,7 +2077,7 @@ func TestResolvePolicy(t *testing.T) { toResolve: ptr.To(Username("invaliduser@")), nodes: types.Nodes{ { - User: users["testuser"], + User: ptr.To(testuser), IPv4: ap("100.100.101.103"), }, }, @@ -1742,8 +2088,8 @@ func TestResolvePolicy(t *testing.T) { toResolve: tp("tag:invalid"), nodes: types.Nodes{ { - ForcedTags: []string{"tag:test"}, - IPv4: ap("100.100.101.234"), + Tags: []string{"tag:test"}, + IPv4: ap("100.100.101.234"), }, }, }, @@ -1761,124 +2107,108 @@ func TestResolvePolicy(t *testing.T) { name: "autogroup-member-comprehensive", toResolve: ptr.To(AutoGroup(AutoGroupMember)), nodes: types.Nodes{ - // Node with no tags (should be included) + // Node with no tags (should be included - is a member) { - User: users["testuser"], + User: ptr.To(testuser), IPv4: ap("100.100.101.1"), }, - // Node with forced tags (should be excluded) + // Node with single tag (should be excluded - tagged nodes are not members) { - User: users["testuser"], - ForcedTags: []string{"tag:test"}, - IPv4: ap("100.100.101.2"), + User: ptr.To(testuser), + Tags: []string{"tag:test"}, + IPv4: ap("100.100.101.2"), }, - // Node with allowed requested tag (should be excluded) + // Node with multiple tags, all defined in policy (should be excluded) { - User: users["testuser"], - Hostinfo: &tailcfg.Hostinfo{ - RequestTags: []string{"tag:test"}, - }, + User: ptr.To(testuser), + Tags: []string{"tag:test", "tag:other"}, IPv4: ap("100.100.101.3"), }, - // Node with non-allowed requested tag (should be included) + // Node with tag not defined in policy (should be excluded - still tagged) { - User: users["testuser"], - Hostinfo: &tailcfg.Hostinfo{ - RequestTags: []string{"tag:notallowed"}, - }, + User: ptr.To(testuser), + Tags: []string{"tag:undefined"}, IPv4: ap("100.100.101.4"), }, - // Node with multiple requested tags, one allowed (should be excluded) + // Node with mixed tags - some defined, some not (should be excluded) { - User: users["testuser"], - Hostinfo: &tailcfg.Hostinfo{ - RequestTags: []string{"tag:test", "tag:notallowed"}, - }, + User: ptr.To(testuser), + Tags: []string{"tag:test", "tag:undefined"}, IPv4: ap("100.100.101.5"), }, - // Node with multiple requested tags, none allowed (should be included) + // Another untagged node from different user (should be included) { - User: users["testuser"], - Hostinfo: &tailcfg.Hostinfo{ - RequestTags: []string{"tag:notallowed1", "tag:notallowed2"}, - }, + User: ptr.To(testuser2), IPv4: ap("100.100.101.6"), }, }, pol: &Policy{ TagOwners: TagOwners{ - Tag("tag:test"): Owners{ptr.To(Username("testuser@"))}, + Tag("tag:test"): Owners{ptr.To(Username("testuser@"))}, + Tag("tag:other"): Owners{ptr.To(Username("testuser@"))}, }, }, want: []netip.Prefix{ - mp("100.100.101.1/32"), // No tags - mp("100.100.101.4/32"), // Non-allowed requested tag - mp("100.100.101.6/32"), // Multiple non-allowed requested tags + mp("100.100.101.1/32"), // No tags - is a member + mp("100.100.101.6/32"), // No tags, different user - is a member }, }, { name: 
"autogroup-tagged", toResolve: ptr.To(AutoGroup(AutoGroupTagged)), nodes: types.Nodes{ - // Node with no tags (should be excluded) + // Node with no tags (should be excluded - not tagged) { - User: users["testuser"], + User: ptr.To(testuser), IPv4: ap("100.100.101.1"), }, - // Node with forced tag (should be included) + // Node with single tag defined in policy (should be included) { - User: users["testuser"], - ForcedTags: []string{"tag:test"}, - IPv4: ap("100.100.101.2"), + User: ptr.To(testuser), + Tags: []string{"tag:test"}, + IPv4: ap("100.100.101.2"), }, - // Node with allowed requested tag (should be included) + // Node with multiple tags, all defined in policy (should be included) { - User: users["testuser"], - Hostinfo: &tailcfg.Hostinfo{ - RequestTags: []string{"tag:test"}, - }, + User: ptr.To(testuser), + Tags: []string{"tag:test", "tag:other"}, IPv4: ap("100.100.101.3"), }, - // Node with non-allowed requested tag (should be excluded) + // Node with tag not defined in policy (should be included - still tagged) { - User: users["testuser"], - Hostinfo: &tailcfg.Hostinfo{ - RequestTags: []string{"tag:notallowed"}, - }, + User: ptr.To(testuser), + Tags: []string{"tag:undefined"}, IPv4: ap("100.100.101.4"), }, - // Node with multiple requested tags, one allowed (should be included) + // Node with mixed tags - some defined, some not (should be included) { - User: users["testuser"], - Hostinfo: &tailcfg.Hostinfo{ - RequestTags: []string{"tag:test", "tag:notallowed"}, - }, + User: ptr.To(testuser), + Tags: []string{"tag:test", "tag:undefined"}, IPv4: ap("100.100.101.5"), }, - // Node with multiple requested tags, none allowed (should be excluded) + // Another untagged node from different user (should be excluded) { - User: users["testuser"], - Hostinfo: &tailcfg.Hostinfo{ - RequestTags: []string{"tag:notallowed1", "tag:notallowed2"}, - }, + User: ptr.To(testuser2), IPv4: ap("100.100.101.6"), }, - // Node with multiple forced tags (should be included) + // Tagged node from different user (should be included) { - User: users["testuser"], - ForcedTags: []string{"tag:test", "tag:other"}, - IPv4: ap("100.100.101.7"), + User: ptr.To(testuser2), + Tags: []string{"tag:server"}, + IPv4: ap("100.100.101.7"), }, }, pol: &Policy{ TagOwners: TagOwners{ - Tag("tag:test"): Owners{ptr.To(Username("testuser@"))}, + Tag("tag:test"): Owners{ptr.To(Username("testuser@"))}, + Tag("tag:other"): Owners{ptr.To(Username("testuser@"))}, + Tag("tag:server"): Owners{ptr.To(Username("testuser2@"))}, }, }, want: []netip.Prefix{ - mp("100.100.101.2/31"), // Forced tag and allowed requested tag consecutive IPs are put in 31 prefix - mp("100.100.101.5/32"), // Multiple requested tags, one allowed - mp("100.100.101.7/32"), // Multiple forced tags + mp("100.100.101.2/31"), // .2, .3 consecutive tagged nodes + mp("100.100.101.4/31"), // .4, .5 consecutive tagged nodes + mp("100.100.101.7/32"), // Tagged node from different user }, }, { @@ -1886,23 +2216,21 @@ func TestResolvePolicy(t *testing.T) { toResolve: ptr.To(AutoGroupSelf), nodes: types.Nodes{ { - User: users["testuser"], + User: ptr.To(testuser), IPv4: ap("100.100.101.1"), }, { - User: users["testuser2"], + User: ptr.To(testuser2), IPv4: ap("100.100.101.2"), }, { - User: users["testuser"], - ForcedTags: []string{"tag:test"}, - IPv4: ap("100.100.101.3"), + User: ptr.To(testuser), + Tags: []string{"tag:test"}, + IPv4: ap("100.100.101.3"), }, { - User: users["testuser2"], - Hostinfo: &tailcfg.Hostinfo{ - RequestTags: []string{"tag:test"}, - }, + User: 
ptr.To(testuser2), + Tags: []string{"tag:test"}, IPv4: ap("100.100.101.4"), }, }, @@ -1961,23 +2289,23 @@ func TestResolveAutoApprovers(t *testing.T) { nodes := types.Nodes{ { IPv4: ap("100.64.0.1"), - User: users[0], + User: &users[0], }, { IPv4: ap("100.64.0.2"), - User: users[1], + User: &users[1], }, { IPv4: ap("100.64.0.3"), - User: users[2], + User: &users[2], }, { - IPv4: ap("100.64.0.4"), - ForcedTags: []string{"tag:testtag"}, + IPv4: ap("100.64.0.4"), + Tags: []string{"tag:testtag"}, }, { - IPv4: ap("100.64.0.5"), - ForcedTags: []string{"tag:exittest"}, + IPv4: ap("100.64.0.5"), + Tags: []string{"tag:exittest"}, }, } @@ -2280,15 +2608,15 @@ func TestNodeCanApproveRoute(t *testing.T) { nodes := types.Nodes{ { IPv4: ap("100.64.0.1"), - User: users[0], + User: &users[0], }, { IPv4: ap("100.64.0.2"), - User: users[1], + User: &users[1], }, { IPv4: ap("100.64.0.3"), - User: users[2], + User: &users[2], }, } @@ -2413,15 +2741,15 @@ func TestResolveTagOwners(t *testing.T) { nodes := types.Nodes{ { IPv4: ap("100.64.0.1"), - User: users[0], + User: &users[0], }, { IPv4: ap("100.64.0.2"), - User: users[1], + User: &users[1], }, { IPv4: ap("100.64.0.3"), - User: users[2], + User: &users[2], }, } @@ -2470,6 +2798,20 @@ func TestResolveTagOwners(t *testing.T) { }, wantErr: false, }, + { + name: "tag-owns-tag", + policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:bigbrother"): Owners{ptr.To(Username("user1@"))}, + Tag("tag:smallbrother"): Owners{ptr.To(Tag("tag:bigbrother"))}, + }, + }, + want: map[Tag]*netipx.IPSet{ + Tag("tag:bigbrother"): mustIPSet("100.64.0.1/32"), + Tag("tag:smallbrother"): mustIPSet("100.64.0.1/32"), + }, + wantErr: false, + }, } cmps := append(util.Comparers, cmp.Comparer(ipSetComparer)) @@ -2498,15 +2840,15 @@ func TestNodeCanHaveTag(t *testing.T) { nodes := types.Nodes{ { IPv4: ap("100.64.0.1"), - User: users[0], + User: &users[0], }, { IPv4: ap("100.64.0.2"), - User: users[1], + User: &users[1], }, { IPv4: ap("100.64.0.3"), - User: users[2], + User: &users[2], }, } @@ -2580,6 +2922,170 @@ func TestNodeCanHaveTag(t *testing.T) { tag: "tag:test", want: false, }, + { + name: "node-with-unauthorized-tag-different-user", + policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:prod"): Owners{ptr.To(Username("user1@"))}, + }, + }, + node: nodes[2], // user3's node + tag: "tag:prod", + want: false, + }, + { + name: "node-with-multiple-tags-one-unauthorized", + policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:web"): Owners{ptr.To(Username("user1@"))}, + Tag("tag:database"): Owners{ptr.To(Username("user2@"))}, + }, + }, + node: nodes[0], // user1's node + tag: "tag:database", + want: false, // user1 cannot have tag:database (owned by user2) + }, + { + name: "empty-tagowners-map", + policy: &Policy{ + TagOwners: TagOwners{}, + }, + node: nodes[0], + tag: "tag:test", + want: false, // No one can have tags if tagOwners is empty + }, + { + name: "tag-not-in-tagowners", + policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:prod"): Owners{ptr.To(Username("user1@"))}, + }, + }, + node: nodes[0], + tag: "tag:dev", // This tag is not defined in tagOwners + want: false, + }, + // Test cases for nodes without IPs (new registration scenario) + // These test the user-based fallback in NodeCanHaveTag + { + name: "node-without-ip-user-owns-tag", + policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:test"): Owners{ptr.To(Username("user1@"))}, + }, + }, + node: &types.Node{ + // No IPv4 or IPv6 - simulates new node registration + User: &users[0], + UserID: ptr.To(users[0].ID), + }, + tag: 
"tag:test", + want: true, // Should succeed via user-based fallback + }, + { + name: "node-without-ip-user-does-not-own-tag", + policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:test"): Owners{ptr.To(Username("user2@"))}, + }, + }, + node: &types.Node{ + // No IPv4 or IPv6 - simulates new node registration + User: &users[0], // user1, but tag owned by user2 + UserID: ptr.To(users[0].ID), + }, + tag: "tag:test", + want: false, // user1 does not own tag:test + }, + { + name: "node-without-ip-group-owns-tag", + policy: &Policy{ + Groups: Groups{ + "group:admins": Usernames{"user1@", "user2@"}, + }, + TagOwners: TagOwners{ + Tag("tag:admin"): Owners{ptr.To(Group("group:admins"))}, + }, + }, + node: &types.Node{ + // No IPv4 or IPv6 - simulates new node registration + User: &users[1], // user2 is in group:admins + UserID: ptr.To(users[1].ID), + }, + tag: "tag:admin", + want: true, // Should succeed via group membership + }, + { + name: "node-without-ip-not-in-group", + policy: &Policy{ + Groups: Groups{ + "group:admins": Usernames{"user1@"}, + }, + TagOwners: TagOwners{ + Tag("tag:admin"): Owners{ptr.To(Group("group:admins"))}, + }, + }, + node: &types.Node{ + // No IPv4 or IPv6 - simulates new node registration + User: &users[1], // user2 is NOT in group:admins + UserID: ptr.To(users[1].ID), + }, + tag: "tag:admin", + want: false, // user2 is not in group:admins + }, + { + name: "node-without-ip-no-user", + policy: &Policy{ + TagOwners: TagOwners{ + Tag("tag:test"): Owners{ptr.To(Username("user1@"))}, + }, + }, + node: &types.Node{ + // No IPv4, IPv6, or User - edge case + }, + tag: "tag:test", + want: false, // No user means can't authorize via user-based fallback + }, + { + name: "node-without-ip-mixed-owners-user-match", + policy: &Policy{ + Groups: Groups{ + "group:ops": Usernames{"user3@"}, + }, + TagOwners: TagOwners{ + Tag("tag:server"): Owners{ + ptr.To(Username("user1@")), + ptr.To(Group("group:ops")), + }, + }, + }, + node: &types.Node{ + User: &users[0], // user1 directly owns the tag + UserID: ptr.To(users[0].ID), + }, + tag: "tag:server", + want: true, + }, + { + name: "node-without-ip-mixed-owners-group-match", + policy: &Policy{ + Groups: Groups{ + "group:ops": Usernames{"user3@"}, + }, + TagOwners: TagOwners{ + Tag("tag:server"): Owners{ + ptr.To(Username("user1@")), + ptr.To(Group("group:ops")), + }, + }, + }, + node: &types.Node{ + User: &users[2], // user3 is in group:ops + UserID: ptr.To(users[2].ID), + }, + tag: "tag:server", + want: true, + }, } for _, tt := range tests { @@ -2602,6 +3108,106 @@ func TestNodeCanHaveTag(t *testing.T) { } } +func TestUserMatchesOwner(t *testing.T) { + users := types.Users{ + {Model: gorm.Model{ID: 1}, Name: "user1"}, + {Model: gorm.Model{ID: 2}, Name: "user2"}, + {Model: gorm.Model{ID: 3}, Name: "user3"}, + } + + tests := []struct { + name string + policy *Policy + user types.User + owner Owner + want bool + }{ + { + name: "username-match", + policy: &Policy{}, + user: users[0], + owner: ptr.To(Username("user1@")), + want: true, + }, + { + name: "username-no-match", + policy: &Policy{}, + user: users[0], + owner: ptr.To(Username("user2@")), + want: false, + }, + { + name: "group-match", + policy: &Policy{ + Groups: Groups{ + "group:admins": Usernames{"user1@", "user2@"}, + }, + }, + user: users[1], // user2 is in group:admins + owner: ptr.To(Group("group:admins")), + want: true, + }, + { + name: "group-no-match", + policy: &Policy{ + Groups: Groups{ + "group:admins": Usernames{"user1@"}, + }, + }, + user: users[1], // user2 is NOT in 
group:admins + owner: ptr.To(Group("group:admins")), + want: false, + }, + { + name: "group-not-defined", + policy: &Policy{ + Groups: Groups{}, + }, + user: users[0], + owner: ptr.To(Group("group:undefined")), + want: false, + }, + { + name: "nil-username-owner", + policy: &Policy{}, + user: users[0], + owner: (*Username)(nil), + want: false, + }, + { + name: "nil-group-owner", + policy: &Policy{}, + user: users[0], + owner: (*Group)(nil), + want: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + // Create a minimal PolicyManager for testing + // We need nodes with IPs to initialize the tagOwnerMap + nodes := types.Nodes{ + { + IPv4: ap("100.64.0.1"), + User: &users[0], + }, + } + + b, err := json.Marshal(tt.policy) + require.NoError(t, err) + + pm, err := NewPolicyManager(b, users, nodes.ViewSlice()) + require.NoError(t, err) + + got := pm.userMatchesOwner(tt.user.View(), tt.owner) + if got != tt.want { + t.Errorf("userMatchesOwner() = %v, want %v", got, tt.want) + } + }) + } +} + func TestACL_UnmarshalJSON_WithCommentFields(t *testing.T) { tests := []struct { name string @@ -2889,3 +3495,147 @@ func mustParseAlias(s string) Alias { } return alias } + +func TestFlattenTagOwners(t *testing.T) { + tests := []struct { + name string + input TagOwners + want TagOwners + wantErr string + }{ + { + name: "tag-owns-tag", + input: TagOwners{ + Tag("tag:bigbrother"): Owners{ptr.To(Group("group:user1"))}, + Tag("tag:smallbrother"): Owners{ptr.To(Tag("tag:bigbrother"))}, + }, + want: TagOwners{ + Tag("tag:bigbrother"): Owners{ptr.To(Group("group:user1"))}, + Tag("tag:smallbrother"): Owners{ptr.To(Group("group:user1"))}, + }, + wantErr: "", + }, + { + name: "circular-reference", + input: TagOwners{ + Tag("tag:a"): Owners{ptr.To(Tag("tag:b"))}, + Tag("tag:b"): Owners{ptr.To(Tag("tag:a"))}, + }, + want: nil, + wantErr: "circular reference detected: tag:a -> tag:b", + }, + { + name: "mixed-owners", + input: TagOwners{ + Tag("tag:x"): Owners{ptr.To(Username("user1@")), ptr.To(Tag("tag:y"))}, + Tag("tag:y"): Owners{ptr.To(Username("user2@"))}, + }, + want: TagOwners{ + Tag("tag:x"): Owners{ptr.To(Username("user1@")), ptr.To(Username("user2@"))}, + Tag("tag:y"): Owners{ptr.To(Username("user2@"))}, + }, + wantErr: "", + }, + { + name: "mixed-dupe-owners", + input: TagOwners{ + Tag("tag:x"): Owners{ptr.To(Username("user1@")), ptr.To(Tag("tag:y"))}, + Tag("tag:y"): Owners{ptr.To(Username("user1@"))}, + }, + want: TagOwners{ + Tag("tag:x"): Owners{ptr.To(Username("user1@"))}, + Tag("tag:y"): Owners{ptr.To(Username("user1@"))}, + }, + wantErr: "", + }, + { + name: "no-tag-owners", + input: TagOwners{ + Tag("tag:solo"): Owners{ptr.To(Username("user1@"))}, + }, + want: TagOwners{ + Tag("tag:solo"): Owners{ptr.To(Username("user1@"))}, + }, + wantErr: "", + }, + { + name: "tag-long-owner-chain", + input: TagOwners{ + Tag("tag:a"): Owners{ptr.To(Group("group:user1"))}, + Tag("tag:b"): Owners{ptr.To(Tag("tag:a"))}, + Tag("tag:c"): Owners{ptr.To(Tag("tag:b"))}, + Tag("tag:d"): Owners{ptr.To(Tag("tag:c"))}, + Tag("tag:e"): Owners{ptr.To(Tag("tag:d"))}, + Tag("tag:f"): Owners{ptr.To(Tag("tag:e"))}, + Tag("tag:g"): Owners{ptr.To(Tag("tag:f"))}, + }, + want: TagOwners{ + Tag("tag:a"): Owners{ptr.To(Group("group:user1"))}, + Tag("tag:b"): Owners{ptr.To(Group("group:user1"))}, + Tag("tag:c"): Owners{ptr.To(Group("group:user1"))}, + Tag("tag:d"): Owners{ptr.To(Group("group:user1"))}, + Tag("tag:e"): Owners{ptr.To(Group("group:user1"))}, + Tag("tag:f"): Owners{ptr.To(Group("group:user1"))}, 
+ Tag("tag:g"): Owners{ptr.To(Group("group:user1"))}, + }, + wantErr: "", + }, + { + name: "tag-long-circular-chain", + input: TagOwners{ + Tag("tag:a"): Owners{ptr.To(Tag("tag:g"))}, + Tag("tag:b"): Owners{ptr.To(Tag("tag:a"))}, + Tag("tag:c"): Owners{ptr.To(Tag("tag:b"))}, + Tag("tag:d"): Owners{ptr.To(Tag("tag:c"))}, + Tag("tag:e"): Owners{ptr.To(Tag("tag:d"))}, + Tag("tag:f"): Owners{ptr.To(Tag("tag:e"))}, + Tag("tag:g"): Owners{ptr.To(Tag("tag:f"))}, + }, + wantErr: "circular reference detected: tag:a -> tag:b -> tag:c -> tag:d -> tag:e -> tag:f -> tag:g", + }, + { + name: "undefined-tag-reference", + input: TagOwners{ + Tag("tag:a"): Owners{ptr.To(Tag("tag:nonexistent"))}, + }, + wantErr: `tag "tag:a" references undefined tag "tag:nonexistent"`, + }, + { + name: "tag-with-empty-owners-is-valid", + input: TagOwners{ + Tag("tag:a"): Owners{ptr.To(Tag("tag:b"))}, + Tag("tag:b"): Owners{}, // empty owners but exists + }, + want: TagOwners{ + Tag("tag:a"): nil, + Tag("tag:b"): nil, + }, + wantErr: "", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got, err := flattenTagOwners(tt.input) + if tt.wantErr != "" { + if err == nil { + t.Fatalf("flattenTagOwners() expected error %q, got nil", tt.wantErr) + } + + if err.Error() != tt.wantErr { + t.Fatalf("flattenTagOwners() expected error %q, got %q", tt.wantErr, err.Error()) + } + + return + } + + if err != nil { + t.Fatalf("flattenTagOwners() unexpected error: %v", err) + } + + if diff := cmp.Diff(tt.want, got); diff != "" { + t.Errorf("flattenTagOwners() mismatch (-want +got):\n%s", diff) + } + }) + } +} diff --git a/hscontrol/policy/v2/utils.go b/hscontrol/policy/v2/utils.go index 7482c97b..a4367775 100644 --- a/hscontrol/policy/v2/utils.go +++ b/hscontrol/policy/v2/utils.go @@ -39,9 +39,10 @@ func parsePortRange(portDef string) ([]tailcfg.PortRange, error) { } var portRanges []tailcfg.PortRange - parts := strings.Split(portDef, ",") - for _, part := range parts { + parts := strings.SplitSeq(portDef, ",") + + for part := range parts { if strings.Contains(part, "-") { rangeParts := strings.Split(part, "-") rangeParts = slices.DeleteFunc(rangeParts, func(e string) bool { diff --git a/hscontrol/poll.go b/hscontrol/poll.go index 4324ffba..02275751 100644 --- a/hscontrol/poll.go +++ b/hscontrol/poll.go @@ -11,6 +11,7 @@ import ( "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/util" + "github.com/rs/zerolog" "github.com/rs/zerolog/log" "github.com/sasha-s/go-deadlock" "tailscale.com/tailcfg" @@ -42,11 +43,6 @@ type mapSession struct { node *types.Node w http.ResponseWriter - - warnf func(string, ...any) - infof func(string, ...any) - tracef func(string, ...any) - errf func(error, string, ...any) } func (h *Headscale) newMapSession( @@ -55,8 +51,6 @@ func (h *Headscale) newMapSession( w http.ResponseWriter, node *types.Node, ) *mapSession { - warnf, infof, tracef, errf := logPollFunc(req, node) - ka := keepAliveInterval + (time.Duration(rand.IntN(9000)) * time.Millisecond) return &mapSession{ @@ -73,12 +67,6 @@ func (h *Headscale) newMapSession( keepAlive: ka, keepAliveTicker: nil, - - // Loggers - warnf: warnf, - infof: infof, - tracef: tracef, - errf: errf, } } @@ -295,6 +283,7 @@ func (m *mapSession) writeMap(msg *tailcfg.MapResponse) error { } data := make([]byte, reservedResponseHeaderSize) + //nolint:gosec // G115: JSON response size will not exceed uint32 max binary.LittleEndian.PutUint32(data, uint32(len(jsonBody))) data = append(data, jsonBody...) 
@@ -330,80 +319,22 @@ var keepAlive = tailcfg.MapResponse{ KeepAlive: true, } -func logTracePeerChange(hostname string, hostinfoChange bool, peerChange *tailcfg.PeerChange) { - trace := log.Trace().Caller().Uint64("node.id", uint64(peerChange.NodeID)).Str("hostname", hostname) - - if peerChange.Key != nil { - trace = trace.Str("node.key", peerChange.Key.ShortString()) - } - - if peerChange.DiscoKey != nil { - trace = trace.Str("disco.key", peerChange.DiscoKey.ShortString()) - } - - if peerChange.Online != nil { - trace = trace.Bool("online", *peerChange.Online) - } - - if peerChange.Endpoints != nil { - eps := make([]string, len(peerChange.Endpoints)) - for idx, ep := range peerChange.Endpoints { - eps[idx] = ep.String() - } - - trace = trace.Strs("endpoints", eps) - } - - if hostinfoChange { - trace = trace.Bool("hostinfo_changed", hostinfoChange) - } - - if peerChange.DERPRegion != 0 { - trace = trace.Int("derp_region", peerChange.DERPRegion) - } - - trace.Time("last_seen", *peerChange.LastSeen).Msg("PeerChange received") +// logf adds common mapSession context to a zerolog event. +func (m *mapSession) logf(event *zerolog.Event) *zerolog.Event { + return event. + Bool("omitPeers", m.req.OmitPeers). + Bool("stream", m.req.Stream). + Uint64("node.id", m.node.ID.Uint64()). + Str("node.name", m.node.Hostname) } -func logPollFunc( - mapRequest tailcfg.MapRequest, - node *types.Node, -) (func(string, ...any), func(string, ...any), func(string, ...any), func(error, string, ...any)) { - return func(msg string, a ...any) { - log.Warn(). - Caller(). - Bool("omitPeers", mapRequest.OmitPeers). - Bool("stream", mapRequest.Stream). - Uint64("node.id", node.ID.Uint64()). - Str("node.name", node.Hostname). - Msgf(msg, a...) - }, - func(msg string, a ...any) { - log.Info(). - Caller(). - Bool("omitPeers", mapRequest.OmitPeers). - Bool("stream", mapRequest.Stream). - Uint64("node.id", node.ID.Uint64()). - Str("node.name", node.Hostname). - Msgf(msg, a...) - }, - func(msg string, a ...any) { - log.Trace(). - Caller(). - Bool("omitPeers", mapRequest.OmitPeers). - Bool("stream", mapRequest.Stream). - Uint64("node.id", node.ID.Uint64()). - Str("node.name", node.Hostname). - Msgf(msg, a...) - }, - func(err error, msg string, a ...any) { - log.Error(). - Caller(). - Bool("omitPeers", mapRequest.OmitPeers). - Bool("stream", mapRequest.Stream). - Uint64("node.id", node.ID.Uint64()). - Str("node.name", node.Hostname). - Err(err). - Msgf(msg, a...) - } +//nolint:zerologlint // logf returns *zerolog.Event which is properly terminated with Msgf +func (m *mapSession) infof(msg string, a ...any) { m.logf(log.Info().Caller()).Msgf(msg, a...) } + +//nolint:zerologlint // logf returns *zerolog.Event which is properly terminated with Msgf +func (m *mapSession) tracef(msg string, a ...any) { m.logf(log.Trace().Caller()).Msgf(msg, a...) } + +//nolint:zerologlint // logf returns *zerolog.Event which is properly terminated with Msgf +func (m *mapSession) errf(err error, msg string, a ...any) { + m.logf(log.Error().Caller()).Err(err).Msgf(msg, a...) 
} diff --git a/hscontrol/state/debug.go b/hscontrol/state/debug.go index 03d6854f..3ed1d79f 100644 --- a/hscontrol/state/debug.go +++ b/hscontrol/state/debug.go @@ -5,6 +5,7 @@ import ( "strings" "time" + hsdb "github.com/juanfont/headscale/hscontrol/db" "github.com/juanfont/headscale/hscontrol/routes" "github.com/juanfont/headscale/hscontrol/types" "tailscale.com/tailcfg" @@ -78,7 +79,7 @@ func (s *State) DebugOverview() string { now := time.Now() for _, node := range allNodes.All() { if node.Valid() { - userName := node.User().Name + userName := node.Owner().Name() userNodeCounts[userName]++ if node.IsOnline().Valid() && node.IsOnline().Get() { @@ -200,9 +201,9 @@ func (s *State) DebugSSHPolicies() map[string]*tailcfg.SSHPolicy { } // DebugRegistrationCache returns debug information about the registration cache. -func (s *State) DebugRegistrationCache() map[string]interface{} { +func (s *State) DebugRegistrationCache() map[string]any { // The cache doesn't expose internal statistics, so we provide basic info - result := map[string]interface{}{ + result := map[string]any{ "type": "zcache", "expiration": registerCacheExpiration.String(), "cleanup": registerCacheCleanup.String(), @@ -228,7 +229,7 @@ func (s *State) DebugPolicy() (string, error) { return p.Data, nil case types.PolicyModeFile: - pol, err := policyBytes(s.db, s.cfg) + pol, err := hsdb.PolicyBytes(s.db.DB, s.cfg) if err != nil { return "", err } @@ -281,7 +282,7 @@ func (s *State) DebugOverviewJSON() DebugOverviewInfo { for _, node := range allNodes.All() { if node.Valid() { - userName := node.User().Name + userName := node.Owner().Name() info.Users[userName]++ if node.IsOnline().Valid() && node.IsOnline().Get() { diff --git a/hscontrol/state/debug_test.go b/hscontrol/state/debug_test.go index 60d77245..6fd528a8 100644 --- a/hscontrol/state/debug_test.go +++ b/hscontrol/state/debug_test.go @@ -15,7 +15,7 @@ func TestNodeStoreDebugString(t *testing.T) { { name: "empty nodestore", setupFn: func() *NodeStore { - return NewNodeStore(nil, allowAllPeersFunc) + return NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) }, contains: []string{ "=== NodeStore Debug Information ===", @@ -30,7 +30,7 @@ func TestNodeStoreDebugString(t *testing.T) { node1 := createTestNode(1, 1, "user1", "node1") node2 := createTestNode(2, 2, "user2", "node2") - store := NewNodeStore(nil, allowAllPeersFunc) + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) store.Start() _ = store.PutNode(node1) @@ -66,7 +66,7 @@ func TestNodeStoreDebugString(t *testing.T) { func TestDebugRegistrationCache(t *testing.T) { // Create a minimal NodeStore for testing debug methods - store := NewNodeStore(nil, allowAllPeersFunc) + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) debugStr := store.DebugString() diff --git a/hscontrol/state/endpoint_test.go b/hscontrol/state/endpoint_test.go new file mode 100644 index 00000000..b8905ab7 --- /dev/null +++ b/hscontrol/state/endpoint_test.go @@ -0,0 +1,113 @@ +package state + +import ( + "net/netip" + "testing" + + "github.com/juanfont/headscale/hscontrol/types" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "tailscale.com/tailcfg" +) + +// TestEndpointStorageInNodeStore verifies that endpoints sent in MapRequest via ApplyPeerChange +// are correctly stored in the NodeStore and can be retrieved for sending to peers. 
+// This test reproduces the issue reported in https://github.com/juanfont/headscale/issues/2846 +func TestEndpointStorageInNodeStore(t *testing.T) { + // Create two test nodes + node1 := createTestNode(1, 1, "test-user", "node1") + node2 := createTestNode(2, 1, "test-user", "node2") + + // Create NodeStore with allow-all peers function + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) + + store.Start() + defer store.Stop() + + // Add both nodes to NodeStore + store.PutNode(node1) + store.PutNode(node2) + + // Create a MapRequest with endpoints for node1 + endpoints := []netip.AddrPort{ + netip.MustParseAddrPort("192.168.1.1:41641"), + netip.MustParseAddrPort("10.0.0.1:41641"), + } + + mapReq := tailcfg.MapRequest{ + NodeKey: node1.NodeKey, + DiscoKey: node1.DiscoKey, + Endpoints: endpoints, + Hostinfo: &tailcfg.Hostinfo{ + Hostname: "node1", + }, + } + + // Simulate what UpdateNodeFromMapRequest does: create PeerChange and apply it + peerChange := node1.PeerChangeFromMapRequest(mapReq) + + // Verify PeerChange has endpoints + require.NotNil(t, peerChange.Endpoints, "PeerChange should contain endpoints") + assert.Len(t, peerChange.Endpoints, len(endpoints), + "PeerChange should have same number of endpoints as MapRequest") + + // Apply the PeerChange via NodeStore.UpdateNode + updatedNode, ok := store.UpdateNode(node1.ID, func(n *types.Node) { + n.ApplyPeerChange(&peerChange) + }) + require.True(t, ok, "UpdateNode should succeed") + require.True(t, updatedNode.Valid(), "Updated node should be valid") + + // Verify endpoints are in the updated node view + storedEndpoints := updatedNode.Endpoints().AsSlice() + assert.Len(t, storedEndpoints, len(endpoints), + "NodeStore should have same number of endpoints as sent") + + if len(storedEndpoints) == len(endpoints) { + for i, ep := range endpoints { + assert.Equal(t, ep, storedEndpoints[i], + "Endpoint %d should match", i) + } + } + + // Verify we can retrieve the node again and endpoints are still there + retrievedNode, found := store.GetNode(node1.ID) + require.True(t, found, "node1 should exist in NodeStore") + + retrievedEndpoints := retrievedNode.Endpoints().AsSlice() + assert.Len(t, retrievedEndpoints, len(endpoints), + "Retrieved node should have same number of endpoints") + + // Verify that when we get node1 as a peer of node2, it has endpoints + // This is the critical part that was failing in the bug report + peers := store.ListPeers(node2.ID) + require.Positive(t, peers.Len(), "node2 should have at least one peer") + + // Find node1 in the peer list + var node1Peer types.NodeView + + foundPeer := false + + for _, peer := range peers.All() { + if peer.ID() == node1.ID { + node1Peer = peer + foundPeer = true + + break + } + } + + require.True(t, foundPeer, "node1 should be in node2's peer list") + + // Check that node1's endpoints are available in the peer view + peerEndpoints := node1Peer.Endpoints().AsSlice() + assert.Len(t, peerEndpoints, len(endpoints), + "Peer view should have same number of endpoints as sent") + + if len(peerEndpoints) == len(endpoints) { + for i, ep := range endpoints { + assert.Equal(t, ep, peerEndpoints[i], + "Peer endpoint %d should match", i) + } + } +} diff --git a/hscontrol/state/ephemeral_test.go b/hscontrol/state/ephemeral_test.go index e3acc9b9..632af13c 100644 --- a/hscontrol/state/ephemeral_test.go +++ b/hscontrol/state/ephemeral_test.go @@ -20,7 +20,7 @@ func TestEphemeralNodeDeleteWithConcurrentUpdate(t *testing.T) { node := createTestNode(1, 1, "test-user", "test-node") 
// Create NodeStore - store := NewNodeStore(nil, allowAllPeersFunc) + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) store.Start() defer store.Stop() @@ -57,8 +57,6 @@ func TestEphemeralNodeDeleteWithConcurrentUpdate(t *testing.T) { // Goroutine 2: DeleteNode (simulates handleLogout for ephemeral node) go func() { - // Small delay to increase chance of batching together - time.Sleep(1 * time.Millisecond) store.DeleteNode(node.ID) done <- true }() @@ -67,15 +65,11 @@ func TestEphemeralNodeDeleteWithConcurrentUpdate(t *testing.T) { <-done <-done - // Give batching time to complete - time.Sleep(50 * time.Millisecond) - - // The key assertion: if UpdateNode and DeleteNode were batched together - // with DELETE after UPDATE, then UpdateNode should return an invalid node - // OR it should return a valid node but the node should no longer exist in the store - - _, found = store.GetNode(node.ID) - assert.False(t, found, "node should be deleted from NodeStore") + // Verify node is eventually deleted + require.EventuallyWithT(t, func(c *assert.CollectT) { + _, found = store.GetNode(node.ID) + assert.False(c, found, "node should be deleted from NodeStore") + }, 1*time.Second, 10*time.Millisecond, "waiting for node to be deleted") // If the update happened before delete in the batch, the returned node might be invalid if updateOk { @@ -95,22 +89,21 @@ func TestEphemeralNodeDeleteWithConcurrentUpdate(t *testing.T) { func TestUpdateNodeReturnsInvalidWhenDeletedInSameBatch(t *testing.T) { node := createTestNode(2, 1, "test-user", "test-node-2") - store := NewNodeStore(nil, allowAllPeersFunc) + // Use batch size of 2 to guarantee UpdateNode and DeleteNode batch together + store := NewNodeStore(nil, allowAllPeersFunc, 2, TestBatchTimeout) store.Start() defer store.Stop() // Put node in store _ = store.PutNode(node) - // Simulate the exact sequence: UpdateNode gets queued, then DeleteNode gets queued, - // they batch together, and we check what UpdateNode returns - + // Queue UpdateNode and DeleteNode - with batch size of 2, they will batch together resultChan := make(chan struct { node types.NodeView ok bool }) - // Start UpdateNode - it will block until batch is applied + // Start UpdateNode in goroutine - it will queue and wait for batch go func() { node, ok := store.UpdateNode(node.ID, func(n *types.Node) { n.LastSeen = ptr.To(time.Now()) @@ -121,18 +114,15 @@ func TestUpdateNodeReturnsInvalidWhenDeletedInSameBatch(t *testing.T) { }{node, ok} }() - // Give UpdateNode a moment to queue its work - time.Sleep(5 * time.Millisecond) - - // Now queue DeleteNode - should batch with the UPDATE - store.DeleteNode(node.ID) + // Start DeleteNode in goroutine - it will queue and trigger batch processing + // Since batch size is 2, both operations will be processed together + go func() { + store.DeleteNode(node.ID) + }() // Get the result from UpdateNode result := <-resultChan - // Wait for batch to complete - time.Sleep(50 * time.Millisecond) - // Node should be deleted _, found := store.GetNode(node.ID) assert.False(t, found, "node should be deleted") @@ -157,7 +147,7 @@ func TestUpdateNodeReturnsInvalidWhenDeletedInSameBatch(t *testing.T) { func TestPersistNodeToDBPreventsRaceCondition(t *testing.T) { node := createTestNode(3, 1, "test-user", "test-node-3") - store := NewNodeStore(nil, allowAllPeersFunc) + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) store.Start() defer store.Stop() @@ -174,12 +164,11 @@ func 
TestPersistNodeToDBPreventsRaceCondition(t *testing.T) { // Now delete the node (simulating ephemeral logout happening concurrently) store.DeleteNode(node.ID) - // Wait for deletion to complete - time.Sleep(50 * time.Millisecond) - - // Verify node is deleted - _, found := store.GetNode(node.ID) - require.False(t, found, "node should be deleted") + // Verify node is eventually deleted + require.EventuallyWithT(t, func(c *assert.CollectT) { + _, found := store.GetNode(node.ID) + assert.False(c, found, "node should be deleted") + }, 1*time.Second, 10*time.Millisecond, "waiting for node to be deleted") // Now try to use the updatedNode from before the deletion // In the old code, this would re-insert the node into the database @@ -213,7 +202,8 @@ func TestEphemeralNodeLogoutRaceCondition(t *testing.T) { Ephemeral: true, } - store := NewNodeStore(nil, allowAllPeersFunc) + // Use batch size of 2 to guarantee UpdateNode and DeleteNode batch together + store := NewNodeStore(nil, allowAllPeersFunc, 2, TestBatchTimeout) store.Start() defer store.Stop() @@ -238,7 +228,6 @@ func TestEphemeralNodeLogoutRaceCondition(t *testing.T) { // Goroutine 2: DeleteNode (simulates handleLogout for ephemeral node) go func() { - time.Sleep(1 * time.Millisecond) // Slight delay to batch operations store.DeleteNode(ephemeralNode.ID) done <- true }() @@ -247,12 +236,11 @@ func TestEphemeralNodeLogoutRaceCondition(t *testing.T) { <-done <-done - // Give batching time to complete - time.Sleep(50 * time.Millisecond) - - // Node should be deleted from store - _, found := store.GetNode(ephemeralNode.ID) - assert.False(t, found, "ephemeral node should be deleted from NodeStore") + // Verify node is eventually deleted + require.EventuallyWithT(t, func(c *assert.CollectT) { + _, found := store.GetNode(ephemeralNode.ID) + assert.False(c, found, "ephemeral node should be deleted from NodeStore") + }, 1*time.Second, 10*time.Millisecond, "waiting for ephemeral node to be deleted") // Critical assertion: if UpdateNode returned before DeleteNode completed, // the updatedNode might be valid but the node is actually deleted. 
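The ephemeral-node test hunks above and below replace fixed time.Sleep waits with testify's EventuallyWithT, which polls an assertion until it passes or a timeout expires. A small sketch of that pattern; waitForNodeGone and the nodeExists callback are illustrative names, not part of this patch:

```go
package example

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// waitForNodeGone polls until nodeExists reports false, failing the test if the
// node is still present after one second. This avoids the flakiness of a fixed
// sleep when the NodeStore applies deletes asynchronously in batches.
func waitForNodeGone(t *testing.T, nodeExists func() bool) {
	t.Helper()

	require.EventuallyWithT(t, func(c *assert.CollectT) {
		assert.False(c, nodeExists(), "node should be deleted")
	}, 1*time.Second, 10*time.Millisecond, "waiting for node to be deleted")
}
```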
@@ -288,51 +276,57 @@ func TestUpdateNodeFromMapRequestEphemeralLogoutSequence(t *testing.T) { Ephemeral: true, } - store := NewNodeStore(nil, allowAllPeersFunc) + // Use batch size of 2 to guarantee UpdateNode and DeleteNode batch together + store := NewNodeStore(nil, allowAllPeersFunc, 2, TestBatchTimeout) store.Start() defer store.Stop() - // Initial state: ephemeral node exists + // Put ephemeral node in store _ = store.PutNode(ephemeralNode) // Step 1: UpdateNodeFromMapRequest calls UpdateNode // (simulating client sending MapRequest with endpoint updates) - updateStarted := make(chan bool) - var updatedNode types.NodeView - var updateOk bool + updateResult := make(chan struct { + node types.NodeView + ok bool + }) go func() { - updateStarted <- true - updatedNode, updateOk = store.UpdateNode(ephemeralNode.ID, func(n *types.Node) { + node, ok := store.UpdateNode(ephemeralNode.ID, func(n *types.Node) { n.LastSeen = ptr.To(time.Now()) endpoint := netip.MustParseAddrPort("10.0.0.1:41641") n.Endpoints = []netip.AddrPort{endpoint} }) + updateResult <- struct { + node types.NodeView + ok bool + }{node, ok} }() - <-updateStarted - // Small delay to ensure UpdateNode is queued - time.Sleep(5 * time.Millisecond) - // Step 2: Logout happens - handleLogout calls DeleteNode - // (simulating client sending logout with past expiry) - store.DeleteNode(ephemeralNode.ID) + // With batch size of 2, this will trigger batch processing with UpdateNode + go func() { + store.DeleteNode(ephemeralNode.ID) + }() - // Wait for batching to complete - time.Sleep(50 * time.Millisecond) + // Step 3: Wait and verify node is eventually deleted + require.EventuallyWithT(t, func(c *assert.CollectT) { + _, nodeExists := store.GetNode(ephemeralNode.ID) + assert.False(c, nodeExists, "ephemeral node must be deleted after logout") + }, 1*time.Second, 10*time.Millisecond, "waiting for ephemeral node to be deleted") - // Step 3: Check results - _, nodeExists := store.GetNode(ephemeralNode.ID) - assert.False(t, nodeExists, "ephemeral node must be deleted after logout") + // Step 4: Get the update result + result := <-updateResult - // Step 4: Simulate what happens if we try to persist the updatedNode - if updateOk && updatedNode.Valid() { + // Simulate what happens if we try to persist the updatedNode + if result.ok && result.node.Valid() { // This is the problematic path - UpdateNode returned a valid node // but the node was deleted in the same batch t.Log("UpdateNode returned valid node even though node was deleted") // The fix: persistNodeToDB must check NodeStore before persisting - _, checkExists := store.GetNode(updatedNode.ID()) + _, checkExists := store.GetNode(result.node.ID()) if checkExists { t.Error("BUG: Node still exists in NodeStore after deletion - should be impossible") } else { @@ -353,14 +347,15 @@ func TestUpdateNodeDeletedInSameBatchReturnsInvalid(t *testing.T) { node := createTestNode(6, 1, "test-user", "test-node-6") - store := NewNodeStore(nil, allowAllPeersFunc) + // Use batch size of 2 to guarantee UpdateNode and DeleteNode batch together + store := NewNodeStore(nil, allowAllPeersFunc, 2, TestBatchTimeout) store.Start() defer store.Stop() // Put node in store _ = store.PutNode(node) - // Queue UpdateNode + // Queue UpdateNode and DeleteNode - with batch size of 2, they will batch together updateDone := make(chan struct { node types.NodeView ok bool }) @@ 
-376,18 +371,14 @@ func TestUpdateNodeDeletedInSameBatchReturnsInvalid(t *testing.T) { }{updatedNode, ok} }() - // Small delay to ensure UpdateNode is queued - time.Sleep(5 * time.Millisecond) - - // Queue DeleteNode - should batch with UpdateNode - store.DeleteNode(node.ID) + // Queue DeleteNode - with batch size of 2, this triggers batch processing + go func() { + store.DeleteNode(node.ID) + }() // Get UpdateNode result result := <-updateDone - // Wait for batch to complete - time.Sleep(50 * time.Millisecond) - // Node should be deleted _, exists := store.GetNode(node.ID) assert.False(t, exists, "node should be deleted from store") @@ -417,30 +408,28 @@ func TestPersistNodeToDBChecksNodeStoreBeforePersist(t *testing.T) { Ephemeral: true, } - store := NewNodeStore(nil, allowAllPeersFunc) + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) store.Start() defer store.Stop() - // Put node in store + // Put node _ = store.PutNode(ephemeralNode) - // Simulate the race: - // 1. UpdateNode is called (from UpdateNodeFromMapRequest) + // UpdateNode returns a node updatedNode, ok := store.UpdateNode(ephemeralNode.ID, func(n *types.Node) { n.LastSeen = ptr.To(time.Now()) }) require.True(t, ok, "UpdateNode should succeed") - require.True(t, updatedNode.Valid(), "UpdateNode should return valid node") + require.True(t, updatedNode.Valid(), "updated node should be valid") - // 2. Node is deleted (from handleLogout for ephemeral node) + // Delete the node store.DeleteNode(ephemeralNode.ID) - // Wait for deletion - time.Sleep(50 * time.Millisecond) - - // 3. Verify node is deleted from store - _, exists := store.GetNode(ephemeralNode.ID) - require.False(t, exists, "node should be deleted from NodeStore") + // Verify node is eventually deleted + require.EventuallyWithT(t, func(c *assert.CollectT) { + _, exists := store.GetNode(ephemeralNode.ID) + assert.False(c, exists, "node should be deleted from NodeStore") + }, 1*time.Second, 10*time.Millisecond, "waiting for node to be deleted") // 4. 
Simulate what persistNodeToDB does - check if node still exists // The fix in persistNodeToDB checks NodeStore before persisting: diff --git a/hscontrol/state/maprequest_test.go b/hscontrol/state/maprequest_test.go index 865d3eb4..99f781d4 100644 --- a/hscontrol/state/maprequest_test.go +++ b/hscontrol/state/maprequest_test.go @@ -9,6 +9,7 @@ import ( "github.com/stretchr/testify/require" "tailscale.com/tailcfg" "tailscale.com/types/key" + "tailscale.com/types/ptr" ) func TestNetInfoFromMapRequest(t *testing.T) { @@ -148,8 +149,8 @@ func createTestNodeSimple(id types.NodeID) *types.Node { node := &types.Node{ ID: id, Hostname: "test-node", - UserID: uint(id), - User: user, + UserID: ptr.To(uint(id)), + User: &user, MachineKey: machineKey.Public(), NodeKey: nodeKey.Public(), IPv4: &netip.Addr{}, diff --git a/hscontrol/state/node_store.go b/hscontrol/state/node_store.go index a06151a5..6327b46b 100644 --- a/hscontrol/state/node_store.go +++ b/hscontrol/state/node_store.go @@ -14,11 +14,6 @@ import ( "tailscale.com/types/views" ) -const ( - batchSize = 100 - batchTimeout = 500 * time.Millisecond -) - const ( put = 1 del = 2 @@ -92,9 +87,12 @@ type NodeStore struct { peersFunc PeersFunc writeQueue chan work + + batchSize int + batchTimeout time.Duration } -func NewNodeStore(allNodes types.Nodes, peersFunc PeersFunc) *NodeStore { +func NewNodeStore(allNodes types.Nodes, peersFunc PeersFunc, batchSize int, batchTimeout time.Duration) *NodeStore { nodes := make(map[types.NodeID]types.Node, len(allNodes)) for _, n := range allNodes { nodes[n.ID] = *n @@ -102,7 +100,9 @@ func NewNodeStore(allNodes types.Nodes, peersFunc PeersFunc) *NodeStore { snap := snapshotFromNodes(nodes, peersFunc) store := &NodeStore{ - peersFunc: peersFunc, + peersFunc: peersFunc, + batchSize: batchSize, + batchTimeout: batchTimeout, } store.data.Store(&snap) @@ -249,9 +249,10 @@ func (s *NodeStore) Stop() { // processWrite processes the write queue in batches. 
func (s *NodeStore) processWrite() { - c := time.NewTicker(batchTimeout) + c := time.NewTicker(s.batchTimeout) defer c.Stop() - batch := make([]work, 0, batchSize) + + batch := make([]work, 0, s.batchSize) for { select { @@ -264,17 +265,19 @@ func (s *NodeStore) processWrite() { return } batch = append(batch, w) - if len(batch) >= batchSize { + if len(batch) >= s.batchSize { s.applyBatch(batch) batch = batch[:0] - c.Reset(batchTimeout) + + c.Reset(s.batchTimeout) } case <-c.C: if len(batch) != 0 { s.applyBatch(batch) batch = batch[:0] } - c.Reset(batchTimeout) + + c.Reset(s.batchTimeout) } } } @@ -405,7 +408,7 @@ func snapshotFromNodes(nodes map[types.NodeID]types.Node, peersFunc PeersFunc) S // Build nodesByUser, nodesByNodeKey, and nodesByMachineKey maps for _, n := range nodes { nodeView := n.View() - userID := types.UserID(n.UserID) + userID := n.TypedUserID() newSnap.nodesByUser[userID] = append(newSnap.nodesByUser[userID], nodeView) newSnap.nodesByNodeKey[n.NodeKey] = nodeView @@ -506,15 +509,27 @@ func (s *NodeStore) DebugString() string { sb.WriteString(fmt.Sprintf("Users with Nodes: %d\n", len(snapshot.nodesByUser))) sb.WriteString("\n") - // User distribution - sb.WriteString("Nodes by User:\n") + // User distribution (shows internal UserID tracking, not display owner) + sb.WriteString("Nodes by Internal User ID:\n") for userID, nodes := range snapshot.nodesByUser { if len(nodes) > 0 { userName := "unknown" + taggedCount := 0 if len(nodes) > 0 && nodes[0].Valid() { - userName = nodes[0].User().Name + userName = nodes[0].User().Name() + // Count tagged nodes (which have UserID set but are owned by "tagged-devices") + for _, n := range nodes { + if n.IsTagged() { + taggedCount++ + } + } + } + + if taggedCount > 0 { + sb.WriteString(fmt.Sprintf(" - User %d (%s): %d nodes (%d tagged)\n", userID, userName, len(nodes), taggedCount)) + } else { + sb.WriteString(fmt.Sprintf(" - User %d (%s): %d nodes\n", userID, userName, len(nodes))) } - sb.WriteString(fmt.Sprintf(" - User %d (%s): %d nodes\n", userID, userName, len(nodes))) } } sb.WriteString("\n") diff --git a/hscontrol/state/node_store_test.go b/hscontrol/state/node_store_test.go index 64ee0406..3d6184ba 100644 --- a/hscontrol/state/node_store_test.go +++ b/hscontrol/state/node_store_test.go @@ -13,6 +13,7 @@ import ( "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "tailscale.com/types/key" + "tailscale.com/types/ptr" ) func TestSnapshotFromNodes(t *testing.T) { @@ -173,8 +174,8 @@ func createTestNode(nodeID types.NodeID, userID uint, username, hostname string) DiscoKey: discoKey.Public(), Hostname: hostname, GivenName: hostname, - UserID: userID, - User: types.User{ + UserID: ptr.To(userID), + User: &types.User{ Name: username, DisplayName: username, }, @@ -236,7 +237,7 @@ func TestNodeStoreOperations(t *testing.T) { { name: "create empty store and add single node", setupFunc: func(t *testing.T) *NodeStore { - return NewNodeStore(nil, allowAllPeersFunc) + return NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) }, steps: []testStep{ { @@ -276,7 +277,8 @@ func TestNodeStoreOperations(t *testing.T) { setupFunc: func(t *testing.T) *NodeStore { node1 := createTestNode(1, 1, "user1", "node1") initialNodes := types.Nodes{&node1} - return NewNodeStore(initialNodes, allowAllPeersFunc) + + return NewNodeStore(initialNodes, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) }, steps: []testStep{ { @@ -346,7 +348,7 @@ func TestNodeStoreOperations(t *testing.T) { node3 := createTestNode(3, 2, 
"user2", "node3") initialNodes := types.Nodes{&node1, &node2, &node3} - return NewNodeStore(initialNodes, allowAllPeersFunc) + return NewNodeStore(initialNodes, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) }, steps: []testStep{ { @@ -405,7 +407,8 @@ func TestNodeStoreOperations(t *testing.T) { node1 := createTestNode(1, 1, "user1", "node1") node2 := createTestNode(2, 1, "user1", "node2") initialNodes := types.Nodes{&node1, &node2} - return NewNodeStore(initialNodes, allowAllPeersFunc) + + return NewNodeStore(initialNodes, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) }, steps: []testStep{ { @@ -443,7 +446,7 @@ func TestNodeStoreOperations(t *testing.T) { { name: "test with odd-even peers filtering", setupFunc: func(t *testing.T) *NodeStore { - return NewNodeStore(nil, oddEvenPeersFunc) + return NewNodeStore(nil, oddEvenPeersFunc, TestBatchSize, TestBatchTimeout) }, steps: []testStep{ { @@ -502,7 +505,8 @@ func TestNodeStoreOperations(t *testing.T) { node1 := createTestNode(1, 1, "user1", "node1") node2 := createTestNode(2, 1, "user1", "node2") initialNodes := types.Nodes{&node1, &node2} - return NewNodeStore(initialNodes, allowAllPeersFunc) + + return NewNodeStore(initialNodes, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) }, steps: []testStep{ { @@ -624,7 +628,7 @@ func TestNodeStoreOperations(t *testing.T) { go func() { resultNode3, ok3 = store.UpdateNode(1, func(n *types.Node) { - n.ForcedTags = []string{"tag1", "tag2"} + n.Tags = []string{"tag1", "tag2"} }) close(done3) }() @@ -645,24 +649,24 @@ func TestNodeStoreOperations(t *testing.T) { // resultNode1 (from hostname update) should also have the givenname and tags changes assert.Equal(t, "multi-update-hostname", resultNode1.Hostname()) assert.Equal(t, "multi-update-givenname", resultNode1.GivenName()) - assert.Equal(t, []string{"tag1", "tag2"}, resultNode1.ForcedTags().AsSlice()) + assert.Equal(t, []string{"tag1", "tag2"}, resultNode1.Tags().AsSlice()) // resultNode2 (from givenname update) should also have the hostname and tags changes assert.Equal(t, "multi-update-hostname", resultNode2.Hostname()) assert.Equal(t, "multi-update-givenname", resultNode2.GivenName()) - assert.Equal(t, []string{"tag1", "tag2"}, resultNode2.ForcedTags().AsSlice()) + assert.Equal(t, []string{"tag1", "tag2"}, resultNode2.Tags().AsSlice()) // resultNode3 (from tags update) should also have the hostname and givenname changes assert.Equal(t, "multi-update-hostname", resultNode3.Hostname()) assert.Equal(t, "multi-update-givenname", resultNode3.GivenName()) - assert.Equal(t, []string{"tag1", "tag2"}, resultNode3.ForcedTags().AsSlice()) + assert.Equal(t, []string{"tag1", "tag2"}, resultNode3.Tags().AsSlice()) // Verify the snapshot also has all changes snapshot := store.data.Load() finalNode := snapshot.nodesByID[1] assert.Equal(t, "multi-update-hostname", finalNode.Hostname) assert.Equal(t, "multi-update-givenname", finalNode.GivenName) - assert.Equal(t, []string{"tag1", "tag2"}, finalNode.ForcedTags) + assert.Equal(t, []string{"tag1", "tag2"}, finalNode.Tags) }, }, }, @@ -673,7 +677,8 @@ func TestNodeStoreOperations(t *testing.T) { node1 := createTestNode(1, 1, "user1", "node1") node2 := createTestNode(2, 1, "user1", "node2") initialNodes := types.Nodes{&node1, &node2} - return NewNodeStore(initialNodes, allowAllPeersFunc) + + return NewNodeStore(initialNodes, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) }, steps: []testStep{ { @@ -683,7 +688,7 @@ func TestNodeStoreOperations(t *testing.T) { resultNode, ok := store.UpdateNode(1, 
func(n *types.Node) { n.Hostname = "db-save-hostname" n.GivenName = "db-save-given" - n.ForcedTags = []string{"db-tag1", "db-tag2"} + n.Tags = []string{"db-tag1", "db-tag2"} }) assert.True(t, ok, "UpdateNode should succeed") @@ -692,21 +697,21 @@ func TestNodeStoreOperations(t *testing.T) { // Verify the returned node has all expected values assert.Equal(t, "db-save-hostname", resultNode.Hostname()) assert.Equal(t, "db-save-given", resultNode.GivenName()) - assert.Equal(t, []string{"db-tag1", "db-tag2"}, resultNode.ForcedTags().AsSlice()) + assert.Equal(t, []string{"db-tag1", "db-tag2"}, resultNode.Tags().AsSlice()) // Convert to struct as would be done for database save nodePtr := resultNode.AsStruct() assert.NotNil(t, nodePtr) assert.Equal(t, "db-save-hostname", nodePtr.Hostname) assert.Equal(t, "db-save-given", nodePtr.GivenName) - assert.Equal(t, []string{"db-tag1", "db-tag2"}, nodePtr.ForcedTags) + assert.Equal(t, []string{"db-tag1", "db-tag2"}, nodePtr.Tags) // Verify the snapshot also reflects the same state snapshot := store.data.Load() storedNode := snapshot.nodesByID[1] assert.Equal(t, "db-save-hostname", storedNode.Hostname) assert.Equal(t, "db-save-given", storedNode.GivenName) - assert.Equal(t, []string{"db-tag1", "db-tag2"}, storedNode.ForcedTags) + assert.Equal(t, []string{"db-tag1", "db-tag2"}, storedNode.Tags) }, }, { @@ -738,7 +743,7 @@ func TestNodeStoreOperations(t *testing.T) { go func() { result3, ok3 = store.UpdateNode(1, func(n *types.Node) { - n.ForcedTags = []string{"concurrent-tag"} + n.Tags = []string{"concurrent-tag"} }) close(done3) }() @@ -763,22 +768,22 @@ func TestNodeStoreOperations(t *testing.T) { // All should have the complete final state assert.Equal(t, "concurrent-db-hostname", nodePtr1.Hostname) assert.Equal(t, "concurrent-db-given", nodePtr1.GivenName) - assert.Equal(t, []string{"concurrent-tag"}, nodePtr1.ForcedTags) + assert.Equal(t, []string{"concurrent-tag"}, nodePtr1.Tags) assert.Equal(t, "concurrent-db-hostname", nodePtr2.Hostname) assert.Equal(t, "concurrent-db-given", nodePtr2.GivenName) - assert.Equal(t, []string{"concurrent-tag"}, nodePtr2.ForcedTags) + assert.Equal(t, []string{"concurrent-tag"}, nodePtr2.Tags) assert.Equal(t, "concurrent-db-hostname", nodePtr3.Hostname) assert.Equal(t, "concurrent-db-given", nodePtr3.GivenName) - assert.Equal(t, []string{"concurrent-tag"}, nodePtr3.ForcedTags) + assert.Equal(t, []string{"concurrent-tag"}, nodePtr3.Tags) // Verify consistency with stored state snapshot := store.data.Load() storedNode := snapshot.nodesByID[1] assert.Equal(t, nodePtr1.Hostname, storedNode.Hostname) assert.Equal(t, nodePtr1.GivenName, storedNode.GivenName) - assert.Equal(t, nodePtr1.ForcedTags, storedNode.ForcedTags) + assert.Equal(t, nodePtr1.Tags, storedNode.Tags) }, }, { @@ -851,8 +856,8 @@ func createConcurrentTestNode(id types.NodeID, hostname string) types.Node { Hostname: hostname, MachineKey: machineKey.Public(), NodeKey: nodeKey.Public(), - UserID: 1, - User: types.User{ + UserID: ptr.To(uint(1)), + User: &types.User{ Name: "concurrent-test-user", }, } @@ -861,13 +866,14 @@ func createConcurrentTestNode(id types.NodeID, hostname string) types.Node { // --- Concurrency: concurrent PutNode operations --- func TestNodeStoreConcurrentPutNode(t *testing.T) { const concurrentOps = 20 - store := NewNodeStore(nil, allowAllPeersFunc) + + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) store.Start() defer store.Stop() var wg sync.WaitGroup results := make(chan bool, concurrentOps) - for i := 0; i 
< concurrentOps; i++ { + for i := range concurrentOps { wg.Add(1) go func(nodeID int) { defer wg.Done() @@ -892,13 +898,14 @@ func TestNodeStoreConcurrentPutNode(t *testing.T) { func TestNodeStoreBatchingEfficiency(t *testing.T) { const batchSize = 10 const ops = 15 // more than batchSize - store := NewNodeStore(nil, allowAllPeersFunc) + + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) store.Start() defer store.Stop() var wg sync.WaitGroup results := make(chan bool, ops) - for i := 0; i < ops; i++ { + for i := range ops { wg.Add(1) go func(nodeID int) { defer wg.Done() @@ -921,7 +928,7 @@ func TestNodeStoreBatchingEfficiency(t *testing.T) { // --- Race conditions: many goroutines on same node --- func TestNodeStoreRaceConditions(t *testing.T) { - store := NewNodeStore(nil, allowAllPeersFunc) + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) store.Start() defer store.Stop() @@ -935,11 +942,12 @@ func TestNodeStoreRaceConditions(t *testing.T) { var wg sync.WaitGroup errors := make(chan error, numGoroutines*opsPerGoroutine) - for i := 0; i < numGoroutines; i++ { + for i := range numGoroutines { wg.Add(1) go func(gid int) { defer wg.Done() - for j := 0; j < opsPerGoroutine; j++ { + + for j := range opsPerGoroutine { switch j % 3 { case 0: resultNode, _ := store.UpdateNode(nodeID, func(n *types.Node) { @@ -979,15 +987,20 @@ func TestNodeStoreRaceConditions(t *testing.T) { // --- Resource cleanup: goroutine leak detection --- func TestNodeStoreResourceCleanup(t *testing.T) { // initialGoroutines := runtime.NumGoroutine() - store := NewNodeStore(nil, allowAllPeersFunc) + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) store.Start() defer store.Stop() - time.Sleep(50 * time.Millisecond) - afterStartGoroutines := runtime.NumGoroutine() + // Wait for store to be ready + var afterStartGoroutines int + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + afterStartGoroutines = runtime.NumGoroutine() + assert.Positive(c, afterStartGoroutines) // Just ensure we have a valid count + }, time.Second, 10*time.Millisecond, "store should be running") const ops = 100 - for i := 0; i < ops; i++ { + for i := range ops { nodeID := types.NodeID(i + 1) node := createConcurrentTestNode(nodeID, "cleanup-node") resultNode := store.PutNode(node) @@ -1002,16 +1015,18 @@ func TestNodeStoreResourceCleanup(t *testing.T) { } } runtime.GC() - time.Sleep(100 * time.Millisecond) - finalGoroutines := runtime.NumGoroutine() - if finalGoroutines > afterStartGoroutines+2 { - t.Errorf("Potential goroutine leak: started with %d, ended with %d", afterStartGoroutines, finalGoroutines) - } + + // Wait for goroutines to settle and check for leaks + assert.EventuallyWithT(t, func(c *assert.CollectT) { + finalGoroutines := runtime.NumGoroutine() + assert.LessOrEqual(c, finalGoroutines, afterStartGoroutines+2, + "Potential goroutine leak: started with %d, ended with %d", afterStartGoroutines, finalGoroutines) + }, time.Second, 10*time.Millisecond, "goroutines should not leak") } // --- Timeout/deadlock: operations complete within reasonable time --- func TestNodeStoreOperationTimeout(t *testing.T) { - store := NewNodeStore(nil, allowAllPeersFunc) + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) store.Start() defer store.Stop() @@ -1094,8 +1109,8 @@ func TestNodeStoreOperationTimeout(t *testing.T) { // --- Edge case: update non-existent node --- func TestNodeStoreUpdateNonExistentNode(t *testing.T) { - for i := 
0; i < 10; i++ { - store := NewNodeStore(nil, allowAllPeersFunc) + for i := range 10 { + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) store.Start() nonExistentID := types.NodeID(999 + i) updateCallCount := 0 @@ -1114,12 +1129,11 @@ func TestNodeStoreUpdateNonExistentNode(t *testing.T) { // --- Allocation benchmark --- func BenchmarkNodeStoreAllocations(b *testing.B) { - store := NewNodeStore(nil, allowAllPeersFunc) + store := NewNodeStore(nil, allowAllPeersFunc, TestBatchSize, TestBatchTimeout) store.Start() defer store.Stop() - b.ResetTimer() - for i := 0; i < b.N; i++ { + for i := 0; b.Loop(); i++ { nodeID := types.NodeID(i + 1) node := createConcurrentTestNode(nodeID, "bench-node") store.PutNode(node) @@ -1138,3 +1152,92 @@ func TestNodeStoreAllocationStats(t *testing.T) { allocs := res.AllocsPerOp() t.Logf("NodeStore allocations per op: %.2f", float64(allocs)) } + +// TestRebuildPeerMapsWithChangedPeersFunc tests that RebuildPeerMaps correctly +// rebuilds the peer map when the peersFunc behavior changes. +// This simulates what happens when SetNodeTags changes node tags and the +// PolicyManager's matchers are updated, requiring the peer map to be rebuilt. +func TestRebuildPeerMapsWithChangedPeersFunc(t *testing.T) { + // Create a peersFunc that can be controlled via a channel + // Initially it returns all nodes as peers, then we change it to return no peers + allowPeers := true + + // This simulates how PolicyManager.BuildPeerMap works - it reads state + // that can change between calls + dynamicPeersFunc := func(nodes []types.NodeView) map[types.NodeID][]types.NodeView { + ret := make(map[types.NodeID][]types.NodeView, len(nodes)) + if allowPeers { + // Allow all peers + for _, node := range nodes { + var peers []types.NodeView + + for _, n := range nodes { + if n.ID() != node.ID() { + peers = append(peers, n) + } + } + + ret[node.ID()] = peers + } + } else { + // Allow no peers + for _, node := range nodes { + ret[node.ID()] = []types.NodeView{} + } + } + + return ret + } + + // Create nodes + node1 := createTestNode(1, 1, "user1", "node1") + node2 := createTestNode(2, 2, "user2", "node2") + initialNodes := types.Nodes{&node1, &node2} + + // Create store with dynamic peersFunc + store := NewNodeStore(initialNodes, dynamicPeersFunc, TestBatchSize, TestBatchTimeout) + + store.Start() + defer store.Stop() + + // Initially, nodes should see each other as peers + snapshot := store.data.Load() + require.Len(t, snapshot.peersByNode[1], 1, "node1 should have 1 peer initially") + require.Len(t, snapshot.peersByNode[2], 1, "node2 should have 1 peer initially") + require.Equal(t, types.NodeID(2), snapshot.peersByNode[1][0].ID()) + require.Equal(t, types.NodeID(1), snapshot.peersByNode[2][0].ID()) + + // Now "change the policy" by disabling peers + allowPeers = false + + // Call RebuildPeerMaps to rebuild with the new behavior + store.RebuildPeerMaps() + + // After rebuild, nodes should have no peers + snapshot = store.data.Load() + assert.Empty(t, snapshot.peersByNode[1], "node1 should have no peers after rebuild") + assert.Empty(t, snapshot.peersByNode[2], "node2 should have no peers after rebuild") + + // Verify that ListPeers returns the correct result + peers1 := store.ListPeers(1) + peers2 := store.ListPeers(2) + + assert.Equal(t, 0, peers1.Len(), "ListPeers for node1 should return empty") + assert.Equal(t, 0, peers2.Len(), "ListPeers for node2 should return empty") + + // Now re-enable peers and rebuild again + allowPeers = true + + 
store.RebuildPeerMaps() + + // Nodes should see each other again + snapshot = store.data.Load() + require.Len(t, snapshot.peersByNode[1], 1, "node1 should have 1 peer after re-enabling") + require.Len(t, snapshot.peersByNode[2], 1, "node2 should have 1 peer after re-enabling") + + peers1 = store.ListPeers(1) + peers2 = store.ListPeers(2) + + assert.Equal(t, 1, peers1.Len(), "ListPeers for node1 should return 1") + assert.Equal(t, 1, peers2.Len(), "ListPeers for node2 should return 1") +} diff --git a/hscontrol/state/state.go b/hscontrol/state/state.go index 297004fc..d1401ef0 100644 --- a/hscontrol/state/state.go +++ b/hscontrol/state/state.go @@ -8,10 +8,9 @@ import ( "context" "errors" "fmt" - "io" "net/netip" - "os" "slices" + "strings" "sync" "sync/atomic" "time" @@ -40,11 +39,31 @@ const ( // registerCacheCleanup defines the interval for cleaning up expired cache entries. registerCacheCleanup = time.Minute * 20 + + // defaultNodeStoreBatchSize is the default number of write operations to batch + // before rebuilding the in-memory node snapshot. + defaultNodeStoreBatchSize = 100 + + // defaultNodeStoreBatchTimeout is the default maximum time to wait before + // processing a partial batch of node operations. + defaultNodeStoreBatchTimeout = 500 * time.Millisecond ) // ErrUnsupportedPolicyMode is returned for invalid policy modes. Valid modes are "file" and "db". var ErrUnsupportedPolicyMode = errors.New("unsupported policy mode") +// ErrNodeNotFound is returned when a node cannot be found by its ID. +var ErrNodeNotFound = errors.New("node not found") + +// ErrInvalidNodeView is returned when an invalid node view is provided. +var ErrInvalidNodeView = errors.New("invalid node view provided") + +// ErrNodeNotInNodeStore is returned when a node no longer exists in the NodeStore. +var ErrNodeNotInNodeStore = errors.New("node no longer exists in NodeStore") + +// ErrNodeNameNotUnique is returned when a node name is not unique. +var ErrNodeNameNotUnique = errors.New("node name is not unique") + // State manages Headscale's core state, coordinating between database, policy management, // IP allocation, and DERP routing. All methods are thread-safe. type State struct { @@ -94,8 +113,7 @@ func NewState(cfg *types.Config) (*State, error) { ) db, err := hsdb.NewHeadscaleDatabase( - cfg.Database, - cfg.BaseDomain, + cfg, registrationCache, ) if err != nil { @@ -122,7 +140,7 @@ func NewState(cfg *types.Config) (*State, error) { return nil, fmt.Errorf("loading users: %w", err) } - pol, err := policyBytes(db, cfg) + pol, err := hsdb.PolicyBytes(db.DB, cfg) if err != nil { return nil, fmt.Errorf("loading policy: %w", err) } @@ -132,11 +150,27 @@ func NewState(cfg *types.Config) (*State, error) { return nil, fmt.Errorf("init policy manager: %w", err) } + // Apply defaults for NodeStore batch configuration if not set. + // This ensures tests that create Config directly (without viper) still work. + batchSize := cfg.Tuning.NodeStoreBatchSize + if batchSize == 0 { + batchSize = defaultNodeStoreBatchSize + } + batchTimeout := cfg.Tuning.NodeStoreBatchTimeout + if batchTimeout == 0 { + batchTimeout = defaultNodeStoreBatchTimeout + } + // PolicyManager.BuildPeerMap handles both global and per-node filter complexity. // This moves the complex peer relationship logic into the policy package where it belongs. 
- nodeStore := NewNodeStore(nodes, func(nodes []types.NodeView) map[types.NodeID][]types.NodeView { - return polMan.BuildPeerMap(views.SliceOf(nodes)) - }) + nodeStore := NewNodeStore( + nodes, + func(nodes []types.NodeView) map[types.NodeID][]types.NodeView { + return polMan.BuildPeerMap(views.SliceOf(nodes)) + }, + batchSize, + batchTimeout, + ) nodeStore.Start() return &State{ @@ -162,47 +196,6 @@ func (s *State) Close() error { return nil } -// policyBytes loads policy configuration from file or database based on the configured mode. -// Returns nil if no policy is configured, which is valid. -func policyBytes(db *hsdb.HSDatabase, cfg *types.Config) ([]byte, error) { - switch cfg.Policy.Mode { - case types.PolicyModeFile: - path := cfg.Policy.Path - - // It is fine to start headscale without a policy file. - if len(path) == 0 { - return nil, nil - } - - absPath := util.AbsolutePathFromConfigPath(path) - policyFile, err := os.Open(absPath) - if err != nil { - return nil, err - } - defer policyFile.Close() - - return io.ReadAll(policyFile) - - case types.PolicyModeDB: - p, err := db.GetPolicy() - if err != nil { - if errors.Is(err, types.ErrPolicyNotFound) { - return nil, nil - } - - return nil, err - } - - if p.Data == "" { - return nil, nil - } - - return []byte(p.Data), err - } - - return nil, fmt.Errorf("%w: %s", ErrUnsupportedPolicyMode, cfg.Policy.Mode) -} - // SetDERPMap updates the DERP relay configuration. func (s *State) SetDERPMap(dm *tailcfg.DERPMap) { s.derpMap.Store(dm) @@ -215,8 +208,8 @@ func (s *State) DERPMap() tailcfg.DERPMapView { // ReloadPolicy reloads the access control policy and triggers auto-approval if changed. // Returns true if the policy changed. -func (s *State) ReloadPolicy() ([]change.ChangeSet, error) { - pol, err := policyBytes(s.db, s.cfg) +func (s *State) ReloadPolicy() ([]change.Change, error) { + pol, err := hsdb.PolicyBytes(s.db.DB, s.cfg) if err != nil { return nil, fmt.Errorf("loading policy: %w", err) } @@ -232,7 +225,7 @@ func (s *State) ReloadPolicy() ([]change.ChangeSet, error) { // propagate correctly when switching between policy types. s.nodeStore.RebuildPeerMaps() - cs := []change.ChangeSet{change.PolicyChange()} + cs := []change.Change{change.PolicyChange()} // Always call autoApproveNodes during policy reload, regardless of whether // the policy content has changed. This ensures that routes are re-evaluated @@ -261,16 +254,16 @@ func (s *State) ReloadPolicy() ([]change.ChangeSet, error) { // CreateUser creates a new user and updates the policy manager. // Returns the created user, change set, and any error. 
-func (s *State) CreateUser(user types.User) (*types.User, change.ChangeSet, error) { +func (s *State) CreateUser(user types.User) (*types.User, change.Change, error) { if err := s.db.DB.Save(&user).Error; err != nil { - return nil, change.EmptySet, fmt.Errorf("creating user: %w", err) + return nil, change.Change{}, fmt.Errorf("creating user: %w", err) } // Check if policy manager needs updating c, err := s.updatePolicyManagerUsers() if err != nil { // Log the error but don't fail the user creation - return &user, change.EmptySet, fmt.Errorf("failed to update policy manager after user creation: %w", err) + return &user, change.Change{}, fmt.Errorf("failed to update policy manager after user creation: %w", err) } // Even if the policy manager doesn't detect a filter change, SSH policies @@ -278,7 +271,7 @@ func (s *State) CreateUser(user types.User) (*types.User, change.ChangeSet, erro // nodes, we should send a policy change to ensure they get updated SSH policies. // TODO(kradalby): detect this, or rebuild all SSH policies so we can determine // this upstream. - if c.Empty() { + if c.IsEmpty() { c = change.PolicyChange() } @@ -289,7 +282,7 @@ func (s *State) CreateUser(user types.User) (*types.User, change.ChangeSet, erro // UpdateUser modifies an existing user using the provided update function within a transaction. // Returns the updated user, change set, and any error. -func (s *State) UpdateUser(userID types.UserID, updateFn func(*types.User) error) (*types.User, change.ChangeSet, error) { +func (s *State) UpdateUser(userID types.UserID, updateFn func(*types.User) error) (*types.User, change.Change, error) { user, err := hsdb.Write(s.db.DB, func(tx *gorm.DB) (*types.User, error) { user, err := hsdb.GetUserByID(tx, userID) if err != nil { @@ -309,13 +302,13 @@ func (s *State) UpdateUser(userID types.UserID, updateFn func(*types.User) error return user, nil }) if err != nil { - return nil, change.EmptySet, err + return nil, change.Change{}, err } // Check if policy manager needs updating c, err := s.updatePolicyManagerUsers() if err != nil { - return user, change.EmptySet, fmt.Errorf("failed to update policy manager after user update: %w", err) + return user, change.Change{}, fmt.Errorf("failed to update policy manager after user update: %w", err) } // TODO(kradalby): We might want to update nodestore with the user data @@ -325,12 +318,33 @@ func (s *State) UpdateUser(userID types.UserID, updateFn func(*types.User) error // DeleteUser permanently removes a user and all associated data (nodes, API keys, etc). // This operation is irreversible. -func (s *State) DeleteUser(userID types.UserID) error { - return s.db.DestroyUser(userID) +// It also updates the policy manager to ensure ACL policies referencing the deleted +// user are re-evaluated immediately, fixing issue #2967. +func (s *State) DeleteUser(userID types.UserID) (change.Change, error) { + err := s.db.DestroyUser(userID) + if err != nil { + return change.Change{}, err + } + + // Update policy manager with the new user list (without the deleted user) + // This ensures that if the policy references the deleted user, it gets + // re-evaluated immediately rather than when some other operation triggers it. 
+ c, err := s.updatePolicyManagerUsers() + if err != nil { + return change.Change{}, fmt.Errorf("updating policy after user deletion: %w", err) + } + + // If the policy manager doesn't detect changes, still return UserRemoved + // to ensure peer lists are refreshed + if c.IsEmpty() { + c = change.UserRemoved() + } + + return c, nil } // RenameUser changes a user's name. The new name must be unique. -func (s *State) RenameUser(userID types.UserID, newName string) (*types.User, change.ChangeSet, error) { +func (s *State) RenameUser(userID types.UserID, newName string) (*types.User, change.Change, error) { return s.UpdateUser(userID, func(user *types.User) error { user.Name = newName return nil @@ -367,9 +381,9 @@ func (s *State) ListAllUsers() ([]types.User, error) { // NodeStore and the database. It verifies the node still exists in NodeStore to prevent // race conditions where a node might be deleted between UpdateNode returning and // persistNodeToDB being called. -func (s *State) persistNodeToDB(node types.NodeView) (types.NodeView, change.ChangeSet, error) { +func (s *State) persistNodeToDB(node types.NodeView) (types.NodeView, change.Change, error) { if !node.Valid() { - return types.NodeView{}, change.EmptySet, fmt.Errorf("invalid node view provided") + return types.NodeView{}, change.Change{}, ErrInvalidNodeView } // Verify the node still exists in NodeStore before persisting to database. @@ -383,7 +397,8 @@ func (s *State) persistNodeToDB(node types.NodeView) (types.NodeView, change.Cha Str("node.name", node.Hostname()). Bool("is_ephemeral", node.IsEphemeral()). Msg("Node no longer exists in NodeStore, skipping database persist to prevent race condition") - return types.NodeView{}, change.EmptySet, fmt.Errorf("node %d no longer exists in NodeStore, skipping database persist", node.ID()) + + return types.NodeView{}, change.Change{}, fmt.Errorf("%w: %d", ErrNodeNotInNodeStore, node.ID()) } nodePtr := node.AsStruct() @@ -393,23 +408,23 @@ func (s *State) persistNodeToDB(node types.NodeView) (types.NodeView, change.Cha // See: https://github.com/juanfont/headscale/issues/2862 err := s.db.DB.Omit("expiry").Updates(nodePtr).Error if err != nil { - return types.NodeView{}, change.EmptySet, fmt.Errorf("saving node: %w", err) + return types.NodeView{}, change.Change{}, fmt.Errorf("saving node: %w", err) } // Check if policy manager needs updating c, err := s.updatePolicyManagerNodes() if err != nil { - return nodePtr.View(), change.EmptySet, fmt.Errorf("failed to update policy manager after node save: %w", err) + return nodePtr.View(), change.Change{}, fmt.Errorf("failed to update policy manager after node save: %w", err) } - if c.Empty() { + if c.IsEmpty() { c = change.NodeAdded(node.ID()) } return node, c, nil } -func (s *State) SaveNode(node types.NodeView) (types.NodeView, change.ChangeSet, error) { +func (s *State) SaveNode(node types.NodeView) (types.NodeView, change.Change, error) { // Update NodeStore first nodePtr := node.AsStruct() @@ -421,31 +436,35 @@ func (s *State) SaveNode(node types.NodeView) (types.NodeView, change.ChangeSet, // DeleteNode permanently removes a node and cleans up associated resources. // Returns whether policies changed and any error. This operation is irreversible. 
-func (s *State) DeleteNode(node types.NodeView) (change.ChangeSet, error) { +func (s *State) DeleteNode(node types.NodeView) (change.Change, error) { s.nodeStore.DeleteNode(node.ID()) err := s.db.DeleteNode(node.AsStruct()) if err != nil { - return change.EmptySet, err + return change.Change{}, err } + s.ipAlloc.FreeIPs(node.IPs()) + c := change.NodeRemoved(node.ID()) // Check if policy manager needs updating after node deletion policyChange, err := s.updatePolicyManagerNodes() if err != nil { - return change.EmptySet, fmt.Errorf("failed to update policy manager after node deletion: %w", err) + return change.Change{}, fmt.Errorf("failed to update policy manager after node deletion: %w", err) } - if !policyChange.Empty() { - c = policyChange + if !policyChange.IsEmpty() { + // Merge policy change with NodeRemoved to preserve PeersRemoved info + // This ensures the batcher cleans up the deleted node from its state + c = c.Merge(policyChange) } return c, nil } // Connect marks a node as connected and updates its primary routes in the state. -func (s *State) Connect(id types.NodeID) []change.ChangeSet { +func (s *State) Connect(id types.NodeID) []change.Change { // CRITICAL FIX: Update the online status in NodeStore BEFORE creating change notification // This ensures that when the NodeCameOnline change is distributed and processed by other nodes, // the NodeStore already reflects the correct online status for full map generation. @@ -457,7 +476,8 @@ func (s *State) Connect(id types.NodeID) []change.ChangeSet { if !ok { return nil } - c := []change.ChangeSet{change.NodeOnline(id)} + + c := []change.Change{change.NodeOnlineFor(node)} log.Info().Uint64("node.id", id.Uint64()).Str("node.name", node.Hostname()).Msg("Node connected") @@ -474,7 +494,7 @@ func (s *State) Connect(id types.NodeID) []change.ChangeSet { } // Disconnect marks a node as disconnected and updates its primary routes in the state. -func (s *State) Disconnect(id types.NodeID) ([]change.ChangeSet, error) { +func (s *State) Disconnect(id types.NodeID) ([]change.Change, error) { now := time.Now() node, ok := s.nodeStore.UpdateNode(id, func(n *types.Node) { @@ -496,14 +516,15 @@ func (s *State) Disconnect(id types.NodeID) ([]change.ChangeSet, error) { // Log error but don't fail the disconnection - NodeStore is already updated // and we need to send change notifications to peers log.Error().Err(err).Uint64("node.id", id.Uint64()).Str("node.name", node.Hostname()).Msg("Failed to update last seen in database") - c = change.EmptySet + + c = change.Change{} } // The node is disconnecting so make sure that none of the routes it // announced are served to any nodes. routeChange := s.primaryRoutes.SetRoutes(id) - cs := []change.ChangeSet{change.NodeOffline(id), c} + cs := []change.Change{change.NodeOfflineFor(node), c} // If we have a policy change or route change, return that as it's more comprehensive // Otherwise, return the NodeOffline change to ensure nodes are notified @@ -606,7 +627,7 @@ func (s *State) ListEphemeralNodes() views.Slice[types.NodeView] { } // SetNodeExpiry updates the expiration time for a node. -func (s *State) SetNodeExpiry(nodeID types.NodeID, expiry time.Time) (types.NodeView, change.ChangeSet, error) { +func (s *State) SetNodeExpiry(nodeID types.NodeID, expiry time.Time) (types.NodeView, change.Change, error) { // Update NodeStore before database to ensure consistency. The NodeStore update is // blocking and will be the source of truth for the batcher. The database update must // make the exact same change. 
If the database update fails, the NodeStore change will @@ -618,30 +639,79 @@ func (s *State) SetNodeExpiry(nodeID types.NodeID, expiry time.Time) (types.Node }) if !ok { - return types.NodeView{}, change.EmptySet, fmt.Errorf("node not found in NodeStore: %d", nodeID) + return types.NodeView{}, change.Change{}, fmt.Errorf("%w: %d", ErrNodeNotInNodeStore, nodeID) } return s.persistNodeToDB(n) } -// SetNodeTags assigns tags to a node for use in access control policies. -func (s *State) SetNodeTags(nodeID types.NodeID, tags []string) (types.NodeView, change.ChangeSet, error) { +// SetNodeTags assigns tags to a node, making it a "tagged node". +// Once a node is tagged, it cannot be un-tagged (only tags can be changed). +// The UserID is preserved as "created by" information. +func (s *State) SetNodeTags(nodeID types.NodeID, tags []string) (types.NodeView, change.Change, error) { + // CANNOT REMOVE ALL TAGS + if len(tags) == 0 { + return types.NodeView{}, change.Change{}, types.ErrCannotRemoveAllTags + } + + // Get node for validation + existingNode, exists := s.nodeStore.GetNode(nodeID) + if !exists { + return types.NodeView{}, change.Change{}, fmt.Errorf("%w: %d", ErrNodeNotFound, nodeID) + } + + // Validate tags: must have correct format and exist in policy + validatedTags := make([]string, 0, len(tags)) + invalidTags := make([]string, 0) + + for _, tag := range tags { + if !strings.HasPrefix(tag, "tag:") || !s.polMan.TagExists(tag) { + invalidTags = append(invalidTags, tag) + + continue + } + + validatedTags = append(validatedTags, tag) + } + + if len(invalidTags) > 0 { + return types.NodeView{}, change.Change{}, fmt.Errorf("%w %v are invalid or not permitted", ErrRequestedTagsInvalidOrNotPermitted, invalidTags) + } + + slices.Sort(validatedTags) + validatedTags = slices.Compact(validatedTags) + + // Log the operation + logTagOperation(existingNode, validatedTags) + // Update NodeStore before database to ensure consistency. The NodeStore update is // blocking and will be the source of truth for the batcher. The database update must // make the exact same change. n, ok := s.nodeStore.UpdateNode(nodeID, func(node *types.Node) { - node.ForcedTags = tags + node.Tags = validatedTags + // UserID is preserved as "created by" - do NOT set to nil }) if !ok { - return types.NodeView{}, change.EmptySet, fmt.Errorf("node not found in NodeStore: %d", nodeID) + return types.NodeView{}, change.Change{}, fmt.Errorf("%w: %d", ErrNodeNotInNodeStore, nodeID) } - return s.persistNodeToDB(n) + nodeView, c, err := s.persistNodeToDB(n) + if err != nil { + return nodeView, c, err + } + + // Set OriginNode so the mapper knows to include self info for this node. + // When tags change, persistNodeToDB returns PolicyChange which doesn't set OriginNode, + // so the mapper's self-update check fails and the node never sees its new tags. + // Setting OriginNode ensures the node gets a self-update with the new tags. + c.OriginNode = nodeID + + return nodeView, c, nil } // SetApprovedRoutes sets the network routes that a node is approved to advertise. -func (s *State) SetApprovedRoutes(nodeID types.NodeID, routes []netip.Prefix) (types.NodeView, change.ChangeSet, error) { +func (s *State) SetApprovedRoutes(nodeID types.NodeID, routes []netip.Prefix) (types.NodeView, change.Change, error) { // TODO(kradalby): In principle we should call the AutoApprove logic here // because even if the CLI removes an auto-approved route, it will be added // back automatically. 
@@ -650,13 +720,13 @@ func (s *State) SetApprovedRoutes(nodeID types.NodeID, routes []netip.Prefix) (t }) if !ok { - return types.NodeView{}, change.EmptySet, fmt.Errorf("node not found in NodeStore: %d", nodeID) + return types.NodeView{}, change.Change{}, fmt.Errorf("%w: %d", ErrNodeNotInNodeStore, nodeID) } // Persist the node changes to the database nodeView, c, err := s.persistNodeToDB(n) if err != nil { - return types.NodeView{}, change.EmptySet, err + return types.NodeView{}, change.Change{}, err } // Update primary routes table based on SubnetRoutes (intersection of announced and approved). @@ -674,9 +744,9 @@ func (s *State) SetApprovedRoutes(nodeID types.NodeID, routes []netip.Prefix) (t } // RenameNode changes the display name of a node. -func (s *State) RenameNode(nodeID types.NodeID, newName string) (types.NodeView, change.ChangeSet, error) { +func (s *State) RenameNode(nodeID types.NodeID, newName string) (types.NodeView, change.Change, error) { if err := util.ValidateHostname(newName); err != nil { - return types.NodeView{}, change.EmptySet, fmt.Errorf("renaming node: %w", err) + return types.NodeView{}, change.Change{}, fmt.Errorf("renaming node: %w", err) } // Check name uniqueness against NodeStore @@ -684,7 +754,7 @@ func (s *State) RenameNode(nodeID types.NodeID, newName string) (types.NodeView, for i := 0; i < allNodes.Len(); i++ { node := allNodes.At(i) if node.ID() != nodeID && node.AsStruct().GivenName == newName { - return types.NodeView{}, change.EmptySet, fmt.Errorf("name is not unique: %s", newName) + return types.NodeView{}, change.Change{}, fmt.Errorf("%w: %s", ErrNodeNameNotUnique, newName) } } @@ -696,35 +766,7 @@ func (s *State) RenameNode(nodeID types.NodeID, newName string) (types.NodeView, }) if !ok { - return types.NodeView{}, change.EmptySet, fmt.Errorf("node not found in NodeStore: %d", nodeID) - } - - return s.persistNodeToDB(n) -} - -// AssignNodeToUser transfers a node to a different user. -func (s *State) AssignNodeToUser(nodeID types.NodeID, userID types.UserID) (types.NodeView, change.ChangeSet, error) { - // Validate that both node and user exist - _, found := s.GetNodeByID(nodeID) - if !found { - return types.NodeView{}, change.EmptySet, fmt.Errorf("node not found: %d", nodeID) - } - - user, err := s.GetUserByID(userID) - if err != nil { - return types.NodeView{}, change.EmptySet, fmt.Errorf("user not found: %w", err) - } - - // Update NodeStore before database to ensure consistency. The NodeStore update is - // blocking and will be the source of truth for the batcher. The database update must - // make the exact same change. - n, ok := s.nodeStore.UpdateNode(nodeID, func(n *types.Node) { - n.User = *user - n.UserID = uint(userID) - }) - - if !ok { - return types.NodeView{}, change.EmptySet, fmt.Errorf("node not found in NodeStore: %d", nodeID) + return types.NodeView{}, change.Change{}, fmt.Errorf("%w: %d", ErrNodeNotInNodeStore, nodeID) } return s.persistNodeToDB(n) @@ -769,12 +811,12 @@ func (s *State) BackfillNodeIPs() ([]string, error) { // ExpireExpiredNodes finds and processes expired nodes since the last check. // Returns next check time, state update with expired nodes, and whether any were found. 
-func (s *State) ExpireExpiredNodes(lastCheck time.Time) (time.Time, []change.ChangeSet, bool) { +func (s *State) ExpireExpiredNodes(lastCheck time.Time) (time.Time, []change.Change, bool) { // Why capture start time: We need to ensure we don't miss nodes that expire // while this function is running by using a consistent timestamp for the next check started := time.Now() - var updates []change.ChangeSet + var updates []change.Change for _, node := range s.nodeStore.ListNodes().All() { if !node.Valid() { @@ -784,7 +826,7 @@ func (s *State) ExpireExpiredNodes(lastCheck time.Time) (time.Time, []change.Cha // Why check After(lastCheck): We only want to notify about nodes that // expired since the last check to avoid duplicate notifications if node.IsExpired() && node.Expiry().Valid() && node.Expiry().Get().After(lastCheck) { - updates = append(updates, change.KeyExpiry(node.ID(), node.Expiry().Get())) + updates = append(updates, change.KeyExpiryFor(node.ID(), node.Expiry().Get())) } } @@ -827,7 +869,7 @@ func (s *State) SetPolicy(pol []byte) (bool, error) { // AutoApproveRoutes checks if a node's routes should be auto-approved. // AutoApproveRoutes checks if any routes should be auto-approved for a node and updates them. -func (s *State) AutoApproveRoutes(nv types.NodeView) bool { +func (s *State) AutoApproveRoutes(nv types.NodeView) (change.Change, error) { approved, changed := policy.ApproveRoutesWithPolicy(s.polMan, nv, nv.ApprovedRoutes().AsSlice(), nv.AnnouncedRoutes()) if changed { log.Debug(). @@ -840,7 +882,7 @@ func (s *State) AutoApproveRoutes(nv types.NodeView) bool { // Persist the auto-approved routes to database and NodeStore via SetApprovedRoutes // This ensures consistency between database and NodeStore - _, _, err := s.SetApprovedRoutes(nv.ID(), approved) + _, c, err := s.SetApprovedRoutes(nv.ID(), approved) if err != nil { log.Error(). Uint64("node.id", nv.ID().Uint64()). @@ -848,13 +890,15 @@ func (s *State) AutoApproveRoutes(nv types.NodeView) bool { Err(err). Msg("Failed to persist auto-approved routes") - return false + return change.Change{}, err } log.Info().Uint64("node.id", nv.ID().Uint64()).Str("node.name", nv.Hostname()).Strs("routes.approved", util.PrefixesToString(approved)).Msg("Routes approved") + + return c, nil } - return changed + return change.Change{}, nil } // GetPolicy retrieves the current policy from the database. @@ -868,14 +912,14 @@ func (s *State) SetPolicyInDB(data string) (*types.Policy, error) { } // SetNodeRoutes sets the primary routes for a node. -func (s *State) SetNodeRoutes(nodeID types.NodeID, routes ...netip.Prefix) change.ChangeSet { +func (s *State) SetNodeRoutes(nodeID types.NodeID, routes ...netip.Prefix) change.Change { if s.primaryRoutes.SetRoutes(nodeID, routes...) { // Route changes affect packet filters for all nodes, so trigger a policy change // to ensure filters are regenerated across the entire network return change.PolicyChange() } - return change.EmptySet + return change.Change{} } // GetNodePrimaryRoutes returns the primary routes for a node. @@ -899,10 +943,22 @@ func (s *State) CreateAPIKey(expiration *time.Time) (string, *types.APIKey, erro } // GetAPIKey retrieves an API key by its prefix. -func (s *State) GetAPIKey(prefix string) (*types.APIKey, error) { +// Accepts both display format (hskey-api-{12chars}-***) and database format ({12chars}). 
+func (s *State) GetAPIKey(displayPrefix string) (*types.APIKey, error) { + // Parse the display prefix to extract the database prefix + prefix, err := hsdb.ParseAPIKeyPrefix(displayPrefix) + if err != nil { + return nil, err + } + return s.db.GetAPIKey(prefix) } +// GetAPIKeyByID retrieves an API key by its database ID. +func (s *State) GetAPIKeyByID(id uint64) (*types.APIKey, error) { + return s.db.GetAPIKeyByID(id) +} + // ExpireAPIKey marks an API key as expired. func (s *State) ExpireAPIKey(key *types.APIKey) error { return s.db.ExpireAPIKey(key) @@ -919,7 +975,8 @@ func (s *State) DestroyAPIKey(key types.APIKey) error { } // CreatePreAuthKey generates a new pre-authentication key for a user. -func (s *State) CreatePreAuthKey(userID types.UserID, reusable bool, ephemeral bool, expiration *time.Time, aclTags []string) (*types.PreAuthKey, error) { +// The userID parameter is now optional (can be nil) for system-created tagged keys. +func (s *State) CreatePreAuthKey(userID *types.UserID, reusable bool, ephemeral bool, expiration *time.Time, aclTags []string) (*types.PreAuthKeyNew, error) { return s.db.CreatePreAuthKey(userID, reusable, ephemeral, expiration, aclTags) } @@ -961,13 +1018,18 @@ func (s *State) GetPreAuthKey(id string) (*types.PreAuthKey, error) { } // ListPreAuthKeys returns all pre-authentication keys for a user. -func (s *State) ListPreAuthKeys(userID types.UserID) ([]types.PreAuthKey, error) { - return s.db.ListPreAuthKeys(userID) +func (s *State) ListPreAuthKeys() ([]types.PreAuthKey, error) { + return s.db.ListPreAuthKeys() } // ExpirePreAuthKey marks a pre-authentication key as expired. -func (s *State) ExpirePreAuthKey(preAuthKey *types.PreAuthKey) error { - return s.db.ExpirePreAuthKey(preAuthKey) +func (s *State) ExpirePreAuthKey(id uint64) error { + return s.db.ExpirePreAuthKey(id) +} + +// DeletePreAuthKey permanently deletes a pre-authentication key. +func (s *State) DeletePreAuthKey(id uint64) error { + return s.db.DeletePreAuthKey(id) } // GetRegistrationCacheEntry retrieves a node registration from cache. @@ -1050,8 +1112,6 @@ func (s *State) createAndSaveNewNode(params newNodeParams) (types.NodeView, erro // Prepare the node for registration nodeToRegister := types.Node{ Hostname: params.Hostname, - UserID: params.User.ID, - User: params.User, MachineKey: params.MachineKey, NodeKey: params.NodeKey, DiscoKey: params.DiscoKey, @@ -1062,11 +1122,80 @@ func (s *State) createAndSaveNewNode(params newNodeParams) (types.NodeView, erro Expiry: params.Expiry, } - // Pre-auth key specific fields + // Assign ownership based on PreAuthKey if params.PreAuthKey != nil { - nodeToRegister.ForcedTags = params.PreAuthKey.Proto().GetAclTags() + if params.PreAuthKey.IsTagged() { + // TAGGED NODE + // Tags from PreAuthKey are assigned ONLY during initial authentication + nodeToRegister.Tags = params.PreAuthKey.Proto().GetAclTags() + + // Set UserID to track "created by" (who created the PreAuthKey) + if params.PreAuthKey.UserID != nil { + nodeToRegister.UserID = params.PreAuthKey.UserID + nodeToRegister.User = params.PreAuthKey.User + } + // If PreAuthKey.UserID is nil, the node is "orphaned" (system-created) + + // Tagged nodes have key expiry disabled. 
+ nodeToRegister.Expiry = nil + } else { + // USER-OWNED NODE + nodeToRegister.UserID = &params.PreAuthKey.User.ID + nodeToRegister.User = params.PreAuthKey.User + nodeToRegister.Tags = nil + } nodeToRegister.AuthKey = params.PreAuthKey nodeToRegister.AuthKeyID = &params.PreAuthKey.ID + } else { + // Non-PreAuthKey registration (OIDC, CLI) - always user-owned + nodeToRegister.UserID = &params.User.ID + nodeToRegister.User = &params.User + nodeToRegister.Tags = nil + } + + // Reject advertise-tags for PreAuthKey registrations early, before any resource allocation. + // PreAuthKey nodes get their tags from the key itself, not from client requests. + if params.PreAuthKey != nil && params.Hostinfo != nil && len(params.Hostinfo.RequestTags) > 0 { + return types.NodeView{}, fmt.Errorf("%w %v are invalid or not permitted", ErrRequestedTagsInvalidOrNotPermitted, params.Hostinfo.RequestTags) + } + + // Process RequestTags (from tailscale up --advertise-tags) ONLY for non-PreAuthKey registrations. + // Validate early before IP allocation to avoid resource leaks on failure. + if params.PreAuthKey == nil && params.Hostinfo != nil && len(params.Hostinfo.RequestTags) > 0 { + var approvedTags, rejectedTags []string + + for _, tag := range params.Hostinfo.RequestTags { + if s.polMan.NodeCanHaveTag(nodeToRegister.View(), tag) { + approvedTags = append(approvedTags, tag) + } else { + rejectedTags = append(rejectedTags, tag) + } + } + + // Reject registration if any requested tags are unauthorized + if len(rejectedTags) > 0 { + return types.NodeView{}, fmt.Errorf("%w %v are invalid or not permitted", ErrRequestedTagsInvalidOrNotPermitted, rejectedTags) + } + + if len(approvedTags) > 0 { + nodeToRegister.Tags = approvedTags + slices.Sort(nodeToRegister.Tags) + nodeToRegister.Tags = slices.Compact(nodeToRegister.Tags) + + // Tagged nodes have key expiry disabled. + nodeToRegister.Expiry = nil + + log.Info(). + Str("node.name", nodeToRegister.Hostname). + Strs("tags", nodeToRegister.Tags). + Msg("approved advertise-tags during registration") + } + } + + // Validate before saving + err := validateNodeOwnership(&nodeToRegister) + if err != nil { + return types.NodeView{}, err + } // Allocate new IPs @@ -1110,23 +1239,110 @@ func (s *State) createAndSaveNewNode(params newNodeParams) (types.NodeView, erro return s.nodeStore.PutNode(*savedNode), nil } +// processReauthTags handles tag changes during node re-authentication. +// It processes RequestTags from the client and updates node tags accordingly. +// Returns rejected tags (if any) for post-validation error handling. +func (s *State) processReauthTags( + node *types.Node, + requestTags []string, + user *types.User, + oldTags []string, +) []string { + wasAuthKeyTagged := node.AuthKey != nil && node.AuthKey.IsTagged() + + logEvent := log.Debug(). + Uint64("node.id", uint64(node.ID)). + Str("node.name", node.Hostname). + Strs("request.tags", requestTags). + Strs("current.tags", node.Tags). + Bool("is.tagged", node.IsTagged()). + Bool("was.authkey.tagged", wasAuthKeyTagged) + logEvent.Msg("Processing RequestTags during reauth") + + // Empty RequestTags means untag node (transition to user-owned) + if len(requestTags) == 0 { + if node.IsTagged() { + log.Info(). + Uint64("node.id", uint64(node.ID)). + Str("node.name", node.Hostname). + Strs("removed.tags", node.Tags). + Str("user.name", user.Name). + Bool("was.authkey.tagged", wasAuthKeyTagged).
+ Msg("Reauth: removing all tags, returning node ownership to user") + + node.Tags = []string{} + node.UserID = &user.ID + } + + return nil + } + + // Non-empty RequestTags: validate and apply + var approvedTags, rejectedTags []string + + for _, tag := range requestTags { + if s.polMan.NodeCanHaveTag(node.View(), tag) { + approvedTags = append(approvedTags, tag) + } else { + rejectedTags = append(rejectedTags, tag) + } + } + + if len(rejectedTags) > 0 { + log.Warn(). + Uint64("node.id", uint64(node.ID)). + Str("node.name", node.Hostname). + Strs("rejected.tags", rejectedTags). + Msg("Reauth: requested tags are not permitted") + + return rejectedTags + } + + if len(approvedTags) > 0 { + slices.Sort(approvedTags) + approvedTags = slices.Compact(approvedTags) + + wasTagged := node.IsTagged() + node.Tags = approvedTags + + // Note: UserID is preserved as "created by" tracking, consistent with SetNodeTags + if !wasTagged { + log.Info(). + Uint64("node.id", uint64(node.ID)). + Str("node.name", node.Hostname). + Strs("new.tags", approvedTags). + Str("old.user", user.Name). + Msg("Reauth: applying tags, transferring node to tagged-devices") + } else { + log.Info(). + Uint64("node.id", uint64(node.ID)). + Str("node.name", node.Hostname). + Strs("old.tags", oldTags). + Strs("new.tags", approvedTags). + Msg("Reauth: updating tags on already-tagged node") + } + } + + return nil +} + // HandleNodeFromAuthPath handles node registration through authentication flow (like OIDC). func (s *State) HandleNodeFromAuthPath( registrationID types.RegistrationID, userID types.UserID, expiry *time.Time, registrationMethod string, -) (types.NodeView, change.ChangeSet, error) { +) (types.NodeView, change.Change, error) { // Get the registration entry from cache regEntry, ok := s.GetRegistrationCacheEntry(registrationID) if !ok { - return types.NodeView{}, change.EmptySet, hsdb.ErrNodeNotFoundRegistrationCache + return types.NodeView{}, change.Change{}, hsdb.ErrNodeNotFoundRegistrationCache } // Get the user user, err := s.db.GetUserByID(userID) if err != nil { - return types.NodeView{}, change.EmptySet, fmt.Errorf("failed to find user: %w", err) + return types.NodeView{}, change.Change{}, fmt.Errorf("failed to find user: %w", err) } // Ensure we have a valid hostname from the registration cache entry @@ -1143,7 +1359,7 @@ func (s *State) HandleNodeFromAuthPath( logHostinfoValidation( regEntry.Node.MachineKey.ShortString(), regEntry.Node.NodeKey.String(), - user.Username(), + user.Name, hostname, regEntry.Node.Hostinfo, ) @@ -1155,14 +1371,26 @@ func (s *State) HandleNodeFromAuthPath( // If this node exists for this user, update the node in place. if existsSameUser && existingNodeSameUser.Valid() { - log.Debug(). + log.Info(). Caller(). Str("registration_id", registrationID.String()). - Str("user.name", user.Username()). + Str("user.name", user.Name). Str("registrationMethod", registrationMethod). Str("node.name", existingNodeSameUser.Hostname()). Uint64("node.id", existingNodeSameUser.ID().Uint64()). - Msg("Updating existing node registration") + Interface("hostinfo", regEntry.Node.Hostinfo). 
+ Msg("Updating existing node registration via reauth") + + // Process RequestTags during reauth (#2979) + // Due to json:",omitempty", we treat empty/nil as "clear tags" + var requestTags []string + if regEntry.Node.Hostinfo != nil { + requestTags = regEntry.Node.Hostinfo.RequestTags + } + + oldTags := existingNodeSameUser.Tags().AsSlice() + + var rejectedTags []string // Update existing node - NodeStore first, then database updatedNodeView, ok := s.nodeStore.UpdateNode(existingNodeSameUser.ID(), func(node *types.Node) { @@ -1182,15 +1410,25 @@ func (s *State) HandleNodeFromAuthPath( node.IsOnline = ptr.To(false) node.LastSeen = ptr.To(time.Now()) - if expiry != nil { - node.Expiry = expiry - } else { - node.Expiry = regEntry.Node.Expiry + // Tagged nodes keep their existing expiry (disabled). + // User-owned nodes update expiry from the provided value or registration entry. + if !node.IsTagged() { + if expiry != nil { + node.Expiry = expiry + } else { + node.Expiry = regEntry.Node.Expiry + } } + + rejectedTags = s.processReauthTags(node, requestTags, user, oldTags) }) if !ok { - return types.NodeView{}, change.EmptySet, fmt.Errorf("node not found in NodeStore: %d", existingNodeSameUser.ID()) + return types.NodeView{}, change.Change{}, fmt.Errorf("%w: %d", ErrNodeNotInNodeStore, existingNodeSameUser.ID()) + } + + if len(rejectedTags) > 0 { + return types.NodeView{}, change.Change{}, fmt.Errorf("%w %v are invalid or not permitted", ErrRequestedTagsInvalidOrNotPermitted, rejectedTags) } _, err = hsdb.Write(s.db.DB, func(tx *gorm.DB) (*types.Node, error) { @@ -1202,7 +1440,7 @@ func (s *State) HandleNodeFromAuthPath( return nil, nil }) if err != nil { - return types.NodeView{}, change.EmptySet, err + return types.NodeView{}, change.Change{}, err } log.Trace(). @@ -1220,7 +1458,7 @@ func (s *State) HandleNodeFromAuthPath( // Check if node exists with this machine key for a different user (for netinfo preservation) existingNodeAnyUser, existsAnyUser := s.nodeStore.GetNodeByMachineKeyAnyUser(regEntry.Node.MachineKey) - if existsAnyUser && existingNodeAnyUser.Valid() && existingNodeAnyUser.UserID() != user.ID { + if existsAnyUser && existingNodeAnyUser.Valid() && existingNodeAnyUser.UserID().Get() != user.ID { // Node exists but belongs to a different user // Create a NEW node for the new user (do not transfer) // This allows the same machine to have separate node identities per user @@ -1230,8 +1468,8 @@ func (s *State) HandleNodeFromAuthPath( Str("existing.node.name", existingNodeAnyUser.Hostname()). Uint64("existing.node.id", existingNodeAnyUser.ID().Uint64()). Str("machine.key", regEntry.Node.MachineKey.ShortString()). - Str("old.user", oldUser.Username()). - Str("new.user", user.Username()). + Str("old.user", oldUser.Name()). + Str("new.user", user.Name). Str("method", registrationMethod). Msg("Creating new node for different user (same machine key exists for another user)") } @@ -1240,7 +1478,7 @@ func (s *State) HandleNodeFromAuthPath( log.Debug(). Caller(). Str("registration_id", registrationID.String()). - Str("user.name", user.Username()). + Str("user.name", user.Name). Str("registrationMethod", registrationMethod). Str("expiresAt", fmt.Sprintf("%v", expiry)). 
Msg("Registering new node from auth callback") @@ -1260,7 +1498,7 @@ func (s *State) HandleNodeFromAuthPath( ExistingNodeForNetinfo: cmp.Or(existingNodeAnyUser, types.NodeView{}), }) if err != nil { - return types.NodeView{}, change.EmptySet, err + return types.NodeView{}, change.Change{}, err } } @@ -1281,8 +1519,8 @@ func (s *State) HandleNodeFromAuthPath( return finalNode, change.NodeAdded(finalNode.ID()), fmt.Errorf("failed to update policy manager nodes: %w", err) } - var c change.ChangeSet - if !usersChange.Empty() || !nodesChange.Empty() { + var c change.Change + if !usersChange.IsEmpty() || !nodesChange.IsEmpty() { c = change.PolicyChange() } else { c = change.NodeAdded(finalNode.ID()) @@ -1295,23 +1533,50 @@ func (s *State) HandleNodeFromAuthPath( func (s *State) HandleNodeFromPreAuthKey( regReq tailcfg.RegisterRequest, machineKey key.MachinePublic, -) (types.NodeView, change.ChangeSet, error) { +) (types.NodeView, change.Change, error) { pak, err := s.GetPreAuthKey(regReq.Auth.AuthKey) if err != nil { - return types.NodeView{}, change.EmptySet, err + return types.NodeView{}, change.Change{}, err + } + + // Helper to get username for logging (handles nil User for tags-only keys) + pakUsername := func() string { + if pak.User != nil { + return pak.User.Username() + } + + return types.TaggedDevices.Name } // Check if node exists with same machine key before validating the key. // For #2830: container restarts send the same pre-auth key which may be used/expired. // Skip validation for existing nodes re-registering with the same NodeKey, as the // key was only needed for initial authentication. NodeKey rotation requires validation. - existingNodeSameUser, existsSameUser := s.nodeStore.GetNodeByMachineKey(machineKey, types.UserID(pak.User.ID)) + // + // For tags-only keys (pak.User == nil), we skip the user-based lookup since there's + // no user to match against. These keys create tagged nodes without user ownership. + var existingNodeSameUser types.NodeView - // Skip validation only if both the AuthKeyID and NodeKey match (not a rotation). - isExistingNodeReregistering := existsSameUser && existingNodeSameUser.Valid() && - existingNodeSameUser.AuthKey().Valid() && - existingNodeSameUser.AuthKeyID().Valid() && - existingNodeSameUser.AuthKeyID().Get() == pak.ID + var existsSameUser bool + + if pak.User != nil { + existingNodeSameUser, existsSameUser = s.nodeStore.GetNodeByMachineKey(machineKey, types.UserID(pak.User.ID)) + } + + // For existing nodes, skip validation if: + // 1. MachineKey matches (cryptographic proof of machine identity) + // 2. User matches (from the PAK being used) + // 3. Not a NodeKey rotation (rotation requires fresh validation) + // + // Security: MachineKey is the cryptographic identity. If someone has the MachineKey, + // they control the machine. The PAK was only needed to authorize initial join. + // We don't check which specific PAK was used originally because: + // - Container restarts may use different PAKs (e.g., env var changed) + // - Original PAK may be deleted + // - MachineKey + User is sufficient to prove this is the same node + // + // Note: For tags-only keys, existsSameUser is always false, so we always validate. + isExistingNodeReregistering := existsSameUser && existingNodeSameUser.Valid() // Check if this is a NodeKey rotation (different NodeKey) isNodeKeyRotation := existsSameUser && existingNodeSameUser.Valid() && @@ -1334,12 +1599,11 @@ func (s *State) HandleNodeFromPreAuthKey( Bool("authkey.reusable", pak.Reusable). 
Bool("nodekey.rotation", isNodeKeyRotation). Msg("Existing node re-registering with same NodeKey and auth key, skipping validation") - } else { // New node or NodeKey rotation: require valid auth key. err = pak.Validate() if err != nil { - return types.NodeView{}, change.EmptySet, err + return types.NodeView{}, change.Change{}, err } } @@ -1357,7 +1621,7 @@ func (s *State) HandleNodeFromPreAuthKey( logHostinfoValidation( machineKey.ShortString(), regReq.NodeKey.ShortString(), - pak.User.Username(), + pakUsername(), hostname, regReq.Hostinfo, ) @@ -1367,12 +1631,13 @@ func (s *State) HandleNodeFromPreAuthKey( Str("node.name", hostname). Str("machine.key", machineKey.ShortString()). Str("node.key", regReq.NodeKey.ShortString()). - Str("user.name", pak.User.Username()). + Str("user.name", pakUsername()). Msg("Registering node with pre-auth key") var finalNode types.NodeView // If this node exists for this user, update the node in place. + // Note: For tags-only keys (pak.User == nil), existsSameUser is always false. if existsSameUser && existingNodeSameUser.Valid() { log.Trace(). Caller(). @@ -1380,7 +1645,7 @@ func (s *State) HandleNodeFromPreAuthKey( Uint64("node.id", existingNodeSameUser.ID().Uint64()). Str("machine.key", machineKey.ShortString()). Str("node.key", existingNodeSameUser.NodeKey().ShortString()). - Str("user.name", pak.User.Username()). + Str("user.name", pakUsername()). Msg("Node re-registering with existing machine key and user, updating in place") // Update existing node - NodeStore first, then database @@ -1397,20 +1662,25 @@ func (s *State) HandleNodeFromPreAuthKey( node.RegisterMethod = util.RegisterMethodAuthKey - // TODO(kradalby): This might need a rework as part of #2417 - node.ForcedTags = pak.Proto().GetAclTags() + // CRITICAL: Tags from PreAuthKey are ONLY applied during initial authentication + // On re-registration, we MUST NOT change tags or node ownership + // The node keeps whatever tags/user ownership it already has + // + // Only update AuthKey reference node.AuthKey = pak node.AuthKeyID = &pak.ID node.IsOnline = ptr.To(false) node.LastSeen = ptr.To(time.Now()) - // Update expiry, if it is zero, it means that the node will - // not have an expiry anymore. If it is non-zero, we set that. - node.Expiry = ®Req.Expiry + // Tagged nodes keep their existing expiry (disabled). + // User-owned nodes update expiry from the client request. + if !node.IsTagged() { + node.Expiry = ®Req.Expiry + } }) if !ok { - return types.NodeView{}, change.EmptySet, fmt.Errorf("node not found in NodeStore: %d", existingNodeSameUser.ID()) + return types.NodeView{}, change.Change{}, fmt.Errorf("%w: %d", ErrNodeNotInNodeStore, existingNodeSameUser.ID()) } _, err = hsdb.Write(s.db.DB, func(tx *gorm.DB) (*types.Node, error) { @@ -1430,7 +1700,7 @@ func (s *State) HandleNodeFromPreAuthKey( return nil, nil }) if err != nil { - return types.NodeView{}, change.EmptySet, fmt.Errorf("writing node to database: %w", err) + return types.NodeView{}, change.Change{}, fmt.Errorf("writing node to database: %w", err) } log.Trace(). @@ -1439,7 +1709,7 @@ func (s *State) HandleNodeFromPreAuthKey( Uint64("node.id", updatedNodeView.ID().Uint64()). Str("machine.key", machineKey.ShortString()). Str("node.key", updatedNodeView.NodeKey().ShortString()). - Str("user.name", pak.User.Username()). + Str("user.name", pakUsername()). 
Msg("Node re-authorized") finalNode = updatedNodeView @@ -1448,7 +1718,9 @@ func (s *State) HandleNodeFromPreAuthKey( // Check if node exists with this machine key for a different user existingNodeAnyUser, existsAnyUser := s.nodeStore.GetNodeByMachineKeyAnyUser(machineKey) - if existsAnyUser && existingNodeAnyUser.Valid() && existingNodeAnyUser.UserID() != pak.User.ID { + // For user-owned keys, check if node exists for a different user + // For tags-only keys (pak.User == nil), this check is skipped + if pak.User != nil && existsAnyUser && existingNodeAnyUser.Valid() && existingNodeAnyUser.UserID().Get() != pak.User.ID { // Node exists but belongs to a different user // Create a NEW node for the new user (do not transfer) // This allows the same machine to have separate node identities per user @@ -1458,18 +1730,25 @@ func (s *State) HandleNodeFromPreAuthKey( Str("existing.node.name", existingNodeAnyUser.Hostname()). Uint64("existing.node.id", existingNodeAnyUser.ID().Uint64()). Str("machine.key", machineKey.ShortString()). - Str("old.user", oldUser.Username()). - Str("new.user", pak.User.Username()). + Str("old.user", oldUser.Name()). + Str("new.user", pakUsername()). Msg("Creating new node for different user (same machine key exists for another user)") } - // This is a new node for this user - create it - // (Either completely new, or new for this user while existing for another user) + // This is a new node - create it + // For user-owned keys: create for the user + // For tags-only keys: create as tagged node (createAndSaveNewNode handles this via PreAuthKey) // Create and save new node + // Note: For tags-only keys, User is empty but createAndSaveNewNode uses PreAuthKey for ownership + var pakUser types.User + if pak.User != nil { + pakUser = *pak.User + } + var err error finalNode, err = s.createAndSaveNewNode(newNodeParams{ - User: pak.User, + User: pakUser, MachineKey: machineKey, NodeKey: regReq.NodeKey, DiscoKey: key.DiscoPublic{}, // DiscoKey not available in RegisterRequest @@ -1482,7 +1761,7 @@ func (s *State) HandleNodeFromPreAuthKey( ExistingNodeForNetinfo: cmp.Or(existingNodeAnyUser, types.NodeView{}), }) if err != nil { - return types.NodeView{}, change.EmptySet, fmt.Errorf("creating new node: %w", err) + return types.NodeView{}, change.Change{}, fmt.Errorf("creating new node: %w", err) } } @@ -1497,8 +1776,8 @@ func (s *State) HandleNodeFromPreAuthKey( return finalNode, change.NodeAdded(finalNode.ID()), fmt.Errorf("failed to update policy manager nodes: %w", err) } - var c change.ChangeSet - if !usersChange.Empty() || !nodesChange.Empty() { + var c change.Change + if !usersChange.IsEmpty() || !nodesChange.IsEmpty() { c = change.PolicyChange() } else { c = change.NodeAdded(finalNode.ID()) @@ -1513,17 +1792,17 @@ func (s *State) HandleNodeFromPreAuthKey( // have the list already available so it could go much quicker. Alternatively // the policy manager could have a remove or add list for users. // updatePolicyManagerUsers refreshes the policy manager with current user data. 
-func (s *State) updatePolicyManagerUsers() (change.ChangeSet, error) { +func (s *State) updatePolicyManagerUsers() (change.Change, error) { users, err := s.ListAllUsers() if err != nil { - return change.EmptySet, fmt.Errorf("listing users for policy update: %w", err) + return change.Change{}, fmt.Errorf("listing users for policy update: %w", err) } log.Debug().Caller().Int("user.count", len(users)).Msg("Policy manager user update initiated because user list modification detected") changed, err := s.polMan.SetUsers(users) if err != nil { - return change.EmptySet, fmt.Errorf("updating policy manager users: %w", err) + return change.Change{}, fmt.Errorf("updating policy manager users: %w", err) } log.Debug().Caller().Bool("policy.changed", changed).Msg("Policy manager user update completed because SetUsers operation finished") @@ -1532,7 +1811,15 @@ func (s *State) updatePolicyManagerUsers() (change.ChangeSet, error) { return change.PolicyChange(), nil } - return change.EmptySet, nil + return change.Change{}, nil +} + +// UpdatePolicyManagerUsersForTest updates the policy manager's user cache. +// This is exposed for testing purposes to sync the policy manager after +// creating test users via CreateUserForTest(). +func (s *State) UpdatePolicyManagerUsersForTest() error { + _, err := s.updatePolicyManagerUsers() + return err } // updatePolicyManagerNodes updates the policy manager with current nodes. @@ -1541,19 +1828,22 @@ func (s *State) updatePolicyManagerUsers() (change.ChangeSet, error) { // have the list already available so it could go much quicker. Alternatively // the policy manager could have a remove or add list for nodes. // updatePolicyManagerNodes refreshes the policy manager with current node data. -func (s *State) updatePolicyManagerNodes() (change.ChangeSet, error) { +func (s *State) updatePolicyManagerNodes() (change.Change, error) { nodes := s.ListNodes() changed, err := s.polMan.SetNodes(nodes) if err != nil { - return change.EmptySet, fmt.Errorf("updating policy manager nodes: %w", err) + return change.Change{}, fmt.Errorf("updating policy manager nodes: %w", err) } if changed { + // Rebuild peer maps because policy-affecting node changes (tags, user, IPs) + // affect ACL visibility. Without this, cached peer relationships use stale data. + s.nodeStore.RebuildPeerMaps() return change.PolicyChange(), nil } - return change.EmptySet, nil + return change.Change{}, nil } // PingDB checks if the database connection is healthy. @@ -1567,14 +1857,16 @@ func (s *State) PingDB(ctx context.Context) error { // TODO(kradalby): This is kind of messy, maybe this is another +1 // for an event bus. See example comments here. // autoApproveNodes automatically approves nodes based on policy rules. -func (s *State) autoApproveNodes() ([]change.ChangeSet, error) { +func (s *State) autoApproveNodes() ([]change.Change, error) { nodes := s.ListNodes() // Approve routes concurrently, this should make it likely // that the writes end in the same batch in the nodestore write. - var errg errgroup.Group - var cs []change.ChangeSet - var mu sync.Mutex + var ( + errg errgroup.Group + cs []change.Change + mu sync.Mutex + ) for _, nv := range nodes.All() { errg.Go(func() error { approved, changed := policy.ApproveRoutesWithPolicy(s.polMan, nv, nv.ApprovedRoutes().AsSlice(), nv.AnnouncedRoutes()) @@ -1615,20 +1907,29 @@ func (s *State) autoApproveNodes() ([]change.ChangeSet, error) { // - node.PeerChangeFromMapRequest // - node.ApplyPeerChange // - logTracePeerChange in poll.go. 
-func (s *State) UpdateNodeFromMapRequest(id types.NodeID, req tailcfg.MapRequest) (change.ChangeSet, error) { +func (s *State) UpdateNodeFromMapRequest(id types.NodeID, req tailcfg.MapRequest) (change.Change, error) { log.Trace(). Caller(). Uint64("node.id", id.Uint64()). Interface("request", req). Msg("Processing MapRequest for node") - var routeChange bool - var hostinfoChanged bool - var needsRouteApproval bool + var ( + routeChange bool + hostinfoChanged bool + needsRouteApproval bool + autoApprovedRoutes []netip.Prefix + endpointChanged bool + derpChanged bool + ) // We need to ensure we update the node as it is in the NodeStore at // the time of the request. updatedNode, ok := s.nodeStore.UpdateNode(id, func(currentNode *types.Node) { peerChange := currentNode.PeerChangeFromMapRequest(req) + + // Track what specifically changed + endpointChanged = peerChange.Endpoints != nil + derpChanged = peerChange.DERPRegion != 0 hostinfoChanged = !hostinfoEqual(currentNode.View(), req.Hostinfo) // Get the correct NetInfo to use @@ -1649,7 +1950,6 @@ func (s *State) UpdateNodeFromMapRequest(id types.NodeID, req tailcfg.MapRequest } // Calculate route approval before NodeStore update to avoid calling View() inside callback - var autoApprovedRoutes []netip.Prefix var hasNewRoutes bool if hi := req.Hostinfo; hi != nil { hasNewRoutes = len(hi.RoutableIPs) > 0 @@ -1715,91 +2015,109 @@ func (s *State) UpdateNodeFromMapRequest(id types.NodeID, req tailcfg.MapRequest Strs("newApprovedRoutes", util.PrefixesToString(autoApprovedRoutes)). Bool("routeChanged", routeChange). Msg("applying route approval results") - currentNode.ApprovedRoutes = autoApprovedRoutes } } }) if !ok { - return change.EmptySet, fmt.Errorf("node not found in NodeStore: %d", id) + return change.Change{}, fmt.Errorf("%w: %d", ErrNodeNotInNodeStore, id) } - nodeRouteChange := change.EmptySet - - // Handle route changes after NodeStore update - // We need to update node routes if either: - // 1. The approved routes changed (routeChange is true), OR - // 2. The announced routes changed (even if approved routes stayed the same) - // This is because SubnetRoutes is the intersection of announced AND approved routes. - needsRouteUpdate := false - var routesChangedButNotApproved bool - if hostinfoChanged && needsRouteApproval && !routeChange { - if hi := req.Hostinfo; hi != nil { - routesChangedButNotApproved = true - } - } if routeChange { - needsRouteUpdate = true log.Debug(). - Caller(). Uint64("node.id", id.Uint64()). - Msg("updating routes because approved routes changed") - } else if routesChangedButNotApproved { - needsRouteUpdate = true - log.Debug(). - Caller(). - Uint64("node.id", id.Uint64()). - Msg("updating routes because announced routes changed but approved routes did not") - } + Strs("autoApprovedRoutes", util.PrefixesToString(autoApprovedRoutes)). + Msg("Persisting auto-approved routes from MapRequest") - if needsRouteUpdate { - // SetNodeRoutes sets the active/distributed routes, so we must use AllApprovedRoutes() - // which returns only the intersection of announced AND approved routes. - // Using AnnouncedRoutes() would bypass the security model and auto-approve everything. - log.Debug(). - Caller(). - Uint64("node.id", id.Uint64()). - Strs("announcedRoutes", util.PrefixesToString(updatedNode.AnnouncedRoutes())). - Strs("approvedRoutes", util.PrefixesToString(updatedNode.ApprovedRoutes().AsSlice())). - Strs("allApprovedRoutes", util.PrefixesToString(updatedNode.AllApprovedRoutes())). 
- Msg("updating node routes for distribution") - nodeRouteChange = s.SetNodeRoutes(id, updatedNode.AllApprovedRoutes()...) - } + // SetApprovedRoutes will update both database and PrimaryRoutes table + _, c, err := s.SetApprovedRoutes(id, autoApprovedRoutes) + if err != nil { + return change.Change{}, fmt.Errorf("persisting auto-approved routes: %w", err) + } + + // If SetApprovedRoutes resulted in a policy change, return it + if !c.IsEmpty() { + return c, nil + } + } // Continue with the rest of the processing using the updated node + + // Handle route changes after NodeStore update. + // Update routes if announced routes changed (even if approved routes stayed the same) + // because SubnetRoutes is the intersection of announced AND approved routes. + nodeRouteChange := s.maybeUpdateNodeRoutes(id, updatedNode, hostinfoChanged, needsRouteApproval, routeChange, req.Hostinfo) _, policyChange, err := s.persistNodeToDB(updatedNode) if err != nil { - return change.EmptySet, fmt.Errorf("saving to database: %w", err) + return change.Change{}, fmt.Errorf("saving to database: %w", err) } if policyChange.IsFull() { return policyChange, nil } - if !nodeRouteChange.Empty() { + + if !nodeRouteChange.IsEmpty() { return nodeRouteChange, nil } + // Determine the most specific change type based on what actually changed. + // This allows us to send lightweight patch updates instead of full map responses. + return buildMapRequestChangeResponse(id, updatedNode, hostinfoChanged, endpointChanged, derpChanged) +} + +// buildMapRequestChangeResponse determines the appropriate response type for a MapRequest update. +// Hostinfo changes require a full update, while endpoint/DERP changes can use lightweight patches. +func buildMapRequestChangeResponse( + id types.NodeID, + node types.NodeView, + hostinfoChanged, endpointChanged, derpChanged bool, +) (change.Change, error) { + // Hostinfo changes require NodeAdded (full update) as they may affect many fields. + if hostinfoChanged { + return change.NodeAdded(id), nil + } + + // Return specific change types for endpoint and/or DERP updates. 
+ if endpointChanged || derpChanged { + patch := &tailcfg.PeerChange{NodeID: id.NodeID()} + + if endpointChanged { + patch.Endpoints = node.Endpoints().AsSlice() + } + + if derpChanged { + if hi := node.Hostinfo(); hi.Valid() { + if ni := hi.NetInfo(); ni.Valid() { + patch.DERPRegion = ni.PreferredDERP() + } + } + } + + return change.EndpointOrDERPUpdate(id, patch), nil + } + return change.NodeAdded(id), nil } -func hostinfoEqual(oldNode types.NodeView, new *tailcfg.Hostinfo) bool { - if !oldNode.Valid() && new == nil { +func hostinfoEqual(oldNode types.NodeView, newHI *tailcfg.Hostinfo) bool { + if !oldNode.Valid() && newHI == nil { return true } - if !oldNode.Valid() || new == nil { + + if !oldNode.Valid() || newHI == nil { return false } old := oldNode.AsStruct().Hostinfo - return old.Equal(new) + return old.Equal(newHI) } -func routesChanged(oldNode types.NodeView, new *tailcfg.Hostinfo) bool { +func routesChanged(oldNode types.NodeView, newHI *tailcfg.Hostinfo) bool { var oldRoutes []netip.Prefix if oldNode.Valid() && oldNode.AsStruct().Hostinfo != nil { oldRoutes = oldNode.AsStruct().Hostinfo.RoutableIPs } - newRoutes := new.RoutableIPs + newRoutes := newHI.RoutableIPs if newRoutes == nil { newRoutes = []netip.Prefix{} } @@ -1819,3 +2137,34 @@ func peerChangeEmpty(peerChange tailcfg.PeerChange) bool { peerChange.LastSeen == nil && peerChange.KeyExpiry == nil } + +// maybeUpdateNodeRoutes updates node routes if announced routes changed but approved routes didn't. +// This is needed because SubnetRoutes is the intersection of announced AND approved routes. +func (s *State) maybeUpdateNodeRoutes( + id types.NodeID, + node types.NodeView, + hostinfoChanged, needsRouteApproval, routeChange bool, + hostinfo *tailcfg.Hostinfo, +) change.Change { + // Only update if announced routes changed without approval change + if !hostinfoChanged || !needsRouteApproval || routeChange || hostinfo == nil { + return change.Change{} + } + + log.Debug(). + Caller(). + Uint64("node.id", id.Uint64()). + Msg("updating routes because announced routes changed but approved routes did not") + + // SetNodeRoutes sets the active/distributed routes using AllApprovedRoutes() + // which returns only the intersection of announced AND approved routes. + log.Debug(). + Caller(). + Uint64("node.id", id.Uint64()). + Strs("announcedRoutes", util.PrefixesToString(node.AnnouncedRoutes())). + Strs("approvedRoutes", util.PrefixesToString(node.ApprovedRoutes().AsSlice())). + Strs("allApprovedRoutes", util.PrefixesToString(node.AllApprovedRoutes())). + Msg("updating node routes for distribution") + + return s.SetNodeRoutes(id, node.AllApprovedRoutes()...) +} diff --git a/hscontrol/state/tags.go b/hscontrol/state/tags.go new file mode 100644 index 00000000..ef745241 --- /dev/null +++ b/hscontrol/state/tags.go @@ -0,0 +1,68 @@ +package state + +import ( + "errors" + "fmt" + + "github.com/juanfont/headscale/hscontrol/types" + "github.com/rs/zerolog/log" +) + +var ( + // ErrNodeMarkedTaggedButHasNoTags is returned when a node is marked as tagged but has no tags. + ErrNodeMarkedTaggedButHasNoTags = errors.New("node marked as tagged but has no tags") + + // ErrNodeHasNeitherUserNorTags is returned when a node has neither a user nor tags. + ErrNodeHasNeitherUserNorTags = errors.New("node has neither user nor tags - must be owned by user or tagged") + + // ErrRequestedTagsInvalidOrNotPermitted is returned when requested tags are invalid or not permitted. 
+ // This message format matches Tailscale SaaS: "requested tags [tag:xxx] are invalid or not permitted". + ErrRequestedTagsInvalidOrNotPermitted = errors.New("requested tags") +) + +// validateNodeOwnership ensures proper node ownership model. +// A node must be EITHER user-owned OR tagged (mutually exclusive by behavior). +// Tagged nodes CAN have a UserID for "created by" tracking, but the tag is the owner. +func validateNodeOwnership(node *types.Node) error { + isTagged := node.IsTagged() + + // Tagged nodes: Must have tags, UserID is optional (just "created by") + if isTagged { + if len(node.Tags) == 0 { + return fmt.Errorf("%w: %q", ErrNodeMarkedTaggedButHasNoTags, node.Hostname) + } + // UserID can be set (created by) or nil (orphaned), both valid for tagged nodes + return nil + } + + // User-owned nodes: Must have UserID, must NOT have tags + if node.UserID == nil { + return fmt.Errorf("%w: %q", ErrNodeHasNeitherUserNorTags, node.Hostname) + } + + return nil +} + +// logTagOperation logs tag assignment operations for audit purposes. +func logTagOperation(existingNode types.NodeView, newTags []string) { + if existingNode.IsTagged() { + log.Info(). + Uint64("node.id", existingNode.ID().Uint64()). + Str("node.name", existingNode.Hostname()). + Strs("old.tags", existingNode.Tags().AsSlice()). + Strs("new.tags", newTags). + Msg("Updating tags on already-tagged node") + } else { + var userID uint + if existingNode.UserID().Valid() { + userID = existingNode.UserID().Get() + } + + log.Info(). + Uint64("node.id", existingNode.ID().Uint64()). + Str("node.name", existingNode.Hostname()). + Uint("created.by.user", userID). + Strs("new.tags", newTags). + Msg("Converting user-owned node to tagged node (irreversible)") + } +} diff --git a/hscontrol/state/test_helpers.go b/hscontrol/state/test_helpers.go new file mode 100644 index 00000000..95203106 --- /dev/null +++ b/hscontrol/state/test_helpers.go @@ -0,0 +1,12 @@ +package state + +import ( + "time" +) + +// Test configuration for NodeStore batching. +// These values are optimized for test speed rather than production use. 
+const ( + TestBatchSize = 5 + TestBatchTimeout = 5 * time.Millisecond +) diff --git a/hscontrol/suite_test.go b/hscontrol/suite_test.go deleted file mode 100644 index fb64d18e..00000000 --- a/hscontrol/suite_test.go +++ /dev/null @@ -1,56 +0,0 @@ -package hscontrol - -import ( - "os" - "testing" - - "github.com/juanfont/headscale/hscontrol/types" - "gopkg.in/check.v1" -) - -func Test(t *testing.T) { - check.TestingT(t) -} - -var _ = check.Suite(&Suite{}) - -type Suite struct{} - -var ( - tmpDir string - app *Headscale -) - -func (s *Suite) SetUpTest(c *check.C) { - s.ResetDB(c) -} - -func (s *Suite) TearDownTest(c *check.C) { - os.RemoveAll(tmpDir) -} - -func (s *Suite) ResetDB(c *check.C) { - if len(tmpDir) != 0 { - os.RemoveAll(tmpDir) - } - var err error - tmpDir, err = os.MkdirTemp("", "autoygg-client-test2") - if err != nil { - c.Fatal(err) - } - cfg := types.Config{ - NoisePrivateKeyPath: tmpDir + "/noise_private.key", - Database: types.DatabaseConfig{ - Type: "sqlite3", - Sqlite: types.SqliteConfig{ - Path: tmpDir + "/headscale_test.db", - }, - }, - OIDC: types.OIDCConfig{}, - } - - app, err = NewHeadscale(&cfg) - if err != nil { - c.Fatal(err) - } -} diff --git a/hscontrol/templates/apple.go b/hscontrol/templates/apple.go index 84928ed5..3b120069 100644 --- a/hscontrol/templates/apple.go +++ b/hscontrol/templates/apple.go @@ -5,48 +5,43 @@ import ( "github.com/chasefleming/elem-go" "github.com/chasefleming/elem-go/attrs" + "github.com/chasefleming/elem-go/styles" ) func Apple(url string) *elem.Element { return HtmlStructure( elem.Title(nil, elem.Text("headscale - Apple")), - elem.Body(attrs.Props{ - attrs.Style: bodyStyle.ToInline(), - }, - headerOne("headscale: iOS configuration"), - headerTwo("GUI"), - elem.Ol(nil, + mdTypesetBody( + headscaleLogo(), + H1(elem.Text("iOS configuration")), + H2(elem.Text("GUI")), + Ol( elem.Li( nil, elem.Text("Install the official Tailscale iOS client from the "), - elem.A( - attrs.Props{ - attrs.Href: "https://apps.apple.com/app/tailscale/id1470499037", - }, - elem.Text("App store"), - ), + externalLink("https://apps.apple.com/app/tailscale/id1470499037", "App Store"), ), elem.Li( nil, - elem.Text("Open the Tailscale app"), + elem.Text("Open the "), + elem.Strong(nil, elem.Text("Tailscale")), + elem.Text(" app"), ), elem.Li( nil, - elem.Text(`Click the account icon in the top-right corner and select "Log in…".`), + elem.Text("Click the account icon in the top-right corner and select "), + elem.Strong(nil, elem.Text("Log in…")), ), elem.Li( nil, - elem.Text(`Tap the top-right options menu button and select "Use custom coordination server".`), + elem.Text("Tap the top-right options menu button and select "), + elem.Strong(nil, elem.Text("Use custom coordination server")), ), elem.Li( nil, - elem.Text( - fmt.Sprintf( - `Enter your instance URL: "%s"`, - url, - ), - ), + elem.Text("Enter your instance URL: "), + Code(elem.Text(url)), ), elem.Li( nil, @@ -55,65 +50,50 @@ func Apple(url string) *elem.Element { ), ), ), - headerOne("headscale: macOS configuration"), - headerTwo("Command line"), - elem.P(nil, + H1(elem.Text("macOS configuration")), + H2(elem.Text("Command line")), + P( elem.Text("Use Tailscale's login command to add your profile:"), ), - elem.Pre(nil, - elem.Code(nil, - elem.Text("tailscale login --login-server "+url), - ), - ), - headerTwo("GUI"), - elem.Ol(nil, + Pre(PreCode("tailscale login --login-server "+url)), + H2(elem.Text("GUI")), + Ol( elem.Li( nil, - elem.Text( - "Option + Click the Tailscale icon in the menu and hover over 
the Debug menu", - ), + elem.Text("Option + Click the "), + elem.Strong(nil, elem.Text("Tailscale")), + elem.Text(" icon in the menu and hover over the "), + elem.Strong(nil, elem.Text("Debug")), + elem.Text(" menu"), ), elem.Li(nil, - elem.Text(`Under "Custom Login Server", select "Add Account..."`), + elem.Text("Under "), + elem.Strong(nil, elem.Text("Custom Login Server")), + elem.Text(", select "), + elem.Strong(nil, elem.Text("Add Account...")), ), elem.Li( nil, - elem.Text( - fmt.Sprintf( - `Enter "%s" of the headscale instance and press "Add Account"`, - url, - ), - ), + elem.Text("Enter "), + Code(elem.Text(url)), + elem.Text(" of the headscale instance and press "), + elem.Strong(nil, elem.Text("Add Account")), ), elem.Li(nil, - elem.Text(`Follow the login procedure in the browser`), + elem.Text("Follow the login procedure in the browser"), ), ), - headerTwo("Profiles"), - elem.P( - nil, + H2(elem.Text("Profiles")), + P( elem.Text( "Headscale can be set to the default server by installing a Headscale configuration profile:", ), ), - elem.P( - nil, - elem.A( - attrs.Props{ - attrs.Href: "/apple/macos-app-store", - attrs.Download: "headscale_macos.mobileconfig", - }, - elem.Text("macOS AppStore profile "), - ), - elem.A( - attrs.Props{ - attrs.Href: "/apple/macos-standalone", - attrs.Download: "headscale_macos.mobileconfig", - }, - elem.Text("macOS Standalone profile"), - ), + elem.Div(attrs.Props{attrs.Style: styles.Props{styles.MarginTop: spaceL, styles.MarginBottom: spaceL}.ToInline()}, + downloadButton("/apple/macos-app-store", "macOS AppStore profile"), + downloadButton("/apple/macos-standalone", "macOS Standalone profile"), ), - elem.Ol(nil, + Ol( elem.Li( nil, elem.Text( @@ -121,105 +101,82 @@ func Apple(url string) *elem.Element { ), ), elem.Li(nil, - elem.Text(`Open System Preferences and go to "Profiles"`), + elem.Text("Open "), + elem.Strong(nil, elem.Text("System Preferences")), + elem.Text(" and go to "), + elem.Strong(nil, elem.Text("Profiles")), ), elem.Li(nil, - elem.Text(`Find and install the Headscale profile`), + elem.Text("Find and install the "), + elem.Strong(nil, elem.Text("Headscale")), + elem.Text(" profile"), ), elem.Li(nil, - elem.Text(`Restart Tailscale.app and log in`), + elem.Text("Restart "), + elem.Strong(nil, elem.Text("Tailscale.app")), + elem.Text(" and log in"), ), ), - elem.P(nil, elem.Text("Or")), - elem.P( - nil, + orDivider(), + P( elem.Text( - "Use your terminal to configure the default setting for Tailscale by issuing:", + "Use your terminal to configure the default setting for Tailscale by issuing one of the following commands:", ), ), - elem.Ul(nil, - elem.Li(nil, - elem.Text(`for app store client:`), - elem.Code( - nil, - elem.Text( - "defaults write io.tailscale.ipn.macos ControlURL "+url, - ), - ), - ), - elem.Li(nil, - elem.Text(`for standalone client:`), - elem.Code( - nil, - elem.Text( - "defaults write io.tailscale.ipn.macsys ControlURL "+url, - ), - ), - ), + P(elem.Text("For app store client:")), + Pre(PreCode("defaults write io.tailscale.ipn.macos ControlURL "+url)), + P(elem.Text("For standalone client:")), + Pre(PreCode("defaults write io.tailscale.ipn.macsys ControlURL "+url)), + P( + elem.Text("Restart "), + elem.Strong(nil, elem.Text("Tailscale.app")), + elem.Text(" and log in."), ), - elem.P(nil, - elem.Text("Restart Tailscale.app and log in."), - ), - headerThree("Caution"), - elem.P( - nil, - elem.Text( - "You should always download and inspect the profile before installing it:", - ), - ), - elem.Ul(nil, - elem.Li(nil, - 
elem.Text(`for app store client: `), - elem.Code(nil, - elem.Text(fmt.Sprintf(`curl %s/apple/macos-app-store`, url)), - ), - ), - elem.Li(nil, - elem.Text(`for standalone client: `), - elem.Code(nil, - elem.Text(fmt.Sprintf(`curl %s/apple/macos-standalone`, url)), - ), - ), - ), - headerOne("headscale: tvOS configuration"), - headerTwo("GUI"), - elem.Ol(nil, + warningBox("Caution", "You should always download and inspect the profile before installing it."), + P(elem.Text("For app store client:")), + Pre(PreCode(fmt.Sprintf(`curl %s/apple/macos-app-store`, url))), + P(elem.Text("For standalone client:")), + Pre(PreCode(fmt.Sprintf(`curl %s/apple/macos-standalone`, url))), + H1(elem.Text("tvOS configuration")), + H2(elem.Text("GUI")), + Ol( elem.Li( nil, elem.Text("Install the official Tailscale tvOS client from the "), - elem.A( - attrs.Props{ - attrs.Href: "https://apps.apple.com/app/tailscale/id1470499037", - }, - elem.Text("App store"), - ), + externalLink("https://apps.apple.com/app/tailscale/id1470499037", "App Store"), ), elem.Li( nil, - elem.Text( - "Open Settings (the Apple tvOS settings) > Apps > Tailscale", - ), + elem.Text("Open "), + elem.Strong(nil, elem.Text("Settings")), + elem.Text(" (the Apple tvOS settings) > "), + elem.Strong(nil, elem.Text("Apps")), + elem.Text(" > "), + elem.Strong(nil, elem.Text("Tailscale")), ), elem.Li( nil, - elem.Text( - fmt.Sprintf( - `Enter "%s" under "ALTERNATE COORDINATION SERVER URL"`, - url, - ), - ), + elem.Text("Enter "), + Code(elem.Text(url)), + elem.Text(" under "), + elem.Strong(nil, elem.Text("ALTERNATE COORDINATION SERVER URL")), ), elem.Li(nil, - elem.Text("Return to the tvOS Home screen"), + elem.Text("Return to the tvOS "), + elem.Strong(nil, elem.Text("Home")), + elem.Text(" screen"), ), elem.Li(nil, - elem.Text("Open Tailscale"), + elem.Text("Open "), + elem.Strong(nil, elem.Text("Tailscale")), ), elem.Li(nil, - elem.Text(`Select "Install VPN configuration"`), + elem.Text("Select "), + elem.Strong(nil, elem.Text("Install VPN configuration")), ), elem.Li(nil, - elem.Text(`Select "Allow"`), + elem.Text("Select "), + elem.Strong(nil, elem.Text("Allow")), ), elem.Li(nil, elem.Text("Scan the QR code and follow the login procedure"), @@ -228,6 +185,7 @@ func Apple(url string) *elem.Element { elem.Text("Headscale should now be working on your tvOS device"), ), ), + pageFooter(), ), ) } diff --git a/hscontrol/templates/design.go b/hscontrol/templates/design.go new file mode 100644 index 00000000..615c0e41 --- /dev/null +++ b/hscontrol/templates/design.go @@ -0,0 +1,482 @@ +package templates + +import ( + elem "github.com/chasefleming/elem-go" + "github.com/chasefleming/elem-go/attrs" + "github.com/chasefleming/elem-go/styles" +) + +// Design System Constants +// These constants define the visual language for all Headscale HTML templates. +// They ensure consistency across all pages and make it easy to maintain and update the design. + +// Color System +// EXTRACTED FROM: https://headscale.net/stable/assets/stylesheets/main.342714a4.min.css +// Material for MkDocs design system - exact values from official docs. +const ( + // Text colors - from --md-default-fg-color CSS variables. 
+ colorTextPrimary = "#000000de" //nolint:unused // rgba(0,0,0,0.87) - Body text + colorTextSecondary = "#0000008a" //nolint:unused // rgba(0,0,0,0.54) - Headings (--md-default-fg-color--light) + colorTextTertiary = "#00000052" //nolint:unused // rgba(0,0,0,0.32) - Lighter text + colorTextLightest = "#00000012" //nolint:unused // rgba(0,0,0,0.07) - Lightest text + + // Code colors - from --md-code-* CSS variables. + colorCodeFg = "#36464e" //nolint:unused // Code text color (--md-code-fg-color) + colorCodeBg = "#f5f5f5" //nolint:unused // Code background (--md-code-bg-color) + + // Border colors. + colorBorderLight = "#e5e7eb" //nolint:unused // Light borders + colorBorderMedium = "#d1d5db" //nolint:unused // Medium borders + + // Background colors. + colorBackgroundPage = "#ffffff" //nolint:unused // Page background + colorBackgroundCard = "#ffffff" //nolint:unused // Card/content background + + // Accent colors - from --md-primary/accent-fg-color. + colorPrimaryAccent = "#4051b5" //nolint:unused // Primary accent (links) + colorAccent = "#526cfe" //nolint:unused // Secondary accent + + // Success colors. + colorSuccess = "#059669" //nolint:unused // Success states + colorSuccessLight = "#d1fae5" //nolint:unused // Success backgrounds +) + +// Spacing System +// Based on 4px/8px base unit for consistent rhythm. +// Uses rem units for scalability with user font size preferences. +const ( + spaceXS = "0.25rem" //nolint:unused // 4px - Tight spacing + spaceS = "0.5rem" //nolint:unused // 8px - Small spacing + spaceM = "1rem" //nolint:unused // 16px - Medium spacing (base) + spaceL = "1.5rem" //nolint:unused // 24px - Large spacing + spaceXL = "2rem" //nolint:unused // 32px - Extra large spacing + space2XL = "3rem" //nolint:unused // 48px - 2x extra large spacing + space3XL = "4rem" //nolint:unused // 64px - 3x extra large spacing +) + +// Typography System +// EXTRACTED FROM: https://headscale.net/stable/assets/stylesheets/main.342714a4.min.css +// Material for MkDocs typography - exact values from .md-typeset CSS. +const ( + // Font families - from CSS custom properties. + fontFamilySystem = `"Roboto", -apple-system, BlinkMacSystemFont, "Segoe UI", "Helvetica Neue", Arial, sans-serif` //nolint:unused + fontFamilyCode = `"Roboto Mono", "SF Mono", Monaco, "Cascadia Code", Consolas, "Courier New", monospace` //nolint:unused + + // Font sizes - from .md-typeset CSS rules. + fontSizeBase = "0.8rem" //nolint:unused // 12.8px - Base text (.md-typeset) + fontSizeH1 = "2em" //nolint:unused // 2x base - Main headings + fontSizeH2 = "1.5625em" //nolint:unused // 1.5625x base - Section headings + fontSizeH3 = "1.25em" //nolint:unused // 1.25x base - Subsection headings + fontSizeSmall = "0.8em" //nolint:unused // 0.8x base - Small text + fontSizeCode = "0.85em" //nolint:unused // 0.85x base - Inline code + + // Line heights - from .md-typeset CSS rules. + lineHeightBase = "1.6" //nolint:unused // Body text (.md-typeset) + lineHeightH1 = "1.3" //nolint:unused // H1 headings + lineHeightH2 = "1.4" //nolint:unused // H2 headings + lineHeightH3 = "1.5" //nolint:unused // H3 headings + lineHeightCode = "1.4" //nolint:unused // Code blocks (pre) +) + +// Responsive Container Component +// Creates a centered container with responsive padding and max-width. +// Mobile-first approach: starts at 100% width with padding, constrains on larger screens. +// +//nolint:unused // Reserved for future use in Phase 4. 
+func responsiveContainer(children ...elem.Node) *elem.Element { + return elem.Div(attrs.Props{ + attrs.Style: styles.Props{ + styles.Width: "100%", + styles.MaxWidth: "min(800px, 90vw)", // Responsive: 90% of viewport or 800px max + styles.Margin: "0 auto", // Center horizontally + styles.Padding: "clamp(1rem, 5vw, 2.5rem)", // Fluid padding: 16px to 40px + }.ToInline(), + }, children...) +} + +// Card Component +// Reusable card for grouping related content with visual separation. +// Parameters: +// - title: Optional title for the card (empty string for no title) +// - children: Content elements to display in the card +// +//nolint:unused // Reserved for future use in Phase 4. +func card(title string, children ...elem.Node) *elem.Element { + cardContent := children + if title != "" { + // Prepend title as H3 if provided + cardContent = append([]elem.Node{ + elem.H3(attrs.Props{ + attrs.Style: styles.Props{ + styles.MarginTop: "0", + styles.MarginBottom: spaceM, + styles.FontSize: fontSizeH3, + styles.LineHeight: lineHeightH3, // 1.5 - H3 line height + styles.Color: colorTextSecondary, + }.ToInline(), + }, elem.Text(title)), + }, children...) + } + + return elem.Div(attrs.Props{ + attrs.Style: styles.Props{ + styles.Background: colorBackgroundCard, + styles.Border: "1px solid " + colorBorderLight, + styles.BorderRadius: "0.5rem", // 8px rounded corners + styles.Padding: "clamp(1rem, 3vw, 1.5rem)", // Responsive padding + styles.MarginBottom: spaceL, + styles.BoxShadow: "0 1px 3px rgba(0,0,0,0.1)", // Subtle shadow + }.ToInline(), + }, cardContent...) +} + +// Code Block Component +// EXTRACTED FROM: .md-typeset pre CSS rules +// Exact styling from Material for MkDocs documentation. +// +//nolint:unused // Used across apple.go, windows.go, register_web.go templates. +func codeBlock(code string) *elem.Element { + return elem.Pre(attrs.Props{ + attrs.Style: styles.Props{ + styles.Display: "block", + styles.Padding: "0.77em 1.18em", // From .md-typeset pre + styles.Border: "none", // No border in original + styles.BorderRadius: "0.1rem", // From .md-typeset code + styles.BackgroundColor: colorCodeBg, // #f5f5f5 + styles.FontFamily: fontFamilyCode, // Roboto Mono + styles.FontSize: fontSizeCode, // 0.85em + styles.LineHeight: lineHeightCode, // 1.4 + styles.OverflowX: "auto", // Horizontal scroll + "overflow-wrap": "break-word", // Word wrapping + "word-wrap": "break-word", // Legacy support + styles.WhiteSpace: "pre-wrap", // Preserve whitespace + styles.MarginTop: spaceM, // 1em + styles.MarginBottom: spaceM, // 1em + styles.Color: colorCodeFg, // #36464e + styles.BoxShadow: "none", // No shadow in original + }.ToInline(), + }, + elem.Code(nil, elem.Text(code)), + ) +} + +// Base Typeset Styles +// Returns inline styles for the main content container that matches .md-typeset. +// EXTRACTED FROM: .md-typeset CSS rule from Material for MkDocs. +// +//nolint:unused // Used in general.go for mdTypesetBody. +func baseTypesetStyles() styles.Props { + return styles.Props{ + styles.FontSize: fontSizeBase, // 0.8rem + styles.LineHeight: lineHeightBase, // 1.6 + styles.Color: colorTextPrimary, + styles.FontFamily: fontFamilySystem, + "overflow-wrap": "break-word", + styles.TextAlign: "left", + } +} + +// H1 Styles +// Returns inline styles for H1 headings that match .md-typeset h1. +// EXTRACTED FROM: .md-typeset h1 CSS rule from Material for MkDocs. +// +//nolint:unused // Used across templates for main headings. 
+func h1Styles() styles.Props { + return styles.Props{ + styles.Color: colorTextSecondary, // rgba(0, 0, 0, 0.54) + styles.FontSize: fontSizeH1, // 2em + styles.LineHeight: lineHeightH1, // 1.3 + styles.Margin: "0 0 1.25em", + styles.FontWeight: "300", + "letter-spacing": "-0.01em", + styles.FontFamily: fontFamilySystem, // Roboto + "overflow-wrap": "break-word", + } +} + +// H2 Styles +// Returns inline styles for H2 headings that match .md-typeset h2. +// EXTRACTED FROM: .md-typeset h2 CSS rule from Material for MkDocs. +// +//nolint:unused // Used across templates for section headings. +func h2Styles() styles.Props { + return styles.Props{ + styles.FontSize: fontSizeH2, // 1.5625em + styles.LineHeight: lineHeightH2, // 1.4 + styles.Margin: "1.6em 0 0.64em", + styles.FontWeight: "300", + "letter-spacing": "-0.01em", + styles.Color: colorTextSecondary, // rgba(0, 0, 0, 0.54) + styles.FontFamily: fontFamilySystem, // Roboto + "overflow-wrap": "break-word", + } +} + +// H3 Styles +// Returns inline styles for H3 headings that match .md-typeset h3. +// EXTRACTED FROM: .md-typeset h3 CSS rule from Material for MkDocs. +// +//nolint:unused // Used across templates for subsection headings. +func h3Styles() styles.Props { + return styles.Props{ + styles.FontSize: fontSizeH3, // 1.25em + styles.LineHeight: lineHeightH3, // 1.5 + styles.Margin: "1.6em 0 0.8em", + styles.FontWeight: "400", + "letter-spacing": "-0.01em", + styles.Color: colorTextSecondary, // rgba(0, 0, 0, 0.54) + styles.FontFamily: fontFamilySystem, // Roboto + "overflow-wrap": "break-word", + } +} + +// Paragraph Styles +// Returns inline styles for paragraphs that match .md-typeset p. +// EXTRACTED FROM: .md-typeset p CSS rule from Material for MkDocs. +// +//nolint:unused // Used for consistent paragraph spacing. +func paragraphStyles() styles.Props { + return styles.Props{ + styles.Margin: "1em 0", + styles.FontFamily: fontFamilySystem, // Roboto + styles.FontSize: fontSizeBase, // 0.8rem - inherited from .md-typeset + styles.LineHeight: lineHeightBase, // 1.6 - inherited from .md-typeset + styles.Color: colorTextPrimary, // rgba(0, 0, 0, 0.87) + "overflow-wrap": "break-word", + } +} + +// Ordered List Styles +// Returns inline styles for ordered lists that match .md-typeset ol. +// EXTRACTED FROM: .md-typeset ol CSS rule from Material for MkDocs. +// +//nolint:unused // Used for numbered instruction lists. +func orderedListStyles() styles.Props { + return styles.Props{ + styles.MarginBottom: "1em", + styles.MarginTop: "1em", + styles.PaddingLeft: "2em", + styles.FontFamily: fontFamilySystem, // Roboto - inherited from .md-typeset + styles.FontSize: fontSizeBase, // 0.8rem - inherited from .md-typeset + styles.LineHeight: lineHeightBase, // 1.6 - inherited from .md-typeset + styles.Color: colorTextPrimary, // rgba(0, 0, 0, 0.87) - inherited from .md-typeset + "overflow-wrap": "break-word", + } +} + +// Unordered List Styles +// Returns inline styles for unordered lists that match .md-typeset ul. +// EXTRACTED FROM: .md-typeset ul CSS rule from Material for MkDocs. +// +//nolint:unused // Used for bullet point lists. 
+func unorderedListStyles() styles.Props { + return styles.Props{ + styles.MarginBottom: "1em", + styles.MarginTop: "1em", + styles.PaddingLeft: "2em", + styles.FontFamily: fontFamilySystem, // Roboto - inherited from .md-typeset + styles.FontSize: fontSizeBase, // 0.8rem - inherited from .md-typeset + styles.LineHeight: lineHeightBase, // 1.6 - inherited from .md-typeset + styles.Color: colorTextPrimary, // rgba(0, 0, 0, 0.87) - inherited from .md-typeset + "overflow-wrap": "break-word", + } +} + +// Link Styles +// Returns inline styles for links that match .md-typeset a. +// EXTRACTED FROM: .md-typeset a CSS rule from Material for MkDocs. +// Note: Hover states cannot be implemented with inline styles. +// +//nolint:unused // Used for text links. +func linkStyles() styles.Props { + return styles.Props{ + styles.Color: colorPrimaryAccent, // #4051b5 - var(--md-primary-fg-color) + styles.TextDecoration: "none", + "word-break": "break-word", + styles.FontFamily: fontFamilySystem, // Roboto - inherited from .md-typeset + } +} + +// Inline Code Styles (updated) +// Returns inline styles for inline code that matches .md-typeset code. +// EXTRACTED FROM: .md-typeset code CSS rule from Material for MkDocs. +// +//nolint:unused // Used for inline code snippets. +func inlineCodeStyles() styles.Props { + return styles.Props{ + styles.BackgroundColor: colorCodeBg, // #f5f5f5 + styles.Color: colorCodeFg, // #36464e + styles.BorderRadius: "0.1rem", + styles.FontSize: fontSizeCode, // 0.85em + styles.FontFamily: fontFamilyCode, // Roboto Mono + styles.Padding: "0 0.2941176471em", + "word-break": "break-word", + } +} + +// Inline Code Component +// For inline code snippets within text. +// +//nolint:unused // Reserved for future inline code usage. +func inlineCode(code string) *elem.Element { + return elem.Code(attrs.Props{ + attrs.Style: inlineCodeStyles().ToInline(), + }, elem.Text(code)) +} + +// orDivider creates a visual "or" divider between sections. +// Styled with lines on either side for better visual separation. +// +//nolint:unused // Used in apple.go template. +func orDivider() *elem.Element { + return elem.Div(attrs.Props{ + attrs.Style: styles.Props{ + styles.Display: "flex", + styles.AlignItems: "center", + styles.Gap: spaceM, + styles.MarginTop: space2XL, + styles.MarginBottom: space2XL, + styles.Width: "100%", + }.ToInline(), + }, + elem.Div(attrs.Props{ + attrs.Style: styles.Props{ + styles.Flex: "1", + styles.Height: "1px", + styles.BackgroundColor: colorBorderLight, + }.ToInline(), + }), + elem.Strong(attrs.Props{ + attrs.Style: styles.Props{ + styles.Color: colorTextSecondary, + styles.FontSize: fontSizeBase, + styles.FontWeight: "500", + "text-transform": "uppercase", + "letter-spacing": "0.05em", + }.ToInline(), + }, elem.Text("or")), + elem.Div(attrs.Props{ + attrs.Style: styles.Props{ + styles.Flex: "1", + styles.Height: "1px", + styles.BackgroundColor: colorBorderLight, + }.ToInline(), + }), + ) +} + +// warningBox creates a warning message box with icon and content. +// +//nolint:unused // Used in apple.go template. 
+func warningBox(title, message string) *elem.Element { + return elem.Div(attrs.Props{ + attrs.Style: styles.Props{ + styles.Display: "flex", + styles.AlignItems: "flex-start", + styles.Gap: spaceM, + styles.Padding: spaceL, + styles.BackgroundColor: "#fef3c7", // yellow-100 + styles.Border: "1px solid #f59e0b", // yellow-500 + styles.BorderRadius: "0.5rem", + styles.MarginTop: spaceL, + styles.MarginBottom: spaceL, + }.ToInline(), + }, + elem.Raw(``), + elem.Div(nil, + elem.Strong(attrs.Props{ + attrs.Style: styles.Props{ + styles.Display: "block", + styles.Color: "#92400e", // yellow-800 + styles.FontSize: fontSizeH3, + styles.MarginBottom: spaceXS, + }.ToInline(), + }, elem.Text(title)), + elem.Div(attrs.Props{ + attrs.Style: styles.Props{ + styles.Color: colorTextPrimary, + styles.FontSize: fontSizeBase, + }.ToInline(), + }, elem.Text(message)), + ), + ) +} + +// downloadButton creates a nice button-style link for downloads. +// +//nolint:unused // Used in apple.go template. +func downloadButton(href, text string) *elem.Element { + return elem.A(attrs.Props{ + attrs.Href: href, + attrs.Download: "headscale_macos.mobileconfig", + attrs.Style: styles.Props{ + styles.Display: "inline-block", + styles.Padding: "0.75rem 1.5rem", + styles.BackgroundColor: "#3b82f6", // blue-500 + styles.Color: "#ffffff", + styles.TextDecoration: "none", + styles.BorderRadius: "0.5rem", + styles.FontWeight: "500", + styles.Transition: "background-color 0.2s", + styles.MarginRight: spaceM, + styles.MarginBottom: spaceM, + }.ToInline(), + }, elem.Text(text)) +} + +// External Link Component +// Creates a link with proper security attributes for external URLs. +// Automatically adds rel="noreferrer noopener" and target="_blank". +// +//nolint:unused // Used in apple.go, oidc_callback.go templates. +func externalLink(href, text string) *elem.Element { + return elem.A(attrs.Props{ + attrs.Href: href, + attrs.Rel: "noreferrer noopener", + attrs.Target: "_blank", + attrs.Style: styles.Props{ + styles.Color: colorPrimaryAccent, // #4051b5 - base link color + styles.TextDecoration: "none", + }.ToInline(), + }, elem.Text(text)) +} + +// Instruction Step Component +// For numbered instruction lists with consistent formatting. +// +//nolint:unused // Reserved for future use in Phase 4. +func instructionStep(_ int, text string) *elem.Element { + return elem.Li(attrs.Props{ + attrs.Style: styles.Props{ + styles.MarginBottom: spaceS, + styles.LineHeight: lineHeightBase, + }.ToInline(), + }, elem.Text(text)) +} + +// Status Message Component +// For displaying success/error/info messages with appropriate styling. +// +//nolint:unused // Reserved for future use in Phase 4. 
+func statusMessage(message string, isSuccess bool) *elem.Element { + bgColor := colorSuccessLight + textColor := colorSuccess + + if !isSuccess { + bgColor = "#fee2e2" // red-100 + textColor = "#dc2626" // red-600 + } + + return elem.Div(attrs.Props{ + attrs.Style: styles.Props{ + styles.Padding: spaceM, + styles.BackgroundColor: bgColor, + styles.Color: textColor, + styles.BorderRadius: "0.5rem", + styles.Border: "1px solid " + textColor, + styles.MarginBottom: spaceL, + styles.FontSize: fontSizeBase, + styles.LineHeight: lineHeightBase, + }.ToInline(), + }, elem.Text(message)) +} diff --git a/hscontrol/templates/general.go b/hscontrol/templates/general.go index 3728b736..ccc5a360 100644 --- a/hscontrol/templates/general.go +++ b/hscontrol/templates/general.go @@ -4,40 +4,167 @@ import ( "github.com/chasefleming/elem-go" "github.com/chasefleming/elem-go/attrs" "github.com/chasefleming/elem-go/styles" + "github.com/juanfont/headscale/hscontrol/assets" ) -var bodyStyle = styles.Props{ - styles.Margin: "40px auto", - styles.MaxWidth: "800px", - styles.LineHeight: "1.5", - styles.FontSize: "16px", - styles.Color: "#444", - styles.Padding: "0 10px", - styles.FontFamily: "Sans-serif", +// mdTypesetBody creates a body element with md-typeset styling +// that matches the official Headscale documentation design. +// Uses CSS classes with styles defined in assets.CSS. +func mdTypesetBody(children ...elem.Node) *elem.Element { + return elem.Body(attrs.Props{ + attrs.Style: styles.Props{ + styles.MinHeight: "100vh", + styles.Display: "flex", + styles.FlexDirection: "column", + styles.AlignItems: "center", + styles.BackgroundColor: "#ffffff", + styles.Padding: "3rem 1.5rem", + }.ToInline(), + "translate": "no", + }, + elem.Div(attrs.Props{ + attrs.Class: "md-typeset", + attrs.Style: styles.Props{ + styles.MaxWidth: "min(800px, 90vw)", + styles.Width: "100%", + }.ToInline(), + }, children...), + ) } -var headerStyle = styles.Props{ - styles.LineHeight: "1.2", +// Styled Element Wrappers +// These functions wrap elem-go elements using CSS classes. +// Styling is handled by the CSS in assets.CSS. + +// H1 creates a H1 element styled by .md-typeset h1 +func H1(children ...elem.Node) *elem.Element { + return elem.H1(nil, children...) } +// H2 creates a H2 element styled by .md-typeset h2 +func H2(children ...elem.Node) *elem.Element { + return elem.H2(nil, children...) +} + +// H3 creates a H3 element styled by .md-typeset h3 +func H3(children ...elem.Node) *elem.Element { + return elem.H3(nil, children...) +} + +// P creates a paragraph element styled by .md-typeset p +func P(children ...elem.Node) *elem.Element { + return elem.P(nil, children...) +} + +// Ol creates an ordered list element styled by .md-typeset ol +func Ol(children ...elem.Node) *elem.Element { + return elem.Ol(nil, children...) +} + +// Ul creates an unordered list element styled by .md-typeset ul +func Ul(children ...elem.Node) *elem.Element { + return elem.Ul(nil, children...) +} + +// A creates a link element styled by .md-typeset a +func A(href string, children ...elem.Node) *elem.Element { + return elem.A(attrs.Props{attrs.Href: href}, children...) +} + +// Code creates an inline code element styled by .md-typeset code +func Code(children ...elem.Node) *elem.Element { + return elem.Code(nil, children...) +} + +// Pre creates a preformatted text block styled by .md-typeset pre +func Pre(children ...elem.Node) *elem.Element { + return elem.Pre(nil, children...) 
+} + +// PreCode creates a code block inside Pre styled by .md-typeset pre > code +func PreCode(code string) *elem.Element { + return elem.Code(nil, elem.Text(code)) +} + +// Deprecated: use H1, H2, H3 instead func headerOne(text string) *elem.Element { - return elem.H1(attrs.Props{attrs.Style: headerStyle.ToInline()}, elem.Text(text)) + return H1(elem.Text(text)) } +// Deprecated: use H1, H2, H3 instead func headerTwo(text string) *elem.Element { - return elem.H2(attrs.Props{attrs.Style: headerStyle.ToInline()}, elem.Text(text)) + return H2(elem.Text(text)) } +// Deprecated: use H1, H2, H3 instead func headerThree(text string) *elem.Element { - return elem.H3(attrs.Props{attrs.Style: headerStyle.ToInline()}, elem.Text(text)) + return H3(elem.Text(text)) } +// contentContainer wraps page content with proper width. +// Content inside is left-aligned by default. +func contentContainer(children ...elem.Node) *elem.Element { + containerStyle := styles.Props{ + styles.MaxWidth: "720px", + styles.Width: "100%", + styles.Display: "flex", + styles.FlexDirection: "column", + styles.AlignItems: "flex-start", // Left-align all children + } + + return elem.Div(attrs.Props{attrs.Style: containerStyle.ToInline()}, children...) +} + +// headscaleLogo returns the Headscale SVG logo for consistent branding across all pages. +// The logo is styled by the .headscale-logo CSS class. +func headscaleLogo() elem.Node { + // Return the embedded SVG as-is + return elem.Raw(assets.SVG) +} + +// pageFooter creates a consistent footer for all pages. +func pageFooter() *elem.Element { + footerStyle := styles.Props{ + styles.MarginTop: space3XL, + styles.TextAlign: "center", + styles.FontSize: fontSizeSmall, + styles.Color: colorTextSecondary, + styles.LineHeight: lineHeightBase, + } + + linkStyle := styles.Props{ + styles.Color: colorTextSecondary, + styles.TextDecoration: "underline", + } + + return elem.Div(attrs.Props{attrs.Style: footerStyle.ToInline()}, + elem.Text("Powered by "), + elem.A(attrs.Props{ + attrs.Href: "https://github.com/juanfont/headscale", + attrs.Rel: "noreferrer noopener", + attrs.Target: "_blank", + attrs.Style: linkStyle.ToInline(), + }, elem.Text("Headscale")), + ) +} + +// listStyle provides consistent styling for ordered and unordered lists +// EXTRACTED FROM: .md-typeset ol, .md-typeset ul CSS rules +var listStyle = styles.Props{ + styles.LineHeight: lineHeightBase, // 1.6 - From .md-typeset + styles.MarginTop: "1em", // From CSS: margin-top: 1em + styles.MarginBottom: "1em", // From CSS: margin-bottom: 1em + styles.PaddingLeft: "clamp(1.5rem, 5vw, 2.5rem)", // Responsive indentation +} + +// HtmlStructure creates a complete HTML document structure with proper meta tags +// and semantic HTML5 structure. The head and body elements are passed as parameters +// to allow for customization of each page. +// Styling is provided via a CSS stylesheet (Material for MkDocs design system) with +// minimal inline styles for layout and positioning. 
func HtmlStructure(head, body *elem.Element) *elem.Element { - return elem.Html(nil, - elem.Head( - attrs.Props{ - attrs.Lang: "en", - }, + return elem.Html(attrs.Props{attrs.Lang: "en"}, + elem.Head(nil, elem.Meta(attrs.Props{ attrs.Charset: "UTF-8", }), @@ -49,8 +176,41 @@ func HtmlStructure(head, body *elem.Element) *elem.Element { attrs.Name: "viewport", attrs.Content: "width=device-width, initial-scale=1.0", }), + elem.Link(attrs.Props{ + attrs.Rel: "icon", + attrs.Href: "/favicon.ico", + }), + // Google Fonts for Roboto and Roboto Mono + elem.Link(attrs.Props{ + attrs.Rel: "preconnect", + attrs.Href: "https://fonts.gstatic.com", + "crossorigin": "", + }), + elem.Link(attrs.Props{ + attrs.Rel: "stylesheet", + attrs.Href: "https://fonts.googleapis.com/css2?family=Roboto:wght@300;400;500;700&family=Roboto+Mono:wght@400;700&display=swap", + }), + // Material for MkDocs CSS styles + elem.Style(attrs.Props{attrs.Type: "text/css"}, elem.Raw(assets.CSS)), head, ), body, ) } + +// BlankPage creates a minimal blank HTML page with favicon. +// Used for endpoints that need to return a valid HTML page with no content. +func BlankPage() *elem.Element { + return elem.Html(attrs.Props{attrs.Lang: "en"}, + elem.Head(nil, + elem.Meta(attrs.Props{ + attrs.Charset: "UTF-8", + }), + elem.Link(attrs.Props{ + attrs.Rel: "icon", + attrs.Href: "/favicon.ico", + }), + ), + elem.Body(nil), + ) +} diff --git a/hscontrol/templates/oidc_callback.go b/hscontrol/templates/oidc_callback.go new file mode 100644 index 00000000..16c08fde --- /dev/null +++ b/hscontrol/templates/oidc_callback.go @@ -0,0 +1,69 @@ +package templates + +import ( + "github.com/chasefleming/elem-go" + "github.com/chasefleming/elem-go/attrs" + "github.com/chasefleming/elem-go/styles" +) + +// checkboxIcon returns the success checkbox SVG icon as raw HTML. +func checkboxIcon() elem.Node { + return elem.Raw(``) +} + +// OIDCCallback renders the OIDC authentication success callback page. +func OIDCCallback(user, verb string) *elem.Element { + // Success message box + successBox := elem.Div(attrs.Props{ + attrs.Style: styles.Props{ + styles.Display: "flex", + styles.AlignItems: "center", + styles.Gap: spaceM, + styles.Padding: spaceL, + styles.BackgroundColor: colorSuccessLight, + styles.Border: "1px solid " + colorSuccess, + styles.BorderRadius: "0.5rem", + styles.MarginBottom: spaceXL, + }.ToInline(), + }, + checkboxIcon(), + elem.Div(nil, + elem.Strong(attrs.Props{ + attrs.Style: styles.Props{ + styles.Display: "block", + styles.Color: colorSuccess, + styles.FontSize: fontSizeH3, + styles.MarginBottom: spaceXS, + }.ToInline(), + }, elem.Text("Signed in successfully")), + elem.P(attrs.Props{ + attrs.Style: styles.Props{ + styles.Margin: "0", + styles.Color: colorTextPrimary, + styles.FontSize: fontSizeBase, + }.ToInline(), + }, elem.Text(verb), elem.Text(" as "), elem.Strong(nil, elem.Text(user)), elem.Text(". 
You can now close this window.")), + ), + ) + + return HtmlStructure( + elem.Title(nil, elem.Text("Headscale Authentication Succeeded")), + mdTypesetBody( + headscaleLogo(), + successBox, + H2(elem.Text("Getting started")), + P(elem.Text("Check out the documentation to learn more about headscale and Tailscale:")), + Ul( + elem.Li(nil, + externalLink("https://headscale.net/stable/", "Headscale documentation"), + ), + elem.Li(nil, + externalLink("https://tailscale.com/kb/", "Tailscale knowledge base"), + ), + ), + pageFooter(), + ), + ) +} diff --git a/hscontrol/templates/register_web.go b/hscontrol/templates/register_web.go index 967b6573..829af7fb 100644 --- a/hscontrol/templates/register_web.go +++ b/hscontrol/templates/register_web.go @@ -4,32 +4,18 @@ import ( "fmt" "github.com/chasefleming/elem-go" - "github.com/chasefleming/elem-go/attrs" - "github.com/chasefleming/elem-go/styles" "github.com/juanfont/headscale/hscontrol/types" ) -var codeStyleRegisterWebAPI = styles.Props{ - styles.Display: "block", - styles.Padding: "20px", - styles.Border: "1px solid #bbb", - styles.BackgroundColor: "#eee", -} - func RegisterWeb(registrationID types.RegistrationID) *elem.Element { return HtmlStructure( elem.Title(nil, elem.Text("Registration - Headscale")), - elem.Body(attrs.Props{ - attrs.Style: styles.Props{ - styles.FontFamily: "sans", - }.ToInline(), - }, - elem.H1(nil, elem.Text("headscale")), - elem.H2(nil, elem.Text("Machine registration")), - elem.P(nil, elem.Text("Run the command below in the headscale server to add this machine to your network: ")), - elem.Code(attrs.Props{attrs.Style: codeStyleRegisterWebAPI.ToInline()}, - elem.Text(fmt.Sprintf("headscale nodes register --key %s --user USERNAME", registrationID.String())), - ), + mdTypesetBody( + headscaleLogo(), + H1(elem.Text("Machine registration")), + P(elem.Text("Run the command below in the headscale server to add this machine to your network:")), + Pre(PreCode(fmt.Sprintf("headscale nodes register --key %s --user USERNAME", registrationID.String()))), + pageFooter(), ), ) } diff --git a/hscontrol/templates/windows.go b/hscontrol/templates/windows.go index ecf7d77c..f649509a 100644 --- a/hscontrol/templates/windows.go +++ b/hscontrol/templates/windows.go @@ -2,7 +2,6 @@ package templates import ( "github.com/chasefleming/elem-go" - "github.com/chasefleming/elem-go/attrs" ) func Windows(url string) *elem.Element { @@ -10,28 +9,19 @@ func Windows(url string) *elem.Element { elem.Title(nil, elem.Text("headscale - Windows"), ), - elem.Body(attrs.Props{ - attrs.Style: bodyStyle.ToInline(), - }, - headerOne("headscale: Windows configuration"), - elem.P(nil, + mdTypesetBody( + headscaleLogo(), + H1(elem.Text("Windows configuration")), + P( elem.Text("Download "), - elem.A(attrs.Props{ - attrs.Href: "https://tailscale.com/download/windows", - attrs.Rel: "noreferrer noopener", - attrs.Target: "_blank", - }, - elem.Text("Tailscale for Windows ")), - elem.Text("and install it."), + externalLink("https://tailscale.com/download/windows", "Tailscale for Windows"), + elem.Text(" and install it."), ), - elem.P(nil, - elem.Text("Open a Command Prompt or Powershell and use Tailscale's login command to connect with headscale: "), - ), - elem.Pre(nil, - elem.Code(nil, - elem.Text("tailscale login --login-server "+url), - ), + P( + elem.Text("Open a Command Prompt or PowerShell and use Tailscale's login command to connect with headscale:"), ), + Pre(PreCode("tailscale login --login-server "+url)), + pageFooter(), ), ) } diff --git 
a/hscontrol/templates_consistency_test.go b/hscontrol/templates_consistency_test.go new file mode 100644 index 00000000..369639cc --- /dev/null +++ b/hscontrol/templates_consistency_test.go @@ -0,0 +1,213 @@ +package hscontrol + +import ( + "strings" + "testing" + + "github.com/juanfont/headscale/hscontrol/templates" + "github.com/juanfont/headscale/hscontrol/types" + "github.com/stretchr/testify/assert" +) + +func TestTemplateHTMLConsistency(t *testing.T) { + // Test all templates produce consistent modern HTML + testCases := []struct { + name string + html string + }{ + { + name: "OIDC Callback", + html: templates.OIDCCallback("test@example.com", "Logged in").Render(), + }, + { + name: "Register Web", + html: templates.RegisterWeb(types.RegistrationID("test-key-123")).Render(), + }, + { + name: "Windows Config", + html: templates.Windows("https://example.com").Render(), + }, + { + name: "Apple Config", + html: templates.Apple("https://example.com").Render(), + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + // Check DOCTYPE + assert.True(t, strings.HasPrefix(tc.html, ""), + "%s should start with ", tc.name) + + // Check HTML5 lang attribute + assert.Contains(t, tc.html, ``, + "%s should have html lang=\"en\"", tc.name) + + // Check UTF-8 charset + assert.Contains(t, tc.html, `charset="UTF-8"`, + "%s should have UTF-8 charset", tc.name) + + // Check viewport meta tag + assert.Contains(t, tc.html, `name="viewport"`, + "%s should have viewport meta tag", tc.name) + + // Check IE compatibility meta tag + assert.Contains(t, tc.html, `X-UA-Compatible`, + "%s should have X-UA-Compatible meta tag", tc.name) + + // Check closing tags + assert.Contains(t, tc.html, "", + "%s should have closing html tag", tc.name) + assert.Contains(t, tc.html, "", + "%s should have closing head tag", tc.name) + assert.Contains(t, tc.html, "", + "%s should have closing body tag", tc.name) + }) + } +} + +func TestTemplateModernHTMLFeatures(t *testing.T) { + testCases := []struct { + name string + html string + }{ + { + name: "OIDC Callback", + html: templates.OIDCCallback("test@example.com", "Logged in").Render(), + }, + { + name: "Register Web", + html: templates.RegisterWeb(types.RegistrationID("test-key-123")).Render(), + }, + { + name: "Windows Config", + html: templates.Windows("https://example.com").Render(), + }, + { + name: "Apple Config", + html: templates.Apple("https://example.com").Render(), + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + // Check no deprecated tags + assert.NotContains(t, tc.html, " tag", tc.name) + assert.NotContains(t, tc.html, " tag", tc.name) + + // Check modern structure + assert.Contains(t, tc.html, "", + "%s should have section", tc.name) + assert.Contains(t, tc.html, " section", tc.name) + assert.Contains(t, tc.html, "", + "%s should have <title> tag", tc.name) + }) + } +} + +func TestTemplateExternalLinkSecurity(t *testing.T) { + // Test that all external links (http/https) have proper security attributes + testCases := []struct { + name string + html string + externalURLs []string // URLs that should have security attributes + }{ + { + name: "OIDC Callback", + html: templates.OIDCCallback("test@example.com", "Logged in").Render(), + externalURLs: []string{ + "https://headscale.net/stable/", + "https://tailscale.com/kb/", + }, + }, + { + name: "Register Web", + html: templates.RegisterWeb(types.RegistrationID("test-key-123")).Render(), + externalURLs: []string{}, // No external links + }, + { + name: "Windows 
Config", + html: templates.Windows("https://example.com").Render(), + externalURLs: []string{ + "https://tailscale.com/download/windows", + }, + }, + { + name: "Apple Config", + html: templates.Apple("https://example.com").Render(), + externalURLs: []string{ + "https://apps.apple.com/app/tailscale/id1470499037", + }, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + for _, url := range tc.externalURLs { + // Find the link tag containing this URL + if !strings.Contains(tc.html, url) { + t.Errorf("%s should contain external link %s", tc.name, url) + continue + } + + // Check for rel="noreferrer noopener" + // We look for the pattern: href="URL"...rel="noreferrer noopener" + // The attributes might be in any order, so we check within a reasonable window + idx := strings.Index(tc.html, url) + if idx == -1 { + continue + } + + // Look for the closing > of the <a> tag (within 200 chars should be safe) + endIdx := strings.Index(tc.html[idx:idx+200], ">") + if endIdx == -1 { + endIdx = 200 + } + + linkTag := tc.html[idx : idx+endIdx] + + assert.Contains(t, linkTag, `rel="noreferrer noopener"`, + "%s external link %s should have rel=\"noreferrer noopener\"", tc.name, url) + assert.Contains(t, linkTag, `target="_blank"`, + "%s external link %s should have target=\"_blank\"", tc.name, url) + } + }) + } +} + +func TestTemplateAccessibilityAttributes(t *testing.T) { + // Test that all templates have proper accessibility attributes + testCases := []struct { + name string + html string + }{ + { + name: "OIDC Callback", + html: templates.OIDCCallback("test@example.com", "Logged in").Render(), + }, + { + name: "Register Web", + html: templates.RegisterWeb(types.RegistrationID("test-key-123")).Render(), + }, + { + name: "Windows Config", + html: templates.Windows("https://example.com").Render(), + }, + { + name: "Apple Config", + html: templates.Apple("https://example.com").Render(), + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + // Check for translate="no" on body tag to prevent browser translation + // This is important for technical documentation with commands + assert.Contains(t, tc.html, `translate="no"`, + "%s should have translate=\"no\" attribute on body tag", tc.name) + }) + } +} diff --git a/hscontrol/types/api_key.go b/hscontrol/types/api_key.go index 8ca00044..b6a12b65 100644 --- a/hscontrol/types/api_key.go +++ b/hscontrol/types/api_key.go @@ -7,6 +7,13 @@ import ( "google.golang.org/protobuf/types/known/timestamppb" ) +const ( + // NewAPIKeyPrefixLength is the length of the prefix for new API keys. + NewAPIKeyPrefixLength = 12 + // LegacyAPIKeyPrefixLength is the length of the prefix for legacy API keys. + LegacyAPIKeyPrefixLength = 7 +) + // APIKey describes the datamodel for API keys used to remotely authenticate with // headscale. 
type APIKey struct { @@ -21,8 +28,16 @@ type APIKey struct { func (key *APIKey) Proto() *v1.ApiKey { protoKey := v1.ApiKey{ - Id: key.ID, - Prefix: key.Prefix, + Id: key.ID, + } + + // Show prefix format: distinguish between new (12-char) and legacy (7-char) keys + if len(key.Prefix) == NewAPIKeyPrefixLength { + // New format key (12-char prefix) + protoKey.Prefix = "hskey-api-" + key.Prefix + "-***" + } else { + // Legacy format key (7-char prefix) or fallback + protoKey.Prefix = key.Prefix + "***" } if key.Expiration != nil { diff --git a/hscontrol/types/change/change.go b/hscontrol/types/change/change.go index 36cf8a4f..a76fb7c4 100644 --- a/hscontrol/types/change/change.go +++ b/hscontrol/types/change/change.go @@ -1,223 +1,457 @@ -//go:generate go tool stringer -type=Change package change import ( - "errors" + "slices" "time" "github.com/juanfont/headscale/hscontrol/types" + "tailscale.com/tailcfg" ) -type ( - NodeID = types.NodeID - UserID = types.UserID -) +// Change declares what should be included in a MapResponse. +// The mapper uses this to build the response without guessing. +type Change struct { + // Reason is a human-readable description for logging/debugging. + Reason string -type Change int + // TargetNode, if set, means this response should only be sent to this node. + TargetNode types.NodeID -const ( - ChangeUnknown Change = 0 + // OriginNode is the node that triggered this change. + // Used for self-update detection and filtering. + OriginNode types.NodeID - // Deprecated: Use specific change instead - // Full is a legacy change to ensure places where we - // have not yet determined the specific update, can send. - Full Change = 9 + // Content flags - what to include in the MapResponse. + IncludeSelf bool + IncludeDERPMap bool + IncludeDNS bool + IncludeDomain bool + IncludePolicy bool // PacketFilters and SSHPolicy - always sent together - // Server changes. - Policy Change = 11 - DERP Change = 12 - ExtraRecords Change = 13 + // Peer changes. + PeersChanged []types.NodeID + PeersRemoved []types.NodeID + PeerPatches []*tailcfg.PeerChange + SendAllPeers bool - // Node changes. - NodeCameOnline Change = 21 - NodeWentOffline Change = 22 - NodeRemove Change = 23 - NodeKeyExpiry Change = 24 - NodeNewOrUpdate Change = 25 + // RequiresRuntimePeerComputation indicates that peer visibility + // must be computed at runtime per-node. Used for policy changes + // where each node may have different peer visibility. + RequiresRuntimePeerComputation bool +} - // User changes. - UserNewOrUpdate Change = 51 - UserRemove Change = 52 -) +// boolFieldNames returns all boolean field names for exhaustive testing. +// When adding a new boolean field to Change, add it here. +// Tests use reflection to verify this matches the struct. +func (r Change) boolFieldNames() []string { + return []string{ + "IncludeSelf", + "IncludeDERPMap", + "IncludeDNS", + "IncludeDomain", + "IncludePolicy", + "SendAllPeers", + "RequiresRuntimePeerComputation", + } +} -// AlsoSelf reports whether this change should also be sent to the node itself. 
-func (c Change) AlsoSelf() bool { - switch c { - case NodeRemove, NodeKeyExpiry, NodeNewOrUpdate: - return true +func (r Change) Merge(other Change) Change { + merged := r + + merged.IncludeSelf = r.IncludeSelf || other.IncludeSelf + merged.IncludeDERPMap = r.IncludeDERPMap || other.IncludeDERPMap + merged.IncludeDNS = r.IncludeDNS || other.IncludeDNS + merged.IncludeDomain = r.IncludeDomain || other.IncludeDomain + merged.IncludePolicy = r.IncludePolicy || other.IncludePolicy + merged.SendAllPeers = r.SendAllPeers || other.SendAllPeers + merged.RequiresRuntimePeerComputation = r.RequiresRuntimePeerComputation || other.RequiresRuntimePeerComputation + + merged.PeersChanged = uniqueNodeIDs(append(r.PeersChanged, other.PeersChanged...)) + merged.PeersRemoved = uniqueNodeIDs(append(r.PeersRemoved, other.PeersRemoved...)) + merged.PeerPatches = append(r.PeerPatches, other.PeerPatches...) + + // Preserve OriginNode for self-update detection. + // If either change has OriginNode set, keep it so the mapper + // can detect self-updates and send the node its own changes. + if merged.OriginNode == 0 { + merged.OriginNode = other.OriginNode } - return false -} - -type ChangeSet struct { - Change Change - - // SelfUpdateOnly indicates that this change should only be sent - // to the node itself, and not to other nodes. - // This is used for changes that are not relevant to other nodes. - // NodeID must be set if this is true. - SelfUpdateOnly bool - - // NodeID if set, is the ID of the node that is being changed. - // It must be set if this is a node change. - NodeID types.NodeID - - // UserID if set, is the ID of the user that is being changed. - // It must be set if this is a user change. - UserID types.UserID - - // IsSubnetRouter indicates whether the node is a subnet router. - IsSubnetRouter bool - - // NodeExpiry is set if the change is NodeKeyExpiry. - NodeExpiry *time.Time -} - -func (c *ChangeSet) Validate() error { - if c.Change >= NodeCameOnline || c.Change <= NodeNewOrUpdate { - if c.NodeID == 0 { - return errors.New("ChangeSet.NodeID must be set for node updates") - } + // Preserve TargetNode for targeted responses. + if merged.TargetNode == 0 { + merged.TargetNode = other.TargetNode } - if c.Change >= UserNewOrUpdate || c.Change <= UserRemove { - if c.UserID == 0 { - return errors.New("ChangeSet.UserID must be set for user updates") - } + if r.Reason != "" && other.Reason != "" && r.Reason != other.Reason { + merged.Reason = r.Reason + "; " + other.Reason + } else if other.Reason != "" { + merged.Reason = other.Reason } - return nil + return merged } -// Empty reports whether the ChangeSet is empty, meaning it does not -// represent any change. -func (c ChangeSet) Empty() bool { - return c.Change == ChangeUnknown && c.NodeID == 0 && c.UserID == 0 +func (r Change) IsEmpty() bool { + if r.IncludeSelf || r.IncludeDERPMap || r.IncludeDNS || + r.IncludeDomain || r.IncludePolicy || r.SendAllPeers { + return false + } + + if r.RequiresRuntimePeerComputation { + return false + } + + return len(r.PeersChanged) == 0 && + len(r.PeersRemoved) == 0 && + len(r.PeerPatches) == 0 } -// IsFull reports whether the ChangeSet represents a full update. 
-func (c ChangeSet) IsFull() bool { - return c.Change == Full || c.Change == Policy +func (r Change) IsSelfOnly() bool { + if r.TargetNode == 0 || !r.IncludeSelf { + return false + } + + if r.SendAllPeers || len(r.PeersChanged) > 0 || len(r.PeersRemoved) > 0 || len(r.PeerPatches) > 0 { + return false + } + + return true } -func HasFull(cs []ChangeSet) bool { - for _, c := range cs { - if c.IsFull() { +// IsTargetedToNode returns true if this response should only be sent to TargetNode. +func (r Change) IsTargetedToNode() bool { + return r.TargetNode != 0 +} + +// IsFull reports whether this is a full update response. +func (r Change) IsFull() bool { + return r.SendAllPeers && r.IncludeSelf && r.IncludeDERPMap && + r.IncludeDNS && r.IncludeDomain && r.IncludePolicy +} + +// Type returns a categorized type string for metrics. +// This provides a bounded set of values suitable for Prometheus labels, +// unlike Reason which is free-form text for logging. +func (r Change) Type() string { + if r.IsFull() { + return "full" + } + + if r.IsSelfOnly() { + return "self" + } + + if r.RequiresRuntimePeerComputation { + return "policy" + } + + if len(r.PeerPatches) > 0 && len(r.PeersChanged) == 0 && len(r.PeersRemoved) == 0 && !r.SendAllPeers { + return "patch" + } + + if len(r.PeersChanged) > 0 || len(r.PeersRemoved) > 0 || r.SendAllPeers { + return "peers" + } + + if r.IncludeDERPMap || r.IncludeDNS || r.IncludeDomain || r.IncludePolicy { + return "config" + } + + return "unknown" +} + +// ShouldSendToNode determines if this response should be sent to nodeID. +// It handles self-only targeting and filtering out self-updates for non-origin nodes. +func (r Change) ShouldSendToNode(nodeID types.NodeID) bool { + // If targeted to a specific node, only send to that node + if r.TargetNode != 0 { + return r.TargetNode == nodeID + } + + return true +} + +// HasFull returns true if any response in the slice is a full update. +func HasFull(rs []Change) bool { + for _, r := range rs { + if r.IsFull() { return true } } + return false } -func SplitAllAndSelf(cs []ChangeSet) (all []ChangeSet, self []ChangeSet) { - for _, c := range cs { - if c.SelfUpdateOnly { - self = append(self, c) +// SplitTargetedAndBroadcast separates responses into targeted (to specific node) and broadcast. +func SplitTargetedAndBroadcast(rs []Change) ([]Change, []Change) { + var broadcast, targeted []Change + + for _, r := range rs { + if r.IsTargetedToNode() { + targeted = append(targeted, r) } else { - all = append(all, c) + broadcast = append(broadcast, r) } } - return all, self + + return broadcast, targeted } -func RemoveUpdatesForSelf(id types.NodeID, cs []ChangeSet) (ret []ChangeSet) { - for _, c := range cs { - if c.NodeID != id || c.Change.AlsoSelf() { - ret = append(ret, c) +// FilterForNode returns responses that should be sent to the given node. +func FilterForNode(nodeID types.NodeID, rs []Change) []Change { + var result []Change + + for _, r := range rs { + if r.ShouldSendToNode(nodeID) { + result = append(result, r) } } - return ret + + return result } -// IsSelfUpdate reports whether this ChangeSet represents an update to the given node itself. -func (c ChangeSet) IsSelfUpdate(nodeID types.NodeID) bool { - return c.NodeID == nodeID -} - -func (c ChangeSet) AlsoSelf() bool { - // If NodeID is 0, it means this ChangeSet is not related to a specific node, - // so we consider it as a change that should be sent to all nodes. 
- if c.NodeID == 0 { - return true +func uniqueNodeIDs(ids []types.NodeID) []types.NodeID { + if len(ids) == 0 { + return nil } - return c.Change.AlsoSelf() || c.SelfUpdateOnly + + slices.Sort(ids) + + return slices.Compact(ids) } -var ( - EmptySet = ChangeSet{Change: ChangeUnknown} - FullSet = ChangeSet{Change: Full} - DERPSet = ChangeSet{Change: DERP} - PolicySet = ChangeSet{Change: Policy} - ExtraRecordsSet = ChangeSet{Change: ExtraRecords} -) +// Constructor functions -func FullSelf(id types.NodeID) ChangeSet { - return ChangeSet{ - Change: Full, - SelfUpdateOnly: true, - NodeID: id, +func FullUpdate() Change { + return Change{ + Reason: "full update", + IncludeSelf: true, + IncludeDERPMap: true, + IncludeDNS: true, + IncludeDomain: true, + IncludePolicy: true, + SendAllPeers: true, } } -func NodeAdded(id types.NodeID) ChangeSet { - return ChangeSet{ - Change: NodeNewOrUpdate, - NodeID: id, +// FullSelf returns a full update targeted at a specific node. +func FullSelf(nodeID types.NodeID) Change { + return Change{ + Reason: "full self update", + TargetNode: nodeID, + IncludeSelf: true, + IncludeDERPMap: true, + IncludeDNS: true, + IncludeDomain: true, + IncludePolicy: true, + SendAllPeers: true, } } -func NodeRemoved(id types.NodeID) ChangeSet { - return ChangeSet{ - Change: NodeRemove, - NodeID: id, +func SelfUpdate(nodeID types.NodeID) Change { + return Change{ + Reason: "self update", + TargetNode: nodeID, + IncludeSelf: true, } } -func NodeOnline(id types.NodeID) ChangeSet { - return ChangeSet{ - Change: NodeCameOnline, - NodeID: id, +func PolicyOnly() Change { + return Change{ + Reason: "policy update", + IncludePolicy: true, } } -func NodeOffline(id types.NodeID) ChangeSet { - return ChangeSet{ - Change: NodeWentOffline, - NodeID: id, +func PolicyAndPeers(changedPeers ...types.NodeID) Change { + return Change{ + Reason: "policy and peers update", + IncludePolicy: true, + PeersChanged: changedPeers, } } -func KeyExpiry(id types.NodeID, expiry time.Time) ChangeSet { - return ChangeSet{ - Change: NodeKeyExpiry, - NodeID: id, - NodeExpiry: &expiry, +func VisibilityChange(reason string, added, removed []types.NodeID) Change { + return Change{ + Reason: reason, + IncludePolicy: true, + PeersChanged: added, + PeersRemoved: removed, } } -func UserAdded(id types.UserID) ChangeSet { - return ChangeSet{ - Change: UserNewOrUpdate, - UserID: id, +func PeersChanged(reason string, peerIDs ...types.NodeID) Change { + return Change{ + Reason: reason, + PeersChanged: peerIDs, } } -func UserRemoved(id types.UserID) ChangeSet { - return ChangeSet{ - Change: UserRemove, - UserID: id, +func PeersRemoved(peerIDs ...types.NodeID) Change { + return Change{ + Reason: "peers removed", + PeersRemoved: peerIDs, } } -func PolicyChange() ChangeSet { - return ChangeSet{ - Change: Policy, +func PeerPatched(reason string, patches ...*tailcfg.PeerChange) Change { + return Change{ + Reason: reason, + PeerPatches: patches, } } -func DERPChange() ChangeSet { - return ChangeSet{ - Change: DERP, +func DERPMap() Change { + return Change{ + Reason: "DERP map update", + IncludeDERPMap: true, } } + +// PolicyChange creates a response for policy changes. +// Policy changes require runtime peer visibility computation. +func PolicyChange() Change { + return Change{ + Reason: "policy change", + IncludePolicy: true, + RequiresRuntimePeerComputation: true, + } +} + +// DNSConfig creates a response for DNS configuration updates. 
+func DNSConfig() Change { + return Change{ + Reason: "DNS config update", + IncludeDNS: true, + } +} + +// NodeOnline creates a patch response for a node coming online. +func NodeOnline(nodeID types.NodeID) Change { + return Change{ + Reason: "node online", + PeerPatches: []*tailcfg.PeerChange{ + { + NodeID: nodeID.NodeID(), + Online: ptrTo(true), + }, + }, + } +} + +// NodeOffline creates a patch response for a node going offline. +func NodeOffline(nodeID types.NodeID) Change { + return Change{ + Reason: "node offline", + PeerPatches: []*tailcfg.PeerChange{ + { + NodeID: nodeID.NodeID(), + Online: ptrTo(false), + }, + }, + } +} + +// KeyExpiry creates a patch response for a node's key expiry change. +func KeyExpiry(nodeID types.NodeID, expiry *time.Time) Change { + return Change{ + Reason: "key expiry", + PeerPatches: []*tailcfg.PeerChange{ + { + NodeID: nodeID.NodeID(), + KeyExpiry: expiry, + }, + }, + } +} + +// ptrTo returns a pointer to the given value. +func ptrTo[T any](v T) *T { + return &v +} + +// High-level change constructors + +// NodeAdded returns a Change for when a node is added or updated. +// The OriginNode field enables self-update detection by the mapper. +func NodeAdded(id types.NodeID) Change { + c := PeersChanged("node added", id) + c.OriginNode = id + + return c +} + +// NodeRemoved returns a Change for when a node is removed. +func NodeRemoved(id types.NodeID) Change { + return PeersRemoved(id) +} + +// NodeOnlineFor returns a Change for when a node comes online. +// If the node is a subnet router, a full update is sent instead of a patch. +func NodeOnlineFor(node types.NodeView) Change { + if node.IsSubnetRouter() { + c := FullUpdate() + c.Reason = "subnet router online" + + return c + } + + return NodeOnline(node.ID()) +} + +// NodeOfflineFor returns a Change for when a node goes offline. +// If the node is a subnet router, a full update is sent instead of a patch. +func NodeOfflineFor(node types.NodeView) Change { + if node.IsSubnetRouter() { + c := FullUpdate() + c.Reason = "subnet router offline" + + return c + } + + return NodeOffline(node.ID()) +} + +// KeyExpiryFor returns a Change for when a node's key expiry changes. +// The OriginNode field enables self-update detection by the mapper. +func KeyExpiryFor(id types.NodeID, expiry time.Time) Change { + c := KeyExpiry(id, &expiry) + c.OriginNode = id + + return c +} + +// EndpointOrDERPUpdate returns a Change for when a node's endpoints or DERP region changes. +// The OriginNode field enables self-update detection by the mapper. +func EndpointOrDERPUpdate(id types.NodeID, patch *tailcfg.PeerChange) Change { + c := PeerPatched("endpoint/DERP update", patch) + c.OriginNode = id + + return c +} + +// UserAdded returns a Change for when a user is added or updated. +// A full update is sent to refresh user profiles on all nodes. +func UserAdded() Change { + c := FullUpdate() + c.Reason = "user added" + + return c +} + +// UserRemoved returns a Change for when a user is removed. +// A full update is sent to refresh user profiles on all nodes. +func UserRemoved() Change { + c := FullUpdate() + c.Reason = "user removed" + + return c +} + +// ExtraRecords returns a Change for when DNS extra records change. 
+func ExtraRecords() Change { + c := DNSConfig() + c.Reason = "extra records update" + + return c +} diff --git a/hscontrol/types/change/change_string.go b/hscontrol/types/change/change_string.go deleted file mode 100644 index dbf9d17e..00000000 --- a/hscontrol/types/change/change_string.go +++ /dev/null @@ -1,57 +0,0 @@ -// Code generated by "stringer -type=Change"; DO NOT EDIT. - -package change - -import "strconv" - -func _() { - // An "invalid array index" compiler error signifies that the constant values have changed. - // Re-run the stringer command to generate them again. - var x [1]struct{} - _ = x[ChangeUnknown-0] - _ = x[Full-9] - _ = x[Policy-11] - _ = x[DERP-12] - _ = x[ExtraRecords-13] - _ = x[NodeCameOnline-21] - _ = x[NodeWentOffline-22] - _ = x[NodeRemove-23] - _ = x[NodeKeyExpiry-24] - _ = x[NodeNewOrUpdate-25] - _ = x[UserNewOrUpdate-51] - _ = x[UserRemove-52] -} - -const ( - _Change_name_0 = "ChangeUnknown" - _Change_name_1 = "Full" - _Change_name_2 = "PolicyDERPExtraRecords" - _Change_name_3 = "NodeCameOnlineNodeWentOfflineNodeRemoveNodeKeyExpiryNodeNewOrUpdate" - _Change_name_4 = "UserNewOrUpdateUserRemove" -) - -var ( - _Change_index_2 = [...]uint8{0, 6, 10, 22} - _Change_index_3 = [...]uint8{0, 14, 29, 39, 52, 67} - _Change_index_4 = [...]uint8{0, 15, 25} -) - -func (i Change) String() string { - switch { - case i == 0: - return _Change_name_0 - case i == 9: - return _Change_name_1 - case 11 <= i && i <= 13: - i -= 11 - return _Change_name_2[_Change_index_2[i]:_Change_index_2[i+1]] - case 21 <= i && i <= 25: - i -= 21 - return _Change_name_3[_Change_index_3[i]:_Change_index_3[i+1]] - case 51 <= i && i <= 52: - i -= 51 - return _Change_name_4[_Change_index_4[i]:_Change_index_4[i+1]] - default: - return "Change(" + strconv.FormatInt(int64(i), 10) + ")" - } -} diff --git a/hscontrol/types/change/change_test.go b/hscontrol/types/change/change_test.go new file mode 100644 index 00000000..9f181dd6 --- /dev/null +++ b/hscontrol/types/change/change_test.go @@ -0,0 +1,479 @@ +package change + +import ( + "reflect" + "testing" + + "github.com/juanfont/headscale/hscontrol/types" + "github.com/stretchr/testify/assert" + "tailscale.com/tailcfg" +) + +func TestChange_FieldSync(t *testing.T) { + r := Change{} + fieldNames := r.boolFieldNames() + + typ := reflect.TypeFor[Change]() + boolCount := 0 + + for i := range typ.NumField() { + if typ.Field(i).Type.Kind() == reflect.Bool { + boolCount++ + } + } + + if len(fieldNames) != boolCount { + t.Fatalf("boolFieldNames() returns %d fields but struct has %d bool fields; "+ + "update boolFieldNames() when adding new bool fields", len(fieldNames), boolCount) + } +} + +func TestChange_IsEmpty(t *testing.T) { + tests := []struct { + name string + response Change + want bool + }{ + { + name: "zero value is empty", + response: Change{}, + want: true, + }, + { + name: "only reason is still empty", + response: Change{Reason: "test"}, + want: true, + }, + { + name: "IncludeSelf not empty", + response: Change{IncludeSelf: true}, + want: false, + }, + { + name: "IncludeDERPMap not empty", + response: Change{IncludeDERPMap: true}, + want: false, + }, + { + name: "IncludeDNS not empty", + response: Change{IncludeDNS: true}, + want: false, + }, + { + name: "IncludeDomain not empty", + response: Change{IncludeDomain: true}, + want: false, + }, + { + name: "IncludePolicy not empty", + response: Change{IncludePolicy: true}, + want: false, + }, + { + name: "SendAllPeers not empty", + response: Change{SendAllPeers: true}, + want: false, + }, + { + name: 
"PeersChanged not empty", + response: Change{PeersChanged: []types.NodeID{1}}, + want: false, + }, + { + name: "PeersRemoved not empty", + response: Change{PeersRemoved: []types.NodeID{1}}, + want: false, + }, + { + name: "PeerPatches not empty", + response: Change{PeerPatches: []*tailcfg.PeerChange{{}}}, + want: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got := tt.response.IsEmpty() + assert.Equal(t, tt.want, got) + }) + } +} + +func TestChange_IsSelfOnly(t *testing.T) { + tests := []struct { + name string + response Change + want bool + }{ + { + name: "empty is not self only", + response: Change{}, + want: false, + }, + { + name: "IncludeSelf without TargetNode is not self only", + response: Change{IncludeSelf: true}, + want: false, + }, + { + name: "TargetNode without IncludeSelf is not self only", + response: Change{TargetNode: 1}, + want: false, + }, + { + name: "TargetNode with IncludeSelf is self only", + response: Change{TargetNode: 1, IncludeSelf: true}, + want: true, + }, + { + name: "self only with SendAllPeers is not self only", + response: Change{TargetNode: 1, IncludeSelf: true, SendAllPeers: true}, + want: false, + }, + { + name: "self only with PeersChanged is not self only", + response: Change{TargetNode: 1, IncludeSelf: true, PeersChanged: []types.NodeID{2}}, + want: false, + }, + { + name: "self only with PeersRemoved is not self only", + response: Change{TargetNode: 1, IncludeSelf: true, PeersRemoved: []types.NodeID{2}}, + want: false, + }, + { + name: "self only with PeerPatches is not self only", + response: Change{TargetNode: 1, IncludeSelf: true, PeerPatches: []*tailcfg.PeerChange{{}}}, + want: false, + }, + { + name: "self only with other include flags is still self only", + response: Change{ + TargetNode: 1, + IncludeSelf: true, + IncludePolicy: true, + IncludeDNS: true, + }, + want: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got := tt.response.IsSelfOnly() + assert.Equal(t, tt.want, got) + }) + } +} + +func TestChange_Merge(t *testing.T) { + tests := []struct { + name string + r1 Change + r2 Change + want Change + }{ + { + name: "empty merge", + r1: Change{}, + r2: Change{}, + want: Change{}, + }, + { + name: "bool fields OR together", + r1: Change{IncludeSelf: true, IncludePolicy: true}, + r2: Change{IncludeDERPMap: true, IncludePolicy: true}, + want: Change{IncludeSelf: true, IncludeDERPMap: true, IncludePolicy: true}, + }, + { + name: "all bool fields merge", + r1: Change{IncludeSelf: true, IncludeDNS: true, IncludePolicy: true}, + r2: Change{IncludeDERPMap: true, IncludeDomain: true, SendAllPeers: true}, + want: Change{ + IncludeSelf: true, + IncludeDERPMap: true, + IncludeDNS: true, + IncludeDomain: true, + IncludePolicy: true, + SendAllPeers: true, + }, + }, + { + name: "peers deduplicated and sorted", + r1: Change{PeersChanged: []types.NodeID{3, 1}}, + r2: Change{PeersChanged: []types.NodeID{2, 1}}, + want: Change{PeersChanged: []types.NodeID{1, 2, 3}}, + }, + { + name: "peers removed deduplicated", + r1: Change{PeersRemoved: []types.NodeID{1, 2}}, + r2: Change{PeersRemoved: []types.NodeID{2, 3}}, + want: Change{PeersRemoved: []types.NodeID{1, 2, 3}}, + }, + { + name: "peer patches concatenated", + r1: Change{PeerPatches: []*tailcfg.PeerChange{{NodeID: 1}}}, + r2: Change{PeerPatches: []*tailcfg.PeerChange{{NodeID: 2}}}, + want: Change{PeerPatches: []*tailcfg.PeerChange{{NodeID: 1}, {NodeID: 2}}}, + }, + { + name: "reasons combined when different", + r1: Change{Reason: 
"route change"}, + r2: Change{Reason: "tag change"}, + want: Change{Reason: "route change; tag change"}, + }, + { + name: "same reason not duplicated", + r1: Change{Reason: "policy"}, + r2: Change{Reason: "policy"}, + want: Change{Reason: "policy"}, + }, + { + name: "empty reason takes other", + r1: Change{}, + r2: Change{Reason: "update"}, + want: Change{Reason: "update"}, + }, + { + name: "OriginNode preserved from first", + r1: Change{OriginNode: 42}, + r2: Change{IncludePolicy: true}, + want: Change{OriginNode: 42, IncludePolicy: true}, + }, + { + name: "OriginNode preserved from second when first is zero", + r1: Change{IncludePolicy: true}, + r2: Change{OriginNode: 42}, + want: Change{OriginNode: 42, IncludePolicy: true}, + }, + { + name: "OriginNode first wins when both set", + r1: Change{OriginNode: 1}, + r2: Change{OriginNode: 2}, + want: Change{OriginNode: 1}, + }, + { + name: "TargetNode preserved from first", + r1: Change{TargetNode: 42}, + r2: Change{IncludeSelf: true}, + want: Change{TargetNode: 42, IncludeSelf: true}, + }, + { + name: "TargetNode preserved from second when first is zero", + r1: Change{IncludeSelf: true}, + r2: Change{TargetNode: 42}, + want: Change{TargetNode: 42, IncludeSelf: true}, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got := tt.r1.Merge(tt.r2) + assert.Equal(t, tt.want, got) + }) + } +} + +func TestChange_Constructors(t *testing.T) { + tests := []struct { + name string + constructor func() Change + wantReason string + want Change + }{ + { + name: "FullUpdateResponse", + constructor: FullUpdate, + wantReason: "full update", + want: Change{ + Reason: "full update", + IncludeSelf: true, + IncludeDERPMap: true, + IncludeDNS: true, + IncludeDomain: true, + IncludePolicy: true, + SendAllPeers: true, + }, + }, + { + name: "PolicyOnlyResponse", + constructor: PolicyOnly, + wantReason: "policy update", + want: Change{ + Reason: "policy update", + IncludePolicy: true, + }, + }, + { + name: "DERPMapResponse", + constructor: DERPMap, + wantReason: "DERP map update", + want: Change{ + Reason: "DERP map update", + IncludeDERPMap: true, + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + r := tt.constructor() + assert.Equal(t, tt.wantReason, r.Reason) + assert.Equal(t, tt.want, r) + }) + } +} + +func TestSelfUpdate(t *testing.T) { + r := SelfUpdate(42) + assert.Equal(t, "self update", r.Reason) + assert.Equal(t, types.NodeID(42), r.TargetNode) + assert.True(t, r.IncludeSelf) + assert.True(t, r.IsSelfOnly()) +} + +func TestPolicyAndPeers(t *testing.T) { + r := PolicyAndPeers(1, 2, 3) + assert.Equal(t, "policy and peers update", r.Reason) + assert.True(t, r.IncludePolicy) + assert.Equal(t, []types.NodeID{1, 2, 3}, r.PeersChanged) +} + +func TestVisibilityChange(t *testing.T) { + r := VisibilityChange("tag change", []types.NodeID{1}, []types.NodeID{2, 3}) + assert.Equal(t, "tag change", r.Reason) + assert.True(t, r.IncludePolicy) + assert.Equal(t, []types.NodeID{1}, r.PeersChanged) + assert.Equal(t, []types.NodeID{2, 3}, r.PeersRemoved) +} + +func TestPeersChanged(t *testing.T) { + r := PeersChanged("routes approved", 1, 2) + assert.Equal(t, "routes approved", r.Reason) + assert.Equal(t, []types.NodeID{1, 2}, r.PeersChanged) + assert.False(t, r.IncludePolicy) +} + +func TestPeersRemoved(t *testing.T) { + r := PeersRemoved(1, 2, 3) + assert.Equal(t, "peers removed", r.Reason) + assert.Equal(t, []types.NodeID{1, 2, 3}, r.PeersRemoved) +} + +func TestPeerPatched(t *testing.T) { + patch := 
&tailcfg.PeerChange{NodeID: 1} + r := PeerPatched("endpoint change", patch) + assert.Equal(t, "endpoint change", r.Reason) + assert.Equal(t, []*tailcfg.PeerChange{patch}, r.PeerPatches) +} + +func TestChange_Type(t *testing.T) { + tests := []struct { + name string + response Change + want string + }{ + { + name: "full update", + response: FullUpdate(), + want: "full", + }, + { + name: "self only", + response: SelfUpdate(1), + want: "self", + }, + { + name: "policy with runtime computation", + response: PolicyChange(), + want: "policy", + }, + { + name: "patch only", + response: PeerPatched("test", &tailcfg.PeerChange{NodeID: 1}), + want: "patch", + }, + { + name: "peers changed", + response: PeersChanged("test", 1, 2), + want: "peers", + }, + { + name: "peers removed", + response: PeersRemoved(1, 2), + want: "peers", + }, + { + name: "config - DERP map", + response: DERPMap(), + want: "config", + }, + { + name: "config - DNS", + response: DNSConfig(), + want: "config", + }, + { + name: "config - policy only (no runtime)", + response: PolicyOnly(), + want: "config", + }, + { + name: "empty is unknown", + response: Change{}, + want: "unknown", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got := tt.response.Type() + assert.Equal(t, tt.want, got) + }) + } +} + +func TestUniqueNodeIDs(t *testing.T) { + tests := []struct { + name string + input []types.NodeID + want []types.NodeID + }{ + { + name: "nil input", + input: nil, + want: nil, + }, + { + name: "empty input", + input: []types.NodeID{}, + want: nil, + }, + { + name: "single element", + input: []types.NodeID{1}, + want: []types.NodeID{1}, + }, + { + name: "no duplicates", + input: []types.NodeID{1, 2, 3}, + want: []types.NodeID{1, 2, 3}, + }, + { + name: "with duplicates", + input: []types.NodeID{3, 1, 2, 1, 3}, + want: []types.NodeID{1, 2, 3}, + }, + { + name: "all same", + input: []types.NodeID{5, 5, 5, 5}, + want: []types.NodeID{5}, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got := uniqueNodeIDs(tt.input) + assert.Equal(t, tt.want, got) + }) + } +} diff --git a/hscontrol/types/common.go b/hscontrol/types/common.go index a7d815bf..f4814519 100644 --- a/hscontrol/types/common.go +++ b/hscontrol/types/common.go @@ -220,10 +220,12 @@ func DefaultBatcherWorkers() int { // DefaultBatcherWorkersFor returns the default number of batcher workers for a given CPU count. // Default to 3/4 of CPU cores, minimum 1, no maximum. 
func DefaultBatcherWorkersFor(cpuCount int) int { - defaultWorkers := (cpuCount * 3) / 4 - if defaultWorkers < 1 { - defaultWorkers = 1 - } + const ( + workerNumerator = 3 + workerDenominator = 4 + ) + + defaultWorkers := max((cpuCount*workerNumerator)/workerDenominator, 1) return defaultWorkers } diff --git a/hscontrol/types/config.go b/hscontrol/types/config.go index 732b4d5a..4068d72e 100644 --- a/hscontrol/types/config.go +++ b/hscontrol/types/config.go @@ -28,6 +28,8 @@ const ( maxDuration time.Duration = 1<<63 - 1 PKCEMethodPlain string = "plain" PKCEMethodS256 string = "S256" + + defaultNodeStoreBatchSize = 100 ) var ( @@ -92,6 +94,7 @@ type Config struct { LogTail LogTailConfig RandomizeClientPort bool + Taildrop TaildropConfig CLI CLIConfig @@ -182,6 +185,7 @@ type OIDCConfig struct { AllowedDomains []string AllowedUsers []string AllowedGroups []string + EmailVerifiedRequired bool Expiry time.Duration UseExpiryFromToken bool PKCE PKCEConfig @@ -209,6 +213,10 @@ type LogTailConfig struct { Enabled bool } +type TaildropConfig struct { + Enabled bool +} + type CLIConfig struct { Address string APIKey string @@ -230,13 +238,63 @@ type LogConfig struct { Level zerolog.Level } +// Tuning contains advanced performance tuning parameters for Headscale. +// These settings control internal batching, timeouts, and resource allocation. +// The defaults are carefully chosen for typical deployments and should rarely +// need adjustment. Changes to these values can significantly impact performance +// and resource usage. type Tuning struct { - NotifierSendTimeout time.Duration - BatchChangeDelay time.Duration + // NotifierSendTimeout is the maximum time to wait when sending notifications + // to connected clients about network changes. + NotifierSendTimeout time.Duration + + // BatchChangeDelay controls how long to wait before sending batched updates + // to clients when multiple changes occur in rapid succession. + BatchChangeDelay time.Duration + + // NodeMapSessionBufferedChanSize sets the buffer size for the channel that + // queues map updates to be sent to connected clients. NodeMapSessionBufferedChanSize int - BatcherWorkers int - RegisterCacheCleanup time.Duration - RegisterCacheExpiration time.Duration + + // BatcherWorkers controls the number of parallel workers processing map + // updates for connected clients. + BatcherWorkers int + + // RegisterCacheCleanup is the interval between cleanup operations for + // expired registration cache entries. + RegisterCacheCleanup time.Duration + + // RegisterCacheExpiration is how long registration cache entries remain + // valid before being eligible for cleanup. + RegisterCacheExpiration time.Duration + + // NodeStoreBatchSize controls how many write operations are accumulated + // before rebuilding the in-memory node snapshot. + // + // The NodeStore batches write operations (add/update/delete nodes) before + // rebuilding its in-memory data structures. Rebuilding involves recalculating + // peer relationships between all nodes based on the current ACL policy, which + // is computationally expensive and scales with the square of the number of nodes. + // + // By batching writes, Headscale can process N operations but only rebuild once, + // rather than rebuilding N times. This significantly reduces CPU usage during + // bulk operations like initial sync or policy updates. + // + // Trade-off: Higher values reduce CPU usage from rebuilds but increase latency + // for individual operations waiting for their batch to complete. 
+ NodeStoreBatchSize int + + // NodeStoreBatchTimeout is the maximum time to wait before processing a + // partial batch of node operations. + // + // When NodeStoreBatchSize operations haven't accumulated, this timeout ensures + // writes don't wait indefinitely. The batch processes when either the size + // threshold is reached OR this timeout expires, whichever comes first. + // + // Trade-off: Lower values provide faster response for individual operations + // but trigger more frequent (expensive) peer map rebuilds. Higher values + // optimize for bulk throughput at the cost of individual operation latency. + NodeStoreBatchTimeout time.Duration } func validatePKCEMethod(method string) error { @@ -327,15 +385,19 @@ func LoadConfig(path string, isFile bool) error { viper.SetDefault("oidc.use_expiry_from_token", false) viper.SetDefault("oidc.pkce.enabled", false) viper.SetDefault("oidc.pkce.method", "S256") + viper.SetDefault("oidc.email_verified_required", true) viper.SetDefault("logtail.enabled", false) viper.SetDefault("randomize_client_port", false) + viper.SetDefault("taildrop.enabled", true) viper.SetDefault("ephemeral_node_inactivity_timeout", "120s") viper.SetDefault("tuning.notifier_send_timeout", "800ms") viper.SetDefault("tuning.batch_change_delay", "800ms") viper.SetDefault("tuning.node_mapsession_buffered_chan_size", 30) + viper.SetDefault("tuning.node_store_batch_size", defaultNodeStoreBatchSize) + viper.SetDefault("tuning.node_store_batch_timeout", "500ms") viper.SetDefault("prefixes.allocation", string(IPAllocationStrategySequential)) @@ -437,6 +499,21 @@ func validateServerConfig() error { } } + // Validate tuning parameters + if size := viper.GetInt("tuning.node_store_batch_size"); size <= 0 { + errorText += fmt.Sprintf( + "Fatal config error: tuning.node_store_batch_size must be positive, got %d\n", + size, + ) + } + + if timeout := viper.GetDuration("tuning.node_store_batch_timeout"); timeout <= 0 { + errorText += fmt.Sprintf( + "Fatal config error: tuning.node_store_batch_timeout must be positive, got %s\n", + timeout, + ) + } + if errorText != "" { // nolint return errors.New(strings.TrimSuffix(errorText, "\n")) @@ -947,14 +1024,15 @@ func LoadServerConfig() (*Config, error) { OnlyStartIfOIDCIsAvailable: viper.GetBool( "oidc.only_start_if_oidc_is_available", ), - Issuer: viper.GetString("oidc.issuer"), - ClientID: viper.GetString("oidc.client_id"), - ClientSecret: oidcClientSecret, - Scope: viper.GetStringSlice("oidc.scope"), - ExtraParams: viper.GetStringMapString("oidc.extra_params"), - AllowedDomains: viper.GetStringSlice("oidc.allowed_domains"), - AllowedUsers: viper.GetStringSlice("oidc.allowed_users"), - AllowedGroups: viper.GetStringSlice("oidc.allowed_groups"), + Issuer: viper.GetString("oidc.issuer"), + ClientID: viper.GetString("oidc.client_id"), + ClientSecret: oidcClientSecret, + Scope: viper.GetStringSlice("oidc.scope"), + ExtraParams: viper.GetStringMapString("oidc.extra_params"), + AllowedDomains: viper.GetStringSlice("oidc.allowed_domains"), + AllowedUsers: viper.GetStringSlice("oidc.allowed_users"), + AllowedGroups: viper.GetStringSlice("oidc.allowed_groups"), + EmailVerifiedRequired: viper.GetBool("oidc.email_verified_required"), Expiry: func() time.Duration { // if set to 0, we assume no expiry if value := viper.GetString("oidc.expiry"); value == "0" { @@ -979,6 +1057,9 @@ func LoadServerConfig() (*Config, error) { LogTail: logTailConfig, RandomizeClientPort: randomizeClientPort, + Taildrop: TaildropConfig{ + Enabled: 
viper.GetBool("taildrop.enabled"), + }, Policy: policyConfig(), @@ -991,7 +1072,6 @@ func LoadServerConfig() (*Config, error) { Log: logConfig, - // TODO(kradalby): Document these settings when more stable Tuning: Tuning{ NotifierSendTimeout: viper.GetDuration("tuning.notifier_send_timeout"), BatchChangeDelay: viper.GetDuration("tuning.batch_change_delay"), @@ -1006,6 +1086,8 @@ func LoadServerConfig() (*Config, error) { }(), RegisterCacheCleanup: viper.GetDuration("tuning.register_cache_cleanup"), RegisterCacheExpiration: viper.GetDuration("tuning.register_cache_expiration"), + NodeStoreBatchSize: viper.GetInt("tuning.node_store_batch_size"), + NodeStoreBatchTimeout: viper.GetDuration("tuning.node_store_batch_timeout"), }, }, nil } diff --git a/hscontrol/types/node.go b/hscontrol/types/node.go index c6429669..41cd9759 100644 --- a/hscontrol/types/node.go +++ b/hscontrol/types/node.go @@ -6,7 +6,6 @@ import ( "net/netip" "regexp" "slices" - "sort" "strconv" "strings" "time" @@ -28,10 +27,16 @@ var ( ErrHostnameTooLong = errors.New("hostname too long, cannot except 255 ASCII chars") ErrNodeHasNoGivenName = errors.New("node has no given name") ErrNodeUserHasNoName = errors.New("node user has no name") + ErrCannotRemoveAllTags = errors.New("cannot remove all tags from node") + ErrInvalidNodeView = errors.New("cannot convert invalid NodeView to tailcfg.Node") invalidDNSRegex = regexp.MustCompile("[^a-z0-9-.]+") ) +// RouteFunc is a function that takes a node ID and returns a list of +// netip.Prefixes representing the primary routes for that node. +type RouteFunc func(id NodeID) []netip.Prefix + type ( NodeID uint64 NodeIDs []NodeID @@ -97,16 +102,21 @@ type Node struct { // GivenName is the name used in all DNS related // parts of headscale. GivenName string `gorm:"type:varchar(63);unique_index"` - UserID uint - User User `gorm:"constraint:OnDelete:CASCADE;"` + + // UserID is set for ALL nodes (tagged and user-owned) to track "created by". + // For tagged nodes, this is informational only - the tag is the owner. + // For user-owned nodes, this identifies the owner. + // Only nil for orphaned nodes (should not happen in normal operation). + UserID *uint + User *User `gorm:"constraint:OnDelete:CASCADE;"` RegisterMethod string - // ForcedTags are tags set by CLI/API. It is not considered - // the source of truth, but is one of the sources from - // which a tag might originate. - // ForcedTags are _always_ applied to the node. - ForcedTags []string `gorm:"column:forced_tags;serializer:json"` + // Tags is the definitive owner for tagged nodes. + // When non-empty, the node is "tagged" and tags define its identity. + // Empty for user-owned nodes. + // Tags cannot be removed once set (one-way transition). + Tags []string `gorm:"column:tags;serializer:json"` // When a node has been created with a PreAuthKey, we need to // prevent the preauthkey from being deleted before the node. @@ -196,55 +206,32 @@ func (node *Node) HasIP(i netip.Addr) bool { return false } -// IsTagged reports if a device is tagged -// and therefore should not be treated as a -// user owned device. -// Currently, this function only handles tags set -// via CLI ("forced tags" and preauthkeys). +// IsTagged reports if a device is tagged and therefore should not be treated +// as a user-owned device. +// When a node has tags, the tags define its identity (not the user). 
func (node *Node) IsTagged() bool { - if len(node.ForcedTags) > 0 { - return true - } + return len(node.Tags) > 0 +} - if node.AuthKey != nil && len(node.AuthKey.Tags) > 0 { - return true - } - - if node.Hostinfo == nil { - return false - } - - // TODO(kradalby): Figure out how tagging should work - // and hostinfo.requestedtags. - // Do this in other work. - - return false +// IsUserOwned returns true if node is owned by a user (not tagged). +// Tagged nodes may have a UserID for "created by" tracking, but the tag is the owner. +func (node *Node) IsUserOwned() bool { + return !node.IsTagged() } // HasTag reports if a node has a given tag. -// Currently, this function only handles tags set -// via CLI ("forced tags" and preauthkeys). func (node *Node) HasTag(tag string) bool { - return slices.Contains(node.Tags(), tag) + return slices.Contains(node.Tags, tag) } -func (node *Node) Tags() []string { - var tags []string - - if node.AuthKey != nil { - tags = append(tags, node.AuthKey.Tags...) +// TypedUserID returns the UserID as a typed UserID type. +// Returns 0 if UserID is nil. +func (node *Node) TypedUserID() UserID { + if node.UserID == nil { + return 0 } - // TODO(kradalby): Figure out how tagging should work - // and hostinfo.requestedtags. - // Do this in other work. - // #2417 - - tags = append(tags, node.ForcedTags...) - sort.Strings(tags) - tags = slices.Compact(tags) - - return tags + return UserID(*node.UserID) } func (node *Node) RequestTags() []string { @@ -389,8 +376,8 @@ func (node *Node) Proto() *v1.Node { IpAddresses: node.IPsAsString(), Name: node.Hostname, GivenName: node.GivenName, - User: node.User.Proto(), - ForcedTags: node.ForcedTags, + User: nil, // Will be set below based on node type + Tags: node.Tags, Online: node.IsOnline != nil && *node.IsOnline, // Only ApprovedRoutes and AvailableRoutes is set here. SubnetRoutes has @@ -404,6 +391,13 @@ func (node *Node) Proto() *v1.Node { CreatedAt: timestamppb.New(node.CreatedAt), } + // Set User field based on node ownership + // Note: User will be set to TaggedDevices in the gRPC layer (grpcv1.go) + // for proper MapResponse formatting + if node.User != nil { + nodeProto.User = node.User.Proto() + } + if node.AuthKey != nil { nodeProto.PreAuthKey = node.AuthKey.Proto() } @@ -535,8 +529,10 @@ func (node *Node) PeerChangeFromMapRequest(req tailcfg.MapRequest) tailcfg.PeerC } } - // TODO(kradalby): Find a good way to compare updates - ret.Endpoints = req.Endpoints + // Compare endpoints using order-independent comparison + if EndpointsChanged(node.Endpoints, req.Endpoints) { + ret.Endpoints = req.Endpoints + } now := time.Now() ret.LastSeen = &now @@ -544,6 +540,32 @@ func (node *Node) PeerChangeFromMapRequest(req tailcfg.MapRequest) tailcfg.PeerC return ret } +// EndpointsChanged compares two endpoint slices and returns true if they differ. +// The comparison is order-independent - endpoints are sorted before comparison. 
+func EndpointsChanged(oldEndpoints, newEndpoints []netip.AddrPort) bool { + if len(oldEndpoints) != len(newEndpoints) { + return true + } + + if len(oldEndpoints) == 0 { + return false + } + + // Make copies to avoid modifying the original slices + oldCopy := slices.Clone(oldEndpoints) + newCopy := slices.Clone(newEndpoints) + + // Sort both slices to enable order-independent comparison + slices.SortFunc(oldCopy, func(a, b netip.AddrPort) int { + return a.Compare(b) + }) + slices.SortFunc(newCopy, func(a, b netip.AddrPort) int { + return a.Compare(b) + }) + + return !slices.Equal(oldCopy, newCopy) +} + func (node *Node) RegisterMethodToV1Enum() v1.RegisterMethod { switch node.RegisterMethod { case "authkey": @@ -673,8 +695,20 @@ func (nodes Nodes) DebugString() string { func (node Node) DebugString() string { var sb strings.Builder fmt.Fprintf(&sb, "%s(%s):\n", node.Hostname, node.ID) - fmt.Fprintf(&sb, "\tUser: %s (%d, %q)\n", node.User.Display(), node.User.ID, node.User.Username()) - fmt.Fprintf(&sb, "\tTags: %v\n", node.Tags()) + + // Show ownership status + if node.IsTagged() { + fmt.Fprintf(&sb, "\tTagged: %v\n", node.Tags) + + if node.User != nil { + fmt.Fprintf(&sb, "\tCreated by: %s (%d, %q)\n", node.User.Display(), node.User.ID, node.User.Username()) + } + } else if node.User != nil { + fmt.Fprintf(&sb, "\tUser-owned: %s (%d, %q)\n", node.User.Display(), node.User.ID, node.User.Username()) + } else { + fmt.Fprintf(&sb, "\tOrphaned: no user or tags\n") + } + fmt.Fprintf(&sb, "\tIPs: %v\n", node.IPs()) fmt.Fprintf(&sb, "\tApprovedRoutes: %v\n", node.ApprovedRoutes) fmt.Fprintf(&sb, "\tAnnouncedRoutes: %v\n", node.AnnouncedRoutes()) @@ -685,88 +719,94 @@ func (node Node) DebugString() string { return sb.String() } -func (v NodeView) UserView() UserView { - u := v.User() - return u.View() +// Owner returns the owner for display purposes. +// For tagged nodes, returns TaggedDevices. For user-owned nodes, returns the user. 
+func (nv NodeView) Owner() UserView { + if nv.IsTagged() { + return TaggedDevices.View() + } + + return nv.User() } -func (v NodeView) IPs() []netip.Addr { - if !v.Valid() { +func (nv NodeView) IPs() []netip.Addr { + if !nv.Valid() { return nil } - return v.ж.IPs() + + return nv.ж.IPs() } -func (v NodeView) InIPSet(set *netipx.IPSet) bool { - if !v.Valid() { - return false - } - return v.ж.InIPSet(set) -} - -func (v NodeView) CanAccess(matchers []matcher.Match, node2 NodeView) bool { - if !v.Valid() { +func (nv NodeView) InIPSet(set *netipx.IPSet) bool { + if !nv.Valid() { return false } - return v.ж.CanAccess(matchers, node2.AsStruct()) + return nv.ж.InIPSet(set) } -func (v NodeView) CanAccessRoute(matchers []matcher.Match, route netip.Prefix) bool { - if !v.Valid() { +func (nv NodeView) CanAccess(matchers []matcher.Match, node2 NodeView) bool { + if !nv.Valid() { return false } - return v.ж.CanAccessRoute(matchers, route) + return nv.ж.CanAccess(matchers, node2.AsStruct()) } -func (v NodeView) AnnouncedRoutes() []netip.Prefix { - if !v.Valid() { - return nil - } - return v.ж.AnnouncedRoutes() -} - -func (v NodeView) SubnetRoutes() []netip.Prefix { - if !v.Valid() { - return nil - } - return v.ж.SubnetRoutes() -} - -func (v NodeView) IsSubnetRouter() bool { - if !v.Valid() { +func (nv NodeView) CanAccessRoute(matchers []matcher.Match, route netip.Prefix) bool { + if !nv.Valid() { return false } - return v.ж.IsSubnetRouter() + + return nv.ж.CanAccessRoute(matchers, route) } -func (v NodeView) AllApprovedRoutes() []netip.Prefix { - if !v.Valid() { +func (nv NodeView) AnnouncedRoutes() []netip.Prefix { + if !nv.Valid() { return nil } - return v.ж.AllApprovedRoutes() + + return nv.ж.AnnouncedRoutes() } -func (v NodeView) AppendToIPSet(build *netipx.IPSetBuilder) { - if !v.Valid() { +func (nv NodeView) SubnetRoutes() []netip.Prefix { + if !nv.Valid() { + return nil + } + + return nv.ж.SubnetRoutes() +} + +func (nv NodeView) IsSubnetRouter() bool { + if !nv.Valid() { + return false + } + + return nv.ж.IsSubnetRouter() +} + +func (nv NodeView) AllApprovedRoutes() []netip.Prefix { + if !nv.Valid() { + return nil + } + + return nv.ж.AllApprovedRoutes() +} + +func (nv NodeView) AppendToIPSet(build *netipx.IPSetBuilder) { + if !nv.Valid() { return } - v.ж.AppendToIPSet(build) + + nv.ж.AppendToIPSet(build) } -func (v NodeView) RequestTagsSlice() views.Slice[string] { - if !v.Valid() || !v.Hostinfo().Valid() { +func (nv NodeView) RequestTagsSlice() views.Slice[string] { + if !nv.Valid() || !nv.Hostinfo().Valid() { return views.Slice[string]{} } - return v.Hostinfo().RequestTags() -} -func (v NodeView) Tags() []string { - if !v.Valid() { - return nil - } - return v.ж.Tags() + return nv.Hostinfo().RequestTags() } // IsTagged reports if a device is tagged @@ -774,128 +814,293 @@ func (v NodeView) Tags() []string { // user owned device. // Currently, this function only handles tags set // via CLI ("forced tags" and preauthkeys). -func (v NodeView) IsTagged() bool { - if !v.Valid() { +func (nv NodeView) IsTagged() bool { + if !nv.Valid() { return false } - return v.ж.IsTagged() + + return nv.ж.IsTagged() } // IsExpired returns whether the node registration has expired. -func (v NodeView) IsExpired() bool { - if !v.Valid() { +func (nv NodeView) IsExpired() bool { + if !nv.Valid() { return true } - return v.ж.IsExpired() + + return nv.ж.IsExpired() } // IsEphemeral returns if the node is registered as an Ephemeral node. 
// https://tailscale.com/kb/1111/ephemeral-nodes/ -func (v NodeView) IsEphemeral() bool { - if !v.Valid() { +func (nv NodeView) IsEphemeral() bool { + if !nv.Valid() { return false } - return v.ж.IsEphemeral() + + return nv.ж.IsEphemeral() } // PeerChangeFromMapRequest takes a MapRequest and compares it to the node // to produce a PeerChange struct that can be used to updated the node and // inform peers about smaller changes to the node. -func (v NodeView) PeerChangeFromMapRequest(req tailcfg.MapRequest) tailcfg.PeerChange { - if !v.Valid() { +func (nv NodeView) PeerChangeFromMapRequest(req tailcfg.MapRequest) tailcfg.PeerChange { + if !nv.Valid() { return tailcfg.PeerChange{} } - return v.ж.PeerChangeFromMapRequest(req) + + return nv.ж.PeerChangeFromMapRequest(req) } // GetFQDN returns the fully qualified domain name for the node. -func (v NodeView) GetFQDN(baseDomain string) (string, error) { - if !v.Valid() { +func (nv NodeView) GetFQDN(baseDomain string) (string, error) { + if !nv.Valid() { return "", errors.New("failed to create valid FQDN: node view is invalid") } - return v.ж.GetFQDN(baseDomain) + + return nv.ж.GetFQDN(baseDomain) } // ExitRoutes returns a list of both exit routes if the // node has any exit routes enabled. // If none are enabled, it will return nil. -func (v NodeView) ExitRoutes() []netip.Prefix { - if !v.Valid() { +func (nv NodeView) ExitRoutes() []netip.Prefix { + if !nv.Valid() { return nil } - return v.ж.ExitRoutes() + + return nv.ж.ExitRoutes() } -func (v NodeView) IsExitNode() bool { - if !v.Valid() { +func (nv NodeView) IsExitNode() bool { + if !nv.Valid() { return false } - return v.ж.IsExitNode() + + return nv.ж.IsExitNode() } // RequestTags returns the ACL tags that the node is requesting. -func (v NodeView) RequestTags() []string { - if !v.Valid() || !v.Hostinfo().Valid() { +func (nv NodeView) RequestTags() []string { + if !nv.Valid() || !nv.Hostinfo().Valid() { return []string{} } - return v.Hostinfo().RequestTags().AsSlice() + + return nv.Hostinfo().RequestTags().AsSlice() } // Proto converts the NodeView to a protobuf representation. -func (v NodeView) Proto() *v1.Node { - if !v.Valid() { +func (nv NodeView) Proto() *v1.Node { + if !nv.Valid() { return nil } - return v.ж.Proto() + + return nv.ж.Proto() } // HasIP reports if a node has a given IP address. -func (v NodeView) HasIP(i netip.Addr) bool { - if !v.Valid() { +func (nv NodeView) HasIP(i netip.Addr) bool { + if !nv.Valid() { return false } - return v.ж.HasIP(i) + + return nv.ж.HasIP(i) } // HasTag reports if a node has a given tag. -func (v NodeView) HasTag(tag string) bool { - if !v.Valid() { +func (nv NodeView) HasTag(tag string) bool { + if !nv.Valid() { return false } - return v.ж.HasTag(tag) + + return nv.ж.HasTag(tag) +} + +// TypedUserID returns the UserID as a typed UserID type. +// Returns 0 if UserID is nil or node is invalid. +func (nv NodeView) TypedUserID() UserID { + if !nv.Valid() { + return 0 + } + + return nv.ж.TypedUserID() +} + +// TailscaleUserID returns the user ID to use in Tailscale protocol. +// Tagged nodes always return TaggedDevices.ID, user-owned nodes return their actual UserID. 
+func (nv NodeView) TailscaleUserID() tailcfg.UserID { + if !nv.Valid() { + return 0 + } + + if nv.IsTagged() { + //nolint:gosec // G115: TaggedDevices.ID is a constant that fits in int64 + return tailcfg.UserID(int64(TaggedDevices.ID)) + } + + //nolint:gosec // G115: UserID values are within int64 range + return tailcfg.UserID(int64(nv.UserID().Get())) } // Prefixes returns the node IPs as netip.Prefix. -func (v NodeView) Prefixes() []netip.Prefix { - if !v.Valid() { +func (nv NodeView) Prefixes() []netip.Prefix { + if !nv.Valid() { return nil } - return v.ж.Prefixes() + + return nv.ж.Prefixes() } // IPsAsString returns the node IPs as strings. -func (v NodeView) IPsAsString() []string { - if !v.Valid() { +func (nv NodeView) IPsAsString() []string { + if !nv.Valid() { return nil } - return v.ж.IPsAsString() + + return nv.ж.IPsAsString() } // HasNetworkChanges checks if the node has network-related changes. // Returns true if IPs, announced routes, or approved routes changed. // This is primarily used for policy cache invalidation. -func (v NodeView) HasNetworkChanges(other NodeView) bool { - if !slices.Equal(v.IPs(), other.IPs()) { +func (nv NodeView) HasNetworkChanges(other NodeView) bool { + if !slices.Equal(nv.IPs(), other.IPs()) { return true } - if !slices.Equal(v.AnnouncedRoutes(), other.AnnouncedRoutes()) { + if !slices.Equal(nv.AnnouncedRoutes(), other.AnnouncedRoutes()) { return true } - if !slices.Equal(v.SubnetRoutes(), other.SubnetRoutes()) { + if !slices.Equal(nv.SubnetRoutes(), other.SubnetRoutes()) { return true } return false } + +// HasPolicyChange reports whether the node has changes that affect policy evaluation. +func (nv NodeView) HasPolicyChange(other NodeView) bool { + if nv.UserID() != other.UserID() { + return true + } + + if !views.SliceEqual(nv.Tags(), other.Tags()) { + return true + } + + if !slices.Equal(nv.IPs(), other.IPs()) { + return true + } + + return false +} + +// TailNodes converts a slice of NodeViews into Tailscale tailcfg.Nodes. +func TailNodes( + nodes views.Slice[NodeView], + capVer tailcfg.CapabilityVersion, + primaryRouteFunc RouteFunc, + cfg *Config, +) ([]*tailcfg.Node, error) { + tNodes := make([]*tailcfg.Node, 0, nodes.Len()) + + for _, node := range nodes.All() { + tNode, err := node.TailNode(capVer, primaryRouteFunc, cfg) + if err != nil { + return nil, err + } + + tNodes = append(tNodes, tNode) + } + + return tNodes, nil +} + +// TailNode converts a NodeView into a Tailscale tailcfg.Node. +func (nv NodeView) TailNode( + capVer tailcfg.CapabilityVersion, + primaryRouteFunc RouteFunc, + cfg *Config, +) (*tailcfg.Node, error) { + if !nv.Valid() { + return nil, ErrInvalidNodeView + } + + hostname, err := nv.GetFQDN(cfg.BaseDomain) + if err != nil { + return nil, err + } + + var derp int + // TODO(kradalby): legacyDERP was removed in tailscale/tailscale@2fc4455e6dd9ab7f879d4e2f7cffc2be81f14077 + // and should be removed after 111 is the minimum capver. + legacyDERP := "127.3.3.40:0" // Zero means disconnected or unknown. 
+ if nv.Hostinfo().Valid() && nv.Hostinfo().NetInfo().Valid() { + legacyDERP = fmt.Sprintf("127.3.3.40:%d", nv.Hostinfo().NetInfo().PreferredDERP()) + derp = nv.Hostinfo().NetInfo().PreferredDERP() + } + + var keyExpiry time.Time + if nv.Expiry().Valid() { + keyExpiry = nv.Expiry().Get() + } + + primaryRoutes := primaryRouteFunc(nv.ID()) + allowedIPs := slices.Concat(nv.Prefixes(), primaryRoutes, nv.ExitRoutes()) + tsaddr.SortPrefixes(allowedIPs) + + capMap := tailcfg.NodeCapMap{ + tailcfg.CapabilityAdmin: []tailcfg.RawMessage{}, + tailcfg.CapabilitySSH: []tailcfg.RawMessage{}, + } + if cfg.RandomizeClientPort { + capMap[tailcfg.NodeAttrRandomizeClientPort] = []tailcfg.RawMessage{} + } + + if cfg.Taildrop.Enabled { + capMap[tailcfg.CapabilityFileSharing] = []tailcfg.RawMessage{} + } + + tNode := tailcfg.Node{ + //nolint:gosec // G115: NodeID values are within int64 range + ID: tailcfg.NodeID(nv.ID()), + StableID: nv.ID().StableID(), + Name: hostname, + Cap: capVer, + CapMap: capMap, + + User: nv.TailscaleUserID(), + + Key: nv.NodeKey(), + KeyExpiry: keyExpiry.UTC(), + + Machine: nv.MachineKey(), + DiscoKey: nv.DiscoKey(), + Addresses: nv.Prefixes(), + PrimaryRoutes: primaryRoutes, + AllowedIPs: allowedIPs, + Endpoints: nv.Endpoints().AsSlice(), + HomeDERP: derp, + LegacyDERPString: legacyDERP, + Hostinfo: nv.Hostinfo(), + Created: nv.CreatedAt().UTC(), + + Online: nv.IsOnline().Clone(), + + Tags: nv.Tags().AsSlice(), + + MachineAuthorized: !nv.IsExpired(), + Expired: nv.IsExpired(), + } + + // Set LastSeen only for offline nodes to avoid confusing Tailscale clients + // during rapid reconnection cycles. Online nodes should not have LastSeen set + // as this can make clients interpret them as "not online" despite Online=true. + if nv.LastSeen().Valid() && nv.IsOnline().Valid() && !nv.IsOnline().Get() { + lastSeen := nv.LastSeen().Get() + tNode.LastSeen = &lastSeen + } + + return &tNode, nil +} diff --git a/hscontrol/types/node_tags_test.go b/hscontrol/types/node_tags_test.go new file mode 100644 index 00000000..72598b3c --- /dev/null +++ b/hscontrol/types/node_tags_test.go @@ -0,0 +1,295 @@ +package types + +import ( + "testing" + + "github.com/juanfont/headscale/hscontrol/util" + "github.com/stretchr/testify/assert" + "gorm.io/gorm" + "tailscale.com/types/ptr" +) + +// TestNodeIsTagged tests the IsTagged() method for determining if a node is tagged. +func TestNodeIsTagged(t *testing.T) { + tests := []struct { + name string + node Node + want bool + }{ + { + name: "node with tags - is tagged", + node: Node{ + Tags: []string{"tag:server", "tag:prod"}, + }, + want: true, + }, + { + name: "node with single tag - is tagged", + node: Node{ + Tags: []string{"tag:web"}, + }, + want: true, + }, + { + name: "node with no tags - not tagged", + node: Node{ + Tags: []string{}, + }, + want: false, + }, + { + name: "node with nil tags - not tagged", + node: Node{ + Tags: nil, + }, + want: false, + }, + { + // Tags should be copied from AuthKey during registration, so a node + // with only AuthKey.Tags and no Tags would be invalid in practice. + // IsTagged() only checks node.Tags, not AuthKey.Tags. 
+ name: "node registered with tagged authkey only - not tagged (tags should be copied)", + node: Node{ + AuthKey: &PreAuthKey{ + Tags: []string{"tag:database"}, + }, + }, + want: false, + }, + { + name: "node with both tags and authkey tags - is tagged", + node: Node{ + Tags: []string{"tag:server"}, + AuthKey: &PreAuthKey{ + Tags: []string{"tag:database"}, + }, + }, + want: true, + }, + { + name: "node with user and no tags - not tagged", + node: Node{ + UserID: ptr.To(uint(42)), + Tags: []string{}, + }, + want: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got := tt.node.IsTagged() + assert.Equal(t, tt.want, got, "IsTagged() returned unexpected value") + }) + } +} + +// TestNodeViewIsTagged tests the IsTagged() method on NodeView. +func TestNodeViewIsTagged(t *testing.T) { + tests := []struct { + name string + node Node + want bool + }{ + { + name: "tagged node via Tags field", + node: Node{ + Tags: []string{"tag:server"}, + }, + want: true, + }, + { + // Tags should be copied from AuthKey during registration, so a node + // with only AuthKey.Tags and no Tags would be invalid in practice. + name: "node with only AuthKey tags - not tagged (tags should be copied)", + node: Node{ + AuthKey: &PreAuthKey{ + Tags: []string{"tag:web"}, + }, + }, + want: false, // IsTagged() only checks node.Tags + }, + { + name: "user-owned node", + node: Node{ + UserID: ptr.To(uint(1)), + }, + want: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + view := tt.node.View() + got := view.IsTagged() + assert.Equal(t, tt.want, got, "NodeView.IsTagged() returned unexpected value") + }) + } +} + +// TestNodeHasTag tests the HasTag() method for checking specific tag membership. +func TestNodeHasTag(t *testing.T) { + tests := []struct { + name string + node Node + tag string + want bool + }{ + { + name: "node has the tag", + node: Node{ + Tags: []string{"tag:server", "tag:prod"}, + }, + tag: "tag:server", + want: true, + }, + { + name: "node does not have the tag", + node: Node{ + Tags: []string{"tag:server", "tag:prod"}, + }, + tag: "tag:web", + want: false, + }, + { + // Tags should be copied from AuthKey during registration + // HasTag() only checks node.Tags, not AuthKey.Tags + name: "node has tag only in authkey - returns false", + node: Node{ + AuthKey: &PreAuthKey{ + Tags: []string{"tag:database"}, + }, + }, + tag: "tag:database", + want: false, + }, + { + // node.Tags is what matters, not AuthKey.Tags + name: "node has tag in Tags but not in AuthKey", + node: Node{ + Tags: []string{"tag:server"}, + AuthKey: &PreAuthKey{ + Tags: []string{"tag:database"}, + }, + }, + tag: "tag:server", + want: true, + }, + { + name: "invalid tag format still returns false", + node: Node{ + Tags: []string{"tag:server"}, + }, + tag: "invalid-tag", + want: false, + }, + { + name: "empty tag returns false", + node: Node{ + Tags: []string{"tag:server"}, + }, + tag: "", + want: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got := tt.node.HasTag(tt.tag) + assert.Equal(t, tt.want, got, "HasTag() returned unexpected value") + }) + } +} + +// TestNodeTagsImmutableAfterRegistration tests that tags can only be set during registration. 
+func TestNodeTagsImmutableAfterRegistration(t *testing.T) { + // Test that a node registered with tags keeps them + taggedNode := Node{ + ID: 1, + Tags: []string{"tag:server"}, + AuthKey: &PreAuthKey{ + Tags: []string{"tag:server"}, + }, + RegisterMethod: util.RegisterMethodAuthKey, + } + + // Node should be tagged + assert.True(t, taggedNode.IsTagged(), "Node registered with tags should be tagged") + + // Node should have the tag + has := taggedNode.HasTag("tag:server") + assert.True(t, has, "Node should have the tag it was registered with") + + // Test that a user-owned node is not tagged + userNode := Node{ + ID: 2, + UserID: ptr.To(uint(42)), + Tags: []string{}, + RegisterMethod: util.RegisterMethodOIDC, + } + + assert.False(t, userNode.IsTagged(), "User-owned node should not be tagged") +} + +// TestNodeOwnershipModel tests the tags-as-identity model. +func TestNodeOwnershipModel(t *testing.T) { + tests := []struct { + name string + node Node + wantIsTagged bool + description string + }{ + { + name: "tagged node has tags, UserID is informational", + node: Node{ + ID: 1, + UserID: ptr.To(uint(5)), // "created by" user 5 + Tags: []string{"tag:server"}, + }, + wantIsTagged: true, + description: "Tagged nodes may have UserID set for tracking, but ownership is defined by tags", + }, + { + name: "user-owned node has no tags", + node: Node{ + ID: 2, + UserID: ptr.To(uint(5)), + Tags: []string{}, + }, + wantIsTagged: false, + description: "User-owned nodes are owned by the user, not by tags", + }, + { + // Tags should be copied from AuthKey to Node during registration + // IsTagged() only checks node.Tags, not AuthKey.Tags + name: "node with only authkey tags - not tagged (tags should be copied)", + node: Node{ + ID: 3, + UserID: ptr.To(uint(5)), // "created by" user 5 + AuthKey: &PreAuthKey{ + Tags: []string{"tag:database"}, + }, + }, + wantIsTagged: false, + description: "IsTagged() only checks node.Tags; AuthKey.Tags should be copied during registration", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got := tt.node.IsTagged() + assert.Equal(t, tt.wantIsTagged, got, tt.description) + }) + } +} + +// TestUserTypedID tests the TypedID() helper method. 
+func TestUserTypedID(t *testing.T) { + user := User{ + Model: gorm.Model{ID: 42}, + } + + typedID := user.TypedID() + assert.NotNil(t, typedID, "TypedID() should return non-nil pointer") + assert.Equal(t, UserID(42), *typedID, "TypedID() should return correct UserID value") +} diff --git a/hscontrol/types/node_test.go b/hscontrol/types/node_test.go index c992219e..9518833f 100644 --- a/hscontrol/types/node_test.go +++ b/hscontrol/types/node_test.go @@ -139,7 +139,7 @@ func TestNodeFQDN(t *testing.T) { name: "no-dnsconfig-with-username", node: Node{ GivenName: "test", - User: User{ + User: &User{ Name: "user", }, }, @@ -150,7 +150,7 @@ func TestNodeFQDN(t *testing.T) { name: "all-set", node: Node{ GivenName: "test", - User: User{ + User: &User{ Name: "user", }, }, @@ -160,7 +160,7 @@ func TestNodeFQDN(t *testing.T) { { name: "no-given-name", node: Node{ - User: User{ + User: &User{ Name: "user", }, }, @@ -179,7 +179,7 @@ func TestNodeFQDN(t *testing.T) { name: "no-dnsconfig", node: Node{ GivenName: "test", - User: User{ + User: &User{ Name: "user", }, }, diff --git a/hscontrol/types/preauth_key.go b/hscontrol/types/preauth_key.go index 659e0a76..2ce02f02 100644 --- a/hscontrol/types/preauth_key.go +++ b/hscontrol/types/preauth_key.go @@ -14,35 +14,95 @@ func (e PAKError) Error() string { return string(e) } // PreAuthKey describes a pre-authorization key usable in a particular user. type PreAuthKey struct { - ID uint64 `gorm:"primary_key"` - Key string - UserID uint - User User `gorm:"constraint:OnDelete:SET NULL;"` + ID uint64 `gorm:"primary_key"` + + // Legacy plaintext key (for backwards compatibility) + Key string + + // New bcrypt-based authentication + Prefix string + Hash []byte // bcrypt + + // For tagged keys: UserID tracks who created the key (informational) + // For user-owned keys: UserID tracks the node owner + // Can be nil for system-created tagged keys + UserID *uint + User *User `gorm:"constraint:OnDelete:SET NULL;"` + Reusable bool Ephemeral bool `gorm:"default:false"` Used bool `gorm:"default:false"` - // Tags are always applied to the node and is one of - // the sources of tags a node might have. They are copied - // from the PreAuthKey when the node logs in the first time, - // and ignored after. + // Tags to assign to nodes registered with this key. + // Tags are copied to the node during registration. + // If non-empty, this creates tagged nodes (not user-owned). Tags []string `gorm:"serializer:json"` CreatedAt *time.Time Expiration *time.Time } -func (key *PreAuthKey) Proto() *v1.PreAuthKey { +// PreAuthKeyNew is returned once when the key is created. 
+type PreAuthKeyNew struct { + ID uint64 `gorm:"primary_key"` + Key string + Reusable bool + Ephemeral bool + Tags []string + Expiration *time.Time + CreatedAt *time.Time + User *User // Can be nil for system-created tagged keys +} + +func (key *PreAuthKeyNew) Proto() *v1.PreAuthKey { protoKey := v1.PreAuthKey{ - User: key.User.Proto(), Id: key.ID, Key: key.Key, + User: nil, // Will be set below if not nil + Reusable: key.Reusable, + Ephemeral: key.Ephemeral, + AclTags: key.Tags, + } + + if key.User != nil { + protoKey.User = key.User.Proto() + } + + if key.Expiration != nil { + protoKey.Expiration = timestamppb.New(*key.Expiration) + } + + if key.CreatedAt != nil { + protoKey.CreatedAt = timestamppb.New(*key.CreatedAt) + } + + return &protoKey +} + +func (key *PreAuthKey) Proto() *v1.PreAuthKey { + protoKey := v1.PreAuthKey{ + User: nil, // Will be set below if not nil + Id: key.ID, Ephemeral: key.Ephemeral, Reusable: key.Reusable, Used: key.Used, AclTags: key.Tags, } + if key.User != nil { + protoKey.User = key.User.Proto() + } + + // For new keys (with prefix/hash), show the prefix so users can identify the key + // For legacy keys (with plaintext key), show the full key for backwards compatibility + if key.Prefix != "" { + protoKey.Key = "hskey-auth-" + key.Prefix + "-***" + } else if key.Key != "" { + // Legacy key - show full key for backwards compatibility + // TODO: Consider hiding this in a future major version + protoKey.Key = key.Key + } + if key.Expiration != nil { protoKey.Expiration = timestamppb.New(*key.Expiration) } @@ -90,3 +150,9 @@ func (pak *PreAuthKey) Validate() error { return nil } + +// IsTagged returns true if this PreAuthKey creates tagged nodes. +// When a PreAuthKey has tags, nodes registered with it will be tagged nodes. +func (pak *PreAuthKey) IsTagged() bool { + return len(pak.Tags) > 0 +} diff --git a/hscontrol/types/types_clone.go b/hscontrol/types/types_clone.go index 3f530dc9..4dfeedc2 100644 --- a/hscontrol/types/types_clone.go +++ b/hscontrol/types/types_clone.go @@ -54,7 +54,13 @@ func (src *Node) Clone() *Node { if dst.IPv6 != nil { dst.IPv6 = ptr.To(*src.IPv6) } - dst.ForcedTags = append(src.ForcedTags[:0:0], src.ForcedTags...) + if dst.UserID != nil { + dst.UserID = ptr.To(*src.UserID) + } + if dst.User != nil { + dst.User = ptr.To(*src.User) + } + dst.Tags = append(src.Tags[:0:0], src.Tags...) if dst.AuthKeyID != nil { dst.AuthKeyID = ptr.To(*src.AuthKeyID) } @@ -87,10 +93,10 @@ var _NodeCloneNeedsRegeneration = Node(struct { IPv6 *netip.Addr Hostname string GivenName string - UserID uint - User User + UserID *uint + User *User RegisterMethod string - ForcedTags []string + Tags []string AuthKeyID *uint64 AuthKey *PreAuthKey Expiry *time.Time @@ -110,6 +116,13 @@ func (src *PreAuthKey) Clone() *PreAuthKey { } dst := new(PreAuthKey) *dst = *src + dst.Hash = append(src.Hash[:0:0], src.Hash...) + if dst.UserID != nil { + dst.UserID = ptr.To(*src.UserID) + } + if dst.User != nil { + dst.User = ptr.To(*src.User) + } dst.Tags = append(src.Tags[:0:0], src.Tags...) 
if dst.CreatedAt != nil { dst.CreatedAt = ptr.To(*src.CreatedAt) @@ -124,8 +137,10 @@ func (src *PreAuthKey) Clone() *PreAuthKey { var _PreAuthKeyCloneNeedsRegeneration = PreAuthKey(struct { ID uint64 Key string - UserID uint - User User + Prefix string + Hash []byte + UserID *uint + User *User Reusable bool Ephemeral bool Used bool diff --git a/hscontrol/types/types_view.go b/hscontrol/types/types_view.go index 5c31eac8..e48dd029 100644 --- a/hscontrol/types/types_view.go +++ b/hscontrol/types/types_view.go @@ -7,11 +7,13 @@ package types import ( "database/sql" - "encoding/json" + jsonv1 "encoding/json" "errors" "net/netip" "time" + jsonv2 "github.com/go-json-experiment/json" + "github.com/go-json-experiment/json/jsontext" "gorm.io/gorm" "tailscale.com/tailcfg" "tailscale.com/types/key" @@ -48,8 +50,17 @@ func (v UserView) AsStruct() *User { return v.ж.Clone() } -func (v UserView) MarshalJSON() ([]byte, error) { return json.Marshal(v.ж) } +// MarshalJSON implements [jsonv1.Marshaler]. +func (v UserView) MarshalJSON() ([]byte, error) { + return jsonv1.Marshal(v.ж) +} +// MarshalJSONTo implements [jsonv2.MarshalerTo]. +func (v UserView) MarshalJSONTo(enc *jsontext.Encoder) error { + return jsonv2.MarshalEncode(enc, v.ж) +} + +// UnmarshalJSON implements [jsonv1.Unmarshaler]. func (v *UserView) UnmarshalJSON(b []byte) error { if v.ж != nil { return errors.New("already initialized") @@ -58,20 +69,51 @@ func (v *UserView) UnmarshalJSON(b []byte) error { return nil } var x User - if err := json.Unmarshal(b, &x); err != nil { + if err := jsonv1.Unmarshal(b, &x); err != nil { return err } v.ж = &x return nil } -func (v UserView) Model() gorm.Model { return v.ж.Model } -func (v UserView) Name() string { return v.ж.Name } -func (v UserView) DisplayName() string { return v.ж.DisplayName } -func (v UserView) Email() string { return v.ж.Email } +// UnmarshalJSONFrom implements [jsonv2.UnmarshalerFrom]. +func (v *UserView) UnmarshalJSONFrom(dec *jsontext.Decoder) error { + if v.ж != nil { + return errors.New("already initialized") + } + var x User + if err := jsonv2.UnmarshalDecode(dec, &x); err != nil { + return err + } + v.ж = &x + return nil +} + +func (v UserView) Model() gorm.Model { return v.ж.Model } + +// Name (username) for the user, is used if email is empty +// Should not be used, please use Username(). +// It is unique if ProviderIdentifier is not set. +func (v UserView) Name() string { return v.ж.Name } + +// Typically the full name of the user +func (v UserView) DisplayName() string { return v.ж.DisplayName } + +// Email of the user +// Should not be used, please use Username(). +func (v UserView) Email() string { return v.ж.Email } + +// ProviderIdentifier is a unique or not set identifier of the +// user from OIDC. It is the combination of `iss` +// and `sub` claim in the OIDC token. +// It is unique if set. +// It is unique together with Name. func (v UserView) ProviderIdentifier() sql.NullString { return v.ж.ProviderIdentifier } -func (v UserView) Provider() string { return v.ж.Provider } -func (v UserView) ProfilePicURL() string { return v.ж.ProfilePicURL } + +// Provider is the origin of the user account, +// same as RegistrationMethod, without authkey. +func (v UserView) Provider() string { return v.ж.Provider } +func (v UserView) ProfilePicURL() string { return v.ж.ProfilePicURL } // A compilation failure here means this code must be regenerated, with the command at the top of this file. 
var _UserViewNeedsRegeneration = User(struct { @@ -112,8 +154,17 @@ func (v NodeView) AsStruct() *Node { return v.ж.Clone() } -func (v NodeView) MarshalJSON() ([]byte, error) { return json.Marshal(v.ж) } +// MarshalJSON implements [jsonv1.Marshaler]. +func (v NodeView) MarshalJSON() ([]byte, error) { + return jsonv1.Marshal(v.ж) +} +// MarshalJSONTo implements [jsonv2.MarshalerTo]. +func (v NodeView) MarshalJSONTo(enc *jsontext.Encoder) error { + return jsonv2.MarshalEncode(enc, v.ж) +} + +// UnmarshalJSON implements [jsonv1.Unmarshaler]. func (v *NodeView) UnmarshalJSON(b []byte) error { if v.ж != nil { return errors.New("already initialized") @@ -122,7 +173,20 @@ func (v *NodeView) UnmarshalJSON(b []byte) error { return nil } var x Node - if err := json.Unmarshal(b, &x); err != nil { + if err := jsonv1.Unmarshal(b, &x); err != nil { + return err + } + v.ж = &x + return nil +} + +// UnmarshalJSONFrom implements [jsonv2.UnmarshalerFrom]. +func (v *NodeView) UnmarshalJSONFrom(dec *jsontext.Decoder) error { + if v.ж != nil { + return errors.New("already initialized") + } + var x Node + if err := jsonv2.UnmarshalDecode(dec, &x); err != nil { return err } v.ж = &x @@ -139,21 +203,52 @@ func (v NodeView) IPv4() views.ValuePointer[netip.Addr] { return views.ValuePo func (v NodeView) IPv6() views.ValuePointer[netip.Addr] { return views.ValuePointerOf(v.ж.IPv6) } -func (v NodeView) Hostname() string { return v.ж.Hostname } -func (v NodeView) GivenName() string { return v.ж.GivenName } -func (v NodeView) UserID() uint { return v.ж.UserID } -func (v NodeView) User() User { return v.ж.User } -func (v NodeView) RegisterMethod() string { return v.ж.RegisterMethod } -func (v NodeView) ForcedTags() views.Slice[string] { return views.SliceOf(v.ж.ForcedTags) } +// Hostname represents the name given by the Tailscale +// client during registration +func (v NodeView) Hostname() string { return v.ж.Hostname } + +// Givenname represents either: +// a DNS normalized version of Hostname +// a valid name set by the User +// +// GivenName is the name used in all DNS related +// parts of headscale. +func (v NodeView) GivenName() string { return v.ж.GivenName } + +// UserID is set for ALL nodes (tagged and user-owned) to track "created by". +// For tagged nodes, this is informational only - the tag is the owner. +// For user-owned nodes, this identifies the owner. +// Only nil for orphaned nodes (should not happen in normal operation). +func (v NodeView) UserID() views.ValuePointer[uint] { return views.ValuePointerOf(v.ж.UserID) } + +func (v NodeView) User() UserView { return v.ж.User.View() } +func (v NodeView) RegisterMethod() string { return v.ж.RegisterMethod } + +// Tags is the definitive owner for tagged nodes. +// When non-empty, the node is "tagged" and tags define its identity. +// Empty for user-owned nodes. +// Tags cannot be removed once set (one-way transition). +func (v NodeView) Tags() views.Slice[string] { return views.SliceOf(v.ж.Tags) } + +// When a node has been created with a PreAuthKey, we need to +// prevent the preauthkey from being deleted before the node. +// The preauthkey can define "tags" of the node so we need it +// around. func (v NodeView) AuthKeyID() views.ValuePointer[uint64] { return views.ValuePointerOf(v.ж.AuthKeyID) } func (v NodeView) AuthKey() PreAuthKeyView { return v.ж.AuthKey.View() } func (v NodeView) Expiry() views.ValuePointer[time.Time] { return views.ValuePointerOf(v.ж.Expiry) } +// LastSeen is when the node was last in contact with +// headscale. 
It is best effort and not persisted. func (v NodeView) LastSeen() views.ValuePointer[time.Time] { return views.ValuePointerOf(v.ж.LastSeen) } +// ApprovedRoutes is a list of routes that the node is allowed to announce +// as a subnet router. They are not necessarily the routes that the node +// announces at the moment. +// See [Node.Hostinfo] func (v NodeView) ApprovedRoutes() views.Slice[netip.Prefix] { return views.SliceOf(v.ж.ApprovedRoutes) } @@ -179,10 +274,10 @@ var _NodeViewNeedsRegeneration = Node(struct { IPv6 *netip.Addr Hostname string GivenName string - UserID uint - User User + UserID *uint + User *User RegisterMethod string - ForcedTags []string + Tags []string AuthKeyID *uint64 AuthKey *PreAuthKey Expiry *time.Time @@ -222,8 +317,17 @@ func (v PreAuthKeyView) AsStruct() *PreAuthKey { return v.ж.Clone() } -func (v PreAuthKeyView) MarshalJSON() ([]byte, error) { return json.Marshal(v.ж) } +// MarshalJSON implements [jsonv1.Marshaler]. +func (v PreAuthKeyView) MarshalJSON() ([]byte, error) { + return jsonv1.Marshal(v.ж) +} +// MarshalJSONTo implements [jsonv2.MarshalerTo]. +func (v PreAuthKeyView) MarshalJSONTo(enc *jsontext.Encoder) error { + return jsonv2.MarshalEncode(enc, v.ж) +} + +// UnmarshalJSON implements [jsonv1.Unmarshaler]. func (v *PreAuthKeyView) UnmarshalJSON(b []byte) error { if v.ж != nil { return errors.New("already initialized") @@ -232,20 +336,50 @@ func (v *PreAuthKeyView) UnmarshalJSON(b []byte) error { return nil } var x PreAuthKey - if err := json.Unmarshal(b, &x); err != nil { + if err := jsonv1.Unmarshal(b, &x); err != nil { return err } v.ж = &x return nil } -func (v PreAuthKeyView) ID() uint64 { return v.ж.ID } -func (v PreAuthKeyView) Key() string { return v.ж.Key } -func (v PreAuthKeyView) UserID() uint { return v.ж.UserID } -func (v PreAuthKeyView) User() User { return v.ж.User } -func (v PreAuthKeyView) Reusable() bool { return v.ж.Reusable } -func (v PreAuthKeyView) Ephemeral() bool { return v.ж.Ephemeral } -func (v PreAuthKeyView) Used() bool { return v.ж.Used } +// UnmarshalJSONFrom implements [jsonv2.UnmarshalerFrom]. +func (v *PreAuthKeyView) UnmarshalJSONFrom(dec *jsontext.Decoder) error { + if v.ж != nil { + return errors.New("already initialized") + } + var x PreAuthKey + if err := jsonv2.UnmarshalDecode(dec, &x); err != nil { + return err + } + v.ж = &x + return nil +} + +func (v PreAuthKeyView) ID() uint64 { return v.ж.ID } + +// Legacy plaintext key (for backwards compatibility) +func (v PreAuthKeyView) Key() string { return v.ж.Key } + +// New bcrypt-based authentication +func (v PreAuthKeyView) Prefix() string { return v.ж.Prefix } + +// bcrypt +func (v PreAuthKeyView) Hash() views.ByteSlice[[]byte] { return views.ByteSliceOf(v.ж.Hash) } + +// For tagged keys: UserID tracks who created the key (informational) +// For user-owned keys: UserID tracks the node owner +// Can be nil for system-created tagged keys +func (v PreAuthKeyView) UserID() views.ValuePointer[uint] { return views.ValuePointerOf(v.ж.UserID) } + +func (v PreAuthKeyView) User() UserView { return v.ж.User.View() } +func (v PreAuthKeyView) Reusable() bool { return v.ж.Reusable } +func (v PreAuthKeyView) Ephemeral() bool { return v.ж.Ephemeral } +func (v PreAuthKeyView) Used() bool { return v.ж.Used } + +// Tags to assign to nodes registered with this key. +// Tags are copied to the node during registration. +// If non-empty, this creates tagged nodes (not user-owned). 
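// For example (illustrative only): a PreAuthKey created with
// Tags = []string{"tag:router"} produces nodes whose Tags field is
// ["tag:router"]; those nodes are owned by the tag rather than by the
// user who created the key, matching the tags-as-identity model used
// elsewhere in this change.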
func (v PreAuthKeyView) Tags() views.Slice[string] { return views.SliceOf(v.ж.Tags) } func (v PreAuthKeyView) CreatedAt() views.ValuePointer[time.Time] { return views.ValuePointerOf(v.ж.CreatedAt) @@ -259,8 +393,10 @@ func (v PreAuthKeyView) Expiration() views.ValuePointer[time.Time] { var _PreAuthKeyViewNeedsRegeneration = PreAuthKey(struct { ID uint64 Key string - UserID uint - User User + Prefix string + Hash []byte + UserID *uint + User *User Reusable bool Ephemeral bool Used bool diff --git a/hscontrol/types/users.go b/hscontrol/types/users.go index b7cb1038..ec40492b 100644 --- a/hscontrol/types/users.go +++ b/hscontrol/types/users.go @@ -22,6 +22,21 @@ type UserID uint64 type Users []User +const ( + // TaggedDevicesUserID is the special user ID for tagged devices. + // This ID is used when rendering tagged nodes in the Tailscale protocol. + TaggedDevicesUserID = 2147455555 +) + +// TaggedDevices is a special user used in MapResponse for tagged nodes. +// Tagged nodes don't belong to a real user - the tag is their identity. +// This special user ID is used when rendering tagged nodes in the Tailscale protocol. +var TaggedDevices = User{ + Model: gorm.Model{ID: TaggedDevicesUserID}, + Name: "tagged-devices", + DisplayName: "Tagged Devices", +} + func (u Users) String() string { var sb strings.Builder sb.WriteString("[ ") @@ -77,6 +92,13 @@ func (u *User) StringID() string { return strconv.FormatUint(uint64(u.ID), 10) } +// TypedID returns a pointer to the user's ID as a UserID type. +// This is a convenience method to avoid ugly casting like ptr.To(types.UserID(user.ID)). +func (u *User) TypedID() *UserID { + uid := UserID(u.ID) + return &uid +} + // Username is the main way to get the username of a user, // it will return the email if it exists, the name if it exists, // the OIDCIdentifier if it exists, and the ID if nothing else exists. @@ -117,6 +139,13 @@ func (u UserView) TailscaleUser() tailcfg.User { return u.ж.TailscaleUser() } +// ID returns the user's ID. +// This is a custom accessor because gorm.Model.ID is embedded +// and the viewer generator doesn't always produce it. +func (u UserView) ID() uint { + return u.ж.ID +} + func (u *User) TailscaleLogin() tailcfg.Login { return tailcfg.Login{ ID: tailcfg.LoginID(u.ID), @@ -145,9 +174,17 @@ func (u UserView) TailscaleUserProfile() tailcfg.UserProfile { } func (u *User) Proto() *v1.User { + // Use Name if set, otherwise fall back to Username() which provides + // a display-friendly identifier (Email > ProviderIdentifier > ID). + // This ensures OIDC users (who typically have empty Name) display + // their email, while CLI users retain their original Name. + name := u.Name + if name == "" { + name = u.Username() + } return &v1.User{ Id: uint64(u.ID), - Name: u.Name, + Name: name, CreatedAt: timestamppb.New(u.CreatedAt), DisplayName: u.DisplayName, Email: u.Email, @@ -324,7 +361,7 @@ type OIDCUserInfo struct { // FromClaim overrides a User from OIDC claims. // All fields will be updated, except for the ID. 
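// With the additional emailVerifiedRequired parameter introduced below
// (a sketch of the intended behaviour, inferred from the change and its tests):
// when emailVerifiedRequired is true, the email claim is only copied if
// email_verified is set; when it is false, the email is accepted even for
// unverified claims, as exercised by the "allow-unverified-email" test case.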
-func (u *User) FromClaim(claims *OIDCClaims) { +func (u *User) FromClaim(claims *OIDCClaims, emailVerifiedRequired bool) { err := util.ValidateUsername(claims.Username) if err == nil { u.Name = claims.Username @@ -332,7 +369,7 @@ func (u *User) FromClaim(claims *OIDCClaims) { log.Debug().Caller().Err(err).Msgf("Username %s is not valid", claims.Username) } - if claims.EmailVerified { + if claims.EmailVerified || !FlexibleBoolean(emailVerifiedRequired) { _, err = mail.ParseAddress(claims.Email) if err == nil { u.Email = claims.Email diff --git a/hscontrol/types/users_test.go b/hscontrol/types/users_test.go index f36489a3..15386553 100644 --- a/hscontrol/types/users_test.go +++ b/hscontrol/types/users_test.go @@ -291,12 +291,14 @@ func TestCleanIdentifier(t *testing.T) { func TestOIDCClaimsJSONToUser(t *testing.T) { tests := []struct { - name string - jsonstr string - want User + name string + jsonstr string + emailVerifiedRequired bool + want User }{ { - name: "normal-bool", + name: "normal-bool", + emailVerifiedRequired: true, jsonstr: ` { "sub": "test", @@ -314,7 +316,8 @@ func TestOIDCClaimsJSONToUser(t *testing.T) { }, }, { - name: "string-bool-true", + name: "string-bool-true", + emailVerifiedRequired: true, jsonstr: ` { "sub": "test2", @@ -332,7 +335,8 @@ func TestOIDCClaimsJSONToUser(t *testing.T) { }, }, { - name: "string-bool-false", + name: "string-bool-false", + emailVerifiedRequired: true, jsonstr: ` { "sub": "test3", @@ -348,9 +352,29 @@ func TestOIDCClaimsJSONToUser(t *testing.T) { }, }, }, + { + name: "allow-unverified-email", + emailVerifiedRequired: false, + jsonstr: ` +{ + "sub": "test4", + "email": "test4@test.no", + "email_verified": "false" +} + `, + want: User{ + Provider: util.RegisterMethodOIDC, + Email: "test4@test.no", + ProviderIdentifier: sql.NullString{ + String: "/test4", + Valid: true, + }, + }, + }, { // From https://github.com/juanfont/headscale/issues/2333 - name: "okta-oidc-claim-20250121", + name: "okta-oidc-claim-20250121", + emailVerifiedRequired: true, jsonstr: ` { "sub": "00u7dr4qp7XXXXXXXXXX", @@ -375,6 +399,7 @@ func TestOIDCClaimsJSONToUser(t *testing.T) { want: User{ Provider: util.RegisterMethodOIDC, DisplayName: "Tim Horton", + Email: "", Name: "tim.horton@company.com", ProviderIdentifier: sql.NullString{ String: "https://sso.company.com/oauth2/default/00u7dr4qp7XXXXXXXXXX", @@ -384,7 +409,8 @@ func TestOIDCClaimsJSONToUser(t *testing.T) { }, { // From https://github.com/juanfont/headscale/issues/2333 - name: "okta-oidc-claim-20250121", + name: "okta-oidc-claim-20250121", + emailVerifiedRequired: true, jsonstr: ` { "aud": "79xxxxxx-xxxx-xxxx-xxxx-892146xxxxxx", @@ -409,6 +435,7 @@ func TestOIDCClaimsJSONToUser(t *testing.T) { Provider: util.RegisterMethodOIDC, DisplayName: "XXXXXX XXXX", Name: "user@domain.com", + Email: "", ProviderIdentifier: sql.NullString{ String: "https://login.microsoftonline.com/v2.0/I-70OQnj3TogrNSfkZQqB3f7dGwyBWSm1dolHNKrMzQ", Valid: true, @@ -417,7 +444,8 @@ func TestOIDCClaimsJSONToUser(t *testing.T) { }, { // From https://github.com/juanfont/headscale/issues/2333 - name: "casby-oidc-claim-20250513", + name: "casby-oidc-claim-20250513", + emailVerifiedRequired: true, jsonstr: ` { "sub": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", @@ -458,7 +486,7 @@ func TestOIDCClaimsJSONToUser(t *testing.T) { var user User - user.FromClaim(&got) + user.FromClaim(&got, tt.emailVerifiedRequired) if diff := cmp.Diff(user, tt.want); diff != "" { t.Errorf("TestOIDCClaimsJSONToUser() mismatch (-want +got):\n%s", diff) } diff --git 
a/hscontrol/util/dns.go b/hscontrol/util/dns.go index 898f965d..dcd58528 100644 --- a/hscontrol/util/dns.go +++ b/hscontrol/util/dns.go @@ -22,10 +22,7 @@ const ( LabelHostnameLength = 63 ) -var ( - invalidDNSRegex = regexp.MustCompile("[^a-z0-9-.]+") - invalidCharsInUserRegex = regexp.MustCompile("[^a-z0-9-.]+") -) +var invalidDNSRegex = regexp.MustCompile("[^a-z0-9-.]+") var ErrInvalidHostName = errors.New("invalid hostname") @@ -46,6 +43,7 @@ func ValidateUsername(username string) error { } atCount := 0 + for _, char := range username { switch { case unicode.IsLetter(char), @@ -90,18 +88,21 @@ func ValidateHostname(name string) error { strings.ToLower(name), ) } + if strings.HasPrefix(name, "-") || strings.HasSuffix(name, "-") { return fmt.Errorf( "hostname %q cannot start or end with a hyphen", name, ) } + if strings.HasPrefix(name, ".") || strings.HasSuffix(name, ".") { return fmt.Errorf( "hostname %q cannot start or end with a dot", name, ) } + if invalidDNSRegex.MatchString(name) { return fmt.Errorf( "hostname %q contains invalid characters, only lowercase letters, numbers, hyphens and dots are allowed", @@ -123,7 +124,8 @@ func ValidateHostname(name string) error { // After transformation, validates the result. func NormaliseHostname(name string) (string, error) { // Early return if already valid - if err := ValidateHostname(name); err == nil { + err := ValidateHostname(name) + if err == nil { return name, nil } @@ -139,7 +141,8 @@ func NormaliseHostname(name string) (string, error) { } // Validate result after transformation - if err := ValidateHostname(name); err != nil { + err = ValidateHostname(name) + if err != nil { return "", fmt.Errorf( "hostname invalid after normalisation: %w", err, diff --git a/hscontrol/util/log.go b/hscontrol/util/log.go index 936b374c..f28cd4a3 100644 --- a/hscontrol/util/log.go +++ b/hscontrol/util/log.go @@ -49,22 +49,22 @@ func (l *DBLogWrapper) LogMode(gormLogger.LogLevel) gormLogger.Interface { return l } -func (l *DBLogWrapper) Info(ctx context.Context, msg string, data ...interface{}) { +func (l *DBLogWrapper) Info(ctx context.Context, msg string, data ...any) { l.Logger.Info().Msgf(msg, data...) } -func (l *DBLogWrapper) Warn(ctx context.Context, msg string, data ...interface{}) { +func (l *DBLogWrapper) Warn(ctx context.Context, msg string, data ...any) { l.Logger.Warn().Msgf(msg, data...) } -func (l *DBLogWrapper) Error(ctx context.Context, msg string, data ...interface{}) { +func (l *DBLogWrapper) Error(ctx context.Context, msg string, data ...any) { l.Logger.Error().Msgf(msg, data...) 
} func (l *DBLogWrapper) Trace(ctx context.Context, begin time.Time, fc func() (sql string, rowsAffected int64), err error) { elapsed := time.Since(begin) sql, rowsAffected := fc() - fields := map[string]interface{}{ + fields := map[string]any{ "duration": elapsed, "sql": sql, "rowsAffected": rowsAffected, @@ -83,7 +83,7 @@ func (l *DBLogWrapper) Trace(ctx context.Context, begin time.Time, fc func() (sq l.Logger.Debug().Fields(fields).Msgf("") } -func (l *DBLogWrapper) ParamsFilter(ctx context.Context, sql string, params ...interface{}) (string, []interface{}) { +func (l *DBLogWrapper) ParamsFilter(ctx context.Context, sql string, params ...any) (string, []any) { if l.ParameterizedQueries { return sql, nil } diff --git a/hscontrol/util/util.go b/hscontrol/util/util.go index a9dc748e..4d828d02 100644 --- a/hscontrol/util/util.go +++ b/hscontrol/util/util.go @@ -294,3 +294,20 @@ func EnsureHostname(hostinfo *tailcfg.Hostinfo, machineKey, nodeKey string) stri return InvalidString() } + +// GenerateRegistrationKey generates a vanity key for tracking web authentication +// registration flows in logs. This key is NOT stored in the database and does NOT use bcrypt - +// it's purely for observability and correlating log entries during the registration process. +func GenerateRegistrationKey() (string, error) { + const ( + registerKeyPrefix = "hskey-reg-" //nolint:gosec // This is a vanity key for logging, not a credential + registerKeyLength = 64 + ) + + randomPart, err := GenerateRandomStringURLSafe(registerKeyLength) + if err != nil { + return "", fmt.Errorf("generating registration key: %w", err) + } + + return registerKeyPrefix + randomPart, nil +} diff --git a/hscontrol/util/util_test.go b/hscontrol/util/util_test.go index 22418e34..33f27b7a 100644 --- a/hscontrol/util/util_test.go +++ b/hscontrol/util/util_test.go @@ -1288,3 +1288,99 @@ func TestEnsureHostname_Idempotent(t *testing.T) { t.Errorf("hostnames not equal: %v != %v", hostname1, hostname2) } } + +func TestGenerateRegistrationKey(t *testing.T) { + t.Parallel() + + tests := []struct { + name string + test func(*testing.T) + }{ + { + name: "generates_key_with_correct_prefix", + test: func(t *testing.T) { + t.Helper() + + key, err := GenerateRegistrationKey() + if err != nil { + t.Errorf("GenerateRegistrationKey() error = %v", err) + } + + if !strings.HasPrefix(key, "hskey-reg-") { + t.Errorf("key does not have expected prefix: %s", key) + } + }, + }, + { + name: "generates_key_with_correct_length", + test: func(t *testing.T) { + t.Helper() + + key, err := GenerateRegistrationKey() + if err != nil { + t.Errorf("GenerateRegistrationKey() error = %v", err) + } + + // Expected format: hskey-reg-{64-char-random} + // Total length: 10 (prefix) + 64 (random) = 74 + if len(key) != 74 { + t.Errorf("key length = %d, want 74", len(key)) + } + }, + }, + { + name: "generates_unique_keys", + test: func(t *testing.T) { + t.Helper() + + key1, err := GenerateRegistrationKey() + if err != nil { + t.Errorf("GenerateRegistrationKey() error = %v", err) + } + + key2, err := GenerateRegistrationKey() + if err != nil { + t.Errorf("GenerateRegistrationKey() error = %v", err) + } + + if key1 == key2 { + t.Error("generated keys should be unique") + } + }, + }, + { + name: "key_contains_only_valid_chars", + test: func(t *testing.T) { + t.Helper() + + key, err := GenerateRegistrationKey() + if err != nil { + t.Errorf("GenerateRegistrationKey() error = %v", err) + } + + // Remove prefix + _, randomPart, found := strings.Cut(key, "hskey-reg-") + if !found { + 
t.Error("key does not contain expected prefix") + } + + // Verify base64 URL-safe characters (A-Za-z0-9_-) + for _, ch := range randomPart { + if (ch < 'A' || ch > 'Z') && + (ch < 'a' || ch > 'z') && + (ch < '0' || ch > '9') && + ch != '_' && ch != '-' { + t.Errorf("key contains invalid character: %c", ch) + } + } + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + t.Parallel() + tt.test(t) + }) + } +} diff --git a/integration/acl_test.go b/integration/acl_test.go index 50924891..c746f900 100644 --- a/integration/acl_test.go +++ b/integration/acl_test.go @@ -77,11 +77,8 @@ func aclScenario( // tailscaled to stop configuring the wgengine, causing it // to not configure DNS. tsic.WithNetfilter("off"), - tsic.WithDockerEntrypoint([]string{ - "/bin/sh", - "-c", - "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", - }), + tsic.WithPackages("curl"), + tsic.WithWebserver(80), tsic.WithDockerWorkdir("/"), }, hsic.WithACLPolicy(policy), @@ -311,6 +308,7 @@ func TestACLHostsInNetMapTable(t *testing.T) { []tsic.Option{}, hsic.WithACLPolicy(&testCase.policy), ) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -759,6 +757,7 @@ func TestACLNamedHostsCanReach(t *testing.T) { test1fqdn, err := test1.FQDN() require.NoError(t, err) + test1ip4URL := fmt.Sprintf("http://%s/etc/hostname", test1ip4.String()) test1ip6URL := fmt.Sprintf("http://[%s]/etc/hostname", test1ip6.String()) test1fqdnURL := fmt.Sprintf("http://%s/etc/hostname", test1fqdn) @@ -770,6 +769,7 @@ func TestACLNamedHostsCanReach(t *testing.T) { test2fqdn, err := test2.FQDN() require.NoError(t, err) + test2ip4URL := fmt.Sprintf("http://%s/etc/hostname", test2ip4.String()) test2ip6URL := fmt.Sprintf("http://[%s]/etc/hostname", test2ip6.String()) test2fqdnURL := fmt.Sprintf("http://%s/etc/hostname", test2fqdn) @@ -781,6 +781,7 @@ func TestACLNamedHostsCanReach(t *testing.T) { test3fqdn, err := test3.FQDN() require.NoError(t, err) + test3ip4URL := fmt.Sprintf("http://%s/etc/hostname", test3ip4.String()) test3ip6URL := fmt.Sprintf("http://[%s]/etc/hostname", test3ip6.String()) test3fqdnURL := fmt.Sprintf("http://%s/etc/hostname", test3fqdn) @@ -1055,6 +1056,7 @@ func TestACLDevice1CanAccessDevice2(t *testing.T) { test1fqdn, err := test1.FQDN() require.NoError(t, err) + test1ipURL := fmt.Sprintf("http://%s/etc/hostname", test1ip.String()) test1ip6URL := fmt.Sprintf("http://[%s]/etc/hostname", test1ip6.String()) test1fqdnURL := fmt.Sprintf("http://%s/etc/hostname", test1fqdn) @@ -1067,6 +1069,7 @@ func TestACLDevice1CanAccessDevice2(t *testing.T) { test2fqdn, err := test2.FQDN() require.NoError(t, err) + test2ipURL := fmt.Sprintf("http://%s/etc/hostname", test2ip.String()) test2ip6URL := fmt.Sprintf("http://[%s]/etc/hostname", test2ip6.String()) test2fqdnURL := fmt.Sprintf("http://%s/etc/hostname", test2fqdn) @@ -1142,6 +1145,7 @@ func TestPolicyUpdateWhileRunningWithCLIInDatabase(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -1151,11 +1155,8 @@ func TestPolicyUpdateWhileRunningWithCLIInDatabase(t *testing.T) { // tailscaled to stop configuring the wgengine, causing it // to not configure DNS. 
tsic.WithNetfilter("off"), - tsic.WithDockerEntrypoint([]string{ - "/bin/sh", - "-c", - "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", - }), + tsic.WithPackages("curl"), + tsic.WithWebserver(80), tsic.WithDockerWorkdir("/"), }, hsic.WithTestName("policyreload"), @@ -1221,6 +1222,7 @@ func TestPolicyUpdateWhileRunningWithCLIInDatabase(t *testing.T) { // Get the current policy and check // if it is the same as the one we set. var output *policyv2.Policy + err = executeAndUnmarshal( headscale, []string{ @@ -1302,9 +1304,11 @@ func TestACLAutogroupMember(t *testing.T) { // Test that untagged nodes can access each other for _, client := range allClients { var clientIsUntagged bool + assert.EventuallyWithT(t, func(c *assert.CollectT) { status, err := client.Status() assert.NoError(c, err) + clientIsUntagged = status.Self.Tags == nil || status.Self.Tags.Len() == 0 assert.True(c, clientIsUntagged, "Expected client %s to be untagged for autogroup:member test", client.Hostname()) }, 10*time.Second, 200*time.Millisecond, "Waiting for client %s to be untagged", client.Hostname()) @@ -1319,9 +1323,11 @@ func TestACLAutogroupMember(t *testing.T) { } var peerIsUntagged bool + assert.EventuallyWithT(t, func(c *assert.CollectT) { status, err := peer.Status() assert.NoError(c, err) + peerIsUntagged = status.Self.Tags == nil || status.Self.Tags.Len() == 0 assert.True(c, peerIsUntagged, "Expected peer %s to be untagged for autogroup:member test", peer.Hostname()) }, 10*time.Second, 200*time.Millisecond, "Waiting for peer %s to be untagged", peer.Hostname()) @@ -1355,6 +1361,7 @@ func TestACLAutogroupTagged(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -1383,33 +1390,42 @@ func TestACLAutogroupTagged(t *testing.T) { require.NoError(t, err) // Create users and nodes manually with specific tags + // Tags are now set via PreAuthKey (tags-as-identity model), not via --advertise-tags for _, userStr := range spec.Users { user, err := scenario.CreateUser(userStr) require.NoError(t, err) - // Create a single pre-auth key per user - authKey, err := scenario.CreatePreAuthKey(user.GetId(), true, false) + // Create two pre-auth keys per user: one tagged, one untagged + taggedAuthKey, err := scenario.CreatePreAuthKeyWithTags(user.GetId(), true, false, []string{"tag:test"}) + require.NoError(t, err) + + untaggedAuthKey, err := scenario.CreatePreAuthKey(user.GetId(), true, false) require.NoError(t, err) // Create nodes with proper naming for i := range spec.NodesPerUser { - var tags []string - var version string + var ( + authKey string + version string + ) if i == 0 { - // First node is tagged - tags = []string{"tag:test"} + // First node is tagged - use tagged PreAuthKey + authKey = taggedAuthKey.GetKey() version = "head" + t.Logf("Creating tagged node for %s", userStr) } else { - // Second node is untagged - tags = nil + // Second node is untagged - use untagged PreAuthKey + authKey = untaggedAuthKey.GetKey() version = "unstable" + t.Logf("Creating untagged node for %s", userStr) } // Get the network for this scenario networks := scenario.Networks() + var network *dockertest.Network if len(networks) > 0 { network = networks[0] @@ -1421,19 +1437,11 @@ func TestACLAutogroupTagged(t *testing.T) { tsic.WithHeadscaleName(headscale.GetHostname()), tsic.WithNetwork(network), tsic.WithNetfilter("off"), - tsic.WithDockerEntrypoint([]string{ - "/bin/sh", - "-c", - "/bin/sleep 3 
; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", - }), + tsic.WithPackages("curl"), + tsic.WithWebserver(80), tsic.WithDockerWorkdir("/"), } - // Add tags if this is a tagged node - if len(tags) > 0 { - opts = append(opts, tsic.WithTags(tags)) - } - tsClient, err := tsic.New( scenario.Pool(), version, @@ -1444,8 +1452,8 @@ func TestACLAutogroupTagged(t *testing.T) { err = tsClient.WaitForNeedsLogin(integrationutil.PeerSyncTimeout()) require.NoError(t, err) - // Login with the auth key - err = tsClient.Login(headscale.GetEndpoint(), authKey.GetKey()) + // Login with the appropriate auth key (tags come from the PreAuthKey) + err = tsClient.Login(headscale.GetEndpoint(), authKey) require.NoError(t, err) err = tsClient.WaitForRunning(integrationutil.PeerSyncTimeout()) @@ -1464,10 +1472,13 @@ func TestACLAutogroupTagged(t *testing.T) { // Wait for nodes to see only their allowed peers // Tagged nodes should see each other (2 tagged nodes total) // Untagged nodes should see no one - var taggedClients []TailscaleClient - var untaggedClients []TailscaleClient + var ( + taggedClients []TailscaleClient + untaggedClients []TailscaleClient + ) // First, categorize nodes by checking their tags + for _, client := range allClients { hostname := client.Hostname() @@ -1481,12 +1492,14 @@ func TestACLAutogroupTagged(t *testing.T) { // Add to tagged list only once we've verified it found := false + for _, tc := range taggedClients { if tc.Hostname() == hostname { found = true break } } + if !found { taggedClients = append(taggedClients, client) } @@ -1496,12 +1509,14 @@ func TestACLAutogroupTagged(t *testing.T) { // Add to untagged list only once we've verified it found := false + for _, uc := range untaggedClients { if uc.Hostname() == hostname { found = true break } } + if !found { untaggedClients = append(untaggedClients, client) } @@ -1528,6 +1543,7 @@ func TestACLAutogroupTagged(t *testing.T) { assert.EventuallyWithT(t, func(c *assert.CollectT) { status, err := client.Status() assert.NoError(c, err) + if status.Self.Tags != nil { assert.Equal(c, 0, status.Self.Tags.Len(), "untagged node %s should have no tags", client.Hostname()) } @@ -1545,6 +1561,7 @@ func TestACLAutogroupTagged(t *testing.T) { require.NoError(t, err) url := fmt.Sprintf("http://%s/etc/hostname", fqdn) + t.Logf("Testing connection from tagged node %s to tagged node %s", client.Hostname(), peer.Hostname()) assert.EventuallyWithT(t, func(ct *assert.CollectT) { @@ -1563,6 +1580,7 @@ func TestACLAutogroupTagged(t *testing.T) { require.NoError(t, err) url := fmt.Sprintf("http://%s/etc/hostname", fqdn) + t.Logf("Testing connection from untagged node %s to tagged node %s (should fail)", client.Hostname(), peer.Hostname()) assert.EventuallyWithT(t, func(ct *assert.CollectT) { @@ -1582,6 +1600,7 @@ func TestACLAutogroupTagged(t *testing.T) { require.NoError(t, err) url := fmt.Sprintf("http://%s/etc/hostname", fqdn) + t.Logf("Testing connection from untagged node %s to untagged node %s (should fail)", client.Hostname(), peer.Hostname()) assert.EventuallyWithT(t, func(ct *assert.CollectT) { @@ -1599,6 +1618,7 @@ func TestACLAutogroupTagged(t *testing.T) { require.NoError(t, err) url := fmt.Sprintf("http://%s/etc/hostname", fqdn) + t.Logf("Testing connection from tagged node %s to untagged node %s (should fail)", client.Hostname(), peer.Hostname()) assert.EventuallyWithT(t, func(ct *assert.CollectT) { @@ -1614,7 +1634,7 @@ func TestACLAutogroupTagged(t *testing.T) { // Test structure: 
// - user1: 2 regular nodes (tests autogroup:self for same-user access) // - user2: 2 regular nodes (tests autogroup:self for same-user access and cross-user isolation) -// - user-router: 1 node with tag:router-node (tests that autogroup:self doesn't interfere with other rules) +// - user-router: 1 node with tag:router-node (tests that autogroup:self doesn't interfere with other rules). func TestACLAutogroupSelf(t *testing.T) { IntegrationSkip(t) @@ -1666,17 +1686,15 @@ func TestACLAutogroupSelf(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) err = scenario.CreateHeadscaleEnv( []tsic.Option{ tsic.WithNetfilter("off"), - tsic.WithDockerEntrypoint([]string{ - "/bin/sh", - "-c", - "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", - }), + tsic.WithPackages("curl"), + tsic.WithWebserver(80), tsic.WithDockerWorkdir("/"), }, hsic.WithACLPolicy(policy), @@ -1688,6 +1706,7 @@ func TestACLAutogroupSelf(t *testing.T) { // Add router node for user-router (single shared router node) networks := scenario.Networks() + var network *dockertest.Network if len(networks) > 0 { network = networks[0] @@ -1699,23 +1718,20 @@ func TestACLAutogroupSelf(t *testing.T) { routerUser, err := scenario.CreateUser("user-router") require.NoError(t, err) - authKey, err := scenario.CreatePreAuthKey(routerUser.GetId(), true, false) + // Create a tagged PreAuthKey for the router node (tags-as-identity model) + authKey, err := scenario.CreatePreAuthKeyWithTags(routerUser.GetId(), true, false, []string{"tag:router-node"}) require.NoError(t, err) - // Create router node (tagged with tag:router-node) + // Create router node (tags come from the PreAuthKey) routerClient, err := tsic.New( scenario.Pool(), "unstable", tsic.WithCACert(headscale.GetCert()), tsic.WithHeadscaleName(headscale.GetHostname()), tsic.WithNetwork(network), - tsic.WithTags([]string{"tag:router-node"}), tsic.WithNetfilter("off"), - tsic.WithDockerEntrypoint([]string{ - "/bin/sh", - "-c", - "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", - }), + tsic.WithPackages("curl"), + tsic.WithWebserver(80), tsic.WithDockerWorkdir("/"), ) require.NoError(t, err) @@ -1738,16 +1754,20 @@ func TestACLAutogroupSelf(t *testing.T) { require.NoError(t, err) var user1Regular, user2Regular []TailscaleClient + for _, client := range user1Clients { status, err := client.Status() require.NoError(t, err) + if status.Self != nil && (status.Self.Tags == nil || status.Self.Tags.Len() == 0) { user1Regular = append(user1Regular, client) } } + for _, client := range user2Clients { status, err := client.Status() require.NoError(t, err) + if status.Self != nil && (status.Self.Tags == nil || status.Self.Tags.Len() == 0) { user2Regular = append(user2Regular, client) } @@ -1765,10 +1785,12 @@ func TestACLAutogroupSelf(t *testing.T) { err := client.WaitForPeers(2, integrationutil.PeerSyncTimeout(), integrationutil.PeerSyncRetryInterval()) require.NoError(t, err, "user1 regular device %s should see 2 peers (1 same-user peer + 1 router)", client.Hostname()) } + for _, client := range user2Regular { err := client.WaitForPeers(2, integrationutil.PeerSyncTimeout(), integrationutil.PeerSyncRetryInterval()) require.NoError(t, err, "user2 regular device %s should see 2 peers (1 same-user peer + 1 router)", client.Hostname()) } + err = routerClient.WaitForPeers(4, 
integrationutil.PeerSyncTimeout(), integrationutil.PeerSyncRetryInterval()) require.NoError(t, err, "router should see 4 peers (all group:home regular nodes)") @@ -1818,6 +1840,7 @@ func TestACLAutogroupSelf(t *testing.T) { for _, client := range user1Regular { fqdn, err := routerClient.FQDN() require.NoError(t, err) + url := fmt.Sprintf("http://%s/etc/hostname", fqdn) t.Logf("url from %s (user1) to %s (router-node) - should SUCCEED", client.Hostname(), fqdn) @@ -1832,6 +1855,7 @@ func TestACLAutogroupSelf(t *testing.T) { for _, client := range user2Regular { fqdn, err := routerClient.FQDN() require.NoError(t, err) + url := fmt.Sprintf("http://%s/etc/hostname", fqdn) t.Logf("url from %s (user2) to %s (router-node) - should SUCCEED", client.Hostname(), fqdn) @@ -1881,6 +1905,7 @@ func TestACLPolicyPropagationOverTime(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -1888,11 +1913,8 @@ func TestACLPolicyPropagationOverTime(t *testing.T) { []tsic.Option{ // Install iptables to enable packet filtering for ACL tests. // Packet filters are essential for testing autogroup:self and other ACL policies. - tsic.WithDockerEntrypoint([]string{ - "/bin/sh", - "-c", - "/bin/sleep 3 ; apk add python3 curl iptables ip6tables ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", - }), + tsic.WithPackages("curl", "iptables", "ip6tables"), + tsic.WithWebserver(80), tsic.WithDockerWorkdir("/"), }, hsic.WithTestName("aclpropagation"), @@ -1961,11 +1983,13 @@ func TestACLPolicyPropagationOverTime(t *testing.T) { // Phase 1: Allow all policy t.Logf("Iteration %d: Setting allow-all policy", iteration) + err = headscale.SetPolicy(allowAllPolicy) require.NoError(t, err) // Wait for peer lists to sync with allow-all policy t.Logf("Iteration %d: Phase 1 - Waiting for peer lists to sync with allow-all policy", iteration) + err = scenario.WaitForTailscaleSync() require.NoError(t, err, "iteration %d: Phase 1 - failed to sync after allow-all policy", iteration) @@ -1993,11 +2017,13 @@ func TestACLPolicyPropagationOverTime(t *testing.T) { // Phase 2: Autogroup:self policy (only same user can access) t.Logf("Iteration %d: Phase 2 - Setting autogroup:self policy", iteration) + err = headscale.SetPolicy(autogroupSelfPolicy) require.NoError(t, err) // Wait for peer lists to sync with autogroup:self - ensures cross-user peers are removed t.Logf("Iteration %d: Phase 2 - Waiting for peer lists to sync with autogroup:self", iteration) + err = scenario.WaitForTailscaleSyncPerUser(60*time.Second, 500*time.Millisecond) require.NoError(t, err, "iteration %d: Phase 2 - failed to sync after autogroup:self policy", iteration) @@ -2083,11 +2109,8 @@ func TestACLPolicyPropagationOverTime(t *testing.T) { newClient := scenario.MustAddAndLoginClient(t, "user1", "all", headscale, tsic.WithNetfilter("off"), - tsic.WithDockerEntrypoint([]string{ - "/bin/sh", - "-c", - "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", - }), + tsic.WithPackages("curl"), + tsic.WithWebserver(80), tsic.WithDockerWorkdir("/"), tsic.WithNetwork(networks[0]), ) @@ -2095,6 +2118,7 @@ func TestACLPolicyPropagationOverTime(t *testing.T) { // Wait for peer lists to sync after new node addition (now 3 user1 nodes, still autogroup:self) t.Logf("Iteration %d: Phase 2b - Waiting for peer lists to sync after new node addition", iteration) + err = 
scenario.WaitForTailscaleSyncPerUser(60*time.Second, 500*time.Millisecond) require.NoError(t, err, "iteration %d: Phase 2b - failed to sync after new node addition", iteration) @@ -2145,8 +2169,11 @@ func TestACLPolicyPropagationOverTime(t *testing.T) { t.Logf("Iteration %d: Phase 2b - Deleting the newly added node from user1", iteration) // Get the node list and find the newest node (highest ID) - var nodeList []*v1.Node - var nodeToDeleteID uint64 + var ( + nodeList []*v1.Node + nodeToDeleteID uint64 + ) + assert.EventuallyWithT(t, func(ct *assert.CollectT) { nodeList, err = headscale.ListNodes("user1") assert.NoError(ct, err) @@ -2168,15 +2195,19 @@ func TestACLPolicyPropagationOverTime(t *testing.T) { // Remove the deleted client from the scenario's user.Clients map // This is necessary for WaitForTailscaleSyncPerUser to calculate correct peer counts t.Logf("Iteration %d: Phase 2b - Removing deleted client from scenario", iteration) + for clientName, client := range scenario.users["user1"].Clients { status := client.MustStatus() + nodeID, err := strconv.ParseUint(string(status.Self.ID), 10, 64) if err != nil { continue } + if nodeID == nodeToDeleteID { delete(scenario.users["user1"].Clients, clientName) t.Logf("Iteration %d: Phase 2b - Removed client %s (node ID %d) from scenario", iteration, clientName, nodeToDeleteID) + break } } @@ -2193,6 +2224,7 @@ func TestACLPolicyPropagationOverTime(t *testing.T) { // Use WaitForTailscaleSyncPerUser because autogroup:self is still active, // so nodes only see same-user peers, not all nodes t.Logf("Iteration %d: Phase 2b - Waiting for sync after node deletion (with autogroup:self)", iteration) + err = scenario.WaitForTailscaleSyncPerUser(60*time.Second, 500*time.Millisecond) require.NoError(t, err, "iteration %d: failed to sync after node deletion", iteration) @@ -2210,6 +2242,7 @@ func TestACLPolicyPropagationOverTime(t *testing.T) { // Phase 3: User1 can access user2 but not reverse t.Logf("Iteration %d: Phase 3 - Setting user1->user2 directional policy", iteration) + err = headscale.SetPolicy(user1ToUser2Policy) require.NoError(t, err) @@ -2256,3 +2289,1543 @@ func TestACLPolicyPropagationOverTime(t *testing.T) { t.Log("All 5 iterations completed successfully - ACL propagation is working correctly") } + +// TestACLTagPropagation validates that tag changes propagate immediately +// to ACLs without requiring a Headscale restart. +// This is the primary test for GitHub issue #2389. +func TestACLTagPropagation(t *testing.T) { + IntegrationSkip(t) + + tests := []struct { + name string + policy *policyv2.Policy + spec ScenarioSpec + // setup returns clients and any initial state needed + setup func(t *testing.T, scenario *Scenario, headscale ControlServer) ( + sourceClient TailscaleClient, + targetClient TailscaleClient, + targetNodeID uint64, + ) + // initialAccess: should source be able to reach target before tag change? + initialAccess bool + // tagChange: what tags to set on target node (nil = test uses custom logic) + tagChange []string + // finalAccess: should source be able to reach target after tag change? 
+ finalAccess bool + }{ + { + name: "add-tag-grants-access", + policy: &policyv2.Policy{ + TagOwners: policyv2.TagOwners{ + "tag:shared": policyv2.Owners{usernameOwner("user1@")}, + }, + ACLs: []policyv2.ACL{ + // user1 self-access + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user1@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user1@"), tailcfg.PortRangeAny), + }, + }, + // user2 self-access + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user2@"), tailcfg.PortRangeAny), + }, + }, + // user2 can access tag:shared + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(tagp("tag:shared"), tailcfg.PortRangeAny), + }, + }, + // tag:shared can respond to user2 (return path) + { + Action: "accept", + Sources: []policyv2.Alias{tagp("tag:shared")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user2@"), tailcfg.PortRangeAny), + }, + }, + }, + }, + spec: ScenarioSpec{ + NodesPerUser: 1, + Users: []string{"user1", "user2"}, + }, + setup: func(t *testing.T, scenario *Scenario, headscale ControlServer) (TailscaleClient, TailscaleClient, uint64) { + t.Helper() + + user1Clients, err := scenario.ListTailscaleClients("user1") + require.NoError(t, err) + user2Clients, err := scenario.ListTailscaleClients("user2") + require.NoError(t, err) + + nodes, err := headscale.ListNodes("user1") + require.NoError(t, err) + + return user2Clients[0], user1Clients[0], nodes[0].GetId() + }, + initialAccess: false, // user2 cannot access user1 (no tag) + tagChange: []string{"tag:shared"}, // add tag:shared + finalAccess: true, // user2 can now access user1 + }, + { + name: "remove-tag-revokes-access", + policy: &policyv2.Policy{ + TagOwners: policyv2.TagOwners{ + "tag:shared": policyv2.Owners{usernameOwner("user1@")}, + "tag:other": policyv2.Owners{usernameOwner("user1@")}, + }, + ACLs: []policyv2.ACL{ + // user2 self-access + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user2@"), tailcfg.PortRangeAny), + }, + }, + // user2 can access tag:shared only + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(tagp("tag:shared"), tailcfg.PortRangeAny), + }, + }, + { + Action: "accept", + Sources: []policyv2.Alias{tagp("tag:shared")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user2@"), tailcfg.PortRangeAny), + }, + }, + }, + }, + spec: ScenarioSpec{ + NodesPerUser: 0, // manual creation for tagged node + Users: []string{"user1", "user2"}, + }, + setup: func(t *testing.T, scenario *Scenario, headscale ControlServer) (TailscaleClient, TailscaleClient, uint64) { + t.Helper() + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + // Create user1's node WITH tag:shared via PreAuthKey + taggedKey, err := scenario.CreatePreAuthKeyWithTags( + userMap["user1"].GetId(), false, false, []string{"tag:shared"}, + ) + require.NoError(t, err) + + user1Node, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithDockerEntrypoint([]string{ + "/bin/sh", "-c", + "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", + }), + 
tsic.WithDockerWorkdir("/"), + tsic.WithNetfilter("off"), + ) + require.NoError(t, err) + err = user1Node.Login(headscale.GetEndpoint(), taggedKey.GetKey()) + require.NoError(t, err) + + // Create user2's node (untagged) + untaggedKey, err := scenario.CreatePreAuthKey(userMap["user2"].GetId(), false, false) + require.NoError(t, err) + + user2Node, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithDockerEntrypoint([]string{ + "/bin/sh", "-c", + "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", + }), + tsic.WithDockerWorkdir("/"), + tsic.WithNetfilter("off"), + ) + require.NoError(t, err) + err = user2Node.Login(headscale.GetEndpoint(), untaggedKey.GetKey()) + require.NoError(t, err) + + err = scenario.WaitForTailscaleSync() + require.NoError(t, err) + + nodes, err := headscale.ListNodes("user1") + require.NoError(t, err) + + return user2Node, user1Node, nodes[0].GetId() + }, + initialAccess: true, // user2 can access user1 (has tag:shared) + tagChange: []string{"tag:other"}, // replace with tag:other + finalAccess: false, // user2 cannot access (no ACL for tag:other) + }, + { + name: "change-tag-changes-access", + policy: &policyv2.Policy{ + TagOwners: policyv2.TagOwners{ + "tag:team-a": policyv2.Owners{usernameOwner("user1@")}, + "tag:team-b": policyv2.Owners{usernameOwner("user1@")}, + }, + ACLs: []policyv2.ACL{ + // user2 self-access + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user2@"), tailcfg.PortRangeAny), + }, + }, + // user2 can access tag:team-b only (NOT tag:team-a) + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(tagp("tag:team-b"), tailcfg.PortRangeAny), + }, + }, + { + Action: "accept", + Sources: []policyv2.Alias{tagp("tag:team-b")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user2@"), tailcfg.PortRangeAny), + }, + }, + }, + }, + spec: ScenarioSpec{ + NodesPerUser: 0, + Users: []string{"user1", "user2"}, + }, + setup: func(t *testing.T, scenario *Scenario, headscale ControlServer) (TailscaleClient, TailscaleClient, uint64) { + t.Helper() + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + // Create user1's node with tag:team-a (user2 has NO ACL for this) + taggedKey, err := scenario.CreatePreAuthKeyWithTags( + userMap["user1"].GetId(), false, false, []string{"tag:team-a"}, + ) + require.NoError(t, err) + + user1Node, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithDockerEntrypoint([]string{ + "/bin/sh", "-c", + "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", + }), + tsic.WithDockerWorkdir("/"), + tsic.WithNetfilter("off"), + ) + require.NoError(t, err) + err = user1Node.Login(headscale.GetEndpoint(), taggedKey.GetKey()) + require.NoError(t, err) + + // Create user2's node + untaggedKey, err := scenario.CreatePreAuthKey(userMap["user2"].GetId(), false, false) + require.NoError(t, err) + + user2Node, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithDockerEntrypoint([]string{ + "/bin/sh", "-c", + "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m 
http.server --bind :: 80 & tailscaled --tun=tsdev", + }), + tsic.WithDockerWorkdir("/"), + tsic.WithNetfilter("off"), + ) + require.NoError(t, err) + err = user2Node.Login(headscale.GetEndpoint(), untaggedKey.GetKey()) + require.NoError(t, err) + + err = scenario.WaitForTailscaleSync() + require.NoError(t, err) + + nodes, err := headscale.ListNodes("user1") + require.NoError(t, err) + + return user2Node, user1Node, nodes[0].GetId() + }, + initialAccess: false, // user2 cannot access (tag:team-a not in ACL) + tagChange: []string{"tag:team-b"}, // change to tag:team-b + finalAccess: true, // user2 can now access (tag:team-b in ACL) + }, + { + name: "multiple-tags-partial-removal", + policy: &policyv2.Policy{ + TagOwners: policyv2.TagOwners{ + "tag:web": policyv2.Owners{usernameOwner("user1@")}, + "tag:internal": policyv2.Owners{usernameOwner("user1@")}, + }, + ACLs: []policyv2.ACL{ + // user2 self-access + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user2@"), tailcfg.PortRangeAny), + }, + }, + // user2 can access tag:web + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(tagp("tag:web"), tailcfg.PortRangeAny), + }, + }, + { + Action: "accept", + Sources: []policyv2.Alias{tagp("tag:web")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user2@"), tailcfg.PortRangeAny), + }, + }, + }, + }, + spec: ScenarioSpec{ + NodesPerUser: 0, + Users: []string{"user1", "user2"}, + }, + setup: func(t *testing.T, scenario *Scenario, headscale ControlServer) (TailscaleClient, TailscaleClient, uint64) { + t.Helper() + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + // Create user1's node with BOTH tags + taggedKey, err := scenario.CreatePreAuthKeyWithTags( + userMap["user1"].GetId(), false, false, []string{"tag:web", "tag:internal"}, + ) + require.NoError(t, err) + + user1Node, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithDockerEntrypoint([]string{ + "/bin/sh", "-c", + "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", + }), + tsic.WithDockerWorkdir("/"), + tsic.WithNetfilter("off"), + ) + require.NoError(t, err) + err = user1Node.Login(headscale.GetEndpoint(), taggedKey.GetKey()) + require.NoError(t, err) + + // Create user2's node + untaggedKey, err := scenario.CreatePreAuthKey(userMap["user2"].GetId(), false, false) + require.NoError(t, err) + + user2Node, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithDockerEntrypoint([]string{ + "/bin/sh", "-c", + "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", + }), + tsic.WithDockerWorkdir("/"), + tsic.WithNetfilter("off"), + ) + require.NoError(t, err) + err = user2Node.Login(headscale.GetEndpoint(), untaggedKey.GetKey()) + require.NoError(t, err) + + err = scenario.WaitForTailscaleSync() + require.NoError(t, err) + + nodes, err := headscale.ListNodes("user1") + require.NoError(t, err) + + return user2Node, user1Node, nodes[0].GetId() + }, + initialAccess: true, // user2 can access (has tag:web) + tagChange: []string{"tag:internal"}, // remove tag:web, keep tag:internal + finalAccess: false, // user2 cannot access (no ACL for 
tag:internal) + }, + { + name: "tag-change-updates-peer-identity", + policy: &policyv2.Policy{ + TagOwners: policyv2.TagOwners{ + "tag:server": policyv2.Owners{usernameOwner("user1@")}, + }, + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user2@"), tailcfg.PortRangeAny), + }, + }, + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(tagp("tag:server"), tailcfg.PortRangeAny), + }, + }, + { + Action: "accept", + Sources: []policyv2.Alias{tagp("tag:server")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user2@"), tailcfg.PortRangeAny), + }, + }, + }, + }, + spec: ScenarioSpec{ + NodesPerUser: 1, + Users: []string{"user1", "user2"}, + }, + setup: func(t *testing.T, scenario *Scenario, headscale ControlServer) (TailscaleClient, TailscaleClient, uint64) { + t.Helper() + + user1Clients, err := scenario.ListTailscaleClients("user1") + require.NoError(t, err) + user2Clients, err := scenario.ListTailscaleClients("user2") + require.NoError(t, err) + + nodes, err := headscale.ListNodes("user1") + require.NoError(t, err) + + return user2Clients[0], user1Clients[0], nodes[0].GetId() + }, + initialAccess: false, // user2 cannot access user1 (no tag yet) + tagChange: []string{"tag:server"}, // assign tag:server + finalAccess: true, // user2 can now access via tag:server + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + scenario, err := NewScenario(tt.spec) + require.NoError(t, err) + + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{ + tsic.WithNetfilter("off"), + tsic.WithDockerEntrypoint([]string{ + "/bin/sh", "-c", + "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", + }), + tsic.WithDockerWorkdir("/"), + }, + hsic.WithACLPolicy(tt.policy), + hsic.WithTestName("acl-tag-"+tt.name), + hsic.WithEmbeddedDERPServerOnly(), + hsic.WithTLS(), + ) + require.NoError(t, err) + + headscale, err := scenario.Headscale() + require.NoError(t, err) + + // Run test-specific setup + sourceClient, targetClient, targetNodeID := tt.setup(t, scenario, headscale) + + targetFQDN, err := targetClient.FQDN() + require.NoError(t, err) + + targetURL := fmt.Sprintf("http://%s/etc/hostname", targetFQDN) + + // Step 1: Verify initial access state + t.Logf("Step 1: Verifying initial access (expect success=%v)", tt.initialAccess) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + result, err := sourceClient.Curl(targetURL) + if tt.initialAccess { + assert.NoError(c, err, "Initial access should succeed") + assert.NotEmpty(c, result, "Initial access should return content") + } else { + assert.Error(c, err, "Initial access should fail") + } + }, 30*time.Second, 500*time.Millisecond, "verifying initial access state") + + // Step 1b: Verify initial NetMap visibility + t.Logf("Step 1b: Verifying initial NetMap visibility (expect visible=%v)", tt.initialAccess) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + status, err := sourceClient.Status() + assert.NoError(c, err) + + targetHostname := targetClient.Hostname() + found := false + + for _, peer := range status.Peer { + if strings.Contains(peer.HostName, targetHostname) { + found = true + break + } + } + + if tt.initialAccess { + assert.True(c, found, "Target should be visible in NetMap initially") + } 
else { + assert.False(c, found, "Target should NOT be visible in NetMap initially") + } + }, 30*time.Second, 500*time.Millisecond, "verifying initial NetMap visibility") + + // Step 2: Apply tag change + t.Logf("Step 2: Setting tags on node %d to %v", targetNodeID, tt.tagChange) + err = headscale.SetNodeTags(targetNodeID, tt.tagChange) + require.NoError(t, err) + + // Verify tag was applied + assert.EventuallyWithT(t, func(c *assert.CollectT) { + // List nodes by iterating through all users since tagged nodes may "move" + var node *v1.Node + + for _, user := range tt.spec.Users { + nodes, err := headscale.ListNodes(user) + if err != nil { + continue + } + + for _, n := range nodes { + if n.GetId() == targetNodeID { + node = n + break + } + } + } + // Also check nodes without user filter + if node == nil { + // Try listing all nodes + allNodes, _ := headscale.ListNodes("") + for _, n := range allNodes { + if n.GetId() == targetNodeID { + node = n + break + } + } + } + + assert.NotNil(c, node, "Node should still exist") + + if node != nil { + assert.ElementsMatch(c, tt.tagChange, node.GetTags(), "Tags should be updated") + } + }, 10*time.Second, 500*time.Millisecond, "verifying tag change applied") + + // Step 3: Verify final access state (this is the key test for #2389) + t.Logf("Step 3: Verifying final access after tag change (expect success=%v)", tt.finalAccess) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + result, err := sourceClient.Curl(targetURL) + if tt.finalAccess { + assert.NoError(c, err, "Final access should succeed after tag change") + assert.NotEmpty(c, result, "Final access should return content") + } else { + assert.Error(c, err, "Final access should fail after tag change") + } + }, 30*time.Second, 500*time.Millisecond, "verifying access propagated after tag change") + + // Step 3b: Verify final NetMap visibility + t.Logf("Step 3b: Verifying final NetMap visibility (expect visible=%v)", tt.finalAccess) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + status, err := sourceClient.Status() + assert.NoError(c, err) + + targetHostname := targetClient.Hostname() + found := false + + for _, peer := range status.Peer { + if strings.Contains(peer.HostName, targetHostname) { + found = true + break + } + } + + if tt.finalAccess { + assert.True(c, found, "Target should be visible in NetMap after tag change") + } else { + assert.False(c, found, "Target should NOT be visible in NetMap after tag change") + } + }, 60*time.Second, 500*time.Millisecond, "verifying NetMap visibility propagated after tag change") + + t.Logf("Test %s PASSED: Tag change propagated correctly", tt.name) + }) + } +} + +// TestACLTagPropagationPortSpecific validates that tag changes correctly update +// port-specific ACLs. When a tag change restricts access to specific ports, +// the peer should remain visible but only the allowed ports should be accessible. 
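// In outline (illustrative, mirroring the policy defined below): user2 may
// reach tag:webserver only on port 80 and tag:sshonly only on port 22, so
// retagging the target from tag:webserver to tag:sshonly is expected to keep
// the peer in user2's netmap (port 22 and ICMP remain allowed) while HTTP
// requests to port 80 start failing.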
+func TestACLTagPropagationPortSpecific(t *testing.T) { + IntegrationSkip(t) + + // Policy: tag:webserver allows port 80, tag:sshonly allows port 22 + // When we change from tag:webserver to tag:sshonly, HTTP should fail but ping should still work + policy := &policyv2.Policy{ + TagOwners: policyv2.TagOwners{ + "tag:webserver": policyv2.Owners{usernameOwner("user1@")}, + "tag:sshonly": policyv2.Owners{usernameOwner("user1@")}, + }, + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user2@"), tailcfg.PortRangeAny), + }, + }, + // user2 can access tag:webserver on port 80 only + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(tagp("tag:webserver"), tailcfg.PortRange{First: 80, Last: 80}), + }, + }, + // user2 can access tag:sshonly on port 22 only + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(tagp("tag:sshonly"), tailcfg.PortRange{First: 22, Last: 22}), + }, + }, + // Allow ICMP for ping tests + { + Action: "accept", + Sources: []policyv2.Alias{usernamep("user2@")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(tagp("tag:webserver"), tailcfg.PortRangeAny), + aliasWithPorts(tagp("tag:sshonly"), tailcfg.PortRangeAny), + }, + Protocol: "icmp", + }, + // Return path + { + Action: "accept", + Sources: []policyv2.Alias{tagp("tag:webserver"), tagp("tag:sshonly")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(usernamep("user2@"), tailcfg.PortRangeAny), + }, + }, + }, + } + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{"user1", "user2"}, + } + + scenario, err := NewScenario(spec) + require.NoError(t, err) + + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{ + tsic.WithNetfilter("off"), + tsic.WithDockerEntrypoint([]string{ + "/bin/sh", "-c", + "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", + }), + tsic.WithDockerWorkdir("/"), + }, + hsic.WithACLPolicy(policy), + hsic.WithTestName("acl-tag-port-specific"), + hsic.WithEmbeddedDERPServerOnly(), + hsic.WithTLS(), + ) + require.NoError(t, err) + + headscale, err := scenario.Headscale() + require.NoError(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + // Create user1's node WITH tag:webserver + taggedKey, err := scenario.CreatePreAuthKeyWithTags( + userMap["user1"].GetId(), false, false, []string{"tag:webserver"}, + ) + require.NoError(t, err) + + user1Node, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithDockerEntrypoint([]string{ + "/bin/sh", "-c", + "/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; python3 -m http.server --bind :: 80 & tailscaled --tun=tsdev", + }), + tsic.WithDockerWorkdir("/"), + tsic.WithNetfilter("off"), + ) + require.NoError(t, err) + + err = user1Node.Login(headscale.GetEndpoint(), taggedKey.GetKey()) + require.NoError(t, err) + + // Create user2's node + untaggedKey, err := scenario.CreatePreAuthKey(userMap["user2"].GetId(), false, false) + require.NoError(t, err) + + user2Node, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithDockerEntrypoint([]string{ + "/bin/sh", "-c", + 
"/bin/sleep 3 ; apk add python3 curl ; update-ca-certificates ; tailscaled --tun=tsdev", + }), + tsic.WithDockerWorkdir("/"), + tsic.WithNetfilter("off"), + ) + require.NoError(t, err) + + err = user2Node.Login(headscale.GetEndpoint(), untaggedKey.GetKey()) + require.NoError(t, err) + + err = scenario.WaitForTailscaleSync() + require.NoError(t, err) + + nodes, err := headscale.ListNodes("user1") + require.NoError(t, err) + + targetNodeID := nodes[0].GetId() + + targetFQDN, err := user1Node.FQDN() + require.NoError(t, err) + + targetURL := fmt.Sprintf("http://%s/etc/hostname", targetFQDN) + + // Step 1: Verify initial state - HTTP on port 80 should work with tag:webserver + t.Log("Step 1: Verifying HTTP access with tag:webserver (should succeed)") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + result, err := user2Node.Curl(targetURL) + assert.NoError(c, err, "HTTP should work with tag:webserver") + assert.NotEmpty(c, result) + }, 30*time.Second, 500*time.Millisecond, "initial HTTP access with tag:webserver") + + // Step 2: Change tag from webserver to sshonly + t.Logf("Step 2: Changing tag from webserver to sshonly on node %d", targetNodeID) + err = headscale.SetNodeTags(targetNodeID, []string{"tag:sshonly"}) + require.NoError(t, err) + + // Step 3: Verify peer is still visible in NetMap (partial access, not full removal) + t.Log("Step 3: Verifying peer remains visible in NetMap after tag change") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + status, err := user2Node.Status() + assert.NoError(c, err) + + targetHostname := user1Node.Hostname() + found := false + + for _, peer := range status.Peer { + if strings.Contains(peer.HostName, targetHostname) { + found = true + break + } + } + + assert.True(c, found, "Peer should still be visible with tag:sshonly (port 22 access)") + }, 60*time.Second, 500*time.Millisecond, "peer visibility after tag change") + + // Step 4: Verify HTTP on port 80 now fails (tag:sshonly only allows port 22) + t.Log("Step 4: Verifying HTTP access is now blocked (tag:sshonly only allows port 22)") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + _, err := user2Node.Curl(targetURL) + assert.Error(c, err, "HTTP should fail with tag:sshonly (only port 22 allowed)") + }, 60*time.Second, 500*time.Millisecond, "HTTP blocked after tag change to sshonly") + + t.Log("Test PASSED: Port-specific ACL changes propagated correctly") +} + +// TestACLGroupWithUnknownUser tests issue #2967 where a group containing +// a reference to a non-existent user should not break connectivity for +// valid users in the same group. The expected behavior is that unknown +// users are silently ignored during group resolution. +func TestACLGroupWithUnknownUser(t *testing.T) { + IntegrationSkip(t) + + // This test verifies that when a group contains a reference to a + // non-existent user (e.g., "nonexistent@"), the valid users in + // the group should still be able to connect to each other. + // + // Issue: https://github.com/juanfont/headscale/issues/2967 + + spec := ScenarioSpec{ + NodesPerUser: 1, + Users: []string{"user1", "user2"}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + // Create a policy with a group that includes a non-existent user + // alongside valid users. The group should still work for valid users. 
+ policy := &policyv2.Policy{ + Groups: policyv2.Groups{ + // This group contains a reference to "nonexistent@" which does not exist + policyv2.Group("group:test"): []policyv2.Username{ + policyv2.Username("user1@"), + policyv2.Username("user2@"), + policyv2.Username("nonexistent@"), // This user does not exist + }, + }, + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{groupp("group:test")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(groupp("group:test"), tailcfg.PortRangeAny), + }, + }, + }, + } + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{ + tsic.WithNetfilter("off"), + tsic.WithPackages("curl"), + tsic.WithWebserver(80), + tsic.WithDockerWorkdir("/"), + }, + hsic.WithACLPolicy(policy), + hsic.WithTestName("acl-unknown-user"), + hsic.WithEmbeddedDERPServerOnly(), + hsic.WithTLS(), + ) + require.NoError(t, err) + + _, err = scenario.ListTailscaleClientsFQDNs() + require.NoError(t, err) + + err = scenario.WaitForTailscaleSync() + require.NoError(t, err) + + user1Clients, err := scenario.ListTailscaleClients("user1") + require.NoError(t, err) + require.Len(t, user1Clients, 1) + + user2Clients, err := scenario.ListTailscaleClients("user2") + require.NoError(t, err) + require.Len(t, user2Clients, 1) + + user1 := user1Clients[0] + user2 := user2Clients[0] + + // Get FQDNs for connectivity test + user1FQDN, err := user1.FQDN() + require.NoError(t, err) + user2FQDN, err := user2.FQDN() + require.NoError(t, err) + + // Test that user1 can reach user2 (valid users should be able to communicate) + // This is the key assertion for issue #2967: valid users should work + // even if the group contains references to non-existent users. + t.Log("Testing connectivity: user1 -> user2 (should succeed despite unknown user in group)") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user2FQDN) + result, err := user1.Curl(url) + assert.NoError(c, err, "user1 should be able to reach user2") + assert.Len(c, result, 13, "expected hostname response") + }, 30*time.Second, 500*time.Millisecond, "user1 should reach user2") + + // Test that user2 can reach user1 (bidirectional) + t.Log("Testing connectivity: user2 -> user1 (should succeed despite unknown user in group)") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user1FQDN) + result, err := user2.Curl(url) + assert.NoError(c, err, "user2 should be able to reach user1") + assert.Len(c, result, 13, "expected hostname response") + }, 30*time.Second, 500*time.Millisecond, "user2 should reach user1") + + t.Log("Test PASSED: Valid users can communicate despite unknown user reference in group") +} + +// TestACLGroupAfterUserDeletion tests issue #2967 scenario where a user +// is deleted but their reference remains in an ACL group. The remaining +// valid users should still be able to communicate. +func TestACLGroupAfterUserDeletion(t *testing.T) { + IntegrationSkip(t) + + // This test verifies that when a user is deleted from headscale but + // their reference remains in an ACL group, the remaining valid users + // in the group should still be able to connect to each other. 
+ // + // Issue: https://github.com/juanfont/headscale/issues/2967 + + spec := ScenarioSpec{ + NodesPerUser: 1, + Users: []string{"user1", "user2", "user3"}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + // Create a policy with a group containing all three users + policy := &policyv2.Policy{ + Groups: policyv2.Groups{ + policyv2.Group("group:all"): []policyv2.Username{ + policyv2.Username("user1@"), + policyv2.Username("user2@"), + policyv2.Username("user3@"), + }, + }, + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{groupp("group:all")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(groupp("group:all"), tailcfg.PortRangeAny), + }, + }, + }, + } + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{ + tsic.WithNetfilter("off"), + tsic.WithPackages("curl"), + tsic.WithWebserver(80), + tsic.WithDockerWorkdir("/"), + }, + hsic.WithACLPolicy(policy), + hsic.WithTestName("acl-deleted-user"), + hsic.WithEmbeddedDERPServerOnly(), + hsic.WithTLS(), + hsic.WithPolicyMode(types.PolicyModeDB), // Use DB mode so policy persists after user deletion + ) + require.NoError(t, err) + + _, err = scenario.ListTailscaleClientsFQDNs() + require.NoError(t, err) + + err = scenario.WaitForTailscaleSync() + require.NoError(t, err) + + headscale, err := scenario.Headscale() + require.NoError(t, err) + + user1Clients, err := scenario.ListTailscaleClients("user1") + require.NoError(t, err) + require.Len(t, user1Clients, 1) + + user2Clients, err := scenario.ListTailscaleClients("user2") + require.NoError(t, err) + require.Len(t, user2Clients, 1) + + user3Clients, err := scenario.ListTailscaleClients("user3") + require.NoError(t, err) + require.Len(t, user3Clients, 1) + + user1 := user1Clients[0] + user2 := user2Clients[0] + + // Get FQDNs for connectivity test + user1FQDN, err := user1.FQDN() + require.NoError(t, err) + user2FQDN, err := user2.FQDN() + require.NoError(t, err) + + // Step 1: Verify initial connectivity - all users can reach each other + t.Log("Step 1: Verifying initial connectivity between all users") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user2FQDN) + result, err := user1.Curl(url) + assert.NoError(c, err, "user1 should be able to reach user2 initially") + assert.Len(c, result, 13, "expected hostname response") + }, 30*time.Second, 500*time.Millisecond, "initial user1 -> user2 connectivity") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user1FQDN) + result, err := user2.Curl(url) + assert.NoError(c, err, "user2 should be able to reach user1 initially") + assert.Len(c, result, 13, "expected hostname response") + }, 30*time.Second, 500*time.Millisecond, "initial user2 -> user1 connectivity") + + // Step 2: Get user3's node and user, then delete them + t.Log("Step 2: Deleting user3's node and user from headscale") + + // First, get user3's node ID + nodes, err := headscale.ListNodes("user3") + require.NoError(t, err) + require.Len(t, nodes, 1, "user3 should have exactly one node") + user3NodeID := nodes[0].GetId() + + // Delete user3's node first (required before deleting the user) + err = headscale.DeleteNode(user3NodeID) + require.NoError(t, err, "failed to delete user3's node") + + // Now get user3's user ID and delete the user + user3, err := GetUserByName(headscale, "user3") + require.NoError(t, err, "user3 should exist") + + // Now delete user3 (after their nodes are 
deleted) + err = headscale.DeleteUser(user3.GetId()) + require.NoError(t, err) + + // Verify user3 is deleted + _, err = GetUserByName(headscale, "user3") + require.Error(t, err, "user3 should be deleted") + + // Step 3: Verify that user1 and user2 can still communicate (before triggering policy refresh) + // The policy still references "user3@" in the group, but since user3 is deleted, + // connectivity may still work due to cached/stale policy state. + t.Log("Step 3: Verifying connectivity still works immediately after user3 deletion (stale cache)") + + // Test that user1 can still reach user2 + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user2FQDN) + result, err := user1.Curl(url) + assert.NoError(c, err, "user1 should still be able to reach user2 after user3 deletion (stale cache)") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user1 -> user2 after user3 deletion") + + // Step 4: Create a NEW user - this triggers updatePolicyManagerUsers() which + // re-evaluates the policy. According to issue #2967, this is when the bug manifests: + // the deleted user3@ in the group causes the entire group to fail resolution. + t.Log("Step 4: Creating a new user (user4) to trigger policy re-evaluation") + + _, err = headscale.CreateUser("user4") + require.NoError(t, err, "failed to create user4") + + // Verify user4 was created + _, err = GetUserByName(headscale, "user4") + require.NoError(t, err, "user4 should exist after creation") + + // Step 5: THIS IS THE CRITICAL TEST - verify connectivity STILL works after + // creating a new user. Without the fix, the group containing the deleted user3@ + // would fail to resolve, breaking connectivity for user1 and user2. + t.Log("Step 5: Verifying connectivity AFTER creating new user (this triggers the bug)") + + // Test that user1 can still reach user2 AFTER the policy refresh triggered by user creation + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user2FQDN) + result, err := user1.Curl(url) + assert.NoError(c, err, "user1 should still reach user2 after policy refresh (BUG if this fails)") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user1 -> user2 after policy refresh (issue #2967)") + + // Test that user2 can still reach user1 + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user1FQDN) + result, err := user2.Curl(url) + assert.NoError(c, err, "user2 should still reach user1 after policy refresh (BUG if this fails)") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user2 -> user1 after policy refresh (issue #2967)") + + t.Log("Test PASSED: Remaining users can communicate after deleted user and policy refresh") +} + +// TestACLGroupDeletionExactReproduction reproduces issue #2967 exactly as reported: +// The reporter had ACTIVE pinging between nodes while making changes. +// The bug is that deleting a user and then creating a new user causes +// connectivity to break for remaining users in the group. +// +// Key difference from other tests: We keep multiple nodes ACTIVE and pinging +// each other throughout the test, just like the reporter's scenario. +// +// Reporter's steps (v0.28.0-beta.1): +// 1. Start pinging between nodes +// 2. Create policy with group:admin = [user1@] +// 3. 
Create users "deleteable" and "existinguser" +// 4. Add deleteable@ to ACL: Pinging continues +// 5. Delete deleteable: Pinging continues +// 6. Add existinguser@ to ACL: Pinging continues +// 7. Create new user "anotheruser": Pinging continues +// 8. Add anotherinvaliduser@ to ACL: Pinging stops. +func TestACLGroupDeletionExactReproduction(t *testing.T) { + IntegrationSkip(t) + + // Issue: https://github.com/juanfont/headscale/issues/2967 + + const userToDelete = "user2" + + // We need 3 users with active nodes to properly test this: + // - user1: will remain throughout (like "ritty" in the issue) + // - user2: will be deleted (like "deleteable" in the issue) + // - user3: will remain and should still be able to ping user1 after user2 deletion + spec := ScenarioSpec{ + NodesPerUser: 1, + Users: []string{"user1", userToDelete, "user3"}, + } + + scenario, err := NewScenario(spec) + require.NoError(t, err) + + defer scenario.ShutdownAssertNoPanics(t) + + // Initial policy: all three users in group, can communicate with each other + initialPolicy := &policyv2.Policy{ + Groups: policyv2.Groups{ + policyv2.Group("group:admin"): []policyv2.Username{ + policyv2.Username("user1@"), + policyv2.Username(userToDelete + "@"), + policyv2.Username("user3@"), + }, + }, + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{groupp("group:admin")}, + Destinations: []policyv2.AliasWithPorts{ + // Use *:* like the reporter's ACL + aliasWithPorts(wildcard(), tailcfg.PortRangeAny), + }, + }, + }, + } + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{ + tsic.WithNetfilter("off"), + tsic.WithPackages("curl"), + tsic.WithWebserver(80), + tsic.WithDockerWorkdir("/"), + }, + hsic.WithACLPolicy(initialPolicy), + hsic.WithTestName("acl-exact-repro"), + hsic.WithEmbeddedDERPServerOnly(), + hsic.WithTLS(), + hsic.WithPolicyMode(types.PolicyModeDB), + ) + require.NoError(t, err) + + _, err = scenario.ListTailscaleClientsFQDNs() + require.NoError(t, err) + + err = scenario.WaitForTailscaleSync() + require.NoError(t, err) + + headscale, err := scenario.Headscale() + require.NoError(t, err) + + // Get all clients + user1Clients, err := scenario.ListTailscaleClients("user1") + require.NoError(t, err) + require.Len(t, user1Clients, 1) + user1 := user1Clients[0] + + user3Clients, err := scenario.ListTailscaleClients("user3") + require.NoError(t, err) + require.Len(t, user3Clients, 1) + user3 := user3Clients[0] + + user1FQDN, err := user1.FQDN() + require.NoError(t, err) + user3FQDN, err := user3.FQDN() + require.NoError(t, err) + + // Step 1: Verify initial connectivity - user1 and user3 can ping each other + t.Log("Step 1: Verifying initial connectivity (user1 <-> user3)") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user3FQDN) + result, err := user1.Curl(url) + assert.NoError(c, err, "user1 should reach user3") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user1 -> user3") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user1FQDN) + result, err := user3.Curl(url) + assert.NoError(c, err, "user3 should reach user1") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user3 -> user1") + + t.Log("Step 1: PASSED - initial connectivity works") + + // Step 2: Delete user2's node and user (like reporter deleting "deleteable") + // The ACL still references user2@ but user2 no longer exists + 
t.Log("Step 2: Deleting user2 (node + user) from database - ACL still references user2@") + + nodes, err := headscale.ListNodes(userToDelete) + require.NoError(t, err) + require.Len(t, nodes, 1) + err = headscale.DeleteNode(nodes[0].GetId()) + require.NoError(t, err) + + userToDeleteObj, err := GetUserByName(headscale, userToDelete) + require.NoError(t, err, "user to delete should exist") + + err = headscale.DeleteUser(userToDeleteObj.GetId()) + require.NoError(t, err) + + t.Log("Step 2: DONE - user2 deleted, ACL still has user2@ reference") + + // Step 3: Verify connectivity still works after user2 deletion + // This tests the immediate effect of the fix - policy should be updated + t.Log("Step 3: Verifying connectivity STILL works after user2 deletion") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user3FQDN) + result, err := user1.Curl(url) + assert.NoError(c, err, "user1 should still reach user3 after user2 deletion") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user1 -> user3 after user2 deletion") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user1FQDN) + result, err := user3.Curl(url) + assert.NoError(c, err, "user3 should still reach user1 after user2 deletion") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user3 -> user1 after user2 deletion") + + t.Log("Step 3: PASSED - connectivity works after user2 deletion") + + // Step 4: Create a NEW user - this triggers updatePolicyManagerUsers() + // According to the reporter, this is when the bug manifests + t.Log("Step 4: Creating new user (user4) - this triggers policy re-evaluation") + + _, err = headscale.CreateUser("user4") + require.NoError(t, err) + + // Step 5: THE CRITICAL TEST - verify connectivity STILL works + // Without the fix: DeleteUser didn't update policy, so when CreateUser + // triggers updatePolicyManagerUsers(), the stale user2@ is now unknown, + // potentially breaking the group. 
+ t.Log("Step 5: Verifying connectivity AFTER creating new user (BUG trigger point)") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user3FQDN) + result, err := user1.Curl(url) + assert.NoError(c, err, "BUG #2967: user1 should still reach user3 after user4 creation") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user1 -> user3 after user4 creation (issue #2967)") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user1FQDN) + result, err := user3.Curl(url) + assert.NoError(c, err, "BUG #2967: user3 should still reach user1 after user4 creation") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user3 -> user1 after user4 creation (issue #2967)") + + // Additional verification: check filter rules are not empty + filter, err := headscale.DebugFilter() + require.NoError(t, err) + t.Logf("Filter rules: %d", len(filter)) + require.NotEmpty(t, filter, "Filter rules should not be empty") + + t.Log("Test PASSED: Connectivity maintained throughout user deletion and creation") + t.Log("Issue #2967 would cause 'pinging to stop' at Step 5") +} + +// TestACLDynamicUnknownUserAddition tests the v0.28.0-beta.1 scenario from issue #2967: +// "Pinging still stops when a non-registered user is added to a group" +// +// This test verifies that when a policy is DYNAMICALLY updated (via SetPolicy) +// to include a non-existent user in a group, connectivity for valid users +// is maintained. The v2 policy engine should gracefully handle unknown users. +// +// Steps: +// 1. Start with a valid policy (only existing users in group) +// 2. Verify connectivity works +// 3. Update policy to add unknown user to the group +// 4. Verify connectivity STILL works for valid users. 
+func TestACLDynamicUnknownUserAddition(t *testing.T) { + IntegrationSkip(t) + + // Issue: https://github.com/juanfont/headscale/issues/2967 + // Comment: "Pinging still stops when a non-registered user is added to a group" + + spec := ScenarioSpec{ + NodesPerUser: 1, + Users: []string{"user1", "user2"}, + } + + scenario, err := NewScenario(spec) + require.NoError(t, err) + + defer scenario.ShutdownAssertNoPanics(t) + + // Start with a VALID policy - only existing users in the group + validPolicy := &policyv2.Policy{ + Groups: policyv2.Groups{ + policyv2.Group("group:test"): []policyv2.Username{ + policyv2.Username("user1@"), + policyv2.Username("user2@"), + }, + }, + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{groupp("group:test")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(wildcard(), tailcfg.PortRangeAny), + }, + }, + }, + } + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{ + tsic.WithNetfilter("off"), + tsic.WithPackages("curl"), + tsic.WithWebserver(80), + tsic.WithDockerWorkdir("/"), + }, + hsic.WithACLPolicy(validPolicy), + hsic.WithTestName("acl-dynamic-unknown"), + hsic.WithEmbeddedDERPServerOnly(), + hsic.WithTLS(), + hsic.WithPolicyMode(types.PolicyModeDB), + ) + require.NoError(t, err) + + _, err = scenario.ListTailscaleClientsFQDNs() + require.NoError(t, err) + + err = scenario.WaitForTailscaleSync() + require.NoError(t, err) + + headscale, err := scenario.Headscale() + require.NoError(t, err) + + user1Clients, err := scenario.ListTailscaleClients("user1") + require.NoError(t, err) + require.Len(t, user1Clients, 1) + user1 := user1Clients[0] + + user2Clients, err := scenario.ListTailscaleClients("user2") + require.NoError(t, err) + require.Len(t, user2Clients, 1) + user2 := user2Clients[0] + + user1FQDN, err := user1.FQDN() + require.NoError(t, err) + user2FQDN, err := user2.FQDN() + require.NoError(t, err) + + // Step 1: Verify initial connectivity with VALID policy + t.Log("Step 1: Verifying initial connectivity with valid policy (no unknown users)") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user2FQDN) + result, err := user1.Curl(url) + assert.NoError(c, err, "user1 should reach user2") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "initial user1 -> user2") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user1FQDN) + result, err := user2.Curl(url) + assert.NoError(c, err, "user2 should reach user1") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "initial user2 -> user1") + + t.Log("Step 1: PASSED - connectivity works with valid policy") + + // Step 2: DYNAMICALLY update policy to add unknown user + // This mimics the v0.28.0-beta.1 scenario where a non-existent user is added + t.Log("Step 2: Updating policy to add unknown user (nonexistent@) to the group") + + policyWithUnknown := &policyv2.Policy{ + Groups: policyv2.Groups{ + policyv2.Group("group:test"): []policyv2.Username{ + policyv2.Username("user1@"), + policyv2.Username("user2@"), + policyv2.Username("nonexistent@"), // Added unknown user + }, + }, + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{groupp("group:test")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(wildcard(), tailcfg.PortRangeAny), + }, + }, + }, + } + + err = headscale.SetPolicy(policyWithUnknown) + require.NoError(t, err) + + // Wait 
for policy to propagate + err = scenario.WaitForTailscaleSync() + require.NoError(t, err) + + // Step 3: THE CRITICAL TEST - verify connectivity STILL works + // v0.28.0-beta.1 issue: "Pinging still stops when a non-registered user is added to a group" + // With v2 policy graceful error handling, this should pass + t.Log("Step 3: Verifying connectivity AFTER adding unknown user to policy") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user2FQDN) + result, err := user1.Curl(url) + assert.NoError(c, err, "user1 should STILL reach user2 after adding unknown user") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user1 -> user2 after unknown user added") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user1FQDN) + result, err := user2.Curl(url) + assert.NoError(c, err, "user2 should STILL reach user1 after adding unknown user") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user2 -> user1 after unknown user added") + + t.Log("Step 3: PASSED - connectivity maintained after adding unknown user") + t.Log("Test PASSED: v0.28.0-beta.1 scenario - unknown user added dynamically, valid users still work") +} + +// TestACLDynamicUnknownUserRemoval tests the scenario from issue #2967 comments: +// "Removing all invalid users from ACL restores connectivity" +// +// This test verifies that: +// 1. Start with a policy containing unknown user +// 2. Connectivity still works (v2 graceful handling) +// 3. Update policy to remove unknown user +// 4. Connectivity remains working +// +// This ensures the fix handles both: +// - Adding unknown users (tested above) +// - Removing unknown users from policy. 
+func TestACLDynamicUnknownUserRemoval(t *testing.T) { + IntegrationSkip(t) + + // Issue: https://github.com/juanfont/headscale/issues/2967 + // Comment: "Removing all invalid users from ACL restores connectivity" + + spec := ScenarioSpec{ + NodesPerUser: 1, + Users: []string{"user1", "user2"}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + // Start with a policy that INCLUDES an unknown user + policyWithUnknown := &policyv2.Policy{ + Groups: policyv2.Groups{ + policyv2.Group("group:test"): []policyv2.Username{ + policyv2.Username("user1@"), + policyv2.Username("user2@"), + policyv2.Username("invaliduser@"), // Unknown user from the start + }, + }, + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{groupp("group:test")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(wildcard(), tailcfg.PortRangeAny), + }, + }, + }, + } + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{ + tsic.WithNetfilter("off"), + tsic.WithPackages("curl"), + tsic.WithWebserver(80), + tsic.WithDockerWorkdir("/"), + }, + hsic.WithACLPolicy(policyWithUnknown), + hsic.WithTestName("acl-unknown-removal"), + hsic.WithEmbeddedDERPServerOnly(), + hsic.WithTLS(), + hsic.WithPolicyMode(types.PolicyModeDB), + ) + require.NoError(t, err) + + _, err = scenario.ListTailscaleClientsFQDNs() + require.NoError(t, err) + + err = scenario.WaitForTailscaleSync() + require.NoError(t, err) + + headscale, err := scenario.Headscale() + require.NoError(t, err) + + user1Clients, err := scenario.ListTailscaleClients("user1") + require.NoError(t, err) + require.Len(t, user1Clients, 1) + user1 := user1Clients[0] + + user2Clients, err := scenario.ListTailscaleClients("user2") + require.NoError(t, err) + require.Len(t, user2Clients, 1) + user2 := user2Clients[0] + + user1FQDN, err := user1.FQDN() + require.NoError(t, err) + user2FQDN, err := user2.FQDN() + require.NoError(t, err) + + // Step 1: Verify initial connectivity WITH unknown user in policy + // With v2 graceful handling, this should work + t.Log("Step 1: Verifying connectivity with unknown user in policy (v2 graceful handling)") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user2FQDN) + result, err := user1.Curl(url) + assert.NoError(c, err, "user1 should reach user2 even with unknown user in policy") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "initial user1 -> user2 with unknown") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user1FQDN) + result, err := user2.Curl(url) + assert.NoError(c, err, "user2 should reach user1 even with unknown user in policy") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "initial user2 -> user1 with unknown") + + t.Log("Step 1: PASSED - connectivity works even with unknown user (v2 graceful handling)") + + // Step 2: Update policy to REMOVE the unknown user + t.Log("Step 2: Updating policy to remove unknown user") + + cleanPolicy := &policyv2.Policy{ + Groups: policyv2.Groups{ + policyv2.Group("group:test"): []policyv2.Username{ + policyv2.Username("user1@"), + policyv2.Username("user2@"), + // invaliduser@ removed + }, + }, + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{groupp("group:test")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(wildcard(), tailcfg.PortRangeAny), + }, + }, + 
}, + } + + err = headscale.SetPolicy(cleanPolicy) + require.NoError(t, err) + + // Wait for policy to propagate + err = scenario.WaitForTailscaleSync() + require.NoError(t, err) + + // Step 3: Verify connectivity after removing unknown user + // Issue comment: "Removing all invalid users from ACL restores connectivity" + t.Log("Step 3: Verifying connectivity AFTER removing unknown user") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user2FQDN) + result, err := user1.Curl(url) + assert.NoError(c, err, "user1 should reach user2 after removing unknown user") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user1 -> user2 after unknown removed") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + url := fmt.Sprintf("http://%s/etc/hostname", user1FQDN) + result, err := user2.Curl(url) + assert.NoError(c, err, "user2 should reach user1 after removing unknown user") + assert.Len(c, result, 13, "expected hostname response") + }, 60*time.Second, 500*time.Millisecond, "user2 -> user1 after unknown removed") + + t.Log("Step 3: PASSED - connectivity maintained after removing unknown user") + t.Log("Test PASSED: Removing unknown users from policy works correctly") +} diff --git a/integration/api_auth_test.go b/integration/api_auth_test.go index 6c2d07e4..df5f2455 100644 --- a/integration/api_auth_test.go +++ b/integration/api_auth_test.go @@ -98,7 +98,7 @@ func TestAPIAuthenticationBypass(t *testing.T) { // Should NOT contain user data after "Unauthorized" // This is the security bypass - if users array is present, auth was bypassed - var jsonCheck map[string]interface{} + var jsonCheck map[string]any jsonErr := json.Unmarshal(body, &jsonCheck) // If we can unmarshal JSON and it contains "users", that's the bypass @@ -278,8 +278,8 @@ func TestAPIAuthenticationBypassCurl(t *testing.T) { var responseBody string for _, line := range lines { - if strings.HasPrefix(line, "HTTP_CODE:") { - httpCode = strings.TrimPrefix(line, "HTTP_CODE:") + if after, ok := strings.CutPrefix(line, "HTTP_CODE:"); ok { + httpCode = after } else { responseBody += line } @@ -324,8 +324,8 @@ func TestAPIAuthenticationBypassCurl(t *testing.T) { var responseBody string for _, line := range lines { - if strings.HasPrefix(line, "HTTP_CODE:") { - httpCode = strings.TrimPrefix(line, "HTTP_CODE:") + if after, ok := strings.CutPrefix(line, "HTTP_CODE:"); ok { + httpCode = after } else { responseBody += line } @@ -359,8 +359,8 @@ func TestAPIAuthenticationBypassCurl(t *testing.T) { var responseBody string for _, line := range lines { - if strings.HasPrefix(line, "HTTP_CODE:") { - httpCode = strings.TrimPrefix(line, "HTTP_CODE:") + if after, ok := strings.CutPrefix(line, "HTTP_CODE:"); ok { + httpCode = after } else { responseBody += line } @@ -459,9 +459,9 @@ func TestGRPCAuthenticationBypass(t *testing.T) { outputStr := strings.ToLower(output) assert.True(t, strings.Contains(outputStr, "unauthenticated") || - strings.Contains(outputStr, "invalid token") || - strings.Contains(outputStr, "failed to validate token") || - strings.Contains(outputStr, "authentication"), + strings.Contains(outputStr, "invalid token") || + strings.Contains(outputStr, "failed to validate token") || + strings.Contains(outputStr, "authentication"), "Error should indicate authentication failure, got: %s", output) // Should NOT leak user data @@ -609,9 +609,9 @@ cli: outputStr := strings.ToLower(output) assert.True(t, strings.Contains(outputStr, 
"unauthenticated") || - strings.Contains(outputStr, "invalid token") || - strings.Contains(outputStr, "failed to validate token") || - strings.Contains(outputStr, "authentication"), + strings.Contains(outputStr, "invalid token") || + strings.Contains(outputStr, "failed to validate token") || + strings.Contains(outputStr, "authentication"), "Error should indicate authentication failure, got: %s", output) // Should NOT leak user data diff --git a/integration/auth_key_test.go b/integration/auth_key_test.go index 75106dc5..ba6a195b 100644 --- a/integration/auth_key_test.go +++ b/integration/auth_key_test.go @@ -9,12 +9,15 @@ import ( "time" v1 "github.com/juanfont/headscale/gen/go/headscale/v1" + policyv2 "github.com/juanfont/headscale/hscontrol/policy/v2" "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/integration/hsic" "github.com/juanfont/headscale/integration/tsic" "github.com/samber/lo" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" + "tailscale.com/tailcfg" + "tailscale.com/types/ptr" ) func TestAuthKeyLogoutAndReloginSameUser(t *testing.T) { @@ -123,6 +126,7 @@ func TestAuthKeyLogoutAndReloginSameUser(t *testing.T) { // https://github.com/tailscale/tailscale/commit/1eaad7d3deb0815e8932e913ca1a862afa34db38 // https://github.com/juanfont/headscale/issues/2164 if !https { + //nolint:forbidigo // Intentional delay: Tailscale client requires 5 min wait before reconnecting over non-HTTPS time.Sleep(5 * time.Minute) } @@ -424,6 +428,7 @@ func TestAuthKeyLogoutAndReloginSameUserExpiredKey(t *testing.T) { // https://github.com/tailscale/tailscale/commit/1eaad7d3deb0815e8932e913ca1a862afa34db38 // https://github.com/juanfont/headscale/issues/2164 if !https { + //nolint:forbidigo // Intentional delay: Tailscale client requires 5 min wait before reconnecting over non-HTTPS time.Sleep(5 * time.Minute) } @@ -441,10 +446,9 @@ func TestAuthKeyLogoutAndReloginSameUserExpiredKey(t *testing.T) { []string{ "headscale", "preauthkeys", - "--user", - strconv.FormatUint(userMap[userName].GetId(), 10), "expire", - key.GetKey(), + "--id", + strconv.FormatUint(key.GetId(), 10), }) require.NoError(t, err) require.NoError(t, err) @@ -456,3 +460,280 @@ func TestAuthKeyLogoutAndReloginSameUserExpiredKey(t *testing.T) { } } +// TestAuthKeyDeleteKey tests Issue #2830: node with deleted auth key should still reconnect. +// Scenario from user report: "create node, delete the auth key, restart to validate it can connect" +// Steps: +// 1. Create node with auth key +// 2. DELETE the auth key from database (completely remove it) +// 3. Restart node - should successfully reconnect using MachineKey identity. 
+func TestAuthKeyDeleteKey(t *testing.T) { + IntegrationSkip(t) + + // Create scenario with NO nodes - we'll create the node manually so we can capture the auth key + scenario, err := NewScenario(ScenarioSpec{ + NodesPerUser: 0, // No nodes created automatically + Users: []string{"user1"}, + }) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv([]tsic.Option{}, hsic.WithTestName("delkey"), hsic.WithTLS(), hsic.WithDERPAsIP()) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Get the user + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap["user1"].GetId() + + // Create a pre-auth key - we keep the full key string before it gets redacted + authKey, err := scenario.CreatePreAuthKey(userID, false, false) + require.NoError(t, err) + + authKeyString := authKey.GetKey() + authKeyID := authKey.GetId() + t.Logf("Created pre-auth key ID %d: %s", authKeyID, authKeyString) + + // Create a tailscale client and log it in with the auth key + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + err = client.Login(headscale.GetEndpoint(), authKeyString) + require.NoError(t, err) + + // Wait for the node to be registered + var user1Nodes []*v1.Node + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + var err error + + user1Nodes, err = headscale.ListNodes("user1") + assert.NoError(c, err) + assert.Len(c, user1Nodes, 1) + }, 30*time.Second, 500*time.Millisecond, "waiting for node to be registered") + + nodeID := user1Nodes[0].GetId() + nodeName := user1Nodes[0].GetName() + t.Logf("Node %d (%s) created successfully with auth_key_id=%d", nodeID, nodeName, authKeyID) + + // Verify node is online + requireAllClientsOnline(t, headscale, []types.NodeID{types.NodeID(nodeID)}, true, "node should be online initially", 120*time.Second) + + // DELETE the pre-auth key using the API + t.Logf("Deleting pre-auth key ID %d using API", authKeyID) + + err = headscale.DeleteAuthKey(authKeyID) + require.NoError(t, err) + t.Logf("Successfully deleted auth key") + + // Simulate node restart (down + up) + t.Logf("Restarting node after deleting its auth key") + + err = client.Down() + require.NoError(t, err) + + // Wait for client to fully stop before bringing it back up + assert.EventuallyWithT(t, func(c *assert.CollectT) { + status, err := client.Status() + assert.NoError(c, err) + assert.Equal(c, "Stopped", status.BackendState) + }, 10*time.Second, 200*time.Millisecond, "client should be stopped") + + err = client.Up() + require.NoError(t, err) + + // Verify node comes back online + // This will FAIL without the fix because auth key validation will reject deleted key + // With the fix, MachineKey identity allows reconnection even with deleted key + requireAllClientsOnline(t, headscale, []types.NodeID{types.NodeID(nodeID)}, true, "node should reconnect after restart despite deleted key", 120*time.Second) + + t.Logf("✓ Node successfully reconnected after its auth key was deleted") +} + +// TestAuthKeyLogoutAndReloginRoutesPreserved tests that routes remain serving +// after a node logs out and re-authenticates with the same user. 
+// +// This test validates the fix for issue #2896: +// https://github.com/juanfont/headscale/issues/2896 +// +// Bug: When a node with already-approved routes restarts/re-authenticates, +// the routes show as "Approved" and "Available" but NOT "Serving" (Primary). +// A headscale restart would fix it, indicating a state management issue. +// +// The test scenario: +// 1. Node registers with auth key and advertises routes +// 2. Routes are auto-approved and verified as serving +// 3. Node logs out +// 4. Node re-authenticates with same auth key +// 5. Routes should STILL be serving (this is where the bug manifests). +func TestAuthKeyLogoutAndReloginRoutesPreserved(t *testing.T) { + IntegrationSkip(t) + + user := "routeuser" + advertiseRoute := "10.55.0.0/24" + + spec := ScenarioSpec{ + NodesPerUser: 1, + Users: []string{user}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{ + tsic.WithAcceptRoutes(), + // Advertise route on initial login + tsic.WithExtraLoginArgs([]string{"--advertise-routes=" + advertiseRoute}), + }, + hsic.WithTestName("routelogout"), + hsic.WithTLS(), + hsic.WithACLPolicy( + &policyv2.Policy{ + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{policyv2.Wildcard}, + Destinations: []policyv2.AliasWithPorts{{Alias: policyv2.Wildcard, Ports: []tailcfg.PortRange{tailcfg.PortRangeAny}}}, + }, + }, + AutoApprovers: policyv2.AutoApproverPolicy{ + Routes: map[netip.Prefix]policyv2.AutoApprovers{ + netip.MustParsePrefix(advertiseRoute): {ptr.To(policyv2.Username(user + "@test.no"))}, + }, + }, + }, + ), + ) + requireNoErrHeadscaleEnv(t, err) + + allClients, err := scenario.ListTailscaleClients() + requireNoErrListClients(t, err) + require.Len(t, allClients, 1) + + client := allClients[0] + + err = scenario.WaitForTailscaleSync() + requireNoErrSync(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Step 1: Verify initial route is advertised, approved, and SERVING + t.Logf("Step 1: Verifying initial route is advertised, approved, and SERVING at %s", time.Now().Format(TimestampFormat)) + + var initialNode *v1.Node + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes() + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Should have exactly 1 node") + + if len(nodes) == 1 { + initialNode = nodes[0] + // Check: 1 announced, 1 approved, 1 serving (subnet route) + assert.Lenf(c, initialNode.GetAvailableRoutes(), 1, + "Node should have 1 available route, got %v", initialNode.GetAvailableRoutes()) + assert.Lenf(c, initialNode.GetApprovedRoutes(), 1, + "Node should have 1 approved route, got %v", initialNode.GetApprovedRoutes()) + assert.Lenf(c, initialNode.GetSubnetRoutes(), 1, + "Node should have 1 serving (subnet) route, got %v - THIS IS THE BUG if empty", initialNode.GetSubnetRoutes()) + assert.Contains(c, initialNode.GetSubnetRoutes(), advertiseRoute, + "Subnet routes should contain %s", advertiseRoute) + } + }, 30*time.Second, 500*time.Millisecond, "initial route should be serving") + + require.NotNil(t, initialNode, "Initial node should be found") + initialNodeID := initialNode.GetId() + t.Logf("Initial node ID: %d, Available: %v, Approved: %v, Serving: %v", + initialNodeID, initialNode.GetAvailableRoutes(), initialNode.GetApprovedRoutes(), initialNode.GetSubnetRoutes()) + + // Step 2: Logout + t.Logf("Step 2: Logging out at %s", 
time.Now().Format(TimestampFormat)) + + err = client.Logout() + require.NoError(t, err) + + // Wait for logout to complete + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + status, err := client.Status() + assert.NoError(ct, err) + assert.Equal(ct, "NeedsLogin", status.BackendState, "Expected NeedsLogin state after logout") + }, 30*time.Second, 1*time.Second, "waiting for logout to complete") + + t.Logf("Logout completed, node should still exist in database") + + // Verify node still exists (routes should still be in DB) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes() + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Node should persist in database after logout") + }, 10*time.Second, 500*time.Millisecond, "node should persist after logout") + + // Step 3: Re-authenticate with the SAME user (using auth key) + t.Logf("Step 3: Re-authenticating with same user at %s", time.Now().Format(TimestampFormat)) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + key, err := scenario.CreatePreAuthKey(userMap[user].GetId(), true, false) + require.NoError(t, err) + + // Re-login - the container already has extraLoginArgs with --advertise-routes + // from the initial setup, so routes will be advertised on re-login + err = scenario.RunTailscaleUp(user, headscale.GetEndpoint(), key.GetKey()) + require.NoError(t, err) + + // Wait for client to be running + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + status, err := client.Status() + assert.NoError(ct, err) + assert.Equal(ct, "Running", status.BackendState, "Expected Running state after relogin") + }, 30*time.Second, 1*time.Second, "waiting for relogin to complete") + + t.Logf("Re-authentication completed at %s", time.Now().Format(TimestampFormat)) + + // Step 4: THE CRITICAL TEST - Verify routes are STILL SERVING after re-authentication + t.Logf("Step 4: Verifying routes are STILL SERVING after re-authentication at %s", time.Now().Format(TimestampFormat)) + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes() + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Should still have exactly 1 node after relogin") + + if len(nodes) == 1 { + node := nodes[0] + t.Logf("After relogin - Available: %v, Approved: %v, Serving: %v", + node.GetAvailableRoutes(), node.GetApprovedRoutes(), node.GetSubnetRoutes()) + + // This is where issue #2896 manifests: + // - Available shows the route (from Hostinfo.RoutableIPs) + // - Approved shows the route (from ApprovedRoutes) + // - BUT Serving (SubnetRoutes/PrimaryRoutes) is EMPTY! 
+ assert.Lenf(c, node.GetAvailableRoutes(), 1, + "Node should have 1 available route after relogin, got %v", node.GetAvailableRoutes()) + assert.Lenf(c, node.GetApprovedRoutes(), 1, + "Node should have 1 approved route after relogin, got %v", node.GetApprovedRoutes()) + assert.Lenf(c, node.GetSubnetRoutes(), 1, + "BUG #2896: Node should have 1 SERVING route after relogin, got %v", node.GetSubnetRoutes()) + assert.Contains(c, node.GetSubnetRoutes(), advertiseRoute, + "BUG #2896: Subnet routes should contain %s after relogin", advertiseRoute) + + // Also verify node ID was preserved (same node, not new registration) + assert.Equal(c, initialNodeID, node.GetId(), + "Node ID should be preserved after same-user relogin") + } + }, 30*time.Second, 500*time.Millisecond, + "BUG #2896: routes should remain SERVING after logout/relogin with same user") + + t.Logf("Test completed - verifying issue #2896 fix") +} diff --git a/integration/auth_oidc_test.go b/integration/auth_oidc_test.go index 9040e5fd..359dd456 100644 --- a/integration/auth_oidc_test.go +++ b/integration/auth_oidc_test.go @@ -12,6 +12,7 @@ import ( "github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp/cmpopts" v1 "github.com/juanfont/headscale/gen/go/headscale/v1" + policyv2 "github.com/juanfont/headscale/hscontrol/policy/v2" "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/integration/hsic" "github.com/juanfont/headscale/integration/tsic" @@ -19,6 +20,8 @@ import ( "github.com/samber/lo" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" + "tailscale.com/ipn/ipnstate" + "tailscale.com/tailcfg" ) func TestOIDCAuthenticationPingAll(t *testing.T) { @@ -898,7 +901,8 @@ func TestOIDCFollowUpUrl(t *testing.T) { require.NoError(t, err) // wait for the registration cache to expire - // a little bit more than HEADSCALE_TUNING_REGISTER_CACHE_EXPIRATION + // a little bit more than HEADSCALE_TUNING_REGISTER_CACHE_EXPIRATION (1m30s) + //nolint:forbidigo // Intentional delay: must wait for real-time cache expiration (HEADSCALE_TUNING_REGISTER_CACHE_EXPIRATION=1m30s) time.Sleep(2 * time.Minute) var newUrl *url.URL @@ -1422,3 +1426,492 @@ func TestOIDCExpiryAfterRestart(t *testing.T) { } }, 30*time.Second, 1*time.Second, "validating expiry preservation after restart") } + +// TestOIDCACLPolicyOnJoin validates that ACL policies are correctly applied +// to newly joined OIDC nodes without requiring a client restart. +// +// This test validates the fix for issue #2888: +// https://github.com/juanfont/headscale/issues/2888 +// +// Bug: Nodes joining via OIDC authentication did not get the appropriate ACL +// policy applied until they restarted their client. This was a regression +// introduced in v0.27.0. +// +// The test scenario: +// 1. Creates a CLI user (gateway) with a node advertising a route +// 2. Sets up ACL policy allowing all nodes to access advertised routes +// 3. OIDC user authenticates and joins with a new node +// 4. Verifies that the OIDC user's node IMMEDIATELY sees the advertised route +// +// Expected behavior: +// - Without fix: OIDC node cannot see the route (PrimaryRoutes is nil/empty) +// - With fix: OIDC node immediately sees the route in PrimaryRoutes +// +// Root cause: The buggy code called a.h.Change(c) immediately after user +// creation but BEFORE node registration completed, creating a race condition +// where policy change notifications were sent asynchronously before the node +// was fully registered. 
+func TestOIDCACLPolicyOnJoin(t *testing.T) { + IntegrationSkip(t) + + gatewayUser := "gateway" + oidcUser := "oidcuser" + + spec := ScenarioSpec{ + NodesPerUser: 1, + Users: []string{gatewayUser}, + OIDCUsers: []mockoidc.MockUser{ + oidcMockUser(oidcUser, true), + }, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + oidcMap := map[string]string{ + "HEADSCALE_OIDC_ISSUER": scenario.mockOIDC.Issuer(), + "HEADSCALE_OIDC_CLIENT_ID": scenario.mockOIDC.ClientID(), + "CREDENTIALS_DIRECTORY_TEST": "/tmp", + "HEADSCALE_OIDC_CLIENT_SECRET_PATH": "${CREDENTIALS_DIRECTORY_TEST}/hs_client_oidc_secret", + } + + // Create headscale environment with ACL policy that allows OIDC user + // to access routes advertised by gateway user + err = scenario.CreateHeadscaleEnvWithLoginURL( + []tsic.Option{ + tsic.WithAcceptRoutes(), + }, + hsic.WithTestName("oidcaclpolicy"), + hsic.WithConfigEnv(oidcMap), + hsic.WithTLS(), + hsic.WithFileInContainer("/tmp/hs_client_oidc_secret", []byte(scenario.mockOIDC.ClientSecret())), + hsic.WithACLPolicy( + &policyv2.Policy{ + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{prefixp("100.64.0.0/10")}, + Destinations: []policyv2.AliasWithPorts{ + aliasWithPorts(prefixp("100.64.0.0/10"), tailcfg.PortRangeAny), + aliasWithPorts(prefixp("10.33.0.0/24"), tailcfg.PortRangeAny), + aliasWithPorts(prefixp("10.44.0.0/24"), tailcfg.PortRangeAny), + }, + }, + }, + AutoApprovers: policyv2.AutoApproverPolicy{ + Routes: map[netip.Prefix]policyv2.AutoApprovers{ + netip.MustParsePrefix("10.33.0.0/24"): {usernameApprover("gateway@test.no"), usernameApprover("oidcuser@headscale.net"), usernameApprover("jane.doe@example.com")}, + netip.MustParsePrefix("10.44.0.0/24"): {usernameApprover("gateway@test.no"), usernameApprover("oidcuser@headscale.net"), usernameApprover("jane.doe@example.com")}, + }, + }, + }, + ), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + require.NoError(t, err) + + // Get the gateway client (CLI user) - only one client at first + allClients, err := scenario.ListTailscaleClients() + requireNoErrListClients(t, err) + require.Len(t, allClients, 1, "Should have exactly 1 client (gateway) before OIDC login") + + gatewayClient := allClients[0] + + // Wait for initial sync (gateway logs in) + err = scenario.WaitForTailscaleSync() + requireNoErrSync(t, err) + + // Gateway advertises route 10.33.0.0/24 + advertiseRoute := "10.33.0.0/24" + command := []string{ + "tailscale", + "set", + "--advertise-routes=" + advertiseRoute, + } + _, _, err = gatewayClient.Execute(command) + require.NoErrorf(t, err, "failed to advertise route: %s", err) + + // Wait for route advertisement to propagate + var gatewayNodeID uint64 + + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + nodes, err := headscale.ListNodes() + assert.NoError(ct, err) + assert.Len(ct, nodes, 1) + + gatewayNode := nodes[0] + gatewayNodeID = gatewayNode.GetId() + assert.Len(ct, gatewayNode.GetAvailableRoutes(), 1) + assert.Contains(ct, gatewayNode.GetAvailableRoutes(), advertiseRoute) + }, 10*time.Second, 500*time.Millisecond, "route advertisement should propagate to headscale") + + // Approve the advertised route + _, err = headscale.ApproveRoutes( + gatewayNodeID, + []netip.Prefix{netip.MustParsePrefix(advertiseRoute)}, + ) + require.NoError(t, err) + + // Wait for route approval to propagate + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + nodes, err := headscale.ListNodes() + 
assert.NoError(ct, err) + assert.Len(ct, nodes, 1) + + gatewayNode := nodes[0] + assert.Len(ct, gatewayNode.GetApprovedRoutes(), 1) + assert.Contains(ct, gatewayNode.GetApprovedRoutes(), advertiseRoute) + }, 10*time.Second, 500*time.Millisecond, "route approval should propagate to headscale") + + // NOW create the OIDC user by having them join + // This is where issue #2888 manifests - the new OIDC node should immediately + // see the gateway's advertised route + t.Logf("OIDC user joining at %s", time.Now().Format(TimestampFormat)) + + // Create OIDC user's tailscale node + oidcAdvertiseRoute := "10.44.0.0/24" + oidcClient, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithAcceptRoutes(), + tsic.WithExtraLoginArgs([]string{"--advertise-routes=" + oidcAdvertiseRoute}), + ) + require.NoError(t, err) + + // OIDC login happens automatically via LoginWithURL + loginURL, err := oidcClient.LoginWithURL(headscale.GetEndpoint()) + require.NoError(t, err) + + _, err = doLoginURL(oidcClient.Hostname(), loginURL) + require.NoError(t, err) + + t.Logf("OIDC user logged in successfully at %s", time.Now().Format(TimestampFormat)) + + // THE CRITICAL TEST: Verify that the OIDC user's node can IMMEDIATELY + // see the gateway's advertised route WITHOUT needing a client restart. + // + // This is where the bug manifests: + // - Without fix: PrimaryRoutes will be nil/empty + // - With fix: PrimaryRoutes immediately contains the advertised route + t.Logf("Verifying OIDC user can immediately see advertised routes at %s", time.Now().Format(TimestampFormat)) + + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + status, err := oidcClient.Status() + assert.NoError(ct, err) + + // Find the gateway peer in the OIDC user's peer list + var gatewayPeer *ipnstate.PeerStatus + + for _, peerKey := range status.Peers() { + peer := status.Peer[peerKey] + // Gateway is the peer that's not the OIDC user + if peer.UserID != status.Self.UserID { + gatewayPeer = peer + break + } + } + + assert.NotNil(ct, gatewayPeer, "OIDC user should see gateway as peer") + + if gatewayPeer != nil { + // This is the critical assertion - PrimaryRoutes should NOT be nil + assert.NotNil(ct, gatewayPeer.PrimaryRoutes, + "BUG #2888: Gateway peer PrimaryRoutes is nil - ACL policy not applied to new OIDC node!") + + if gatewayPeer.PrimaryRoutes != nil { + routes := gatewayPeer.PrimaryRoutes.AsSlice() + assert.Contains(ct, routes, netip.MustParsePrefix(advertiseRoute), + "OIDC user should immediately see gateway's advertised route %s in PrimaryRoutes", advertiseRoute) + t.Logf("SUCCESS: OIDC user can see advertised route %s in gateway's PrimaryRoutes", advertiseRoute) + } + + // Also verify AllowedIPs includes the route + if gatewayPeer.AllowedIPs != nil && gatewayPeer.AllowedIPs.Len() > 0 { + allowedIPs := gatewayPeer.AllowedIPs.AsSlice() + t.Logf("Gateway peer AllowedIPs: %v", allowedIPs) + } + } + }, 15*time.Second, 500*time.Millisecond, + "OIDC user should immediately see gateway's advertised route without client restart (issue #2888)") + + // Verify that the Gateway node sees the OIDC node's advertised route (AutoApproveRoutes check) + t.Logf("Verifying Gateway user can immediately see OIDC advertised routes at %s", time.Now().Format(TimestampFormat)) + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + status, err := gatewayClient.Status() + assert.NoError(ct, err) + + // Find the OIDC peer in the Gateway user's peer list + var oidcPeer *ipnstate.PeerStatus + + 
for _, peerKey := range status.Peers() { + peer := status.Peer[peerKey] + if peer.UserID != status.Self.UserID { + oidcPeer = peer + break + } + } + + assert.NotNil(ct, oidcPeer, "Gateway user should see OIDC user as peer") + + if oidcPeer != nil { + assert.NotNil(ct, oidcPeer.PrimaryRoutes, + "BUG: OIDC peer PrimaryRoutes is nil - AutoApproveRoutes failed or overwritten!") + + if oidcPeer.PrimaryRoutes != nil { + routes := oidcPeer.PrimaryRoutes.AsSlice() + assert.Contains(ct, routes, netip.MustParsePrefix(oidcAdvertiseRoute), + "Gateway user should immediately see OIDC's advertised route %s in PrimaryRoutes", oidcAdvertiseRoute) + } + } + }, 15*time.Second, 500*time.Millisecond, + "Gateway user should immediately see OIDC's advertised route (AutoApproveRoutes check)") + + // Additional validation: Verify nodes in headscale match expectations + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + nodes, err := headscale.ListNodes() + assert.NoError(ct, err) + assert.Len(ct, nodes, 2, "Should have 2 nodes (gateway + oidcuser)") + + // Verify OIDC user was created correctly + users, err := headscale.ListUsers() + assert.NoError(ct, err) + // Note: mockoidc may create additional default users (like jane.doe) + // so we check for at least 2 users, not exactly 2 + assert.GreaterOrEqual(ct, len(users), 2, "Should have at least 2 users (gateway CLI user + oidcuser)") + + // Find gateway CLI user + var gatewayUser *v1.User + + for _, user := range users { + if user.GetName() == "gateway" && user.GetProvider() == "" { + gatewayUser = user + break + } + } + + assert.NotNil(ct, gatewayUser, "Should have gateway CLI user") + + if gatewayUser != nil { + assert.Equal(ct, "gateway", gatewayUser.GetName()) + } + + // Find OIDC user + var oidcUserFound *v1.User + + for _, user := range users { + if user.GetName() == "oidcuser" && user.GetProvider() == "oidc" { + oidcUserFound = user + break + } + } + + assert.NotNil(ct, oidcUserFound, "Should have OIDC user") + + if oidcUserFound != nil { + assert.Equal(ct, "oidcuser", oidcUserFound.GetName()) + assert.Equal(ct, "oidcuser@headscale.net", oidcUserFound.GetEmail()) + } + }, 10*time.Second, 500*time.Millisecond, "headscale should have correct users and nodes") + + t.Logf("Test completed successfully - issue #2888 fix validated") +} + +// TestOIDCReloginSameUserRoutesPreserved tests the scenario where: +// - A node logs in via OIDC and advertises routes +// - Routes are auto-approved and verified as SERVING +// - The node logs out +// - The node logs back in as the same user +// - Routes should STILL be SERVING (not just approved/available) +// +// This test validates the fix for issue #2896: +// https://github.com/juanfont/headscale/issues/2896 +// +// Bug: When a node with already-approved routes restarts/re-authenticates, +// the routes show as "Approved" and "Available" but NOT "Serving" (Primary). +// A headscale restart would fix it, indicating a state management issue. 
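+//
+// Summary of the route state asserted in each step below (this restates the
+// test flow, it is not additional behaviour):
+//
+//	after initial login       Available: 1  Approved: 1  Serving (subnet): 1
+//	after logout              node persists in the database
+//	after relogin, same user  Available: 1  Approved: 1  Serving (subnet): 1
+//	                          (an empty Serving set here is bug #2896)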
+func TestOIDCReloginSameUserRoutesPreserved(t *testing.T) { + IntegrationSkip(t) + + advertiseRoute := "10.55.0.0/24" + + // Create scenario with same user for both login attempts + scenario, err := NewScenario(ScenarioSpec{ + OIDCUsers: []mockoidc.MockUser{ + oidcMockUser("user1", true), // Initial login + oidcMockUser("user1", true), // Relogin with same user + }, + }) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + oidcMap := map[string]string{ + "HEADSCALE_OIDC_ISSUER": scenario.mockOIDC.Issuer(), + "HEADSCALE_OIDC_CLIENT_ID": scenario.mockOIDC.ClientID(), + "CREDENTIALS_DIRECTORY_TEST": "/tmp", + "HEADSCALE_OIDC_CLIENT_SECRET_PATH": "${CREDENTIALS_DIRECTORY_TEST}/hs_client_oidc_secret", + } + + err = scenario.CreateHeadscaleEnvWithLoginURL( + []tsic.Option{ + tsic.WithAcceptRoutes(), + }, + hsic.WithTestName("oidcrouterelogin"), + hsic.WithConfigEnv(oidcMap), + hsic.WithTLS(), + hsic.WithFileInContainer("/tmp/hs_client_oidc_secret", []byte(scenario.mockOIDC.ClientSecret())), + hsic.WithEmbeddedDERPServerOnly(), + hsic.WithDERPAsIP(), + hsic.WithACLPolicy( + &policyv2.Policy{ + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{policyv2.Wildcard}, + Destinations: []policyv2.AliasWithPorts{{Alias: policyv2.Wildcard, Ports: []tailcfg.PortRange{tailcfg.PortRangeAny}}}, + }, + }, + AutoApprovers: policyv2.AutoApproverPolicy{ + Routes: map[netip.Prefix]policyv2.AutoApprovers{ + netip.MustParsePrefix(advertiseRoute): {usernameApprover("user1@headscale.net")}, + }, + }, + }, + ), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + require.NoError(t, err) + + // Create client with route advertisement + ts, err := scenario.CreateTailscaleNode( + "unstable", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithAcceptRoutes(), + tsic.WithExtraLoginArgs([]string{"--advertise-routes=" + advertiseRoute}), + ) + require.NoError(t, err) + + // Initial login as user1 + u, err := ts.LoginWithURL(headscale.GetEndpoint()) + require.NoError(t, err) + + _, err = doLoginURL(ts.Hostname(), u) + require.NoError(t, err) + + // Wait for client to be running + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + status, err := ts.Status() + assert.NoError(ct, err) + assert.Equal(ct, "Running", status.BackendState) + }, 30*time.Second, 1*time.Second, "waiting for initial login to complete") + + // Step 1: Verify initial route is advertised, approved, and SERVING + t.Logf("Step 1: Verifying initial route is advertised, approved, and SERVING at %s", time.Now().Format(TimestampFormat)) + + var initialNode *v1.Node + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes() + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Should have exactly 1 node") + + if len(nodes) == 1 { + initialNode = nodes[0] + // Check: 1 announced, 1 approved, 1 serving (subnet route) + assert.Lenf(c, initialNode.GetAvailableRoutes(), 1, + "Node should have 1 available route, got %v", initialNode.GetAvailableRoutes()) + assert.Lenf(c, initialNode.GetApprovedRoutes(), 1, + "Node should have 1 approved route, got %v", initialNode.GetApprovedRoutes()) + assert.Lenf(c, initialNode.GetSubnetRoutes(), 1, + "Node should have 1 serving (subnet) route, got %v - THIS IS THE BUG if empty", initialNode.GetSubnetRoutes()) + assert.Contains(c, initialNode.GetSubnetRoutes(), advertiseRoute, + "Subnet routes should contain %s", advertiseRoute) + } + }, 30*time.Second, 500*time.Millisecond, "initial route should 
be serving") + + require.NotNil(t, initialNode, "Initial node should be found") + initialNodeID := initialNode.GetId() + t.Logf("Initial node ID: %d, Available: %v, Approved: %v, Serving: %v", + initialNodeID, initialNode.GetAvailableRoutes(), initialNode.GetApprovedRoutes(), initialNode.GetSubnetRoutes()) + + // Step 2: Logout + t.Logf("Step 2: Logging out at %s", time.Now().Format(TimestampFormat)) + + err = ts.Logout() + require.NoError(t, err) + + // Wait for logout to complete + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + status, err := ts.Status() + assert.NoError(ct, err) + assert.Equal(ct, "NeedsLogin", status.BackendState, "Expected NeedsLogin state after logout") + }, 30*time.Second, 1*time.Second, "waiting for logout to complete") + + t.Logf("Logout completed, node should still exist in database") + + // Verify node still exists (routes should still be in DB) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes() + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Node should persist in database after logout") + }, 10*time.Second, 500*time.Millisecond, "node should persist after logout") + + // Step 3: Re-authenticate via OIDC as the same user + t.Logf("Step 3: Re-authenticating with same user via OIDC at %s", time.Now().Format(TimestampFormat)) + + u, err = ts.LoginWithURL(headscale.GetEndpoint()) + require.NoError(t, err) + + _, err = doLoginURL(ts.Hostname(), u) + require.NoError(t, err) + + // Wait for client to be running + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + status, err := ts.Status() + assert.NoError(ct, err) + assert.Equal(ct, "Running", status.BackendState, "Expected Running state after relogin") + }, 30*time.Second, 1*time.Second, "waiting for relogin to complete") + + t.Logf("Re-authentication completed at %s", time.Now().Format(TimestampFormat)) + + // Step 4: THE CRITICAL TEST - Verify routes are STILL SERVING after re-authentication + t.Logf("Step 4: Verifying routes are STILL SERVING after re-authentication at %s", time.Now().Format(TimestampFormat)) + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes() + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Should still have exactly 1 node after relogin") + + if len(nodes) == 1 { + node := nodes[0] + t.Logf("After relogin - Available: %v, Approved: %v, Serving: %v", + node.GetAvailableRoutes(), node.GetApprovedRoutes(), node.GetSubnetRoutes()) + + // This is where issue #2896 manifests: + // - Available shows the route (from Hostinfo.RoutableIPs) + // - Approved shows the route (from ApprovedRoutes) + // - BUT Serving (SubnetRoutes/PrimaryRoutes) is EMPTY! 
+ assert.Lenf(c, node.GetAvailableRoutes(), 1, + "Node should have 1 available route after relogin, got %v", node.GetAvailableRoutes()) + assert.Lenf(c, node.GetApprovedRoutes(), 1, + "Node should have 1 approved route after relogin, got %v", node.GetApprovedRoutes()) + assert.Lenf(c, node.GetSubnetRoutes(), 1, + "BUG #2896: Node should have 1 SERVING route after relogin, got %v", node.GetSubnetRoutes()) + assert.Contains(c, node.GetSubnetRoutes(), advertiseRoute, + "BUG #2896: Subnet routes should contain %s after relogin", advertiseRoute) + + // Also verify node ID was preserved (same node, not new registration) + assert.Equal(c, initialNodeID, node.GetId(), + "Node ID should be preserved after same-user relogin") + } + }, 30*time.Second, 500*time.Millisecond, + "BUG #2896: routes should remain SERVING after OIDC logout/relogin with same user") + + t.Logf("Test completed - verifying issue #2896 fix for OIDC") +} diff --git a/integration/cli_test.go b/integration/cli_test.go index 37e3c33d..65d82444 100644 --- a/integration/cli_test.go +++ b/integration/cli_test.go @@ -4,6 +4,7 @@ import ( "cmp" "encoding/json" "fmt" + "slices" "strconv" "strings" "testing" @@ -18,7 +19,6 @@ import ( "github.com/juanfont/headscale/integration/tsic" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" - "golang.org/x/exp/slices" "tailscale.com/tailcfg" ) @@ -54,6 +54,7 @@ func TestUserCommand(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -63,8 +64,11 @@ func TestUserCommand(t *testing.T) { headscale, err := scenario.Headscale() require.NoError(t, err) - var listUsers []*v1.User - var result []string + var ( + listUsers []*v1.User + result []string + ) + assert.EventuallyWithT(t, func(ct *assert.CollectT) { err := executeAndUnmarshal(headscale, []string{ @@ -102,6 +106,7 @@ func TestUserCommand(t *testing.T) { require.NoError(t, err) var listAfterRenameUsers []*v1.User + assert.EventuallyWithT(t, func(ct *assert.CollectT) { err := executeAndUnmarshal(headscale, []string{ @@ -127,6 +132,7 @@ func TestUserCommand(t *testing.T) { }, 20*time.Second, 1*time.Second) var listByUsername []*v1.User + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal(headscale, []string{ @@ -143,6 +149,7 @@ func TestUserCommand(t *testing.T) { }, 10*time.Second, 200*time.Millisecond, "Waiting for user list by username") slices.SortFunc(listByUsername, sortWithID) + want := []*v1.User{ { Id: 1, @@ -156,6 +163,7 @@ func TestUserCommand(t *testing.T) { } var listByID []*v1.User + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal(headscale, []string{ @@ -172,6 +180,7 @@ func TestUserCommand(t *testing.T) { }, 10*time.Second, 200*time.Millisecond, "Waiting for user list by ID") slices.SortFunc(listByID, sortWithID) + want = []*v1.User{ { Id: 1, @@ -198,6 +207,7 @@ func TestUserCommand(t *testing.T) { assert.Contains(t, deleteResult, "User destroyed") var listAfterIDDelete []*v1.User + assert.EventuallyWithT(t, func(ct *assert.CollectT) { err := executeAndUnmarshal(headscale, []string{ @@ -212,6 +222,7 @@ func TestUserCommand(t *testing.T) { assert.NoError(ct, err) slices.SortFunc(listAfterIDDelete, sortWithID) + want := []*v1.User{ { Id: 2, @@ -238,6 +249,7 @@ func TestUserCommand(t *testing.T) { assert.Contains(t, deleteResult, "User destroyed") var listAfterNameDelete []v1.User + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal(headscale, []string{ @@ 
-265,6 +277,7 @@ func TestPreAuthKeyCommand(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -275,10 +288,12 @@ func TestPreAuthKeyCommand(t *testing.T) { require.NoError(t, err) keys := make([]*v1.PreAuthKey, count) + require.NoError(t, err) for index := range count { var preAuthKey v1.PreAuthKey + assert.EventuallyWithT(t, func(c *assert.CollectT) { err := executeAndUnmarshal( headscale, @@ -307,14 +322,13 @@ func TestPreAuthKeyCommand(t *testing.T) { assert.Len(t, keys, 3) var listedPreAuthKeys []v1.PreAuthKey + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, []string{ "headscale", "preauthkeys", - "--user", - "1", "list", "--output", "json", @@ -337,9 +351,10 @@ func TestPreAuthKeyCommand(t *testing.T) { }, ) - assert.NotEmpty(t, listedPreAuthKeys[1].GetKey()) - assert.NotEmpty(t, listedPreAuthKeys[2].GetKey()) - assert.NotEmpty(t, listedPreAuthKeys[3].GetKey()) + // New keys show prefix after listing, so check the created keys instead + assert.NotEmpty(t, keys[0].GetKey()) + assert.NotEmpty(t, keys[1].GetKey()) + assert.NotEmpty(t, keys[2].GetKey()) assert.True(t, listedPreAuthKeys[1].GetExpiration().AsTime().After(time.Now())) assert.True(t, listedPreAuthKeys[2].GetExpiration().AsTime().After(time.Now())) @@ -375,23 +390,21 @@ func TestPreAuthKeyCommand(t *testing.T) { []string{ "headscale", "preauthkeys", - "--user", - "1", "expire", - listedPreAuthKeys[1].GetKey(), + "--id", + strconv.FormatUint(keys[0].GetId(), 10), }, ) require.NoError(t, err) var listedPreAuthKeysAfterExpire []v1.PreAuthKey + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, []string{ "headscale", "preauthkeys", - "--user", - "1", "list", "--output", "json", @@ -415,6 +428,7 @@ func TestPreAuthKeyCommandWithoutExpiry(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -425,6 +439,7 @@ func TestPreAuthKeyCommandWithoutExpiry(t *testing.T) { require.NoError(t, err) var preAuthKey v1.PreAuthKey + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -444,14 +459,13 @@ func TestPreAuthKeyCommandWithoutExpiry(t *testing.T) { }, 10*time.Second, 200*time.Millisecond, "Waiting for preauth key creation without expiry") var listedPreAuthKeys []v1.PreAuthKey + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, []string{ "headscale", "preauthkeys", - "--user", - "1", "list", "--output", "json", @@ -459,7 +473,7 @@ func TestPreAuthKeyCommandWithoutExpiry(t *testing.T) { &listedPreAuthKeys, ) assert.NoError(c, err) - }, 10*time.Second, 200*time.Millisecond, "Waiting for preauth keys list without expiry") + }, 10*time.Second, 200*time.Millisecond, "Waiting for preauth keys list") // There is one key created by "scenario.CreateHeadscaleEnv" assert.Len(t, listedPreAuthKeys, 2) @@ -480,6 +494,7 @@ func TestPreAuthKeyCommandReusableEphemeral(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -490,6 +505,7 @@ func TestPreAuthKeyCommandReusableEphemeral(t *testing.T) { require.NoError(t, err) var preAuthReusableKey v1.PreAuthKey + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -509,6 +525,7 @@ func TestPreAuthKeyCommandReusableEphemeral(t *testing.T) { }, 10*time.Second, 200*time.Millisecond, "Waiting for 
reusable preauth key creation") var preAuthEphemeralKey v1.PreAuthKey + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -531,14 +548,13 @@ func TestPreAuthKeyCommandReusableEphemeral(t *testing.T) { assert.False(t, preAuthEphemeralKey.GetReusable()) var listedPreAuthKeys []v1.PreAuthKey + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, []string{ "headscale", "preauthkeys", - "--user", - "1", "list", "--output", "json", @@ -564,6 +580,7 @@ func TestPreAuthKeyCorrectUserLoggedInCommand(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -606,8 +623,10 @@ func TestPreAuthKeyCorrectUserLoggedInCommand(t *testing.T) { }, 10*time.Second, 200*time.Millisecond, "Waiting for user2 preauth key creation") var listNodes []*v1.Node + assert.EventuallyWithT(t, func(ct *assert.CollectT) { var err error + listNodes, err = headscale.ListNodes() assert.NoError(ct, err) assert.Len(ct, listNodes, 1, "Should have exactly 1 node for user1") @@ -642,16 +661,137 @@ func TestPreAuthKeyCorrectUserLoggedInCommand(t *testing.T) { status, err := client.Status() assert.NoError(ct, err) assert.Equal(ct, "Running", status.BackendState, "Expected node to be logged in, backend state: %s", status.BackendState) - assert.Equal(ct, "userid:2", status.Self.UserID.String(), "Expected node to be logged in as userid:2") + // With tags-as-identity model, tagged nodes show as TaggedDevices user (2147455555) + // The PreAuthKey was created with tags, so the node is tagged + assert.Equal(ct, "userid:2147455555", status.Self.UserID.String(), "Expected node to be logged in as tagged-devices user") }, 30*time.Second, 2*time.Second) assert.EventuallyWithT(t, func(ct *assert.CollectT) { var err error + listNodes, err = headscale.ListNodes() assert.NoError(ct, err) assert.Len(ct, listNodes, 2, "Should have 2 nodes after re-login") assert.Equal(ct, user1, listNodes[0].GetUser().GetName(), "First node should belong to user1") - assert.Equal(ct, user2, listNodes[1].GetUser().GetName(), "Second node should belong to user2") + // Second node is tagged (created with tagged PreAuthKey), so it shows as "tagged-devices" + assert.Equal(ct, "tagged-devices", listNodes[1].GetUser().GetName(), "Second node should be tagged-devices") + }, 20*time.Second, 1*time.Second) +} + +func TestTaggedNodesCLIOutput(t *testing.T) { + IntegrationSkip(t) + + user1 := "user1" + user2 := "user2" + + spec := ScenarioSpec{ + NodesPerUser: 1, + Users: []string{user1}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithTestName("tagcli"), + hsic.WithEmbeddedDERPServerOnly(), + hsic.WithTLS(), + ) + require.NoError(t, err) + + headscale, err := scenario.Headscale() + require.NoError(t, err) + + u2, err := headscale.CreateUser(user2) + require.NoError(t, err) + + var user2Key v1.PreAuthKey + + // Create a tagged PreAuthKey for user2 + assert.EventuallyWithT(t, func(c *assert.CollectT) { + err = executeAndUnmarshal( + headscale, + []string{ + "headscale", + "preauthkeys", + "--user", + strconv.FormatUint(u2.GetId(), 10), + "create", + "--reusable", + "--expiration", + "24h", + "--output", + "json", + "--tags", + "tag:test1,tag:test2", + }, + &user2Key, + ) + assert.NoError(c, err) + }, 10*time.Second, 200*time.Millisecond, "Waiting for user2 tagged preauth key creation") + + allClients, err 
:= scenario.ListTailscaleClients() + requireNoErrListClients(t, err) + + require.Len(t, allClients, 1) + + client := allClients[0] + + // Log out from user1 + err = client.Logout() + require.NoError(t, err) + + err = scenario.WaitForTailscaleLogout() + require.NoError(t, err) + + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + status, err := client.Status() + assert.NoError(ct, err) + assert.NotContains(ct, []string{"Starting", "Running"}, status.BackendState, + "Expected node to be logged out, backend state: %s", status.BackendState) + }, 30*time.Second, 2*time.Second) + + // Log in with the tagged PreAuthKey (from user2, with tags) + err = client.Login(headscale.GetEndpoint(), user2Key.GetKey()) + require.NoError(t, err) + + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + status, err := client.Status() + assert.NoError(ct, err) + assert.Equal(ct, "Running", status.BackendState, "Expected node to be logged in, backend state: %s", status.BackendState) + // With tags-as-identity model, tagged nodes show as TaggedDevices user (2147455555) + assert.Equal(ct, "userid:2147455555", status.Self.UserID.String(), "Expected node to be logged in as tagged-devices user") + }, 30*time.Second, 2*time.Second) + + // Wait for the second node to appear + var listNodes []*v1.Node + + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + var err error + + listNodes, err = headscale.ListNodes() + assert.NoError(ct, err) + assert.Len(ct, listNodes, 2, "Should have 2 nodes after re-login with tagged key") + assert.Equal(ct, user1, listNodes[0].GetUser().GetName(), "First node should belong to user1") + assert.Equal(ct, "tagged-devices", listNodes[1].GetUser().GetName(), "Second node should be tagged-devices") + }, 20*time.Second, 1*time.Second) + + // Test: tailscale status output should show "tagged-devices" not "userid:2147455555" + // This is the fix for issue #2970 - the Tailscale client should display user-friendly names + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + stdout, stderr, err := client.Execute([]string{"tailscale", "status"}) + assert.NoError(ct, err, "tailscale status command should succeed, stderr: %s", stderr) + + t.Logf("Tailscale status output:\n%s", stdout) + + // The output should contain "tagged-devices" for tagged nodes + assert.Contains(ct, stdout, "tagged-devices", "Tailscale status should show 'tagged-devices' for tagged nodes") + + // The output should NOT show the raw numeric userid to the user + assert.NotContains(ct, stdout, "userid:2147455555", "Tailscale status should not show numeric userid for tagged nodes") }, 20*time.Second, 1*time.Second) } @@ -665,6 +805,7 @@ func TestApiKeyCommand(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -697,6 +838,7 @@ func TestApiKeyCommand(t *testing.T) { assert.Len(t, keys, 5) var listedAPIKeys []v1.ApiKey + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal(headscale, []string{ @@ -771,6 +913,7 @@ func TestApiKeyCommand(t *testing.T) { } var listedAfterExpireAPIKeys []v1.ApiKey + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal(headscale, []string{ @@ -812,6 +955,7 @@ func TestApiKeyCommand(t *testing.T) { assert.NoError(t, err) var listedAPIKeysAfterDelete []v1.ApiKey + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal(headscale, []string{ @@ -827,264 +971,76 @@ func TestApiKeyCommand(t *testing.T) { }, 10*time.Second, 200*time.Millisecond, "Waiting for 
API keys list after delete") assert.Len(t, listedAPIKeysAfterDelete, 4) -} - -func TestNodeTagCommand(t *testing.T) { - IntegrationSkip(t) - - spec := ScenarioSpec{ - Users: []string{"user1"}, - } - - scenario, err := NewScenario(spec) - require.NoError(t, err) - defer scenario.ShutdownAssertNoPanics(t) - - err = scenario.CreateHeadscaleEnv([]tsic.Option{}, hsic.WithTestName("clins")) - require.NoError(t, err) - - headscale, err := scenario.Headscale() - require.NoError(t, err) - - regIDs := []string{ - types.MustRegistrationID().String(), - types.MustRegistrationID().String(), - } - nodes := make([]*v1.Node, len(regIDs)) - assert.NoError(t, err) - - for index, regID := range regIDs { - _, err := headscale.Execute( - []string{ - "headscale", - "debug", - "create-node", - "--name", - fmt.Sprintf("node-%d", index+1), - "--user", - "user1", - "--key", - regID, - "--output", - "json", - }, - ) - assert.NoError(t, err) - - var node v1.Node - assert.EventuallyWithT(t, func(c *assert.CollectT) { - err = executeAndUnmarshal( - headscale, - []string{ - "headscale", - "nodes", - "--user", - "user1", - "register", - "--key", - regID, - "--output", - "json", - }, - &node, - ) - assert.NoError(c, err) - }, 10*time.Second, 200*time.Millisecond, "Waiting for node registration") - - nodes[index] = &node - } - assert.EventuallyWithT(t, func(ct *assert.CollectT) { - assert.Len(ct, nodes, len(regIDs), "Should have correct number of nodes after CLI operations") - }, 15*time.Second, 1*time.Second) - - var node v1.Node - assert.EventuallyWithT(t, func(c *assert.CollectT) { - err = executeAndUnmarshal( - headscale, - []string{ - "headscale", - "nodes", - "tag", - "-i", "1", - "-t", "tag:test", - "--output", "json", - }, - &node, - ) - assert.NoError(c, err) - }, 10*time.Second, 200*time.Millisecond, "Waiting for node tag command") - - assert.Equal(t, []string{"tag:test"}, node.GetForcedTags()) + // Test expire by ID (using key at index 0) _, err = headscale.Execute( []string{ "headscale", - "nodes", - "tag", - "-i", "2", - "-t", "wrong-tag", - "--output", "json", - }, - ) - assert.ErrorContains(t, err, "tag must start with the string 'tag:'") + "apikeys", + "expire", + "--id", + strconv.FormatUint(listedAPIKeysAfterDelete[0].GetId(), 10), + }) + require.NoError(t, err) + + var listedAPIKeysAfterExpireByID []v1.ApiKey - // Test list all nodes after added seconds - resultMachines := make([]*v1.Node, len(regIDs)) assert.EventuallyWithT(t, func(c *assert.CollectT) { - err = executeAndUnmarshal( - headscale, + err = executeAndUnmarshal(headscale, []string{ "headscale", - "nodes", + "apikeys", "list", - "--output", "json", + "--output", + "json", }, - &resultMachines, + &listedAPIKeysAfterExpireByID, ) assert.NoError(c, err) - }, 10*time.Second, 200*time.Millisecond, "Waiting for nodes list after tagging") - found := false - for _, node := range resultMachines { - if node.GetForcedTags() != nil { - for _, tag := range node.GetForcedTags() { - if tag == "tag:test" { - found = true - } - } + }, 10*time.Second, 200*time.Millisecond, "Waiting for API keys list after expire by ID") + + // Verify the key was expired + for idx := range listedAPIKeysAfterExpireByID { + if listedAPIKeysAfterExpireByID[idx].GetId() == listedAPIKeysAfterDelete[0].GetId() { + assert.True(t, listedAPIKeysAfterExpireByID[idx].GetExpiration().AsTime().Before(time.Now()), + "Key expired by ID should have expiration in the past") } } - assert.True( - t, - found, - "should find a node with the tag 'tag:test' in the list of nodes", - ) -} -func 
TestNodeAdvertiseTagCommand(t *testing.T) { - IntegrationSkip(t) - - tests := []struct { - name string - policy *policyv2.Policy - wantTag bool - }{ - { - name: "no-policy", - wantTag: false, - }, - { - name: "with-policy-email", - policy: &policyv2.Policy{ - ACLs: []policyv2.ACL{ - { - Action: "accept", - Protocol: "tcp", - Sources: []policyv2.Alias{wildcard()}, - Destinations: []policyv2.AliasWithPorts{ - aliasWithPorts(wildcard(), tailcfg.PortRangeAny), - }, - }, - }, - TagOwners: policyv2.TagOwners{ - policyv2.Tag("tag:test"): policyv2.Owners{usernameOwner("user1@test.no")}, - }, - }, - wantTag: true, - }, - { - name: "with-policy-username", - policy: &policyv2.Policy{ - ACLs: []policyv2.ACL{ - { - Action: "accept", - Protocol: "tcp", - Sources: []policyv2.Alias{wildcard()}, - Destinations: []policyv2.AliasWithPorts{ - aliasWithPorts(wildcard(), tailcfg.PortRangeAny), - }, - }, - }, - TagOwners: policyv2.TagOwners{ - policyv2.Tag("tag:test"): policyv2.Owners{usernameOwner("user1@")}, - }, - }, - wantTag: true, - }, - { - name: "with-policy-groups", - policy: &policyv2.Policy{ - Groups: policyv2.Groups{ - policyv2.Group("group:admins"): []policyv2.Username{policyv2.Username("user1@")}, - }, - ACLs: []policyv2.ACL{ - { - Action: "accept", - Protocol: "tcp", - Sources: []policyv2.Alias{wildcard()}, - Destinations: []policyv2.AliasWithPorts{ - aliasWithPorts(wildcard(), tailcfg.PortRangeAny), - }, - }, - }, - TagOwners: policyv2.TagOwners{ - policyv2.Tag("tag:test"): policyv2.Owners{groupOwner("group:admins")}, - }, - }, - wantTag: true, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - spec := ScenarioSpec{ - NodesPerUser: 1, - Users: []string{"user1"}, - } - - scenario, err := NewScenario(spec) - require.NoError(t, err) - defer scenario.ShutdownAssertNoPanics(t) - - err = scenario.CreateHeadscaleEnv( - []tsic.Option{tsic.WithTags([]string{"tag:test"})}, - hsic.WithTestName("cliadvtags"), - hsic.WithACLPolicy(tt.policy), - ) - require.NoError(t, err) - - headscale, err := scenario.Headscale() - require.NoError(t, err) - - // Test list all nodes after added seconds - var resultMachines []*v1.Node - assert.EventuallyWithT(t, func(c *assert.CollectT) { - resultMachines = make([]*v1.Node, spec.NodesPerUser) - err = executeAndUnmarshal( - headscale, - []string{ - "headscale", - "nodes", - "list", - "--tags", - "--output", "json", - }, - &resultMachines, - ) - assert.NoError(c, err) - found := false - for _, node := range resultMachines { - if tags := node.GetValidTags(); tags != nil { - found = slices.Contains(tags, "tag:test") - } - } - assert.Equalf( - c, - tt.wantTag, - found, - "'tag:test' found(%t) is the list of nodes, expected %t", found, tt.wantTag, - ) - }, 10*time.Second, 200*time.Millisecond, "Waiting for tag propagation to nodes") + // Test delete by ID (using key at index 1) + deletedKeyID := listedAPIKeysAfterExpireByID[1].GetId() + _, err = headscale.Execute( + []string{ + "headscale", + "apikeys", + "delete", + "--id", + strconv.FormatUint(deletedKeyID, 10), }) + require.NoError(t, err) + + var listedAPIKeysAfterDeleteByID []v1.ApiKey + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + err = executeAndUnmarshal(headscale, + []string{ + "headscale", + "apikeys", + "list", + "--output", + "json", + }, + &listedAPIKeysAfterDeleteByID, + ) + assert.NoError(c, err) + }, 10*time.Second, 200*time.Millisecond, "Waiting for API keys list after delete by ID") + + assert.Len(t, listedAPIKeysAfterDeleteByID, 3) + + // Verify the specific key was deleted 
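+	// The list output identifies keys by ID; the full key string is only shown
+	// at creation time, so absence of the deleted key is checked by comparing IDs.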
+ for idx := range listedAPIKeysAfterDeleteByID { + assert.NotEqual(t, deletedKeyID, listedAPIKeysAfterDeleteByID[idx].GetId(), + "Deleted key should not be present in the list") } } @@ -1096,6 +1052,7 @@ func TestNodeCommand(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -1113,6 +1070,7 @@ func TestNodeCommand(t *testing.T) { types.MustRegistrationID().String(), } nodes := make([]*v1.Node, len(regIDs)) + assert.NoError(t, err) for index, regID := range regIDs { @@ -1134,6 +1092,7 @@ func TestNodeCommand(t *testing.T) { assert.NoError(t, err) var node v1.Node + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -1162,6 +1121,7 @@ func TestNodeCommand(t *testing.T) { // Test list all nodes after added seconds var listAll []v1.Node + assert.EventuallyWithT(t, func(ct *assert.CollectT) { err := executeAndUnmarshal( headscale, @@ -1195,6 +1155,7 @@ func TestNodeCommand(t *testing.T) { types.MustRegistrationID().String(), } otherUserMachines := make([]*v1.Node, len(otherUserRegIDs)) + assert.NoError(t, err) for index, regID := range otherUserRegIDs { @@ -1216,6 +1177,7 @@ func TestNodeCommand(t *testing.T) { assert.NoError(t, err) var node v1.Node + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -1244,6 +1206,7 @@ func TestNodeCommand(t *testing.T) { // Test list all nodes after added otherUser var listAllWithotherUser []v1.Node + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -1270,6 +1233,7 @@ func TestNodeCommand(t *testing.T) { // Test list all nodes after added otherUser var listOnlyotherUserMachineUser []v1.Node + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -1321,6 +1285,7 @@ func TestNodeCommand(t *testing.T) { // Test: list main user after node is deleted var listOnlyMachineUserAfterDelete []v1.Node + assert.EventuallyWithT(t, func(ct *assert.CollectT) { err := executeAndUnmarshal( headscale, @@ -1348,6 +1313,7 @@ func TestNodeExpireCommand(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -1385,6 +1351,7 @@ func TestNodeExpireCommand(t *testing.T) { assert.NoError(t, err) var node v1.Node + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -1410,6 +1377,7 @@ func TestNodeExpireCommand(t *testing.T) { assert.Len(t, nodes, len(regIDs)) var listAll []v1.Node + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -1447,6 +1415,7 @@ func TestNodeExpireCommand(t *testing.T) { } var listAllAfterExpiry []v1.Node + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -1479,6 +1448,7 @@ func TestNodeRenameCommand(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -1496,6 +1466,7 @@ func TestNodeRenameCommand(t *testing.T) { types.MustRegistrationID().String(), } nodes := make([]*v1.Node, len(regIDs)) + assert.NoError(t, err) for index, regID := range regIDs { @@ -1517,6 +1488,7 @@ func TestNodeRenameCommand(t *testing.T) { require.NoError(t, err) var node v1.Node + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -1542,6 +1514,7 @@ func TestNodeRenameCommand(t *testing.T) { assert.Len(t, nodes, len(regIDs)) var listAll 
[]v1.Node + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -1582,6 +1555,7 @@ func TestNodeRenameCommand(t *testing.T) { } var listAllAfterRename []v1.Node + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -1619,6 +1593,7 @@ func TestNodeRenameCommand(t *testing.T) { assert.ErrorContains(t, err, "must not exceed 63 characters") var listAllAfterRenameAttempt []v1.Node + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -1643,178 +1618,6 @@ func TestNodeRenameCommand(t *testing.T) { assert.Contains(t, listAllAfterRenameAttempt[4].GetGivenName(), "node-5") } -func TestNodeMoveCommand(t *testing.T) { - IntegrationSkip(t) - - spec := ScenarioSpec{ - Users: []string{"old-user", "new-user"}, - } - - scenario, err := NewScenario(spec) - require.NoError(t, err) - defer scenario.ShutdownAssertNoPanics(t) - - err = scenario.CreateHeadscaleEnv([]tsic.Option{}, hsic.WithTestName("clins")) - require.NoError(t, err) - - headscale, err := scenario.Headscale() - require.NoError(t, err) - - // Randomly generated node key - regID := types.MustRegistrationID() - - userMap, err := headscale.MapUsers() - require.NoError(t, err) - - _, err = headscale.Execute( - []string{ - "headscale", - "debug", - "create-node", - "--name", - "nomad-node", - "--user", - "old-user", - "--key", - regID.String(), - "--output", - "json", - }, - ) - assert.NoError(t, err) - - var node v1.Node - assert.EventuallyWithT(t, func(c *assert.CollectT) { - err = executeAndUnmarshal( - headscale, - []string{ - "headscale", - "nodes", - "--user", - "old-user", - "register", - "--key", - regID.String(), - "--output", - "json", - }, - &node, - ) - assert.NoError(c, err) - }, 10*time.Second, 200*time.Millisecond, "Waiting for old-user node registration") - - assert.Equal(t, uint64(1), node.GetId()) - assert.Equal(t, "nomad-node", node.GetName()) - assert.Equal(t, "old-user", node.GetUser().GetName()) - - nodeID := strconv.FormatUint(node.GetId(), 10) - - assert.EventuallyWithT(t, func(c *assert.CollectT) { - err = executeAndUnmarshal( - headscale, - []string{ - "headscale", - "nodes", - "move", - "--identifier", - strconv.FormatUint(node.GetId(), 10), - "--user", - strconv.FormatUint(userMap["new-user"].GetId(), 10), - "--output", - "json", - }, - &node, - ) - assert.NoError(c, err) - }, 10*time.Second, 200*time.Millisecond, "Waiting for node move to new-user") - - assert.Equal(t, "new-user", node.GetUser().GetName()) - - var allNodes []v1.Node - assert.EventuallyWithT(t, func(c *assert.CollectT) { - err = executeAndUnmarshal( - headscale, - []string{ - "headscale", - "nodes", - "list", - "--output", - "json", - }, - &allNodes, - ) - assert.NoError(c, err) - }, 10*time.Second, 200*time.Millisecond, "Waiting for nodes list after move") - - assert.Len(t, allNodes, 1) - - assert.Equal(t, allNodes[0].GetId(), node.GetId()) - assert.Equal(t, allNodes[0].GetUser(), node.GetUser()) - assert.Equal(t, "new-user", allNodes[0].GetUser().GetName()) - - _, err = headscale.Execute( - []string{ - "headscale", - "nodes", - "move", - "--identifier", - nodeID, - "--user", - "999", - "--output", - "json", - }, - ) - assert.ErrorContains( - t, - err, - "user not found", - ) - assert.Equal(t, "new-user", node.GetUser().GetName()) - - assert.EventuallyWithT(t, func(c *assert.CollectT) { - err = executeAndUnmarshal( - headscale, - []string{ - "headscale", - "nodes", - "move", - "--identifier", - nodeID, - "--user", - 
strconv.FormatUint(userMap["old-user"].GetId(), 10), - "--output", - "json", - }, - &node, - ) - assert.NoError(c, err) - }, 10*time.Second, 200*time.Millisecond, "Waiting for node move back to old-user") - - assert.Equal(t, "old-user", node.GetUser().GetName()) - - assert.EventuallyWithT(t, func(c *assert.CollectT) { - err = executeAndUnmarshal( - headscale, - []string{ - "headscale", - "nodes", - "move", - "--identifier", - nodeID, - "--user", - strconv.FormatUint(userMap["old-user"].GetId(), 10), - "--output", - "json", - }, - &node, - ) - assert.NoError(c, err) - }, 10*time.Second, 200*time.Millisecond, "Waiting for node move to same user") - - assert.Equal(t, "old-user", node.GetUser().GetName()) -} - func TestPolicyCommand(t *testing.T) { IntegrationSkip(t) @@ -1823,6 +1626,7 @@ func TestPolicyCommand(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -1878,6 +1682,7 @@ func TestPolicyCommand(t *testing.T) { // Get the current policy and check // if it is the same as the one we set. var output *policyv2.Policy + assert.EventuallyWithT(t, func(c *assert.CollectT) { err = executeAndUnmarshal( headscale, @@ -1906,6 +1711,7 @@ func TestPolicyBrokenConfigCommand(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) diff --git a/integration/control.go b/integration/control.go index e0e67e09..58a061e3 100644 --- a/integration/control.go +++ b/integration/control.go @@ -8,6 +8,7 @@ import ( policyv2 "github.com/juanfont/headscale/hscontrol/policy/v2" "github.com/juanfont/headscale/hscontrol/routes" "github.com/juanfont/headscale/hscontrol/types" + "github.com/juanfont/headscale/integration/hsic" "github.com/ory/dockertest/v3" "tailscale.com/tailcfg" ) @@ -24,13 +25,18 @@ type ControlServer interface { WaitForRunning() error CreateUser(user string) (*v1.User, error) CreateAuthKey(user uint64, reusable bool, ephemeral bool) (*v1.PreAuthKey, error) + CreateAuthKeyWithTags(user uint64, reusable bool, ephemeral bool, tags []string) (*v1.PreAuthKey, error) + CreateAuthKeyWithOptions(opts hsic.AuthKeyOptions) (*v1.PreAuthKey, error) + DeleteAuthKey(id uint64) error ListNodes(users ...string) ([]*v1.Node, error) DeleteNode(nodeID uint64) error NodesByUser() (map[string][]*v1.Node, error) NodesByName() (map[string]*v1.Node, error) ListUsers() ([]*v1.User, error) MapUsers() (map[string]*v1.User, error) + DeleteUser(userID uint64) error ApproveRoutes(uint64, []netip.Prefix) (*v1.Node, error) + SetNodeTags(nodeID uint64, tags []string) error GetCert() []byte GetHostname() string GetIPInNetwork(network *dockertest.Network) string diff --git a/integration/dns_test.go b/integration/dns_test.go index 7267bc09..e937a421 100644 --- a/integration/dns_test.go +++ b/integration/dns_test.go @@ -23,6 +23,7 @@ func TestResolveMagicDNS(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -79,6 +80,7 @@ func TestResolveMagicDNSExtraRecordsPath(t *testing.T) { } scenario, err := NewScenario(spec) + require.NoError(t, err) defer scenario.ShutdownAssertNoPanics(t) @@ -94,11 +96,7 @@ func TestResolveMagicDNSExtraRecordsPath(t *testing.T) { b, _ := json.Marshal(extraRecords) err = scenario.CreateHeadscaleEnv([]tsic.Option{ - tsic.WithDockerEntrypoint([]string{ - "/bin/sh", - "-c", - "/bin/sleep 3 ; apk add python3 curl bind-tools ; update-ca-certificates ; tailscaled --tun=tsdev", - }), + tsic.WithPackages("python3", 
"curl", "bind-tools"), }, hsic.WithTestName("extrarecords"), hsic.WithConfigEnv(map[string]string{ diff --git a/integration/dockertestutil/build.go b/integration/dockertestutil/build.go index 635f91ef..dd082d22 100644 --- a/integration/dockertestutil/build.go +++ b/integration/dockertestutil/build.go @@ -1,17 +1,25 @@ package dockertestutil import ( + "context" "os/exec" + "time" ) // RunDockerBuildForDiagnostics runs docker build manually to get detailed error output. // This is used when a docker build fails to provide more detailed diagnostic information // than what dockertest typically provides. -func RunDockerBuildForDiagnostics(contextDir, dockerfile string) string { - cmd := exec.Command("docker", "build", "-f", dockerfile, contextDir) +// +// Returns the build output regardless of success/failure, and an error if the build failed. +func RunDockerBuildForDiagnostics(contextDir, dockerfile string) (string, error) { + // Use a context with timeout to prevent hanging builds + const buildTimeout = 10 * time.Minute + + ctx, cancel := context.WithTimeout(context.Background(), buildTimeout) + defer cancel() + + cmd := exec.CommandContext(ctx, "docker", "build", "--progress=plain", "--no-cache", "-f", dockerfile, contextDir) output, err := cmd.CombinedOutput() - if err != nil { - return string(output) - } - return "" + + return string(output), err } diff --git a/integration/dockertestutil/network.go b/integration/dockertestutil/network.go index 0ec6a69b..42483247 100644 --- a/integration/dockertestutil/network.go +++ b/integration/dockertestutil/network.go @@ -108,6 +108,8 @@ func CleanUnreferencedNetworks(pool *dockertest.Pool) error { } // CleanImagesInCI removes images if running in CI. +// It only removes dangling (untagged) images to avoid forcing rebuilds. +// Tagged images (golang:*, tailscale/tailscale:*, etc.) are automatically preserved. func CleanImagesInCI(pool *dockertest.Pool) error { if !util.IsCI() { log.Println("Skipping image cleanup outside of CI") @@ -119,9 +121,26 @@ func CleanImagesInCI(pool *dockertest.Pool) error { return fmt.Errorf("getting images: %w", err) } + removedCount := 0 for _, image := range images { - log.Printf("removing image: %s, %v", image.ID, image.RepoTags) - _ = pool.Client.RemoveImage(image.ID) + // Only remove dangling (untagged) images to avoid forcing rebuilds + // Dangling images have no RepoTags or only have "<none>:<none>" + if len(image.RepoTags) == 0 || (len(image.RepoTags) == 1 && image.RepoTags[0] == "<none>:<none>") { + log.Printf("Removing dangling image: %s", image.ID[:12]) + + err := pool.Client.RemoveImage(image.ID) + if err != nil { + log.Printf("Warning: failed to remove image %s: %v", image.ID[:12], err) + } else { + removedCount++ + } + } + } + + if removedCount > 0 { + log.Printf("Removed %d dangling images in CI", removedCount) + } else { + log.Println("No dangling images to remove in CI") } return nil diff --git a/integration/dsic/dsic.go b/integration/dsic/dsic.go index dd6c6978..d8a77575 100644 --- a/integration/dsic/dsic.go +++ b/integration/dsic/dsic.go @@ -103,6 +103,38 @@ func WithExtraHosts(hosts []string) Option { } } +// buildEntrypoint builds the container entrypoint command based on configuration. +// It constructs proper wait conditions instead of fixed sleeps: +// 1. Wait for network to be ready +// 2. Wait for TLS cert to be written (always written after container start) +// 3. Wait for CA certs if configured +// 4. Update CA certificates +// 5. Run derper with provided arguments. 
+func (dsic *DERPServerInContainer) buildEntrypoint(derperArgs string) []string { + var commands []string + + // Wait for network to be ready + commands = append(commands, "while ! ip route show default >/dev/null 2>&1; do sleep 0.1; done") + + // Wait for TLS cert to be written (always written after container start) + commands = append(commands, + fmt.Sprintf("while [ ! -f %s/%s.crt ]; do sleep 0.1; done", DERPerCertRoot, dsic.hostname)) + + // If CA certs are configured, wait for them to be written + if len(dsic.caCerts) > 0 { + commands = append(commands, + fmt.Sprintf("while [ ! -f %s/user-0.crt ]; do sleep 0.1; done", caCertRoot)) + } + + // Update CA certificates + commands = append(commands, "update-ca-certificates") + + // Run derper + commands = append(commands, "derper "+derperArgs) + + return []string{"/bin/sh", "-c", strings.Join(commands, " ; ")} +} + // New returns a new TailscaleInContainer instance. func New( pool *dockertest.Pool, @@ -115,7 +147,18 @@ func New( return nil, err } - hostname := fmt.Sprintf("derp-%s-%s", strings.ReplaceAll(version, ".", "-"), hash) + // Include run ID in hostname for easier identification of which test run owns this container + runID := dockertestutil.GetIntegrationRunID() + + var hostname string + + if runID != "" { + // Use last 6 chars of run ID (the random hash part) for brevity + runIDShort := runID[len(runID)-6:] + hostname = fmt.Sprintf("derp-%s-%s-%s", runIDShort, strings.ReplaceAll(version, ".", "-"), hash) + } else { + hostname = fmt.Sprintf("derp-%s-%s", strings.ReplaceAll(version, ".", "-"), hash) + } tlsCert, tlsKey, err := integrationutil.CreateCertificate(hostname) if err != nil { return nil, fmt.Errorf("failed to create certificates for headscale test: %w", err) @@ -150,8 +193,7 @@ func New( Name: hostname, Networks: dsic.networks, ExtraHosts: dsic.withExtraHosts, - // we currently need to give us some time to inject the certificate further down. - Entrypoint: []string{"/bin/sh", "-c", "/bin/sleep 3 ; update-ca-certificates ; derper " + cmdArgs.String()}, + Entrypoint: dsic.buildEntrypoint(cmdArgs.String()), ExposedPorts: []string{ "80/tcp", fmt.Sprintf("%d/tcp", dsic.derpPort), diff --git a/integration/embedded_derp_test.go b/integration/embedded_derp_test.go index 17cb01af..89154f63 100644 --- a/integration/embedded_derp_test.go +++ b/integration/embedded_derp_test.go @@ -178,7 +178,8 @@ func derpServerScenario( t.Logf("Run 1: %d successful pings out of %d", success, len(allClients)*len(allHostnames)) // Let the DERP updater run a couple of times to ensure it does not - // break the DERPMap. + // break the DERPMap. The updater runs on a 10s interval by default. 
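+	// With the 10s interval, the 30-second sleep below gives the updater roughly
+	// three cycles to run.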
+ //nolint:forbidigo // Intentional delay: must wait for DERP updater to run multiple times (interval-based) time.Sleep(30 * time.Second) success = pingDerpAllHelper(t, allClients, allHostnames) diff --git a/integration/general_test.go b/integration/general_test.go index c68768f7..f44a0f03 100644 --- a/integration/general_test.go +++ b/integration/general_test.go @@ -14,6 +14,7 @@ import ( "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/util" "github.com/juanfont/headscale/integration/hsic" + "github.com/juanfont/headscale/integration/integrationutil" "github.com/juanfont/headscale/integration/tsic" "github.com/rs/zerolog/log" "github.com/samber/lo" @@ -366,12 +367,18 @@ func TestPingAllByHostname(t *testing.T) { // This might mean we approach setup slightly wrong, but for now, ignore // the linter // nolint:tparallel +// TestTaildrop tests the Taildrop file sharing functionality across multiple scenarios: +// 1. Same-user transfers: Nodes owned by the same user can send files to each other +// 2. Cross-user transfers: Nodes owned by different users cannot send files to each other +// 3. Tagged device transfers: Tagged devices cannot send nor receive files +// +// Each user gets len(MustTestVersions) nodes to ensure compatibility across all supported versions. func TestTaildrop(t *testing.T) { IntegrationSkip(t) spec := ScenarioSpec{ - NodesPerUser: len(MustTestVersions), - Users: []string{"user1"}, + NodesPerUser: 0, // We'll create nodes manually to control tags + Users: []string{"user1", "user2"}, } scenario, err := NewScenario(spec) @@ -385,16 +392,99 @@ func TestTaildrop(t *testing.T) { ) requireNoErrHeadscaleEnv(t, err) + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + networks := scenario.Networks() + require.NotEmpty(t, networks, "scenario should have at least one network") + network := networks[0] + + // Create untagged nodes for user1 using all test versions + user1Key, err := scenario.CreatePreAuthKey(userMap["user1"].GetId(), true, false) + require.NoError(t, err) + + var user1Clients []TailscaleClient + for i, version := range MustTestVersions { + t.Logf("Creating user1 client %d with version %s", i, version) + client, err := scenario.CreateTailscaleNode( + version, + tsic.WithNetwork(network), + ) + require.NoError(t, err) + + err = client.Login(headscale.GetEndpoint(), user1Key.GetKey()) + require.NoError(t, err) + + err = client.WaitForRunning(integrationutil.PeerSyncTimeout()) + require.NoError(t, err) + + user1Clients = append(user1Clients, client) + scenario.GetOrCreateUser("user1").Clients[client.Hostname()] = client + } + + // Create untagged nodes for user2 using all test versions + user2Key, err := scenario.CreatePreAuthKey(userMap["user2"].GetId(), true, false) + require.NoError(t, err) + + var user2Clients []TailscaleClient + for i, version := range MustTestVersions { + t.Logf("Creating user2 client %d with version %s", i, version) + client, err := scenario.CreateTailscaleNode( + version, + tsic.WithNetwork(network), + ) + require.NoError(t, err) + + err = client.Login(headscale.GetEndpoint(), user2Key.GetKey()) + require.NoError(t, err) + + err = client.WaitForRunning(integrationutil.PeerSyncTimeout()) + require.NoError(t, err) + + user2Clients = append(user2Clients, client) + scenario.GetOrCreateUser("user2").Clients[client.Hostname()] = client + } + + // Create a tagged device (tags-as-identity: tags come from PreAuthKey) + // 
Use "head" version to test latest behavior + taggedKey, err := scenario.CreatePreAuthKeyWithTags(userMap["user1"].GetId(), true, false, []string{"tag:server"}) + require.NoError(t, err) + + taggedClient, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(network), + ) + require.NoError(t, err) + + err = taggedClient.Login(headscale.GetEndpoint(), taggedKey.GetKey()) + require.NoError(t, err) + + err = taggedClient.WaitForRunning(integrationutil.PeerSyncTimeout()) + require.NoError(t, err) + + // Add tagged client to user1 for tracking (though it's tagged, not user-owned) + scenario.GetOrCreateUser("user1").Clients[taggedClient.Hostname()] = taggedClient + allClients, err := scenario.ListTailscaleClients() requireNoErrListClients(t, err) + // Expected: len(MustTestVersions) for user1 + len(MustTestVersions) for user2 + 1 tagged + expectedClientCount := len(MustTestVersions)*2 + 1 + require.Len(t, allClients, expectedClientCount, + "should have %d clients: %d user1 + %d user2 + 1 tagged", + expectedClientCount, len(MustTestVersions), len(MustTestVersions)) + err = scenario.WaitForTailscaleSync() requireNoErrSync(t, err) - // This will essentially fetch and cache all the FQDNs + // Cache FQDNs _, err = scenario.ListTailscaleClientsFQDNs() requireNoErrListFQDN(t, err) + // Install curl on all clients for _, client := range allClients { if !strings.Contains(client.Hostname(), "head") { command := []string{"apk", "add", "curl"} @@ -403,110 +493,269 @@ func TestTaildrop(t *testing.T) { t.Fatalf("failed to install curl on %s, err: %s", client.Hostname(), err) } } + } + + // Helper to get FileTargets for a client. + getFileTargets := func(client TailscaleClient) ([]apitype.FileTarget, error) { curlCommand := []string{ "curl", "--unix-socket", "/var/run/tailscale/tailscaled.sock", "http://local-tailscaled.sock/localapi/v0/file-targets", } + result, _, err := client.Execute(curlCommand) + if err != nil { + return nil, err + } + + var fts []apitype.FileTarget + if err := json.Unmarshal([]byte(result), &fts); err != nil { + return nil, fmt.Errorf("failed to parse file-targets response: %w (response: %s)", err, result) + } + + return fts, nil + } + + // Helper to check if a client is in the FileTargets list + isInFileTargets := func(fts []apitype.FileTarget, targetHostname string) bool { + for _, ft := range fts { + if strings.Contains(ft.Node.Name, targetHostname) { + return true + } + } + return false + } + + // Test 1: Verify user1 nodes can see each other in FileTargets but not user2 nodes or tagged node + t.Run("FileTargets-user1", func(t *testing.T) { + for _, client := range user1Clients { + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + fts, err := getFileTargets(client) + assert.NoError(ct, err) + + // Should see the other user1 clients + for _, peer := range user1Clients { + if peer.Hostname() == client.Hostname() { + continue + } + assert.True(ct, isInFileTargets(fts, peer.Hostname()), + "user1 client %s should see user1 peer %s in FileTargets", client.Hostname(), peer.Hostname()) + } + + // Should NOT see user2 clients + for _, peer := range user2Clients { + assert.False(ct, isInFileTargets(fts, peer.Hostname()), + "user1 client %s should NOT see user2 peer %s in FileTargets", client.Hostname(), peer.Hostname()) + } + + // Should NOT see tagged client + assert.False(ct, isInFileTargets(fts, taggedClient.Hostname()), + "user1 client %s should NOT see tagged client %s in FileTargets", client.Hostname(), taggedClient.Hostname()) + }, 10*time.Second, 1*time.Second) + } + 
}) + + // Test 2: Verify user2 nodes can see each other in FileTargets but not user1 nodes or tagged node + t.Run("FileTargets-user2", func(t *testing.T) { + for _, client := range user2Clients { + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + fts, err := getFileTargets(client) + assert.NoError(ct, err) + + // Should see the other user2 clients + for _, peer := range user2Clients { + if peer.Hostname() == client.Hostname() { + continue + } + assert.True(ct, isInFileTargets(fts, peer.Hostname()), + "user2 client %s should see user2 peer %s in FileTargets", client.Hostname(), peer.Hostname()) + } + + // Should NOT see user1 clients + for _, peer := range user1Clients { + assert.False(ct, isInFileTargets(fts, peer.Hostname()), + "user2 client %s should NOT see user1 peer %s in FileTargets", client.Hostname(), peer.Hostname()) + } + + // Should NOT see tagged client + assert.False(ct, isInFileTargets(fts, taggedClient.Hostname()), + "user2 client %s should NOT see tagged client %s in FileTargets", client.Hostname(), taggedClient.Hostname()) + }, 10*time.Second, 1*time.Second) + } + }) + + // Test 3: Verify tagged device has no FileTargets (empty list) + t.Run("FileTargets-tagged", func(t *testing.T) { assert.EventuallyWithT(t, func(ct *assert.CollectT) { - result, _, err := client.Execute(curlCommand) + fts, err := getFileTargets(taggedClient) assert.NoError(ct, err) - - var fts []apitype.FileTarget - err = json.Unmarshal([]byte(result), &fts) - assert.NoError(ct, err) - - if len(fts) != len(allClients)-1 { - ftStr := fmt.Sprintf("FileTargets for %s:\n", client.Hostname()) - for _, ft := range fts { - ftStr += fmt.Sprintf("\t%s\n", ft.Node.Name) - } - assert.Failf(ct, "client %s does not have all its peers as FileTargets", - "got %d, want: %d\n%s", - len(fts), - len(allClients)-1, - ftStr, - ) - } + assert.Empty(ct, fts, "tagged client %s should have no FileTargets", taggedClient.Hostname()) }, 10*time.Second, 1*time.Second) - } + }) - for _, client := range allClients { - command := []string{"touch", fmt.Sprintf("/tmp/file_from_%s", client.Hostname())} + // Test 4: Same-user file transfer works (user1 -> user1) for all version combinations + t.Run("SameUserTransfer", func(t *testing.T) { + for _, sender := range user1Clients { + // Create file on sender + filename := fmt.Sprintf("file_from_%s", sender.Hostname()) + command := []string{"touch", fmt.Sprintf("/tmp/%s", filename)} + _, _, err := sender.Execute(command) + require.NoError(t, err, "failed to create taildrop file on %s", sender.Hostname()) - if _, _, err := client.Execute(command); err != nil { - t.Fatalf("failed to create taildrop file on %s, err: %s", client.Hostname(), err) - } + for _, receiver := range user1Clients { + if sender.Hostname() == receiver.Hostname() { + continue + } - for _, peer := range allClients { - if client.Hostname() == peer.Hostname() { - continue + receiverFQDN, _ := receiver.FQDN() + + t.Run(fmt.Sprintf("%s->%s", sender.Hostname(), receiver.Hostname()), func(t *testing.T) { + sendCommand := []string{ + "tailscale", "file", "cp", + fmt.Sprintf("/tmp/%s", filename), + fmt.Sprintf("%s:", receiverFQDN), + } + + assert.EventuallyWithT(t, func(ct *assert.CollectT) { + t.Logf("Sending file from %s to %s", sender.Hostname(), receiver.Hostname()) + _, _, err := sender.Execute(sendCommand) + assert.NoError(ct, err) + }, 10*time.Second, 1*time.Second) + }) } + } - // It is safe to ignore this error as we handled it when caching it - peerFQDN, _ := peer.FQDN() + // Receive files on all user1 clients + for _, 
client := range user1Clients { + getCommand := []string{"tailscale", "file", "get", "/tmp/"} + _, _, err := client.Execute(getCommand) + require.NoError(t, err, "failed to get taildrop file on %s", client.Hostname()) - t.Run(fmt.Sprintf("%s-%s", client.Hostname(), peer.Hostname()), func(t *testing.T) { - command := []string{ - "tailscale", "file", "cp", - fmt.Sprintf("/tmp/file_from_%s", client.Hostname()), - fmt.Sprintf("%s:", peerFQDN), + // Verify files from all other user1 clients exist + for _, peer := range user1Clients { + if client.Hostname() == peer.Hostname() { + continue } - assert.EventuallyWithT(t, func(ct *assert.CollectT) { - t.Logf( - "Sending file from %s to %s\n", - client.Hostname(), - peer.Hostname(), - ) - _, _, err := client.Execute(command) - assert.NoError(ct, err) - }, 10*time.Second, 1*time.Second) - }) - } - } - - for _, client := range allClients { - command := []string{ - "tailscale", "file", - "get", - "/tmp/", - } - if _, _, err := client.Execute(command); err != nil { - t.Fatalf("failed to get taildrop file on %s, err: %s", client.Hostname(), err) - } - - for _, peer := range allClients { - if client.Hostname() == peer.Hostname() { - continue + t.Run(fmt.Sprintf("verify-%s-received-from-%s", client.Hostname(), peer.Hostname()), func(t *testing.T) { + lsCommand := []string{"ls", fmt.Sprintf("/tmp/file_from_%s", peer.Hostname())} + result, _, err := client.Execute(lsCommand) + require.NoErrorf(t, err, "failed to ls taildrop file from %s", peer.Hostname()) + assert.Equal(t, fmt.Sprintf("/tmp/file_from_%s\n", peer.Hostname()), result) + }) } - - t.Run(fmt.Sprintf("%s-%s", client.Hostname(), peer.Hostname()), func(t *testing.T) { - command := []string{ - "ls", - fmt.Sprintf("/tmp/file_from_%s", peer.Hostname()), - } - log.Printf( - "Checking file in %s from %s\n", - client.Hostname(), - peer.Hostname(), - ) - - result, _, err := client.Execute(command) - require.NoErrorf(t, err, "failed to execute command to ls taildrop") - - log.Printf("Result for %s: %s\n", peer.Hostname(), result) - if fmt.Sprintf("/tmp/file_from_%s\n", peer.Hostname()) != result { - t.Fatalf( - "taildrop result is not correct %s, wanted %s", - result, - fmt.Sprintf("/tmp/file_from_%s\n", peer.Hostname()), - ) - } - }) } - } + }) + + // Test 5: Cross-user file transfer fails (user1 -> user2) + t.Run("CrossUserTransferBlocked", func(t *testing.T) { + sender := user1Clients[0] + receiver := user2Clients[0] + + // Create file on sender + filename := fmt.Sprintf("cross_user_file_from_%s", sender.Hostname()) + command := []string{"touch", fmt.Sprintf("/tmp/%s", filename)} + _, _, err := sender.Execute(command) + require.NoError(t, err, "failed to create taildrop file on %s", sender.Hostname()) + + // Attempt to send file - this should fail + receiverFQDN, _ := receiver.FQDN() + sendCommand := []string{ + "tailscale", "file", "cp", + fmt.Sprintf("/tmp/%s", filename), + fmt.Sprintf("%s:", receiverFQDN), + } + + t.Logf("Attempting cross-user file send from %s to %s (should fail)", sender.Hostname(), receiver.Hostname()) + _, stderr, err := sender.Execute(sendCommand) + + // The file transfer should fail because user2 is not in user1's FileTargets + // Either the command errors, or it silently fails (check stderr for error message) + if err != nil { + t.Logf("Cross-user transfer correctly failed with error: %v", err) + } else if strings.Contains(stderr, "not a valid peer") || strings.Contains(stderr, "unknown target") { + t.Logf("Cross-user transfer correctly rejected: %s", stderr) + } else { + // Even 
if command succeeded, verify the file was NOT received + getCommand := []string{"tailscale", "file", "get", "/tmp/"} + receiver.Execute(getCommand) + + lsCommand := []string{"ls", fmt.Sprintf("/tmp/%s", filename)} + _, _, lsErr := receiver.Execute(lsCommand) + assert.Error(t, lsErr, "Cross-user file should NOT have been received") + } + }) + + // Test 6: Tagged device cannot send files + t.Run("TaggedCannotSend", func(t *testing.T) { + // Create file on tagged client + filename := fmt.Sprintf("file_from_tagged_%s", taggedClient.Hostname()) + command := []string{"touch", fmt.Sprintf("/tmp/%s", filename)} + _, _, err := taggedClient.Execute(command) + require.NoError(t, err, "failed to create taildrop file on tagged client") + + // Attempt to send to user1 client - should fail because tagged client has no FileTargets + receiver := user1Clients[0] + receiverFQDN, _ := receiver.FQDN() + sendCommand := []string{ + "tailscale", "file", "cp", + fmt.Sprintf("/tmp/%s", filename), + fmt.Sprintf("%s:", receiverFQDN), + } + + t.Logf("Attempting tagged->user file send from %s to %s (should fail)", taggedClient.Hostname(), receiver.Hostname()) + _, stderr, err := taggedClient.Execute(sendCommand) + + if err != nil { + t.Logf("Tagged client send correctly failed with error: %v", err) + } else if strings.Contains(stderr, "not a valid peer") || strings.Contains(stderr, "unknown target") || strings.Contains(stderr, "no matches for") { + t.Logf("Tagged client send correctly rejected: %s", stderr) + } else { + // Verify file was NOT received + getCommand := []string{"tailscale", "file", "get", "/tmp/"} + receiver.Execute(getCommand) + + lsCommand := []string{"ls", fmt.Sprintf("/tmp/%s", filename)} + _, _, lsErr := receiver.Execute(lsCommand) + assert.Error(t, lsErr, "Tagged client's file should NOT have been received") + } + }) + + // Test 7: Tagged device cannot receive files (user1 tries to send to tagged) + t.Run("TaggedCannotReceive", func(t *testing.T) { + sender := user1Clients[0] + + // Create file on sender + filename := fmt.Sprintf("file_to_tagged_from_%s", sender.Hostname()) + command := []string{"touch", fmt.Sprintf("/tmp/%s", filename)} + _, _, err := sender.Execute(command) + require.NoError(t, err, "failed to create taildrop file on %s", sender.Hostname()) + + // Attempt to send to tagged client - should fail because tagged is not in user1's FileTargets + taggedFQDN, _ := taggedClient.FQDN() + sendCommand := []string{ + "tailscale", "file", "cp", + fmt.Sprintf("/tmp/%s", filename), + fmt.Sprintf("%s:", taggedFQDN), + } + + t.Logf("Attempting user->tagged file send from %s to %s (should fail)", sender.Hostname(), taggedClient.Hostname()) + _, stderr, err := sender.Execute(sendCommand) + + if err != nil { + t.Logf("Send to tagged client correctly failed with error: %v", err) + } else if strings.Contains(stderr, "not a valid peer") || strings.Contains(stderr, "unknown target") || strings.Contains(stderr, "no matches for") { + t.Logf("Send to tagged client correctly rejected: %s", stderr) + } else { + // Verify file was NOT received by tagged client + getCommand := []string{"tailscale", "file", "get", "/tmp/"} + taggedClient.Execute(getCommand) + + lsCommand := []string{"ls", fmt.Sprintf("/tmp/%s", filename)} + _, _, lsErr := taggedClient.Execute(lsCommand) + assert.Error(t, lsErr, "File to tagged client should NOT have been received") + } + }) } func TestUpdateHostnameFromClient(t *testing.T) { diff --git a/integration/helpers.go b/integration/helpers.go index 133a175b..7d40c8e6 100644 --- 
a/integration/helpers.go +++ b/integration/helpers.go @@ -56,13 +56,6 @@ type NodeSystemStatus struct { NodeStore bool } -// requireNotNil validates that an object is not nil and fails the test if it is. -// This helper provides consistent error messaging for nil checks in integration tests. -func requireNotNil(t *testing.T, object interface{}) { - t.Helper() - require.NotNil(t, object) -} - // requireNoErrHeadscaleEnv validates that headscale environment creation succeeded. // Provides specific error context for headscale environment setup failures. func requireNoErrHeadscaleEnv(t *testing.T, err error) { diff --git a/integration/hsic/hsic.go b/integration/hsic/hsic.go index 775e7937..42bb8e93 100644 --- a/integration/hsic/hsic.go +++ b/integration/hsic/hsic.go @@ -10,6 +10,7 @@ import ( "fmt" "io" "log" + "maps" "net/http" "net/netip" "os" @@ -32,7 +33,6 @@ import ( "github.com/ory/dockertest/v3" "github.com/ory/dockertest/v3/docker" "gopkg.in/yaml.v3" - "tailscale.com/envknob" "tailscale.com/tailcfg" "tailscale.com/util/mak" ) @@ -48,7 +48,12 @@ const ( IntegrationTestDockerFileName = "Dockerfile.integration" ) -var errHeadscaleStatusCodeNotOk = errors.New("headscale status code not ok") +var ( + errHeadscaleStatusCodeNotOk = errors.New("headscale status code not ok") + errInvalidHeadscaleImageFormat = errors.New("invalid HEADSCALE_INTEGRATION_HEADSCALE_IMAGE format, expected repository:tag") + errHeadscaleImageRequiredInCI = errors.New("HEADSCALE_INTEGRATION_HEADSCALE_IMAGE must be set in CI") + errInvalidPostgresImageFormat = errors.New("invalid HEADSCALE_INTEGRATION_POSTGRES_IMAGE format, expected repository:tag") +) type fileInContainer struct { path string @@ -69,7 +74,7 @@ type HeadscaleInContainer struct { // optional config port int extraPorts []string - debugPort int + hostMetricsPort string // Dynamically assigned host port for metrics/pprof access caCerts [][]byte hostPortBindings map[string][]string aclPolicy *policyv2.Policy @@ -132,9 +137,7 @@ func WithCustomTLS(cert, key []byte) Option { // can be used to override Headscale configuration. func WithConfigEnv(configEnv map[string]string) Option { return func(hsic *HeadscaleInContainer) { - for key, value := range configEnv { - hsic.env[key] = value - } + maps.Copy(hsic.env, configEnv) } } @@ -282,26 +285,39 @@ func WithDERPAsIP() Option { } } -// WithDebugPort sets the debug port for delve debugging. -func WithDebugPort(port int) Option { - return func(hsic *HeadscaleInContainer) { - hsic.debugPort = port - } -} - // buildEntrypoint builds the container entrypoint command based on configuration. +// It constructs proper wait conditions instead of fixed sleeps: +// 1. Wait for network to be ready +// 2. Wait for config.yaml (always written after container start) +// 3. Wait for CA certs if configured +// 4. Update CA certificates +// 5. Run headscale serve +// 6. Sleep at end to keep container alive for log collection on shutdown. func (hsic *HeadscaleInContainer) buildEntrypoint() []string { - debugCmd := fmt.Sprintf( - "/go/bin/dlv --listen=0.0.0.0:%d --headless=true --api-version=2 --accept-multiclient --allow-non-terminal-interactive=true exec /go/bin/headscale --continue -- serve", - hsic.debugPort, - ) + var commands []string - entrypoint := fmt.Sprintf( - "/bin/sleep 3 ; update-ca-certificates ; %s ; /bin/sleep 30", - debugCmd, - ) + // Wait for network to be ready + commands = append(commands, "while ! 
ip route show default >/dev/null 2>&1; do sleep 0.1; done") - return []string{"/bin/bash", "-c", entrypoint} + // Wait for config.yaml to be written (always written after container start) + commands = append(commands, "while [ ! -f /etc/headscale/config.yaml ]; do sleep 0.1; done") + + // If CA certs are configured, wait for them to be written + if len(hsic.caCerts) > 0 { + commands = append(commands, + fmt.Sprintf("while [ ! -f %s/user-0.crt ]; do sleep 0.1; done", caCertRoot)) + } + + // Update CA certificates + commands = append(commands, "update-ca-certificates") + + // Run headscale serve + commands = append(commands, "/usr/local/bin/headscale serve") + + // Keep container alive after headscale exits for log collection + commands = append(commands, "/bin/sleep 30") + + return []string{"/bin/bash", "-c", strings.Join(commands, " ; ")} } // New returns a new HeadscaleInContainer instance. @@ -315,20 +331,22 @@ func New( return nil, err } - hostname := "hs-" + hash + // Include run ID in hostname for easier identification of which test run owns this container + runID := dockertestutil.GetIntegrationRunID() - // Get debug port from environment or use default - debugPort := 40000 - if envDebugPort := envknob.String("HEADSCALE_DEBUG_PORT"); envDebugPort != "" { - if port, err := strconv.Atoi(envDebugPort); err == nil { - debugPort = port - } + var hostname string + + if runID != "" { + // Use last 6 chars of run ID (the random hash part) for brevity + runIDShort := runID[len(runID)-6:] + hostname = fmt.Sprintf("hs-%s-%s", runIDShort, hash) + } else { + hostname = "hs-" + hash } hsic := &HeadscaleInContainer{ - hostname: hostname, - port: headscaleDefaultPort, - debugPort: debugPort, + hostname: hostname, + port: headscaleDefaultPort, pool: pool, networks: networks, @@ -345,7 +363,6 @@ func New( log.Println("NAME: ", hsic.hostname) portProto := fmt.Sprintf("%d/tcp", hsic.port) - debugPortProto := fmt.Sprintf("%d/tcp", hsic.debugPort) headscaleBuildOptions := &dockertest.BuildOptions{ Dockerfile: IntegrationTestDockerFileName, @@ -360,10 +377,24 @@ func New( hsic.env["HEADSCALE_DATABASE_POSTGRES_NAME"] = "headscale" delete(hsic.env, "HEADSCALE_DATABASE_SQLITE_PATH") + // Determine postgres image - use prebuilt if available, otherwise pull from registry + pgRepo := "postgres" + pgTag := "latest" + + if prebuiltImage := os.Getenv("HEADSCALE_INTEGRATION_POSTGRES_IMAGE"); prebuiltImage != "" { + repo, tag, found := strings.Cut(prebuiltImage, ":") + if !found { + return nil, errInvalidPostgresImageFormat + } + + pgRepo = repo + pgTag = tag + } + pgRunOptions := &dockertest.RunOptions{ Name: "postgres-" + hash, - Repository: "postgres", - Tag: "latest", + Repository: pgRepo, + Tag: pgTag, Networks: networks, Env: []string{ "POSTGRES_USER=headscale", @@ -410,7 +441,7 @@ func New( runOptions := &dockertest.RunOptions{ Name: hsic.hostname, - ExposedPorts: append([]string{portProto, debugPortProto, "9090/tcp"}, hsic.extraPorts...), + ExposedPorts: append([]string{portProto, "9090/tcp"}, hsic.extraPorts...), Networks: networks, // Cmd: []string{"headscale", "serve"}, // TODO(kradalby): Get rid of this hack, we currently need to give us some @@ -419,15 +450,13 @@ func New( Env: env, } - // Always bind debug port and metrics port to predictable host ports + // Bind metrics port to dynamic host port (kernel assigns free port) if runOptions.PortBindings == nil { runOptions.PortBindings = map[docker.Port][]docker.PortBinding{} } - runOptions.PortBindings[docker.Port(debugPortProto)] = []docker.PortBinding{ - 
{HostPort: strconv.Itoa(hsic.debugPort)}, - } + runOptions.PortBindings["9090/tcp"] = []docker.PortBinding{ - {HostPort: "49090"}, + {HostPort: "0"}, // Let kernel assign a free port } if len(hsic.hostPortBindings) > 0 { @@ -452,30 +481,85 @@ func New( // Add integration test labels if running under hi tool dockertestutil.DockerAddIntegrationLabels(runOptions, "headscale") - container, err := pool.BuildAndRunWithBuildOptions( - headscaleBuildOptions, - runOptions, - dockertestutil.DockerRestartPolicy, - dockertestutil.DockerAllowLocalIPv6, - dockertestutil.DockerAllowNetworkAdministration, - ) - if err != nil { - // Try to get more detailed build output - log.Printf("Docker build failed, attempting to get detailed output...") - buildOutput := dockertestutil.RunDockerBuildForDiagnostics(dockerContextPath, IntegrationTestDockerFileName) - if buildOutput != "" { - return nil, fmt.Errorf("could not start headscale container: %w\n\nDetailed build output:\n%s", err, buildOutput) + var container *dockertest.Resource + + // Check if a pre-built image is available via environment variable + prebuiltImage := os.Getenv("HEADSCALE_INTEGRATION_HEADSCALE_IMAGE") + + if prebuiltImage != "" { + log.Printf("Using pre-built headscale image: %s", prebuiltImage) + // Parse image into repository and tag + repo, tag, ok := strings.Cut(prebuiltImage, ":") + if !ok { + return nil, errInvalidHeadscaleImageFormat + } + + runOptions.Repository = repo + runOptions.Tag = tag + + container, err = pool.RunWithOptions( + runOptions, + dockertestutil.DockerRestartPolicy, + dockertestutil.DockerAllowLocalIPv6, + dockertestutil.DockerAllowNetworkAdministration, + ) + if err != nil { + return nil, fmt.Errorf("could not run pre-built headscale container %q: %w", prebuiltImage, err) + } + } else if util.IsCI() { + return nil, errHeadscaleImageRequiredInCI + } else { + container, err = pool.BuildAndRunWithBuildOptions( + headscaleBuildOptions, + runOptions, + dockertestutil.DockerRestartPolicy, + dockertestutil.DockerAllowLocalIPv6, + dockertestutil.DockerAllowNetworkAdministration, + ) + if err != nil { + // Try to get more detailed build output + log.Printf("Docker build/run failed, attempting to get detailed output...") + + buildOutput, buildErr := dockertestutil.RunDockerBuildForDiagnostics(dockerContextPath, IntegrationTestDockerFileName) + + // Show the last 100 lines of build output to avoid overwhelming the logs + lines := strings.Split(buildOutput, "\n") + + const maxLines = 100 + + startLine := 0 + if len(lines) > maxLines { + startLine = len(lines) - maxLines + } + + relevantOutput := strings.Join(lines[startLine:], "\n") + + if buildErr != nil { + // The diagnostic build also failed - this is the real error + return nil, fmt.Errorf("could not start headscale container: %w\n\nDocker build failed. Last %d lines of output:\n%s", err, maxLines, relevantOutput) + } + + if buildOutput != "" { + // Build succeeded on retry but container creation still failed + return nil, fmt.Errorf("could not start headscale container: %w\n\nDocker build succeeded on retry, but container creation failed. 
Last %d lines of build output:\n%s", err, maxLines, relevantOutput) + } + + // No output at all - diagnostic build command may have failed + return nil, fmt.Errorf("could not start headscale container: %w\n\nUnable to get diagnostic build output (command may have failed silently)", err) } - return nil, fmt.Errorf("could not start headscale container: %w", err) } log.Printf("Created %s container\n", hsic.hostname) hsic.container = container + // Get the dynamically assigned host port for metrics/pprof + hsic.hostMetricsPort = container.GetHostPort("9090/tcp") + log.Printf( - "Debug ports for %s: delve=%s, metrics/pprof=49090\n", + "Headscale %s metrics available at http://localhost:%s/metrics (debug at http://localhost:%s/debug/)\n", hsic.hostname, - hsic.GetHostDebugPort(), + hsic.hostMetricsPort, + hsic.hostMetricsPort, ) // Write the CA certificates to the container @@ -865,14 +949,11 @@ func (t *HeadscaleInContainer) GetPort() string { return strconv.Itoa(t.port) } -// GetDebugPort returns the debug port as a string. -func (t *HeadscaleInContainer) GetDebugPort() string { - return strconv.Itoa(t.debugPort) -} - -// GetHostDebugPort returns the host port mapped to the debug port. -func (t *HeadscaleInContainer) GetHostDebugPort() string { - return strconv.Itoa(t.debugPort) +// GetHostMetricsPort returns the dynamically assigned host port for metrics/pprof access. +// This port can be used by operators to access metrics at http://localhost:{port}/metrics +// and debug endpoints at http://localhost:{port}/debug/ while tests are running. +func (t *HeadscaleInContainer) GetHostMetricsPort() string { + return t.hostMetricsPort } // GetHealthEndpoint returns a health endpoint for the HeadscaleInContainer @@ -986,33 +1067,52 @@ func (t *HeadscaleInContainer) CreateUser( return &u, nil } -// CreateAuthKey creates a new "authorisation key" for a User that can be used -// to authorise a TailscaleClient with the Headscale instance. -func (t *HeadscaleInContainer) CreateAuthKey( - user uint64, - reusable bool, - ephemeral bool, -) (*v1.PreAuthKey, error) { +// AuthKeyOptions defines options for creating an auth key. +type AuthKeyOptions struct { + // User is the user ID that owns the auth key. If nil and Tags are specified, + // the auth key is owned by the tags only (tags-as-identity model). + User *uint64 + // Reusable indicates if the key can be used multiple times + Reusable bool + // Ephemeral indicates if nodes registered with this key should be ephemeral + Ephemeral bool + // Tags are the tags to assign to the auth key + Tags []string +} + +// CreateAuthKeyWithOptions creates a new "authorisation key" with the specified options. +// This supports both user-owned and tags-only auth keys. 
+func (t *HeadscaleInContainer) CreateAuthKeyWithOptions(opts AuthKeyOptions) (*v1.PreAuthKey, error) { command := []string{ "headscale", - "--user", - strconv.FormatUint(user, 10), + } + + // Only add --user flag if User is specified + if opts.User != nil { + command = append(command, "--user", strconv.FormatUint(*opts.User, 10)) + } + + command = append(command, "preauthkeys", "create", "--expiration", "24h", "--output", "json", - } + ) - if reusable { + if opts.Reusable { command = append(command, "--reusable") } - if ephemeral { + if opts.Ephemeral { command = append(command, "--ephemeral") } + if len(opts.Tags) > 0 { + command = append(command, "--tags", strings.Join(opts.Tags, ",")) + } + result, _, err := dockertestutil.ExecuteCommand( t.container, command, @@ -1023,6 +1123,7 @@ func (t *HeadscaleInContainer) CreateAuthKey( } var preAuthKey v1.PreAuthKey + err = json.Unmarshal([]byte(result), &preAuthKey) if err != nil { return nil, fmt.Errorf("failed to unmarshal auth key: %w", err) @@ -1031,6 +1132,62 @@ func (t *HeadscaleInContainer) CreateAuthKey( return &preAuthKey, nil } +// CreateAuthKey creates a new "authorisation key" for a User that can be used +// to authorise a TailscaleClient with the Headscale instance. +func (t *HeadscaleInContainer) CreateAuthKey( + user uint64, + reusable bool, + ephemeral bool, +) (*v1.PreAuthKey, error) { + return t.CreateAuthKeyWithOptions(AuthKeyOptions{ + User: &user, + Reusable: reusable, + Ephemeral: ephemeral, + }) +} + +// CreateAuthKeyWithTags creates a new "authorisation key" for a User with the specified tags. +// This is used to create tagged PreAuthKeys for testing the tags-as-identity model. +func (t *HeadscaleInContainer) CreateAuthKeyWithTags( + user uint64, + reusable bool, + ephemeral bool, + tags []string, +) (*v1.PreAuthKey, error) { + return t.CreateAuthKeyWithOptions(AuthKeyOptions{ + User: &user, + Reusable: reusable, + Ephemeral: ephemeral, + Tags: tags, + }) +} + +// DeleteAuthKey deletes an "authorisation key" by ID. +func (t *HeadscaleInContainer) DeleteAuthKey( + id uint64, +) error { + command := []string{ + "headscale", + "preauthkeys", + "delete", + "--id", + strconv.FormatUint(id, 10), + "--output", + "json", + } + + _, _, err := dockertestutil.ExecuteCommand( + t.container, + command, + []string{}, + ) + if err != nil { + return fmt.Errorf("failed to execute delete auth key command: %w", err) + } + + return nil +} + // ListNodes lists the currently registered Nodes in headscale. // Optionally a list of usernames can be passed to get users for // specific users. @@ -1176,6 +1333,31 @@ func (t *HeadscaleInContainer) MapUsers() (map[string]*v1.User, error) { return userMap, nil } +// DeleteUser deletes a user from the Headscale instance. +func (t *HeadscaleInContainer) DeleteUser(userID uint64) error { + command := []string{ + "headscale", + "users", + "delete", + "--identifier", + strconv.FormatUint(userID, 10), + "--force", + "--output", + "json", + } + + _, _, err := dockertestutil.ExecuteCommand( + t.container, + command, + []string{}, + ) + if err != nil { + return fmt.Errorf("failed to execute delete user command: %w", err) + } + + return nil +} + func (h *HeadscaleInContainer) SetPolicy(pol *policyv2.Policy) error { err := h.writePolicy(pol) if err != nil { @@ -1320,6 +1502,36 @@ func (t *HeadscaleInContainer) ApproveRoutes(id uint64, routes []netip.Prefix) ( return node, nil } +// SetNodeTags sets tags on a node via the headscale CLI. 
+// This simulates what the Tailscale admin console UI does - it calls the headscale +// SetTags API which is exposed via the CLI command: headscale nodes tag -i <id> -t <tags>. +func (t *HeadscaleInContainer) SetNodeTags(nodeID uint64, tags []string) error { + command := []string{ + "headscale", "nodes", "tag", + "--identifier", strconv.FormatUint(nodeID, 10), + "--output", "json", + } + + // Add tags - the CLI expects -t flag for each tag or comma-separated + if len(tags) > 0 { + command = append(command, "--tags", strings.Join(tags, ",")) + } else { + // Empty tags to clear all tags + command = append(command, "--tags", "") + } + + _, _, err := dockertestutil.ExecuteCommand( + t.container, + command, + []string{}, + ) + if err != nil { + return fmt.Errorf("failed to execute set tags command (node %d, tags %v): %w", nodeID, tags, err) + } + + return nil +} + // WriteFile save file inside the Headscale container. func (t *HeadscaleInContainer) WriteFile(path string, data []byte) error { return integrationutil.WriteFileToContainer(t.pool, t.container, path, data) diff --git a/integration/route_test.go b/integration/route_test.go index 867aa9b7..0460b5ef 100644 --- a/integration/route_test.go +++ b/integration/route_test.go @@ -4,6 +4,7 @@ import ( "cmp" "encoding/json" "fmt" + "maps" "net/netip" "slices" "sort" @@ -20,11 +21,11 @@ import ( "github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/util" "github.com/juanfont/headscale/integration/hsic" + "github.com/juanfont/headscale/integration/integrationutil" "github.com/juanfont/headscale/integration/tsic" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" xmaps "golang.org/x/exp/maps" - "tailscale.com/envknob" "tailscale.com/ipn/ipnstate" "tailscale.com/net/tsaddr" "tailscale.com/tailcfg" @@ -1978,6 +1979,11 @@ func MustFindNode(hostname string, nodes []*v1.Node) *v1.Node { // - Verify that routes can now be seen by peers. func TestAutoApproveMultiNetwork(t *testing.T) { IntegrationSkip(t) + + // Timeout for EventuallyWithT assertions. + // Set generously to account for CI infrastructure variability. 
+ assertTimeout := 60 * time.Second + bigRoute := netip.MustParsePrefix("10.42.0.0/16") subRoute := netip.MustParsePrefix("10.42.7.0/24") notApprovedRoute := netip.MustParsePrefix("192.168.0.0/24") @@ -2216,31 +2222,24 @@ func TestAutoApproveMultiNetwork(t *testing.T) { }, } - // Check if we should run the full matrix of tests - // By default, we only run a minimal subset to avoid overwhelming Docker/disk - // Set HEADSCALE_INTEGRATION_FULL_MATRIX=1 to run all combinations - fullMatrix := envknob.Bool("HEADSCALE_INTEGRATION_FULL_MATRIX") - - // Minimal test set: 3 tests covering all key dimensions - // - Both auth methods (authkey, webauth) - // - All 3 approver types (tag, user, group) - // - Both policy modes (database, file) - // - Both advertiseDuringUp values (true, false) - minimalTestSet := map[string]bool{ - "authkey-tag-advertiseduringup-false-pol-database": true, // authkey + database + tag + false - "webauth-user-advertiseduringup-true-pol-file": true, // webauth + file + user + true - "authkey-group-advertiseduringup-false-pol-file": true, // authkey + file + group + false - } - for _, tt := range tests { for _, polMode := range []types.PolicyMode{types.PolicyModeDB, types.PolicyModeFile} { for _, advertiseDuringUp := range []bool{false, true} { name := fmt.Sprintf("%s-advertiseduringup-%t-pol-%s", tt.name, advertiseDuringUp, polMode) t.Run(name, func(t *testing.T) { - // Skip tests not in minimal set unless full matrix is enabled - if !fullMatrix && !minimalTestSet[name] { - t.Skip("Skipping to reduce test matrix size. Set HEADSCALE_INTEGRATION_FULL_MATRIX=1 to run all tests.") + // Create a deep copy of the policy to avoid mutating the shared test case. + // Each subtest modifies AutoApprovers.Routes (add then delete), so we need + // an isolated copy to prevent state leakage between sequential test runs. 
+ pol := &policyv2.Policy{ + ACLs: slices.Clone(tt.pol.ACLs), + Groups: maps.Clone(tt.pol.Groups), + TagOwners: maps.Clone(tt.pol.TagOwners), + AutoApprovers: policyv2.AutoApproverPolicy{ + ExitNode: slices.Clone(tt.pol.AutoApprovers.ExitNode), + Routes: maps.Clone(tt.pol.AutoApprovers.Routes), + }, } + scenario, err := NewScenario(tt.spec) require.NoErrorf(t, err, "failed to create scenario: %s", err) defer scenario.ShutdownAssertNoPanics(t) @@ -2250,7 +2249,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { hsic.WithTestName("autoapprovemulti"), hsic.WithEmbeddedDERPServerOnly(), hsic.WithTLS(), - hsic.WithACLPolicy(tt.pol), + hsic.WithACLPolicy(pol), hsic.WithPolicyMode(polMode), } @@ -2258,16 +2257,25 @@ func TestAutoApproveMultiNetwork(t *testing.T) { tsic.WithAcceptRoutes(), } - if tt.approver == "tag:approve" { - tsOpts = append(tsOpts, - tsic.WithTags([]string{"tag:approve"}), - ) - } - route, err := scenario.SubnetOfNetwork("usernet1") require.NoError(t, err) - err = scenario.createHeadscaleEnv(tt.withURL, tsOpts, + // For tag-based approvers, nodes must be tagged with that tag + // (tags-as-identity model: tagged nodes are identified by their tags) + var ( + preAuthKeyTags []string + webauthTagUser string + ) + + if strings.HasPrefix(tt.approver, "tag:") { + preAuthKeyTags = []string{tt.approver} + if tt.withURL { + // For webauth, only user1 can request tags (per tagOwners policy) + webauthTagUser = "user1" + } + } + + err = scenario.createHeadscaleEnvWithTags(tt.withURL, tsOpts, preAuthKeyTags, webauthTagUser, opts..., ) requireNoErrHeadscaleEnv(t, err) @@ -2300,12 +2308,10 @@ func TestAutoApproveMultiNetwork(t *testing.T) { default: approvers = append(approvers, usernameApprover(tt.approver)) } - if tt.pol.AutoApprovers.Routes == nil { - tt.pol.AutoApprovers.Routes = make(map[netip.Prefix]policyv2.AutoApprovers) - } + // pol.AutoApprovers.Routes is already initialized in the deep copy above prefix := *route - tt.pol.AutoApprovers.Routes[prefix] = approvers - err = headscale.SetPolicy(tt.pol) + pol.AutoApprovers.Routes[prefix] = approvers + err = headscale.SetPolicy(pol) require.NoError(t, err) if advertiseDuringUp { @@ -2314,6 +2320,12 @@ func TestAutoApproveMultiNetwork(t *testing.T) { ) } + // For webauth with tag approver, the node needs to advertise the tag during registration + // (tags-as-identity model: webauth nodes can use --advertise-tags if authorized by tagOwners) + if tt.withURL && strings.HasPrefix(tt.approver, "tag:") { + tsOpts = append(tsOpts, tsic.WithTags([]string{tt.approver})) + } + tsOpts = append(tsOpts, tsic.WithNetwork(usernet1)) // This whole dance is to add a node _after_ all the other nodes @@ -2323,7 +2335,11 @@ func TestAutoApproveMultiNetwork(t *testing.T) { // into a HA node, which isn't something we are testing here. routerUsernet1, err := scenario.CreateTailscaleNode("head", tsOpts...) require.NoError(t, err) - defer routerUsernet1.Shutdown() + + defer func() { + _, _, err := routerUsernet1.Shutdown() + require.NoError(t, err) + }() if tt.withURL { u, err := routerUsernet1.LoginWithURL(headscale.GetEndpoint()) @@ -2332,12 +2348,26 @@ func TestAutoApproveMultiNetwork(t *testing.T) { body, err := doLoginURL(routerUsernet1.Hostname(), u) require.NoError(t, err) - scenario.runHeadscaleRegister("user1", body) + err = scenario.runHeadscaleRegister("user1", body) + require.NoError(t, err) + + // Wait for the client to sync with the server after webauth registration. 
+ // Unlike authkey login which blocks until complete, webauth registration + // happens on the server side and the client needs time to receive the network map. + err = routerUsernet1.WaitForRunning(integrationutil.PeerSyncTimeout()) + require.NoError(t, err, "webauth client failed to reach Running state") } else { userMap, err := headscale.MapUsers() require.NoError(t, err) - pak, err := scenario.CreatePreAuthKey(userMap["user1"].GetId(), false, false) + // If the approver is a tag, create a tagged PreAuthKey + // (tags-as-identity model: tags come from PreAuthKey, not --advertise-tags) + var pak *v1.PreAuthKey + if strings.HasPrefix(tt.approver, "tag:") { + pak, err = scenario.CreatePreAuthKeyWithTags(userMap["user1"].GetId(), false, false, []string{tt.approver}) + } else { + pak, err = scenario.CreatePreAuthKey(userMap["user1"].GetId(), false, false) + } require.NoError(t, err) err = routerUsernet1.Login(headscale.GetEndpoint(), pak.GetKey()) @@ -2345,6 +2375,26 @@ func TestAutoApproveMultiNetwork(t *testing.T) { } // extra creation end. + // Wait for the node to be fully running before getting its ID + // This is especially important for webauth flow where login is asynchronous + err = routerUsernet1.WaitForRunning(30 * time.Second) + require.NoError(t, err) + + // Wait for bidirectional peer synchronization. + // Both the router and all existing clients must see each other. + // This is critical for connectivity - without this, the WireGuard + // tunnels may not be established despite peers appearing in netmaps. + + // Router waits for all existing clients + err = routerUsernet1.WaitForPeers(len(allClients), 60*time.Second, 1*time.Second) + require.NoError(t, err, "router failed to see all peers") + + // All clients wait for the router (they should see 6 peers including the router) + for _, existingClient := range allClients { + err = existingClient.WaitForPeers(len(allClients), 60*time.Second, 1*time.Second) + require.NoErrorf(t, err, "client %s failed to see all peers including router", existingClient.Hostname()) + } + routerUsernet1ID := routerUsernet1.MustID() web := services[0] @@ -2379,7 +2429,11 @@ func TestAutoApproveMultiNetwork(t *testing.T) { require.NoErrorf(t, err, "failed to advertise route: %s", err) } - // Wait for route state changes to propagate + // Wait for route state changes to propagate. + // Use a longer timeout (30s) to account for CI infrastructure variability - + // when advertiseDuringUp=true, routes are sent during registration and may + // take longer to propagate through the server's auto-approval logic in slow + // environments. assert.EventuallyWithT(t, func(c *assert.CollectT) { // These route should auto approve, so the node is expected to have a route // for all counts. @@ -2394,7 +2448,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { routerNode.GetSubnetRoutes()) requireNodeRouteCountWithCollect(c, routerNode, 1, 1, 1) - }, 10*time.Second, 500*time.Millisecond, "Initial route auto-approval: Route should be approved via policy") + }, assertTimeout, 500*time.Millisecond, "Initial route auto-approval: Route should be approved via policy") // Verify that the routes have been sent to the client. 
assert.EventuallyWithT(t, func(c *assert.CollectT) { @@ -2427,7 +2481,22 @@ func TestAutoApproveMultiNetwork(t *testing.T) { } assert.True(c, routerPeerFound, "Client should see the router peer") - }, 5*time.Second, 200*time.Millisecond, "Verifying routes sent to client after auto-approval") + }, assertTimeout, 200*time.Millisecond, "Verifying routes sent to client after auto-approval") + + // Verify WireGuard tunnel connectivity to the router before testing route. + // The client may have the route in its netmap but the actual tunnel may not + // be established yet, especially in CI environments with higher latency. + routerIPv4, err := routerUsernet1.IPv4() + require.NoError(t, err, "failed to get router IPv4") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + err := client.Ping( + routerIPv4.String(), + tsic.WithPingUntilDirect(false), // DERP relay is fine + tsic.WithPingCount(1), + tsic.WithPingTimeout(5*time.Second), + ) + assert.NoError(c, err, "ping to router should succeed") + }, assertTimeout, 200*time.Millisecond, "Verifying WireGuard tunnel to router is established") url := fmt.Sprintf("http://%s/etc/hostname", webip) t.Logf("url from %s to %s", client.Hostname(), url) @@ -2436,7 +2505,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { result, err := client.Curl(url) assert.NoError(c, err) assert.Len(c, result, 13) - }, 20*time.Second, 200*time.Millisecond, "Verifying client can reach webservice through auto-approved route") + }, assertTimeout, 200*time.Millisecond, "Verifying client can reach webservice through auto-approved route") assert.EventuallyWithT(t, func(c *assert.CollectT) { tr, err := client.Traceroute(webip) @@ -2446,12 +2515,12 @@ func TestAutoApproveMultiNetwork(t *testing.T) { return } assertTracerouteViaIPWithCollect(c, tr, ip) - }, 20*time.Second, 200*time.Millisecond, "Verifying traceroute goes through auto-approved router") + }, assertTimeout, 200*time.Millisecond, "Verifying traceroute goes through auto-approved router") // Remove the auto approval from the policy, any routes already enabled should be allowed. prefix = *route - delete(tt.pol.AutoApprovers.Routes, prefix) - err = headscale.SetPolicy(tt.pol) + delete(pol.AutoApprovers.Routes, prefix) + err = headscale.SetPolicy(pol) require.NoError(t, err) t.Logf("Policy updated: removed auto-approver for route %s", prefix) @@ -2469,7 +2538,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { routerNode.GetSubnetRoutes()) requireNodeRouteCountWithCollect(c, routerNode, 1, 1, 1) - }, 10*time.Second, 500*time.Millisecond, "Routes should remain approved after auto-approver removal") + }, assertTimeout, 500*time.Millisecond, "Routes should remain approved after auto-approver removal") // Verify that the routes have been sent to the client. 
assert.EventuallyWithT(t, func(c *assert.CollectT) { @@ -2489,7 +2558,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { requirePeerSubnetRoutesWithCollect(c, peerStatus, nil) } } - }, 5*time.Second, 200*time.Millisecond, "Verifying routes remain after policy change") + }, assertTimeout, 200*time.Millisecond, "Verifying routes remain after policy change") url = fmt.Sprintf("http://%s/etc/hostname", webip) t.Logf("url from %s to %s", client.Hostname(), url) @@ -2498,7 +2567,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { result, err := client.Curl(url) assert.NoError(c, err) assert.Len(c, result, 13) - }, 20*time.Second, 200*time.Millisecond, "Verifying client can still reach webservice after policy change") + }, assertTimeout, 200*time.Millisecond, "Verifying client can still reach webservice after policy change") assert.EventuallyWithT(t, func(c *assert.CollectT) { tr, err := client.Traceroute(webip) @@ -2508,7 +2577,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { return } assertTracerouteViaIPWithCollect(c, tr, ip) - }, 20*time.Second, 200*time.Millisecond, "Verifying traceroute still goes through router after policy change") + }, assertTimeout, 200*time.Millisecond, "Verifying traceroute still goes through router after policy change") // Disable the route, making it unavailable since it is no longer auto-approved _, err = headscale.ApproveRoutes( @@ -2524,7 +2593,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { nodes, err = headscale.ListNodes() assert.NoError(c, err) requireNodeRouteCountWithCollect(c, MustFindNode(routerUsernet1.Hostname(), nodes), 1, 0, 0) - }, 10*time.Second, 500*time.Millisecond, "route state changes should propagate") + }, assertTimeout, 500*time.Millisecond, "route state changes should propagate") // Verify that the routes have been sent to the client. assert.EventuallyWithT(t, func(c *assert.CollectT) { @@ -2535,7 +2604,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { peerStatus := status.Peer[peerKey] requirePeerSubnetRoutesWithCollect(c, peerStatus, nil) } - }, 5*time.Second, 200*time.Millisecond, "Verifying routes disabled after route removal") + }, assertTimeout, 200*time.Millisecond, "Verifying routes disabled after route removal") // Add the route back to the auto approver in the policy, the route should // now become available again. @@ -2548,12 +2617,10 @@ func TestAutoApproveMultiNetwork(t *testing.T) { default: newApprovers = append(newApprovers, usernameApprover(tt.approver)) } - if tt.pol.AutoApprovers.Routes == nil { - tt.pol.AutoApprovers.Routes = make(map[netip.Prefix]policyv2.AutoApprovers) - } + // pol.AutoApprovers.Routes is already initialized in the deep copy above prefix = *route - tt.pol.AutoApprovers.Routes[prefix] = newApprovers - err = headscale.SetPolicy(tt.pol) + pol.AutoApprovers.Routes[prefix] = newApprovers + err = headscale.SetPolicy(pol) require.NoError(t, err) // Wait for route state changes to propagate @@ -2563,7 +2630,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { nodes, err = headscale.ListNodes() assert.NoError(c, err) requireNodeRouteCountWithCollect(c, MustFindNode(routerUsernet1.Hostname(), nodes), 1, 1, 1) - }, 10*time.Second, 500*time.Millisecond, "route state changes should propagate") + }, assertTimeout, 500*time.Millisecond, "route state changes should propagate") // Verify that the routes have been sent to the client. 
assert.EventuallyWithT(t, func(c *assert.CollectT) { @@ -2583,7 +2650,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { requirePeerSubnetRoutesWithCollect(c, peerStatus, nil) } } - }, 5*time.Second, 200*time.Millisecond, "Verifying routes re-enabled after policy re-approval") + }, assertTimeout, 200*time.Millisecond, "Verifying routes re-enabled after policy re-approval") url = fmt.Sprintf("http://%s/etc/hostname", webip) t.Logf("url from %s to %s", client.Hostname(), url) @@ -2592,7 +2659,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { result, err := client.Curl(url) assert.NoError(c, err) assert.Len(c, result, 13) - }, 20*time.Second, 200*time.Millisecond, "Verifying client can reach webservice after route re-approval") + }, assertTimeout, 200*time.Millisecond, "Verifying client can reach webservice after route re-approval") assert.EventuallyWithT(t, func(c *assert.CollectT) { tr, err := client.Traceroute(webip) @@ -2602,7 +2669,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { return } assertTracerouteViaIPWithCollect(c, tr, ip) - }, 20*time.Second, 200*time.Millisecond, "Verifying traceroute goes through router after re-approval") + }, assertTimeout, 200*time.Millisecond, "Verifying traceroute goes through router after re-approval") // Advertise and validate a subnet of an auto approved route, /24 inside the // auto approved /16. @@ -2622,7 +2689,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { assert.NoError(c, err) requireNodeRouteCountWithCollect(c, MustFindNode(routerUsernet1.Hostname(), nodes), 1, 1, 1) requireNodeRouteCountWithCollect(c, nodes[1], 1, 1, 1) - }, 10*time.Second, 500*time.Millisecond, "route state changes should propagate") + }, assertTimeout, 500*time.Millisecond, "route state changes should propagate") // Verify that the routes have been sent to the client. assert.EventuallyWithT(t, func(c *assert.CollectT) { @@ -2646,7 +2713,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { requirePeerSubnetRoutesWithCollect(c, peerStatus, nil) } } - }, 5*time.Second, 200*time.Millisecond, "Verifying sub-route propagated to client") + }, assertTimeout, 200*time.Millisecond, "Verifying sub-route propagated to client") // Advertise a not approved route will not end up anywhere command = []string{ @@ -2666,7 +2733,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { requireNodeRouteCountWithCollect(c, MustFindNode(routerUsernet1.Hostname(), nodes), 1, 1, 1) requireNodeRouteCountWithCollect(c, nodes[1], 1, 1, 0) requireNodeRouteCountWithCollect(c, nodes[2], 0, 0, 0) - }, 10*time.Second, 500*time.Millisecond, "route state changes should propagate") + }, assertTimeout, 500*time.Millisecond, "route state changes should propagate") // Verify that the routes have been sent to the client. 
assert.EventuallyWithT(t, func(c *assert.CollectT) { @@ -2686,7 +2753,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { requirePeerSubnetRoutesWithCollect(c, peerStatus, nil) } } - }, 5*time.Second, 200*time.Millisecond, "Verifying unapproved route not propagated") + }, assertTimeout, 200*time.Millisecond, "Verifying unapproved route not propagated") // Exit routes are also automatically approved command = []string{ @@ -2704,7 +2771,7 @@ func TestAutoApproveMultiNetwork(t *testing.T) { requireNodeRouteCountWithCollect(c, MustFindNode(routerUsernet1.Hostname(), nodes), 1, 1, 1) requireNodeRouteCountWithCollect(c, nodes[1], 1, 1, 0) requireNodeRouteCountWithCollect(c, nodes[2], 2, 2, 2) - }, 10*time.Second, 500*time.Millisecond, "route state changes should propagate") + }, assertTimeout, 500*time.Millisecond, "route state changes should propagate") // Verify that the routes have been sent to the client. assert.EventuallyWithT(t, func(c *assert.CollectT) { @@ -2725,23 +2792,13 @@ func TestAutoApproveMultiNetwork(t *testing.T) { requirePeerSubnetRoutesWithCollect(c, peerStatus, nil) } } - }, 5*time.Second, 200*time.Millisecond, "Verifying exit node routes propagated to client") + }, assertTimeout, 200*time.Millisecond, "Verifying exit node routes propagated to client") }) } } } } -func assertTracerouteViaIP(t *testing.T, tr util.Traceroute, ip netip.Addr) { - t.Helper() - - require.NotNil(t, tr) - require.True(t, tr.Success) - require.NoError(t, tr.Err) - require.NotEmpty(t, tr.Route) - require.Equal(t, tr.Route[0].IP, ip) -} - // assertTracerouteViaIPWithCollect is a version of assertTracerouteViaIP that works with assert.CollectT. func assertTracerouteViaIPWithCollect(c *assert.CollectT, tr util.Traceroute, ip netip.Addr) { assert.NotNil(c, tr) @@ -2755,30 +2812,6 @@ func assertTracerouteViaIPWithCollect(c *assert.CollectT, tr util.Traceroute, ip } } -// requirePeerSubnetRoutes asserts that the peer has the expected subnet routes. 
-func requirePeerSubnetRoutes(t *testing.T, status *ipnstate.PeerStatus, expected []netip.Prefix) { - t.Helper() - if status.AllowedIPs.Len() <= 2 && len(expected) != 0 { - t.Fatalf("peer %s (%s) has no subnet routes, expected %v", status.HostName, status.ID, expected) - return - } - - if len(expected) == 0 { - expected = []netip.Prefix{} - } - - got := slicesx.Filter(nil, status.AllowedIPs.AsSlice(), func(p netip.Prefix) bool { - if tsaddr.IsExitRoute(p) { - return true - } - return !slices.ContainsFunc(status.TailscaleIPs, p.Contains) - }) - - if diff := cmpdiff.Diff(expected, got, util.PrefixComparer, cmpopts.EquateEmpty()); diff != "" { - t.Fatalf("peer %s (%s) subnet routes, unexpected result (-want +got):\n%s", status.HostName, status.ID, diff) - } -} - func SortPeerStatus(a, b *ipnstate.PeerStatus) int { return cmp.Compare(a.ID, b.ID) } @@ -2823,13 +2856,6 @@ func requirePeerSubnetRoutesWithCollect(c *assert.CollectT, status *ipnstate.Pee } } -func requireNodeRouteCount(t *testing.T, node *v1.Node, announced, approved, subnet int) { - t.Helper() - require.Lenf(t, node.GetAvailableRoutes(), announced, "expected %q announced routes(%v) to have %d route, had %d", node.GetName(), node.GetAvailableRoutes(), announced, len(node.GetAvailableRoutes())) - require.Lenf(t, node.GetApprovedRoutes(), approved, "expected %q approved routes(%v) to have %d route, had %d", node.GetName(), node.GetApprovedRoutes(), approved, len(node.GetApprovedRoutes())) - require.Lenf(t, node.GetSubnetRoutes(), subnet, "expected %q subnet routes(%v) to have %d route, had %d", node.GetName(), node.GetSubnetRoutes(), subnet, len(node.GetSubnetRoutes())) -} - func requireNodeRouteCountWithCollect(c *assert.CollectT, node *v1.Node, announced, approved, subnet int) { assert.Lenf(c, node.GetAvailableRoutes(), announced, "expected %q announced routes(%v) to have %d route, had %d", node.GetName(), node.GetAvailableRoutes(), announced, len(node.GetAvailableRoutes())) assert.Lenf(c, node.GetApprovedRoutes(), approved, "expected %q approved routes(%v) to have %d route, had %d", node.GetName(), node.GetApprovedRoutes(), approved, len(node.GetApprovedRoutes())) @@ -3009,7 +3035,7 @@ func TestSubnetRouteACLFiltering(t *testing.T) { // Check that the router has 3 routes now approved and available requireNodeRouteCountWithCollect(c, routerNode, 3, 3, 3) - }, 10*time.Second, 500*time.Millisecond, "route state changes should propagate") + }, 15*time.Second, 500*time.Millisecond, "route state changes should propagate") // Now check the client node status assert.EventuallyWithT(t, func(c *assert.CollectT) { @@ -3030,7 +3056,7 @@ func TestSubnetRouteACLFiltering(t *testing.T) { result, err := nodeClient.Curl(weburl) assert.NoError(c, err) assert.Len(c, result, 13) - }, 20*time.Second, 200*time.Millisecond, "Verifying node can reach webservice through allowed route") + }, 60*time.Second, 200*time.Millisecond, "Verifying node can reach webservice through allowed route") assert.EventuallyWithT(t, func(c *assert.CollectT) { tr, err := nodeClient.Traceroute(webip) @@ -3040,5 +3066,5 @@ func TestSubnetRouteACLFiltering(t *testing.T) { return } assertTracerouteViaIPWithCollect(c, tr, ip) - }, 20*time.Second, 200*time.Millisecond, "Verifying traceroute goes through router") + }, 60*time.Second, 200*time.Millisecond, "Verifying traceroute goes through router") } diff --git a/integration/scenario.go b/integration/scenario.go index c3b5549c..35fee73e 100644 --- a/integration/scenario.go +++ b/integration/scenario.go @@ -14,6 +14,7 @@ import ( 
"net/netip" "net/url" "os" + "slices" "strconv" "strings" "sync" @@ -63,7 +64,7 @@ var ( // // The rest of the version represents Tailscale versions that can be // found in Tailscale's apt repository. - AllVersions = append([]string{"head", "unstable"}, capver.TailscaleLatestMajorMinor(10, true)...) + AllVersions = append([]string{"head", "unstable"}, capver.TailscaleLatestMajorMinor(capver.SupportedMajorMinorVersions, true)...) // MustTestVersions is the minimum set of versions we should test. // At the moment, this is arbitrarily chosen as: @@ -246,9 +247,14 @@ func (s *Scenario) AddNetwork(name string) (*dockertest.Network, error) { // We run the test suite in a docker container that calls a couple of endpoints for // readiness checks, this ensures that we can run the tests with individual networks - // and have the client reach the different containers - // TODO(kradalby): Can the test-suite be renamed so we can have multiple? - err = dockertestutil.AddContainerToNetwork(s.pool, network, "headscale-test-suite") + // and have the client reach the different containers. + // The container name includes the run ID to support multiple concurrent test runs. + testSuiteName := "headscale-test-suite" + if runID := dockertestutil.GetIntegrationRunID(); runID != "" { + testSuiteName = "headscale-test-suite-" + runID + } + + err = dockertestutil.AddContainerToNetwork(s.pool, network, testSuiteName) if err != nil { return nil, fmt.Errorf("failed to add test suite container to network: %w", err) } @@ -472,6 +478,43 @@ func (s *Scenario) CreatePreAuthKey( return nil, fmt.Errorf("failed to create user: %w", errNoHeadscaleAvailable) } +// CreatePreAuthKeyWithOptions creates a "pre authorised key" with the specified options +// to be created in the Headscale instance on behalf of the Scenario. +func (s *Scenario) CreatePreAuthKeyWithOptions(opts hsic.AuthKeyOptions) (*v1.PreAuthKey, error) { + headscale, err := s.Headscale() + if err != nil { + return nil, fmt.Errorf("failed to create preauth key with options: %w", errNoHeadscaleAvailable) + } + + key, err := headscale.CreateAuthKeyWithOptions(opts) + if err != nil { + return nil, fmt.Errorf("failed to create preauth key with options: %w", err) + } + + return key, nil +} + +// CreatePreAuthKeyWithTags creates a "pre authorised key" with the specified tags +// to be created in the Headscale instance on behalf of the Scenario. +func (s *Scenario) CreatePreAuthKeyWithTags( + user uint64, + reusable bool, + ephemeral bool, + tags []string, +) (*v1.PreAuthKey, error) { + headscale, err := s.Headscale() + if err != nil { + return nil, fmt.Errorf("failed to create preauth key with tags: %w", errNoHeadscaleAvailable) + } + + key, err := headscale.CreateAuthKeyWithTags(user, reusable, ephemeral, tags) + if err != nil { + return nil, fmt.Errorf("failed to create preauth key with tags: %w", err) + } + + return key, nil +} + // CreateUser creates a User to be created in the // Headscale instance on behalf of the Scenario. func (s *Scenario) CreateUser(user string) (*v1.User, error) { @@ -766,6 +809,25 @@ func (s *Scenario) createHeadscaleEnv( withURL bool, tsOpts []tsic.Option, opts ...hsic.Option, +) error { + return s.createHeadscaleEnvWithTags(withURL, tsOpts, nil, "", opts...) +} + +// createHeadscaleEnvWithTags starts the headscale environment and the clients +// according to the ScenarioSpec passed to the Scenario. If preAuthKeyTags is +// non-empty and withURL is false, the tags will be applied to the PreAuthKey +// (tags-as-identity model). 
+// +// For webauth (withURL=true), if webauthTagUser is non-empty and preAuthKeyTags +// is non-empty, only nodes belonging to that user will request tags via +// --advertise-tags. This is necessary because tagOwners ACL controls which +// users can request specific tags. +func (s *Scenario) createHeadscaleEnvWithTags( + withURL bool, + tsOpts []tsic.Option, + preAuthKeyTags []string, + webauthTagUser string, + opts ...hsic.Option, ) error { headscale, err := s.Headscale(opts...) if err != nil { @@ -778,14 +840,20 @@ func (s *Scenario) createHeadscaleEnv( return err } - var opts []tsic.Option + var userOpts []tsic.Option if s.userToNetwork != nil { - opts = append(tsOpts, tsic.WithNetwork(s.userToNetwork[user])) + userOpts = append(tsOpts, tsic.WithNetwork(s.userToNetwork[user])) } else { - opts = append(tsOpts, tsic.WithNetwork(s.networks[s.testDefaultNetwork])) + userOpts = append(tsOpts, tsic.WithNetwork(s.networks[s.testDefaultNetwork])) } - err = s.CreateTailscaleNodesInUser(user, "all", s.spec.NodesPerUser, opts...) + // For webauth with tags, only apply tags to the specified webauthTagUser + // (other users may not be authorized via tagOwners) + if withURL && webauthTagUser != "" && len(preAuthKeyTags) > 0 && user == webauthTagUser { + userOpts = append(userOpts, tsic.WithTags(preAuthKeyTags)) + } + + err = s.CreateTailscaleNodesInUser(user, "all", s.spec.NodesPerUser, userOpts...) if err != nil { return err } @@ -796,7 +864,13 @@ func (s *Scenario) createHeadscaleEnv( return err } } else { - key, err := s.CreatePreAuthKey(u.GetId(), true, false) + // Use tagged PreAuthKey if tags are provided (tags-as-identity model) + var key *v1.PreAuthKey + if len(preAuthKeyTags) > 0 { + key, err = s.CreatePreAuthKeyWithTags(u.GetId(), true, false, preAuthKeyTags) + } else { + key, err = s.CreatePreAuthKey(u.GetId(), true, false) + } if err != nil { return err } @@ -1159,10 +1233,8 @@ func (s *Scenario) FindTailscaleClientByIP(ip netip.Addr) (TailscaleClient, erro for _, client := range clients { ips, _ := client.IPs() - for _, ip2 := range ips { - if ip == ip2 { - return client, nil - } + if slices.Contains(ips, ip) { + return client, nil } } diff --git a/integration/ssh_test.go b/integration/ssh_test.go index 33335ccd..2986bcea 100644 --- a/integration/ssh_test.go +++ b/integration/ssh_test.go @@ -42,11 +42,8 @@ func sshScenario(t *testing.T, policy *policyv2.Policy, clientsPerUser int) *Sce // tailscaled to stop configuring the wgengine, causing it // to not configure DNS. 
tsic.WithNetfilter("off"), - tsic.WithDockerEntrypoint([]string{ - "/bin/sh", - "-c", - "/bin/sleep 3 ; apk add openssh ; adduser ssh-it-user ; update-ca-certificates ; tailscaled --tun=tsdev", - }), + tsic.WithPackages("openssh"), + tsic.WithExtraCommands("adduser ssh-it-user"), tsic.WithDockerWorkdir("/"), }, hsic.WithACLPolicy(policy), @@ -83,10 +80,15 @@ func TestSSHOneUserToAll(t *testing.T) { }, SSHs: []policyv2.SSH{ { - Action: "accept", - Sources: policyv2.SSHSrcAliases{groupp("group:integration-test")}, - Destinations: policyv2.SSHDstAliases{wildcard()}, - Users: []policyv2.SSHUser{policyv2.SSHUser("ssh-it-user")}, + Action: "accept", + Sources: policyv2.SSHSrcAliases{groupp("group:integration-test")}, + // Use autogroup:member and autogroup:tagged instead of wildcard + // since wildcard (*) is no longer supported for SSH destinations + Destinations: policyv2.SSHDstAliases{ + ptr.To(policyv2.AutoGroupMember), + ptr.To(policyv2.AutoGroupTagged), + }, + Users: []policyv2.SSHUser{policyv2.SSHUser("ssh-it-user")}, }, }, }, @@ -130,6 +132,8 @@ func TestSSHOneUserToAll(t *testing.T) { } } +// TestSSHMultipleUsersAllToAll tests that users in a group can SSH to each other's devices +// using autogroup:self as the destination, which allows same-user SSH access. func TestSSHMultipleUsersAllToAll(t *testing.T) { IntegrationSkip(t) @@ -150,9 +154,13 @@ func TestSSHMultipleUsersAllToAll(t *testing.T) { }, SSHs: []policyv2.SSH{ { - Action: "accept", - Sources: policyv2.SSHSrcAliases{groupp("group:integration-test")}, - Destinations: policyv2.SSHDstAliases{usernamep("user1@"), usernamep("user2@")}, + Action: "accept", + Sources: policyv2.SSHSrcAliases{groupp("group:integration-test")}, + // Use autogroup:self to allow users to SSH to their own devices. + // Username destinations (e.g., "user1@") now require the source + // to be that exact same user only. For group-to-group SSH access, + // use autogroup:self instead. + Destinations: policyv2.SSHDstAliases{ptr.To(policyv2.AutoGroupSelf)}, Users: []policyv2.SSHUser{policyv2.SSHUser("ssh-it-user")}, }, }, @@ -173,16 +181,42 @@ func TestSSHMultipleUsersAllToAll(t *testing.T) { _, err = scenario.ListTailscaleClientsFQDNs() requireNoErrListFQDN(t, err) - testInterUserSSH := func(sourceClients []TailscaleClient, targetClients []TailscaleClient) { - for _, client := range sourceClients { - for _, peer := range targetClients { - assertSSHHostname(t, client, peer) + // With autogroup:self, users can SSH to their own devices, but not to other users' devices. 
+ // Test that user1's devices can SSH to each other + for _, client := range nsOneClients { + for _, peer := range nsOneClients { + if client.Hostname() == peer.Hostname() { + continue } + + assertSSHHostname(t, client, peer) } } - testInterUserSSH(nsOneClients, nsTwoClients) - testInterUserSSH(nsTwoClients, nsOneClients) + // Test that user2's devices can SSH to each other + for _, client := range nsTwoClients { + for _, peer := range nsTwoClients { + if client.Hostname() == peer.Hostname() { + continue + } + + assertSSHHostname(t, client, peer) + } + } + + // Test that user1 cannot SSH to user2's devices (autogroup:self only allows same-user) + for _, client := range nsOneClients { + for _, peer := range nsTwoClients { + assertSSHPermissionDenied(t, client, peer) + } + } + + // Test that user2 cannot SSH to user1's devices (autogroup:self only allows same-user) + for _, client := range nsTwoClients { + for _, peer := range nsOneClients { + assertSSHPermissionDenied(t, client, peer) + } + } } func TestSSHNoSSHConfigured(t *testing.T) { @@ -251,7 +285,7 @@ func TestSSHIsBlockedInACL(t *testing.T) { { Action: "accept", Sources: policyv2.SSHSrcAliases{groupp("group:integration-test")}, - Destinations: policyv2.SSHDstAliases{usernamep("user1@")}, + Destinations: policyv2.SSHDstAliases{ptr.To(policyv2.AutoGroupSelf)}, Users: []policyv2.SSHUser{policyv2.SSHUser("ssh-it-user")}, }, }, @@ -300,16 +334,19 @@ func TestSSHUserOnlyIsolation(t *testing.T) { }, }, SSHs: []policyv2.SSH{ + // Use autogroup:self to allow users in each group to SSH to their own devices. + // Username destinations (e.g., "user1@") require the source to be that + // exact same user only, not a group containing that user. { Action: "accept", Sources: policyv2.SSHSrcAliases{groupp("group:ssh1")}, - Destinations: policyv2.SSHDstAliases{usernamep("user1@")}, + Destinations: policyv2.SSHDstAliases{ptr.To(policyv2.AutoGroupSelf)}, Users: []policyv2.SSHUser{policyv2.SSHUser("ssh-it-user")}, }, { Action: "accept", Sources: policyv2.SSHSrcAliases{groupp("group:ssh2")}, - Destinations: policyv2.SSHDstAliases{usernamep("user2@")}, + Destinations: policyv2.SSHDstAliases{ptr.To(policyv2.AutoGroupSelf)}, Users: []policyv2.SSHUser{policyv2.SSHUser("ssh-it-user")}, }, }, @@ -395,8 +432,10 @@ func doSSHWithRetry(t *testing.T, client TailscaleClient, peer TailscaleClient, log.Printf("Running from %s to %s", client.Hostname(), peer.Hostname()) log.Printf("Command: %s", strings.Join(command, " ")) - var result, stderr string - var err error + var ( + result, stderr string + err error + ) if retry { // Use assert.EventuallyWithT to retry SSH connections for success cases @@ -455,6 +494,7 @@ func assertSSHTimeout(t *testing.T, client TailscaleClient, peer TailscaleClient func assertSSHNoAccessStdError(t *testing.T, err error, stderr string) { t.Helper() assert.Error(t, err) + if !isSSHNoAccessStdError(stderr) { t.Errorf("expected stderr output suggesting access denied, got: %s", stderr) } @@ -462,7 +502,7 @@ func assertSSHNoAccessStdError(t *testing.T, err error, stderr string) { // TestSSHAutogroupSelf tests that SSH with autogroup:self works correctly: // - Users can SSH to their own devices -// - Users cannot SSH to other users' devices +// - Users cannot SSH to other users' devices. 
func TestSSHAutogroupSelf(t *testing.T) { IntegrationSkip(t) diff --git a/integration/tags_test.go b/integration/tags_test.go new file mode 100644 index 00000000..5dad36e5 --- /dev/null +++ b/integration/tags_test.go @@ -0,0 +1,3118 @@ +package integration + +import ( + "sort" + "testing" + "time" + + v1 "github.com/juanfont/headscale/gen/go/headscale/v1" + policyv2 "github.com/juanfont/headscale/hscontrol/policy/v2" + "github.com/juanfont/headscale/hscontrol/util" + "github.com/juanfont/headscale/integration/hsic" + "github.com/juanfont/headscale/integration/tsic" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "tailscale.com/tailcfg" + "tailscale.com/types/ptr" +) + +const tagTestUser = "taguser" + +// ============================================================================= +// Helper Functions +// ============================================================================= + +// tagsTestPolicy creates a policy for tag tests with: +// - tag:valid-owned: owned by the specified user +// - tag:second: owned by the specified user +// - tag:valid-unowned: owned by "other-user" (not the test user) +// - tag:nonexistent is deliberately NOT defined. +func tagsTestPolicy() *policyv2.Policy { + return &policyv2.Policy{ + TagOwners: policyv2.TagOwners{ + "tag:valid-owned": policyv2.Owners{ptr.To(policyv2.Username(tagTestUser + "@"))}, + "tag:second": policyv2.Owners{ptr.To(policyv2.Username(tagTestUser + "@"))}, + "tag:valid-unowned": policyv2.Owners{ptr.To(policyv2.Username("other-user@"))}, + // Note: tag:nonexistent deliberately NOT defined + }, + ACLs: []policyv2.ACL{ + { + Action: "accept", + Sources: []policyv2.Alias{policyv2.Wildcard}, + Destinations: []policyv2.AliasWithPorts{{Alias: policyv2.Wildcard, Ports: []tailcfg.PortRange{tailcfg.PortRangeAny}}}, + }, + }, + } +} + +// tagsEqual compares two tag slices as unordered sets. +func tagsEqual(actual, expected []string) bool { + if len(actual) != len(expected) { + return false + } + + sortedActual := append([]string{}, actual...) + sortedExpected := append([]string{}, expected...) + + sort.Strings(sortedActual) + sort.Strings(sortedExpected) + + for i := range sortedActual { + if sortedActual[i] != sortedExpected[i] { + return false + } + } + + return true +} + +// assertNodeHasTagsWithCollect asserts that a node has exactly the expected tags (order-independent). +func assertNodeHasTagsWithCollect(c *assert.CollectT, node *v1.Node, expectedTags []string) { + actualTags := node.GetTags() + sortedActual := append([]string{}, actualTags...) + sortedExpected := append([]string{}, expectedTags...) + + sort.Strings(sortedActual) + sort.Strings(sortedExpected) + assert.Equal(c, sortedExpected, sortedActual, "Node %s tags mismatch", node.GetName()) +} + +// assertNodeHasNoTagsWithCollect asserts that a node has no tags. +func assertNodeHasNoTagsWithCollect(c *assert.CollectT, node *v1.Node) { + assert.Empty(c, node.GetTags(), "Node %s should have no tags, but has: %v", node.GetName(), node.GetTags()) +} + +// assertNodeSelfHasTagsWithCollect asserts that a client's self view has exactly the expected tags. +// This validates that tag updates have propagated to the node's own status (issue #2978). 
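+// A nil Tags view on status.Self is treated as an empty tag set, so this helper
+// can also be used to assert that a node currently has no tags.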
+func assertNodeSelfHasTagsWithCollect(c *assert.CollectT, client TailscaleClient, expectedTags []string) { + status, err := client.Status() + //nolint:testifylint // must use assert with CollectT in EventuallyWithT + assert.NoError(c, err, "failed to get client status") + + if status == nil || status.Self == nil { + assert.Fail(c, "client status or self is nil") + return + } + + var actualTagsSlice []string + + if status.Self.Tags != nil { + for _, tag := range status.Self.Tags.All() { + actualTagsSlice = append(actualTagsSlice, tag) + } + } + + sortedActual := append([]string{}, actualTagsSlice...) + sortedExpected := append([]string{}, expectedTags...) + + sort.Strings(sortedActual) + sort.Strings(sortedExpected) + assert.Equal(c, sortedExpected, sortedActual, "Client %s self tags mismatch", client.Hostname()) +} + +// ============================================================================= +// Test Suite 2: Auth Key WITH Pre-assigned Tags +// ============================================================================= + +// TestTagsAuthKeyWithTagRequestDifferentTag tests that requesting a different tag +// than what the auth key provides results in registration failure. +// +// Test 2.1: Request different tag than key provides +// Setup: Run `tailscale up --advertise-tags="tag:second" --auth-key AUTH_KEY_WITH_TAG` +// Expected: Registration fails with error containing "requested tags [tag:second] are invalid or not permitted". +func TestTagsAuthKeyWithTagRequestDifferentTag(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, // We'll create the node manually + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-authkey-diff"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create a tagged PreAuthKey with tag:valid-owned + authKey, err := scenario.CreatePreAuthKeyWithTags(userID, false, false, []string{"tag:valid-owned"}) + require.NoError(t, err) + t.Logf("Created tagged PreAuthKey with tags: %v", authKey.GetAclTags()) + + // Create a tailscale client that will try to use --advertise-tags with a DIFFERENT tag + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:second"}), + ) + require.NoError(t, err) + + // Login should fail because the advertised tags don't match the auth key's tags + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + + // Document actual behavior - we expect this to fail + if err != nil { + t.Logf("Test 2.1 PASS: Registration correctly rejected with error: %v", err) + assert.ErrorContains(t, err, "requested tags") + } else { + // If it succeeded, document this unexpected behavior + t.Logf("Test 2.1 UNEXPECTED: Registration succeeded when it should have failed") + + // Check what tags the node actually has + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + t.Logf("Node registered with tags: %v (expected rejection)", nodes[0].GetTags()) + } + }, 
10*time.Second, 500*time.Millisecond, "checking node state") + + t.Fail() + } +} + +// TestTagsAuthKeyWithTagNoAdvertiseFlag tests that registering with a tagged auth key +// but no --advertise-tags flag results in the node inheriting the key's tags. +// +// Test 2.2: Register with no advertise-tags flag +// Setup: Run `tailscale up --auth-key AUTH_KEY_WITH_TAG` (no --advertise-tags) +// Expected: Registration succeeds, node has ["tag:valid-owned"] (inherited from key). +func TestTagsAuthKeyWithTagNoAdvertiseFlag(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-authkey-inherit"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create a tagged PreAuthKey with tag:valid-owned + authKey, err := scenario.CreatePreAuthKeyWithTags(userID, false, false, []string{"tag:valid-owned"}) + require.NoError(t, err) + t.Logf("Created tagged PreAuthKey with tags: %v", authKey.GetAclTags()) + + // Create a tailscale client WITHOUT --advertise-tags + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + // Note: NO WithExtraLoginArgs for --advertise-tags + ) + require.NoError(t, err) + + // Login with the tagged PreAuthKey + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for node to be registered and verify it has the key's tags + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Should have exactly 1 node") + + if len(nodes) == 1 { + node := nodes[0] + t.Logf("Node registered with tags: %v", node.GetTags()) + assertNodeHasTagsWithCollect(c, node, []string{"tag:valid-owned"}) + } + }, 30*time.Second, 500*time.Millisecond, "verifying node inherited tags from auth key") + + t.Logf("Test 2.2 completed - node inherited tags from auth key") +} + +// TestTagsAuthKeyWithTagCannotAddViaCLI tests that nodes registered with a tagged auth key +// cannot add additional tags via the client CLI. +// +// Test 2.3: Cannot add tags via CLI after registration +// Setup: +// 1. Register with --auth-key AUTH_KEY_WITH_TAG +// 2. Run `tailscale up --advertise-tags="tag:valid-owned,tag:second" --auth-key AUTH_KEY_WITH_TAG` +// +// Expected: Command fails with error containing "requested tags [tag:second] are invalid or not permitted". 
+func TestTagsAuthKeyWithTagCannotAddViaCLI(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-authkey-noadd"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create a tagged PreAuthKey with tag:valid-owned + authKey, err := scenario.CreatePreAuthKeyWithTags(userID, false, false, []string{"tag:valid-owned"}) + require.NoError(t, err) + + // Create and register a tailscale client + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + // Initial login + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for initial registration + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned"}) + } + }, 30*time.Second, 500*time.Millisecond, "waiting for initial registration") + + t.Logf("Node registered with tag:valid-owned, now attempting to add tag:second via CLI") + + // Attempt to add additional tags via tailscale up + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--authkey=" + authKey.GetKey(), + "--advertise-tags=tag:valid-owned,tag:second", + } + _, stderr, err := client.Execute(command) + + // Document actual behavior + if err != nil { + t.Logf("Test 2.3 PASS: CLI correctly rejected adding tags: %v, stderr: %s", err, stderr) + } else { + t.Logf("Test 2.3: CLI command succeeded, checking if tags actually changed") + + // Check if tags actually changed + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + // If still only has original tag, that's the expected behavior + if tagsEqual(nodes[0].GetTags(), []string{"tag:valid-owned"}) { + t.Logf("Test 2.3 PASS: Tags unchanged after CLI attempt: %v", nodes[0].GetTags()) + } else { + t.Logf("Test 2.3 FAIL: Tags changed unexpectedly to: %v", nodes[0].GetTags()) + assert.Fail(c, "Tags should not have changed") + } + } + }, 10*time.Second, 500*time.Millisecond, "verifying tags unchanged") + } +} + +// TestTagsAuthKeyWithTagCannotChangeViaCLI tests that nodes registered with a tagged auth key +// cannot change to a completely different tag set via the client CLI. +// +// Test 2.4: Cannot change to different tag set via CLI +// Setup: +// 1. Register with --auth-key AUTH_KEY_WITH_TAG +// 2. Run `tailscale up --advertise-tags="tag:second" --auth-key AUTH_KEY_WITH_TAG` +// +// Expected: Command fails, tags remain ["tag:valid-owned"]. 
+func TestTagsAuthKeyWithTagCannotChangeViaCLI(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-authkey-nochange"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create a tagged PreAuthKey with tag:valid-owned + authKey, err := scenario.CreatePreAuthKeyWithTags(userID, false, false, []string{"tag:valid-owned"}) + require.NoError(t, err) + + // Create and register a tailscale client + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + // Initial login + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for initial registration + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + }, 30*time.Second, 500*time.Millisecond, "waiting for initial registration") + + t.Logf("Node registered, now attempting to change to different tag via CLI") + + // Attempt to change to a different tag via tailscale up + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--authkey=" + authKey.GetKey(), + "--advertise-tags=tag:second", + } + _, stderr, err := client.Execute(command) + + // Document actual behavior + if err != nil { + t.Logf("Test 2.4 PASS: CLI correctly rejected changing tags: %v, stderr: %s", err, stderr) + } else { + t.Logf("Test 2.4: CLI command succeeded, checking if tags actually changed") + + // Check if tags remain unchanged + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + if tagsEqual(nodes[0].GetTags(), []string{"tag:valid-owned"}) { + t.Logf("Test 2.4 PASS: Tags unchanged: %v", nodes[0].GetTags()) + } else { + t.Logf("Test 2.4 FAIL: Tags changed unexpectedly to: %v", nodes[0].GetTags()) + assert.Fail(c, "Tags should not have changed") + } + } + }, 10*time.Second, 500*time.Millisecond, "verifying tags unchanged") + } +} + +// TestTagsAuthKeyWithTagAdminOverrideReauthPreserves tests that admin-assigned tags +// are preserved even after reauthentication - admin decisions are authoritative. +// +// Test 2.5: Admin assignment is preserved through reauth +// Setup: +// 1. Register with --auth-key AUTH_KEY_WITH_TAG +// 2. Assign ["tag:second"] via headscale CLI +// 3. Run `tailscale up --auth-key AUTH_KEY_WITH_TAG --force-reauth` +// +// Expected: After step 2 tags are ["tag:second"], after step 3 tags remain ["tag:second"]. 
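+//
+// Note: unlike Tests 2.1-2.4, the pre-auth key in this test is created as
+// reusable, so the same key can be supplied again for the --force-reauth step.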
+func TestTagsAuthKeyWithTagAdminOverrideReauthPreserves(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-authkey-admin"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create a tagged PreAuthKey with tag:valid-owned + authKey, err := scenario.CreatePreAuthKeyWithTags(userID, true, false, []string{"tag:valid-owned"}) + require.NoError(t, err) + + // Create and register a tailscale client + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + // Initial login + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for initial registration and get node ID + var nodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + nodeID = nodes[0].GetId() + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned"}) + } + }, 30*time.Second, 500*time.Millisecond, "waiting for initial registration") + + t.Logf("Step 1 complete: Node %d registered with tag:valid-owned", nodeID) + + // Step 2: Admin assigns different tags via headscale CLI + err = headscale.SetNodeTags(nodeID, []string{"tag:second"}) + require.NoError(t, err) + + // Verify admin assignment took effect (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + t.Logf("After admin assignment, server tags are: %v", nodes[0].GetTags()) + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:second"}) + } + }, 10*time.Second, 500*time.Millisecond, "verifying admin tag assignment on server") + + // Verify admin assignment propagated to node's self view (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:second"}) + }, 30*time.Second, 500*time.Millisecond, "verifying admin tag assignment propagated to node self") + + t.Logf("Step 2 complete: Admin assigned tag:second (verified on both server and node self)") + + // Step 3: Force reauthentication + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--authkey=" + authKey.GetKey(), + "--force-reauth", + } + //nolint:errcheck // Intentionally ignoring error - we check results below + client.Execute(command) + + // Verify admin tags are preserved even after reauth - admin decisions are authoritative (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.GreaterOrEqual(c, len(nodes), 1, "Should have at least 1 node") + + if len(nodes) >= 1 { + // Find the most recently updated node (in case a new one was created) + node := nodes[len(nodes)-1] + t.Logf("After reauth, server tags are: %v", node.GetTags()) + + // Expected: admin-assigned tags are preserved through 
reauth + assertNodeHasTagsWithCollect(c, node, []string{"tag:second"}) + } + }, 30*time.Second, 500*time.Millisecond, "admin tags should be preserved after reauth on server") + + // Verify admin tags are preserved in node's self view after reauth (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:second"}) + }, 30*time.Second, 500*time.Millisecond, "admin tags should be preserved after reauth in node self") + + t.Logf("Test 2.5 PASS: Admin tags preserved through reauth (admin decisions are authoritative)") +} + +// TestTagsAuthKeyWithTagCLICannotModifyAdminTags tests that the client CLI +// cannot modify admin-assigned tags. +// +// Test 2.6: Client CLI cannot modify admin-assigned tags +// Setup: +// 1. Register with --auth-key AUTH_KEY_WITH_TAG +// 2. Assign ["tag:valid-owned", "tag:second"] via headscale CLI +// 3. Run `tailscale up --advertise-tags="tag:valid-owned" --auth-key AUTH_KEY_WITH_TAG` +// +// Expected: Command either fails or is no-op, tags remain ["tag:valid-owned", "tag:second"]. +func TestTagsAuthKeyWithTagCLICannotModifyAdminTags(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-authkey-noadmin"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create a tagged PreAuthKey with tag:valid-owned + authKey, err := scenario.CreatePreAuthKeyWithTags(userID, true, false, []string{"tag:valid-owned"}) + require.NoError(t, err) + + // Create and register a tailscale client + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + // Initial login + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for initial registration and get node ID + var nodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + nodeID = nodes[0].GetId() + } + }, 30*time.Second, 500*time.Millisecond, "waiting for initial registration") + + // Step 2: Admin assigns multiple tags via headscale CLI + err = headscale.SetNodeTags(nodeID, []string{"tag:valid-owned", "tag:second"}) + require.NoError(t, err) + + // Verify admin assignment (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned", "tag:second"}) + } + }, 10*time.Second, 500*time.Millisecond, "verifying admin tag assignment on server") + + // Verify admin assignment propagated to node's self view (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-owned", "tag:second"}) + }, 30*time.Second, 500*time.Millisecond, "verifying admin tag assignment propagated to node self") + + t.Logf("Admin assigned both tags, now 
attempting to reduce via CLI") + + // Step 3: Attempt to reduce tags via CLI + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--authkey=" + authKey.GetKey(), + "--advertise-tags=tag:valid-owned", + } + _, stderr, err := client.Execute(command) + + t.Logf("CLI command result: err=%v, stderr=%s", err, stderr) + + // Verify admin tags are preserved - CLI should not be able to reduce them (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Should have exactly 1 node") + + if len(nodes) == 1 { + t.Logf("After CLI attempt, server tags are: %v", nodes[0].GetTags()) + + // Expected: tags should remain unchanged (admin wins) + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned", "tag:second"}) + } + }, 10*time.Second, 500*time.Millisecond, "admin tags should be preserved after CLI attempt on server") + + // Verify admin tags are preserved in node's self view (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-owned", "tag:second"}) + }, 30*time.Second, 500*time.Millisecond, "admin tags should be preserved after CLI attempt in node self") + + t.Logf("Test 2.6 PASS: Admin tags preserved - CLI cannot modify admin-assigned tags") +} + +// ============================================================================= +// Test Suite 3: Auth Key WITHOUT Tags +// ============================================================================= + +// TestTagsAuthKeyWithoutTagCannotRequestTags tests that nodes cannot request tags +// when using an auth key that has no tags. +// +// Test 3.1: Cannot request tags with tagless key +// Setup: Run `tailscale up --advertise-tags="tag:valid-owned" --auth-key AUTH_KEY_WITHOUT_TAG` +// Expected: Registration fails with error containing "requested tags [tag:valid-owned] are invalid or not permitted". 
+func TestTagsAuthKeyWithoutTagCannotRequestTags(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-nokey-req"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create an auth key WITHOUT tags + authKey, err := scenario.CreatePreAuthKey(userID, false, false) + require.NoError(t, err) + t.Logf("Created PreAuthKey without tags") + + // Create a tailscale client that will try to request tags + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-owned"}), + ) + require.NoError(t, err) + + // Login should fail because the auth key has no tags + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + if err != nil { + t.Logf("Test 3.1 PASS: Registration correctly rejected: %v", err) + assert.ErrorContains(t, err, "requested tags") + } else { + // If it succeeded, document this unexpected behavior + t.Logf("Test 3.1 UNEXPECTED: Registration succeeded when it should have failed") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + t.Logf("Node registered with tags: %v (expected rejection)", nodes[0].GetTags()) + } + }, 10*time.Second, 500*time.Millisecond, "checking node state") + + t.Fail() + } +} + +// TestTagsAuthKeyWithoutTagRegisterNoTags tests that registering with a tagless auth key +// and no --advertise-tags results in a node with no tags. +// +// Test 3.2: Register with no tags +// Setup: Run `tailscale up --auth-key AUTH_KEY_WITHOUT_TAG` (no --advertise-tags) +// Expected: Registration succeeds, node has no tags (empty tag set). 
+func TestTagsAuthKeyWithoutTagRegisterNoTags(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-nokey-noreg"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create an auth key WITHOUT tags + authKey, err := scenario.CreatePreAuthKey(userID, false, false) + require.NoError(t, err) + + // Create a tailscale client without --advertise-tags + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + // Login should succeed + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Verify node has no tags + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + t.Logf("Node registered with tags: %v", nodes[0].GetTags()) + assertNodeHasNoTagsWithCollect(c, nodes[0]) + } + }, 30*time.Second, 500*time.Millisecond, "verifying node has no tags") + + t.Logf("Test 3.2 completed - node registered without tags") +} + +// TestTagsAuthKeyWithoutTagCannotAddViaCLI tests that nodes registered with a tagless +// auth key cannot add tags via the client CLI. +// +// Test 3.3: Cannot add tags via CLI after registration +// Setup: +// 1. Register with --auth-key AUTH_KEY_WITHOUT_TAG +// 2. Run `tailscale up --advertise-tags="tag:valid-owned" --auth-key AUTH_KEY_WITHOUT_TAG` +// +// Expected: Command fails, node remains with no tags. 
+func TestTagsAuthKeyWithoutTagCannotAddViaCLI(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-nokey-noadd"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create an auth key WITHOUT tags + authKey, err := scenario.CreatePreAuthKey(userID, true, false) + require.NoError(t, err) + + // Create and register a tailscale client + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + // Initial login + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for initial registration + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + assertNodeHasNoTagsWithCollect(c, nodes[0]) + } + }, 30*time.Second, 500*time.Millisecond, "waiting for initial registration") + + t.Logf("Node registered without tags, attempting to add via CLI") + + // Attempt to add tags via tailscale up + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--authkey=" + authKey.GetKey(), + "--advertise-tags=tag:valid-owned", + } + _, stderr, err := client.Execute(command) + + // Document actual behavior + if err != nil { + t.Logf("Test 3.3 PASS: CLI correctly rejected adding tags: %v, stderr: %s", err, stderr) + } else { + t.Logf("Test 3.3: CLI command succeeded, checking if tags actually changed") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + if len(nodes[0].GetTags()) == 0 { + t.Logf("Test 3.3 PASS: Tags still empty after CLI attempt") + } else { + t.Logf("Test 3.3 FAIL: Tags changed to: %v", nodes[0].GetTags()) + assert.Fail(c, "Tags should not have changed") + } + } + }, 10*time.Second, 500*time.Millisecond, "verifying tags unchanged") + } +} + +// TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithReset tests that the client CLI +// is a no-op after admin tag assignment, even with --reset flag. +// +// Test 3.4: CLI no-op after admin tag assignment (with --reset) +// Setup: +// 1. Register with --auth-key AUTH_KEY_WITHOUT_TAG +// 2. Assign ["tag:valid-owned"] via headscale CLI +// 3. Run `tailscale up --auth-key AUTH_KEY_WITHOUT_TAG --reset` +// +// Expected: Command is no-op, tags remain ["tag:valid-owned"]. 
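+//
+// Note: --reset resets unspecified client-side settings to their defaults; per
+// the expectation above, it must not clear tags assigned by the admin on the
+// server side.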
+func TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithReset(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-nokey-reset"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create an auth key WITHOUT tags + authKey, err := scenario.CreatePreAuthKey(userID, true, false) + require.NoError(t, err) + + // Create and register a tailscale client + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + // Initial login + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for initial registration and get node ID + var nodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + nodeID = nodes[0].GetId() + assertNodeHasNoTagsWithCollect(c, nodes[0]) + } + }, 30*time.Second, 500*time.Millisecond, "waiting for initial registration") + + // Step 2: Admin assigns tags + err = headscale.SetNodeTags(nodeID, []string{"tag:valid-owned"}) + require.NoError(t, err) + + // Verify admin assignment (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned"}) + } + }, 10*time.Second, 500*time.Millisecond, "verifying admin tag assignment on server") + + // Verify admin assignment propagated to node's self view (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-owned"}) + }, 30*time.Second, 500*time.Millisecond, "verifying admin tag assignment propagated to node self") + + t.Logf("Admin assigned tag, now running CLI with --reset") + + // Step 3: Run tailscale up with --reset + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--authkey=" + authKey.GetKey(), + "--reset", + } + _, stderr, err := client.Execute(command) + t.Logf("CLI --reset result: err=%v, stderr=%s", err, stderr) + + // Verify admin tags are preserved - --reset should not remove them (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Should have exactly 1 node") + + if len(nodes) == 1 { + t.Logf("After --reset, server tags are: %v", nodes[0].GetTags()) + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned"}) + } + }, 10*time.Second, 500*time.Millisecond, "admin tags should be preserved after --reset on server") + + // Verify admin tags are preserved in node's self view after --reset (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-owned"}) + }, 30*time.Second, 500*time.Millisecond, "admin tags should be preserved after --reset in node 
self") + + t.Logf("Test 3.4 PASS: Admin tags preserved after --reset") +} + +// TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithEmptyAdvertise tests that the client CLI +// is a no-op after admin tag assignment, even with empty --advertise-tags. +// +// Test 3.5: CLI no-op after admin tag assignment (with empty advertise-tags) +// Setup: +// 1. Register with --auth-key AUTH_KEY_WITHOUT_TAG +// 2. Assign ["tag:valid-owned"] via headscale CLI +// 3. Run `tailscale up --auth-key AUTH_KEY_WITHOUT_TAG --advertise-tags=""` +// +// Expected: Command is no-op, tags remain ["tag:valid-owned"]. +func TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithEmptyAdvertise(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-nokey-empty"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create an auth key WITHOUT tags + authKey, err := scenario.CreatePreAuthKey(userID, true, false) + require.NoError(t, err) + + // Create and register a tailscale client + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + // Initial login + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for initial registration and get node ID + var nodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + nodeID = nodes[0].GetId() + } + }, 30*time.Second, 500*time.Millisecond, "waiting for initial registration") + + // Step 2: Admin assigns tags + err = headscale.SetNodeTags(nodeID, []string{"tag:valid-owned"}) + require.NoError(t, err) + + // Verify admin assignment (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned"}) + } + }, 10*time.Second, 500*time.Millisecond, "verifying admin tag assignment on server") + + // Verify admin assignment propagated to node's self view (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-owned"}) + }, 30*time.Second, 500*time.Millisecond, "verifying admin tag assignment propagated to node self") + + t.Logf("Admin assigned tag, now running CLI with empty --advertise-tags") + + // Step 3: Run tailscale up with empty --advertise-tags + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--authkey=" + authKey.GetKey(), + "--advertise-tags=", + } + _, stderr, err := client.Execute(command) + t.Logf("CLI empty advertise-tags result: err=%v, stderr=%s", err, stderr) + + // Verify admin tags are preserved - empty --advertise-tags should not remove them (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, 
nodes, 1, "Should have exactly 1 node") + + if len(nodes) == 1 { + t.Logf("After empty --advertise-tags, server tags are: %v", nodes[0].GetTags()) + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned"}) + } + }, 10*time.Second, 500*time.Millisecond, "admin tags should be preserved after empty --advertise-tags on server") + + // Verify admin tags are preserved in node's self view after empty --advertise-tags (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-owned"}) + }, 30*time.Second, 500*time.Millisecond, "admin tags should be preserved after empty --advertise-tags in node self") + + t.Logf("Test 3.5 PASS: Admin tags preserved after empty --advertise-tags") +} + +// TestTagsAuthKeyWithoutTagCLICannotReduceAdminMultiTag tests that the client CLI +// cannot reduce an admin-assigned multi-tag set. +// +// Test 3.6: Client CLI cannot reduce admin-assigned multi-tag set +// Setup: +// 1. Register with --auth-key AUTH_KEY_WITHOUT_TAG +// 2. Assign ["tag:valid-owned", "tag:second"] via headscale CLI +// 3. Run `tailscale up --advertise-tags="tag:valid-owned" --auth-key AUTH_KEY_WITHOUT_TAG` +// +// Expected: Command is no-op (or fails), tags remain ["tag:valid-owned", "tag:second"]. +func TestTagsAuthKeyWithoutTagCLICannotReduceAdminMultiTag(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-nokey-reduce"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create an auth key WITHOUT tags + authKey, err := scenario.CreatePreAuthKey(userID, true, false) + require.NoError(t, err) + + // Create and register a tailscale client + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + // Initial login + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for initial registration and get node ID + var nodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + nodeID = nodes[0].GetId() + } + }, 30*time.Second, 500*time.Millisecond, "waiting for initial registration") + + // Step 2: Admin assigns multiple tags + err = headscale.SetNodeTags(nodeID, []string{"tag:valid-owned", "tag:second"}) + require.NoError(t, err) + + // Verify admin assignment (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned", "tag:second"}) + } + }, 10*time.Second, 500*time.Millisecond, "verifying admin tag assignment on server") + + // Verify admin assignment propagated to node's self view (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-owned", "tag:second"}) + }, 
30*time.Second, 500*time.Millisecond, "verifying admin tag assignment propagated to node self") + + t.Logf("Admin assigned both tags, now attempting to reduce via CLI") + + // Step 3: Attempt to reduce tags via CLI + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--authkey=" + authKey.GetKey(), + "--advertise-tags=tag:valid-owned", + } + _, stderr, err := client.Execute(command) + t.Logf("CLI reduce result: err=%v, stderr=%s", err, stderr) + + // Verify admin tags are preserved - CLI should not be able to reduce them (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Should have exactly 1 node") + + if len(nodes) == 1 { + t.Logf("After CLI reduce attempt, server tags are: %v", nodes[0].GetTags()) + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned", "tag:second"}) + } + }, 10*time.Second, 500*time.Millisecond, "admin tags should be preserved after CLI reduce attempt on server") + + // Verify admin tags are preserved in node's self view after CLI reduce attempt (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-owned", "tag:second"}) + }, 30*time.Second, 500*time.Millisecond, "admin tags should be preserved after CLI reduce attempt in node self") + + t.Logf("Test 3.6 PASS: Admin tags preserved - CLI cannot reduce admin-assigned multi-tag set") +} + +// ============================================================================= +// Test Suite 1: User Login Authentication (Web Auth Flow) +// ============================================================================= + +// TestTagsUserLoginOwnedTagAtRegistration tests that a user can advertise an owned tag +// during web auth registration. +// +// Test 1.1: Advertise owned tag at registration +// Setup: Web auth login with --advertise-tags="tag:valid-owned" +// Expected: Node has ["tag:valid-owned"]. 
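+//
+// Unlike the pre-auth-key suites above, these tests drive the interactive web
+// auth flow: LoginWithURL obtains the login URL, doLoginURL visits it, and
+// runHeadscaleRegister completes the registration on the Headscale side.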
+func TestTagsUserLoginOwnedTagAtRegistration(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, // We'll create the node manually + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnvWithLoginURL( + []tsic.Option{ + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-owned"}), + }, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-webauth-owned"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Create a tailscale client with --advertise-tags + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-owned"}), + ) + require.NoError(t, err) + + // Login via web auth flow + loginURL, err := client.LoginWithURL(headscale.GetEndpoint()) + require.NoError(t, err) + + // Complete the web auth by visiting the login URL + body, err := doLoginURL(client.Hostname(), loginURL) + require.NoError(t, err) + + // Register the node via headscale CLI + err = scenario.runHeadscaleRegister(tagTestUser, body) + require.NoError(t, err) + + // Wait for client to be running + err = client.WaitForRunning(120 * time.Second) + require.NoError(t, err) + + // Verify node has the advertised tag + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Should have exactly 1 node") + + if len(nodes) == 1 { + t.Logf("Node registered with tags: %v", nodes[0].GetTags()) + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned"}) + } + }, 30*time.Second, 500*time.Millisecond, "verifying node has advertised tag") + + t.Logf("Test 1.1 completed - web auth with owned tag succeeded") +} + +// TestTagsUserLoginNonExistentTagAtRegistration tests that advertising a non-existent tag +// during web auth registration fails. +// +// Test 1.2: Advertise non-existent tag at registration +// Setup: Web auth login with --advertise-tags="tag:nonexistent" +// Expected: Registration fails - node should not be registered OR should have no tags. 
+func TestTagsUserLoginNonExistentTagAtRegistration(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnvWithLoginURL( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-webauth-nonexist"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Create a tailscale client with non-existent tag + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:nonexistent"}), + ) + require.NoError(t, err) + + // Login via web auth flow + loginURL, err := client.LoginWithURL(headscale.GetEndpoint()) + require.NoError(t, err) + + // Complete the web auth by visiting the login URL + body, err := doLoginURL(client.Hostname(), loginURL) + require.NoError(t, err) + + // Register the node via headscale CLI - this should fail due to non-existent tag + err = scenario.runHeadscaleRegister(tagTestUser, body) + + // We expect registration to fail with an error about invalid/unauthorized tags + if err != nil { + t.Logf("Test 1.2 PASS: Registration correctly rejected with error: %v", err) + assert.ErrorContains(t, err, "requested tags") + } else { + // Check the result - if registration succeeded, the node should not have the invalid tag + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err, "Should be able to list nodes") + + if len(nodes) == 0 { + t.Logf("Test 1.2 PASS: Registration rejected - no nodes registered") + } else { + // If a node was registered, it should NOT have the non-existent tag + assert.NotContains(c, nodes[0].GetTags(), "tag:nonexistent", + "Non-existent tag should not be applied to node") + t.Logf("Test 1.2: Node registered with tags: %v (non-existent tag correctly rejected)", nodes[0].GetTags()) + } + }, 10*time.Second, 500*time.Millisecond, "checking node registration result") + } +} + +// TestTagsUserLoginUnownedTagAtRegistration tests that advertising an unowned tag +// during web auth registration is rejected. +// +// Test 1.3: Advertise unowned tag at registration +// Setup: Web auth login with --advertise-tags="tag:valid-unowned" +// Expected: Registration fails - node should not be registered OR should have no tags. 
+func TestTagsUserLoginUnownedTagAtRegistration(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnvWithLoginURL( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-webauth-unowned"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Create a tailscale client with unowned tag (tag:valid-unowned is owned by "other-user", not "taguser") + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-unowned"}), + ) + require.NoError(t, err) + + // Login via web auth flow + loginURL, err := client.LoginWithURL(headscale.GetEndpoint()) + require.NoError(t, err) + + // Complete the web auth + body, err := doLoginURL(client.Hostname(), loginURL) + require.NoError(t, err) + + // Register the node - should fail or reject the unowned tag + _ = scenario.runHeadscaleRegister(tagTestUser, body) + + // Check the result - user should NOT be able to claim an unowned tag + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err, "Should be able to list nodes") + + // Either: no nodes registered (ideal), or node registered without the unowned tag + if len(nodes) == 0 { + t.Logf("Test 1.3 PASS: Registration rejected - no nodes registered") + } else { + // If a node was registered, it should NOT have the unowned tag + assert.NotContains(c, nodes[0].GetTags(), "tag:valid-unowned", + "Unowned tag should not be applied to node (tag:valid-unowned is owned by other-user)") + t.Logf("Test 1.3: Node registered with tags: %v (unowned tag correctly rejected)", nodes[0].GetTags()) + } + }, 10*time.Second, 500*time.Millisecond, "checking node registration result") +} + +// TestTagsUserLoginAddTagViaCLIReauth tests that a user can add tags via CLI reauthentication. +// +// Test 1.4: Add tag via CLI reauthentication +// Setup: +// 1. Register with --advertise-tags="tag:valid-owned" +// 2. Run tailscale up --advertise-tags="tag:valid-owned,tag:second" +// +// Expected: Triggers full reauthentication, node has both tags. 
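+//
+// In contrast to the auth-key suites, a web-authenticated node with no
+// admin-assigned tags is expected to be able to change its tags by re-running
+// `tailscale up` with a new --advertise-tags set, as long as tagOwners permits it.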
+func TestTagsUserLoginAddTagViaCLIReauth(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnvWithLoginURL( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-webauth-addtag"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Step 1: Create and register with one tag + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-owned"}), + ) + require.NoError(t, err) + + loginURL, err := client.LoginWithURL(headscale.GetEndpoint()) + require.NoError(t, err) + + body, err := doLoginURL(client.Hostname(), loginURL) + require.NoError(t, err) + + err = scenario.runHeadscaleRegister(tagTestUser, body) + require.NoError(t, err) + + err = client.WaitForRunning(120 * time.Second) + require.NoError(t, err) + + // Verify initial tag + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + t.Logf("Initial tags: %v", nodes[0].GetTags()) + } + }, 30*time.Second, 500*time.Millisecond, "checking initial tags") + + // Step 2: Try to add second tag via CLI + t.Logf("Attempting to add second tag via CLI reauth") + + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--advertise-tags=tag:valid-owned,tag:second", + } + _, stderr, err := client.Execute(command) + t.Logf("CLI result: err=%v, stderr=%s", err, stderr) + + // Check final state - EventuallyWithT handles waiting for propagation + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) >= 1 { + t.Logf("Test 1.4: After CLI, tags are: %v", nodes[0].GetTags()) + + if tagsEqual(nodes[0].GetTags(), []string{"tag:valid-owned", "tag:second"}) { + t.Logf("Test 1.4 PASS: Both tags present after reauth") + } else { + t.Logf("Test 1.4: Tags are %v (may require manual reauth completion)", nodes[0].GetTags()) + } + } + }, 30*time.Second, 500*time.Millisecond, "checking tags after CLI") +} + +// TestTagsUserLoginRemoveTagViaCLIReauth tests that a user can remove tags via CLI reauthentication. +// +// Test 1.5: Remove tag via CLI reauthentication +// Setup: +// 1. Register with --advertise-tags="tag:valid-owned,tag:second" +// 2. Run tailscale up --advertise-tags="tag:valid-owned" +// +// Expected: Triggers full reauthentication, node has only ["tag:valid-owned"]. 
+func TestTagsUserLoginRemoveTagViaCLIReauth(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnvWithLoginURL( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-webauth-rmtag"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Step 1: Create and register with two tags + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-owned,tag:second"}), + ) + require.NoError(t, err) + + loginURL, err := client.LoginWithURL(headscale.GetEndpoint()) + require.NoError(t, err) + + body, err := doLoginURL(client.Hostname(), loginURL) + require.NoError(t, err) + + err = scenario.runHeadscaleRegister(tagTestUser, body) + require.NoError(t, err) + + err = client.WaitForRunning(120 * time.Second) + require.NoError(t, err) + + // Verify initial tags + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + t.Logf("Initial tags: %v", nodes[0].GetTags()) + } + }, 30*time.Second, 500*time.Millisecond, "checking initial tags") + + // Step 2: Try to remove second tag via CLI + t.Logf("Attempting to remove tag via CLI reauth") + + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--advertise-tags=tag:valid-owned", + } + _, stderr, err := client.Execute(command) + t.Logf("CLI result: err=%v, stderr=%s", err, stderr) + + // Check final state - EventuallyWithT handles waiting for propagation + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) >= 1 { + t.Logf("Test 1.5: After CLI, tags are: %v", nodes[0].GetTags()) + + if tagsEqual(nodes[0].GetTags(), []string{"tag:valid-owned"}) { + t.Logf("Test 1.5 PASS: Only one tag after removal") + } + } + }, 30*time.Second, 500*time.Millisecond, "checking tags after CLI") +} + +// TestTagsUserLoginCLINoOpAfterAdminAssignment tests that CLI advertise-tags becomes +// a no-op after admin tag assignment. +// +// Test 1.6: CLI advertise-tags becomes no-op after admin tag assignment +// Setup: +// 1. Register with --advertise-tags="tag:valid-owned" +// 2. Assign ["tag:second"] via headscale CLI +// 3. Run tailscale up --advertise-tags="tag:valid-owned" +// +// Expected: Step 3 does NOT trigger reauthentication, tags remain ["tag:second"]. 
+func TestTagsUserLoginCLINoOpAfterAdminAssignment(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnvWithLoginURL( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-webauth-adminwin"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Step 1: Register with one tag + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-owned"}), + ) + require.NoError(t, err) + + loginURL, err := client.LoginWithURL(headscale.GetEndpoint()) + require.NoError(t, err) + + body, err := doLoginURL(client.Hostname(), loginURL) + require.NoError(t, err) + + err = scenario.runHeadscaleRegister(tagTestUser, body) + require.NoError(t, err) + + err = client.WaitForRunning(120 * time.Second) + require.NoError(t, err) + + // Get node ID + var nodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + nodeID = nodes[0].GetId() + t.Logf("Step 1: Node %d registered with tags: %v", nodeID, nodes[0].GetTags()) + } + }, 30*time.Second, 500*time.Millisecond, "waiting for initial registration") + + // Step 2: Admin assigns different tag + err = headscale.SetNodeTags(nodeID, []string{"tag:second"}) + require.NoError(t, err) + + // Verify admin assignment (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + t.Logf("Step 2: After admin assignment, server tags: %v", nodes[0].GetTags()) + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:second"}) + } + }, 10*time.Second, 500*time.Millisecond, "verifying admin assignment on server") + + // Verify admin assignment propagated to node's self view (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:second"}) + }, 30*time.Second, 500*time.Millisecond, "verifying admin assignment propagated to node self") + + // Step 3: Try to change tags via CLI + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--advertise-tags=tag:valid-owned", + } + _, stderr, err := client.Execute(command) + t.Logf("Step 3 CLI result: err=%v, stderr=%s", err, stderr) + + // Verify admin tags are preserved - CLI advertise-tags should be a no-op after admin assignment (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Should have exactly 1 node") + + if len(nodes) == 1 { + t.Logf("Step 3: After CLI, server tags are: %v", nodes[0].GetTags()) + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:second"}) + } + }, 10*time.Second, 500*time.Millisecond, "admin tags should be preserved - CLI advertise-tags should be no-op on server") + + // Verify admin tags are preserved in node's self view after CLI attempt (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, 
[]string{"tag:second"}) + }, 30*time.Second, 500*time.Millisecond, "admin tags should be preserved - CLI advertise-tags should be no-op in node self") + + t.Logf("Test 1.6 PASS: Admin tags preserved (CLI was no-op)") +} + +// TestTagsUserLoginCLICannotRemoveAdminTags tests that CLI cannot remove admin-assigned tags. +// +// Test 1.7: CLI cannot remove admin-assigned tags +// Setup: +// 1. Register with --advertise-tags="tag:valid-owned" +// 2. Assign ["tag:valid-owned", "tag:second"] via headscale CLI +// 3. Run tailscale up --advertise-tags="tag:valid-owned" +// +// Expected: Command is no-op, tags remain ["tag:valid-owned", "tag:second"]. +func TestTagsUserLoginCLICannotRemoveAdminTags(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnvWithLoginURL( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-webauth-norem"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Step 1: Register with one tag + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-owned"}), + ) + require.NoError(t, err) + + loginURL, err := client.LoginWithURL(headscale.GetEndpoint()) + require.NoError(t, err) + + body, err := doLoginURL(client.Hostname(), loginURL) + require.NoError(t, err) + + err = scenario.runHeadscaleRegister(tagTestUser, body) + require.NoError(t, err) + + err = client.WaitForRunning(120 * time.Second) + require.NoError(t, err) + + // Get node ID + var nodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + nodeID = nodes[0].GetId() + } + }, 30*time.Second, 500*time.Millisecond, "waiting for initial registration") + + // Step 2: Admin assigns both tags + err = headscale.SetNodeTags(nodeID, []string{"tag:valid-owned", "tag:second"}) + require.NoError(t, err) + + // Verify admin assignment (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + t.Logf("After admin assignment, server tags: %v", nodes[0].GetTags()) + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned", "tag:second"}) + } + }, 10*time.Second, 500*time.Millisecond, "verifying admin assignment on server") + + // Verify admin assignment propagated to node's self view (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-owned", "tag:second"}) + }, 30*time.Second, 500*time.Millisecond, "verifying admin assignment propagated to node self") + + // Step 3: Try to reduce tags via CLI + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--advertise-tags=tag:valid-owned", + } + _, stderr, err := client.Execute(command) + t.Logf("CLI result: err=%v, stderr=%s", err, stderr) + + // Verify admin tags are preserved - CLI should not be able to remove admin-assigned tags (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := 
headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Should have exactly 1 node") + + if len(nodes) == 1 { + t.Logf("Test 1.7: After CLI, server tags are: %v", nodes[0].GetTags()) + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned", "tag:second"}) + } + }, 10*time.Second, 500*time.Millisecond, "admin tags should be preserved - CLI cannot remove them on server") + + // Verify admin tags are preserved in node's self view after CLI attempt (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-owned", "tag:second"}) + }, 30*time.Second, 500*time.Millisecond, "admin tags should be preserved - CLI cannot remove them in node self") + + t.Logf("Test 1.7 PASS: Admin tags preserved (CLI cannot remove)") +} + +// ============================================================================= +// Test Suite 2 (continued): Additional Auth Key WITH Tags Tests +// ============================================================================= + +// TestTagsAuthKeyWithTagRequestNonExistentTag tests that requesting a non-existent tag +// with a tagged auth key results in registration failure. +// +// Test 2.7: Request non-existent tag with tagged key +// Setup: Run `tailscale up --advertise-tags="tag:nonexistent" --auth-key AUTH_KEY_WITH_TAG` +// Expected: Registration fails with error containing "requested tags". +func TestTagsAuthKeyWithTagRequestNonExistentTag(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-authkey-nonexist"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create a tagged PreAuthKey with tag:valid-owned + authKey, err := scenario.CreatePreAuthKeyWithTags(userID, false, false, []string{"tag:valid-owned"}) + require.NoError(t, err) + t.Logf("Created tagged PreAuthKey with tags: %v", authKey.GetAclTags()) + + // Create a tailscale client that will try to use --advertise-tags with a NON-EXISTENT tag + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:nonexistent"}), + ) + require.NoError(t, err) + + // Login should fail because ANY advertise-tags is rejected for PreAuthKey registrations + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + if err != nil { + t.Logf("Test 2.7 PASS: Registration correctly rejected with error: %v", err) + assert.ErrorContains(t, err, "requested tags") + } else { + t.Logf("Test 2.7 UNEXPECTED: Registration succeeded when it should have failed") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + t.Logf("Node registered with tags: %v (expected rejection)", nodes[0].GetTags()) + } + }, 10*time.Second, 500*time.Millisecond, "checking node state") + + t.Fail() + } +} + +// TestTagsAuthKeyWithTagRequestUnownedTag tests that requesting an unowned tag +// with a tagged auth key 
results in registration failure. +// +// Test 2.8: Request unowned tag with tagged key +// Setup: Run `tailscale up --advertise-tags="tag:valid-unowned" --auth-key AUTH_KEY_WITH_TAG` +// Expected: Registration fails with error containing "requested tags". +func TestTagsAuthKeyWithTagRequestUnownedTag(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-authkey-unowned"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create a tagged PreAuthKey with tag:valid-owned + authKey, err := scenario.CreatePreAuthKeyWithTags(userID, false, false, []string{"tag:valid-owned"}) + require.NoError(t, err) + t.Logf("Created tagged PreAuthKey with tags: %v", authKey.GetAclTags()) + + // Create a tailscale client that will try to use --advertise-tags with an UNOWNED tag + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-unowned"}), + ) + require.NoError(t, err) + + // Login should fail because ANY advertise-tags is rejected for PreAuthKey registrations + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + if err != nil { + t.Logf("Test 2.8 PASS: Registration correctly rejected with error: %v", err) + assert.ErrorContains(t, err, "requested tags") + } else { + t.Logf("Test 2.8 UNEXPECTED: Registration succeeded when it should have failed") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + t.Logf("Node registered with tags: %v (expected rejection)", nodes[0].GetTags()) + } + }, 10*time.Second, 500*time.Millisecond, "checking node state") + + t.Fail() + } +} + +// ============================================================================= +// Test Suite 3 (continued): Additional Auth Key WITHOUT Tags Tests +// ============================================================================= + +// TestTagsAuthKeyWithoutTagRequestNonExistentTag tests that requesting a non-existent tag +// with a tagless auth key results in registration failure. +// +// Test 3.7: Request non-existent tag with tagless key +// Setup: Run `tailscale up --advertise-tags="tag:nonexistent" --auth-key AUTH_KEY_WITHOUT_TAG` +// Expected: Registration fails with error containing "requested tags". 
+func TestTagsAuthKeyWithoutTagRequestNonExistentTag(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-nokey-nonexist"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create an auth key WITHOUT tags + authKey, err := scenario.CreatePreAuthKey(userID, false, false) + require.NoError(t, err) + t.Logf("Created PreAuthKey without tags") + + // Create a tailscale client that will try to request a NON-EXISTENT tag + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:nonexistent"}), + ) + require.NoError(t, err) + + // Login should fail because ANY advertise-tags is rejected for PreAuthKey registrations + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + if err != nil { + t.Logf("Test 3.7 PASS: Registration correctly rejected: %v", err) + assert.ErrorContains(t, err, "requested tags") + } else { + t.Logf("Test 3.7 UNEXPECTED: Registration succeeded when it should have failed") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + t.Logf("Node registered with tags: %v (expected rejection)", nodes[0].GetTags()) + } + }, 10*time.Second, 500*time.Millisecond, "checking node state") + + t.Fail() + } +} + +// TestTagsAuthKeyWithoutTagRequestUnownedTag tests that requesting an unowned tag +// with a tagless auth key results in registration failure. +// +// Test 3.8: Request unowned tag with tagless key +// Setup: Run `tailscale up --advertise-tags="tag:valid-unowned" --auth-key AUTH_KEY_WITHOUT_TAG` +// Expected: Registration fails with error containing "requested tags". 
+func TestTagsAuthKeyWithoutTagRequestUnownedTag(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-nokey-unowned"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create an auth key WITHOUT tags + authKey, err := scenario.CreatePreAuthKey(userID, false, false) + require.NoError(t, err) + t.Logf("Created PreAuthKey without tags") + + // Create a tailscale client that will try to request an UNOWNED tag + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-unowned"}), + ) + require.NoError(t, err) + + // Login should fail because ANY advertise-tags is rejected for PreAuthKey registrations + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + if err != nil { + t.Logf("Test 3.8 PASS: Registration correctly rejected: %v", err) + assert.ErrorContains(t, err, "requested tags") + } else { + t.Logf("Test 3.8 UNEXPECTED: Registration succeeded when it should have failed") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + t.Logf("Node registered with tags: %v (expected rejection)", nodes[0].GetTags()) + } + }, 10*time.Second, 500*time.Millisecond, "checking node state") + + t.Fail() + } +} + +// ============================================================================= +// Test Suite 4: Admin API (SetNodeTags) Validation Tests +// ============================================================================= + +// TestTagsAdminAPICannotSetNonExistentTag tests that the admin API rejects +// setting a tag that doesn't exist in the policy. +// +// Test 4.1: Admin cannot set non-existent tag +// Setup: Create node, then call SetNodeTags with ["tag:nonexistent"] +// Expected: SetNodeTags returns error. 
+func TestTagsAdminAPICannotSetNonExistentTag(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-admin-nonexist"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create a tagged PreAuthKey to register a node + authKey, err := scenario.CreatePreAuthKeyWithTags(userID, false, false, []string{"tag:valid-owned"}) + require.NoError(t, err) + + // Create and register a tailscale client + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for registration and get node ID + var nodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + nodeID = nodes[0].GetId() + t.Logf("Node %d registered with tags: %v", nodeID, nodes[0].GetTags()) + } + }, 30*time.Second, 500*time.Millisecond, "waiting for registration") + + // Try to set a non-existent tag via admin API - should fail + err = headscale.SetNodeTags(nodeID, []string{"tag:nonexistent"}) + + require.Error(t, err, "SetNodeTags should fail for non-existent tag") + t.Logf("Test 4.1 PASS: Admin API correctly rejected non-existent tag: %v", err) +} + +// TestTagsAdminAPICanSetUnownedTag tests that the admin API CAN set a tag +// that exists in policy but is owned by a different user. +// Admin has full authority over tags - ownership only matters for client requests. +// +// Test 4.2: Admin CAN set unowned tag (admin has full authority) +// Setup: Create node, then call SetNodeTags with ["tag:valid-unowned"] +// Expected: SetNodeTags succeeds (admin can assign any existing tag). 
+func TestTagsAdminAPICanSetUnownedTag(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-admin-unowned"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create a tagged PreAuthKey to register a node + authKey, err := scenario.CreatePreAuthKeyWithTags(userID, false, false, []string{"tag:valid-owned"}) + require.NoError(t, err) + + // Create and register a tailscale client + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for registration and get node ID + var nodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + nodeID = nodes[0].GetId() + t.Logf("Node %d registered with tags: %v", nodeID, nodes[0].GetTags()) + } + }, 30*time.Second, 500*time.Millisecond, "waiting for registration") + + // Admin sets an "unowned" tag - should SUCCEED because admin has full authority + // (tag:valid-unowned is owned by other-user, but admin can assign it) + err = headscale.SetNodeTags(nodeID, []string{"tag:valid-unowned"}) + require.NoError(t, err, "SetNodeTags should succeed for admin setting any existing tag") + + // Verify the tag was applied (server-side) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-unowned"}) + } + }, 10*time.Second, 500*time.Millisecond, "verifying unowned tag was applied on server") + + // Verify the tag was propagated to node's self view (issue #2978) + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-unowned"}) + }, 30*time.Second, 500*time.Millisecond, "verifying unowned tag propagated to node self") + + t.Logf("Test 4.2 PASS: Admin API correctly allowed setting unowned tag") +} + +// TestTagsAdminAPICannotRemoveAllTags tests that the admin API rejects +// removing all tags from a node (would orphan the node). +// +// Test 4.3: Admin cannot remove all tags +// Setup: Create tagged node, then call SetNodeTags with [] +// Expected: SetNodeTags returns error. 
+func TestTagsAdminAPICannotRemoveAllTags(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-admin-empty"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create a tagged PreAuthKey to register a node + authKey, err := scenario.CreatePreAuthKeyWithTags(userID, false, false, []string{"tag:valid-owned"}) + require.NoError(t, err) + + // Create and register a tailscale client + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for registration and get node ID + var nodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + nodeID = nodes[0].GetId() + t.Logf("Node %d registered with tags: %v", nodeID, nodes[0].GetTags()) + } + }, 30*time.Second, 500*time.Millisecond, "waiting for registration") + + // Try to remove all tags - should fail + err = headscale.SetNodeTags(nodeID, []string{}) + + require.Error(t, err, "SetNodeTags should fail when trying to remove all tags") + t.Logf("Test 4.3 PASS: Admin API correctly rejected removing all tags: %v", err) + + // Verify original tags are preserved + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned"}) + } + }, 10*time.Second, 500*time.Millisecond, "verifying original tags preserved") +} + +// assertNetmapSelfHasTagsWithCollect asserts that the client's netmap self node has expected tags. +// This validates at a deeper level than status - directly from tailscale debug netmap. +func assertNetmapSelfHasTagsWithCollect(c *assert.CollectT, client TailscaleClient, expectedTags []string) { + nm, err := client.Netmap() + //nolint:testifylint // must use assert with CollectT in EventuallyWithT + assert.NoError(c, err, "failed to get client netmap") + + if nm == nil { + assert.Fail(c, "client netmap is nil") + return + } + + var actualTagsSlice []string + + if nm.SelfNode.Valid() { + for _, tag := range nm.SelfNode.Tags().All() { + actualTagsSlice = append(actualTagsSlice, tag) + } + } + + sortedActual := append([]string{}, actualTagsSlice...) + sortedExpected := append([]string{}, expectedTags...) + + sort.Strings(sortedActual) + sort.Strings(sortedExpected) + assert.Equal(c, sortedExpected, sortedActual, "Client %s netmap self tags mismatch", client.Hostname()) +} + +// TestTagsIssue2978ReproTagReplacement specifically tests issue #2978: +// When tags are changed on the server, the node's self view should update. +// This test performs multiple tag replacements and checks for immediate propagation. +// +// Issue scenario (from nblock's report): +// 1. 
Node registers via CLI auth with --advertise-tags=tag:foo +// 2. Admin changes tag to tag:bar via headscale CLI/API +// 3. Node's self view should show tag:bar (not tag:foo). +// +// This test uses web auth with --advertise-tags to match the reporter's flow. +func TestTagsIssue2978ReproTagReplacement(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + // Use CreateHeadscaleEnvWithLoginURL for web auth flow + err = scenario.CreateHeadscaleEnvWithLoginURL( + []tsic.Option{ + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-owned"}), + }, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-issue-2978"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Create a tailscale client with --advertise-tags (matching nblock's "cli auth with --advertise-tags=tag:foo") + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-owned"}), + ) + require.NoError(t, err) + + // Login via web auth flow (this is "cli auth" - tailscale up triggers web auth) + loginURL, err := client.LoginWithURL(headscale.GetEndpoint()) + require.NoError(t, err) + + // Complete the web auth by visiting the login URL + body, err := doLoginURL(client.Hostname(), loginURL) + require.NoError(t, err) + + // Register the node via headscale CLI + err = scenario.runHeadscaleRegister(tagTestUser, body) + require.NoError(t, err) + + // Wait for client to be running + err = client.WaitForRunning(120 * time.Second) + require.NoError(t, err) + + // Wait for initial registration with tag:valid-owned + var nodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + nodeID = nodes[0].GetId() + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned"}) + } + }, 30*time.Second, 500*time.Millisecond, "waiting for initial registration") + + // Verify client initially sees tag:valid-owned + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-owned"}) + }, 30*time.Second, 500*time.Millisecond, "client should see initial tag") + + t.Logf("Step 1: Node %d registered via web auth with --advertise-tags=tag:valid-owned, client sees it", nodeID) + + // Step 2: Admin changes tag to tag:second (FIRST CALL - this is "tag:bar" in issue terms) + // According to issue #2978, the first SetNodeTags call updates the server but + // the client's self view does NOT update until a SECOND call with the same tag. 
+ t.Log("Step 2: Calling SetNodeTags FIRST time with tag:second") + + err = headscale.SetNodeTags(nodeID, []string{"tag:second"}) + require.NoError(t, err) + + // Verify server-side update happened + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:second"}) + } + }, 10*time.Second, 500*time.Millisecond, "server should show tag:second after first call") + + t.Log("Step 2a: Server shows tag:second after first call") + + // CRITICAL BUG CHECK: According to nblock, after the first SetNodeTags call, + // the client's self view does NOT update even after waiting ~1 minute. + // We wait 10 seconds and check - if the client STILL shows the OLD tag, + // that demonstrates the bug. If the client shows the NEW tag, the bug is fixed. + t.Log("Step 2b: Waiting 10 seconds to see if client self view updates (bug: it should NOT)") + //nolint:forbidigo // intentional sleep to demonstrate bug timing - client should get update immediately, not after waiting + time.Sleep(10 * time.Second) + + // Check client status after waiting + status, err := client.Status() + require.NoError(t, err) + + var selfTagsAfterFirstCall []string + + if status.Self != nil && status.Self.Tags != nil { + for _, tag := range status.Self.Tags.All() { + selfTagsAfterFirstCall = append(selfTagsAfterFirstCall, tag) + } + } + + t.Logf("Step 2c: Client self tags after FIRST SetNodeTags + 10s wait: %v", selfTagsAfterFirstCall) + + // Also check netmap + nm, nmErr := client.Netmap() + + var netmapTagsAfterFirstCall []string + + if nmErr == nil && nm != nil && nm.SelfNode.Valid() { + for _, tag := range nm.SelfNode.Tags().All() { + netmapTagsAfterFirstCall = append(netmapTagsAfterFirstCall, tag) + } + } + + t.Logf("Step 2d: Client netmap self tags after FIRST SetNodeTags + 10s wait: %v", netmapTagsAfterFirstCall) + + // Step 3: Call SetNodeTags AGAIN with the SAME tag (SECOND CALL) + // According to nblock, this second call with the same tag triggers the update. 
+ t.Log("Step 3: Calling SetNodeTags SECOND time with SAME tag:second") + + err = headscale.SetNodeTags(nodeID, []string{"tag:second"}) + require.NoError(t, err) + + // Now the client should see the update quickly (within a few seconds) + t.Log("Step 3a: Verifying client self view updates after SECOND call") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:second"}) + }, 10*time.Second, 500*time.Millisecond, "client status.Self should update to tag:second after SECOND call") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNetmapSelfHasTagsWithCollect(c, client, []string{"tag:second"}) + }, 10*time.Second, 500*time.Millisecond, "client netmap.SelfNode should update to tag:second after SECOND call") + + t.Log("Step 3b: Client self view updated to tag:second after SECOND call") + + // Step 4: Do another tag change to verify the pattern repeats + t.Log("Step 4: Calling SetNodeTags FIRST time with tag:valid-unowned") + + err = headscale.SetNodeTags(nodeID, []string{"tag:valid-unowned"}) + require.NoError(t, err) + + // Verify server-side update + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) == 1 { + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-unowned"}) + } + }, 10*time.Second, 500*time.Millisecond, "server should show tag:valid-unowned") + + t.Log("Step 4a: Server shows tag:valid-unowned after first call") + + // Wait and check - bug means client still shows old tag + t.Log("Step 4b: Waiting 10 seconds to see if client self view updates (bug: it should NOT)") + //nolint:forbidigo // intentional sleep to demonstrate bug timing - client should get update immediately, not after waiting + time.Sleep(10 * time.Second) + + status, err = client.Status() + require.NoError(t, err) + + var selfTagsAfterSecondChange []string + + if status.Self != nil && status.Self.Tags != nil { + for _, tag := range status.Self.Tags.All() { + selfTagsAfterSecondChange = append(selfTagsAfterSecondChange, tag) + } + } + + t.Logf("Step 4c: Client self tags after FIRST SetNodeTags(tag:valid-unowned) + 10s wait: %v", selfTagsAfterSecondChange) + + // Step 5: Call SetNodeTags AGAIN with the SAME tag + t.Log("Step 5: Calling SetNodeTags SECOND time with SAME tag:valid-unowned") + + err = headscale.SetNodeTags(nodeID, []string{"tag:valid-unowned"}) + require.NoError(t, err) + + // Now the client should see the update quickly + t.Log("Step 5a: Verifying client self view updates after SECOND call") + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNodeSelfHasTagsWithCollect(c, client, []string{"tag:valid-unowned"}) + }, 10*time.Second, 500*time.Millisecond, "client status.Self should update to tag:valid-unowned after SECOND call") + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + assertNetmapSelfHasTagsWithCollect(c, client, []string{"tag:valid-unowned"}) + }, 10*time.Second, 500*time.Millisecond, "client netmap.SelfNode should update to tag:valid-unowned after SECOND call") + + t.Log("Test complete - see logs for bug reproduction details") +} + +// TestTagsAdminAPICannotSetInvalidFormat tests that the admin API rejects +// tags that don't have the correct format (must start with "tag:"). +// +// Test 4.4: Admin cannot set invalid format tag +// Setup: Create node, then call SetNodeTags with ["invalid-no-prefix"] +// Expected: SetNodeTags returns error. 
+func TestTagsAdminAPICannotSetInvalidFormat(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-admin-invalid"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + userMap, err := headscale.MapUsers() + require.NoError(t, err) + + userID := userMap[tagTestUser].GetId() + + // Create a tagged PreAuthKey to register a node + authKey, err := scenario.CreatePreAuthKeyWithTags(userID, false, false, []string{"tag:valid-owned"}) + require.NoError(t, err) + + // Create and register a tailscale client + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + ) + require.NoError(t, err) + + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for registration and get node ID + var nodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + nodeID = nodes[0].GetId() + t.Logf("Node %d registered with tags: %v", nodeID, nodes[0].GetTags()) + } + }, 30*time.Second, 500*time.Millisecond, "waiting for registration") + + // Try to set a tag without the "tag:" prefix - should fail + err = headscale.SetNodeTags(nodeID, []string{"invalid-no-prefix"}) + + require.Error(t, err, "SetNodeTags should fail for invalid tag format") + t.Logf("Test 4.4 PASS: Admin API correctly rejected invalid tag format: %v", err) + + // Verify original tags are preserved + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1) + + if len(nodes) == 1 { + assertNodeHasTagsWithCollect(c, nodes[0], []string{"tag:valid-owned"}) + } + }, 10*time.Second, 500*time.Millisecond, "verifying original tags preserved") +} + +// ============================================================================= +// Test for Issue #2979: Reauth to untag a device +// ============================================================================= + +// TestTagsUserLoginReauthWithEmptyTagsRemovesAllTags tests that reauthenticating +// with an empty tag list (--advertise-tags= --force-reauth) removes all tags +// and returns ownership to the user. +// +// Bug #2979: Reauth to untag a device keeps it tagged +// Setup: Register a node with tags via user login, then reauth with --advertise-tags= --force-reauth +// Expected: Node should have no tags and ownership should return to the user. +// +// Note: This only works with --force-reauth because without it, the Tailscale +// client doesn't trigger a full reauth to the server - it only updates local state. 
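+//
+// In shape, the reauth invocation this test drives is (the --hostname flag is
+// repeated to match the initial login, as noted in the test body):
+//
+//	tailscale up --login-server=<headscale-url> --hostname=<node> \
+//		--advertise-tags= --force-reauth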
+func TestTagsUserLoginReauthWithEmptyTagsRemovesAllTags(t *testing.T) { + IntegrationSkip(t) + + t.Run("with force-reauth", func(t *testing.T) { + tc := struct { + name string + testName string + forceReauth bool + }{ + name: "with force-reauth", + testName: "with-force-reauth", + forceReauth: true, + } + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnvWithLoginURL( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-reauth-untag-2979-"+tc.testName), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Step 1: Create and register a node with tags + t.Logf("Step 1: Registering node with tags") + + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:valid-owned,tag:second"}), + ) + require.NoError(t, err) + + loginURL, err := client.LoginWithURL(headscale.GetEndpoint()) + require.NoError(t, err) + + body, err := doLoginURL(client.Hostname(), loginURL) + require.NoError(t, err) + + err = scenario.runHeadscaleRegister(tagTestUser, body) + require.NoError(t, err) + + err = client.WaitForRunning(120 * time.Second) + require.NoError(t, err) + + // Verify initial tags + var initialNodeID uint64 + + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Expected exactly one node") + + if len(nodes) == 1 { + node := nodes[0] + initialNodeID = node.GetId() + t.Logf("Initial state - Node ID: %d, Tags: %v, User: %s", + node.GetId(), node.GetTags(), node.GetUser().GetName()) + + // Verify node has the expected tags + assertNodeHasTagsWithCollect(c, node, []string{"tag:valid-owned", "tag:second"}) + } + }, 30*time.Second, 500*time.Millisecond, "checking initial tags") + + // Step 2: Reauth with empty tags to remove all tags + t.Logf("Step 2: Reauthenticating with empty tag list to untag device (%s)", tc.name) + + if tc.forceReauth { + // Manually run tailscale up with --force-reauth and empty tags + // This will output a login URL that we need to complete + // Include --hostname to match the initial login command + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--hostname=" + client.Hostname(), + "--advertise-tags=", + "--force-reauth", + } + + stdout, stderr, _ := client.Execute(command) + t.Logf("Reauth command stderr: %s", stderr) + + // Parse the login URL from the command output + loginURL, err := util.ParseLoginURLFromCLILogin(stdout + stderr) + require.NoError(t, err, "Failed to parse login URL from reauth command") + t.Logf("Reauth login URL: %s", loginURL) + + body, err := doLoginURL(client.Hostname(), loginURL) + require.NoError(t, err) + + err = scenario.runHeadscaleRegister(tagTestUser, body) + require.NoError(t, err) + + err = client.WaitForRunning(120 * time.Second) + require.NoError(t, err) + t.Logf("Completed reauth with empty tags") + } else { + // Without force-reauth, just try tailscale up + // Include --hostname to match the initial login command + command := []string{ + "tailscale", "up", + "--login-server=" + headscale.GetEndpoint(), + "--hostname=" + client.Hostname(), + "--advertise-tags=", + } + 
stdout, stderr, err := client.Execute(command) + t.Logf("CLI reauth result: err=%v, stdout=%s, stderr=%s", err, stdout, stderr) + } + + // Step 3: Verify tags are removed and ownership is returned to user + // This is the key assertion for bug #2979 + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes(tagTestUser) + assert.NoError(c, err) + + if len(nodes) >= 1 { + node := nodes[0] + t.Logf("After reauth - Node ID: %d, Tags: %v, User: %s", + node.GetId(), node.GetTags(), node.GetUser().GetName()) + + // Assert: Node should have NO tags + assertNodeHasNoTagsWithCollect(c, node) + + // Assert: Node should be owned by the user (not tagged-devices) + assert.Equal(c, tagTestUser, node.GetUser().GetName(), + "Node ownership should return to user %s after untagging", tagTestUser) + + // Verify the node ID is still the same (not a new registration) + assert.Equal(c, initialNodeID, node.GetId(), + "Node ID should remain the same after reauth") + + if len(node.GetTags()) == 0 && node.GetUser().GetName() == tagTestUser { + t.Logf("Test #2979 (%s) PASS: Node successfully untagged and ownership returned to user", tc.name) + } else { + t.Logf("Test #2979 (%s) FAIL: Expected no tags and user=%s, got tags=%v user=%s", + tc.name, tagTestUser, node.GetTags(), node.GetUser().GetName()) + } + } + }, 60*time.Second, 1*time.Second, "verifying tags removed and ownership returned") + }) +} + +// ============================================================================= +// Test Suite 5: Auth Key WITHOUT User (Tags-Only Ownership) +// ============================================================================= + +// TestTagsAuthKeyWithoutUserInheritsTags tests that when an auth key without a user +// (tags-only) is used without --advertise-tags, the node inherits the key's tags. +// +// Test 5.1: Auth key without user, no --advertise-tags flag +// Setup: Run `tailscale up --auth-key AUTH_KEY_WITH_TAGS_NO_USER` +// Expected: Node registers with the tags from the auth key. 
+func TestTagsAuthKeyWithoutUserInheritsTags(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-authkey-no-user-inherit"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Create an auth key with tags but WITHOUT a user + authKey, err := scenario.CreatePreAuthKeyWithOptions(hsic.AuthKeyOptions{ + User: nil, + Reusable: false, + Ephemeral: false, + Tags: []string{"tag:valid-owned"}, + }) + require.NoError(t, err) + t.Logf("Created tags-only PreAuthKey with tags: %v", authKey.GetAclTags()) + + // Create a tailscale client WITHOUT --advertise-tags + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + // Note: NO WithExtraLoginArgs for --advertise-tags + ) + require.NoError(t, err) + + // Login with the tags-only auth key + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + require.NoError(t, err) + + // Wait for node to be registered and verify it has the key's tags + // Note: Tags-only nodes don't have a user, so we list all nodes + assert.EventuallyWithT(t, func(c *assert.CollectT) { + nodes, err := headscale.ListNodes() + assert.NoError(c, err) + assert.Len(c, nodes, 1, "Should have exactly 1 node") + + if len(nodes) == 1 { + node := nodes[0] + t.Logf("Node registered with tags: %v", node.GetTags()) + assertNodeHasTagsWithCollect(c, node, []string{"tag:valid-owned"}) + } + }, 30*time.Second, 500*time.Millisecond, "verifying node inherited tags from auth key") + + t.Logf("Test 5.1 PASS: Node inherited tags from tags-only auth key") +} + +// TestTagsAuthKeyWithoutUserRejectsAdvertisedTags tests that when an auth key without +// a user (tags-only) is used WITH --advertise-tags, the registration is rejected. +// PreAuthKey registrations do not allow client-requested tags. +// +// Test 5.2: Auth key without user, with --advertise-tags (should be rejected) +// Setup: Run `tailscale up --advertise-tags="tag:second" --auth-key AUTH_KEY_WITH_TAGS_NO_USER` +// Expected: Registration fails with error containing "requested tags". 
+func TestTagsAuthKeyWithoutUserRejectsAdvertisedTags(t *testing.T) { + IntegrationSkip(t) + + policy := tagsTestPolicy() + + spec := ScenarioSpec{ + NodesPerUser: 0, + Users: []string{tagTestUser}, + } + + scenario, err := NewScenario(spec) + + require.NoError(t, err) + defer scenario.ShutdownAssertNoPanics(t) + + err = scenario.CreateHeadscaleEnv( + []tsic.Option{}, + hsic.WithACLPolicy(policy), + hsic.WithTestName("tags-authkey-no-user-reject-advertise"), + hsic.WithTLS(), + ) + requireNoErrHeadscaleEnv(t, err) + + headscale, err := scenario.Headscale() + requireNoErrGetHeadscale(t, err) + + // Create an auth key with tags but WITHOUT a user + authKey, err := scenario.CreatePreAuthKeyWithOptions(hsic.AuthKeyOptions{ + User: nil, + Reusable: false, + Ephemeral: false, + Tags: []string{"tag:valid-owned"}, + }) + require.NoError(t, err) + t.Logf("Created tags-only PreAuthKey with tags: %v", authKey.GetAclTags()) + + // Create a tailscale client WITH --advertise-tags for a DIFFERENT tag + client, err := scenario.CreateTailscaleNode( + "head", + tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]), + tsic.WithExtraLoginArgs([]string{"--advertise-tags=tag:second"}), + ) + require.NoError(t, err) + + // Login should fail because ANY advertise-tags is rejected for PreAuthKey registrations + err = client.Login(headscale.GetEndpoint(), authKey.GetKey()) + if err != nil { + t.Logf("Test 5.2 PASS: Registration correctly rejected with error: %v", err) + assert.ErrorContains(t, err, "requested tags") + } else { + t.Logf("Test 5.2 UNEXPECTED: Registration succeeded when it should have failed") + t.Fail() + } +} diff --git a/integration/tsic/tsic.go b/integration/tsic/tsic.go index 462c3ea3..fb07896b 100644 --- a/integration/tsic/tsic.go +++ b/integration/tsic/tsic.go @@ -14,6 +14,7 @@ import ( "os" "reflect" "runtime/debug" + "slices" "strconv" "strings" "time" @@ -54,6 +55,10 @@ var ( errTailscaleNotConnected = errors.New("tailscale not connected") errTailscaledNotReadyForLogin = errors.New("tailscaled not ready for login") errInvalidClientConfig = errors.New("verifiably invalid client config requested") + errInvalidTailscaleImageFormat = errors.New("invalid HEADSCALE_INTEGRATION_TAILSCALE_IMAGE format, expected repository:tag") + errTailscaleImageRequiredInCI = errors.New("HEADSCALE_INTEGRATION_TAILSCALE_IMAGE must be set in CI for HEAD version") + errContainerNotInitialized = errors.New("container not initialized") + errFQDNNotYetAvailable = errors.New("FQDN not yet available") ) const ( @@ -90,6 +95,9 @@ type TailscaleInContainer struct { netfilter string extraLoginArgs []string withAcceptRoutes bool + withPackages []string // Alpine packages to install at container start + withWebserverPort int // Port for built-in HTTP server (0 = disabled) + withExtraCommands []string // Extra shell commands to run before tailscaled // build options, solely for HEAD buildConfig TailscaleInContainerBuildConfig @@ -212,6 +220,82 @@ func WithAcceptRoutes() Option { } } +// WithPackages specifies Alpine packages to install when the container starts. +// This requires internet access and uses `apk add`. Common packages: +// - "python3" for HTTP server +// - "curl" for HTTP client +// - "bind-tools" for dig command +// - "iptables", "ip6tables" for firewall rules +// Note: Tests using this option require internet access and cannot use +// the built-in DERP server in offline mode. 
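+//
+// A rough combined usage sketch of this option together with WithWebserver and
+// WithExtraCommands below (the combination is illustrative, not taken from any
+// test in this change; CreateTailscaleNode is assumed to forward these
+// tsic.Option values unchanged, as it does for WithNetwork in the tag tests):
+//
+//	client, err := scenario.CreateTailscaleNode(
+//		"head",
+//		tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]),
+//		tsic.WithPackages("curl", "bind-tools"),
+//		tsic.WithWebserver(8080),
+//		tsic.WithExtraCommands("echo ok > /index.html"),
+//	)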
+func WithPackages(packages ...string) Option { + return func(tsic *TailscaleInContainer) { + tsic.withPackages = append(tsic.withPackages, packages...) + } +} + +// WithWebserver starts a Python HTTP server on the specified port +// alongside tailscaled. This is useful for testing subnet routing +// and ACL connectivity. Automatically adds "python3" to packages if needed. +// The server serves files from the root directory (/). +func WithWebserver(port int) Option { + return func(tsic *TailscaleInContainer) { + tsic.withWebserverPort = port + } +} + +// WithExtraCommands adds extra shell commands to run before tailscaled starts. +// Commands are run after package installation and CA certificate updates. +func WithExtraCommands(commands ...string) Option { + return func(tsic *TailscaleInContainer) { + tsic.withExtraCommands = append(tsic.withExtraCommands, commands...) + } +} + +// buildEntrypoint constructs the container entrypoint command based on +// configured options (packages, webserver, etc.). +func (t *TailscaleInContainer) buildEntrypoint() []string { + var commands []string + + // Wait for network to be ready + commands = append(commands, "while ! ip route show default >/dev/null 2>&1; do sleep 0.1; done") + + // If CA certs are configured, wait for them to be written by the Go code + // (certs are written after container start via tsic.WriteFile) + if len(t.caCerts) > 0 { + commands = append(commands, + fmt.Sprintf("while [ ! -f %s/user-0.crt ]; do sleep 0.1; done", caCertRoot)) + } + + // Install packages if requested (requires internet access) + packages := t.withPackages + if t.withWebserverPort > 0 && !slices.Contains(packages, "python3") { + packages = append(packages, "python3") + } + + if len(packages) > 0 { + commands = append(commands, "apk add --no-cache "+strings.Join(packages, " ")) + } + + // Update CA certificates + commands = append(commands, "update-ca-certificates") + + // Run extra commands if any + commands = append(commands, t.withExtraCommands...) + + // Start webserver in background if requested + // Use subshell to avoid & interfering with command joining + if t.withWebserverPort > 0 { + commands = append(commands, + fmt.Sprintf("(python3 -m http.server --bind :: %d &)", t.withWebserverPort)) + } + + // Start tailscaled (must be last as it's the foreground process) + commands = append(commands, "tailscaled --tun=tsdev --verbose=10") + + return []string{"/bin/sh", "-c", strings.Join(commands, " ; ")} +} + // New returns a new TailscaleInContainer instance. func New( pool *dockertest.Pool, @@ -223,25 +307,36 @@ func New( return nil, err } - hostname := fmt.Sprintf("ts-%s-%s", strings.ReplaceAll(version, ".", "-"), hash) + // Include run ID in hostname for easier identification of which test run owns this container + runID := dockertestutil.GetIntegrationRunID() + + var hostname string + + if runID != "" { + // Use last 6 chars of run ID (the random hash part) for brevity + runIDShort := runID[len(runID)-6:] + hostname = fmt.Sprintf("ts-%s-%s-%s", runIDShort, strings.ReplaceAll(version, ".", "-"), hash) + } else { + hostname = fmt.Sprintf("ts-%s-%s", strings.ReplaceAll(version, ".", "-"), hash) + } tsic := &TailscaleInContainer{ version: version, hostname: hostname, pool: pool, - - withEntrypoint: []string{ - "/bin/sh", - "-c", - "/bin/sleep 3 ; update-ca-certificates ; tailscaled --tun=tsdev --verbose=10", - }, } for _, opt := range opts { opt(tsic) } + // Build the entrypoint command dynamically based on options. 
+ // Only build if no custom entrypoint was provided via WithDockerEntrypoint. + if len(tsic.withEntrypoint) == 0 { + tsic.withEntrypoint = tsic.buildEntrypoint() + } + if tsic.network == nil { return nil, fmt.Errorf("no network set, called from: \n%s", string(debug.Stack())) } @@ -291,6 +386,7 @@ func New( // build options are not meaningful with pre-existing images, // let's not lead anyone astray by pretending otherwise. defaultBuildConfig := TailscaleInContainerBuildConfig{} + hasBuildConfig := !reflect.DeepEqual(defaultBuildConfig, tsic.buildConfig) if hasBuildConfig { return tsic, errInvalidClientConfig @@ -299,42 +395,117 @@ func New( switch version { case VersionHead: - buildOptions := &dockertest.BuildOptions{ - Dockerfile: "Dockerfile.tailscale-HEAD", - ContextDir: dockerContextPath, - BuildArgs: []docker.BuildArg{}, + // Check if a pre-built image is available via environment variable + prebuiltImage := os.Getenv("HEADSCALE_INTEGRATION_TAILSCALE_IMAGE") + + // If custom build tags are required (e.g., for websocket DERP), we cannot use + // the pre-built image as it won't have the necessary code compiled in. + hasBuildTags := len(tsic.buildConfig.tags) > 0 + if hasBuildTags && prebuiltImage != "" { + log.Printf("Ignoring pre-built image %s because custom build tags are required: %v", + prebuiltImage, tsic.buildConfig.tags) + prebuiltImage = "" } - buildTags := strings.Join(tsic.buildConfig.tags, ",") - if len(buildTags) > 0 { - buildOptions.BuildArgs = append( - buildOptions.BuildArgs, - docker.BuildArg{ - Name: "BUILD_TAGS", - Value: buildTags, - }, + if prebuiltImage != "" { + log.Printf("Using pre-built tailscale image: %s", prebuiltImage) + + // Parse image into repository and tag + repo, tag, ok := strings.Cut(prebuiltImage, ":") + if !ok { + return nil, errInvalidTailscaleImageFormat + } + + tailscaleOptions.Repository = repo + tailscaleOptions.Tag = tag + + container, err = pool.RunWithOptions( + tailscaleOptions, + dockertestutil.DockerRestartPolicy, + dockertestutil.DockerAllowLocalIPv6, + dockertestutil.DockerAllowNetworkAdministration, + dockertestutil.DockerMemoryLimit, ) - } + if err != nil { + return nil, fmt.Errorf("could not run pre-built tailscale container %q: %w", prebuiltImage, err) + } + } else if util.IsCI() && !hasBuildTags { + // In CI, we require a pre-built image unless custom build tags are needed + return nil, errTailscaleImageRequiredInCI + } else { + buildOptions := &dockertest.BuildOptions{ + Dockerfile: "Dockerfile.tailscale-HEAD", + ContextDir: dockerContextPath, + BuildArgs: []docker.BuildArg{}, + } - container, err = pool.BuildAndRunWithBuildOptions( - buildOptions, - tailscaleOptions, - dockertestutil.DockerRestartPolicy, - dockertestutil.DockerAllowLocalIPv6, - dockertestutil.DockerAllowNetworkAdministration, - dockertestutil.DockerMemoryLimit, - ) - if err != nil { - // Try to get more detailed build output - log.Printf("Docker build failed for %s, attempting to get detailed output...", hostname) - buildOutput := dockertestutil.RunDockerBuildForDiagnostics(dockerContextPath, "Dockerfile.tailscale-HEAD") - if buildOutput != "" { + buildTags := strings.Join(tsic.buildConfig.tags, ",") + if len(buildTags) > 0 { + buildOptions.BuildArgs = append( + buildOptions.BuildArgs, + docker.BuildArg{ + Name: "BUILD_TAGS", + Value: buildTags, + }, + ) + } + + container, err = pool.BuildAndRunWithBuildOptions( + buildOptions, + tailscaleOptions, + dockertestutil.DockerRestartPolicy, + dockertestutil.DockerAllowLocalIPv6, + 
dockertestutil.DockerAllowNetworkAdministration, + dockertestutil.DockerMemoryLimit, + ) + if err != nil { + // Try to get more detailed build output + log.Printf("Docker build failed for %s, attempting to get detailed output...", hostname) + + buildOutput, buildErr := dockertestutil.RunDockerBuildForDiagnostics(dockerContextPath, "Dockerfile.tailscale-HEAD") + + // Show the last 100 lines of build output to avoid overwhelming the logs + lines := strings.Split(buildOutput, "\n") + + const maxLines = 100 + + startLine := 0 + if len(lines) > maxLines { + startLine = len(lines) - maxLines + } + + relevantOutput := strings.Join(lines[startLine:], "\n") + + if buildErr != nil { + // The diagnostic build also failed - this is the real error + return nil, fmt.Errorf( + "%s could not start tailscale container (version: %s): %w\n\nDocker build failed. Last %d lines of output:\n%s", + hostname, + version, + err, + maxLines, + relevantOutput, + ) + } + + if buildOutput != "" { + // Build succeeded on retry but container creation still failed + return nil, fmt.Errorf( + "%s could not start tailscale container (version: %s): %w\n\nDocker build succeeded on retry, but container creation failed. Last %d lines of build output:\n%s", + hostname, + version, + err, + maxLines, + relevantOutput, + ) + } + + // No output at all - diagnostic build command may have failed return nil, fmt.Errorf( - "%s could not start tailscale container (version: %s): %w\n\nDetailed build output:\n%s", + "%s could not start tailscale container (version: %s): %w\n\nUnable to get diagnostic build output (command may have failed silently)", hostname, version, err, - buildOutput, ) } } @@ -376,6 +547,7 @@ func New( err, ) } + log.Printf("Created %s container\n", hostname) tsic.container = container @@ -435,7 +607,6 @@ func (t *TailscaleInContainer) Execute( if err != nil { // log.Printf("command issued: %s", strings.Join(command, " ")) // log.Printf("command stderr: %s\n", stderr) - if stdout != "" { log.Printf("command stdout: %s\n", stdout) } @@ -561,7 +732,7 @@ func (t *TailscaleInContainer) Logout() error { // "tailscale up" with any auth keys stored in environment variables. 
func (t *TailscaleInContainer) Restart() error { if t.container == nil { - return fmt.Errorf("container not initialized") + return errContainerNotInitialized } // Use Docker API to restart the container @@ -578,9 +749,9 @@ func (t *TailscaleInContainer) Restart() error { if err != nil { return struct{}{}, fmt.Errorf("container not ready: %w", err) } + return struct{}{}, nil }, backoff.WithBackOff(backoff.NewExponentialBackOff()), backoff.WithMaxElapsedTime(30*time.Second)) - if err != nil { return fmt.Errorf("timeout waiting for container %s to restart and become ready: %w", t.hostname, err) } @@ -645,15 +816,18 @@ func (t *TailscaleInContainer) IPs() ([]netip.Addr, error) { } ips := make([]netip.Addr, 0) + for address := range strings.SplitSeq(result, "\n") { address = strings.TrimSuffix(address, "\n") if len(address) < 1 { continue } + ip, err := netip.ParseAddr(address) if err != nil { return nil, fmt.Errorf("failed to parse IP %s: %w", address, err) } + ips = append(ips, ip) } @@ -675,6 +849,7 @@ func (t *TailscaleInContainer) MustIPs() []netip.Addr { if err != nil { panic(err) } + return ips } @@ -699,6 +874,7 @@ func (t *TailscaleInContainer) MustIPv4() netip.Addr { if err != nil { panic(err) } + return ip } @@ -708,6 +884,7 @@ func (t *TailscaleInContainer) MustIPv6() netip.Addr { return ip } } + panic("no ipv6 found") } @@ -725,6 +902,7 @@ func (t *TailscaleInContainer) Status(save ...bool) (*ipnstate.Status, error) { } var status ipnstate.Status + err = json.Unmarshal([]byte(result), &status) if err != nil { return nil, fmt.Errorf("failed to unmarshal tailscale status: %w", err) @@ -784,6 +962,7 @@ func (t *TailscaleInContainer) Netmap() (*netmap.NetworkMap, error) { } var nm netmap.NetworkMap + err = json.Unmarshal([]byte(result), &nm) if err != nil { return nil, fmt.Errorf("failed to unmarshal tailscale netmap: %w", err) @@ -829,6 +1008,7 @@ func (t *TailscaleInContainer) watchIPN(ctx context.Context) (*ipn.Notify, error notify *ipn.Notify err error } + resultChan := make(chan result, 1) // There is no good way to kill the goroutine with watch-ipn, @@ -860,7 +1040,9 @@ func (t *TailscaleInContainer) watchIPN(ctx context.Context) (*ipn.Notify, error decoder := json.NewDecoder(pr) for decoder.More() { var notify ipn.Notify - if err := decoder.Decode(¬ify); err != nil { + + err := decoder.Decode(¬ify) + if err != nil { resultChan <- result{nil, fmt.Errorf("parse notify: %w", err)} } @@ -907,6 +1089,7 @@ func (t *TailscaleInContainer) DebugDERPRegion(region string) (*ipnstate.DebugDE } var report ipnstate.DebugDERPRegionReport + err = json.Unmarshal([]byte(result), &report) if err != nil { return nil, fmt.Errorf("failed to unmarshal tailscale derp region report: %w", err) @@ -930,6 +1113,7 @@ func (t *TailscaleInContainer) Netcheck() (*netcheck.Report, error) { } var nm netcheck.Report + err = json.Unmarshal([]byte(result), &nm) if err != nil { return nil, fmt.Errorf("failed to unmarshal tailscale netcheck: %w", err) @@ -952,7 +1136,7 @@ func (t *TailscaleInContainer) FQDN() (string, error) { } if status.Self.DNSName == "" { - return "", fmt.Errorf("FQDN not yet available") + return "", errFQDNNotYetAvailable } return status.Self.DNSName, nil @@ -970,6 +1154,7 @@ func (t *TailscaleInContainer) MustFQDN() string { if err != nil { panic(err) } + return fqdn } @@ -1063,12 +1248,14 @@ func (t *TailscaleInContainer) WaitForPeers(expected int, timeout, retryInterval defer cancel() var lastErrs []error + for { select { case <-ctx.Done(): if len(lastErrs) > 0 { return fmt.Errorf("timeout 
waiting for %d peers on %s after %v, errors: %w", expected, t.hostname, timeout, multierr.New(lastErrs...)) } + return fmt.Errorf("timeout waiting for %d peers on %s after %v", expected, t.hostname, timeout) case <-ticker.C: status, err := t.Status() @@ -1092,6 +1279,7 @@ func (t *TailscaleInContainer) WaitForPeers(expected int, timeout, retryInterval // Verify that the peers of a given node is Online // has a hostname and a DERP relay. var peerErrors []error + for _, peerKey := range status.Peers() { peer := status.Peer[peerKey] @@ -1285,6 +1473,7 @@ func (t *TailscaleInContainer) Curl(url string, opts ...CurlOption) (string, err } var result string + result, _, err := t.Execute(command) if err != nil { log.Printf( @@ -1318,6 +1507,7 @@ func (t *TailscaleInContainer) Traceroute(ip netip.Addr) (util.Traceroute, error } var result util.Traceroute + stdout, stderr, err := t.Execute(command) if err != nil { return result, err @@ -1363,12 +1553,14 @@ func (t *TailscaleInContainer) ReadFile(path string) ([]byte, error) { } var out bytes.Buffer + tr := tar.NewReader(bytes.NewReader(tarBytes)) for { hdr, err := tr.Next() if err == io.EOF { break // End of archive } + if err != nil { return nil, fmt.Errorf("reading tar header: %w", err) } @@ -1397,6 +1589,7 @@ func (t *TailscaleInContainer) GetNodePrivateKey() (*key.NodePrivate, error) { if err != nil { return nil, fmt.Errorf("failed to read state file: %w", err) } + store := &mem.Store{} if err = store.LoadFromJSON(state); err != nil { return nil, fmt.Errorf("failed to unmarshal state file: %w", err) @@ -1406,6 +1599,7 @@ func (t *TailscaleInContainer) GetNodePrivateKey() (*key.NodePrivate, error) { if err != nil { return nil, fmt.Errorf("failed to read current profile state key: %w", err) } + currentProfile, err := store.ReadState(ipn.StateKey(currentProfileKey)) if err != nil { return nil, fmt.Errorf("failed to read current profile state: %w", err) diff --git a/mkdocs.yml b/mkdocs.yml index 45634ece..c5d82eec 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -41,20 +41,26 @@ theme: - toc.follow # - toc.integrate palette: - - scheme: default + - media: "(prefers-color-scheme)" + toggle: + icon: material/brightness-auto + name: Switch to light mode + - media: "(prefers-color-scheme: light)" + scheme: default primary: white toggle: icon: material/brightness-7 name: Switch to dark mode - - scheme: slate + - media: "(prefers-color-scheme: dark)" + scheme: slate toggle: icon: material/brightness-4 - name: Switch to light mode + name: Switch to system preference font: text: Roboto code: Roboto Mono favicon: assets/favicon.png - logo: ./logo/headscale3-dots.svg + logo: assets/logo/headscale3-dots.svg # Excludes exclude_docs: | @@ -77,11 +83,12 @@ plugins: apple-client.md: usage/connect/apple.md dns-records.md: ref/dns.md exit-node.md: ref/routes.md - ref/exit-node.md: ref/routes.md faq.md: about/faq.md iOS-client.md: usage/connect/apple.md#ios oidc.md: ref/oidc.md - remote-cli.md: ref/remote-cli.md + ref/exit-node.md: ref/routes.md + ref/remote-cli.md: ref/api.md#grpc + remote-cli.md: ref/api.md#grpc reverse-proxy.md: ref/integration/reverse-proxy.md tls.md: ref/tls.md web-ui.md: ref/integration/web-ui.md @@ -104,7 +111,7 @@ extra: - icon: fontawesome/brands/discord link: https://discord.gg/c84AZQhmpx headscale: - version: 0.27.1 + version: 0.28.0-beta.1 # Extensions markdown_extensions: @@ -182,7 +189,7 @@ nav: - ACLs: ref/acls.md - DNS: ref/dns.md - DERP: ref/derp.md - - Remote CLI: ref/remote-cli.md + - API: ref/api.md - Debug: ref/debug.md - Integration: 
- Reverse proxy: ref/integration/reverse-proxy.md diff --git a/nix/README.md b/nix/README.md new file mode 100644 index 00000000..533e4b5e --- /dev/null +++ b/nix/README.md @@ -0,0 +1,41 @@ +# Headscale NixOS Module + +This directory contains the NixOS module for Headscale. + +## Rationale + +The module is maintained in this repository to keep the code and module +synchronized at the same commit. This allows faster iteration and ensures the +module stays compatible with the latest Headscale changes. All changes should +aim to be upstreamed to nixpkgs. + +## Files + +- **[`module.nix`](./module.nix)** - The NixOS module implementation +- **[`example-configuration.nix`](./example-configuration.nix)** - Example + configuration demonstrating all major features +- **[`tests/`](./tests/)** - NixOS integration tests + +## Usage + +Add to your flake inputs: + +```nix +inputs.headscale.url = "github:juanfont/headscale"; +``` + +Then import the module: + +```nix +imports = [ inputs.headscale.nixosModules.default ]; +``` + +See [`example-configuration.nix`](./example-configuration.nix) for configuration +options. + +## Upstream + +- [nixpkgs module](https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/networking/headscale.nix) +- [nixpkgs package](https://github.com/NixOS/nixpkgs/blob/master/pkgs/by-name/he/headscale/package.nix) + +The module in this repository may be newer than the nixpkgs version. diff --git a/nix/example-configuration.nix b/nix/example-configuration.nix new file mode 100644 index 00000000..e1f6cec7 --- /dev/null +++ b/nix/example-configuration.nix @@ -0,0 +1,145 @@ +# Example NixOS configuration using the headscale module +# +# This file demonstrates how to use the headscale NixOS module from this flake. +# To use in your own configuration, add this to your flake.nix inputs: +# +# inputs.headscale.url = "github:juanfont/headscale"; +# +# Then import the module: +# +# imports = [ inputs.headscale.nixosModules.default ]; +# + +{ config, pkgs, ... 
}: + +{ + # Import the headscale module + # In a real configuration, this would come from the flake input + # imports = [ inputs.headscale.nixosModules.default ]; + + services.headscale = { + enable = true; + + # Optional: Use a specific package (defaults to pkgs.headscale) + # package = pkgs.headscale; + + # Listen on all interfaces (default is 127.0.0.1) + address = "0.0.0.0"; + port = 8080; + + settings = { + # The URL clients will connect to + server_url = "https://headscale.example.com"; + + # IP prefixes for the tailnet + # These use the freeform settings - you can set any headscale config option + prefixes = { + v4 = "100.64.0.0/10"; + v6 = "fd7a:115c:a1e0::/48"; + allocation = "sequential"; + }; + + # DNS configuration with MagicDNS + dns = { + magic_dns = true; + base_domain = "tailnet.example.com"; + + # Whether to override client's local DNS settings (default: true) + # When true, nameservers.global must be set + override_local_dns = true; + + nameservers = { + global = [ "1.1.1.1" "8.8.8.8" ]; + }; + }; + + # DERP (relay) configuration + derp = { + # Use default Tailscale DERP servers + urls = [ "https://controlplane.tailscale.com/derpmap/default" ]; + auto_update_enabled = true; + update_frequency = "24h"; + + # Optional: Run your own DERP server + # server = { + # enabled = true; + # region_id = 999; + # stun_listen_addr = "0.0.0.0:3478"; + # }; + }; + + # Database configuration (SQLite is recommended) + database = { + type = "sqlite"; + sqlite = { + path = "/var/lib/headscale/db.sqlite"; + write_ahead_log = true; + }; + + # PostgreSQL example (not recommended for new deployments) + # type = "postgres"; + # postgres = { + # host = "localhost"; + # port = 5432; + # name = "headscale"; + # user = "headscale"; + # password_file = "/run/secrets/headscale-db-password"; + # }; + }; + + # Logging configuration + log = { + level = "info"; + format = "text"; + }; + + # Optional: OIDC authentication + # oidc = { + # issuer = "https://accounts.google.com"; + # client_id = "your-client-id"; + # client_secret_path = "/run/secrets/oidc-client-secret"; + # scope = [ "openid" "profile" "email" ]; + # allowed_domains = [ "example.com" ]; + # }; + + # Optional: Let's Encrypt TLS certificates + # tls_letsencrypt_hostname = "headscale.example.com"; + # tls_letsencrypt_challenge_type = "HTTP-01"; + + # Optional: Provide your own TLS certificates + # tls_cert_path = "/path/to/cert.pem"; + # tls_key_path = "/path/to/key.pem"; + + # ACL policy configuration + policy = { + mode = "file"; + path = "/var/lib/headscale/policy.hujson"; + }; + + # You can add ANY headscale configuration option here thanks to freeform settings + # For example, experimental features or settings not explicitly defined above: + # experimental_feature = true; + # custom_setting = "value"; + }; + }; + + # Optional: Open firewall ports + networking.firewall = { + allowedTCPPorts = [ 8080 ]; + # If running a DERP server: + # allowedUDPPorts = [ 3478 ]; + }; + + # Optional: Use with nginx reverse proxy for TLS termination + # services.nginx = { + # enable = true; + # virtualHosts."headscale.example.com" = { + # enableACME = true; + # forceSSL = true; + # locations."/" = { + # proxyPass = "http://127.0.0.1:8080"; + # proxyWebsockets = true; + # }; + # }; + # }; +} diff --git a/nix/module.nix b/nix/module.nix new file mode 100644 index 00000000..a75398fb --- /dev/null +++ b/nix/module.nix @@ -0,0 +1,727 @@ +{ config +, lib +, pkgs +, ... 
+}:
+let
+  cfg = config.services.headscale;
+
+  dataDir = "/var/lib/headscale";
+  runDir = "/run/headscale";
+
+  cliConfig = {
+    # Turn off update checks since the origin of our package
+    # is nixpkgs and not GitHub.
+    disable_check_updates = true;
+
+    unix_socket = "${runDir}/headscale.sock";
+  };
+
+  settingsFormat = pkgs.formats.yaml { };
+  configFile = settingsFormat.generate "headscale.yaml" cfg.settings;
+  cliConfigFile = settingsFormat.generate "headscale.yaml" cliConfig;
+
+  assertRemovedOption = option: message: {
+    assertion = !lib.hasAttrByPath option cfg;
+    message =
+      "The option `services.headscale.${lib.options.showOption option}` was removed. " + message;
+  };
+in
+{
+  # Disable the upstream NixOS module to prevent conflicts
+  disabledModules = [ "services/networking/headscale.nix" ];
+
+  options = {
+    services.headscale = {
+      enable = lib.mkEnableOption "headscale, Open Source coordination server for Tailscale";
+
+      package = lib.mkPackageOption pkgs "headscale" { };
+
+      user = lib.mkOption {
+        default = "headscale";
+        type = lib.types.str;
+        description = ''
+          User account under which headscale runs.
+
+          ::: {.note}
+          If left as the default value this user will automatically be created
+          on system activation, otherwise you are responsible for
+          ensuring the user exists before the headscale service starts.
+          :::
+        '';
+      };
+
+      group = lib.mkOption {
+        default = "headscale";
+        type = lib.types.str;
+        description = ''
+          Group under which headscale runs.
+
+          ::: {.note}
+          If left as the default value this group will automatically be created
+          on system activation, otherwise you are responsible for
+          ensuring the group exists before the headscale service starts.
+          :::
+        '';
+      };
+
+      address = lib.mkOption {
+        type = lib.types.str;
+        default = "127.0.0.1";
+        description = ''
+          Listening address of headscale.
+        '';
+        example = "0.0.0.0";
+      };
+
+      port = lib.mkOption {
+        type = lib.types.port;
+        default = 8080;
+        description = ''
+          Listening port of headscale.
+        '';
+        example = 443;
+      };
+
+      settings = lib.mkOption {
+        description = ''
+          Overrides to {file}`config.yaml` as a Nix attribute set.
+          Check the [example config](https://github.com/juanfont/headscale/blob/main/config-example.yaml)
+          for possible options.
+        '';
+        type = lib.types.submodule {
+          freeformType = settingsFormat.type;
+
+          options = {
+            server_url = lib.mkOption {
+              type = lib.types.str;
+              default = "http://127.0.0.1:8080";
+              description = ''
+                The URL clients will connect to.
+              '';
+              example = "https://myheadscale.example.com:443";
+            };
+
+            noise.private_key_path = lib.mkOption {
+              type = lib.types.path;
+              default = "${dataDir}/noise_private.key";
+              description = ''
+                Path to noise private key file, generated automatically if it does not exist.
+              '';
+            };
+
+            prefixes =
+              let
+                prefDesc = ''
+                  Each prefix consists of either an IPv4 or IPv6 address,
+                  and the associated prefix length, delimited by a slash.
+                  It must be within IP ranges supported by the Tailscale
+                  client - i.e., subnets of 100.64.0.0/10 and fd7a:115c:a1e0::/48.
+ ''; + in + { + v4 = lib.mkOption { + type = lib.types.str; + default = "100.64.0.0/10"; + description = prefDesc; + }; + + v6 = lib.mkOption { + type = lib.types.str; + default = "fd7a:115c:a1e0::/48"; + description = prefDesc; + }; + + allocation = lib.mkOption { + type = lib.types.enum [ + "sequential" + "random" + ]; + example = "random"; + default = "sequential"; + description = '' + Strategy used for allocation of IPs to nodes, available options: + - sequential (default): assigns the next free IP from the previous given IP. + - random: assigns the next free IP from a pseudo-random IP generator (crypto/rand). + ''; + }; + }; + + derp = { + urls = lib.mkOption { + type = lib.types.listOf lib.types.str; + default = [ "https://controlplane.tailscale.com/derpmap/default" ]; + description = '' + List of urls containing DERP maps. + See [How Tailscale works](https://tailscale.com/blog/how-tailscale-works/) for more information on DERP maps. + ''; + }; + + paths = lib.mkOption { + type = lib.types.listOf lib.types.path; + default = [ ]; + description = '' + List of file paths containing DERP maps. + See [How Tailscale works](https://tailscale.com/blog/how-tailscale-works/) for more information on DERP maps. + ''; + }; + + auto_update_enabled = lib.mkOption { + type = lib.types.bool; + default = true; + description = '' + Whether to automatically update DERP maps on a set frequency. + ''; + example = false; + }; + + update_frequency = lib.mkOption { + type = lib.types.str; + default = "24h"; + description = '' + Frequency to update DERP maps. + ''; + example = "5m"; + }; + + server.private_key_path = lib.mkOption { + type = lib.types.path; + default = "${dataDir}/derp_server_private.key"; + description = '' + Path to derp private key file, generated automatically if it does not exist. + ''; + }; + }; + + ephemeral_node_inactivity_timeout = lib.mkOption { + type = lib.types.str; + default = "30m"; + description = '' + Time before an inactive ephemeral node is deleted. + ''; + example = "5m"; + }; + + database = { + type = lib.mkOption { + type = lib.types.enum [ + "sqlite" + "sqlite3" + "postgres" + ]; + example = "postgres"; + default = "sqlite"; + description = '' + Database engine to use. + Please note that using Postgres is highly discouraged as it is only supported for legacy reasons. + All new development, testing and optimisations are done with SQLite in mind. + ''; + }; + + sqlite = { + path = lib.mkOption { + type = lib.types.nullOr lib.types.str; + default = "${dataDir}/db.sqlite"; + description = "Path to the sqlite3 database file."; + }; + + write_ahead_log = lib.mkOption { + type = lib.types.bool; + default = true; + description = '' + Enable WAL mode for SQLite. This is recommended for production environments. 
+ <https://www.sqlite.org/wal.html> + ''; + example = true; + }; + }; + + postgres = { + host = lib.mkOption { + type = lib.types.nullOr lib.types.str; + default = null; + example = "127.0.0.1"; + description = "Database host address."; + }; + + port = lib.mkOption { + type = lib.types.nullOr lib.types.port; + default = null; + example = 3306; + description = "Database host port."; + }; + + name = lib.mkOption { + type = lib.types.nullOr lib.types.str; + default = null; + example = "headscale"; + description = "Database name."; + }; + + user = lib.mkOption { + type = lib.types.nullOr lib.types.str; + default = null; + example = "headscale"; + description = "Database user."; + }; + + password_file = lib.mkOption { + type = lib.types.nullOr lib.types.path; + default = null; + example = "/run/keys/headscale-dbpassword"; + description = '' + A file containing the password corresponding to + {option}`database.user`. + ''; + }; + }; + }; + + log = { + level = lib.mkOption { + type = lib.types.str; + default = "info"; + description = '' + headscale log level. + ''; + example = "debug"; + }; + + format = lib.mkOption { + type = lib.types.str; + default = "text"; + description = '' + headscale log format. + ''; + example = "json"; + }; + }; + + dns = { + magic_dns = lib.mkOption { + type = lib.types.bool; + default = true; + description = '' + Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/). + ''; + example = false; + }; + + base_domain = lib.mkOption { + type = lib.types.str; + default = ""; + description = '' + Defines the base domain to create the hostnames for MagicDNS. + This domain must be different from the {option}`server_url` + domain. + {option}`base_domain` must be a FQDN, without the trailing dot. + The FQDN of the hosts will be `hostname.base_domain` (e.g. + `myhost.tailnet.example.com`). + ''; + example = "tailnet.example.com"; + }; + + override_local_dns = lib.mkOption { + type = lib.types.bool; + default = true; + description = '' + Whether to use the local DNS settings of a node or override + the local DNS settings and force the use of Headscale's DNS + configuration. + ''; + example = false; + }; + + nameservers = { + global = lib.mkOption { + type = lib.types.listOf lib.types.str; + default = [ ]; + description = '' + List of nameservers to pass to Tailscale clients. + Required when {option}`override_local_dns` is true. + ''; + }; + }; + + search_domains = lib.mkOption { + type = lib.types.listOf lib.types.str; + default = [ ]; + description = '' + Search domains to inject to Tailscale clients. + ''; + example = [ "mydomain.internal" ]; + }; + }; + + oidc = { + issuer = lib.mkOption { + type = lib.types.str; + default = ""; + description = '' + URL to OpenID issuer. + ''; + example = "https://openid.example.com"; + }; + + client_id = lib.mkOption { + type = lib.types.str; + default = ""; + description = '' + OpenID Connect client ID. + ''; + }; + + client_secret_path = lib.mkOption { + type = lib.types.nullOr lib.types.str; + default = null; + description = '' + Path to OpenID Connect client secret file. Expands environment variables in format ''${VAR}. + ''; + }; + + scope = lib.mkOption { + type = lib.types.listOf lib.types.str; + default = [ + "openid" + "profile" + "email" + ]; + description = '' + Scopes used in the OIDC flow. + ''; + }; + + extra_params = lib.mkOption { + type = lib.types.attrsOf lib.types.str; + default = { }; + description = '' + Custom query parameters to send with the Authorize Endpoint request. 
+              '';
+              example = {
+                domain_hint = "example.com";
+              };
+            };
+
+            allowed_domains = lib.mkOption {
+              type = lib.types.listOf lib.types.str;
+              default = [ ];
+              description = ''
+                Allowed principal domains. If an authenticated user's domain
+                is not in this list, the authentication request will be rejected.
+              '';
+              example = [ "example.com" ];
+            };
+
+            allowed_users = lib.mkOption {
+              type = lib.types.listOf lib.types.str;
+              default = [ ];
+              description = ''
+                Users allowed to authenticate even if their domain is not in allowed_domains.
+              '';
+              example = [ "alice@example.com" ];
+            };
+
+            pkce = {
+              enabled = lib.mkOption {
+                type = lib.types.bool;
+                default = false;
+                description = ''
+                  Enable or disable PKCE (Proof Key for Code Exchange) support.
+                  PKCE adds an additional layer of security to the OAuth 2.0
+                  authorization code flow by preventing authorization code
+                  interception attacks.
+                  See https://datatracker.ietf.org/doc/html/rfc7636
+                '';
+                example = true;
+              };
+
+              method = lib.mkOption {
+                type = lib.types.str;
+                default = "S256";
+                description = ''
+                  PKCE method to use:
+                  - plain: Use plain code verifier
+                  - S256: Use SHA256 hashed code verifier (default, recommended)
+                '';
+              };
+            };
+          };
+
+          tls_letsencrypt_hostname = lib.mkOption {
+            type = lib.types.nullOr lib.types.str;
+            default = "";
+            description = ''
+              Domain name to request a TLS certificate for.
+            '';
+          };
+
+          tls_letsencrypt_challenge_type = lib.mkOption {
+            type = lib.types.enum [
+              "TLS-ALPN-01"
+              "HTTP-01"
+            ];
+            default = "HTTP-01";
+            description = ''
+              Type of ACME challenge to use, currently supported types:
+              `HTTP-01` or `TLS-ALPN-01`.
+            '';
+          };
+
+          tls_letsencrypt_listen = lib.mkOption {
+            type = lib.types.nullOr lib.types.str;
+            default = ":http";
+            description = ''
+              When HTTP-01 challenge is chosen, letsencrypt must set up a
+              verification endpoint, and it will be listening on:
+              `:http = port 80`.
+            '';
+          };
+
+          tls_cert_path = lib.mkOption {
+            type = lib.types.nullOr lib.types.path;
+            default = null;
+            description = ''
+              Path to already created certificate.
+            '';
+          };
+
+          tls_key_path = lib.mkOption {
+            type = lib.types.nullOr lib.types.path;
+            default = null;
+            description = ''
+              Path to key for already created certificate.
+            '';
+          };
+
+          policy = {
+            mode = lib.mkOption {
+              type = lib.types.enum [
+                "file"
+                "database"
+              ];
+              default = "file";
+              description = ''
+                The mode can be "file" or "database"; it defines
+                where the ACL policies are stored and read from.
+              '';
+            };
+
+            path = lib.mkOption {
+              type = lib.types.nullOr lib.types.path;
+              default = null;
+              description = ''
+                If the mode is set to "file", the path to a
+                HuJSON file containing ACL policies.
+ ''; + }; + }; + }; + }; + }; + }; + }; + + imports = with lib; [ + (mkRenamedOptionModule + [ "services" "headscale" "derp" "autoUpdate" ] + [ "services" "headscale" "settings" "derp" "auto_update_enabled" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "derp" "auto_update_enable" ] + [ "services" "headscale" "settings" "derp" "auto_update_enabled" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "derp" "paths" ] + [ "services" "headscale" "settings" "derp" "paths" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "derp" "updateFrequency" ] + [ "services" "headscale" "settings" "derp" "update_frequency" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "derp" "urls" ] + [ "services" "headscale" "settings" "derp" "urls" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "ephemeralNodeInactivityTimeout" ] + [ "services" "headscale" "settings" "ephemeral_node_inactivity_timeout" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "logLevel" ] + [ "services" "headscale" "settings" "log" "level" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "openIdConnect" "clientId" ] + [ "services" "headscale" "settings" "oidc" "client_id" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "openIdConnect" "clientSecretFile" ] + [ "services" "headscale" "settings" "oidc" "client_secret_path" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "openIdConnect" "issuer" ] + [ "services" "headscale" "settings" "oidc" "issuer" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "serverUrl" ] + [ "services" "headscale" "settings" "server_url" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "tls" "certFile" ] + [ "services" "headscale" "settings" "tls_cert_path" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "tls" "keyFile" ] + [ "services" "headscale" "settings" "tls_key_path" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "tls" "letsencrypt" "challengeType" ] + [ "services" "headscale" "settings" "tls_letsencrypt_challenge_type" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "tls" "letsencrypt" "hostname" ] + [ "services" "headscale" "settings" "tls_letsencrypt_hostname" ] + ) + (mkRenamedOptionModule + [ "services" "headscale" "tls" "letsencrypt" "httpListen" ] + [ "services" "headscale" "settings" "tls_letsencrypt_listen" ] + ) + + (mkRemovedOptionModule [ "services" "headscale" "openIdConnect" "domainMap" ] '' + Headscale no longer uses domain_map. If you're using an old version of headscale you can still set this option via services.headscale.settings.oidc.domain_map. 
+ '') + ]; + + config = lib.mkIf cfg.enable { + assertions = [ + { + assertion = with cfg.settings; dns.magic_dns -> dns.base_domain != ""; + message = "dns.base_domain must be set when using MagicDNS"; + } + { + assertion = with cfg.settings; dns.override_local_dns -> (dns.nameservers.global != [ ]); + message = "dns.nameservers.global must be set when dns.override_local_dns is true"; + } + (assertRemovedOption [ "settings" "acl_policy_path" ] "Use `policy.path` instead.") + (assertRemovedOption [ "settings" "db_host" ] "Use `database.postgres.host` instead.") + (assertRemovedOption [ "settings" "db_name" ] "Use `database.postgres.name` instead.") + (assertRemovedOption [ + "settings" + "db_password_file" + ] "Use `database.postgres.password_file` instead.") + (assertRemovedOption [ "settings" "db_path" ] "Use `database.sqlite.path` instead.") + (assertRemovedOption [ "settings" "db_port" ] "Use `database.postgres.port` instead.") + (assertRemovedOption [ "settings" "db_type" ] "Use `database.type` instead.") + (assertRemovedOption [ "settings" "db_user" ] "Use `database.postgres.user` instead.") + (assertRemovedOption [ "settings" "dns_config" ] "Use `dns` instead.") + (assertRemovedOption [ "settings" "dns_config" "domains" ] "Use `dns.search_domains` instead.") + (assertRemovedOption [ + "settings" + "dns_config" + "nameservers" + ] "Use `dns.nameservers.global` instead.") + (assertRemovedOption [ + "settings" + "oidc" + "strip_email_domain" + ] "The strip_email_domain option got removed upstream") + ]; + + services.headscale.settings = lib.mkMerge [ + cliConfig + { + listen_addr = lib.mkDefault "${cfg.address}:${toString cfg.port}"; + + tls_letsencrypt_cache_dir = "${dataDir}/.cache"; + } + ]; + + environment = { + # Headscale CLI needs a minimal config to be able to locate the unix socket + # to talk to the server instance. + etc."headscale/config.yaml".source = cliConfigFile; + + systemPackages = [ cfg.package ]; + }; + + users.groups.headscale = lib.mkIf (cfg.group == "headscale") { }; + + users.users.headscale = lib.mkIf (cfg.user == "headscale") { + description = "headscale user"; + home = dataDir; + group = cfg.group; + isSystemUser = true; + }; + + systemd.services.headscale = { + description = "headscale coordination server for Tailscale"; + wants = [ "network-online.target" ]; + after = [ "network-online.target" ]; + wantedBy = [ "multi-user.target" ]; + + script = '' + ${lib.optionalString (cfg.settings.database.postgres.password_file != null) '' + export HEADSCALE_DATABASE_POSTGRES_PASS="$(head -n1 ${lib.escapeShellArg cfg.settings.database.postgres.password_file})" + ''} + + exec ${lib.getExe cfg.package} serve --config ${configFile} + ''; + + serviceConfig = + let + capabilityBoundingSet = [ "CAP_CHOWN" ] ++ lib.optional (cfg.port < 1024) "CAP_NET_BIND_SERVICE"; + in + { + Restart = "always"; + RestartSec = "5s"; + Type = "simple"; + User = cfg.user; + Group = cfg.group; + + # Hardening options + RuntimeDirectory = "headscale"; + # Allow headscale group access so users can be added and use the CLI. 
+ RuntimeDirectoryMode = "0750"; + + StateDirectory = "headscale"; + StateDirectoryMode = "0750"; + + ProtectSystem = "strict"; + ProtectHome = true; + PrivateTmp = true; + PrivateDevices = true; + ProtectKernelTunables = true; + ProtectControlGroups = true; + RestrictSUIDSGID = true; + PrivateMounts = true; + ProtectKernelModules = true; + ProtectKernelLogs = true; + ProtectHostname = true; + ProtectClock = true; + ProtectProc = "invisible"; + ProcSubset = "pid"; + RestrictNamespaces = true; + RemoveIPC = true; + UMask = "0077"; + + CapabilityBoundingSet = capabilityBoundingSet; + AmbientCapabilities = capabilityBoundingSet; + NoNewPrivileges = true; + LockPersonality = true; + RestrictRealtime = true; + SystemCallFilter = [ + "@system-service" + "~@privileged" + "@chown" + ]; + SystemCallArchitectures = "native"; + RestrictAddressFamilies = "AF_INET AF_INET6 AF_UNIX"; + }; + }; + }; + + meta.maintainers = with lib.maintainers; [ + kradalby + misterio77 + ]; +} diff --git a/nix/tests/headscale.nix b/nix/tests/headscale.nix new file mode 100644 index 00000000..7dc93870 --- /dev/null +++ b/nix/tests/headscale.nix @@ -0,0 +1,102 @@ +{ pkgs, lib, ... }: +let + tls-cert = pkgs.runCommand "selfSignedCerts" { buildInputs = [ pkgs.openssl ]; } '' + openssl req \ + -x509 -newkey rsa:4096 -sha256 -days 365 \ + -nodes -out cert.pem -keyout key.pem \ + -subj '/CN=headscale' -addext "subjectAltName=DNS:headscale" + + mkdir -p $out + cp key.pem cert.pem $out + ''; +in +{ + name = "headscale"; + meta.maintainers = with lib.maintainers; [ + kradalby + misterio77 + ]; + + nodes = + let + headscalePort = 8080; + stunPort = 3478; + peer = { + services.tailscale.enable = true; + security.pki.certificateFiles = [ "${tls-cert}/cert.pem" ]; + }; + in + { + peer1 = peer; + peer2 = peer; + + headscale = { + services = { + headscale = { + enable = true; + port = headscalePort; + settings = { + server_url = "https://headscale"; + ip_prefixes = [ "100.64.0.0/10" ]; + derp.server = { + enabled = true; + region_id = 999; + stun_listen_addr = "0.0.0.0:${toString stunPort}"; + }; + dns = { + base_domain = "tailnet"; + extra_records = [ + { + name = "foo.bar"; + type = "A"; + value = "100.64.0.2"; + } + ]; + override_local_dns = false; + }; + }; + }; + nginx = { + enable = true; + virtualHosts.headscale = { + addSSL = true; + sslCertificate = "${tls-cert}/cert.pem"; + sslCertificateKey = "${tls-cert}/key.pem"; + locations."/" = { + proxyPass = "http://127.0.0.1:${toString headscalePort}"; + proxyWebsockets = true; + }; + }; + }; + }; + networking.firewall = { + allowedTCPPorts = [ + 80 + 443 + ]; + allowedUDPPorts = [ stunPort ]; + }; + environment.systemPackages = [ pkgs.headscale ]; + }; + }; + + testScript = '' + start_all() + headscale.wait_for_unit("headscale") + headscale.wait_for_open_port(443) + + # Create headscale user and preauth-key + headscale.succeed("headscale users create test") + authkey = headscale.succeed("headscale preauthkeys -u 1 create --reusable") + + # Connect peers + up_cmd = f"tailscale up --login-server 'https://headscale' --auth-key {authkey}" + peer1.execute(up_cmd) + peer2.execute(up_cmd) + + # Check that they are reachable from the tailnet + peer1.wait_until_succeeds("tailscale ping peer2") + peer2.wait_until_succeeds("tailscale ping peer1.tailnet") + assert (res := peer1.wait_until_succeeds("${lib.getExe pkgs.dig} +short foo.bar").strip()) == "100.64.0.2", f"Domain {res} did not match 100.64.0.2" + ''; +} diff --git a/proto/headscale/v1/apikey.proto b/proto/headscale/v1/apikey.proto index 
c51ac05f..6ea0d669 100644 --- a/proto/headscale/v1/apikey.proto +++ b/proto/headscale/v1/apikey.proto @@ -16,7 +16,10 @@ message CreateApiKeyRequest { google.protobuf.Timestamp expiration = 1; } message CreateApiKeyResponse { string api_key = 1; } -message ExpireApiKeyRequest { string prefix = 1; } +message ExpireApiKeyRequest { + string prefix = 1; + uint64 id = 2; +} message ExpireApiKeyResponse {} @@ -24,6 +27,9 @@ message ListApiKeysRequest {} message ListApiKeysResponse { repeated ApiKey api_keys = 1; } -message DeleteApiKeyRequest { string prefix = 1; } +message DeleteApiKeyRequest { + string prefix = 1; + uint64 id = 2; +} message DeleteApiKeyResponse {} diff --git a/proto/headscale/v1/headscale.proto b/proto/headscale/v1/headscale.proto index 3b42a3f3..5e556255 100644 --- a/proto/headscale/v1/headscale.proto +++ b/proto/headscale/v1/headscale.proto @@ -55,6 +55,13 @@ service HeadscaleService { }; } + rpc DeletePreAuthKey(DeletePreAuthKeyRequest) + returns (DeletePreAuthKeyResponse) { + option (google.api.http) = { + delete : "/api/v1/preauthkey" + }; + } + rpc ListPreAuthKeys(ListPreAuthKeysRequest) returns (ListPreAuthKeysResponse) { option (google.api.http) = { @@ -123,13 +130,6 @@ service HeadscaleService { }; } - rpc MoveNode(MoveNodeRequest) returns (MoveNodeResponse) { - option (google.api.http) = { - post : "/api/v1/node/{node_id}/user", - body : "*" - }; - } - rpc BackfillNodeIPs(BackfillNodeIPsRequest) returns (BackfillNodeIPsResponse) { option (google.api.http) = { diff --git a/proto/headscale/v1/node.proto b/proto/headscale/v1/node.proto index fb074008..3ce83c4b 100644 --- a/proto/headscale/v1/node.proto +++ b/proto/headscale/v1/node.proto @@ -35,7 +35,7 @@ message Node { RegisterMethod register_method = 13; - reserved 14 to 17; + reserved 14 to 20; // google.protobuf.Timestamp updated_at = 14; // google.protobuf.Timestamp deleted_at = 15; @@ -43,14 +43,16 @@ message Node { // bytes endpoints = 16; // bytes enabled_routes = 17; - repeated string forced_tags = 18; - repeated string invalid_tags = 19; - repeated string valid_tags = 20; + // Deprecated + // repeated string forced_tags = 18; + // repeated string invalid_tags = 19; + // repeated string valid_tags = 20; string given_name = 21; bool online = 22; repeated string approved_routes = 23; repeated string available_routes = 24; repeated string subnet_routes = 25; + repeated string tags = 26; } message RegisterNodeRequest { @@ -58,27 +60,39 @@ message RegisterNodeRequest { string key = 2; } -message RegisterNodeResponse { Node node = 1; } +message RegisterNodeResponse { + Node node = 1; +} -message GetNodeRequest { uint64 node_id = 1; } +message GetNodeRequest { + uint64 node_id = 1; +} -message GetNodeResponse { Node node = 1; } +message GetNodeResponse { + Node node = 1; +} message SetTagsRequest { uint64 node_id = 1; repeated string tags = 2; } -message SetTagsResponse { Node node = 1; } +message SetTagsResponse { + Node node = 1; +} message SetApprovedRoutesRequest { uint64 node_id = 1; repeated string routes = 2; } -message SetApprovedRoutesResponse { Node node = 1; } +message SetApprovedRoutesResponse { + Node node = 1; +} -message DeleteNodeRequest { uint64 node_id = 1; } +message DeleteNodeRequest { + uint64 node_id = 1; +} message DeleteNodeResponse {} @@ -87,25 +101,26 @@ message ExpireNodeRequest { google.protobuf.Timestamp expiry = 2; } -message ExpireNodeResponse { Node node = 1; } +message ExpireNodeResponse { + Node node = 1; +} message RenameNodeRequest { uint64 node_id = 1; string new_name = 2; } 
-message RenameNodeResponse { Node node = 1; } - -message ListNodesRequest { string user = 1; } - -message ListNodesResponse { repeated Node nodes = 1; } - -message MoveNodeRequest { - uint64 node_id = 1; - uint64 user = 2; +message RenameNodeResponse { + Node node = 1; } -message MoveNodeResponse { Node node = 1; } +message ListNodesRequest { + string user = 1; +} + +message ListNodesResponse { + repeated Node nodes = 1; +} message DebugCreateNodeRequest { string user = 1; @@ -114,8 +129,14 @@ message DebugCreateNodeRequest { repeated string routes = 4; } -message DebugCreateNodeResponse { Node node = 1; } +message DebugCreateNodeResponse { + Node node = 1; +} -message BackfillNodeIPsRequest { bool confirmed = 1; } +message BackfillNodeIPsRequest { + bool confirmed = 1; +} -message BackfillNodeIPsResponse { repeated string changes = 1; } +message BackfillNodeIPsResponse { + repeated string changes = 1; +} diff --git a/proto/headscale/v1/preauthkey.proto b/proto/headscale/v1/preauthkey.proto index de75af11..04e88821 100644 --- a/proto/headscale/v1/preauthkey.proto +++ b/proto/headscale/v1/preauthkey.proto @@ -1,10 +1,11 @@ syntax = "proto3"; package headscale.v1; -option go_package = "github.com/juanfont/headscale/gen/go/v1"; import "google/protobuf/timestamp.proto"; import "headscale/v1/user.proto"; +option go_package = "github.com/juanfont/headscale/gen/go/v1"; + message PreAuthKey { User user = 1; uint64 id = 2; @@ -25,15 +26,24 @@ message CreatePreAuthKeyRequest { repeated string acl_tags = 5; } -message CreatePreAuthKeyResponse { PreAuthKey pre_auth_key = 1; } +message CreatePreAuthKeyResponse { + PreAuthKey pre_auth_key = 1; +} message ExpirePreAuthKeyRequest { - uint64 user = 1; - string key = 2; + uint64 id = 1; } message ExpirePreAuthKeyResponse {} -message ListPreAuthKeysRequest { uint64 user = 1; } +message DeletePreAuthKeyRequest { + uint64 id = 1; +} -message ListPreAuthKeysResponse { repeated PreAuthKey pre_auth_keys = 1; } +message DeletePreAuthKeyResponse {} + +message ListPreAuthKeysRequest {} + +message ListPreAuthKeysResponse { + repeated PreAuthKey pre_auth_keys = 1; +} diff --git a/swagger.go b/swagger.go index 306fc1f6..fa764568 100644 --- a/swagger.go +++ b/swagger.go @@ -20,7 +20,7 @@ func SwaggerUI( <html> <head> <link rel="stylesheet" type="text/css" href="https://unpkg.com/swagger-ui-dist@3/swagger-ui.css"> - + <link rel="icon" href="/favicon.ico"> <script src="https://unpkg.com/swagger-ui-dist@3/swagger-ui-standalone-preset.js"></script> <script src="https://unpkg.com/swagger-ui-dist@3/swagger-ui-bundle.js" charset="UTF-8"></script> </head> @@ -57,6 +57,7 @@ func SwaggerUI( writer.Header().Set("Content-Type", "text/plain; charset=utf-8") writer.WriteHeader(http.StatusInternalServerError) + _, err := writer.Write([]byte("Could not render Swagger")) if err != nil { log.Error(). @@ -70,6 +71,7 @@ func SwaggerUI( writer.Header().Set("Content-Type", "text/html; charset=utf-8") writer.WriteHeader(http.StatusOK) + _, err := writer.Write(payload.Bytes()) if err != nil { log.Error(). @@ -85,6 +87,7 @@ func SwaggerAPIv1( ) { writer.Header().Set("Content-Type", "application/json; charset=utf-8") writer.WriteHeader(http.StatusOK) + if _, err := writer.Write(apiV1JSON); err != nil { log.Error(). Caller(). 
diff --git a/tools/capver/main.go b/tools/capver/main.go index cbb5435c..80468c4a 100644 --- a/tools/capver/main.go +++ b/tools/capver/main.go @@ -3,7 +3,9 @@ package main //go:generate go run main.go import ( + "context" "encoding/json" + "errors" "fmt" "go/format" "io" @@ -11,6 +13,7 @@ import ( "net/http" "os" "regexp" + "slices" "sort" "strconv" "strings" @@ -20,57 +23,211 @@ import ( ) const ( - releasesURL = "https://api.github.com/repos/tailscale/tailscale/releases" - rawFileURL = "https://github.com/tailscale/tailscale/raw/refs/tags/%s/tailcfg/tailcfg.go" - outputFile = "../../hscontrol/capver/capver_generated.go" + ghcrTokenURL = "https://ghcr.io/token?service=ghcr.io&scope=repository:tailscale/tailscale:pull" //nolint:gosec + ghcrTagsURL = "https://ghcr.io/v2/tailscale/tailscale/tags/list?n=10000" + rawFileURL = "https://github.com/tailscale/tailscale/raw/refs/tags/%s/tailcfg/tailcfg.go" + outputFile = "../../hscontrol/capver/capver_generated.go" + testFile = "../../hscontrol/capver/capver_test_data.go" + fallbackCapVer = 90 + maxTestCases = 4 + supportedMajorMinorVersions = 10 + filePermissions = 0o600 + semverMatchGroups = 4 + latest3Count = 3 + latest2Count = 2 ) -type Release struct { - Name string `json:"name"` +var errUnexpectedStatusCode = errors.New("unexpected status code") + +// GHCRTokenResponse represents the response from GHCR token endpoint. +type GHCRTokenResponse struct { + Token string `json:"token"` } -func getCapabilityVersions() (map[string]tailcfg.CapabilityVersion, error) { - // Fetch the releases - resp, err := http.Get(releasesURL) +// GHCRTagsResponse represents the response from GHCR tags list endpoint. +type GHCRTagsResponse struct { + Name string `json:"name"` + Tags []string `json:"tags"` +} + +// getGHCRToken fetches an anonymous token from GHCR for accessing public container images. +func getGHCRToken(ctx context.Context) (string, error) { + client := &http.Client{} + + req, err := http.NewRequestWithContext(ctx, http.MethodGet, ghcrTokenURL, nil) if err != nil { - return nil, fmt.Errorf("error fetching releases: %w", err) + return "", fmt.Errorf("error creating token request: %w", err) + } + + resp, err := client.Do(req) + if err != nil { + return "", fmt.Errorf("error fetching GHCR token: %w", err) } defer resp.Body.Close() - body, err := io.ReadAll(resp.Body) - if err != nil { - return nil, fmt.Errorf("error reading response body: %w", err) + if resp.StatusCode != http.StatusOK { + return "", fmt.Errorf("%w: %d", errUnexpectedStatusCode, resp.StatusCode) } - var releases []Release - err = json.Unmarshal(body, &releases) + body, err := io.ReadAll(resp.Body) if err != nil { - return nil, fmt.Errorf("error unmarshalling JSON: %w", err) + return "", fmt.Errorf("error reading token response: %w", err) } + var tokenResp GHCRTokenResponse + + err = json.Unmarshal(body, &tokenResp) + if err != nil { + return "", fmt.Errorf("error parsing token response: %w", err) + } + + return tokenResp.Token, nil +} + +// getGHCRTags fetches all available tags from GHCR for tailscale/tailscale. 
+func getGHCRTags(ctx context.Context) ([]string, error) { + token, err := getGHCRToken(ctx) + if err != nil { + return nil, fmt.Errorf("failed to get GHCR token: %w", err) + } + + client := &http.Client{} + + req, err := http.NewRequestWithContext(ctx, http.MethodGet, ghcrTagsURL, nil) + if err != nil { + return nil, fmt.Errorf("error creating tags request: %w", err) + } + + req.Header.Set("Authorization", "Bearer "+token) + + resp, err := client.Do(req) + if err != nil { + return nil, fmt.Errorf("error fetching tags: %w", err) + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("%w: %d", errUnexpectedStatusCode, resp.StatusCode) + } + + body, err := io.ReadAll(resp.Body) + if err != nil { + return nil, fmt.Errorf("error reading tags response: %w", err) + } + + var tagsResp GHCRTagsResponse + + err = json.Unmarshal(body, &tagsResp) + if err != nil { + return nil, fmt.Errorf("error parsing tags response: %w", err) + } + + return tagsResp.Tags, nil +} + +// semverRegex matches semantic version tags like v1.90.0 or v1.90.1. +var semverRegex = regexp.MustCompile(`^v(\d+)\.(\d+)\.(\d+)$`) + +// parseSemver extracts major, minor, patch from a semver tag. +// Returns -1 for all values if not a valid semver. +func parseSemver(tag string) (int, int, int) { + matches := semverRegex.FindStringSubmatch(tag) + if len(matches) != semverMatchGroups { + return -1, -1, -1 + } + + major, _ := strconv.Atoi(matches[1]) + minor, _ := strconv.Atoi(matches[2]) + patch, _ := strconv.Atoi(matches[3]) + + return major, minor, patch +} + +// getMinorVersionsFromTags processes container tags and returns a map of minor versions +// to the first available patch version for each minor. +// For example: {"v1.90": "v1.90.0", "v1.92": "v1.92.0"}. +func getMinorVersionsFromTags(tags []string) map[string]string { + // Map minor version (e.g., "v1.90") to lowest patch version available + minorToLowestPatch := make(map[string]struct { + patch int + fullVer string + }) + + for _, tag := range tags { + major, minor, patch := parseSemver(tag) + if major < 0 { + continue // Not a semver tag + } + + minorKey := fmt.Sprintf("v%d.%d", major, minor) + + existing, exists := minorToLowestPatch[minorKey] + if !exists || patch < existing.patch { + minorToLowestPatch[minorKey] = struct { + patch int + fullVer string + }{ + patch: patch, + fullVer: tag, + } + } + } + + // Convert to simple map + result := make(map[string]string) + for minorVer, info := range minorToLowestPatch { + result[minorVer] = info.fullVer + } + + return result +} + +// getCapabilityVersions fetches container tags from GHCR, identifies minor versions, +// and fetches the capability version for each from the Tailscale source. 
+func getCapabilityVersions(ctx context.Context) (map[string]tailcfg.CapabilityVersion, error) { + // Fetch container tags from GHCR + tags, err := getGHCRTags(ctx) + if err != nil { + return nil, fmt.Errorf("failed to get container tags: %w", err) + } + + log.Printf("Found %d container tags", len(tags)) + + // Get minor versions with their representative patch versions + minorVersions := getMinorVersionsFromTags(tags) + log.Printf("Found %d minor versions", len(minorVersions)) + // Regular expression to find the CurrentCapabilityVersion line re := regexp.MustCompile(`const CurrentCapabilityVersion CapabilityVersion = (\d+)`) versions := make(map[string]tailcfg.CapabilityVersion) + client := &http.Client{} - for _, release := range releases { - version := strings.TrimSpace(release.Name) - if !strings.HasPrefix(version, "v") { - version = "v" + version + for minorVer, patchVer := range minorVersions { + // Fetch the raw Go file for the patch version + rawURL := fmt.Sprintf(rawFileURL, patchVer) + + req, err := http.NewRequestWithContext(ctx, http.MethodGet, rawURL, nil) //nolint:gosec + if err != nil { + log.Printf("Warning: failed to create request for %s: %v", patchVer, err) + continue } - // Fetch the raw Go file - rawURL := fmt.Sprintf(rawFileURL, version) - resp, err := http.Get(rawURL) + resp, err := client.Do(req) if err != nil { - log.Printf("Error fetching raw file for version %s: %v\n", version, err) + log.Printf("Warning: failed to fetch %s: %v", patchVer, err) continue } defer resp.Body.Close() + if resp.StatusCode != http.StatusOK { + log.Printf("Warning: got status %d for %s", resp.StatusCode, patchVer) + continue + } + body, err := io.ReadAll(resp.Body) if err != nil { - log.Printf("Error reading raw file for version %s: %v\n", version, err) + log.Printf("Warning: failed to read response for %s: %v", patchVer, err) continue } @@ -79,16 +236,32 @@ func getCapabilityVersions() (map[string]tailcfg.CapabilityVersion, error) { if len(matches) > 1 { capabilityVersionStr := matches[1] capabilityVersion, _ := strconv.Atoi(capabilityVersionStr) - versions[version] = tailcfg.CapabilityVersion(capabilityVersion) - } else { - log.Printf("Version: %s, CurrentCapabilityVersion not found\n", version) + versions[minorVer] = tailcfg.CapabilityVersion(capabilityVersion) + log.Printf(" %s (from %s): capVer %d", minorVer, patchVer, capabilityVersion) } } return versions, nil } -func writeCapabilityVersionsToFile(versions map[string]tailcfg.CapabilityVersion) error { +func calculateMinSupportedCapabilityVersion(versions map[string]tailcfg.CapabilityVersion) tailcfg.CapabilityVersion { + // Since we now store minor versions directly, just sort and take the oldest of the latest N + minorVersions := xmaps.Keys(versions) + sort.Strings(minorVersions) + + supportedCount := min(len(minorVersions), supportedMajorMinorVersions) + + if supportedCount == 0 { + return fallbackCapVer + } + + // The minimum supported version is the oldest of the latest 10 + oldestSupportedMinor := minorVersions[len(minorVersions)-supportedCount] + + return versions[oldestSupportedMinor] +} + +func writeCapabilityVersionsToFile(versions map[string]tailcfg.CapabilityVersion, minSupportedCapVer tailcfg.CapabilityVersion) error { // Generate the Go code as a string var content strings.Builder content.WriteString("package capver\n\n") @@ -99,35 +272,48 @@ func writeCapabilityVersionsToFile(versions map[string]tailcfg.CapabilityVersion sortedVersions := xmaps.Keys(versions) sort.Strings(sortedVersions) + for _, version := range 
sortedVersions { fmt.Fprintf(&content, "\t\"%s\": %d,\n", version, versions[version]) } + content.WriteString("}\n") content.WriteString("\n\n") content.WriteString("var capVerToTailscaleVer = map[tailcfg.CapabilityVersion]string{\n") capVarToTailscaleVer := make(map[tailcfg.CapabilityVersion]string) + for _, v := range sortedVersions { - cap := versions[v] + capabilityVersion := versions[v] // If it is already set, skip and continue, - // we only want the first tailscale vsion per - // capability vsion. - if _, ok := capVarToTailscaleVer[cap]; ok { + // we only want the first tailscale version per + // capability version. + if _, ok := capVarToTailscaleVer[capabilityVersion]; ok { continue } - capVarToTailscaleVer[cap] = v + + capVarToTailscaleVer[capabilityVersion] = v } capsSorted := xmaps.Keys(capVarToTailscaleVer) - sort.Slice(capsSorted, func(i, j int) bool { - return capsSorted[i] < capsSorted[j] - }) + slices.Sort(capsSorted) + for _, capVer := range capsSorted { fmt.Fprintf(&content, "\t%d:\t\t\"%s\",\n", capVer, capVarToTailscaleVer[capVer]) } - content.WriteString("}\n") + + content.WriteString("}\n\n") + + // Add the SupportedMajorMinorVersions constant + content.WriteString("// SupportedMajorMinorVersions is the number of major.minor Tailscale versions supported.\n") + fmt.Fprintf(&content, "const SupportedMajorMinorVersions = %d\n\n", supportedMajorMinorVersions) + + // Add the MinSupportedCapabilityVersion constant + content.WriteString("// MinSupportedCapabilityVersion represents the minimum capability version\n") + content.WriteString("// supported by this Headscale instance (latest 10 minor versions)\n") + fmt.Fprintf(&content, "const MinSupportedCapabilityVersion tailcfg.CapabilityVersion = %d\n", minSupportedCapVer) // Format the generated code formatted, err := format.Source([]byte(content.String())) @@ -136,7 +322,7 @@ func writeCapabilityVersionsToFile(versions map[string]tailcfg.CapabilityVersion } // Write to file - err = os.WriteFile(outputFile, formatted, 0o644) + err = os.WriteFile(outputFile, formatted, filePermissions) if err != nil { return fmt.Errorf("error writing file: %w", err) } @@ -144,18 +330,156 @@ func writeCapabilityVersionsToFile(versions map[string]tailcfg.CapabilityVersion return nil } +func writeTestDataFile(versions map[string]tailcfg.CapabilityVersion, minSupportedCapVer tailcfg.CapabilityVersion) error { + // Sort minor versions + minorVersions := xmaps.Keys(versions) + sort.Strings(minorVersions) + + // Take latest N + supportedCount := min(len(minorVersions), supportedMajorMinorVersions) + + latest10 := minorVersions[len(minorVersions)-supportedCount:] + latest3 := minorVersions[len(minorVersions)-min(latest3Count, len(minorVersions)):] + latest2 := minorVersions[len(minorVersions)-min(latest2Count, len(minorVersions)):] + + // Generate test data file content + var content strings.Builder + content.WriteString("package capver\n\n") + content.WriteString("// Generated DO NOT EDIT\n\n") + content.WriteString("import \"tailscale.com/tailcfg\"\n\n") + + // Generate complete test struct for TailscaleLatestMajorMinor + content.WriteString("var tailscaleLatestMajorMinorTests = []struct {\n") + content.WriteString("\tn int\n") + content.WriteString("\tstripV bool\n") + content.WriteString("\texpected []string\n") + content.WriteString("}{\n") + + // Latest 3 with v prefix + content.WriteString("\t{3, false, []string{") + + for i, version := range latest3 { + content.WriteString(fmt.Sprintf("\"%s\"", version)) + + if i < len(latest3)-1 { + 
content.WriteString(", ") + } + } + + content.WriteString("}},\n") + + // Latest 2 without v prefix + content.WriteString("\t{2, true, []string{") + + for i, version := range latest2 { + // Strip v prefix for this test case + verNoV := strings.TrimPrefix(version, "v") + content.WriteString(fmt.Sprintf("\"%s\"", verNoV)) + + if i < len(latest2)-1 { + content.WriteString(", ") + } + } + + content.WriteString("}},\n") + + // Latest N without v prefix (all supported) + content.WriteString(fmt.Sprintf("\t{%d, true, []string{\n", supportedMajorMinorVersions)) + + for _, version := range latest10 { + verNoV := strings.TrimPrefix(version, "v") + content.WriteString(fmt.Sprintf("\t\t\"%s\",\n", verNoV)) + } + + content.WriteString("\t}},\n") + + // Empty case + content.WriteString("\t{0, false, nil},\n") + content.WriteString("}\n\n") + + // Build capVerToTailscaleVer for test data + capVerToTailscaleVer := make(map[tailcfg.CapabilityVersion]string) + sortedVersions := xmaps.Keys(versions) + sort.Strings(sortedVersions) + + for _, v := range sortedVersions { + capabilityVersion := versions[v] + if _, ok := capVerToTailscaleVer[capabilityVersion]; !ok { + capVerToTailscaleVer[capabilityVersion] = v + } + } + + // Generate complete test struct for CapVerMinimumTailscaleVersion + content.WriteString("var capVerMinimumTailscaleVersionTests = []struct {\n") + content.WriteString("\tinput tailcfg.CapabilityVersion\n") + content.WriteString("\texpected string\n") + content.WriteString("}{\n") + + // Add minimum supported version + minVersionString := capVerToTailscaleVer[minSupportedCapVer] + content.WriteString(fmt.Sprintf("\t{%d, \"%s\"},\n", minSupportedCapVer, minVersionString)) + + // Add a few more test cases + capsSorted := xmaps.Keys(capVerToTailscaleVer) + slices.Sort(capsSorted) + + testCount := 0 + for _, capVer := range capsSorted { + if testCount >= maxTestCases { + break + } + + if capVer != minSupportedCapVer { // Don't duplicate the min version test + version := capVerToTailscaleVer[capVer] + content.WriteString(fmt.Sprintf("\t{%d, \"%s\"},\n", capVer, version)) + + testCount++ + } + } + + // Edge cases + content.WriteString("\t{9001, \"\"}, // Test case for a version higher than any in the map\n") + content.WriteString("\t{60, \"\"}, // Test case for a version lower than any in the map\n") + content.WriteString("}\n") + + // Format the generated code + formatted, err := format.Source([]byte(content.String())) + if err != nil { + return fmt.Errorf("error formatting test data Go code: %w", err) + } + + // Write to file + err = os.WriteFile(testFile, formatted, filePermissions) + if err != nil { + return fmt.Errorf("error writing test data file: %w", err) + } + + return nil +} + func main() { - versions, err := getCapabilityVersions() + ctx := context.Background() + + versions, err := getCapabilityVersions(ctx) if err != nil { log.Println("Error:", err) return } - err = writeCapabilityVersionsToFile(versions) + // Calculate the minimum supported capability version + minSupportedCapVer := calculateMinSupportedCapabilityVersion(versions) + + err = writeCapabilityVersionsToFile(versions, minSupportedCapVer) if err != nil { log.Println("Error writing to file:", err) return } + err = writeTestDataFile(versions, minSupportedCapVer) + if err != nil { + log.Println("Error writing test data file:", err) + return + } + log.Println("Capability versions written to", outputFile) }