Merge branch 'refs/heads/develop' into feature/postgresql

Michael Mayer 2026-01-21 17:08:57 +01:00
commit f6b6e3f3a9
61 changed files with 9329 additions and 576 deletions


@@ -32,9 +32,12 @@ This file tells automated coding agents (and humans) where to find the single so
- Auto-generated configuration and command references live under `specs/generated/`. Agents MUST NOT read, analyse, or modify anything in this directory; refer humans to `specs/generated/README.md` if regeneration is required.
- Regenerate `NOTICE` files with `make notice` when dependencies change (e.g., updates to `go.mod`, `go.sum`, `package-lock.json`, or other lockfiles). Do not edit `NOTICE` or `frontend/NOTICE` manually.
- When writing CLI examples or scripts, place option flags before positional arguments unless the command requires a different order.
- Use RFC 3339 UTC timestamps in request and response examples, and valid ID, UID and UUID examples in docs and tests.
> Document headings must use **Title Case** (in APA or AP style) across Markdown files to keep generated navigation and changelogs consistent. Always spell the product name as `PhotoPrism`; this proper noun is an exception to generic naming rules.
> Refresh the `**Last Updated:**` date at the top of documents whenever you make changes to their contents, using the format `January 20, 2026` (without time); leave it as-is for simple formatting or whitespace-only edits.
## Safety & Data
- If `git status` shows unexpected changes, assume a human might be editing; if you think you caused them, ask for permission before using reset commands like `git checkout` or `git reset`.
@@ -162,8 +165,6 @@ Note: Across our public documentation, official images, and in production, the c
- JS/Vue: use the lint/format scripts in `frontend/package.json` (ESLint + Prettier)
- All added code and tests **must** be formatted according to our standards.
> Remember to update the `**Last Updated:**` line at the top whenever you edit these guidelines or other files containing a timestamp.
## Tests
- From within the Development Environment:
@@ -183,7 +184,7 @@ Note: Across our public documentation, official images, and in production, the c
Use `playwright__browser_navigate` to open `/library/login`, sign in, and then call `playwright__browser_take_screenshot` to capture the page state.
- **Viewport Defaults** — Desktop sessions open with a `1280×900` viewport by default.
Use `playwright__browser_resize` if the viewport is not preconfigured or you need to adjust it mid-run.
- **Mobile Workflows** — When testing responsive layouts, use the `playwright_mobile` server (for example, `playwright_mobile__browser_navigate`).
It launches with a `375×667` viewport, matching a typical smartphone display, so you can capture mobile layouts without manual resizing.
- **Authentication** — Default admin credentials are `admin` / `photoprism`:
- If login fails, check your active Compose file or container environment for `PHOTOPRISM_ADMIN_USER` and `PHOTOPRISM_ADMIN_PASSWORD`.


@@ -1,5 +1,5 @@
# Ubuntu 25.10 (Questing Quokka)
FROM photoprism/develop:251211-questing
FROM photoprism/develop:251223-questing
# Harden npm usage by default (applies to npm ci / install in dev container)
ENV NPM_CONFIG_IGNORE_SCRIPTS=true

NOTICE

@@ -9,7 +9,7 @@ The following 3rd-party software packages may be used by or distributed with
PhotoPrism. Any information relevant to third-party vendors listed below are
collected using common, reasonable means.
Date generated: 2025-12-12
Date generated: 2026-01-07
================================================================================
@@ -1122,8 +1122,8 @@ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------
Package: github.com/gabriel-vasile/mimetype
Version: v1.4.11
License: MIT (https://github.com/gabriel-vasile/mimetype/blob/v1.4.11/LICENSE)
Version: v1.4.12
License: MIT (https://github.com/gabriel-vasile/mimetype/blob/v1.4.12/LICENSE)
MIT License
@@ -1234,8 +1234,8 @@ THE SOFTWARE.
--------------------------------------------------------------------------------
Package: github.com/go-co-op/gocron/v2
Version: v2.18.2
License: MIT (https://github.com/go-co-op/gocron/blob/v2.18.2/LICENSE)
Version: v2.19.0
License: MIT (https://github.com/go-co-op/gocron/blob/v2.19.0/LICENSE)
MIT License
@@ -2471,8 +2471,8 @@ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLI
--------------------------------------------------------------------------------
Package: github.com/golang/geo
Version: v0.0.0-20251209161508-25c597310d4b
License: Apache-2.0 (https://github.com/golang/geo/blob/25c597310d4b/LICENSE)
Version: v0.0.0-20251223115337-4c285675e7fb
License: Apache-2.0 (https://github.com/golang/geo/blob/4c285675e7fb/LICENSE)
Apache License
@@ -5353,8 +5353,8 @@ License: Apache-2.0 (https://github.com/prometheus/client_model/blob/v0.6.2/LICE
--------------------------------------------------------------------------------
Package: github.com/prometheus/common
Version: v0.67.4
License: Apache-2.0 (https://github.com/prometheus/common/blob/v0.67.4/LICENSE)
Version: v0.67.5
License: Apache-2.0 (https://github.com/prometheus/common/blob/v0.67.5/LICENSE)
Apache License
Version 2.0, January 2004
@@ -6376,8 +6376,8 @@ CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------
Package: github.com/yalue/onnxruntime_go
Version: v1.24.0
License: MIT (https://github.com/yalue/onnxruntime_go/blob/v1.24.0/LICENSE)
Version: v1.25.0
License: MIT (https://github.com/yalue/onnxruntime_go/blob/v1.25.0/LICENSE)
Copyright (c) 2023 Nathan Otterness
@@ -8324,8 +8324,8 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------------------
Package: golang.org/x/oauth2
Version: v0.33.0
License: BSD-3-Clause (https://cs.opensource.google/go/x/oauth2/+/v0.33.0:LICENSE)
Version: v0.34.0
License: BSD-3-Clause (https://cs.opensource.google/go/x/oauth2/+/v0.34.0:LICENSE)
Copyright 2009 The Go Authors.


@@ -0,0 +1,417 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: ci@photoprism.app\n"
"POT-Creation-Date: 2025-10-17 17:32+0000\n"
"PO-Revision-Date: 2026-01-15 09:02+0000\n"
"Last-Translator: Janis Eglitis <alfabeta.lv@gmail.com>\n"
"Language-Team: none\n"
"Language: lv\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=3; plural=(n % 10 == 0 || n % 100 >= 11 && n % 100 <= "
"19) ? 0 : ((n % 10 == 1 && n % 100 != 11) ? 1 : 2);\n"
"X-Generator: Weblate 5.15.1\n"
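The `Plural-Forms` header above selects among three Latvian plural classes. As an illustrative sketch (not code from this repository), the same C-style expression can be transcribed to JavaScript to see which `msgstr` slot a given count maps to:

```javascript
// Latvian plural rule from the Plural-Forms header:
// nplurals=3; plural=(n % 10 == 0 || n % 100 >= 11 && n % 100 <= 19) ? 0
//                  : ((n % 10 == 1 && n % 100 != 11) ? 1 : 2);
function lvPluralIndex(n) {
  if (n % 10 === 0 || (n % 100 >= 11 && n % 100 <= 19)) return 0; // zero/teens form
  if (n % 10 === 1 && n % 100 !== 11) return 1; // singular form (1, 21, 31, ...)
  return 2; // general plural form
}

console.log(lvPluralIndex(0)); // 0
console.log(lvPluralIndex(11)); // 0
console.log(lvPluralIndex(1)); // 1
console.log(lvPluralIndex(21)); // 1
console.log(lvPluralIndex(2)); // 2
```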
#: messages.go:104
msgid "Something went wrong, try again"
msgstr "Kaut kas nogāja greizi, mēģiniet vēlreiz"
#: messages.go:105
msgid "Unable to do that"
msgstr "To nevar izdarīt"
#: messages.go:106
msgid "Changes could not be saved"
msgstr "Izmaiņas nevarēja saglabāt"
#: messages.go:107
msgid "Could not be deleted"
msgstr "Nevarēja izdzēst"
#: messages.go:108
#, c-format
msgid "%s already exists"
msgstr "%s jau pastāv"
#: messages.go:109
msgid "Not found"
msgstr "Nav atrasts"
#: messages.go:110
msgid "File not found"
msgstr "Fails nav atrasts"
#: messages.go:111
msgid "File too large"
msgstr "Fails ir pārāk liels"
#: messages.go:112
msgid "Unsupported"
msgstr "Neatbalstīts"
#: messages.go:113
msgid "Unsupported type"
msgstr "Neatbalstīts tips"
#: messages.go:114
msgid "Unsupported format"
msgstr "Neatbalstīts formāts"
#: messages.go:115
msgid "Originals folder is empty"
msgstr "Oriģinālu mape ir tukša"
#: messages.go:116
msgid "Selection not found"
msgstr "Izvēle nav atrasta"
#: messages.go:117
msgid "Entity not found"
msgstr "Vienība nav atrasta"
#: messages.go:118
msgid "Account not found"
msgstr "Konts nav atrasts"
#: messages.go:119
msgid "User not found"
msgstr "Lietotājs nav atrasts"
#: messages.go:120
msgid "Label not found"
msgstr "Birka nav atrasta"
#: messages.go:121
msgid "Album not found"
msgstr "Albums nav atrasts"
#: messages.go:122
msgid "Subject not found"
msgstr "Tēma nav atrasta"
#: messages.go:123
msgid "Person not found"
msgstr "Persona nav atrasta"
#: messages.go:124
msgid "Face not found"
msgstr "Seja nav atrasta"
#: messages.go:125
msgid "Not available in public mode"
msgstr "Nav pieejams publiskajā režīmā"
#: messages.go:126
msgid "Not available in read-only mode"
msgstr "Nav pieejams lasīšanas režīmā"
#: messages.go:127
msgid "Please log in to your account"
msgstr "Lūdzu, piesakieties savā kontā"
#: messages.go:128
msgid "Permission denied"
msgstr "Atļauja liegta"
#: messages.go:129
msgid "Payment required"
msgstr "Nepieciešams maksājums"
#: messages.go:130
msgid "Upload might be offensive"
msgstr "Augšupielāde varētu būt aizskaroša"
#: messages.go:131
msgid "Upload failed"
msgstr "Augšupielāde neizdevās"
#: messages.go:132
msgid "No items selected"
msgstr "Nav izvēlēts neviens ieraksts"
#: messages.go:133
msgid "Failed creating file, please check permissions"
msgstr "Neizdevās izveidot failu, lūdzu, pārbaudiet atļaujas"
#: messages.go:134
msgid "Failed creating folder, please check permissions"
msgstr "Neizdevās izveidot mapi, lūdzu, pārbaudiet atļaujas"
#: messages.go:135
msgid "Could not connect, please try again"
msgstr "Nevarēja izveidot savienojumu, lūdzu, mēģiniet vēlreiz"
#: messages.go:136
msgid "Enter verification code"
msgstr "Ievadiet verifikācijas kodu"
#: messages.go:137
msgid "Invalid verification code, please try again"
msgstr "Nederīgs verifikācijas kods, lūdzu, mēģiniet vēlreiz"
#: messages.go:138
msgid "Invalid password, please try again"
msgstr "Nederīga parole, lūdzu, mēģiniet vēlreiz"
#: messages.go:139
msgid "Feature disabled"
msgstr "Funkcionalitāte ir izslēgta"
#: messages.go:140
msgid "No labels selected"
msgstr "Nav atlasītas nevienas birkas"
#: messages.go:141
msgid "No albums selected"
msgstr "Nav atlasīts neviens albums"
#: messages.go:142
msgid "No files available for download"
msgstr "Nav lejupielādei pieejamu failu"
#: messages.go:143
msgid "Failed to create zip file"
msgstr "Neizdevās izveidot zip failu"
#: messages.go:144
msgid "Invalid credentials"
msgstr "Nederīgs lietotājvārds vai parole"
#: messages.go:145
msgid "Invalid link"
msgstr "Nederīga saite"
#: messages.go:146
msgid "Invalid name"
msgstr "Nederīgs nosaukums"
#: messages.go:147
msgid "Busy, please try again later"
msgstr "Aizņemts, lūdzu, mēģiniet vēlreiz vēlāk"
#: messages.go:148
#, c-format
msgid "The wakeup interval is %s, but must be 1h or less"
msgstr "Pamošanās intervāls ir %s, bet tam jābūt 1 h vai mazākam"
#: messages.go:149
msgid "Your account could not be connected"
msgstr "Jūsu kontu nevarēja savienot"
#: messages.go:150
msgid "Too many requests"
msgstr "Pārāk daudz pieprasījumu"
#: messages.go:151
msgid "Insufficient storage"
msgstr "Nepietiek brīvas vietas"
#: messages.go:152
msgid "Quota exceeded"
msgstr "Kvota pārsniegta"
#: messages.go:155
msgid "Changes successfully saved"
msgstr "Izmaiņas veiksmīgi saglabātas"
#: messages.go:156
msgid "Album created"
msgstr "Albums izveidots"
#: messages.go:157
msgid "Album saved"
msgstr "Albums saglabāts"
#: messages.go:158
#, c-format
msgid "Album %s deleted"
msgstr "Albums %s ir dzēsts"
#: messages.go:159
msgid "Album contents cloned"
msgstr "Albuma saturs ir dublēts"
#: messages.go:160
msgid "File removed from stack"
msgstr "Fails izņemts no saraksta"
#: messages.go:161
msgid "File deleted"
msgstr "Fails izdzēsts"
#: messages.go:162
#, c-format
msgid "Selection added to %s"
msgstr "Atlasījums pievienots %s"
#: messages.go:163
#, c-format
msgid "One entry added to %s"
msgstr "Viens ieraksts pievienots %s"
#: messages.go:164
#, c-format
msgid "%d entries added to %s"
msgstr "%d ieraksti pievienoti %s"
#: messages.go:165
#, c-format
msgid "One entry removed from %s"
msgstr "Viens ieraksts noņemts no %s"
#: messages.go:166
#, c-format
msgid "%d entries removed from %s"
msgstr "%d ieraksti noņemti no %s"
#: messages.go:167
msgid "Account created"
msgstr "Konts izveidots"
#: messages.go:168
msgid "Account saved"
msgstr "Konts saglabāts"
#: messages.go:169
msgid "Account deleted"
msgstr "Konts ir dzēsts"
#: messages.go:170
msgid "Settings saved"
msgstr "Iestatījumi saglabāti"
#: messages.go:171
msgid "Password changed"
msgstr "Parole nomainīta"
#: messages.go:172
#, c-format
msgid "Import completed in %d s"
msgstr "Importēšana pabeigta %d s laikā"
#: messages.go:173
msgid "Import canceled"
msgstr "Importēšana atcelta"
#: messages.go:174
#, c-format
msgid "Indexing completed in %d s"
msgstr "Indeksēšana pabeigta %d s laikā"
#: messages.go:175
msgid "Indexing originals..."
msgstr "Notiek oriģinālu indeksēšana..."
#: messages.go:176
#, c-format
msgid "Indexing files in %s"
msgstr "Notiek failu indeksēšana %s"
#: messages.go:177
msgid "Indexing canceled"
msgstr "Indeksēšana atcelta"
#: messages.go:178
#, c-format
msgid "Removed %d files and %d photos"
msgstr "Noņemti %d faili un %d fotoattēli"
#: messages.go:179
#, c-format
msgid "Moving files from %s"
msgstr "Failu pārvietošana no %s"
#: messages.go:180
#, c-format
msgid "Copying files from %s"
msgstr "Failu kopēšana no %s"
#: messages.go:181
msgid "Labels deleted"
msgstr "Birkas ir dzēstas"
#: messages.go:182
msgid "Label saved"
msgstr "Birka saglabāta"
#: messages.go:183
msgid "Subject saved"
msgstr "Tēma saglabāta"
#: messages.go:184
msgid "Subject deleted"
msgstr "Tēma dzēsta"
#: messages.go:185
msgid "Person saved"
msgstr "Persona saglabāta"
#: messages.go:186
msgid "Person deleted"
msgstr "Persona dzēsta"
#: messages.go:187
msgid "File uploaded"
msgstr "Fails augšupielādēts"
#: messages.go:188
#, c-format
msgid "%d files uploaded in %d s"
msgstr "%d faili augšupielādēti %d sekundēs"
#: messages.go:189
msgid "Processing upload..."
msgstr "Notiek augšupielādes apstrāde..."
#: messages.go:190
msgid "Upload has been processed"
msgstr "Augšupielāde ir apstrādāta"
#: messages.go:191
msgid "Selection approved"
msgstr "Atlase apstiprināta"
#: messages.go:192
msgid "Selection archived"
msgstr "Atlase arhivēta"
#: messages.go:193
msgid "Selection restored"
msgstr "Atlase atjaunota"
#: messages.go:194
msgid "Selection marked as private"
msgstr "Izvēle atzīmēta kā privāta"
#: messages.go:195
msgid "Albums deleted"
msgstr "Albumi ir dzēsti"
#: messages.go:196
#, c-format
msgid "Zip created in %d s"
msgstr "ZIP fails izveidots %d sekundēs"
#: messages.go:197
msgid "Permanently deleted"
msgstr "Neatgriezeniski dzēsts"
#: messages.go:198
#, c-format
msgid "%s has been restored"
msgstr "%s ir atjaunots"
#: messages.go:199
msgid "Successfully verified"
msgstr "Veiksmīgi verificēts"
#: messages.go:200
msgid "Successfully activated"
msgstr "Veiksmīgi aktivizēts"

File diff suppressed because it is too large.


@@ -43,14 +43,14 @@
"@eslint/js": "^9.33.0",
"@mdi/font": "^7.4.47",
"@testing-library/jest-dom": "^6.9.1",
"@testing-library/react": "^16.3.0",
"@testing-library/react": "^16.3.1",
"@vitejs/plugin-react": "^5.1.2",
"@vitejs/plugin-vue": "^6.0.3",
"@vitest/browser": "^3.2.4",
"@vitest/coverage-v8": "^3.2.4",
"@vitest/ui": "^3.2.4",
"@vue/compiler-sfc": "^3.5.18",
"@vue/language-server": "^3.1.8",
"@vue/language-server": "^3.2.2",
"@vue/test-utils": "^2.4.6",
"@vvo/tzdb": "^6.198.0",
"axios": "^1.13.2",
@@ -65,7 +65,7 @@
"css-loader": "^7.1.2",
"cssnano": "^7.1.2",
"escape-string-regexp": "^4.0.0",
"eslint": "^9.39.1",
"eslint": "^9.39.2",
"eslint-config-prettier": "^10.1.8",
"eslint-formatter-pretty": "^6.0.1",
"eslint-plugin-html": "^8.1.3",
@@ -84,7 +84,7 @@
"i": "^0.3.7",
"jsdom": "^26.1.0",
"luxon": "^3.7.2",
"maplibre-gl": "^5.14.0",
"maplibre-gl": "^5.15.0",
"memoize-one": "^6.0.0",
"mini-css-extract-plugin": "^2.9.4",
"minimist": "^1.2.8",
@@ -95,7 +95,7 @@
"postcss": "^8.5.6",
"postcss-import": "^16.1.1",
"postcss-loader": "^8.2.0",
"postcss-preset-env": "^10.5.0",
"postcss-preset-env": "^10.6.0",
"postcss-reporter": "^7.1.0",
"postcss-url": "^10.1.3",
"prettier": "^3.7.4",
@@ -103,7 +103,7 @@
"regenerator-runtime": "^0.14.1",
"resolve-url-loader": "^5.0.0",
"sanitize-html": "^2.17.0",
"sass": "^1.96.0",
"sass": "^1.97.2",
"sass-loader": "^16.0.6",
"sockette": "^2.0.6",
"style-loader": "^4.0.0",
@@ -122,8 +122,8 @@
"vue-sanitize-directive": "^0.2.1",
"vue-style-loader": "^4.1.3",
"vue3-gettext": "^2.4.0",
"vuetify": "^3.11.3",
"webpack": "^5.103.0",
"vuetify": "^3.11.6",
"webpack": "^5.104.1",
"webpack-bundle-analyzer": "^4.10.2",
"webpack-cli": "^6.0.1",
"webpack-hot-middleware": "^2.26.1",


@@ -196,6 +196,10 @@ export let Options = [
value: "fa",
rtl: true,
},
{
text: "Latviešu", // Latvian
value: "lv",
},
];
// Returns the Vuetify UI messages translated with Gettext.

File diff suppressed because one or more lines are too long

frontend/src/locales/lv.po Normal file

File diff suppressed because it is too large.


@@ -1,4 +1,4 @@
import { describe, it, expect } from "vitest";
import { describe, it, expect, beforeEach, afterEach } from "vitest";
import "../fixtures";
import Config from "common/config";
import StorageShim from "node-storage-shim";
@@ -11,7 +11,31 @@ const createTestConfig = () => {
return new Config(new StorageShim(), values);
};
const resetThemesToDefault = () => {
themes.SetOptions([
{
text: "Default",
value: "default",
disabled: false,
},
]);
themes.Set("default", {
name: "default",
title: "Default",
colors: {},
variables: {},
});
};
describe("common/config", () => {
beforeEach(() => {
resetThemesToDefault();
});
afterEach(() => {
resetThemesToDefault();
});
it("should get all config values", () => {
const storage = new StorageShim();
const values = { siteTitle: "Foo", name: "testConfig", year: "2300" };
@@ -116,42 +140,12 @@ describe("common/config", () => {
variables: {},
};
themes.SetOptions([
{
text: "Default",
value: "default",
disabled: false,
},
]);
themes.Set("default", {
name: "default",
title: "Default",
colors: {},
variables: {},
});
themes.Assign([forcedTheme]);
cfg.setTheme("default");
expect(cfg.themeName).toBe("portal-forced");
expect(cfg.theme.colors.background).toBe("#111111");
themes.Remove("portal-forced");
themes.SetOptions([
{
text: "Default",
value: "default",
disabled: false,
},
]);
themes.Set("default", {
name: "default",
title: "Default",
colors: {},
variables: {},
});
});
it("should return app edition", () => {


@@ -0,0 +1,370 @@
import { describe, it, expect, vi, afterEach } from "vitest";
import { shallowMount, config as VTUConfig } from "@vue/test-utils";
import PNavigation from "component/navigation.vue";
function mountNavigation({
routeName = "photos",
routeMeta = { hideNav: false },
isPublic = false,
isRestricted = false,
sessionAuth = true,
featureOverrides = {},
configValues = {},
allowMock,
routerPush,
vuetifyDisplay = { smAndDown: false },
eventPublish,
utilOverrides = {},
sessionOverrides = {},
} = {}) {
const baseConfig = VTUConfig.global.mocks.$config || {};
const baseEvent = VTUConfig.global.mocks.$event || {};
const baseUtil = VTUConfig.global.mocks.$util || {};
const baseNotify = VTUConfig.global.mocks.$notify || {};
const featureFlags = {
files: true,
settings: true,
upload: true,
account: true,
logs: true,
library: true,
places: true,
...featureOverrides,
};
const values = {
siteUrl: "http://localhost:2342/",
usage: { filesTotal: 1024, filesUsed: 512 },
legalUrl: configValues.legalUrl ?? null,
legalInfo: configValues.legalInfo ?? "",
disable: { settings: false },
count: {},
...configValues,
};
const configMock = {
...baseConfig,
getName: baseConfig.getName || vi.fn(() => "PhotoPrism"),
getAbout: baseConfig.getAbout || vi.fn(() => "About"),
getIcon: baseConfig.getIcon || vi.fn(() => "/icon.png"),
getTier: baseConfig.getTier || vi.fn(() => 1),
isPro: baseConfig.isPro || vi.fn(() => false),
isSponsor: baseConfig.isSponsor || vi.fn(() => false),
get: vi.fn((key) => {
if (key === "demo") return false;
if (key === "public") return isPublic;
if (key === "readonly") return false;
return false;
}),
feature: vi.fn((name) => {
if (name in featureFlags) {
return !!featureFlags[name];
}
return true;
}),
allow: allowMock || baseConfig.allow || vi.fn(() => true),
deny: vi.fn((resource, action) => (resource === "photos" && action === "access_library" ? isRestricted : false)),
values,
disconnected: false,
page: { title: "Photos" },
test: false,
};
const session = {
auth: sessionAuth,
isAdmin: vi.fn(() => true),
isSuperAdmin: vi.fn(() => true),
hasScope: vi.fn(() => false),
getUser: vi.fn(() => ({
getDisplayName: vi.fn(() => "Test User"),
getAccountInfo: vi.fn(() => "test@example.com"),
getAvatarURL: vi.fn(() => "/avatar.jpg"),
})),
logout: vi.fn(),
...sessionOverrides,
};
const publish = eventPublish || baseEvent.publish || vi.fn();
const eventBus = {
...baseEvent,
publish,
subscribe: baseEvent.subscribe || vi.fn(() => "sub-id"),
unsubscribe: baseEvent.unsubscribe || vi.fn(),
};
const notify = {
...baseNotify,
info: baseNotify.info || vi.fn(),
blockUI: baseNotify.blockUI || vi.fn(),
};
const util = {
...baseUtil,
openExternalUrl: vi.fn(),
gigaBytes: vi.fn((bytes) => bytes),
...utilOverrides,
};
const push = routerPush || vi.fn();
const wrapper = shallowMount(PNavigation, {
global: {
mocks: {
$config: configMock,
$session: session,
$router: { push },
$route: { name: routeName, meta: routeMeta },
$vuetify: { display: { smAndDown: !!vuetifyDisplay.smAndDown } },
$event: eventBus,
$util: util,
$notify: notify,
$isRtl: false,
},
stubs: {
"router-link": { template: "<a><slot /></a>" },
},
},
});
return {
wrapper,
configMock,
session,
eventBus,
notify,
util,
push,
};
}
describe("component/navigation", () => {
afterEach(() => {
vi.restoreAllMocks();
});
describe("routeName", () => {
it("returns true when current route starts with given name", () => {
const { wrapper } = mountNavigation({ routeName: "photos_browse" });
expect(wrapper.vm.routeName("photos")).toBe(true);
expect(wrapper.vm.routeName("albums")).toBe(false);
});
it("returns false when name or route name is missing", () => {
const { wrapper } = mountNavigation({ routeName: "" });
expect(wrapper.vm.routeName("photos")).toBe(false);
expect(wrapper.vm.routeName("")).toBe(false);
});
});
describe("auth and visibility", () => {
it("auth is true when session is authenticated", () => {
const { wrapper } = mountNavigation({ sessionAuth: true, isPublic: false });
expect(wrapper.vm.auth).toBe(true);
});
it("auth is true when instance is public even without session", () => {
const { wrapper } = mountNavigation({ sessionAuth: false, isPublic: true });
expect(wrapper.vm.auth).toBe(true);
});
it("auth is false when neither session nor public access is available", () => {
const { wrapper } = mountNavigation({ sessionAuth: false, isPublic: false });
expect(wrapper.vm.auth).toBe(false);
});
it("visible is false when route meta.hideNav is true", () => {
const { wrapper } = mountNavigation({ routeMeta: { hideNav: true } });
expect(wrapper.vm.visible).toBe(false);
});
});
describe("drawer behavior", () => {
it("toggleDrawer toggles drawer on small screens", () => {
const { wrapper } = mountNavigation({
vuetifyDisplay: { smAndDown: true },
sessionAuth: true,
});
// Force small-screen mode and authenticated session
wrapper.vm.$vuetify.display.smAndDown = true;
wrapper.vm.session.auth = true;
wrapper.vm.isPublic = false;
wrapper.vm.drawer = false;
wrapper.vm.toggleDrawer({ target: {} });
expect(wrapper.vm.drawer).toBe(true);
wrapper.vm.toggleDrawer({ target: {} });
expect(wrapper.vm.drawer).toBe(false);
});
it("toggleDrawer toggles mini mode on desktop", () => {
const { wrapper } = mountNavigation({
vuetifyDisplay: { smAndDown: false },
isRestricted: false,
});
const initial = wrapper.vm.isMini;
wrapper.vm.toggleDrawer({ target: {} });
expect(wrapper.vm.isMini).toBe(!initial);
});
it("toggleIsMini respects restricted mode and updates localStorage", () => {
const setItemSpy = vi.spyOn(Storage.prototype, "setItem");
const { wrapper } = mountNavigation({ isRestricted: false });
const initial = wrapper.vm.isMini;
wrapper.vm.toggleIsMini();
expect(wrapper.vm.isMini).toBe(!initial);
expect(setItemSpy).toHaveBeenCalledWith("navigation.mode", `${!initial}`);
wrapper.vm.isRestricted = true;
const before = wrapper.vm.isMini;
wrapper.vm.toggleIsMini();
expect(wrapper.vm.isMini).toBe(before);
});
});
describe("account and legal navigation", () => {
it("showAccountSettings routes to account settings when account feature is enabled", () => {
const { wrapper, push } = mountNavigation({
featureOverrides: { account: true },
});
wrapper.vm.showAccountSettings();
expect(push).toHaveBeenCalledWith({ name: "settings_account" });
});
it("showAccountSettings falls back to general settings when account feature is disabled", () => {
const { wrapper, push } = mountNavigation({
featureOverrides: { account: false },
});
wrapper.vm.showAccountSettings();
expect(push).toHaveBeenCalledWith({ name: "settings" });
});
it("showLegalInfo opens external URL when legalUrl is configured", () => {
const { wrapper, util } = mountNavigation({
configValues: { legalUrl: "https://example.com/legal" },
});
wrapper.vm.showLegalInfo();
expect(util.openExternalUrl).toHaveBeenCalledWith("https://example.com/legal");
});
it("showLegalInfo routes to about page when legalUrl is missing", () => {
const { wrapper, push } = mountNavigation({
configValues: { legalUrl: null },
});
wrapper.vm.showLegalInfo();
expect(push).toHaveBeenCalledWith({ name: "about" });
});
});
describe("home and upload actions", () => {
it("onHome toggles drawer on small screens and does not navigate", () => {
const { wrapper, push } = mountNavigation({
vuetifyDisplay: { smAndDown: true },
routeName: "browse",
});
// Ensure mobile mode and authenticated session so drawer logic runs
wrapper.vm.$vuetify.display.smAndDown = true;
wrapper.vm.session.auth = true;
wrapper.vm.isPublic = false;
wrapper.vm.drawer = false;
wrapper.vm.onHome({ target: {} });
expect(wrapper.vm.drawer).toBe(true);
expect(push).not.toHaveBeenCalled();
});
it("onHome navigates to home on desktop when not already there", () => {
const { wrapper, push } = mountNavigation({
vuetifyDisplay: { smAndDown: false },
routeName: "albums",
});
// Force desktop mode explicitly to avoid relying on Vuetify defaults
wrapper.vm.$vuetify.display.smAndDown = false;
wrapper.vm.onHome({ target: {} });
expect(push).toHaveBeenCalledWith({ name: "home" });
});
it("openUpload publishes dialog.upload event", () => {
const publish = vi.fn();
const { wrapper, eventBus } = mountNavigation({ eventPublish: publish });
wrapper.vm.openUpload();
expect(eventBus.publish).toHaveBeenCalledWith("dialog.upload");
});
});
describe("info and usage actions", () => {
it("reloadApp shows info notification and blocks UI", () => {
vi.useFakeTimers();
const { wrapper, notify } = mountNavigation();
const setTimeoutSpy = vi.spyOn(global, "setTimeout");
wrapper.vm.reloadApp();
expect(notify.info).toHaveBeenCalledWith("Reloading…");
expect(notify.blockUI).toHaveBeenCalled();
expect(setTimeoutSpy).toHaveBeenCalled();
vi.useRealTimers();
});
it("showUsageInfo routes to index files", () => {
const { wrapper, push } = mountNavigation();
wrapper.vm.showUsageInfo();
expect(push).toHaveBeenCalledWith({ path: "/index/files" });
});
it("showServerConnectionHelp routes to websockets help", () => {
const { wrapper, push } = mountNavigation();
wrapper.vm.showServerConnectionHelp();
expect(push).toHaveBeenCalledWith({ path: "/help/websockets" });
});
});
describe("indexing state", () => {
it("onIndex sets indexing true for file, folder and indexing events", () => {
const { wrapper } = mountNavigation();
wrapper.vm.onIndex("index.file");
expect(wrapper.vm.indexing).toBe(true);
wrapper.vm.onIndex("index.folder");
expect(wrapper.vm.indexing).toBe(true);
wrapper.vm.onIndex("index.indexing");
expect(wrapper.vm.indexing).toBe(true);
});
it("onIndex sets indexing false when completed", () => {
const { wrapper } = mountNavigation();
wrapper.vm.indexing = true;
wrapper.vm.onIndex("index.completed");
expect(wrapper.vm.indexing).toBe(false);
});
});
describe("logout", () => {
it("onLogout calls session.logout", () => {
const logout = vi.fn();
const { wrapper, session } = mountNavigation({
sessionOverrides: { logout },
});
wrapper.vm.onLogout();
expect(session.logout).toHaveBeenCalled();
});
});
});


@@ -1,5 +1,5 @@
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest";
import { shallowMount } from "@vue/test-utils";
import { shallowMount, config as VTUConfig } from "@vue/test-utils";
import { nextTick } from "vue";
import PPhotoBatchEdit from "component/photo/batch-edit.vue";
import * as contexts from "options/contexts";
@@ -146,20 +146,9 @@ describe("component/photo/batch-edit", () => {
},
global: {
mocks: {
$notify: {
success: vi.fn(),
error: vi.fn(),
},
$lightbox: {
openView: vi.fn(),
},
$event: {
subscribe: vi.fn(),
unsubscribe: vi.fn(),
},
$config: {
feature: vi.fn().mockReturnValue(true),
},
$vuetify: { display: { mdAndDown: false } },
},
stubs: {
@@ -202,6 +191,7 @@ describe("component/photo/batch-edit", () => {
});
afterEach(() => {
vi.restoreAllMocks();
if (wrapper) {
wrapper.unmount();
}
@@ -395,8 +385,6 @@ describe("component/photo/batch-edit", () => {
expect(ctx.allowEdit).toBe(false);
expect(ctx.allowSelect).toBe(false);
expect(ctx.context).toBe(contexts.BatchEdit);
spy.mockRestore();
});
it("should clamp invalid index to first photo", () => {
@@ -407,8 +395,6 @@
expect(ctx.index).toBe(0);
expect(ctx.allowSelect).toBe(false);
spy.mockRestore();
});
});


@@ -0,0 +1,325 @@
import { describe, it, expect, vi, afterEach } from "vitest";
import { shallowMount, config as VTUConfig } from "@vue/test-utils";
import PTabPhotoFiles from "component/photo/edit/files.vue";
import Thumb from "model/thumb";
function createFile(overrides = {}) {
return {
UID: "file-uid",
Name: "2018/01/dir:with#hash/file.jpg",
FileType: "jpg",
Error: "",
Primary: false,
Sidecar: false,
Root: "/",
Missing: false,
Pages: 0,
Frames: 0,
Duration: 0,
FPS: 0,
Hash: "hash123",
OriginalName: "file.jpg",
ColorProfile: "",
MainColor: "",
Chroma: 0,
CreatedAt: "2023-01-01T12:00:00Z",
CreatedIn: 1000,
UpdatedAt: "2023-01-02T12:00:00Z",
UpdatedIn: 2000,
thumbnailUrl: vi.fn(() => "/thumb/file.jpg"),
storageInfo: vi.fn(() => "local"),
typeInfo: vi.fn(() => "JPEG"),
sizeInfo: vi.fn(() => "1 MB"),
isAnimated: vi.fn(() => false),
baseName: vi.fn(() => "file.jpg"),
download: vi.fn(),
...overrides,
};
}
function mountPhotoFiles({
fileOverrides = {},
featuresOverrides = {},
experimental = false,
isMobile = false,
modelOverrides = {},
routerOverrides = {},
} = {}) {
const baseConfig = VTUConfig.global.mocks.$config || {};
const baseSettings = baseConfig.getSettings ? baseConfig.getSettings() : { features: {} };
const features = {
...(baseSettings.features || {}),
download: true,
edit: true,
delete: true,
...featuresOverrides,
};
const configMock = {
...baseConfig,
getSettings: vi.fn(() => ({
...baseSettings,
features,
})),
get: vi.fn((key) => {
if (key === "experimental") return experimental;
if (baseConfig.get) {
return baseConfig.get(key);
}
return false;
}),
getTimeZone: baseConfig.getTimeZone || vi.fn(() => "UTC"),
allow: baseConfig.allow || vi.fn(() => true),
values: baseConfig.values || {},
};
const file = createFile(fileOverrides);
const model = {
fileModels: vi.fn(() => [file]),
deleteFile: vi.fn(() => Promise.resolve()),
unstackFile: vi.fn(),
setPrimaryFile: vi.fn(),
changeFileOrientation: vi.fn(() => Promise.resolve()),
...modelOverrides,
};
const router = {
push: vi.fn(),
resolve: vi.fn((route) => ({ href: route.path || "" })),
...routerOverrides,
};
const lightbox = {
openModels: vi.fn(),
};
const baseUtil = VTUConfig.global.mocks.$util || {};
const util = {
...baseUtil,
openUrl: vi.fn(),
formatDuration: baseUtil.formatDuration || vi.fn((d) => String(d)),
fileType: baseUtil.fileType || vi.fn((t) => t),
codecName: baseUtil.codecName || vi.fn((c) => c),
formatNs: baseUtil.formatNs || vi.fn((n) => String(n)),
};
const wrapper = shallowMount(PTabPhotoFiles, {
props: {
uid: "photo-uid",
},
global: {
mocks: {
$config: configMock,
$view: {
getData: () => ({
model,
}),
},
$router: router,
$lightbox: lightbox,
$util: util,
$isMobile: isMobile,
$gettext: VTUConfig.global.mocks.$gettext || ((s) => s),
$notify: VTUConfig.global.mocks.$notify,
$isRtl: false,
},
stubs: {
"p-file-delete-dialog": true,
},
},
});
return {
wrapper,
file,
model,
router,
lightbox,
util,
configMock,
};
}
describe("component/photo/edit/files", () => {
afterEach(() => {
vi.restoreAllMocks();
});
describe("action buttons visibility", () => {
it("shows download, primary, unstack and delete buttons for editable JPG file", () => {
const { wrapper } = mountPhotoFiles({
fileOverrides: { FileType: "jpg", Primary: false, Sidecar: false, Root: "/", Error: "" },
featuresOverrides: { download: true, edit: true, delete: true },
});
const file = wrapper.vm.view.model.fileModels()[0];
const { features, experimental, canAccessPrivate } = wrapper.vm;
// Download button conditions
expect(features.download).toBe(true);
// Primary button conditions
expect(features.edit && (file.FileType === "jpg" || file.FileType === "png") && !file.Error && !file.Primary).toBe(true);
// Unstack button conditions
expect(features.edit && !file.Sidecar && !file.Error && !file.Primary && file.Root === "/").toBe(true);
// Delete button conditions
expect(features.delete && !file.Primary).toBe(true);
// Browse button should not be visible in this scenario
expect(experimental && canAccessPrivate && file.Primary).toBe(false);
});
it("shows browse button only for primary file when experimental and private access are enabled", () => {
const { wrapper } = mountPhotoFiles({
fileOverrides: { Primary: true, Root: "/", FileType: "jpg" },
experimental: true,
});
const file = wrapper.vm.view.model.fileModels()[0];
const { features, experimental, canAccessPrivate } = wrapper.vm;
// Browse button conditions
expect(experimental && canAccessPrivate && file.Primary).toBe(true);
// Other actions should not be available for primary file in this scenario
expect(features.edit && (file.FileType === "jpg" || file.FileType === "png") && !file.Error && !file.Primary).toBe(false);
expect(features.edit && !file.Sidecar && !file.Error && !file.Primary && file.Root === "/").toBe(false);
expect(features.delete && !file.Primary).toBe(false);
});
});
describe("openFile", () => {
it("opens file in lightbox using Thumb.fromFile", () => {
const thumbModel = {};
const { wrapper, file, model, lightbox } = mountPhotoFiles();
const thumbSpy = vi.spyOn(Thumb, "fromFile").mockReturnValue(thumbModel);
wrapper.vm.openFile(file);
expect(thumbSpy).toHaveBeenCalledWith(model, file);
expect(lightbox.openModels).toHaveBeenCalledWith([thumbModel], 0);
});
});
describe("openFolder", () => {
it("emits close and navigates via router.push on mobile", () => {
const { wrapper, router, util, file } = mountPhotoFiles({
isMobile: true,
fileOverrides: { Name: "2018/01/file.jpg" },
});
wrapper.vm.openFolder(file);
expect(wrapper.emitted("close")).toBeTruthy();
expect(router.push).toHaveBeenCalledWith({ path: "/index/files/2018/01" });
expect(util.openUrl).not.toHaveBeenCalled();
});
it("opens folder in new tab on desktop with encoded path", () => {
const encodedPath = "/index/files/2018/01/dir%3Awith%23hash";
const resolve = vi.fn((route) => ({ href: route.path }));
const { wrapper, util, file } = mountPhotoFiles({
isMobile: false,
routerOverrides: { resolve },
fileOverrides: { Name: "2018/01/dir:with#hash/file.jpg" },
});
wrapper.vm.openFolder(file);
expect(resolve).toHaveBeenCalledWith({ path: encodedPath });
expect(util.openUrl).toHaveBeenCalledWith(encodedPath);
});
});
describe("file actions", () => {
it("downloadFile shows notification and calls file.download", async () => {
const { wrapper, file } = mountPhotoFiles();
const { default: notifyModule } = await import("common/notify");
const notifySpy = vi.spyOn(notifyModule, "success");
wrapper.vm.downloadFile(file);
expect(notifySpy).toHaveBeenCalledWith("Downloading…");
expect(file.download).toHaveBeenCalledTimes(1);
});
it("unstackFile and setPrimaryFile delegate to model when file is present", () => {
const unstackSpy = vi.fn();
const setPrimarySpy = vi.fn();
const { wrapper, file } = mountPhotoFiles({
modelOverrides: {
unstackFile: unstackSpy,
setPrimaryFile: setPrimarySpy,
},
});
wrapper.vm.unstackFile(file);
wrapper.vm.setPrimaryFile(file);
expect(unstackSpy).toHaveBeenCalledWith(file.UID);
expect(setPrimarySpy).toHaveBeenCalledWith(file.UID);
unstackSpy.mockClear();
setPrimarySpy.mockClear();
wrapper.vm.unstackFile(null);
wrapper.vm.setPrimaryFile(null);
expect(unstackSpy).not.toHaveBeenCalled();
expect(setPrimarySpy).not.toHaveBeenCalled();
});
it("confirmDeleteFile calls model.deleteFile and closes dialog", async () => {
const deleteFileSpy = vi.fn(() => Promise.resolve());
const { wrapper, file } = mountPhotoFiles({
modelOverrides: {
deleteFile: deleteFileSpy,
},
});
wrapper.vm.deleteFile.dialog = true;
wrapper.vm.deleteFile.file = file;
await wrapper.vm.confirmDeleteFile();
expect(deleteFileSpy).toHaveBeenCalledWith(file.UID);
expect(wrapper.vm.deleteFile.dialog).toBe(false);
expect(wrapper.vm.deleteFile.file).toBeNull();
});
});
describe("changeOrientation", () => {
it("calls model.changeFileOrientation and shows success message", async () => {
const changeOrientationSpy = vi.fn(() => Promise.resolve());
const { wrapper, file } = mountPhotoFiles({
modelOverrides: {
changeFileOrientation: changeOrientationSpy,
},
});
const notifySuccessSpy = vi.spyOn(wrapper.vm.$notify, "success");
wrapper.vm.changeOrientation(file);
expect(wrapper.vm.busy).toBe(true);
await Promise.resolve();
expect(changeOrientationSpy).toHaveBeenCalledWith(file);
expect(notifySuccessSpy).toHaveBeenCalledWith("Changes successfully saved");
expect(wrapper.vm.busy).toBe(false);
});
it("does nothing when file is missing", () => {
const changeOrientationSpy = vi.fn(() => Promise.resolve());
const { wrapper } = mountPhotoFiles({
modelOverrides: {
changeFileOrientation: changeOrientationSpy,
},
});
wrapper.vm.changeOrientation(null);
expect(changeOrientationSpy).not.toHaveBeenCalled();
expect(wrapper.vm.busy).toBe(false);
});
});
});
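The `openFolder` tests above expect each path segment to be URL-encoded (so `:` becomes `%3A` and `#` becomes `%23`) while the `/` separators stay intact. A minimal sketch of that encoding step in plain JavaScript — the function name and route prefix are illustrative, not the component's actual implementation:

```javascript
// Encode each segment of a file's folder path so reserved characters
// like ":" and "#" survive in a URL, while keeping "/" separators.
function encodeFolderPath(prefix, fileName) {
  // Drop the file name, keeping only the directory part.
  const dir = fileName.substring(0, fileName.lastIndexOf("/"));
  // Encode per segment so the slashes themselves are not escaped.
  const encoded = dir.split("/").map(encodeURIComponent).join("/");
  return prefix + encoded;
}

// encodeFolderPath("/index/files/", "2018/01/dir:with#hash/file.jpg")
// → "/index/files/2018/01/dir%3Awith%23hash"
```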


@ -0,0 +1,226 @@
import { describe, it, expect, vi, afterEach } from "vitest";
import { shallowMount, config as VTUConfig } from "@vue/test-utils";
import PTabPhotoLabels from "component/photo/edit/labels.vue";
import Thumb from "model/thumb";
function mountPhotoLabels({ modelOverrides = {}, routerOverrides = {}, utilOverrides = {}, notifyOverrides = {}, viewHasModel = true } = {}) {
const baseConfig = VTUConfig.global.mocks.$config || {};
const baseNotify = VTUConfig.global.mocks.$notify || {};
const baseUtil = VTUConfig.global.mocks.$util || {};
const model = viewHasModel
? {
removeLabel: vi.fn(() => Promise.resolve()),
addLabel: vi.fn(() => Promise.resolve()),
activateLabel: vi.fn(),
...modelOverrides,
}
: null;
const router = {
push: vi.fn(() => Promise.resolve()),
...routerOverrides,
};
const util = {
...baseUtil,
sourceName: vi.fn((s) => `source-${s}`),
...utilOverrides,
};
const notify = {
...baseNotify,
success: baseNotify.success || vi.fn(),
error: baseNotify.error || vi.fn(),
warn: baseNotify.warn || vi.fn(),
...notifyOverrides,
};
const lightbox = {
openModels: vi.fn(),
};
const wrapper = shallowMount(PTabPhotoLabels, {
props: {
uid: "photo-uid",
},
global: {
mocks: {
$config: baseConfig,
$view: {
getData: () => ({
model,
}),
},
$router: router,
$util: util,
$notify: notify,
$lightbox: lightbox,
$gettext: VTUConfig.global.mocks.$gettext || ((s) => s),
$isRtl: false,
},
},
});
return {
wrapper,
model,
router,
util,
notify,
lightbox,
};
}
describe("component/photo/edit/labels", () => {
afterEach(() => {
vi.restoreAllMocks();
});
describe("sourceName", () => {
it("delegates to $util.sourceName", () => {
const sourceNameSpy = vi.fn(() => "Human");
const { wrapper, util } = mountPhotoLabels({
utilOverrides: { sourceName: sourceNameSpy },
});
const result = wrapper.vm.sourceName("auto");
expect(sourceNameSpy).toHaveBeenCalledWith("auto");
expect(result).toBe("Human");
// Ensure util on instance is the same object so we actually spied on the right method
expect(wrapper.vm.$util).toBe(util);
});
});
describe("removeLabel", () => {
it("does nothing when label is missing", () => {
const removeSpy = vi.fn(() => Promise.resolve());
const { wrapper } = mountPhotoLabels({
modelOverrides: { removeLabel: removeSpy },
});
wrapper.vm.removeLabel(null);
expect(removeSpy).not.toHaveBeenCalled();
});
it("calls model.removeLabel and shows success message", async () => {
const removeSpy = vi.fn(() => Promise.resolve());
const notifySuccessSpy = vi.fn();
const { wrapper } = mountPhotoLabels({
modelOverrides: { removeLabel: removeSpy },
notifyOverrides: { success: notifySuccessSpy },
});
const label = { ID: 5, Name: "Cat" };
wrapper.vm.removeLabel(label);
await Promise.resolve();
expect(removeSpy).toHaveBeenCalledWith(5);
expect(notifySuccessSpy).toHaveBeenCalledWith("removed Cat");
});
});
describe("addLabel", () => {
it("does nothing when newLabel is empty", () => {
const addSpy = vi.fn(() => Promise.resolve());
const { wrapper } = mountPhotoLabels({
modelOverrides: { addLabel: addSpy },
});
wrapper.vm.newLabel = "";
wrapper.vm.addLabel();
expect(addSpy).not.toHaveBeenCalled();
});
it("calls model.addLabel, shows success message and clears newLabel", async () => {
const addSpy = vi.fn(() => Promise.resolve());
const notifySuccessSpy = vi.fn();
const { wrapper } = mountPhotoLabels({
modelOverrides: { addLabel: addSpy },
notifyOverrides: { success: notifySuccessSpy },
});
wrapper.vm.newLabel = "Dog";
wrapper.vm.addLabel();
await Promise.resolve();
expect(addSpy).toHaveBeenCalledWith("Dog");
expect(notifySuccessSpy).toHaveBeenCalledWith("added Dog");
expect(wrapper.vm.newLabel).toBe("");
});
});
describe("activateLabel", () => {
it("does nothing when label is missing", () => {
const activateSpy = vi.fn();
const { wrapper } = mountPhotoLabels({
modelOverrides: { activateLabel: activateSpy },
});
wrapper.vm.activateLabel(null);
expect(activateSpy).not.toHaveBeenCalled();
});
it("delegates to model.activateLabel for valid label", () => {
const activateSpy = vi.fn();
const { wrapper } = mountPhotoLabels({
modelOverrides: { activateLabel: activateSpy },
});
const label = { ID: 7, Name: "Summer" };
wrapper.vm.activateLabel(label);
expect(activateSpy).toHaveBeenCalledWith(7);
});
});
describe("searchLabel", () => {
it("navigates to all route with label query and emits close", () => {
const push = vi.fn(() => Promise.resolve());
const { wrapper, router } = mountPhotoLabels({
routerOverrides: { push },
});
const label = { Slug: "animals" };
wrapper.vm.searchLabel(label);
expect(router.push).toHaveBeenCalledWith({
name: "all",
query: { q: "label:animals" },
});
expect(wrapper.emitted("close")).toBeTruthy();
});
});
describe("openPhoto", () => {
it("opens photo in lightbox using Thumb.fromPhotos when model is present", () => {
const thumbModel = {};
const fromPhotosSpy = vi.spyOn(Thumb, "fromPhotos").mockReturnValue([thumbModel]);
const { wrapper, model, lightbox } = mountPhotoLabels();
wrapper.vm.openPhoto();
expect(fromPhotosSpy).toHaveBeenCalledWith([model]);
expect(lightbox.openModels).toHaveBeenCalledWith([thumbModel], 0);
});
it("does nothing when model is missing", () => {
const fromPhotosSpy = vi.spyOn(Thumb, "fromPhotos").mockReturnValue([]);
const { wrapper, lightbox } = mountPhotoLabels({ viewHasModel: false });
wrapper.vm.openPhoto();
expect(fromPhotosSpy).not.toHaveBeenCalled();
expect(lightbox.openModels).not.toHaveBeenCalled();
});
});
});
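Both test files above build their mocks through a factory that merges shared defaults with per-test overrides via object spread, so later properties win without mutating the defaults. The pattern, reduced to plain JavaScript (names are illustrative stand-ins for the factories above):

```javascript
// Merge-based mock factory: spreads are applied left to right, so any
// per-test override replaces the shared default of the same name.
function createMocks(overrides = {}) {
  const defaults = {
    removeLabel: () => Promise.resolve(),
    addLabel: () => Promise.resolve(),
  };
  return { ...defaults, ...overrides };
}

// A single override replaces one method; the rest keep their defaults.
const spy = { called: false };
const mocks = createMocks({ addLabel: () => { spy.called = true; } });
mocks.addLabel(); // invokes the override, not the default
```

Because each call builds a fresh object, one test's overrides can never leak into another test's mocks.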


@ -0,0 +1,346 @@
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest";
import { shallowMount, config as VTUConfig } from "@vue/test-utils";
import PPhotoToolbar from "component/photo/toolbar.vue";
import * as contexts from "options/contexts";
import "../../fixtures";
function mountToolbar({
context = contexts.Photos,
embedded = false,
filter = {
q: "",
country: "",
camera: 0,
year: 0,
month: 0,
color: "",
label: "",
order: "newest",
latlng: null,
},
staticFilter = {},
settings = { view: "mosaic" },
featuresOverrides = {},
searchOverrides = {},
allowMock,
refresh = vi.fn(),
updateFilter = vi.fn(),
updateQuery = vi.fn(),
eventPublish,
routerOverrides = {},
clipboard,
openUrlSpy,
} = {}) {
const baseConfig = VTUConfig.global.mocks.$config;
const baseSettings = baseConfig.getSettings ? baseConfig.getSettings() : { features: {} };
const features = {
...(baseSettings.features || {}),
upload: true,
delete: true,
settings: true,
...featuresOverrides,
};
const search = {
listView: true,
...searchOverrides,
};
const configMock = {
...baseConfig,
getSettings: vi.fn(() => ({
...baseSettings,
features,
search,
})),
allow: allowMock || vi.fn(() => true),
values: {
countries: [],
cameras: [],
categories: [],
...(baseConfig.values || {}),
},
};
const publish = eventPublish || vi.fn();
const router = {
push: vi.fn(),
resolve: vi.fn((route) => ({
href: `/library/${route.name || "browse"}`,
})),
...routerOverrides,
};
const clipboardMock =
clipboard ||
{
clear: vi.fn(),
};
const baseUtil = VTUConfig.global.mocks.$util || {};
const util = {
...baseUtil,
openUrl: openUrlSpy || vi.fn(),
};
const wrapper = shallowMount(PPhotoToolbar, {
props: {
context,
filter,
staticFilter,
settings,
embedded,
refresh,
updateFilter,
updateQuery,
},
global: {
mocks: {
$config: configMock,
$session: { isSuperAdmin: vi.fn(() => false) },
$event: {
...(VTUConfig.global.mocks.$event || {}),
publish,
},
$router: router,
$clipboard: clipboardMock,
$util: util,
},
stubs: {
PActionMenu: true,
PConfirmDialog: true,
},
},
});
return {
wrapper,
configMock,
publish,
router,
clipboard: clipboardMock,
refresh,
updateFilter,
updateQuery,
openUrl: util.openUrl,
};
}
describe("component/photo/toolbar", () => {
afterEach(() => {
vi.restoreAllMocks();
});
describe("menuActions", () => {
it("shows upload and docs actions for photos context when upload is allowed", () => {
const { wrapper } = mountToolbar();
const actions = wrapper.vm.menuActions();
const byName = (name) => actions.find((a) => a.name === name);
const refreshAction = byName("refresh");
const uploadAction = byName("upload");
const docsAction = byName("docs");
const troubleshootingAction = byName("troubleshooting");
expect(refreshAction).toBeDefined();
expect(uploadAction).toBeDefined();
expect(docsAction).toBeDefined();
expect(troubleshootingAction).toBeDefined();
expect(refreshAction.visible).toBe(true);
expect(uploadAction.visible).toBe(true);
expect(docsAction.visible).toBe(true);
expect(troubleshootingAction.visible).toBe(false);
});
it("hides upload action in archive and hidden contexts", () => {
const { wrapper: archiveWrapper } = mountToolbar({ context: contexts.Archive });
const archiveActions = archiveWrapper.vm.menuActions();
const archiveUpload = archiveActions.find((a) => a.name === "upload");
const archiveDocs = archiveActions.find((a) => a.name === "docs");
const archiveTroubleshooting = archiveActions.find((a) => a.name === "troubleshooting");
expect(archiveUpload).toBeDefined();
expect(archiveDocs).toBeDefined();
expect(archiveTroubleshooting).toBeDefined();
expect(archiveUpload.visible).toBe(false);
expect(archiveDocs.visible).toBe(true);
expect(archiveTroubleshooting.visible).toBe(false);
const { wrapper: hiddenWrapper } = mountToolbar({ context: contexts.Hidden });
const hiddenActions = hiddenWrapper.vm.menuActions();
const hiddenUpload = hiddenActions.find((a) => a.name === "upload");
const hiddenDocs = hiddenActions.find((a) => a.name === "docs");
const hiddenTroubleshooting = hiddenActions.find((a) => a.name === "troubleshooting");
expect(hiddenUpload).toBeDefined();
expect(hiddenDocs).toBeDefined();
expect(hiddenTroubleshooting).toBeDefined();
expect(hiddenUpload.visible).toBe(false);
expect(hiddenDocs.visible).toBe(false);
expect(hiddenTroubleshooting.visible).toBe(true);
});
it("invokes refresh prop and publishes upload dialog events on click", () => {
const refresh = vi.fn();
const publish = vi.fn();
const { wrapper } = mountToolbar({ refresh, eventPublish: publish });
const actions = wrapper.vm.menuActions();
const refreshAction = actions.find((a) => a.name === "refresh");
const uploadAction = actions.find((a) => a.name === "upload");
expect(refreshAction).toBeDefined();
expect(uploadAction).toBeDefined();
refreshAction.click();
expect(refresh).toHaveBeenCalledTimes(1);
uploadAction.click();
expect(publish).toHaveBeenCalledWith("dialog.upload");
});
});
describe("view handling", () => {
it("setView keeps list when listView search setting is enabled", () => {
const refresh = vi.fn();
const { wrapper } = mountToolbar({
refresh,
searchOverrides: { listView: true },
});
wrapper.vm.expanded = true;
wrapper.vm.setView("list");
expect(refresh).toHaveBeenCalledWith({ view: "list" });
expect(wrapper.vm.expanded).toBe(false);
});
it("setView falls back to mosaic when list view is disabled", () => {
const refresh = vi.fn();
const { wrapper } = mountToolbar({
refresh,
searchOverrides: { listView: false },
});
wrapper.vm.expanded = true;
wrapper.vm.setView("list");
expect(refresh).toHaveBeenCalledWith({ view: "mosaic" });
expect(wrapper.vm.expanded).toBe(false);
});
});
describe("sortOptions", () => {
it("provides archive-specific sort options for archive context", () => {
const { wrapper } = mountToolbar({ context: contexts.Archive });
const values = wrapper.vm.sortOptions.map((o) => o.value);
expect(values).toContain("archived");
expect(values).not.toContain("similar");
expect(values).not.toContain("relevance");
});
it("includes similarity and relevance options in default photos context", () => {
const { wrapper } = mountToolbar({ context: contexts.Photos });
const values = wrapper.vm.sortOptions.map((o) => o.value);
expect(values).toContain("similar");
expect(values).toContain("relevance");
});
});
describe("delete actions", () => {
it("deleteAll opens confirmation dialog only when delete is allowed", () => {
const allowAll = vi.fn(() => true);
const { wrapper } = mountToolbar({
allowMock: allowAll,
featuresOverrides: { delete: true },
});
wrapper.vm.deleteAll();
expect(wrapper.vm.dialog.delete).toBe(true);
const denyDelete = vi.fn((resource, action) => {
if (resource === "photos" && action === "delete") {
return false;
}
return true;
});
const { wrapper: noDeleteWrapper } = mountToolbar({
allowMock: denyDelete,
featuresOverrides: { delete: true },
});
noDeleteWrapper.vm.deleteAll();
expect(noDeleteWrapper.vm.dialog.delete).toBe(false);
});
it("batchDelete posts delete request and clears clipboard on success", async () => {
const clipboard = { clear: vi.fn() };
const { default: $notify } = await import("common/notify");
const { default: $api } = await import("common/api");
const postSpy = vi.spyOn($api, "post").mockResolvedValue({ data: {} });
const notifySpy = vi.spyOn($notify, "success");
const { wrapper } = mountToolbar({
clipboard,
featuresOverrides: { delete: true },
});
wrapper.vm.dialog.delete = true;
await wrapper.vm.batchDelete();
expect(postSpy).toHaveBeenCalledWith("batch/photos/delete", { all: true });
expect(wrapper.vm.dialog.delete).toBe(false);
expect(notifySpy).toHaveBeenCalledWith("Permanently deleted");
expect(clipboard.clear).toHaveBeenCalledTimes(1);
});
});
describe("browse actions", () => {
it("clearLocation navigates back to browse list", () => {
const push = vi.fn();
const { wrapper } = mountToolbar({
routerOverrides: {
push,
},
});
wrapper.vm.clearLocation();
expect(push).toHaveBeenCalledWith({ name: "browse" });
});
it("onBrowse opens places browse in new tab on desktop", () => {
const push = vi.fn();
const openUrlSpy = vi.fn();
const staticFilter = { q: "country:US" };
const { wrapper, router, openUrl } = mountToolbar({
staticFilter,
routerOverrides: {
push,
resolve: vi.fn((route) => ({
href: `/library/${route.name}?q=${route.query?.q || ""}`,
})),
},
openUrlSpy,
});
wrapper.vm.onBrowse();
expect(push).not.toHaveBeenCalled();
expect(router.resolve).toHaveBeenCalledWith({ name: "places_browse", query: staticFilter });
expect(openUrl).toHaveBeenCalledWith("/library/places_browse?q=country:US");
});
});
});
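The `deleteAll` tests above verify that the confirmation dialog opens only when both the `delete` feature flag and the `allow(resource, action)` permission callback agree. A reduced sketch of that gating logic — the function name is an assumption, not the component's code:

```javascript
// Open the batch-delete dialog only when the feature is enabled AND the
// permission callback grants "delete" on the "photos" resource.
function canDeleteAll(features, allow) {
  return Boolean(features.delete) && allow("photos", "delete");
}

// Mirrors the denyDelete mock in the tests: everything is allowed
// except deleting photos.
const denyDelete = (resource, action) => !(resource === "photos" && action === "delete");
```

With this shape, a permissive `allow` still cannot open the dialog when the feature flag is off, and vice versa.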


@ -1,8 +1,18 @@
import { describe, it, expect } from "vitest";
import { describe, it, expect, beforeEach, afterEach } from "vitest";
import "../fixtures";
import { Album, BatchSize } from "model/album";
describe("model/album", () => {
let originalBatchSize;
beforeEach(() => {
originalBatchSize = Album.batchSize();
});
afterEach(() => {
Album.setBatchSize(originalBatchSize);
});
it("should get route view", () => {
const values = { ID: 5, Title: "Christmas 2019", Slug: "christmas-2019" };
const album = new Album(values);
@ -312,7 +322,6 @@ describe("model/album", () => {
expect(Album.batchSize()).toBe(BatchSize);
Album.setBatchSize(30);
expect(Album.batchSize()).toBe(30);
Album.setBatchSize(BatchSize);
});
it("should like album", () => {

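The hunks above replace an in-test reset (`Album.setBatchSize(BatchSize)` at the end of the test body) with a `beforeEach`/`afterEach` save-and-restore, so the shared batch size is reset even when an assertion throws mid-test. The same pattern in plain JavaScript — the `Store` class is a stand-in for the model classes:

```javascript
// Stand-in for a model class with mutable module-level state.
class Store {
  static size = 90;
  static batchSize() { return Store.size; }
  static setBatchSize(n) { Store.size = n; }
}

// Save the global value before the test and restore it afterwards; the
// finally block runs even if the test body throws, so a failing
// assertion can no longer leak a modified value into later tests.
function withSavedBatchSize(testFn) {
  const original = Store.batchSize(); // beforeEach
  try {
    testFn();
  } finally {
    Store.setBatchSize(original); // afterEach
  }
}
```

An in-test reset at the end of the body is skipped the moment an earlier `expect` fails; the `try`/`finally` (or `afterEach`) version is not.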

@ -1,8 +1,18 @@
import { describe, it, expect } from "vitest";
import { describe, it, expect, beforeEach, afterEach } from "vitest";
import "../fixtures";
import { Face, BatchSize } from "model/face";
describe("model/face", () => {
let originalBatchSize;
beforeEach(() => {
originalBatchSize = Face.batchSize();
});
afterEach(() => {
Face.setBatchSize(originalBatchSize);
});
it("should get face defaults", () => {
const values = {};
const face = new Face(values);
@ -146,7 +156,6 @@ describe("model/face", () => {
expect(Face.batchSize()).toBe(BatchSize);
Face.setBatchSize(30);
expect(Face.batchSize()).toBe(30);
Face.setBatchSize(BatchSize);
});
it("should get collection resource", () => {


@ -1,8 +1,18 @@
import { describe, it, expect } from "vitest";
import { describe, it, expect, beforeEach, afterEach } from "vitest";
import "../fixtures";
import { Label, BatchSize } from "model/label";
describe("model/label", () => {
let originalBatchSize;
beforeEach(() => {
originalBatchSize = Label.batchSize();
});
afterEach(() => {
Label.setBatchSize(originalBatchSize);
});
it("should get route view", () => {
const values = { ID: 5, UID: "ABC123", Name: "Black Cat", Slug: "black-cat" };
const label = new Label(values);
@ -15,7 +25,6 @@ describe("model/label", () => {
expect(Label.batchSize()).toBe(BatchSize);
Label.setBatchSize(30);
expect(Label.batchSize()).toBe(30);
Label.setBatchSize(BatchSize);
});
it("should return classes", () => {


@ -1,8 +1,18 @@
import { describe, it, expect } from "vitest";
import { describe, it, expect, beforeEach, afterEach } from "vitest";
import "../fixtures";
import { Marker, BatchSize } from "model/marker";
describe("model/marker", () => {
let originalBatchSize;
beforeEach(() => {
originalBatchSize = Marker.batchSize();
});
afterEach(() => {
Marker.setBatchSize(originalBatchSize);
});
it("should get marker defaults", () => {
const values = { FileUID: "fghjojp" };
const marker = new Marker(values);
@ -193,7 +203,6 @@ describe("model/marker", () => {
expect(Marker.batchSize()).toBe(BatchSize);
Marker.setBatchSize(30);
expect(Marker.batchSize()).toBe(30);
Marker.setBatchSize(BatchSize);
});
it("should get collection resource", () => {


@ -344,21 +344,21 @@ describe("model/photo", () => {
expect(result5).toBe("July 2012");
});
it("should test whether photo has location", () => {
it("should report hasLocation true for non-zero coordinates", () => {
const values = { ID: 5, Title: "Crazy Cat", Lat: 36.442881666666665, Lng: 28.229493333333334 };
const photo = new Photo(values);
const result = photo.hasLocation();
expect(result).toBe(true);
});
it("should test whether photo has location", () => {
it("should report hasLocation false for zero coordinates", () => {
const values = { ID: 5, Title: "Crazy Cat", Lat: 0, Lng: 0 };
const photo = new Photo(values);
const result = photo.hasLocation();
expect(result).toBe(false);
});
it("should get location", () => {
it("should get primary location label with country", () => {
const values = {
ID: 5,
Title: "Crazy Cat",
@ -372,7 +372,7 @@ describe("model/photo", () => {
expect(result).toBe("Cape Point, South Africa");
});
it("should get location", () => {
it("should get full location with state and country", () => {
const values = {
ID: 5,
Title: "Crazy Cat",
@ -389,7 +389,7 @@ describe("model/photo", () => {
expect(result).toBe("Cape Point, State, South Africa");
});
it("should get location", () => {
it("should return Unknown when country name does not match", () => {
const values = {
ID: 5,
Title: "Crazy Cat",
@ -405,14 +405,14 @@ describe("model/photo", () => {
expect(result).toBe("Unknown");
});
it("should get location", () => {
it("should return Unknown when only country name is set", () => {
const values = { ID: 5, Title: "Crazy Cat", CountryName: "Africa", PlaceCity: "Cape Town" };
const photo = new Photo(values);
const result = photo.locationInfo();
expect(result).toBe("Unknown");
});
it("should get camera", () => {
it("should get camera from model and file camera data", () => {
const values = { ID: 5, Title: "Crazy Cat", CameraModel: "EOSD10", CameraMake: "Canon" };
const photo = new Photo(values);
const result = photo.getCamera();
@ -438,7 +438,7 @@ describe("model/photo", () => {
expect(photo2.getCamera()).toBe("Canon abc");
});
it("should get camera", () => {
it("should return Unknown when camera info is missing", () => {
const values = { ID: 5, Title: "Crazy Cat" };
const photo = new Photo(values);
const result = photo.getCamera();


@ -1,8 +1,18 @@
import { describe, it, expect } from "vitest";
import { describe, it, expect, beforeEach, afterEach } from "vitest";
import "../fixtures";
import { Subject, BatchSize } from "model/subject";
describe("model/subject", () => {
let originalBatchSize;
beforeEach(() => {
originalBatchSize = Subject.batchSize();
});
afterEach(() => {
Subject.setBatchSize(originalBatchSize);
});
it("should get face defaults", () => {
const values = {};
const subject = new Subject(values);
@ -238,7 +248,6 @@ describe("model/subject", () => {
expect(Subject.batchSize()).toBe(BatchSize);
Subject.setBatchSize(30);
expect(Subject.batchSize()).toBe(30);
Subject.setBatchSize(BatchSize);
});
it("should get collection resource", () => {


@ -1,4 +1,4 @@
import { describe, it, expect } from "vitest";
import { describe, it, expect, beforeEach, afterEach } from "vitest";
import "../fixtures";
import * as options from "options/options";
import {
@ -25,6 +25,15 @@ import {
} from "options/options";
describe("options/options", () => {
let originalDefaultLocale;
beforeEach(() => {
originalDefaultLocale = options.DefaultLocale;
});
afterEach(() => {
SetDefaultLocale(originalDefaultLocale);
});
it("should get timezones", () => {
const timezones = options.TimeZones();
expect(timezones[0].ID).toBe("Local");
@ -93,13 +102,10 @@ describe("options/options", () => {
});
it("should set default locale", () => {
// Assuming DefaultLocale is exported and mutable for testing purposes
// Initial state check might depend on test execution order, so we control it here.
SetDefaultLocale("en"); // Ensure starting state
SetDefaultLocale("en");
expect(options.DefaultLocale).toBe("en");
SetDefaultLocale("de");
expect(options.DefaultLocale).toBe("de");
SetDefaultLocale("en"); // Reset for other tests
});
it("should return default when no locale is provided", () => {


@ -30,9 +30,9 @@ if (typeof global.ResizeObserver === "undefined") {
constructor(callback) {
this.callback = callback;
}
observe() {}
unobserve() {}
disconnect() {}
observe() { }
unobserve() { }
disconnect() { }
};
}
@ -54,22 +54,22 @@ config.global.mocks = {
$event: {
subscribe: () => "sub-id",
subscribeOnce: () => "sub-id-once",
unsubscribe: () => {},
publish: () => {},
unsubscribe: () => { },
publish: () => { },
},
$view: {
enter: () => {},
leave: () => {},
enter: () => { },
leave: () => { },
isActive: () => true,
},
$notify: { success: () => {}, error: () => {}, warn: () => {} },
$notify: { success: vi.fn(), error: vi.fn(), warn: vi.fn(), info: vi.fn() },
$fullscreen: {
isSupported: () => true,
isEnabled: () => false,
request: () => Promise.resolve(),
exit: () => Promise.resolve(),
},
$clipboard: { selection: [], has: () => false, toggle: () => {} },
$clipboard: { selection: [], has: () => false, toggle: () => { } },
$util: {
hasTouch: () => false,
encodeHTML: (s) => s,


@ -78,7 +78,9 @@ const config = {
clean: true,
},
resolve: {
modules: isCustom ? [PATHS.custom, PATHS.src, PATHS.modules] : [PATHS.src, PATHS.modules],
modules: isCustom
? [PATHS.custom, PATHS.src, "node_modules", PATHS.modules]
: [PATHS.src, "node_modules", PATHS.modules],
preferRelative: true,
alias: {
"vue$": "vue/dist/vue.runtime.esm-bundler.js",

go.mod

@ -14,7 +14,7 @@ require (
github.com/esimov/pigo v1.4.6
github.com/gin-contrib/gzip v1.2.5
github.com/gin-gonic/gin v1.11.0
github.com/golang/geo v0.0.0-20251209161508-25c597310d4b
github.com/golang/geo v0.0.0-20251223115337-4c285675e7fb
github.com/google/open-location-code/go v0.0.0-20250620134813-83986da0156b
github.com/gorilla/websocket v1.5.3
github.com/gosimple/slug v1.15.0
@ -55,7 +55,7 @@ require github.com/google/uuid v1.6.0
require github.com/chzyer/readline v1.5.1 // indirect
require github.com/gabriel-vasile/mimetype v1.4.11
require github.com/gabriel-vasile/mimetype v1.4.12
require (
golang.org/x/sync v0.19.0
@ -66,7 +66,7 @@ require github.com/go-ldap/ldap/v3 v3.4.12
require (
github.com/prometheus/client_golang v1.23.2
github.com/prometheus/common v0.67.4
github.com/prometheus/common v0.67.5
)
require github.com/dustinkirkland/golang-petname v0.0.0-20240428194347-eebcea082ee0
@ -76,7 +76,7 @@ require golang.org/x/text v0.32.0
require (
github.com/IGLOU-EU/go-wildcard v1.0.3
github.com/davidbyttow/govips/v2 v2.16.0
github.com/go-co-op/gocron/v2 v2.18.2
github.com/go-co-op/gocron/v2 v2.19.0
github.com/go-sql-driver/mysql v1.9.3
github.com/golang-jwt/jwt/v5 v5.3.0
github.com/pquerna/otp v1.5.0
@ -87,7 +87,7 @@ require (
github.com/ugjka/go-tz/v2 v2.2.6
github.com/urfave/cli/v2 v2.27.7
github.com/wamuir/graft v0.10.0
github.com/yalue/onnxruntime_go v1.24.0
github.com/yalue/onnxruntime_go v1.25.0
github.com/zitadel/oidc/v3 v3.45.1
golang.org/x/mod v0.31.0
golang.org/x/sys v0.39.0
@ -184,7 +184,7 @@ require (
go.opentelemetry.io/otel/metric v1.38.0 // indirect
go.opentelemetry.io/otel/trace v1.38.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
golang.org/x/oauth2 v0.33.0 // indirect
golang.org/x/oauth2 v0.34.0 // indirect
golang.org/x/tools v0.39.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

go.sum

@ -119,8 +119,8 @@ github.com/esimov/pigo v1.4.6/go.mod h1:uqj9Y3+3IRYhFK071rxz1QYq0ePhA6+R9jrUZavi
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
github.com/fogleman/gg v1.3.0/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k=
github.com/gabriel-vasile/mimetype v1.4.11 h1:AQvxbp830wPhHTqc1u7nzoLT+ZFxGY7emj5DR5DYFik=
github.com/gabriel-vasile/mimetype v1.4.11/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
github.com/gabriel-vasile/mimetype v1.4.12 h1:e9hWvmLYvtp846tLHam2o++qitpguFiYCKbn0w9jyqw=
github.com/gabriel-vasile/mimetype v1.4.12/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
github.com/gin-contrib/gzip v1.2.5 h1:fIZs0S+l17pIu1P5XRJOo/YNqfIuPCrZZ3TWB7pjckI=
github.com/gin-contrib/gzip v1.2.5/go.mod h1:aomRgR7ftdZV3uWY0gW/m8rChfxau0n8YVvwlOHONzw=
github.com/gin-contrib/sse v1.1.0 h1:n0w2GMuUpWDVp7qSpvze6fAu9iRxJY4Hmj6AmBOU05w=
@ -129,8 +129,8 @@ github.com/gin-gonic/gin v1.11.0 h1:OW/6PLjyusp2PPXtyxKHU0RbX6I/l28FTdDlae5ueWk=
github.com/gin-gonic/gin v1.11.0/go.mod h1:+iq/FyxlGzII0KHiBGjuNn4UNENUlKbGlNmc+W50Dls=
github.com/go-chi/chi/v5 v5.2.3 h1:WQIt9uxdsAbgIYgid+BpYc+liqQZGMHRaUwp0JUcvdE=
github.com/go-chi/chi/v5 v5.2.3/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops=
github.com/go-co-op/gocron/v2 v2.18.2 h1:+5VU41FUXPWSPKLXZQ/77SGzUiPCcakU0v7ENc2H20Q=
github.com/go-co-op/gocron/v2 v2.18.2/go.mod h1:Zii6he+Zfgy5W9B+JKk/KwejFOW0kZTFvHtwIpR4aBI=
github.com/go-co-op/gocron/v2 v2.19.0 h1:OKf2y6LXPs/BgBI2fl8PxUpNAI1DA9Mg+hSeGOS38OU=
github.com/go-co-op/gocron/v2 v2.19.0/go.mod h1:5lEiCKk1oVJV39Zg7/YG10OnaVrDAV5GGR6O0663k6U=
github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm6/TyX73Q=
github.com/go-errors/errors v1.0.2/go.mod h1:psDX2osz5VnTOnFWbDeWwS7yejl+uV3FEWEp4lssFEs=
github.com/go-errors/errors v1.1.1/go.mod h1:psDX2osz5VnTOnFWbDeWwS7yejl+uV3FEWEp4lssFEs=
@ -200,8 +200,8 @@ github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0/go.mod h1:E/TSTwGw
github.com/golang/geo v0.0.0-20190916061304-5b978397cfec/go.mod h1:QZ0nwyI2jOfgRAoBvP+ab5aRr7c9x7lhGEJrKvBwjWI=
github.com/golang/geo v0.0.0-20200319012246-673a6f80352d/go.mod h1:QZ0nwyI2jOfgRAoBvP+ab5aRr7c9x7lhGEJrKvBwjWI=
github.com/golang/geo v0.0.0-20210211234256-740aa86cb551/go.mod h1:QZ0nwyI2jOfgRAoBvP+ab5aRr7c9x7lhGEJrKvBwjWI=
github.com/golang/geo v0.0.0-20251209161508-25c597310d4b h1:6y9D6yfaR5FyqoNoV2S+XJyhzeMUlkdIeUX1Ssj0FJQ=
github.com/golang/geo v0.0.0-20251209161508-25c597310d4b/go.mod h1:Mymr9kRGDc64JPr03TSZmuIBODZ3KyswLzm1xL0HFA8=
github.com/golang/geo v0.0.0-20251223115337-4c285675e7fb h1:XHz60cDX6bukdcgD8DazP+Y5OlOMHfzAL++bJl4mYc8=
github.com/golang/geo v0.0.0-20251223115337-4c285675e7fb/go.mod h1:Mymr9kRGDc64JPr03TSZmuIBODZ3KyswLzm1xL0HFA8=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@ -354,8 +354,8 @@ github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UH
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.67.4 h1:yR3NqWO1/UyO1w2PhUvXlGQs/PtFmoveVO0KZ4+Lvsc=
github.com/prometheus/common v0.67.4/go.mod h1:gP0fq6YjjNCLssJCQp0yk4M8W6ikLURwkdd/YKtTbyI=
github.com/prometheus/common v0.67.5 h1:pIgK94WWlQt1WLwAC5j2ynLaBRDiinoAb86HZHTUGI4=
github.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxzaT3A2+HqjUw=
github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0=
github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw=
github.com/quic-go/qpack v0.5.1 h1:giqksBPnT/HDtZ6VhtFKgoLOWmlyo9Ei6u9PqzIMbhI=
@ -421,8 +421,8 @@ github.com/wamuir/graft v0.10.0 h1:HSpBUvm7O+jwsRIuDQlw80xW4xMXRFkOiVLtWaZCU2s=
github.com/wamuir/graft v0.10.0/go.mod h1:k6NJX3fCM/xzh5NtHky9USdgHTcz2vAvHp4c23I6UK4=
github.com/xrash/smetrics v0.0.0-20250705151800-55b8f293f342 h1:FnBeRrxr7OU4VvAzt5X7s6266i6cSVkkFPS0TuXWbIg=
github.com/xrash/smetrics v0.0.0-20250705151800-55b8f293f342/go.mod h1:Ohn+xnUBiLI6FVj/9LpzZWtj1/D6lUovWYBkxHVV3aM=
github.com/yalue/onnxruntime_go v1.24.0 h1:IdgJLxxyotlsUTmL1UnHZgBzXJGgY51LZ4vQ5rZeOXU=
github.com/yalue/onnxruntime_go v1.24.0/go.mod h1:b4X26A8pekNb1ACJ58wAXgNKeUCGEAQ9dmACut9Sm/4=
github.com/yalue/onnxruntime_go v1.25.0 h1:nlhVau1BpLZ/BYr+WpPZCJRD/WES0qo6dK7aKyyAs3g=
github.com/yalue/onnxruntime_go v1.25.0/go.mod h1:b4X26A8pekNb1ACJ58wAXgNKeUCGEAQ9dmACut9Sm/4=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/zitadel/logging v0.6.2 h1:MW2kDDR0ieQynPZ0KIZPrh9ote2WkxfBif5QoARDQcU=
github.com/zitadel/logging v0.6.2/go.mod h1:z6VWLWUkJpnNVDSLzrPSQSQyttysKZ6bCRongw0ROK4=
@ -537,8 +537,8 @@ golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4Iltr
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=

View file

@ -0,0 +1,31 @@
## PhotoPrism — Classification Package
**Last Updated:** December 23, 2025
### Overview
`internal/ai/classify` wraps PhotoPrism's TensorFlow-based image classification (labels). It loads SavedModel classifiers (Nasnet by default), prepares inputs, runs inference, and maps output probabilities to label rules.
### How It Works
- **Model Loading** — The classifier loads a SavedModel under `assets/models/<name>` and resolves model tags and input/output ops (see `vision.yml` overrides for custom models).
- **Input Preparation** — JPEGs are decoded and resized/cropped to the model's expected input resolution.
- **Inference** — The model outputs probabilities; `Rules` apply thresholds and priority to produce final labels.
### Memory & Performance
TensorFlow tensors allocate C memory and are freed by Go GC finalizers. To keep RSS bounded during long runs, PhotoPrism periodically triggers garbage collection to return freed tensor memory to the OS. Tune with:
- `PHOTOPRISM_TF_GC_EVERY` (default **200**, `0` disables).
Lower values reduce peak RSS but increase GC overhead and can slow indexing.
### Troubleshooting Tips
- **Labels are empty:** Verify the model labels file and that `Rules` thresholds are not too strict.
- **Model load failures:** Ensure `saved_model.pb` and `variables/` exist under the configured model path.
- **Unexpected outputs:** Check `TensorFlow.Input/Output` settings in `vision.yml` for custom models.
### Related Docs
- [`internal/ai/vision/README.md`](../vision/README.md) — model registry and `vision.yml` configuration
- [`internal/ai/tensorflow/README.md`](../tensorflow/README.md) — TensorFlow helpers, GC behavior, and model loading

View file

@ -133,6 +133,8 @@ func (m *Model) Run(img []byte, confidenceThreshold int) (result Labels, err err
return nil, loadErr
}
defer tensorflow.MaybeCollectTensorMemory()
// Create input tensor from image.
tensor, err := m.createTensor(img)

View file

@ -1,6 +1,6 @@
## Face Detection and Embedding Guidelines
**Last Updated:** October 10, 2025
**Last Updated:** December 23, 2025
### Overview
@ -46,6 +46,10 @@ Runtime selection lives in `Config.FaceEngine()`; `auto` resolves to ONNX when t
### Embedding Handling
#### Memory Management
FaceNet embeddings are generated through TensorFlow bindings that allocate tensors in C memory. Those allocations are released by Go GC finalizers, so long-running indexing jobs can show steadily rising RSS even when the Go heap stays small. To keep memory bounded during extended face indexing runs, PhotoPrism now triggers periodic garbage collection and returns freed C-allocated tensor buffers to the OS. You can tune this behavior with `PHOTOPRISM_TF_GC_EVERY` (default **200**; set to `0` to disable). Lower values reduce peak RSS but increase GC overhead and can slow indexing, so keep the default unless memory pressure is severe.
#### Normalization
All embeddings, regardless of origin, are normalized to unit length (‖x‖₂=1):

View file

@ -129,6 +129,8 @@ func (m *Model) loadModel() error {
// Run returns the face embeddings for an image.
func (m *Model) Run(img image.Image) Embeddings {
defer tensorflow.MaybeCollectTensorMemory()
// Create input tensor from image.
tensor, err := imageToTensor(img, m.resolution)

View file

@ -0,0 +1,31 @@
## PhotoPrism — NSFW Package
**Last Updated:** December 23, 2025
### Overview
`internal/ai/nsfw` runs the built-in TensorFlow NSFW classifier to score images for drawing, hentai, neutral, porn, and sexy content. It is used during indexing and metadata workflows when the NSFW model is enabled.
### How It Works
- **Model Loading** — Loads the NSFW SavedModel from `assets/models/` and resolves input/output ops (inferred if missing).
- **Input Preparation** — JPEG images are decoded and transformed to the configured input resolution.
- **Inference & Output** — Produces five class probabilities mapped into a `Result` struct for downstream thresholds and UI badges.
### Memory & Performance
TensorFlow tensors allocate C memory and are freed by Go GC finalizers. To keep RSS bounded during long runs, PhotoPrism periodically triggers garbage collection to return freed tensor memory to the OS. Tune with:
- `PHOTOPRISM_TF_GC_EVERY` (default **200**, `0` disables).
Lower values reduce peak RSS but increase GC overhead and can slow indexing.
### Troubleshooting Tips
- **Model fails to load:** Verify `saved_model.pb` and `variables/` exist under the model path.
- **Unexpected scores:** Confirm the input resolution matches the model and that logits are handled correctly.
- **High memory usage:** Adjust `PHOTOPRISM_TF_GC_EVERY` or reduce concurrent indexing load.
### Related Docs
- [`internal/ai/vision/README.md`](../vision/README.md) — model registry and run scheduling
- [`internal/ai/tensorflow/README.md`](../tensorflow/README.md) — TensorFlow helpers, GC behavior, and model loading

View file

@ -75,6 +75,8 @@ func (m *Model) Run(img []byte) (result Result, err error) {
return result, loadErr
}
defer tensorflow.MaybeCollectTensorMemory()
// Create input tensor from image.
input, err := tensorflow.ImageTransform(
img, fs.ImageJpeg, m.meta.Input.Resolution())

View file

@ -0,0 +1,41 @@
## PhotoPrism — TensorFlow Package
**Last Updated:** December 23, 2025
### Overview
`internal/ai/tensorflow` provides the shared TensorFlow helpers used by PhotoPrism's built-in AI features (labels, NSFW, and FaceNet embeddings). It wraps SavedModel loading, input/output discovery, image tensor preparation, and label handling so higher-level packages can focus on domain logic.
### Key Components
- **Model Loading**`SavedModel`, `GetModelTagsInfo`, and `GetInputAndOutputFromSavedModel` discover and load SavedModel graphs with appropriate tags.
- **Input Preparation**`Image`, `ImageTransform`, and `ImageTensorBuilder` convert JPEG images to tensors with the configured resolution, color order, and resize strategy.
- **Output Handling**`AddSoftmax` can insert a softmax op when a model exports logits.
- **Labels**`LoadLabels` loads label lists for classification models.
### Model Loading Notes
- Built-in models live under `assets/models/` and are accessed via helpers in `internal/ai/vision` and `internal/ai/classify`.
- When a model lacks explicit tags or signatures, the helpers attempt to infer the input/output operations; the logs indicate when such ops had to be inferred.
- Classification models may emit logits; if `ModelInfo.Output.Logits` is true, a softmax op is injected at load time.
### Memory & Garbage Collection
TensorFlow tensors are allocated in C memory and freed by Go GC finalizers in the TensorFlow bindings. Long-running inference can therefore show increasing RSS even when the Go heap is small. PhotoPrism periodically triggers garbage collection to return freed C-allocated tensor buffers to the OS. Control this behavior with:
- `PHOTOPRISM_TF_GC_EVERY` (default **200**, `0` disables).
Lower values reduce peak RSS but increase GC overhead and can slow indexing.
### Troubleshooting Tips
- **Model fails to load:** Verify the SavedModel path, tags, and that `saved_model.pb` plus `variables/` exist under `assets/models/<name>`.
- **Input/output mismatch:** Check logs for inferred inputs/outputs and confirm `vision.yml` overrides (name, resolution, and `TensorFlow.Input/Output`).
- **Unexpected probabilities:** Ensure logits are handled correctly and labels match output indices.
- **High memory usage:** Confirm `PHOTOPRISM_TF_GC_EVERY` is set appropriately; model weights remain resident for the life of the process by design.
### Related Docs
- [`internal/ai/vision/README.md`](../vision/README.md) — model registry, `vision.yml` configuration, and run scheduling
- [`internal/ai/face/README.md`](../face/README.md) — FaceNet embeddings and face-specific tuning
- [`internal/ai/classify/README.md`](../classify/README.md) — classification workflow using TensorFlow helpers
- [`internal/ai/nsfw/README.md`](../nsfw/README.md) — NSFW model usage and result mapping

View file

@ -0,0 +1,43 @@
package tensorflow
import (
"os"
"runtime/debug"
"strconv"
"strings"
"sync/atomic"
)
const gcEveryDefault uint64 = 200
var (
gcEvery = gcEveryDefault
gcCounter uint64
)
func init() {
if v := strings.TrimSpace(os.Getenv("PHOTOPRISM_TF_GC_EVERY")); v != "" {
if strings.HasPrefix(v, "-") {
gcEvery = 0
return
}
if n, err := strconv.ParseUint(v, 10, 64); err == nil {
gcEvery = n
}
}
}
// MaybeCollectTensorMemory triggers GC and returns freed C-allocated tensor memory
// to the OS every gcEvery calls; set gcEvery to 0 to disable the throttling.
func MaybeCollectTensorMemory() {
if gcEvery == 0 {
return
}
if atomic.AddUint64(&gcCounter, 1)%gcEvery != 0 {
return
}
debug.FreeOSMemory()
}

View file

@ -1,12 +1,12 @@
## PhotoPrism — Vision Package
**Last Updated:** December 10, 2025
**Last Updated:** December 23, 2025
### Overview
`internal/ai/vision` provides the shared model registry, request builders, and parsers that power PhotoPrism's caption, label, face, NSFW, and future generate workflows. It reads `vision.yml`, normalizes models, and dispatches calls to one of three engines:
- **TensorFlow (builtin)** — default Nasnet / NSFW / Facenet models, no remote service required.
- **TensorFlow (builtin)** — default Nasnet / NSFW / Facenet models, no remote service required. Long-running TensorFlow inference can accumulate C-allocated tensor memory until GC finalizers run, so PhotoPrism periodically triggers garbage collection to return that memory to the OS; tune with `PHOTOPRISM_TF_GC_EVERY` (default **200**, `0` disables). Lower values reduce peak RSS but increase GC overhead and can slow indexing, so keep the default unless memory pressure is severe.
- **Ollama** — local or proxied multimodal LLMs. See [`ollama/README.md`](ollama/README.md) for tuning and schema details. The engine defaults to `${OLLAMA_BASE_URL:-http://ollama:11434}/api/generate`, trimming any trailing slash on the base URL; set `OLLAMA_BASE_URL=https://ollama.com` to opt into cloud defaults.
- **OpenAI** — cloud Responses API. See [`openai/README.md`](openai/README.md) for prompts, schema variants, and header requirements.
@ -199,6 +199,10 @@ Models:
- **Ollama**: private, GPU/CPU-hosted multimodal LLMs; best for richer captions/labels without cloud traffic.
- **OpenAI**: highest quality reasoning and multimodal support; requires API key and network access.
### Model Unload on Idle
PhotoPrism currently keeps TensorFlow models resident for the lifetime of the process to avoid repeated load costs. A future “model unload on idle” mode would track last-use timestamps and close the TensorFlow session/graph after a configurable idle period, releasing the model's memory footprint back to the OS. The trade-off is higher latency and CPU overhead when a model is used again, plus extra I/O to reload weights. This may be attractive for low-frequency or memory-constrained deployments but would slow continuous indexing jobs, so it is not enabled today.
### Related Docs
- Ollama specifics: [`internal/ai/vision/ollama/README.md`](ollama/README.md)

View file

@ -17,10 +17,13 @@ func NewApiRequestOllama(images Files, fileScheme scheme.Type) (*ApiRequest, err
for i := range images {
switch fileScheme {
case scheme.Data, scheme.Base64:
if file, err := os.Open(images[i]); err != nil {
file, err := os.Open(images[i])
if err != nil {
return nil, fmt.Errorf("%s (create data url)", err)
} else {
imagesData[i] = media.DataBase64(file)
}
imagesData[i] = media.DataBase64(file)
if err := file.Close(); err != nil {
return nil, fmt.Errorf("%s (close data url)", err)
}
default:
return nil, fmt.Errorf("unsupported file scheme %s", clean.Log(fileScheme))

View file

@ -132,10 +132,13 @@ func NewApiRequestImages(images Files, fileScheme scheme.Type) (*ApiRequest, err
imageUrls[i] = fmt.Sprintf("%s/%s", DownloadUrl, fileUuid)
}
case scheme.Data:
if file, err := os.Open(images[i]); err != nil {
file, err := os.Open(images[i])
if err != nil {
return nil, fmt.Errorf("%s (create data url)", err)
} else {
imageUrls[i] = media.DataUrl(file)
}
imageUrls[i] = media.DataUrl(file)
if err := file.Close(); err != nil {
return nil, fmt.Errorf("%s (close data url)", err)
}
default:
return nil, fmt.Errorf("unsupported file scheme %s", clean.Log(fileScheme))

View file

@ -67,6 +67,7 @@ var PhotoPrism = []*cli.Command{
MomentsCommand,
ConvertCommand,
ThumbsCommand,
VideosCommands,
MigrateCommand,
MigrationsCommands,
BackupCommand,

View file

@ -70,7 +70,7 @@ func findAction(ctx *cli.Context) error {
return nil
}
cols := []string{"File Name", "Mime Type", "Size", "SHA1 Hash"}
cols := []string{"File Name", "Mime Type", "Size", "Checksum"}
rows := make([][]string, 0, len(results))
for _, found := range results {

View file

@ -0,0 +1,388 @@
package commands
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/dustin/go-humanize"
"github.com/photoprism/photoprism/internal/entity"
"github.com/photoprism/photoprism/internal/entity/search"
"github.com/photoprism/photoprism/pkg/media/video"
"github.com/photoprism/photoprism/pkg/txt/report"
)
// videoNormalizeFilter converts CLI args into a search query, mapping bare tokens to name/filename filters.
func videoNormalizeFilter(args []string) string {
parts := make([]string, 0, len(args))
for _, arg := range args {
token := strings.TrimSpace(arg)
if token == "" {
continue
}
if strings.Contains(token, ":") {
parts = append(parts, token)
continue
}
if strings.Contains(token, "/") {
parts = append(parts, fmt.Sprintf("filename:%s", token))
} else {
parts = append(parts, fmt.Sprintf("name:%s", token))
}
}
return strings.TrimSpace(strings.Join(parts, " "))
}
// videoSplitTrimArgs separates filter args from the trailing trim duration argument.
func videoSplitTrimArgs(args []string) ([]string, string, error) {
if len(args) == 0 {
return nil, "", fmt.Errorf("missing duration argument")
}
filterArgs := make([]string, len(args)-1)
copy(filterArgs, args[:len(args)-1])
durationArg := strings.TrimSpace(args[len(args)-1])
if durationArg == "" {
return nil, "", fmt.Errorf("missing duration argument")
}
return filterArgs, durationArg, nil
}
// videoParseTrimDuration parses the trim duration string with the precedence and rules from the spec.
func videoParseTrimDuration(value string) (time.Duration, error) {
raw := strings.TrimSpace(value)
if raw == "" {
return 0, fmt.Errorf("duration is empty")
}
sign := 1
if strings.HasPrefix(raw, "-") {
sign = -1
raw = strings.TrimSpace(strings.TrimPrefix(raw, "-"))
}
if raw == "" {
return 0, fmt.Errorf("duration is empty")
}
if isDigits(raw) {
secs, err := strconv.ParseInt(raw, 10, 64)
if err != nil {
return 0, fmt.Errorf("invalid duration %q", value)
}
if secs == 0 {
return 0, fmt.Errorf("duration must be non-zero")
}
return time.Duration(sign) * time.Duration(secs) * time.Second, nil
}
if strings.Contains(raw, ":") {
if strings.ContainsAny(raw, "hms") {
return 0, fmt.Errorf("invalid duration %q", value)
}
parts := strings.Split(raw, ":")
if len(parts) != 2 && len(parts) != 3 {
return 0, fmt.Errorf("invalid duration %q", value)
}
for _, p := range parts {
if !isDigits(p) {
return 0, fmt.Errorf("invalid duration %q", value)
}
}
if len(parts) == 2 && len(parts[1]) != 2 {
return 0, fmt.Errorf("invalid duration %q", value)
}
if len(parts) == 3 && (len(parts[1]) != 2 || len(parts[2]) != 2) {
return 0, fmt.Errorf("invalid duration %q", value)
}
var hours, minutes, seconds int64
if len(parts) == 2 {
minutes, _ = strconv.ParseInt(parts[0], 10, 64)
seconds, _ = strconv.ParseInt(parts[1], 10, 64)
} else {
hours, _ = strconv.ParseInt(parts[0], 10, 64)
minutes, _ = strconv.ParseInt(parts[1], 10, 64)
seconds, _ = strconv.ParseInt(parts[2], 10, 64)
}
total := time.Duration(hours)*time.Hour + time.Duration(minutes)*time.Minute + time.Duration(seconds)*time.Second
if total == 0 {
return 0, fmt.Errorf("duration must be non-zero")
}
return time.Duration(sign) * total, nil
}
parsed, err := time.ParseDuration(applySign(raw, sign))
if err != nil {
return 0, fmt.Errorf("invalid duration %q", value)
}
if parsed == 0 {
return 0, fmt.Errorf("duration must be non-zero")
}
return parsed, nil
}
// videoListColumns returns the ordered column list for the video ls output.
func videoListColumns() []string {
return []string{"Video", "Size", "Resolution", "Duration", "Frames", "FPS", "Content Type", "Checksum"}
}
// videoResultFiles returns the related files for a merged search result or falls back to the file fields on the result.
func videoResultFiles(found search.Photo) []entity.File {
if len(found.Files) > 0 {
return found.Files
}
return []entity.File{videoFileFromSearch(found)}
}
// videoFileFromSearch builds a file record from the file fields of a search result.
func videoFileFromSearch(found search.Photo) entity.File {
return entity.File{
ID: found.FileID,
PhotoUID: found.PhotoUID,
FileUID: found.FileUID,
FileRoot: found.FileRoot,
FileName: found.FileName,
OriginalName: found.OriginalName,
FileHash: found.FileHash,
FileWidth: found.FileWidth,
FileHeight: found.FileHeight,
FilePortrait: found.FilePortrait,
FilePrimary: found.FilePrimary,
FileSidecar: found.FileSidecar,
FileMissing: found.FileMissing,
FileVideo: found.FileVideo,
FileDuration: found.FileDuration,
FileFPS: found.FileFPS,
FileFrames: found.FileFrames,
FilePages: found.FilePages,
FileCodec: found.FileCodec,
FileType: found.FileType,
MediaType: found.MediaType,
FileMime: found.FileMime,
FileSize: found.FileSize,
FileOrientation: found.FileOrientation,
FileProjection: found.FileProjection,
FileAspectRatio: found.FileAspectRatio,
FileColors: found.FileColors,
FileDiff: found.FileDiff,
FileChroma: found.FileChroma,
FileLuminance: found.FileLuminance,
OmitMarkers: true,
}
}
// videoPrimaryFile selects the best video file from a merged search result, preferring non-sidecar entries.
func videoPrimaryFile(found search.Photo) (entity.File, bool) {
files := videoResultFiles(found)
if len(files) == 0 {
return entity.File{}, false
}
for _, file := range files {
if file.FileVideo && !file.FileSidecar {
return file, true
}
}
for _, file := range files {
if file.FileVideo {
return file, true
}
}
return files[0], true
}
// videoListRow renders a search result row for table outputs with human-friendly values.
func videoListRow(found search.Photo) []string {
videoFile, _ := videoPrimaryFile(found)
row := []string{
videoFile.FileName,
videoHumanSize(videoFile.FileSize),
fmt.Sprintf("%dx%d", videoFile.FileWidth, videoFile.FileHeight),
videoHumanDuration(videoFile.FileDuration),
videoHumanInt(videoFile.FileFrames),
videoHumanFloat(videoFile.FileFPS),
video.ContentType(videoFile.FileMime, videoFile.FileType, videoFile.FileCodec, videoFile.FileHDR),
videoFile.FileHash,
}
return row
}
// videoListJSONRow renders a search result row for JSON output with canonical column keys.
func videoListJSONRow(found search.Photo) map[string]interface{} {
videoFile, _ := videoPrimaryFile(found)
data := map[string]interface{}{
"video": videoFile.FileName,
"size": videoNonNegativeSize(videoFile.FileSize),
"resolution": fmt.Sprintf("%dx%d", videoFile.FileWidth, videoFile.FileHeight),
"duration": videoFile.FileDuration.Nanoseconds(),
"frames": videoFile.FileFrames,
"fps": videoFile.FileFPS,
"content_type": video.ContentType(videoFile.FileMime, videoFile.FileType, videoFile.FileCodec, videoFile.FileHDR),
"checksum": videoFile.FileHash,
}
return data
}
// videoListJSON marshals a list of JSON rows using the canonical keys for each column.
func videoListJSON(rows []map[string]interface{}, cols []string) (string, error) {
canon := make([]string, len(cols))
for i, col := range cols {
canon[i] = report.CanonKey(col)
}
payload := make([]map[string]interface{}, 0, len(rows))
for _, row := range rows {
item := make(map[string]interface{}, len(canon))
for _, key := range canon {
item[key] = row[key]
}
payload = append(payload, item)
}
data, err := json.Marshal(payload)
if err != nil {
return "", err
}
return string(data), nil
}
// videoHumanDuration formats a duration for human-readable tables.
func videoHumanDuration(d time.Duration) string {
if d <= 0 {
return ""
}
return d.String()
}
// videoHumanInt formats non-zero integers for human-readable tables.
func videoHumanInt(value int) string {
if value <= 0 {
return ""
}
return strconv.Itoa(value)
}
// videoHumanFloat formats non-zero floats without unnecessary trailing zeros.
func videoHumanFloat(value float64) string {
if value <= 0 {
return ""
}
return strconv.FormatFloat(value, 'f', -1, 64)
}
// videoHumanSize formats file sizes with human-readable units.
func videoHumanSize(size int64) string {
return humanize.Bytes(uint64(videoNonNegativeSize(size))) //nolint:gosec // size is bounded to non-negative values
}
// videoNonNegativeSize clamps negative sizes to zero before formatting.
func videoNonNegativeSize(size int64) int64 {
if size < 0 {
return 0
}
return size
}
// videoTempPath creates a temporary file path in the destination directory.
func videoTempPath(dir, pattern string) (string, error) {
if dir == "" {
return "", fmt.Errorf("temp directory is empty")
}
tmpFile, err := os.CreateTemp(dir, pattern)
if err != nil {
return "", err
}
if err = tmpFile.Close(); err != nil {
return "", err
}
if err = os.Remove(tmpFile.Name()); err != nil {
return "", err
}
return tmpFile.Name(), nil
}
// videoFFmpegSeconds converts a duration into an ffmpeg-friendly seconds string.
func videoFFmpegSeconds(d time.Duration) string {
seconds := d.Seconds()
return strconv.FormatFloat(seconds, 'f', 3, 64)
}
// isDigits reports whether the string contains only decimal digits.
func isDigits(value string) bool {
if value == "" {
return false
}
for _, r := range value {
if r < '0' || r > '9' {
return false
}
}
return true
}
// applySign applies a numeric sign to a duration string for parsing.
func applySign(value string, sign int) string {
if sign >= 0 {
return value
}
return "-" + value
}
// videoSidecarPath builds the sidecar destination path for an originals file without creating directories.
func videoSidecarPath(srcName, originalsPath, sidecarPath string) string {
src := filepath.ToSlash(srcName)
orig := filepath.ToSlash(originalsPath)
if orig != "" {
orig = strings.TrimSuffix(orig, "/") + "/"
}
rel := strings.TrimPrefix(src, orig)
if rel == src {
rel = filepath.Base(srcName)
}
rel = strings.TrimPrefix(rel, "/")
return filepath.Join(sidecarPath, filepath.FromSlash(rel))
}

View file

@ -0,0 +1,98 @@
package commands
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/photoprism/photoprism/internal/entity"
"github.com/photoprism/photoprism/internal/entity/search"
"github.com/photoprism/photoprism/pkg/media/video"
)
func TestVideoNormalizeFilter(t *testing.T) {
t.Run("NormalizeTokens", func(t *testing.T) {
args := []string{"foo", "2024/clip.mp4", "name:bar", "filename:2025/a.mov", ""}
expected := "name:foo filename:2024/clip.mp4 name:bar filename:2025/a.mov"
assert.Equal(t, expected, videoNormalizeFilter(args))
})
}
func TestVideoParseTrimDuration(t *testing.T) {
t.Run("Seconds", func(t *testing.T) {
d, err := videoParseTrimDuration("5")
assert.NoError(t, err)
assert.Equal(t, 5*time.Second, d)
})
t.Run("NegativeSeconds", func(t *testing.T) {
d, err := videoParseTrimDuration("-10")
assert.NoError(t, err)
assert.Equal(t, -10*time.Second, d)
})
t.Run("MinutesSeconds", func(t *testing.T) {
d, err := videoParseTrimDuration("02:05")
assert.NoError(t, err)
assert.Equal(t, 2*time.Minute+5*time.Second, d)
})
t.Run("HoursMinutesSeconds", func(t *testing.T) {
d, err := videoParseTrimDuration("01:02:03")
assert.NoError(t, err)
assert.Equal(t, time.Hour+2*time.Minute+3*time.Second, d)
})
t.Run("GoDuration", func(t *testing.T) {
d, err := videoParseTrimDuration("2m5s")
assert.NoError(t, err)
assert.Equal(t, 2*time.Minute+5*time.Second, d)
})
t.Run("Invalid", func(t *testing.T) {
_, err := videoParseTrimDuration("1:30s")
assert.Error(t, err)
})
}
func TestVideoListJSONRow(t *testing.T) {
t.Run("NumericFields", func(t *testing.T) {
found := search.Photo{
Files: []entity.File{
{
FileName: "clip.avc",
FileRoot: "/",
FileDuration: time.Second,
FileCodec: "avc1",
FileMime: "video/mp4",
FileWidth: 640,
FileHeight: 360,
FileFPS: 24,
FileFrames: 24,
FileSize: 42,
FileHash: "sidecar",
FileSidecar: true,
FileVideo: true,
},
{
FileName: "clip.mp4",
FileRoot: "/",
FileDuration: 2 * time.Second,
FileCodec: "avc1",
FileMime: "video/mp4",
FileWidth: 1920,
FileHeight: 1080,
FileFPS: 29.97,
FileFrames: 120,
FileSize: 1234,
FileHash: "abc",
FileVideo: true,
},
},
}
row := videoListJSONRow(found)
assert.Equal(t, "clip.mp4", row["video"])
assert.Equal(t, int64(2*time.Second), row["duration"])
assert.Equal(t, int64(1234), row["size"])
assert.Equal(t, "1920x1080", row["resolution"])
assert.Equal(t, video.ContentType("video/mp4", "", "avc1", false), row["content_type"])
assert.Equal(t, "abc", row["checksum"])
})
}

View file

@ -0,0 +1,34 @@
package commands
import (
"fmt"
"github.com/photoprism/photoprism/internal/config"
"github.com/photoprism/photoprism/internal/photoprism"
"github.com/photoprism/photoprism/internal/photoprism/get"
)
// videoReindexRelated reindexes the related file group for the given main file.
func videoReindexRelated(conf *config.Config, fileName string) error {
if fileName == "" {
return fmt.Errorf("index: missing filename")
}
mediaFile, err := photoprism.NewMediaFile(fileName)
if err != nil {
return err
}
related, err := mediaFile.RelatedFiles(conf.Settings().Stack.Name)
if err != nil {
return err
}
index := get.Index()
result := photoprism.IndexRelated(related, index, photoprism.IndexOptionsSingle(conf))
if result.Err != nil {
return result.Err
}
return nil
}

View file

@ -0,0 +1,223 @@
package commands
import (
"bytes"
"encoding/json"
"fmt"
"os/exec"
"strings"
"github.com/urfave/cli/v2"
"github.com/photoprism/photoprism/internal/config"
"github.com/photoprism/photoprism/internal/entity"
"github.com/photoprism/photoprism/internal/entity/search"
"github.com/photoprism/photoprism/internal/meta"
"github.com/photoprism/photoprism/internal/photoprism"
"github.com/photoprism/photoprism/pkg/clean"
)
// VideoInfoCommand configures the command name, flags, and action.
var VideoInfoCommand = &cli.Command{
Name: "info",
Usage: "Displays diagnostic information for indexed videos",
ArgsUsage: "[filter]...",
Flags: []cli.Flag{
videoCountFlag,
OffsetFlag,
JsonFlag(),
videoVerboseFlag,
},
Action: videoInfoAction,
}
// videoInfoAction prints indexed, ExifTool, and ffprobe metadata for matching videos.
func videoInfoAction(ctx *cli.Context) error {
return CallWithDependencies(ctx, func(conf *config.Config) error {
filter := videoNormalizeFilter(ctx.Args().Slice())
results, err := videoSearchResults(filter, ctx.Int(videoCountFlag.Name), ctx.Int(OffsetFlag.Name))
if err != nil {
return err
}
entries := make([]videoInfoEntry, 0, len(results))
for _, found := range results {
entry, err := videoInfoEntryFor(conf, found, ctx.Bool(videoVerboseFlag.Name))
if err != nil {
log.Warnf("info: %s", clean.Error(err))
}
entries = append(entries, entry)
}
if ctx.Bool("json") {
payload, err := json.Marshal(entries)
if err != nil {
return err
}
fmt.Println(string(payload))
return nil
}
for _, entry := range entries {
videoPrintInfo(entry)
}
return nil
})
}
// videoInfoEntry describes all metadata sections for a single video.
type videoInfoEntry struct {
Index map[string]interface{} `json:"index"`
Exif interface{} `json:"exif,omitempty"`
FFprobe interface{} `json:"ffprobe,omitempty"`
Raw map[string]string `json:"raw,omitempty"`
}
// videoInfoEntryFor collects indexed, ExifTool, and ffprobe metadata for a search result.
func videoInfoEntryFor(conf *config.Config, found search.Photo, verbose bool) (videoInfoEntry, error) {
videoFile, ok := videoPrimaryFile(found)
if !ok {
return videoInfoEntry{}, fmt.Errorf("info: missing video file for %s", found.PhotoUID)
}
entry := videoInfoEntry{
Index: videoIndexSummary(found, videoFile),
}
filePath := photoprism.FileName(videoFile.FileRoot, videoFile.FileName)
mediaFile, err := photoprism.NewMediaFile(filePath)
if err != nil {
return entry, err
}
if conf.DisableExifTool() {
entry.Exif = nil
} else {
exif := mediaFile.MetaData()
entry.Exif = exif
if verbose {
entry.ensureRaw()
entry.Raw["exif"] = videoPrettyJSON(exif)
}
}
ffprobeBin := conf.FFprobeBin()
if ffprobeBin == "" {
entry.FFprobe = nil
} else if ffprobe, raw, err := videoRunFFprobe(ffprobeBin, filePath); err != nil {
log.Warnf("info: %s", clean.Error(err))
entry.FFprobe = nil
if verbose {
entry.ensureRaw()
entry.Raw["ffprobe"] = raw
}
} else {
entry.FFprobe = ffprobe
if verbose {
entry.ensureRaw()
entry.Raw["ffprobe"] = raw
}
}
return entry, nil
}
// videoIndexSummary builds a concise map of indexed fields for diagnostics.
func videoIndexSummary(found search.Photo, file entity.File) map[string]interface{} {
return map[string]interface{}{
"file_name": file.FileName,
"file_root": file.FileRoot,
"file_uid": file.FileUID,
"photo_uid": found.PhotoUID,
"media_type": file.MediaType,
"file_type": file.FileType,
"file_mime": file.FileMime,
"file_codec": file.FileCodec,
"file_hash": file.FileHash,
"file_size": file.FileSize,
"file_duration": file.FileDuration.Nanoseconds(),
"photo_duration": found.PhotoDuration.Nanoseconds(),
"file_frames": file.FileFrames,
"file_fps": file.FileFPS,
"file_width": file.FileWidth,
"file_height": file.FileHeight,
"file_sidecar": file.FileSidecar,
"file_missing": file.FileMissing,
"file_video": file.FileVideo,
"original_name": file.OriginalName,
"instance_id": file.InstanceID,
"photo_taken_at": found.TakenAt,
"photo_taken_src": found.TakenSrc,
}
}
// videoRunFFprobe executes ffprobe and returns parsed JSON plus raw output.
func videoRunFFprobe(ffprobeBin, filePath string) (interface{}, string, error) {
cmd := exec.Command(ffprobeBin, "-v", "quiet", "-print_format", "json", "-show_format", "-show_streams", filePath) //nolint:gosec // args are validated paths
var stdout bytes.Buffer
var stderr bytes.Buffer
cmd.Stdout = &stdout
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
return nil, strings.TrimSpace(stdout.String()), fmt.Errorf("ffprobe failed: %s", strings.TrimSpace(stderr.String()))
}
raw := strings.TrimSpace(stdout.String())
if raw == "" {
return nil, raw, nil
}
var data interface{}
if err := json.Unmarshal([]byte(raw), &data); err != nil {
return nil, raw, nil
}
return data, raw, nil
}
// ensureRaw initializes the raw map for verbose output.
func (v *videoInfoEntry) ensureRaw() {
if v.Raw == nil {
v.Raw = make(map[string]string)
}
}
// videoPrettyJSON returns indented JSON for human-readable output.
func videoPrettyJSON(value interface{}) string {
data, err := json.MarshalIndent(value, "", " ")
if err != nil {
return ""
}
return string(data)
}
// videoPrintInfo prints a human-readable metadata summary to stdout.
func videoPrintInfo(entry videoInfoEntry) {
fmt.Println("Indexed Metadata:")
fmt.Println(videoPrettyJSON(entry.Index))
if entry.Exif == nil {
fmt.Println("ExifTool Metadata: disabled or unavailable")
} else if exifMap, ok := entry.Exif.(meta.Data); ok {
fmt.Println("ExifTool Metadata:")
fmt.Println(videoPrettyJSON(exifMap))
} else {
fmt.Println("ExifTool Metadata:")
fmt.Println(videoPrettyJSON(entry.Exif))
}
if entry.FFprobe == nil {
fmt.Println("FFprobe Diagnostics: unavailable")
} else {
fmt.Println("FFprobe Diagnostics:")
fmt.Println(videoPrettyJSON(entry.FFprobe))
}
if len(entry.Raw) > 0 {
fmt.Println("Raw Metadata:")
fmt.Println(videoPrettyJSON(entry.Raw))
}
}


@ -0,0 +1,78 @@
package commands
import (
"fmt"
"github.com/urfave/cli/v2"
"github.com/photoprism/photoprism/internal/config"
"github.com/photoprism/photoprism/pkg/txt/report"
)
// VideoListCommand configures the command name, flags, and action.
var VideoListCommand = &cli.Command{
Name: "ls",
Usage: "Lists indexed video files matching the specified filters",
ArgsUsage: "[filter]...",
Flags: append(append([]cli.Flag{}, report.CliFlags...),
videoCountFlag,
OffsetFlag,
),
Action: videoListAction,
}
// videoListAction renders a filtered list of indexed video files.
func videoListAction(ctx *cli.Context) error {
return CallWithDependencies(ctx, func(conf *config.Config) error {
// Ensure config is initialized before querying the index.
if conf == nil {
return fmt.Errorf("config is not available")
}
format, err := report.CliFormatStrict(ctx)
if err != nil {
return err
}
filter := videoNormalizeFilter(ctx.Args().Slice())
results, err := videoSearchResults(filter, ctx.Int(videoCountFlag.Name), ctx.Int(OffsetFlag.Name))
if err != nil {
return err
}
cols := videoListColumns()
if format == report.JSON {
rows := make([]map[string]interface{}, 0, len(results))
for _, found := range results {
rows = append(rows, videoListJSONRow(found))
}
payload, jsonErr := videoListJSON(rows, cols)
if jsonErr != nil {
return jsonErr
}
fmt.Println(payload)
return nil
}
rows := make([][]string, 0, len(results))
for _, found := range results {
rows = append(rows, videoListRow(found))
}
output, err := report.RenderFormat(rows, cols, format)
if err != nil {
return err
}
fmt.Println(output)
return nil
})
}


@ -0,0 +1,308 @@
package commands
import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/manifoldco/promptui"
"github.com/urfave/cli/v2"
"github.com/photoprism/photoprism/internal/config"
"github.com/photoprism/photoprism/internal/entity"
"github.com/photoprism/photoprism/internal/entity/search"
"github.com/photoprism/photoprism/internal/ffmpeg"
"github.com/photoprism/photoprism/internal/ffmpeg/encode"
"github.com/photoprism/photoprism/internal/photoprism"
"github.com/photoprism/photoprism/internal/photoprism/get"
"github.com/photoprism/photoprism/pkg/clean"
"github.com/photoprism/photoprism/pkg/fs"
"github.com/photoprism/photoprism/pkg/media/video"
)
// VideoRemuxCommand configures the command name, flags, and action.
var VideoRemuxCommand = &cli.Command{
Name: "remux",
Usage: "Remuxes AVC videos into an MP4 container",
ArgsUsage: "[filter]...",
Flags: []cli.Flag{
videoCountFlag,
OffsetFlag,
videoForceFlag,
DryRunFlag("prints planned remux operations without writing files"),
YesFlag(),
},
Action: videoRemuxAction,
}
// videoRemuxAction remuxes matching AVC files into MP4 containers.
func videoRemuxAction(ctx *cli.Context) error {
return CallWithDependencies(ctx, func(conf *config.Config) error {
if conf.DisableFFmpeg() {
return fmt.Errorf("ffmpeg is disabled")
}
filter := videoNormalizeFilter(ctx.Args().Slice())
results, err := videoSearchResults(filter, ctx.Int(videoCountFlag.Name), ctx.Int(OffsetFlag.Name))
if err != nil {
return err
}
plans, preflight, err := videoBuildRemuxPlans(conf, results, ctx.Bool(videoForceFlag.Name))
if err != nil {
return err
}
if len(plans) == 0 {
log.Infof("remux: found no matching videos")
return nil
}
if !ctx.Bool("dry-run") {
if err = videoCheckFreeSpace(preflight); err != nil {
return err
}
}
if !ctx.Bool("dry-run") && !RunNonInteractively(ctx.Bool("yes")) {
prompt := promptui.Prompt{
Label: fmt.Sprintf("Remux %d video files?", len(plans)),
IsConfirm: true,
}
if _, err = prompt.Run(); err != nil {
log.Info("remux: cancelled")
return nil
}
}
var processed, skipped, failed int
convert := get.Convert()
for _, plan := range plans {
if ctx.Bool("dry-run") {
log.Infof("remux: would remux %s to %s", clean.Log(plan.SrcPath), clean.Log(plan.DestPath))
skipped++
continue
}
if err = videoRemuxFile(conf, convert, plan, ctx.Bool(videoForceFlag.Name), true); err != nil {
log.Errorf("remux: %s", clean.Error(err))
failed++
continue
}
processed++
}
log.Infof("remux: processed %d, skipped %d, failed %d", processed, skipped, failed)
if failed > 0 {
return fmt.Errorf("remux: %d files failed", failed)
}
return nil
})
}
// videoRemuxPlan holds a resolved remux operation for a single video file.
type videoRemuxPlan struct {
IndexPath string
SrcPath string
DestPath string
SizeBytes int64
Sidecar bool
}
// videoBuildRemuxPlans prepares remux operations and preflight size checks from search results.
func videoBuildRemuxPlans(conf *config.Config, results []search.Photo, force bool) ([]videoRemuxPlan, []videoOutputPlan, error) {
plans := make([]videoRemuxPlan, 0, len(results))
preflight := make([]videoOutputPlan, 0, len(results))
for _, found := range results {
videoFile, ok := videoPrimaryFile(found)
if !ok {
log.Warnf("remux: missing video file for %s", clean.Log(found.PhotoUID))
continue
}
if videoFile.FileSidecar {
log.Warnf("remux: skipping sidecar file %s", clean.Log(videoFile.FileName))
continue
}
if videoFile.MediaType == entity.MediaLive {
log.Warnf("remux: skipping live photo video %s", clean.Log(videoFile.FileName))
continue
}
srcPath := photoprism.FileName(videoFile.FileRoot, videoFile.FileName)
if !fs.FileExistsNotEmpty(srcPath) {
log.Warnf("remux: missing file %s", clean.Log(srcPath))
continue
}
if !videoCodecIsAvc(videoFile.FileCodec) && !force {
if !videoFallbackCodecAvc(srcPath) {
log.Warnf("remux: skipping non-AVC video %s", clean.Log(videoFile.FileName))
continue
}
}
destPath := fs.StripKnownExt(srcPath) + fs.ExtMp4
useSidecar := false
indexPath := destPath
if conf.ReadOnly() || !fs.PathWritable(filepath.Dir(srcPath)) || !fs.Writable(srcPath) {
if !conf.SidecarWritable() || !fs.PathWritable(conf.SidecarPath()) {
return nil, nil, config.ErrReadOnly
}
sidecarBase := videoSidecarPath(srcPath, conf.OriginalsPath(), conf.SidecarPath())
destPath = fs.StripKnownExt(sidecarBase) + fs.ExtMp4
useSidecar = true
indexPath = srcPath
}
if destPath != srcPath && fs.FileExistsNotEmpty(destPath) && !force {
log.Warnf("remux: output already exists %s", clean.Log(destPath))
continue
}
plans = append(plans, videoRemuxPlan{
IndexPath: indexPath,
SrcPath: srcPath,
DestPath: destPath,
SizeBytes: videoFile.FileSize,
Sidecar: useSidecar,
})
preflight = append(preflight, videoOutputPlan{
Destination: destPath,
SizeBytes: videoFile.FileSize,
})
}
return plans, preflight, nil
}
// videoRemuxFile runs ffmpeg remuxing and refreshes previews/thumbnails before reindexing.
func videoRemuxFile(conf *config.Config, convert *photoprism.Convert, plan videoRemuxPlan, force, noBackup bool) error {
tempDir := filepath.Dir(plan.DestPath)
tempPath, err := videoTempPath(tempDir, ".remux-*.mp4")
if err != nil {
return err
}
opt := encode.NewRemuxOptions(conf.FFmpegBin(), fs.VideoMp4, true)
opt.Force = true
if err = ffmpeg.RemuxFile(plan.SrcPath, tempPath, opt); err != nil {
return err
}
if !fs.FileExistsNotEmpty(tempPath) {
_ = os.Remove(tempPath)
return fmt.Errorf("remux output missing for %s", clean.Log(plan.SrcPath))
}
if err = os.Chmod(tempPath, fs.ModeFile); err != nil {
return err
}
if plan.Sidecar {
if fs.FileExists(plan.DestPath) && !force {
_ = os.Remove(tempPath)
return fmt.Errorf("output already exists %s", clean.Log(plan.DestPath))
}
if fs.FileExists(plan.DestPath) {
_ = os.Remove(plan.DestPath)
}
if err = os.Rename(tempPath, plan.DestPath); err != nil {
_ = os.Remove(tempPath)
return err
}
} else {
if plan.DestPath != plan.SrcPath && fs.FileExists(plan.DestPath) && !force {
_ = os.Remove(tempPath)
return fmt.Errorf("output already exists %s", clean.Log(plan.DestPath))
}
if noBackup {
if plan.DestPath != plan.SrcPath {
_ = os.Remove(plan.DestPath)
}
} else {
backupPath := plan.SrcPath + ".backup"
if fs.FileExists(backupPath) {
_ = os.Remove(backupPath)
}
if err = os.Rename(plan.SrcPath, backupPath); err != nil {
_ = os.Remove(tempPath)
return err
}
_ = os.Chmod(backupPath, fs.ModeBackupFile)
}
if plan.DestPath != plan.SrcPath && fs.FileExists(plan.DestPath) {
_ = os.Remove(plan.DestPath)
}
if err = os.Rename(tempPath, plan.DestPath); err != nil {
_ = os.Remove(tempPath)
return err
}
}
mediaFile, err := photoprism.NewMediaFile(plan.DestPath)
if err != nil {
return err
}
if convert != nil {
if img, imgErr := convert.ToImage(mediaFile, true); imgErr != nil {
log.Warnf("remux: %s", clean.Error(imgErr))
} else if img != nil {
if thumbsErr := img.GenerateThumbnails(conf.ThumbCachePath(), true); thumbsErr != nil {
log.Warnf("remux: %s", clean.Error(thumbsErr))
}
}
}
return videoReindexRelated(conf, plan.IndexPath)
}
// videoCodecIsAvc reports whether a codec string maps to an AVC/H.264 variant.
func videoCodecIsAvc(codec string) bool {
value := strings.ToLower(strings.TrimSpace(codec))
if value == "" {
return false
}
if value == "h264" || value == "x264" {
return true
}
switch video.Codecs[value] {
case video.CodecAvc1, video.CodecAvc2, video.CodecAvc3, video.CodecAvc4:
return true
default:
return false
}
}
// videoFallbackCodecAvc probes codec metadata when the indexed codec is missing.
func videoFallbackCodecAvc(srcPath string) bool {
mediaFile, err := photoprism.NewMediaFile(srcPath)
if err != nil {
return false
}
if info := mediaFile.VideoInfo(); info.VideoCodec != "" {
return videoCodecIsAvc(info.VideoCodec)
}
return mediaFile.MetaData().CodecAvc()
}


@ -0,0 +1,153 @@
package commands
import (
"fmt"
"github.com/photoprism/photoprism/internal/entity"
"github.com/photoprism/photoprism/internal/entity/search"
"github.com/photoprism/photoprism/internal/entity/sortby"
"github.com/photoprism/photoprism/internal/form"
)
// videoSearchResults runs a video-only search and applies offset/count after merging related files.
func videoSearchResults(query string, count int, offset int) ([]search.Photo, error) {
if offset < 0 {
offset = 0
}
if count <= 0 {
return []search.Photo{}, nil
}
frm := form.SearchPhotos{
Query: query,
Primary: false,
Merged: true,
Video: true,
Order: sortby.Name,
}
target := count + offset
if target < 0 {
target = 0
}
collected := make([]search.Photo, 0, target)
index := make(map[string]int, target)
searchOffset := 0
batchSize := count
if batchSize < 200 {
batchSize = 200
}
needComplete := false
for len(collected) < target || needComplete {
frm.Count = batchSize
frm.Offset = searchOffset
results, rawCount, err := search.Photos(frm)
if err != nil {
return nil, err
}
if len(results) == 0 || rawCount == 0 {
break
}
for _, found := range results {
key := videoSearchKey(found)
if idx, ok := index[key]; ok {
collected[idx].Files = videoMergeFiles(collected[idx].Files, found.Files)
if len(collected[idx].Files) > 1 {
collected[idx].Merged = true
}
continue
}
collected = append(collected, found)
index[key] = len(collected) - 1
}
searchOffset += rawCount
needComplete = false
if len(collected) >= target {
lastNeededKey := videoSearchKey(collected[target-1])
lastBatchKey := videoSearchKey(results[len(results)-1])
if lastNeededKey == lastBatchKey && rawCount == batchSize {
needComplete = true
}
}
if rawCount < batchSize {
break
}
}
if offset >= len(collected) {
return []search.Photo{}, nil
}
end := offset + count
if end > len(collected) {
end = len(collected)
}
return collected[offset:end], nil
}
// videoSearchKey returns a stable key for de-duplicating merged photo results.
func videoSearchKey(found search.Photo) string {
if found.ID > 0 {
return fmt.Sprintf("id:%d", found.ID)
}
return found.PhotoUID
}
// videoMergeFiles appends unique files from additions into the existing list.
func videoMergeFiles(existing []entity.File, additions []entity.File) []entity.File {
if len(additions) == 0 {
return existing
}
if len(existing) == 0 {
return additions
}
seen := make(map[string]struct{}, len(existing))
for _, file := range existing {
seen[videoFileKey(file)] = struct{}{}
}
for _, file := range additions {
key := videoFileKey(file)
if _, ok := seen[key]; ok {
continue
}
existing = append(existing, file)
seen[key] = struct{}{}
}
return existing
}
// videoFileKey returns a stable identifier for a file entry when merging search results.
func videoFileKey(file entity.File) string {
if file.FileUID != "" {
return "uid:" + file.FileUID
}
if file.ID != 0 {
return fmt.Sprintf("id:%d", file.ID)
}
if file.FileHash != "" {
return "hash:" + file.FileHash
}
return fmt.Sprintf("name:%s/%s:%d", file.FileRoot, file.FileName, file.FileSize)
}


@ -0,0 +1,48 @@
package commands
import (
"fmt"
"path/filepath"
"github.com/dustin/go-humanize"
"github.com/photoprism/photoprism/pkg/clean"
"github.com/photoprism/photoprism/pkg/fs/duf"
)
// videoOutputPlan describes a planned output file for preflight checks.
type videoOutputPlan struct {
Destination string
SizeBytes int64
}
// videoCheckFreeSpace validates that destination filesystems have enough free space for outputs.
func videoCheckFreeSpace(plans []videoOutputPlan) error {
required := make(map[string]uint64)
for _, plan := range plans {
if plan.Destination == "" {
continue
}
dir := filepath.Dir(plan.Destination)
required[dir] += uint64(videoNonNegativeSize(plan.SizeBytes)) //nolint:gosec // size is clamped to non-negative values
}
for dir, size := range required {
mount, err := duf.PathInfo(dir)
if err != nil {
return err
}
if mount.Free < size {
return fmt.Errorf("insufficient free space in %s: need %s, have %s",
clean.Log(dir),
humanize.Bytes(size),
humanize.Bytes(mount.Free),
)
}
}
return nil
}


@ -0,0 +1,215 @@
package commands
import (
"fmt"
"os"
"github.com/manifoldco/promptui"
"github.com/urfave/cli/v2"
"github.com/photoprism/photoprism/internal/config"
"github.com/photoprism/photoprism/internal/entity"
"github.com/photoprism/photoprism/internal/entity/search"
"github.com/photoprism/photoprism/internal/photoprism"
"github.com/photoprism/photoprism/internal/photoprism/get"
"github.com/photoprism/photoprism/pkg/clean"
"github.com/photoprism/photoprism/pkg/fs"
)
// VideoTranscodeCommand configures the command name, flags, and action.
var VideoTranscodeCommand = &cli.Command{
Name: "transcode",
Usage: "Transcodes matching videos to AVC sidecar files",
ArgsUsage: "[filter]...",
Flags: []cli.Flag{
videoCountFlag,
OffsetFlag,
videoForceFlag,
DryRunFlag("prints planned transcode operations without writing files"),
YesFlag(),
},
Action: videoTranscodeAction,
}
// videoTranscodeAction transcodes matching videos into sidecar AVC files.
func videoTranscodeAction(ctx *cli.Context) error {
return CallWithDependencies(ctx, func(conf *config.Config) error {
if conf.DisableFFmpeg() {
return fmt.Errorf("ffmpeg is disabled")
}
filter := videoNormalizeFilter(ctx.Args().Slice())
results, err := videoSearchResults(filter, ctx.Int(videoCountFlag.Name), ctx.Int(OffsetFlag.Name))
if err != nil {
return err
}
plans, preflight, err := videoBuildTranscodePlans(conf, results, ctx.Bool(videoForceFlag.Name))
if err != nil {
return err
}
if len(plans) == 0 {
log.Infof("transcode: found no matching videos")
return nil
}
if !ctx.Bool("dry-run") {
if err = videoCheckFreeSpace(preflight); err != nil {
return err
}
}
if !ctx.Bool("dry-run") && !RunNonInteractively(ctx.Bool("yes")) {
prompt := promptui.Prompt{
Label: fmt.Sprintf("Transcode %d video files?", len(plans)),
IsConfirm: true,
}
if _, err = prompt.Run(); err != nil {
log.Info("transcode: cancelled")
return nil
}
}
var processed, skipped, failed int
convert := get.Convert()
for _, plan := range plans {
if ctx.Bool("dry-run") {
log.Infof("transcode: would transcode %s to %s", clean.Log(plan.SrcPath), clean.Log(plan.DestPath))
skipped++
continue
}
file, err := videoTranscodeFile(conf, convert, plan, ctx.Bool(videoForceFlag.Name))
if err != nil {
log.Errorf("transcode: %s", clean.Error(err))
failed++
continue
}
if file != nil {
if chmodErr := os.Chmod(file.FileName(), fs.ModeFile); chmodErr != nil {
log.Warnf("transcode: %s", clean.Error(chmodErr))
}
}
if err = videoReindexRelated(conf, plan.IndexPath); err != nil {
log.Errorf("transcode: %s", clean.Error(err))
failed++
continue
}
processed++
}
log.Infof("transcode: processed %d, skipped %d, failed %d", processed, skipped, failed)
if failed > 0 {
return fmt.Errorf("transcode: %d files failed", failed)
}
return nil
})
}
// videoTranscodePlan holds a resolved transcode operation for a single video file.
type videoTranscodePlan struct {
IndexPath string
SrcPath string
DestPath string
SizeBytes int64
}
// videoBuildTranscodePlans prepares transcode operations and preflight size checks from search results.
func videoBuildTranscodePlans(conf *config.Config, results []search.Photo, force bool) ([]videoTranscodePlan, []videoOutputPlan, error) {
plans := make([]videoTranscodePlan, 0, len(results))
preflight := make([]videoOutputPlan, 0, len(results))
for _, found := range results {
videoFile, ok := videoPrimaryFile(found)
if !ok {
log.Warnf("transcode: missing video file for %s", clean.Log(found.PhotoUID))
continue
}
if videoFile.FileSidecar {
log.Warnf("transcode: skipping sidecar file %s", clean.Log(videoFile.FileName))
continue
}
if videoFile.MediaType == entity.MediaLive {
log.Warnf("transcode: skipping live photo video %s", clean.Log(videoFile.FileName))
continue
}
srcPath := photoprism.FileName(videoFile.FileRoot, videoFile.FileName)
if !fs.FileExistsNotEmpty(srcPath) {
log.Warnf("transcode: missing file %s", clean.Log(srcPath))
continue
}
if !conf.SidecarWritable() || !fs.PathWritable(conf.SidecarPath()) {
return nil, nil, config.ErrReadOnly
}
destPath, err := videoTranscodeTarget(conf, srcPath)
if err != nil {
log.Warnf("transcode: %s", clean.Error(err))
continue
}
if destPath == srcPath {
log.Warnf("transcode: skipping because output equals source %s", clean.Log(srcPath))
continue
}
if fs.FileExistsNotEmpty(destPath) && !force {
log.Warnf("transcode: output already exists %s", clean.Log(destPath))
continue
}
plans = append(plans, videoTranscodePlan{
IndexPath: srcPath,
SrcPath: srcPath,
DestPath: destPath,
SizeBytes: videoFile.FileSize,
})
preflight = append(preflight, videoOutputPlan{
Destination: destPath,
SizeBytes: videoFile.FileSize,
})
}
return plans, preflight, nil
}
// videoTranscodeTarget computes the sidecar output path for an AVC transcode.
func videoTranscodeTarget(conf *config.Config, srcPath string) (string, error) {
mediaFile, err := photoprism.NewMediaFile(srcPath)
if err != nil {
return "", err
}
base := videoSidecarPath(srcPath, conf.OriginalsPath(), conf.SidecarPath())
if mediaFile.IsAnimatedImage() {
return fs.StripKnownExt(base) + fs.ExtMp4, nil
}
return fs.StripKnownExt(base) + fs.ExtAvc, nil
}
// videoTranscodeFile runs the transcode operation and returns the resulting media file.
func videoTranscodeFile(conf *config.Config, convert *photoprism.Convert, plan videoTranscodePlan, force bool) (*photoprism.MediaFile, error) {
if convert == nil {
return nil, fmt.Errorf("transcode: convert service unavailable")
}
mediaFile, err := photoprism.NewMediaFile(plan.SrcPath)
if err != nil {
return nil, err
}
return convert.ToAvc(mediaFile, conf.FFmpegEncoder(), false, force)
}


@ -0,0 +1,346 @@
package commands
import (
"bytes"
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/manifoldco/promptui"
"github.com/urfave/cli/v2"
"github.com/photoprism/photoprism/internal/config"
"github.com/photoprism/photoprism/internal/entity"
"github.com/photoprism/photoprism/internal/entity/search"
"github.com/photoprism/photoprism/internal/photoprism"
"github.com/photoprism/photoprism/internal/photoprism/get"
"github.com/photoprism/photoprism/pkg/clean"
"github.com/photoprism/photoprism/pkg/fs"
)
// VideoTrimCommand configures the command name, flags, and action.
var VideoTrimCommand = &cli.Command{
Name: "trim",
Usage: "Trims a duration from the start (positive) or end (negative) of matching videos",
ArgsUsage: "[filter]... <duration>",
Flags: []cli.Flag{
videoCountFlag,
OffsetFlag,
DryRunFlag("prints planned trim operations without writing files"),
YesFlag(),
},
Action: videoTrimAction,
}
// videoTrimAction trims matching video files in-place or to sidecar outputs when originals are read-only.
func videoTrimAction(ctx *cli.Context) error {
return CallWithDependencies(ctx, func(conf *config.Config) error {
if conf.DisableFFmpeg() {
return fmt.Errorf("ffmpeg is disabled")
}
filterArgs, durationArg, err := videoSplitTrimArgs(ctx.Args().Slice())
if err != nil {
return cli.Exit(err.Error(), 2)
}
trimDuration, err := videoParseTrimDuration(durationArg)
if err != nil {
return cli.Exit(err.Error(), 2)
}
filter := videoNormalizeFilter(filterArgs)
results, err := videoSearchResults(filter, ctx.Int(videoCountFlag.Name), ctx.Int(OffsetFlag.Name))
if err != nil {
return err
}
plans, preflight, err := videoBuildTrimPlans(conf, results, trimDuration)
if err != nil {
return err
}
if len(plans) == 0 {
log.Infof("trim: found no matching videos")
return nil
}
if !ctx.Bool("dry-run") {
if err = videoCheckFreeSpace(preflight); err != nil {
return err
}
}
if !ctx.Bool("dry-run") && !RunNonInteractively(ctx.Bool("yes")) {
prompt := promptui.Prompt{
Label: fmt.Sprintf("Trim %d video files?", len(plans)),
IsConfirm: true,
}
if _, err = prompt.Run(); err != nil {
log.Info("trim: cancelled")
return nil
}
}
var processed, skipped, failed int
convert := get.Convert()
for _, plan := range plans {
if ctx.Bool("dry-run") {
log.Infof("trim: would trim %s by %s", clean.Log(plan.IndexPath), trimDuration.String())
skipped++
continue
}
if err = videoTrimFile(conf, convert, plan, trimDuration, true); err != nil {
log.Errorf("trim: %s", clean.Error(err))
failed++
continue
}
processed++
}
log.Infof("trim: processed %d, skipped %d, failed %d", processed, skipped, failed)
if failed > 0 {
return fmt.Errorf("trim: %d files failed", failed)
}
return nil
})
}
// videoTrimPlan holds a resolved trim operation for a single video file.
type videoTrimPlan struct {
IndexPath string
SrcPath string
DestPath string
Duration time.Duration
SizeBytes int64
Sidecar bool
}
// videoBuildTrimPlans prepares trim operations and preflight size checks from search results.
func videoBuildTrimPlans(conf *config.Config, results []search.Photo, trimDuration time.Duration) ([]videoTrimPlan, []videoOutputPlan, error) {
plans := make([]videoTrimPlan, 0, len(results))
preflight := make([]videoOutputPlan, 0, len(results))
absTrim := trimDuration
if absTrim < 0 {
absTrim = -absTrim
}
for _, found := range results {
videoFile, ok := videoPrimaryFile(found)
if !ok {
log.Warnf("trim: missing video file for %s", clean.Log(found.PhotoUID))
continue
}
if videoFile.FileSidecar {
log.Warnf("trim: skipping sidecar file %s", clean.Log(videoFile.FileName))
continue
}
if videoFile.MediaType == entity.MediaLive {
log.Warnf("trim: skipping live photo video %s", clean.Log(videoFile.FileName))
continue
}
if videoFile.FileDuration <= 0 {
log.Warnf("trim: missing duration for %s", clean.Log(videoFile.FileName))
continue
}
remaining := videoFile.FileDuration - absTrim
if remaining < time.Second {
log.Errorf("trim: duration exceeds available length for %s", clean.Log(videoFile.FileName))
continue
}
srcPath := photoprism.FileName(videoFile.FileRoot, videoFile.FileName)
if !fs.FileExistsNotEmpty(srcPath) {
log.Warnf("trim: missing file %s", clean.Log(srcPath))
continue
}
destPath := srcPath
useSidecar := false
if conf.ReadOnly() || !fs.PathWritable(filepath.Dir(srcPath)) || !fs.Writable(srcPath) {
if !conf.SidecarWritable() || !fs.PathWritable(conf.SidecarPath()) {
return nil, nil, config.ErrReadOnly
}
destPath = videoSidecarPath(srcPath, conf.OriginalsPath(), conf.SidecarPath())
useSidecar = true
}
if useSidecar && fs.FileExistsNotEmpty(destPath) {
log.Warnf("trim: output already exists %s", clean.Log(destPath))
continue
}
plans = append(plans, videoTrimPlan{
IndexPath: srcPath,
SrcPath: srcPath,
DestPath: destPath,
Duration: videoFile.FileDuration,
SizeBytes: videoFile.FileSize,
Sidecar: useSidecar,
})
preflight = append(preflight, videoOutputPlan{
Destination: destPath,
SizeBytes: videoFile.FileSize,
})
}
return plans, preflight, nil
}
// videoTrimFile executes the trim operation and refreshes previews/thumbnails before reindexing.
func videoTrimFile(conf *config.Config, convert *photoprism.Convert, plan videoTrimPlan, trimDuration time.Duration, noBackup bool) error {
start := time.Duration(0)
absTrim := trimDuration
if absTrim < 0 {
absTrim = -absTrim
}
if trimDuration > 0 {
start = absTrim
}
remaining := plan.Duration - absTrim
if remaining < time.Second {
return fmt.Errorf("remaining duration too short for %s", clean.Log(plan.SrcPath))
}
destDir := filepath.Dir(plan.DestPath)
ext := filepath.Ext(plan.DestPath)
if ext == "" {
ext = filepath.Ext(plan.SrcPath)
}
if ext == "" {
ext = ".tmp"
}
tempPath, err := videoTempPath(destDir, ".trim-*"+ext)
if err != nil {
return err
}
cmd := videoTrimCmd(conf.FFmpegBin(), plan.SrcPath, tempPath, start, remaining)
cmd.Env = append(cmd.Env, fmt.Sprintf("HOME=%s", conf.CmdCachePath()))
log.Debugf("ffmpeg: %s", clean.Log(cmd.String()))
var stderr bytes.Buffer
cmd.Stderr = &stderr
if err = cmd.Run(); err != nil {
return fmt.Errorf("ffmpeg failed for %s: %s", clean.Log(plan.SrcPath), strings.TrimSpace(stderr.String()))
}
if !fs.FileExistsNotEmpty(tempPath) {
_ = os.Remove(tempPath)
return fmt.Errorf("trim output missing for %s", clean.Log(plan.SrcPath))
}
if err = os.Chmod(tempPath, fs.ModeFile); err != nil {
return err
}
if plan.Sidecar {
if fs.FileExists(plan.DestPath) {
_ = os.Remove(tempPath)
return fmt.Errorf("output already exists %s", clean.Log(plan.DestPath))
}
if err = os.Rename(tempPath, plan.DestPath); err != nil {
_ = os.Remove(tempPath)
return err
}
} else {
if noBackup {
_ = os.Remove(plan.DestPath)
} else {
backupPath := plan.DestPath + ".backup"
if fs.FileExists(backupPath) {
_ = os.Remove(backupPath)
}
if err = os.Rename(plan.DestPath, backupPath); err != nil {
_ = os.Remove(tempPath)
return err
}
_ = os.Chmod(backupPath, fs.ModeBackupFile)
}
if err = os.Rename(tempPath, plan.DestPath); err != nil {
_ = os.Remove(tempPath)
return err
}
}
mediaFile, err := photoprism.NewMediaFile(plan.DestPath)
if err != nil {
return err
}
if convert != nil {
if img, imgErr := convert.ToImage(mediaFile, true); imgErr != nil {
log.Warnf("trim: %s", clean.Error(imgErr))
} else if img != nil {
if thumbsErr := img.GenerateThumbnails(conf.ThumbCachePath(), true); thumbsErr != nil {
log.Warnf("trim: %s", clean.Error(thumbsErr))
}
}
}
return videoReindexRelated(conf, plan.IndexPath)
}
// videoTrimCmd builds an ffmpeg command that trims a source file with stream copy.
func videoTrimCmd(ffmpegBin, srcName, destName string, start, duration time.Duration) *exec.Cmd {
args := []string{
"-hide_banner",
"-y",
}
if start > 0 {
args = append(args, "-ss", videoFFmpegSeconds(start))
}
args = append(args,
"-i", srcName,
"-t", videoFFmpegSeconds(duration),
"-map", "0",
"-dn",
"-ignore_unknown",
"-codec", "copy",
"-avoid_negative_ts", "make_zero",
)
if videoTrimFastStart(destName) {
args = append(args, "-movflags", "+faststart")
}
args = append(args, destName)
// #nosec G204 -- arguments are built from validated inputs and config.
return exec.Command(ffmpegBin, args...)
}
// videoTrimFastStart reports whether the trim output should enable faststart for MP4/MOV containers.
func videoTrimFastStart(destName string) bool {
switch strings.ToLower(filepath.Ext(destName)) {
case fs.ExtMp4, fs.ExtMov, fs.ExtQT, ".m4v":
return true
default:
return false
}
}
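The extension check above depends on the `fs.Ext*` constants; a self-contained sketch of the same logic, assuming those constants equal `".mp4"`, `".mov"`, and `".qt"`:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// trimFastStart mirrors videoTrimFastStart as a standalone sketch: only
// MP4/MOV-family containers get the "+faststart" movflag, which moves the
// moov atom to the front of the file for progressive playback.
func trimFastStart(destName string) bool {
	switch strings.ToLower(filepath.Ext(destName)) {
	case ".mp4", ".mov", ".qt", ".m4v":
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(trimFastStart("clip.MP4"), trimFastStart("clip.mkv")) // prints true false
}
```

Matroska and other containers are excluded because `-movflags` only applies to the MP4/MOV muxer family.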


@ -0,0 +1,17 @@
package commands
import (
"testing"
"github.com/stretchr/testify/assert"
)
// TestVideoTrimFastStart verifies which extensions get the faststart flag.
func TestVideoTrimFastStart(t *testing.T) {
assert.True(t, videoTrimFastStart("clip.mp4"))
assert.True(t, videoTrimFastStart("clip.MOV"))
assert.True(t, videoTrimFastStart("clip.m4v"))
assert.True(t, videoTrimFastStart("clip.qt"))
assert.False(t, videoTrimFastStart("clip.mkv"))
assert.False(t, videoTrimFastStart(""))
}


@ -0,0 +1,38 @@
package commands
import "github.com/urfave/cli/v2"
// VideosCommands configures the CLI subcommands for working with indexed videos.
var VideosCommands = &cli.Command{
Name: "videos",
Aliases: []string{"video"},
Usage: "Video troubleshooting and editing subcommands",
Subcommands: []*cli.Command{
VideoListCommand,
VideoTrimCommand,
VideoRemuxCommand,
VideoTranscodeCommand,
VideoInfoCommand,
},
}
// videoCountFlag limits the number of results returned by video commands.
var videoCountFlag = &cli.IntFlag{
Name: "count",
Aliases: []string{"n"},
Usage: "maximum `NUMBER` of results",
Value: 10000,
}
// videoForceFlag allows overwriting existing output files for remux/transcode.
var videoForceFlag = &cli.BoolFlag{
Name: "force",
Aliases: []string{"f"},
Usage: "replace existing output files",
}
// videoVerboseFlag adds raw metadata to video info output.
var videoVerboseFlag = &cli.BoolFlag{
Name: "verbose",
Usage: "include raw metadata output",
}


@ -46,7 +46,7 @@ type SearchPhotos struct {
Square bool `form:"square" notes:"Finds square pictures only (aspect ratio 1:1)"`
Archived bool `form:"archived" notes:"Finds archived content"`
Public bool `form:"public" notes:"Excludes private content"`
Private bool `form:"private" notes:"Finds private content"`
Private bool `form:"private" notes:"Finds private content only (except when public:true)"`
Review bool `form:"review" notes:"Finds content in review"`
Error bool `form:"error" notes:"Finds content with errors"`
Hidden bool `form:"hidden" notes:"Finds hidden content (broken or unsupported)"`


@ -47,7 +47,7 @@ type SearchPhotosGeo struct {
Square bool `form:"square" notes:"Finds square pictures only (aspect ratio 1:1)"`
Archived bool `form:"archived" notes:"Finds archived content"`
Public bool `form:"public" notes:"Excludes private content"`
Private bool `form:"private" notes:"Finds private content"`
Private bool `form:"private" notes:"Finds private content only (except when public:true)"`
Review bool `form:"review" notes:"Finds content in review"`
Quality int `form:"quality" notes:"Minimum quality score (1-7)"`
Face string `form:"face" notes:"Find pictures with a specific face ID, you can also specify yes, no, new, or a face type"`


@ -46,7 +46,7 @@ func (w *Convert) ToAvc(f *MediaFile, encoder encode.Encoder, noMutex, force boo
if mp4Name, mp4Err := fs.FileName(f.FileName(), w.conf.SidecarPath(), w.conf.OriginalsPath(), fs.ExtMp4); mp4Err != nil {
return nil, fmt.Errorf("convert: %s in %s (remux)", mp4Err, clean.Log(f.RootRelName()))
} else if mp4Err = ffmpeg.RemuxFile(f.FileName(), mp4Name, encode.NewRemuxOptions(conf.FFmpegBin(), fs.VideoMp4, false)); mp4Err != nil {
return nil, fmt.Errorf("convert: %s in %s (remux)", err, clean.Log(f.RootRelName()))
return nil, fmt.Errorf("convert: %s in %s (remux)", mp4Err, clean.Log(f.RootRelName()))
} else if mp4File, fileErr := NewMediaFile(mp4Name); mp4File == nil || fileErr != nil {
log.Warnf("convert: %s could not be converted to mp4", logFileName)
} else if jsonErr := mp4File.CreateExifToolJson(w); jsonErr != nil {


@ -38,7 +38,7 @@ func (m *MediaFile) GenerateCaption(captionSrc entity.Src) (caption *vision.Capt
fileName, fileErr := m.Thumbnail(Config().ThumbCachePath(), size.Name)
if fileErr != nil {
return caption, err
return caption, fileErr
}
// Get matching labels from computer vision model.
@ -101,7 +101,7 @@ func (m *MediaFile) GenerateLabels(labelSrc entity.Src) (labels classify.Labels)
// Get thumbnail filenames for the selected sizes.
for _, s := range sizes {
if thumbnail, fileErr := m.Thumbnail(Config().ThumbCachePath(), s); fileErr != nil {
log.Debugf("index: %s in %s", err, clean.Log(m.RootRelName()))
log.Debugf("index: %s in %s", fileErr, clean.Log(m.RootRelName()))
continue
} else {
thumbnails = append(thumbnails, thumbnail)


@ -79,6 +79,7 @@ func Vips(imageName string, imageBuffer []byte, hash, thumbPath string, width, h
// Use defaults.
}
// Embed an ICC profile when a JPEG declares its color space via the EXIF InteroperabilityIndex tag.
if err = vipsSetIccProfileForInteropIndex(img, clean.Log(filepath.Base(imageName))); err != nil {
log.Debugf("vips: %s in %s (set icc profile for interop index tag)", err, clean.Log(filepath.Base(imageName)))
}


@ -27,10 +27,28 @@ const (
// but lacks an embedded profile. If an ICC profile is already present, it
// leaves the image untouched.
func vipsSetIccProfileForInteropIndex(img *vips.ImageRef, logName string) (err error) {
if img.HasICCProfile() {
return nil
}
// Some cameras signal color space via EXIF InteroperabilityIndex instead of
// embedding an ICC profile. Browsers and libvips ignore this tag, so we
// inject a matching ICC profile to produce correct thumbnails.
iiFull := img.GetString("exif-ifd4-InteroperabilityIndex")
iiField := "exif-ifd4-InteroperabilityIndex"
hasInterop := false
for _, field := range img.GetFields() {
if field == iiField {
hasInterop = true
break
}
}
if !hasInterop {
return nil
}
iiFull := img.GetString(iiField)
if iiFull == "" {
return nil
@ -40,19 +58,14 @@ func vipsSetIccProfileForInteropIndex(img *vips.ImageRef, logName string) (err e
// a string with a trailing space. Using the first three bytes covers the
// meaningful code (e.g., "R03", "R98").
if len(iiFull) < 3 {
log.Debugf("interopindex: %s has unexpected interop index %q", logName, iiFull)
log.Debugf("vips: %s has unexpected interop index %q", logName, iiFull)
return nil
}
ii := iiFull[:3]
log.Tracef("interopindex: %s read exif and got interopindex %s, %s", logName, ii, iiFull)
log.Tracef("vips: %s read exif and got interopindex %s, %s", logName, ii, iiFull)
if img.HasICCProfile() {
log.Debugf("interopindex: %s already has an embedded ICC profile; skipping fallback.", logName)
return nil
}
profilePath := ""
var profilePath string
switch ii {
case InteropIndexAdobeRGB:
@ -60,14 +73,14 @@ func vipsSetIccProfileForInteropIndex(img *vips.ImageRef, logName string) (err e
profilePath, err = GetIccProfile(IccAdobeRGBCompat, IccAdobeRGBCompatV2, IccAdobeRGBCompatV4)
if err != nil {
return fmt.Errorf("interopindex %s: %w", ii, err)
return fmt.Errorf("vips: failed to get %s profile for %s (%w)", ii, logName, err)
}
case InteropIndexSRGB:
// sRGB: browsers and libvips assume sRGB by default, so no embed needed.
case InteropIndexThumb:
// Thumbnail file; specification unclear—treat as sRGB and do nothing.
default:
log.Debugf("interopindex: %s has unknown interop index %s", logName, ii)
log.Debugf("vips: %s has unknown interop index %s", logName, ii)
}
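The mapping from interop code to profile action can be sketched in isolation. The constant values below ("R03", "R98", "THM") follow the common EXIF InteroperabilityIndex codes the surrounding comments describe; the diff does not show the `InteropIndex*` constant definitions, so they are assumptions here:

```go
package main

import "fmt"

// Assumed values for the EXIF InteroperabilityIndex codes referenced above.
const (
	interopAdobeRGB = "R03" // Adobe RGB color space
	interopSRGB     = "R98" // sRGB color space
	interopThumb    = "THM" // thumbnail file; treated as sRGB
)

// profileForInterop returns the ICC profile family to embed for a raw tag
// value, or "" when the default sRGB assumption already applies. Cameras may
// pad the tag with a trailing space, so only the first three bytes matter.
func profileForInterop(ii string) string {
	if len(ii) < 3 {
		return ""
	}
	switch ii[:3] {
	case interopAdobeRGB:
		return "AdobeRGB"
	case interopSRGB, interopThumb:
		return "" // browsers and libvips assume sRGB by default
	default:
		return "" // unknown code: leave the image untouched
	}
}

func main() {
	fmt.Println(profileForInterop("R03 "), profileForInterop("R98")) // prints AdobeRGB with an empty second value
}
```

Only the Adobe RGB case triggers a profile embed; everything else falls through to the no-op path, matching the switch in `vipsSetIccProfileForInteropIndex`.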
if profilePath == "" {


@ -65,8 +65,8 @@ install-clitools:
@apt-get -qq install zsh nano >/dev/null 2>&1 && echo "init: successfully installed zsh and nano" || echo "init: packages zsh and nano not available for installation"
@apt-get -qq install exa duf >/dev/null 2>&1 && echo "init: successfully installed exa and duf" || echo "init: packages exa and duf not available for installation"
darktable:
echo 'deb http://download.opensuse.org/repositories/graphics:/darktable/xUbuntu_23.04/ /' | sudo tee /etc/apt/sources.list.d/graphics:darktable.list
curl -fsSL https://download.opensuse.org/repositories/graphics:darktable/xUbuntu_23.04/Release.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/graphics_darktable.gpg > /dev/null
echo 'deb http://download.opensuse.org/repositories/graphics:/darktable/xUbuntu_25.10/ /' | sudo tee /etc/apt/sources.list.d/graphics:darktable.list
curl -fsSL https://download.opensuse.org/repositories/graphics:darktable/xUbuntu_25.10/Release.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/graphics_darktable.gpg > /dev/null
sudo apt-get update
sudo apt-get install darktable
sudo apt-get autoremove


@ -2,7 +2,7 @@
set -euo pipefail
ONNX_VERSION=${ONNX_VERSION:-1.22.0}
ONNX_VERSION=${ONNX_VERSION:-1.23.2}
TODAY=$(date -u +%Y%m%d)
TMPDIR=${TMPDIR:-/tmp}
SYSTEM=$(uname -s)
@ -30,11 +30,11 @@ case "${SYSTEM}" in
case "${ARCH}" in
amd64|AMD64|x86_64|x86-64)
archive="onnxruntime-linux-x64-${ONNX_VERSION}.tgz"
sha="8344d55f93d5bc5021ce342db50f62079daf39aaafb5d311a451846228be49b3"
sha="1fa4dcaef22f6f7d5cd81b28c2800414350c10116f5fdd46a2160082551c5f9b"
;;
arm64|ARM64|aarch64)
archive="onnxruntime-linux-aarch64-${ONNX_VERSION}.tgz"
sha="bb76395092d150b52c7092dc6b8f2fe4d80f0f3bf0416d2f269193e347e24702"
sha="7c63c73560ed76b1fac6cff8204ffe34fe180e70d6582b5332ec094810241e5c"
;;
*)
echo "Warning: ONNX Runtime is not provided for Linux/${ARCH}; skipping install." >&2
@ -46,7 +46,7 @@ case "${SYSTEM}" in
case "${ARCH}" in
arm64|ARM64|aarch64|x86_64|x86-64)
archive="onnxruntime-osx-universal2-${ONNX_VERSION}.tgz"
sha="cfa6f6584d87555ed9f6e7e8a000d3947554d589efe3723b8bfa358cd263d03c"
sha="49ae8e3a66ccb18d98ad3fe7f5906b6d7887df8a5edd40f49eb2b14e20885809"
;;
*)
echo "Unsupported macOS architecture '${ARCH}'." >&2

File diff suppressed because one or more lines are too long


@ -74,7 +74,7 @@ If you have used a *.deb* package for installation, you may need to remove the c
## Dependencies
PhotoPrism packages bundle TensorFlow 2.18.0 and, starting with the October 2025 builds, ONNX Runtime 1.22.0 as described in [`ai/face/README.md`](https://github.com/photoprism/photoprism/blob/develop/internal/ai/face/README.md). The shared libraries for both frameworks are shipped inside `/opt/photoprism/lib`, so no additional system packages are needed to switch `PHOTOPRISM_FACE_ENGINE` to `onnx`. The binaries still rely on glibc ≥ 2.35 and the standard C/C++ runtime libraries (`libstdc++6`, `libgcc_s1`, `libgomp1`, …) provided by your distribution.
PhotoPrism packages bundle TensorFlow 2.18.0 and, starting with the December 2025 builds, ONNX Runtime 1.23.2 as described in [`ai/face/README.md`](https://github.com/photoprism/photoprism/blob/develop/internal/ai/face/README.md). The shared libraries for both frameworks are shipped inside `/opt/photoprism/lib`, so no additional system packages are needed to switch `PHOTOPRISM_FACE_ENGINE` to `onnx`. The binaries still rely on glibc ≥ 2.35 and the standard C/C++ runtime libraries (`libstdc++6`, `libgcc_s1`, `libgomp1`, …) provided by your distribution.
### Required Runtime Packages