OnlyFrames/docs/superpowers/plans/2026-04-07-foto-kurator.md
Ferdinand 3d22b41bf2 feat: duplicate detection via perceptual hashing
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-07 13:23:01 +02:00


# Foto-Kurator Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** A local web application that automates the manual photo-culling process: photos are analyzed against quality criteria (blur, exposure, duplicates) and moved into an `_aussortiert/` subfolder.
**Architecture:** A Python FastAPI backend analyzes images directly on the filesystem and exposes three REST endpoints (`/analyze`, `/move`, `/preview`). A single `index.html` serves as the frontend and talks to the backend via `fetch()`. The backend opens the browser automatically.
**Tech Stack:** Python 3.10+, FastAPI, Uvicorn, Pillow, OpenCV (`opencv-python-headless`), imagehash, anthropic SDK (optional), vanilla JS/HTML/CSS
---
## File Map
| File | Responsibility |
|------|----------------|
| `server.py` | FastAPI app; endpoints `/analyze`, `/move`, `/preview`; opens the browser |
| `analyzer.py` | Image analysis: blur, exposure, duplicates, Claude Vision |
| `index.html` | Single-page frontend: settings, progress, review, results |
| `requirements.txt` | Python dependencies |
| `.env.example` | Template for `ANTHROPIC_API_KEY` |
| `tests/test_analyzer.py` | Unit tests for `analyzer.py` |
---
## Task 1: Project Structure & Dependencies
**Files:**
- Create: `requirements.txt`
- Create: `.env.example`
- Create: `tests/__init__.py`
- [ ] **Step 1: Create requirements.txt**
```
fastapi==0.111.0
uvicorn==0.29.0
pillow==10.3.0
opencv-python-headless==4.9.0.80
imagehash==4.3.1
python-dotenv==1.0.1
anthropic==0.25.0
pytest==8.1.1
httpx==0.27.0
```
- [ ] **Step 2: Create .env.example**
```
# Claude Vision API key (optional; only needed for AI analysis)
ANTHROPIC_API_KEY=sk-ant-...
```
- [ ] **Step 3: Install the dependencies**
```bash
pip install -r requirements.txt
```
Expected output: all packages install successfully, no errors.
- [ ] **Step 4: Create tests/__init__.py**
An empty file.
- [ ] **Step 5: Commit**
```bash
git init
git add requirements.txt .env.example tests/__init__.py
git commit -m "chore: project setup"
```
---
## Task 2: Blur Detection
**Files:**
- Create: `analyzer.py`
- Create: `tests/test_analyzer.py`
- [ ] **Step 1: Write a failing test**
`tests/test_analyzer.py`:
```python
import pytest
from pathlib import Path
from analyzer import is_blurry


def make_test_image(tmp_path, color=(200, 200, 200)):
    from PIL import Image
    img = Image.new("RGB", (100, 100), color=color)
    p = tmp_path / "test.jpg"
    img.save(p)
    return str(p)


def test_solid_color_image_is_blurry(tmp_path):
    path = make_test_image(tmp_path)
    assert is_blurry(path, threshold=100) is True


def test_normal_image_is_not_blurry(tmp_path):
    from PIL import Image, ImageDraw
    img = Image.new("RGB", (100, 100), color=(255, 255, 255))
    draw = ImageDraw.Draw(img)
    for i in range(0, 100, 2):
        draw.line([(i, 0), (i, 100)], fill=(0, 0, 0), width=1)
    p = tmp_path / "sharp.jpg"
    img.save(p)
    assert is_blurry(str(p), threshold=100) is False
```
- [ ] **Step 2: Watch the test fail**
```bash
pytest tests/test_analyzer.py -v
```
Expected output: `ImportError: cannot import name 'is_blurry' from 'analyzer'`
- [ ] **Step 3: Implement is_blurry**
`analyzer.py`:
```python
import cv2
import numpy as np


def is_blurry(path: str, threshold: float = 100.0) -> bool:
    """Return True if the image is blurry (Laplacian variance < threshold)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False
    variance = cv2.Laplacian(img, cv2.CV_64F).var()
    return variance < threshold
```
- [ ] **Step 4: Make the tests pass**
```bash
pytest tests/test_analyzer.py -v
```
Expected output: `2 passed`
- [ ] **Step 5: Commit**
```bash
git add analyzer.py tests/test_analyzer.py
git commit -m "feat: blur detection via Laplacian variance"
```
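The Laplacian-variance measure can be sanity-checked without OpenCV. The sketch below (illustration only, not part of the plan) applies the standard 4-neighbour Laplacian kernel with plain NumPy as a close stand-in for `cv2.Laplacian(...).var()`; the helper and image names are made up for the demo:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian response (a simplified
    stand-in for cv2.Laplacian(img, cv2.CV_64F).var())."""
    g = gray.astype(np.float64)
    # Kernel [[0,1,0],[1,-4,1],[0,1,0]] applied to all interior pixels
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

flat = np.full((100, 100), 128, dtype=np.uint8)   # solid gray: no edges
stripes = np.zeros((100, 100), dtype=np.uint8)
stripes[:, ::2] = 255                             # hard vertical edges

print(laplacian_variance(flat))     # 0.0 -> "blurry" at any sane threshold
print(laplacian_variance(stripes))  # very large -> clearly sharp
```

This is exactly why the solid-color test image above counts as blurry: with no edges anywhere, the Laplacian response (and therefore its variance) is zero.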
---
## Task 3: Exposure Detection
**Files:**
- Modify: `analyzer.py`
- Modify: `tests/test_analyzer.py`
- [ ] **Step 1: Add failing tests**
Append to `tests/test_analyzer.py`:
```python
from analyzer import is_overexposed, is_underexposed


def test_white_image_is_overexposed(tmp_path):
    path = make_test_image(tmp_path, color=(255, 255, 255))
    assert is_overexposed(path, threshold=240) is True


def test_dark_image_is_underexposed(tmp_path):
    path = make_test_image(tmp_path, color=(10, 10, 10))
    assert is_underexposed(path, threshold=30) is True


def test_normal_image_is_neither(tmp_path):
    path = make_test_image(tmp_path, color=(128, 128, 128))
    assert is_overexposed(path, threshold=240) is False
    assert is_underexposed(path, threshold=30) is False
```
- [ ] **Step 2: Watch the tests fail**
```bash
pytest tests/test_analyzer.py -v
```
Expected output: `ImportError: cannot import name 'is_overexposed'`
- [ ] **Step 3: Implement the exposure functions**
Append to `analyzer.py`:
```python
from PIL import Image


def _mean_brightness(path: str) -> float:
    """Mean brightness of an image (0-255)."""
    img = Image.open(path).convert("L")
    arr = np.array(img, dtype=np.float32)
    return float(arr.mean())


def is_overexposed(path: str, threshold: float = 240.0) -> bool:
    """Return True if the image is overexposed."""
    return _mean_brightness(path) > threshold


def is_underexposed(path: str, threshold: float = 30.0) -> bool:
    """Return True if the image is underexposed."""
    return _mean_brightness(path) < threshold
```
- [ ] **Step 4: Make the tests pass**
```bash
pytest tests/test_analyzer.py -v
```
Expected output: `5 passed`
- [ ] **Step 5: Commit**
```bash
git add analyzer.py tests/test_analyzer.py
git commit -m "feat: exposure detection (over/underexposed)"
```
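One limitation of the mean-brightness approach worth keeping in mind: it collapses the whole frame into a single number, so images with locally clipped regions can still pass both checks. A plain-list stand-in for the pixel array (hypothetical example, not from the plan) shows the effect:

```python
# Hypothetical frame: half pure black, half pure white.
pixels = [0] * 5000 + [255] * 5000
mean_brightness = sum(pixels) / len(pixels)
print(mean_brightness)  # 127.5 -> passes both exposure checks despite clipping
```

For typical whole-frame exposure mistakes this trade-off is fine; a histogram-based clipping check would be the natural place to extend `analyzer.py` if it is not.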
---
## Task 4: Duplicate Detection
**Files:**
- Modify: `analyzer.py`
- Modify: `tests/test_analyzer.py`
- [ ] **Step 1: Add failing tests**
Append to `tests/test_analyzer.py`:
```python
import shutil

from analyzer import find_duplicates


def test_identical_images_are_duplicates(tmp_path):
    p1 = make_test_image(tmp_path, color=(100, 150, 200))
    p2 = tmp_path / "copy.jpg"
    shutil.copy(p1, p2)
    groups = find_duplicates([p1, str(p2)], threshold=8)
    assert len(groups) == 1
    assert len(groups[0]) == 2


def test_different_images_are_not_duplicates(tmp_path):
    from PIL import Image, ImageDraw
    # Two flat color fills would hash almost identically (pHash works on
    # grayscale structure, not color), so compare a flat image against a
    # striped one with a clearly different frequency signature.
    p1 = make_test_image(tmp_path, color=(0, 0, 0))
    img = Image.new("RGB", (100, 100), color=(255, 255, 255))
    draw = ImageDraw.Draw(img)
    for i in range(0, 100, 2):
        draw.line([(i, 0), (i, 100)], fill=(0, 0, 0), width=1)
    p2 = tmp_path / "striped.jpg"
    img.save(p2)
    groups = find_duplicates([p1, str(p2)], threshold=8)
    assert len(groups) == 0
```
- [ ] **Step 2: Watch the tests fail**
```bash
pytest tests/test_analyzer.py -v
```
Expected output: `ImportError: cannot import name 'find_duplicates'`
- [ ] **Step 3: Implement find_duplicates**
Append to `analyzer.py`:
```python
import imagehash
from typing import List


def find_duplicates(paths: List[str], threshold: int = 8) -> List[List[str]]:
    """
    Find groups of similar images via perceptual hashing.
    The first element of each group is treated as the original,
    the rest as duplicates.
    """
    hashes = {}
    for path in paths:
        try:
            h = imagehash.phash(Image.open(path))
            hashes[path] = h
        except Exception:
            continue
    groups = []
    used = set()
    path_list = list(hashes.keys())
    for i, p1 in enumerate(path_list):
        if p1 in used:
            continue
        group = [p1]
        for p2 in path_list[i + 1:]:
            if p2 in used:
                continue
            if abs(hashes[p1] - hashes[p2]) <= threshold:
                group.append(p2)
                used.add(p2)
        if len(group) > 1:
            used.add(p1)
            groups.append(group)
    return groups
```
- [ ] **Step 4: Make the tests pass**
```bash
pytest tests/test_analyzer.py -v
```
Expected output: `7 passed`
- [ ] **Step 5: Commit**
```bash
git add analyzer.py tests/test_analyzer.py
git commit -m "feat: duplicate detection via perceptual hashing"
```
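To build intuition for what `dup_threshold` compares, the hash-and-distance mechanics can be sketched with a simplified average hash (aHash) in plain NumPy. `imagehash.phash` thresholds DCT coefficients rather than raw block means, but the comparison step (Hamming distance between bit vectors) is the same; all names here are illustrative:

```python
import numpy as np

def ahash(img):
    """Simplified 64-bit average hash: 8x8 block means thresholded
    against the global mean (phash uses DCT coefficients instead)."""
    h, w = img.shape
    blocks = img.reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(a, b):
    # Number of differing hash bits; find_duplicates compares this
    # count against dup_threshold.
    return int(np.count_nonzero(a != b))

gradient = np.tile(np.linspace(0, 250, 64), (64, 1))  # smooth ramp
brighter = gradient + 5                               # same scene, +5 exposure
stripes = np.zeros((64, 64))
stripes[:, ::2] = 255                                 # unrelated structure

print(hamming(ahash(gradient), ahash(brighter)))  # 0 -> duplicate
print(hamming(ahash(gradient), ahash(stripes)))   # large -> not a duplicate
```

The uniform exposure shift leaves every bit unchanged because the global mean shifts along with the block means; that robustness to small edits is exactly why perceptual hashes beat byte-level checksums for this job.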
---
## Task 5: Main Analysis Function
**Files:**
- Modify: `analyzer.py`
- Modify: `tests/test_analyzer.py`
- [ ] **Step 1: Add a failing test**
Append to `tests/test_analyzer.py`:
```python
from analyzer import analyze_folder


def test_analyze_folder_returns_results(tmp_path):
    make_test_image(tmp_path, color=(128, 128, 128))
    from PIL import Image
    white = tmp_path / "white.jpg"
    Image.new("RGB", (100, 100), color=(255, 255, 255)).save(white)
    results = analyze_folder(
        folder=str(tmp_path),
        blur_threshold=100,
        over_threshold=240,
        under_threshold=30,
        dup_threshold=8,
        use_ai=False,
    )
    reasons_flat = [r for item in results for r in item["reasons"]]
    assert "ueberbelichtet" in reasons_flat
```
- [ ] **Step 2: Watch the test fail**
```bash
pytest tests/test_analyzer.py::test_analyze_folder_returns_results -v
```
Expected output: `ImportError: cannot import name 'analyze_folder'`
- [ ] **Step 3: Implement analyze_folder**
Append to `analyzer.py`:
```python
import os
from typing import Optional

SUPPORTED_EXTENSIONS = {".jpg", ".jpeg", ".png"}


def analyze_folder(
    folder: str,
    blur_threshold: float = 100.0,
    over_threshold: float = 240.0,
    under_threshold: float = 30.0,
    dup_threshold: int = 8,
    use_ai: bool = False,
    api_key: Optional[str] = None,
) -> List[dict]:
    """
    Analyze all images in the folder.
    Returns a list: [{"path": "/foo/bar.jpg", "reasons": ["unscharf"]}, ...]
    Only images with at least one reason are returned.
    """
    paths = [
        os.path.join(folder, f)
        for f in os.listdir(folder)
        if os.path.splitext(f)[1].lower() in SUPPORTED_EXTENSIONS
    ]
    results: dict = {path: [] for path in paths}
    for path in paths:
        try:
            if is_blurry(path, blur_threshold):
                results[path].append("unscharf")
            if is_overexposed(path, over_threshold):
                results[path].append("ueberbelichtet")
            if is_underexposed(path, under_threshold):
                results[path].append("unterbelichtet")
        except Exception:
            continue
    dup_groups = find_duplicates(paths, dup_threshold)
    for group in dup_groups:
        original = os.path.basename(group[0])
        for dup_path in group[1:]:
            results[dup_path].append(f"Duplikat von {original}")
    if use_ai and api_key:
        ai_results = _analyze_with_ai(paths, api_key)
        for path, ai_reasons in ai_results.items():
            results[path].extend(ai_reasons)
    return [
        {"path": path, "reasons": reasons}
        for path, reasons in results.items()
        if reasons
    ]
```
Note: `_analyze_with_ai` is defined in Task 6. Since `analyze_folder` is only tested with `use_ai=False` for now, that is not yet a problem.
- [ ] **Step 4: Make the tests pass**
```bash
pytest tests/test_analyzer.py -v
```
Expected output: `8 passed`
- [ ] **Step 5: Commit**
```bash
git add analyzer.py tests/test_analyzer.py
git commit -m "feat: analyze_folder orchestrates all checks"
```
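For reference, this is the shape consumers of `analyze_folder` see, flattened the same way the test does it (sample data with hypothetical paths; the reason strings are the real ones emitted above):

```python
# Shape of analyze_folder's return value (sample data, not real paths):
results = [
    {"path": "/fotos/a.jpg", "reasons": ["unscharf"]},
    {"path": "/fotos/b.jpg", "reasons": ["ueberbelichtet", "Duplikat von a.jpg"]},
]
# The test (and the frontend's per-reason statistics) flatten it like this:
reasons_flat = [r for item in results for r in item["reasons"]]
print(reasons_flat)  # ['unscharf', 'ueberbelichtet', 'Duplikat von a.jpg']
```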
---
## Task 6: Claude Vision Integration
**Files:**
- Modify: `analyzer.py`
- [ ] **Step 1: Insert _analyze_with_ai before analyze_folder**
Insert into `analyzer.py` BEFORE the `analyze_folder` function:
```python
import base64


def _analyze_with_ai(paths: List[str], api_key: str) -> dict:
    """
    Send images to the Claude Vision API for quality analysis.
    Returns {path: [reasons]}. On error the path is skipped.
    """
    import anthropic
    client = anthropic.Anthropic(api_key=api_key)
    ai_results: dict = {path: [] for path in paths}
    PROMPT = (
        "Analysiere dieses Foto auf Qualitaetsprobleme fuer einen professionellen Fotografen. "
        "Antworte NUR mit einer kommagetrennten Liste von Problemen aus diesen Kategorien: "
        "unscharf, ueberbelichtet, unterbelichtet, schlechter Bildausschnitt, stoerende Elemente, "
        "schlechter Weissabgleich. Wenn das Bild in Ordnung ist, antworte mit 'ok'."
    )
    for path in paths:
        try:
            with open(path, "rb") as f:
                img_data = base64.standard_b64encode(f.read()).decode("utf-8")
            ext = os.path.splitext(path)[1].lower().lstrip(".")
            media_type = "image/jpeg" if ext in ("jpg", "jpeg") else "image/png"
            response = client.messages.create(
                model="claude-opus-4-6",
                max_tokens=100,
                messages=[{
                    "role": "user",
                    "content": [
                        {
                            "type": "image",
                            "source": {
                                "type": "base64",
                                "media_type": media_type,
                                "data": img_data,
                            },
                        },
                        {"type": "text", "text": PROMPT},
                    ],
                }],
            )
            answer = response.content[0].text.strip().lower()
            if answer != "ok":
                reasons = [r.strip() for r in answer.split(",") if r.strip()]
                ai_results[path].extend(reasons)
        except Exception:
            continue
    return ai_results
```
- [ ] **Step 2: Re-run all tests**
```bash
pytest tests/test_analyzer.py -v
```
Expected output: `8 passed` (no regression)
- [ ] **Step 3: Commit**
```bash
git add analyzer.py
git commit -m "feat: Claude Vision AI analysis integration"
```
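The response handling above boils down to a guard against the `'ok'` sentinel plus a comma split. On a sample answer (hypothetical model output, same parsing as in `_analyze_with_ai`) it behaves like this:

```python
# Hypothetical model answer; 'ok' would mean no problems found.
answer = "unscharf, stoerende elemente"
reasons = []
if answer != "ok":
    # Split on commas and drop empty fragments, as in _analyze_with_ai.
    reasons = [r.strip() for r in answer.split(",") if r.strip()]
print(reasons)  # ['unscharf', 'stoerende elemente']
```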
---
## Task 7: FastAPI Backend
**Files:**
- Create: `server.py`
- [ ] **Step 1: Create server.py**
```python
import os
import shutil
import threading
import webbrowser
from typing import List

import uvicorn
from dotenv import load_dotenv
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import FileResponse, Response
from pydantic import BaseModel

from analyzer import analyze_folder

load_dotenv()

app = FastAPI(title="Foto-Kurator")
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:8000"],
    allow_methods=["GET", "POST"],
    allow_headers=["Content-Type"],
)


class AnalyzeRequest(BaseModel):
    folder: str
    blur_threshold: float = 100.0
    over_threshold: float = 240.0
    under_threshold: float = 30.0
    dup_threshold: int = 8
    use_ai: bool = False


class MoveRequest(BaseModel):
    paths: List[str]
    folder: str


@app.get("/")
def serve_frontend():
    return FileResponse("index.html")


@app.get("/preview")
def preview(path: str):
    if not os.path.isfile(path):
        raise HTTPException(status_code=404, detail="Datei nicht gefunden")
    ext = os.path.splitext(path)[1].lower()
    media = "image/jpeg" if ext in (".jpg", ".jpeg") else "image/png"
    with open(path, "rb") as f:
        return Response(content=f.read(), media_type=media)


@app.post("/analyze")
def analyze(req: AnalyzeRequest):
    if not os.path.isdir(req.folder):
        raise HTTPException(status_code=400, detail=f"Ordner nicht gefunden: {req.folder}")
    api_key = os.getenv("ANTHROPIC_API_KEY") if req.use_ai else None
    results = analyze_folder(
        folder=req.folder,
        blur_threshold=req.blur_threshold,
        over_threshold=req.over_threshold,
        under_threshold=req.under_threshold,
        dup_threshold=req.dup_threshold,
        use_ai=req.use_ai,
        api_key=api_key,
    )
    return {"results": results}


@app.post("/move")
def move_files(req: MoveRequest):
    target_dir = os.path.join(req.folder, "_aussortiert")
    os.makedirs(target_dir, exist_ok=True)
    moved = []
    errors = []
    for path in req.paths:
        try:
            dest = os.path.join(target_dir, os.path.basename(path))
            shutil.move(path, dest)
            moved.append(path)
        except Exception as e:
            errors.append({"path": path, "error": str(e)})
    return {"moved": moved, "errors": errors}


def open_browser():
    webbrowser.open("http://localhost:8000")


if __name__ == "__main__":
    threading.Timer(1.0, open_browser).start()
    uvicorn.run(app, host="127.0.0.1", port=8000)
```
- [ ] **Step 2: Backend smoke test**
```bash
python server.py &
sleep 2
curl -s -o /dev/null -w "%{http_code}" http://localhost:8000/
```
Expected output: `200` or `404` (index.html does not exist yet; that is fine)
```bash
kill %1
```
- [ ] **Step 3: Commit**
```bash
git add server.py
git commit -m "feat: FastAPI backend with /analyze, /move, /preview endpoints"
```
---
## Task 8: Frontend
**Files:**
- Create: `index.html`
- [ ] **Step 1: Create index.html**
Important: all values from server responses are inserted via `textContent` (never `innerHTML` with user data) to prevent XSS.
```html
<!DOCTYPE html>
<html lang="de">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Foto-Kurator</title>
<style>
*, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }
body { font-family: system-ui, sans-serif; background: #1a1a2e; color: #e0e0e0; min-height: 100vh; display: flex; flex-direction: column; align-items: center; padding: 2rem 1rem; }
h1 { font-size: 2rem; margin-bottom: 0.25rem; color: #fff; }
.subtitle { color: #888; margin-bottom: 2rem; font-size: 0.95rem; }
.card { background: #16213e; border-radius: 12px; padding: 1.5rem; width: 100%; max-width: 640px; margin-bottom: 1.5rem; }
label { display: block; font-size: 0.85rem; color: #aaa; margin-bottom: 0.4rem; }
input[type="text"] { width: 100%; padding: 0.6rem 0.8rem; border-radius: 8px; border: 1px solid #333; background: #0f3460; color: #fff; font-size: 1rem; }
input[type="text"]:focus { outline: 2px solid #e94560; }
button.primary { margin-top: 1rem; width: 100%; padding: 0.75rem; border-radius: 8px; border: none; background: #e94560; color: #fff; font-size: 1rem; font-weight: 600; cursor: pointer; }
button.primary:hover { background: #c73652; }
button.primary:disabled { background: #555; cursor: not-allowed; }
.toggle-row { display: flex; justify-content: space-between; align-items: center; margin-bottom: 1rem; }
.toggle-label { font-size: 0.95rem; }
.toggle-note { font-size: 0.78rem; color: #888; margin-top: 0.2rem; }
.switch { position: relative; width: 44px; height: 24px; flex-shrink: 0; }
.switch input { opacity: 0; width: 0; height: 0; }
.knob { position: absolute; inset: 0; background: #333; border-radius: 24px; cursor: pointer; transition: background 0.2s; }
.knob::before { content: ""; position: absolute; width: 18px; height: 18px; left: 3px; top: 3px; background: #fff; border-radius: 50%; transition: transform 0.2s; }
input:checked + .knob { background: #e94560; }
input:checked + .knob::before { transform: translateX(20px); }
.slider-row { margin-bottom: 0.75rem; }
.slider-label { display: flex; justify-content: space-between; font-size: 0.85rem; color: #aaa; margin-bottom: 0.3rem; }
input[type="range"] { width: 100%; accent-color: #e94560; }
.view { display: none; }
.view.active { display: block; }
progress { width: 100%; height: 12px; border-radius: 6px; overflow: hidden; appearance: none; }
progress::-webkit-progress-bar { background: #0f3460; border-radius: 6px; }
progress::-webkit-progress-value { background: #e94560; border-radius: 6px; }
.progress-label { font-size: 0.85rem; color: #aaa; margin-top: 0.5rem; text-align: center; }
.photo-item { display: flex; align-items: center; gap: 1rem; padding: 0.75rem 0; border-bottom: 1px solid #222; }
.photo-item img { width: 64px; height: 64px; object-fit: cover; border-radius: 6px; flex-shrink: 0; }
.photo-info { flex: 1; min-width: 0; }
.photo-name { font-size: 0.9rem; font-weight: 500; white-space: nowrap; overflow: hidden; text-overflow: ellipsis; }
.photo-reasons { font-size: 0.8rem; color: #e94560; margin-top: 0.2rem; }
.keep-btn { padding: 0.3rem 0.7rem; border-radius: 6px; border: 1px solid #555; background: transparent; color: #aaa; cursor: pointer; font-size: 0.8rem; flex-shrink: 0; }
.keep-btn:hover { border-color: #fff; color: #fff; }
.kept { opacity: 0.35; }
.kept .photo-name { text-decoration: line-through; }
.stat { display: flex; justify-content: space-between; padding: 0.5rem 0; border-bottom: 1px solid #222; font-size: 0.95rem; }
.stat-value { font-weight: 700; color: #e94560; }
.hint { margin-top: 1rem; color: #888; font-size: 0.85rem; }
</style>
</head>
<body>
<h1>Foto-Kurator</h1>
<p class="subtitle">Automatisches Aussortieren von Fotos nach Qualitaet</p>
<!-- Start -->
<div id="view-start" class="view active card">
<label for="folder-input">Ordnerpfad</label>
<input type="text" id="folder-input" placeholder="/Users/name/Fotos/Shooting-2026" />
<div style="margin-top: 1.5rem;">
<div class="toggle-row">
<div>
<div class="toggle-label">Ueberpruefung vor dem Verschieben</div>
<div class="toggle-note">Zeigt aussortierte Fotos zur Bestaetigung an</div>
</div>
<label class="switch">
<input type="checkbox" id="toggle-review" checked>
<span class="knob"></span>
</label>
</div>
<div class="toggle-row">
<div>
<div class="toggle-label">KI-Analyse (Claude Vision)</div>
<div class="toggle-note">Genauer, aber ~0,003 EUR pro Foto &middot; Internetverbindung erforderlich</div>
</div>
<label class="switch">
<input type="checkbox" id="toggle-ai">
<span class="knob"></span>
</label>
</div>
</div>
<details style="margin-top: 1rem;">
<summary style="cursor: pointer; color: #aaa; font-size: 0.9rem; user-select: none;">Schwellenwerte anpassen</summary>
<div style="margin-top: 1rem;">
<div class="slider-row">
<div class="slider-label"><span>Unschaerfe-Grenze</span><span id="blur-val">100</span></div>
<input type="range" id="blur-threshold" min="10" max="500" value="100">
</div>
<div class="slider-row">
<div class="slider-label"><span>Ueberbelichtung (Helligkeit &gt;)</span><span id="over-val">240</span></div>
<input type="range" id="over-threshold" min="180" max="255" value="240">
</div>
<div class="slider-row">
<div class="slider-label"><span>Unterbelichtung (Helligkeit &lt;)</span><span id="under-val">30</span></div>
<input type="range" id="under-threshold" min="0" max="80" value="30">
</div>
<div class="slider-row">
<div class="slider-label"><span>Duplikat-Aehnlichkeit (pHash &le;)</span><span id="dup-val">8</span></div>
<input type="range" id="dup-threshold" min="0" max="20" value="8">
</div>
</div>
</details>
<button class="primary" id="start-btn">Analyse starten</button>
</div>
<!-- Progress -->
<div id="view-progress" class="view card">
<h2 style="margin-bottom: 1rem;">Analyse laeuft...</h2>
<progress id="progress-bar"></progress>
<p class="progress-label" id="progress-label">Vorbereitung...</p>
<button style="margin-top: 1rem; padding: 0.5rem 1rem; border-radius: 8px; border: 1px solid #555; background: transparent; color: #aaa; cursor: pointer;" id="cancel-btn">Abbrechen</button>
</div>
<!-- Review -->
<div id="view-review" class="view card">
<h2 style="margin-bottom: 0.5rem;">Vorschau aussortierter Fotos</h2>
<p style="color: #888; font-size: 0.85rem; margin-bottom: 1rem;">Klicke "Behalten", um ein Foto von der Liste zu entfernen.</p>
<div id="review-list"></div>
<button class="primary" id="confirm-btn">Alle bestaetigen &amp; verschieben</button>
</div>
<!-- Result -->
<div id="view-result" class="view card">
<h2 style="margin-bottom: 1rem;">Fertig!</h2>
<div id="result-stats"></div>
<button class="primary" id="restart-btn">Neuen Ordner analysieren</button>
</div>
<script>
// --- State ---
let analysisResults = [];
let currentFolder = "";
// --- Helpers ---
function showView(id) {
document.querySelectorAll(".view").forEach(v => v.classList.remove("active"));
document.getElementById(id).classList.add("active");
}
function el(id) { return document.getElementById(id); }
// --- Slider labels ---
["blur", "over", "under", "dup"].forEach(key => {
const input = el(key + "-threshold");
const label = el(key + "-val");
input.addEventListener("input", () => { label.textContent = input.value; });
});
// --- Cancel ---
el("cancel-btn").addEventListener("click", () => location.reload());
el("restart-btn").addEventListener("click", () => location.reload());
// --- Start Analysis ---
el("start-btn").addEventListener("click", async () => {
const folder = el("folder-input").value.trim();
if (!folder) { alert("Bitte einen Ordnerpfad eingeben."); return; }
currentFolder = folder;
showView("view-progress");
el("progress-label").textContent = "Analyse startet...";
const payload = {
folder,
blur_threshold: parseFloat(el("blur-threshold").value),
over_threshold: parseFloat(el("over-threshold").value),
under_threshold: parseFloat(el("under-threshold").value),
dup_threshold: parseInt(el("dup-threshold").value, 10),
use_ai: el("toggle-ai").checked,
};
let data;
try {
const res = await fetch("/analyze", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(payload),
});
if (!res.ok) {
const err = await res.json();
throw new Error(err.detail || "Serverfehler");
}
data = await res.json();
} catch (e) {
alert("Fehler: " + e.message);
showView("view-start");
return;
}
analysisResults = data.results;
if (analysisResults.length === 0) {
renderResult(0);
return;
}
if (el("toggle-review").checked) {
renderReview();
showView("view-review");
} else {
await doMove();
}
});
// --- Review ---
function renderReview() {
const list = el("review-list");
list.textContent = "";
analysisResults.forEach((item, idx) => {
const name = item.path.split("/").pop();
const row = document.createElement("div");
row.className = "photo-item";
row.id = "item-" + idx;
const img = document.createElement("img");
img.src = "/preview?path=" + encodeURIComponent(item.path);
img.alt = name;
img.onerror = function() { this.style.display = "none"; };
const info = document.createElement("div");
info.className = "photo-info";
const nameEl = document.createElement("div");
nameEl.className = "photo-name";
nameEl.textContent = name;
const reasonsEl = document.createElement("div");
reasonsEl.className = "photo-reasons";
reasonsEl.textContent = item.reasons.join(", ");
info.appendChild(nameEl);
info.appendChild(reasonsEl);
const btn = document.createElement("button");
btn.className = "keep-btn";
btn.textContent = "Behalten";
btn.addEventListener("click", () => {
row.classList.toggle("kept");
btn.textContent = row.classList.contains("kept") ? "Aussortieren" : "Behalten";
});
row.appendChild(img);
row.appendChild(info);
row.appendChild(btn);
list.appendChild(row);
});
}
// --- Confirm & Move ---
el("confirm-btn").addEventListener("click", () => doMove(false));
async function doMove(skipReview = true) {
const toMove = skipReview
? analysisResults.map(r => r.path)
: analysisResults.filter((_, idx) => {
const row = el("item-" + idx);
return row && !row.classList.contains("kept");
}).map(r => r.path);
if (toMove.length === 0) {
renderResult(0);
return;
}
showView("view-progress");
el("progress-label").textContent = "Verschiebe " + toMove.length + " Fotos...";
try {
const res = await fetch("/move", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ paths: toMove, folder: currentFolder }),
});
const data = await res.json();
renderResult(data.moved.length);
} catch (e) {
alert("Fehler beim Verschieben: " + e.message);
showView("view-review");
}
}
// --- Result ---
function renderResult(movedCount) {
const stats = el("result-stats");
stats.textContent = "";
const byReason = {};
analysisResults.forEach(item => {
item.reasons.forEach(r => { byReason[r] = (byReason[r] || 0) + 1; });
});
function addStat(label, value) {
const row = document.createElement("div");
row.className = "stat";
const l = document.createElement("span");
l.textContent = label;
const v = document.createElement("span");
v.className = "stat-value";
v.textContent = value;
row.appendChild(l);
row.appendChild(v);
stats.appendChild(row);
}
addStat("Analysierte Fotos", analysisResults.length);
addStat("Aussortiert", movedCount);
Object.entries(byReason).forEach(([reason, count]) => {
addStat(" davon: " + reason, count);
});
const hint = document.createElement("p");
hint.className = "hint";
hint.textContent = "Aussortierte Fotos befinden sich im Unterordner _aussortiert/";
stats.appendChild(hint);
showView("view-result");
}
</script>
</body>
</html>
```
- [ ] **Step 2: Run the integration test manually**
```bash
python server.py
```
1. The browser opens `http://localhost:8000`
2. Enter the path of a folder containing test photos
3. "Analyse starten": check the progress view
4. Review list: are thumbnails and reasons visible (as text, not HTML)?
5. Toggle "Behalten": does the photo gray out?
6. "Alle bestaetigen": check the `_aussortiert/` folder
- [ ] **Step 3: Run all tests**
```bash
pytest tests/ -v
```
Expected output: `8 passed`
- [ ] **Step 4: Commit**
```bash
git add index.html
git commit -m "feat: complete frontend with review flow and XSS-safe DOM rendering"
```
---
## Task 9: README
**Files:**
- Create: `README.md`
- [ ] **Step 1: Create README.md**
````markdown
# Foto-Kurator
Automatically sorts out photos based on quality criteria.
## Setup
```bash
pip install -r requirements.txt
```
For AI analysis (optional):
```bash
cp .env.example .env
# add your ANTHROPIC_API_KEY to .env
```
## Run
```bash
python server.py
```
The browser opens http://localhost:8000 automatically.
## Criteria
- **Blurry** - Laplacian variance (adjustable)
- **Overexposed / underexposed** - mean brightness (adjustable)
- **Duplicates** - perceptual hashing (adjustable)
- **AI analysis** - Claude Vision API (optional, approx. 0.003 EUR per photo)
Sorted-out photos end up in `_aussortiert/` inside the analyzed folder.
````
- [ ] **Step 2: Commit**
```bash
git add README.md
git commit -m "docs: add README with setup instructions"
```
---
## Done
After Task 9 the app is complete:
- `python server.py` starts everything; the browser opens automatically
- Enter a folder path and start the analysis
- Optional: review before anything is moved
- Optional: AI analysis via Claude Vision
- All thresholds adjustable via sliders
- Sorted-out photos land safely in `_aussortiert/`