
ui.server

Web UI for ground-truth label collection and live prediction monitoring. The shared runtime classes (ActivityMonitor, _LabelSuggester) live in taskclf.ui.runtime, while tray-specific orchestration stays in taskclf.ui.tray.TrayLabeler.

Launch

taskclf ui
taskclf ui --port 8741 --model-dir models/run_20260226

For frontend development with hot reload:

taskclf ui --dev

For browser-based full-stack development with frontend HMR plus backend auto-reload:

taskclf ui --dev --browser

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--port` | `8741` | Port for the web server |
| `--model-dir` | (none) | Model bundle for live predictions |
| `--aw-host` | `http://localhost:5600` | ActivityWatch server URL |
| `--poll-seconds` | `60` | Seconds between AW polls |
| `--title-salt` | `taskclf-default-salt` | Salt for hashing window titles |
| `--data-dir` | `data/processed` (ephemeral in `--dev`) | Processed data directory; omit with `--dev` for an auto-cleaned temp dir |
| `--transition-minutes` | `3` | Minutes before suggesting a label change |
| `--idle-transition-minutes` | `1` | Minutes before a lockscreen/idle transition fires (separate from general transitions) |
| `--dev` | off | Start the Vite dev server for frontend hot reload; in browser mode, the FastAPI backend also runs with auto-reload. Uses an ephemeral data dir unless `--data-dir` is set |

Panels

  • Label -- Form with date/time pickers, CoreLabel dropdown, confidence slider, and user ID input.
  • Recent -- Quick-label with preset durations (now / 1 / 5 / 10 / 15 / 30 / 60 min) or a custom duration input supporting seconds, minutes, hours, and days. "now" labels the span from the last label's end_ts to the current moment (falling back to 1 minute if no last label exists or the span would be < 1 minute); the button dynamically shows the duration (e.g. "now (12m)"). Other values label the corresponding trailing window. The "Extend until next label" checkbox (on by default) sets extend_forward=true on the new label; when the next label is created, this label's end_ts is automatically stretched to meet the next label's start_ts, producing contiguous coverage without gaps. For a zero-duration "now" label with extend_forward=true, the server performs a same-timestamp handoff by ending the currently active same-user label at now, then creating the new open-ended label at now, so no overlap dialog is needed in the common switch-label-now flow. The History view immediately renders the new span as open-ended (start – Now, duration until next label) so users get instant confirmation that the label was recorded before the next label closes the span. Shows a live ActivityWatch summary when available. The footer switches between a compact "Last: Label Nm ago" summary for completed labels and "Current: Label since Nm ago" for any label whose extend_forward coverage is still active. Quick-label keeps those two notions separate: GET /api/labels/current detects the active extend_forward label that drives the footer and Stop current label control, while GET /api/labels?limit=1 still returns the span with the latest end_ts first for gap-fill and "last ended" behavior, including when overlapping spans exist (allow_overlap). That keeps the stop action visible for running labels even if some completed overlapping span ends later. 
When a current label is active, the footer offers a two-step Stop current label action that closes the running span at the moment the user confirms, without deleting the label that has already been recorded. This applies both to zero-duration "from now" labels and to earlier backfilled spans that were saved with extend_forward=true and are still the active label. On success, the compact badge immediately drops the stale manual label and falls back to the latest passive live-status label from the model. The gap shortcut is hidden while a current extend_forward label is active so the quick-label surface does not imply there is unlabeled time to backfill. When the last completed label is shown, the gap button (e.g. gap 5m) reflects unlabeled time since that label’s end_ts and refreshes on a ~30s wall-clock tick so the duration stays current while the window stays open. After a successful label, a brief "Saved" flash appears (click to dismiss instantly); the grid stays open for rapid consecutive labeling. Non-overlap quick-label failures now render a persistent inline error banner with Copy error and Close actions instead of auto-dismissing after a timeout. On overlap errors, instead of a dead-end flash the grid shows a compact confirmation prompt listing the conflicting label names (e.g. "Overlaps 3 labels: Write, Debug, Review") with the affected time range shown below (e.g. "(10:50–11:20 will be replaced)"), plus Overwrite/Keep All/Cancel buttons. For a single conflict the per-span time range is shown inline; for multiple conflicts a "show details" toggle reveals per-span times. Choosing Overwrite re-submits the label with overwrite: true, which truncates, splits, or removes the conflicting existing span(s) to make room for the new label. Choosing Keep All re-submits with allow_overlap: true, preserving all existing and new labels on the overlapping time range. If lastLabel changes while the prompt is visible (e.g. 
from another tab or WebSocket), the prompt is automatically dismissed since the conflict data may be stale.
  • Queue -- Pending LabelRequest items sorted by confidence (lowest first). Shows time range, predicted label, confidence, and reason.
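The "now" fallback rule in the Recent panel can be sketched as follows. This is a minimal illustration; `now_span` and `MIN_NOW_SPAN` are hypothetical names, not the actual frontend implementation:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional, Tuple

MIN_NOW_SPAN = timedelta(minutes=1)

def now_span(last_end_ts: Optional[datetime], now: datetime) -> Tuple[datetime, datetime]:
    """Span labeled by the "now" button: from the last label's end_ts to the
    current moment, falling back to a trailing 1-minute window when no last
    label exists or the gap would be shorter than 1 minute."""
    if last_end_ts is None or now - last_end_ts < MIN_NOW_SPAN:
        return now - MIN_NOW_SPAN, now
    return last_end_ts, now

now = datetime(2026, 3, 1, 12, 0, tzinfo=timezone.utc)
start, end = now_span(now - timedelta(minutes=12), now)
minutes = int((end - start).total_seconds() // 60)  # the button would render as "now (12m)"
```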

Live Features

  • Live badge (compact) -- Header pill showing the current label/app and connection dot. Visible in the collapsed tray window. When a suggest_label event arrives, the compact badge immediately switches to the suggested label as a frontend-only display override. If the suggestion is skipped, the pill restores the pre-suggestion display; if the suggestion is accepted, the pill keeps the accepted suggestion until a fresher prediction or live_status update supersedes it.
  • State panel -- Tabbed panel with three views, selected via a segmented control at the top:
  • System tab -- Internal-state debug panel with a collapsible accordion layout: each section header shows an inline summary badge (e.g., current app, predicted label, connection status) so all states are scannable at a glance. Click any header to expand its detail rows. Activity Monitor and Last Prediction default to open; all other sections start collapsed. Eight sections:
    • Activity Monitor -- summary: state · current_app. Details: state, current_app, since, poll_interval, poll_count, last_poll timestamp, uptime. When a transition candidate exists: candidate_app, candidate_progress with duration/threshold/percentage and a visual progress bar.
    • Last Prediction -- summary: mapped_label confidence%. Details: label, mapped_label, confidence (color-coded green/red at 50% threshold), ts, trigger_app.
    • Model -- summary: loaded/not loaded (color-coded). Details: loaded status, model_dir, schema_hash, suggested label, suggestion_conf. When a model is loaded and the tray exposes model_dir, the panel also shows bundle-saved validation metrics (val_macro_f1, val_weighted_f1, training range, top confusion pairs) from GET /api/train/models/current/inspect — static metrics on disk, not a live replay of the test split.
    • Transitions -- summary: transition count. Details: total count, last transition details: prev → new apps, block time range, fired_at timestamp.
    • Active Suggestion -- appears when the model suggests a label on transition. Summary: suggested confidence%. Details: suggested, confidence, reason, old_label, block time range.
    • Activity Source -- summary: provider state (checking, ready, setup required) with provider-neutral diagnostics. Details: provider name, endpoint, resolved source_id, last_sample_count, last-sample app distribution, and a setup callout when the configured source is unavailable. The setup callout is intentionally non-blocking: manual label creation, editing, and suggestion accept/skip flows remain enabled even when summaries are unavailable.
    • WebSocket -- summary: connection status (color-coded). Details: status, messages total, per-type breakdown (st/pred/tray/sug), last_received timestamp, reconnects count, connected_since.
    • Config -- summary: dev/prod. Details: data_dir, ui_port, dev_mode, labels_saved count.
  • Training tab -- Model training interface with data readiness checks, a training form (date range, boost rounds, class weight, synthetic toggle), real-time progress via WebSocket, result display (macro/weighted F1), and a model bundle list. Each bundle row can expand Bundle metrics to lazy-load the same bundle-saved validation inspection as GET /api/train/models/{model_id}/inspect. Validation/start/cancel failures and failed-run messages use the same persistent inline error banner with Copy error and Close controls. See training.md for endpoint details.
  • History tab -- Single-day view with date navigation (prev/next arrows and a date picker). Shows a full-day timeline strip (00:00–23:59) with color-coded labeled segments and clickable unlabeled gaps. Click any label row to expand an inline editor showing the ActivityContext for that time window (apps used, input stats), a label-change grid (current label highlighted), and a delete button with confirmation. Label time edits are minute-based and use end-exclusive semantics (15:00-15:01 means one minute); equal start/end values (15:00-15:00) are rejected. Row display stays compact at HH:MM, but includes seconds when either boundary has non-zero seconds. Click any unlabeled gap row to expand an inline editor with time range inputs (pre-filled to the gap boundaries, adjustable to label a sub-range), ActivityContext for the selected range, and a label picker grid; selecting a label creates a new label span for the chosen sub-range. Navigating to a past date with no labels shows the entire day as a single labelable gap. The active date refetches immediately on labels_changed WebSocket events, so manual labels, accepted suggestions, edits, deletes, and imports appear without requiring a tab switch. Provides a dedicated review surface separate from the quick-label popup, with the full panel height available for browsing.
  • Suggestion banner -- When present, it is shown at the top of the label column (above the duration/time picker and the rolling activity summary) so the model prompt is visible first. It appears when a new inferred label arrives from suggest_label. The banner shows the current label, suggested label, confidence, and the applicable block time range (block_start -> block_end), using a compact local-time display and adding the date when the suggested range crosses midnight locally. Below that, an inline activity summary for that same range reuses the same surface as the main label panel (ActivitySummary via GET /api/activity/summary): top apps, input-rate hints when available, and bucket/session coverage. If the configured activity source is unavailable, the banner shows the same non-blocking setup callout used elsewhere; if the range is simply empty, it shows No activity data for this window. Label names use the same color cues as the label grid. Clicking Use suggestion immediately writes the suggested label to label history. When that suggestion lands inside the effective coverage of the current extend_forward label, the backend automatically splits the running label into a before-fragment and a resumed open-ended fragment after the suggestion so the active label continues past the suggested block. Other same-user overlaps still use the same Overwrite All / Keep All / Cancel prompt as quick-labeling; retries use POST /api/notification/accept with overwrite or allow_overlap so the span stays provenance="suggestion". Skip calls the notification-skip flow without creating labels. Manual quick-label saves (POST /api/labels) do not dismiss the banner; it clears when the suggestion is accepted, skipped, or cleared by tray-side auto-save paths. The compact badge mirrors the suggestion immediately while the banner is present, reverts to its pre-suggestion display on skip, and stays on the accepted suggestion until a fresher explicit badge signal arrives. 
Optional client auto-dismiss is configured by suggestion_banner_ttl_seconds in config.toml (see core/config.md and GET /api/config/user); 0 disables the timer. Save/skip failures surface as persistent inline errors with Copy error and Close actions rather than disappearing on a timer.
  • Auto-save BreakIdle -- When a completed activity block is detected as idle (lockscreen app was dominant, or the model suggested BreakIdle), the tray auto-saves the label with provenance="auto_idle" and publishes a label_created event. No user confirmation is required since the user was away. Lockscreen/idle transitions use a separate, faster threshold (idle_transition_minutes, default 1 min) so breaks are detected quickly without affecting the general transition cadence (transition_minutes, default 2 min). The fast threshold applies in both directions: when the lockscreen app becomes dominant, and when the user returns from lockscreen. Matching uses normalized app IDs: com.apple.loginwindow (macOS), com.microsoft.LockApp/com.microsoft.LogonUI (Windows), org.gnome.ScreenSaver, org.gnome.Shell, org.i3wm.i3lock, org.swaywm.swaylock, org.jwz.xscreensaver, org.freedesktop.light-locker, org.suckless.slock (Linux).
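The two-threshold transition logic above can be sketched like this. The app-ID set comes from the list in the text; `threshold_seconds` is a hypothetical helper, and the defaults are passed in rather than hard-coded:

```python
# Normalized lockscreen/idle app IDs from the list above; matching here is a
# simple case-insensitive set lookup (a sketch, not the exact implementation).
LOCKSCREEN_APP_IDS = {
    "com.apple.loginwindow",
    "com.microsoft.lockapp", "com.microsoft.logonui",
    "org.gnome.screensaver", "org.gnome.shell",
    "org.i3wm.i3lock", "org.swaywm.swaylock",
    "org.jwz.xscreensaver", "org.freedesktop.light-locker",
    "org.suckless.slock",
}

def threshold_seconds(candidate_app: str, prev_app: str,
                      transition_minutes: float,
                      idle_transition_minutes: float) -> float:
    """Pick the faster idle threshold in both directions: when a lockscreen
    app becomes dominant and when the user returns from one."""
    idle = (candidate_app.lower() in LOCKSCREEN_APP_IDS
            or prev_app.lower() in LOCKSCREEN_APP_IDS)
    return (idle_transition_minutes if idle else transition_minutes) * 60
```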

Architecture

The UI is a SolidJS single-page application served by a FastAPI backend:

  • REST endpoints (/api/labels, /api/labels/export, /api/labels/import, /api/labels/stats, /api/queue, /api/features/summary, /api/activity/summary, /api/aw/live, /api/config/labels, /api/config/user, /api/tray/pause, /api/tray/state, /api/train/*) handle label CRUD, import/export, stats, queue management, user configuration, tray control, model training, and data queries. See training.md for the /api/train/* endpoints (including bundle-only model inspection under /api/train/models/.../inspect).
  • GET /api/labels accepts optional limit (default 50, max 500), date (ISO-8601 date string, e.g. 2025-03-07), and optional range_start / range_end (ISO-8601 UTC bounds) query parameters. After filters are applied, results are sorted by end_ts descending (latest-ended first). When date is provided, only labels overlapping that day are returned. The History tab uses this to fetch labels for the selected date.
  • GET /api/labels/current returns the most recently started label whose extend_forward coverage still contains "now", or null when none exists. That includes zero-duration "from now" labels (start_ts == end_ts) and earlier spans saved with extend_forward=true when no later same-user label has ended that coverage yet. Quick-label uses this endpoint for the footer's current badge and stop action, so the active label remains discoverable even when GET /api/labels is ordered by latest end_ts.
  • POST /api/labels accepts optional extend_forward, overwrite, and allow_overlap booleans. extend_forward persists the label with extend_forward=true; when the next label is created for the same user, this label's end_ts is automatically stretched to the new label's start_ts, producing contiguous coverage. For zero-duration extend_forward labels (start_ts == end_ts, used by "label from now"), the server first truncates the currently active same-user span at that same timestamp, then appends the new label, ensuring a contiguous no-overlap handoff. The quick-label UI sets this flag by default. Before overlap checks, same-user boundaries within 1 ms are snapped together so timestamps that passed through JavaScript Date do not fail with false microsecond overlaps. When overwrite is true, conflicting same-user spans are truncated, split, or removed to make room for the new label (no 409 is returned). When allow_overlap is true, the overlap check is skipped entirely and multiple labels are allowed to coexist on the same time range; this is useful for multi-task periods. When both are false (default), an overlap returns 409 with structured conflict details: {"detail": {"error": "...", "conflicting_start_ts": "...", "conflicting_end_ts": "..."}} so the frontend can prompt the user to overwrite or keep all.
  • PUT /api/labels changes the label on an existing span identified by start_ts + end_ts. Optional new_start_ts, new_end_ts, and extend_forward fields can also change the time range and running/current state. The quick-label "Stop current label" action uses new_end_ts=<click time> with extend_forward=false to close the active label without deleting it. Returns 404 if no matching span exists.
  • DELETE /api/labels removes a span identified by start_ts + end_ts. Returns 404 if no matching span exists.
  • GET /api/labels/export downloads all label spans as a CSV file (text/csv). Returns 404 if no labels file exists or the file contains no spans.
  • POST /api/labels/import accepts a multipart CSV file upload (file) and an optional strategy form field ("merge" or "overwrite", default "merge"). In merge mode, imported spans are deduplicated against existing labels by (start_ts, end_ts, user_id) and overlap-checked; conflicts return 409. In overwrite mode, all existing labels are replaced. Returns {"status": "ok", "imported": N, "total": M, "strategy": "merge"|"overwrite"}. Returns 422 on invalid CSV or strategy.
  • GET /api/labels/stats returns labeling statistics for a given day. Accepts an optional date query parameter (ISO-8601 date string, defaults to today UTC). Returns {"date": "2026-03-01", "count": 5, "total_minutes": 75.0, "breakdown": {"Build": 45.0, "Meet": 20.0, "Write": 10.0}}.
  • GET /api/activity/summary is the frontend-facing activity summary endpoint. Query params: start, end (ISO-8601). It returns the usual aggregate stats plus activity_provider, recent_apps, range_state, and message. range_state="ok" means summary data exists, range_state="no_data" means the source is reachable but the requested window is empty, and range_state="provider_unavailable" means the configured source is not ready and the response includes setup guidance. Manual labeling remains available regardless of range_state.
  • POST /api/tray/pause toggles the monitoring pause state. Returns {"status": "ok", "paused": true/false} when connected to a tray, or {"status": "unavailable", "paused": false} when no tray callbacks are configured.
  • GET /api/tray/state returns tray availability and pause state. When the tray backend provides get_tray_state, the payload may also include model_dir and models_dir (each a string path or null). models_dir is set when --models-dir is configured so clients (e.g. Electron) can enable Advanced → Edit Inference Policy.
  • POST /api/notification/accept confirms an inferred label suggestion and writes it to labels.parquet with provenance="suggestion". Required body: {"block_start": "...", "block_end": "...", "label": "..."}. Optional overwrite and allow_overlap booleans match POST /api/labels. When the suggested block falls inside the effective coverage of the current same-user extend_forward label, the server automatically uses overwrite-style splitting so the prior label resumes after the accepted suggestion. Other overlaps still return 409 with the same structured conflict details as manual labeling, and the UI can retry with overwrite=true or allow_overlap=true without changing provenance.
  • POST /api/notification/skip dismisses the current suggestion without saving a label and broadcasts suggestion_cleared with reason "skipped" so all connected clients clear the prompt.
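The overlap decision behind POST /api/labels can be sketched as pure logic. This is an illustrative model of the documented behavior, not the server code; the error string is a placeholder, and the real handler also performs the 1 ms boundary snapping and overwrite-time truncation/splitting:

```python
from datetime import datetime, timezone

def check_overlap(new_start, new_end, existing, *, overwrite=False, allow_overlap=False):
    """Conflict decision for one user's spans. Returns None when the write may
    proceed, or the structured 409 detail on conflict. existing is a list of
    (start, end) datetime pairs."""
    if overwrite or allow_overlap:
        # overwrite truncates/splits/removes conflicts server-side;
        # allow_overlap skips the check so labels coexist.
        return None
    for start, end in existing:
        if new_start < end and start < new_end:  # end-exclusive interval overlap
            return {"detail": {
                "error": "label overlaps an existing span",
                "conflicting_start_ts": start.isoformat(),
                "conflicting_end_ts": end.isoformat(),
            }}
    return None

UTC = timezone.utc
existing = [(datetime(2026, 3, 1, 10, 50, tzinfo=UTC),
             datetime(2026, 3, 1, 11, 20, tzinfo=UTC))]
conflict = check_overlap(datetime(2026, 3, 1, 11, 0, tzinfo=UTC),
                         datetime(2026, 3, 1, 11, 30, tzinfo=UTC), existing)
```

Note that the emitted timestamps carry the explicit `+00:00` suffix, matching the timestamp-format contract below, and that end-exclusive semantics mean a label starting exactly at an existing span's end does not conflict.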
  • WebSocket (/ws/predictions) streams live events from the ActivityMonitor:
  • status -- every poll cycle: state ("collecting" or "paused"), current_app, current_app_since, candidate_app, candidate_duration_s, transition_threshold_s, poll_seconds, poll_count, last_poll_ts, uptime_s, and nested activity_provider (provider_id, provider_name, state, summary_available, endpoint, source_id, last_sample_count, last_sample_breakdown, setup_title, setup_message, setup_steps, help_url). Legacy aw_connected, aw_bucket_id, aw_host, last_event_count, and last_app_counts are still emitted temporarily as compatibility aliases. When monitoring is paused, state is "paused" and polling/transition detection is skipped.
  • tray_state -- every poll cycle: model_loaded, model_dir, model_schema_hash, suggested_label, suggested_confidence, transition_count, last_transition (with prev_app, new_app, block_start, block_end, fired_at), labels_saved_count, data_dir, ui_port, dev_mode, paused.
  • initial_app -- once on startup when the first dominant app is detected: app, ts. Allows the UI to prompt the user to label the pre-start period that would otherwise be unlabeled.
  • prediction -- on app transition with model suggestion: label, confidence, ts, mapped_label, current_app. Reserved for actual model outputs; manual labels no longer use this event type.
  • no_model_transition -- on app transition without a loaded model: current_app, ts, block_start, block_end. The frontend uses this to distinguish "no model loaded" from "model predicted unknown"; LiveBadge shows "No Model" instead of "Unknown Label" when trayState.model_loaded === false.
  • label_created -- when a label with extend_forward=true is created via POST /api/labels, or when the tray auto-saves a BreakIdle label (lockscreen/idle detection): label, confidence, ts (end), start_ts, extend_forward. The frontend maps this to a Prediction-compatible object so LiveBadge and StatePanel update automatically.
  • label_stopped -- when an open-ended running label is closed via PUT /api/labels: ts (the new end timestamp). Clients clear any stale manual badge prediction at or before that timestamp, then fall back to the latest live_status label if available.
  • labels_changed -- on any label-history mutation that should invalidate day views: reason, ts. Published after POST /api/labels, PUT /api/labels, DELETE /api/labels, POST /api/labels/import, POST /api/notification/accept, and tray-side auto-saved BreakIdle labels.
  • suggestion_cleared -- published after successful suggestion acceptance (POST /api/notification/accept), dismissals (POST /api/notification/skip), and auto-saved BreakIdle labels: reason (e.g. "label_saved", "skipped", "auto_saved_breakidle"). Manual POST /api/labels saves do not publish this event. Clients clear the active suggestion on receipt. The compact badge treats "skipped" as a restore-to-previous-display signal; accepted/auto-saved clears keep the assumed suggestion visible until a fresher prediction or live_status event arrives. The frontend may also auto-dismiss after suggestion_banner_ttl_seconds from config.toml (via GET /api/config/user); 0 disables that timer.
  • suggest_label -- on app transition with model suggestion: suggested, confidence, reason, old_label, block_start, block_end.
  • prompt_label -- on task transition with labeling prompt: prev_app, new_app, block_start, block_end, duration_min, suggested_label, suggestion_text. The structured block_start / block_end bounds remain UTC; the human-readable suggestion_text range is rendered in the user's local timezone for notification display. Frontend notification surfaces may also derive a second exact local-time range line directly from block_start / block_end for higher-precision display.
  • label_grid_show -- triggered by POST /api/window/show-label-grid: type ("label_grid_show", no other fields).
  • train_progress -- during training: job_id, step, progress_pct, message.
  • train_complete -- on training success: job_id, metrics (macro_f1, weighted_f1), model_dir.
  • train_failed -- on training failure: job_id, error.
  • unlabeled_time -- every poll cycle when unlabeled time exists: unlabeled_minutes, text, last_label_end, ts.
  • gap_fill_prompt -- at idle return (>5 min), session start, or post-acceptance: trigger, unlabeled_minutes, text, last_label_end, ts.
  • gap_fill_escalated -- when unlabeled time exceeds threshold: unlabeled_minutes, threshold_minutes.
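The compact-badge override semantics of suggest_label and suggestion_cleared can be sketched as a small client-side reducer. The function and state shape are hypothetical (the real frontend is SolidJS), but the transitions mirror the documented behavior:

```python
def reduce_badge(state: dict, event: dict) -> dict:
    """A suggestion overrides the display; a "skipped" clear restores the
    prior display; other clears keep the suggestion until a fresher
    prediction or label_created event supersedes it."""
    t = event.get("type")
    if t == "suggest_label":
        return {"prev": state.get("shown"), "shown": event["suggested"]}
    if t == "suggestion_cleared":
        if event.get("reason") == "skipped":
            return {"prev": None, "shown": state.get("prev")}
        return {"prev": None, "shown": state.get("shown")}
    if t in ("prediction", "label_created"):
        return {"prev": None, "shown": event["label"]}
    return state

s = reduce_badge({"prev": None, "shown": "Write"},
                 {"type": "suggest_label", "suggested": "Debug"})
```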

Backpressure policy: Each WebSocket subscriber has a 256-event queue. When the queue is full, the oldest event is evicted to make room for the new one. The subscriber is never silently dropped; it continues receiving events at the cost of missing stale ones.
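The evict-oldest policy can be sketched with an asyncio.Queue. This is an illustration of the documented semantics, not the server's broadcaster; the real queue holds 256 events (3 here so the demo is short):

```python
import asyncio

async def publish(queue: asyncio.Queue, event: dict) -> None:
    """Evict-oldest backpressure: when a subscriber queue is full, discard the
    stalest pending event rather than blocking the broadcaster or dropping
    the subscriber."""
    if queue.full():
        queue.get_nowait()  # evict the oldest event to make room
    queue.put_nowait(event)

async def demo() -> list:
    q: asyncio.Queue = asyncio.Queue(maxsize=3)
    for i in range(5):
        await publish(q, {"seq": i})
    return [q.get_nowait()["seq"] for _ in range(q.qsize())]

received = asyncio.run(demo())  # only the newest events survive
```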

Timestamp format: All ISO-8601 timestamps emitted by the server (REST responses and WebSocket events) include an explicit UTC timezone suffix (+00:00). Incoming timestamps in request bodies are accepted with or without timezone info and normalized to timezone-aware UTC via ts_utc_aware_get(). All internal comparisons, filters, and storage operations use aware UTC. Legacy naive timestamps in existing Parquet files are treated as UTC and normalized to aware UTC on read.
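The normalization rule can be sketched as follows. `to_utc_aware` is an illustrative stand-in for the `ts_utc_aware_get()` behavior described above, not the actual helper:

```python
from datetime import datetime, timezone

def to_utc_aware(value):
    """Normalize an incoming timestamp (ISO-8601 string or datetime) to
    timezone-aware UTC: naive values are treated as UTC, aware values are
    converted to UTC."""
    ts = datetime.fromisoformat(value) if isinstance(value, str) else value
    if ts.tzinfo is None:
        return ts.replace(tzinfo=timezone.utc)
    return ts.astimezone(timezone.utc)

ts = to_utc_aware("2026-03-01T15:00:00")  # naive input, treated as UTC
```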

Privacy

The UI never displays raw window titles, keystrokes, or URLs. Only aggregated metrics and application identifiers are shown. Transition notifications (web and desktop fallback) redact app names by default (privacy_notifications=True); set to False to show raw app identifiers. Desktop fallback notifications can be disabled entirely with notifications_enabled=False.
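The redaction default can be sketched like this; the function name and message text are hypothetical, only the `privacy_notifications` flag comes from the source:

```python
def transition_notification(prev_app: str, new_app: str,
                            privacy_notifications: bool = True) -> str:
    """Notification body for an app transition. App names are redacted by
    default (privacy_notifications=True); False shows raw identifiers."""
    if privacy_notifications:
        return "App transition detected - label this block?"
    return f"Switched from {prev_app} to {new_app} - label this block?"

redacted = transition_notification("com.example.editor", "com.example.browser")
```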

taskclf.ui.server

FastAPI backend for the taskclf labeling web UI.

Provides REST endpoints for label CRUD, queue management, feature summaries, and model training, plus a WebSocket channel for live prediction streaming and training progress.

create_app(*, data_dir=Path(DEFAULT_DATA_DIR), models_dir=None, aw_host=DEFAULT_AW_HOST, title_salt=DEFAULT_TITLE_SALT, event_bus=None, window_api=None, on_label_saved=None, on_model_trained=None, on_suggestion_accepted=None, pause_toggle=None, is_paused=None, tray_actions=None, get_tray_state=None, get_activity_provider_status=None)

Build and return the FastAPI application.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `data_dir` | `Path` | Path to the processed data directory. | `Path(DEFAULT_DATA_DIR)` |
| `models_dir` | `Path \| None` | Path to the directory containing model bundles. | `None` |
| `aw_host` | `str` | ActivityWatch server URL. | `DEFAULT_AW_HOST` |
| `title_salt` | `str` | Salt for hashing window titles. | `DEFAULT_TITLE_SALT` |
| `event_bus` | `EventBus \| None` | Shared event bus for WebSocket broadcasting. | `None` |
| `window_api` | `Any` | Optional WindowAPI for pywebview window control. | `None` |
| `on_label_saved` | `Callable[[], None] \| None` | Optional callback invoked after a label is successfully saved (via POST /api/labels or POST /api/notification/accept). | `None` |
| `on_model_trained` | `Callable[[str], None] \| None` | Optional callback invoked with the model run directory path after training completes successfully. | `None` |
| `on_suggestion_accepted` | `Callable[[], None] \| None` | Optional callback invoked after a transition suggestion is accepted via POST /api/notification/accept. Used by TrayLabeler to trigger gap-fill prompting when adjacent unlabeled time exists. | `None` |
| `pause_toggle` | `Callable[[], bool] \| None` | Optional callback to toggle the pause state; returns the new paused boolean. | `None` |
| `is_paused` | `Callable[[], bool] \| None` | Optional callable returning the current paused state. | `None` |
| `tray_actions` | `dict[str, Callable[..., Any]] \| None` | Optional mapping of action names to callbacks for the tray menu. | `None` |
| `get_tray_state` | `Callable[[], dict[str, Any]] \| None` | Optional callable returning the full tray state dictionary. | `None` |
| `get_activity_provider_status` | `Callable[[], dict[str, Any]] \| None` | Optional callable returning the latest cached activity-source status snapshot from the runtime monitor. | `None` |
Source code in src/taskclf/ui/server.py
1778
1779
1780
1781
1782
1783
1784
1785
1786
1787
1788
1789
1790
1791
1792
1793
1794
1795
1796
1797
1798
1799
1800
1801
1802
1803
1804
1805
1806
1807
1808
1809
1810
1811
1812
1813
1814
1815
1816
1817
1818
1819
1820
1821
1822
1823
1824
1825
1826
1827
1828
1829
1830
1831
1832
1833
1834
1835
1836
1837
1838
1839
1840
1841
1842
1843
1844
1845
1846
1847
1848
1849
1850
1851
1852
1853
1854
1855
1856
1857
1858
1859
1860
1861
1862
1863
1864
1865
1866
1867
1868
1869
1870
1871
1872
1873
def create_app(
    *,
    data_dir: Path = Path(DEFAULT_DATA_DIR),
    models_dir: Path | None = None,
    aw_host: str = DEFAULT_AW_HOST,
    title_salt: str = DEFAULT_TITLE_SALT,
    event_bus: EventBus | None = None,
    window_api: Any = None,
    on_label_saved: Callable[[], None] | None = None,
    on_model_trained: Callable[[str], None] | None = None,
    on_suggestion_accepted: Callable[[], None] | None = None,
    pause_toggle: Callable[[], bool] | None = None,
    is_paused: Callable[[], bool] | None = None,
    tray_actions: dict[str, Callable[..., Any]] | None = None,
    get_tray_state: Callable[[], dict[str, Any]] | None = None,
    get_activity_provider_status: Callable[[], dict[str, Any]] | None = None,
) -> FastAPI:
    """Build and return the FastAPI application.

    Args:
        data_dir: Path to the processed data directory.
        models_dir: Path to the directory containing model bundles.
        aw_host: ActivityWatch server URL.
        title_salt: Salt for hashing window titles.
        event_bus: Shared event bus for WebSocket broadcasting.
        window_api: Optional ``WindowAPI`` for pywebview window control.
        on_label_saved: Optional callback invoked after a label is
            successfully saved (via ``POST /api/labels`` or
            ``POST /api/notification/accept``).
        on_model_trained: Optional callback invoked with the model run
            directory path after training completes successfully.
        on_suggestion_accepted: Optional callback invoked after a
            transition suggestion is accepted via
            ``POST /api/notification/accept``.  Used by ``TrayLabeler``
            to trigger gap-fill prompting when adjacent unlabeled time
            exists.
        pause_toggle: Optional callback to toggle pause state; returns
            new paused boolean.
        is_paused: Optional callable returning current paused state.
        tray_actions: Optional mapping of action names to callbacks for the tray menu.
        get_tray_state: Optional callable returning the full tray state dictionary.
        get_activity_provider_status: Optional callable returning the latest
            cached activity-source status snapshot from the runtime monitor.
    """
    bus = event_bus or EventBus()
    labels_path = data_dir / "labels_v1" / "labels.parquet"
    queue_path = data_dir / "labels_v1" / "queue.json"
    user_config = UserConfig(data_dir)
    # Prefer an explicit salt; fall back to the per-user secret, and rebind
    # ``title_salt`` so later closures (the AW feature fallback) use the same value.
    effective_title_salt = title_salt or user_config.title_secret
    title_salt = effective_title_salt
    effective_models_dir = models_dir or Path(DEFAULT_MODELS_DIR)
    train_job = _TrainJob()
    activity_provider = ActivityWatchProvider(
        endpoint=aw_host,
        title_salt=effective_title_salt,
    )

    @asynccontextmanager
    async def lifespan(_app: FastAPI):  # type: ignore[no-untyped-def]
        bus.bind_loop(asyncio.get_running_loop())
        yield

    app = FastAPI(
        title="taskclf",
        docs_url="/api/docs",
        openapi_url="/api/openapi.json",
        lifespan=lifespan,
    )

    async def publish_labels_changed(reason: str) -> None:
        await bus.publish(
            LabelsChangedEventResponse(
                reason=reason,
                ts=_utc_iso(dt.datetime.now(dt.timezone.utc)),
            ).model_dump()
        )

    def _empty_feature_summary() -> FeatureSummaryResponse:
        return FeatureSummaryResponse(
            top_apps=[],
            mean_keys_per_min=None,
            mean_clicks_per_min=None,
            mean_scroll_per_min=None,
            total_buckets=0,
            session_count=0,
        )

    def _feature_summary_for_range(
        start_ts: dt.datetime,
        end_ts: dt.datetime,
    ) -> FeatureSummaryResponse:
        import pandas as pd

        from taskclf.core.store import read_parquet

        empty_resp = _empty_feature_summary()
        frames: list[pd.DataFrame] = []
        dates_missing_parquet: list[dt.date] = []
        current = start_ts.date()
        while current <= end_ts.date():
            fp = _feature_parquet_for_date(data_dir, current)
            if fp is not None and fp.exists():
                tmp = read_parquet(fp)
                if not tmp.empty:
                    frames.append(tmp)
                else:
                    dates_missing_parquet.append(current)
            else:
                dates_missing_parquet.append(current)
            current += dt.timedelta(days=1)

        if dates_missing_parquet:
            try:
                from taskclf.features.build import _fetch_aw_features_for_date

                for date_value in dates_missing_parquet:
                    rows = _fetch_aw_features_for_date(
                        date_value,
                        aw_host=aw_host,
                        title_salt=title_salt,
                    )
                    if rows:
                        frames.append(pd.DataFrame([row.model_dump() for row in rows]))
            except Exception:
                logger.debug("AW live feature fallback unavailable", exc_info=True)

        if not frames:
            return empty_resp

        df = pd.concat(frames, ignore_index=True)
        if "bucket_start_ts" not in df.columns:
            return empty_resp

        summary = generate_label_summary(df, start_ts, end_ts)
        return FeatureSummaryResponse(**summary)

    def _activity_provider_status_snapshot() -> ActivityProviderStatusResponse:
        snapshot = (
            get_activity_provider_status() if get_activity_provider_status else None
        )
        if snapshot is not None:
            validated = ActivityProviderStatusResponse.model_validate(snapshot)
            if validated.state != "checking":
                return validated
        probe_timeout = min(
            2,
            int(
                user_config.as_dict().get(
                    "aw_timeout_seconds", DEFAULT_AW_TIMEOUT_SECONDS
                )
            ),
        )
        return ActivityProviderStatusResponse.model_validate(
            activity_provider.probe_status(timeout_seconds=probe_timeout).to_payload()
        )

    # -- REST: labels ---------------------------------------------------------

    @app.get("/api/labels")
    async def api_op_labels_get(
        limit: int = Query(50, ge=1, le=500),
        date: str | None = Query(
            None, description="ISO-8601 date to filter labels by (e.g. 2025-03-07)"
        ),
        range_start: str | None = Query(
            None, description="UTC start of visible range (ISO-8601)"
        ),
        range_end: str | None = Query(
            None, description="UTC end of visible range (ISO-8601)"
        ),
    ) -> list[LabelResponse]:
        if not labels_path.exists():
            return []
        spans = read_label_spans(labels_path)

        if range_start is not None and range_end is not None:
            try:
                rs = _ensure_utc(dt.datetime.fromisoformat(range_start))
                re_ = _ensure_utc(dt.datetime.fromisoformat(range_end))
            except (ValueError, TypeError) as exc:
                raise HTTPException(
                    status_code=400, detail=f"Invalid range: {exc}"
                ) from exc
            spans = [s for s in spans if s.end_ts > rs and s.start_ts < re_]
        elif date is not None:
            try:
                target = dt.date.fromisoformat(date)
            except ValueError as exc:
                raise HTTPException(
                    status_code=400, detail=f"Invalid date: {date}"
                ) from exc
            day_start = dt.datetime.combine(target, dt.time.min, tzinfo=dt.timezone.utc)
            day_end = dt.datetime.combine(target, dt.time.max, tzinfo=dt.timezone.utc)
            spans = [s for s in spans if s.end_ts > day_start and s.start_ts < day_end]

        # Latest-ended first: matches tray gap-fill (max end_ts) and quick-label gap.
        spans.sort(key=lambda s: s.end_ts, reverse=True)
        return [_label_response_from_span(s) for s in spans[:limit]]
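The range filter above keeps any span that intersects the half-open visible window: a span overlaps `[range_start, range_end)` iff it ends after the window starts and starts before the window ends. A standalone sketch of that predicate (hypothetical names, separate from the handler):

```python
import datetime as dt

UTC = dt.timezone.utc


def overlaps(
    span_start: dt.datetime,
    span_end: dt.datetime,
    range_start: dt.datetime,
    range_end: dt.datetime,
) -> bool:
    """True when [span_start, span_end) intersects [range_start, range_end)."""
    return span_end > range_start and span_start < range_end


rs = dt.datetime(2026, 2, 26, 9, 0, tzinfo=UTC)
re_ = dt.datetime(2026, 2, 26, 17, 0, tzinfo=UTC)

# A span that ends exactly at the window start is excluded (half-open).
print(overlaps(dt.datetime(2026, 2, 26, 8, 0, tzinfo=UTC), rs, rs, re_))  # False
# A span straddling the window end is included.
print(overlaps(
    dt.datetime(2026, 2, 26, 16, 0, tzinfo=UTC),
    dt.datetime(2026, 2, 26, 18, 0, tzinfo=UTC),
    rs, re_,
))  # True
```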

    @app.get("/api/labels/current")
    async def api_op_labels_current_get() -> LabelResponse | None:
        if not labels_path.exists():
            return None

        spans = read_label_spans(labels_path)
        current = [
            span for i, span in enumerate(spans) if _label_span_is_current(spans, i)
        ]
        if not current:
            return None

        return _label_response_from_span(max(current, key=lambda span: span.start_ts))
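`/api/labels/current` reports the newest active span: among spans whose `extend_forward` coverage is still open, the one with the latest `start_ts` wins. A minimal sketch of that tie-break with plain tuples (hypothetical data, not the real `LabelSpan` model):

```python
import datetime as dt

UTC = dt.timezone.utc

# (label, start_ts) pairs standing in for still-active extend_forward spans.
active = [
    ("coding", dt.datetime(2026, 2, 26, 9, 0, tzinfo=UTC)),
    ("meeting", dt.datetime(2026, 2, 26, 10, 30, tzinfo=UTC)),
]

# The span that started most recently is the one reported as current.
current = max(active, key=lambda span: span[1])
print(current[0])  # meeting
```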

    @app.post("/api/labels", status_code=201)
    async def api_op_labels_post(body: LabelCreateRequest) -> LabelResponse:
        uid = body.user_id if body.user_id is not None else user_config.user_id
        try:
            span = LabelSpan(
                start_ts=_ensure_utc(dt.datetime.fromisoformat(body.start_ts)),
                end_ts=_ensure_utc(dt.datetime.fromisoformat(body.end_ts)),
                label=body.label,
                provenance="manual",
                user_id=uid,
                confidence=body.confidence if body.confidence is not None else 1.0,
                extend_forward=body.extend_forward,
            )
        except (ValueError, TypeError) as exc:
            raise HTTPException(status_code=422, detail=str(exc)) from exc
        if body.overwrite:
            overwrite_label_span(span, labels_path)
        else:
            try:
                append_label_span(
                    span,
                    labels_path,
                    allow_overlap=body.allow_overlap,
                )
            except ValueError as exc:
                existing = read_label_spans(labels_path) if labels_path.exists() else []
                detail = _parse_overlap_error(str(exc), span, existing)
                raise HTTPException(
                    status_code=409,
                    detail=detail.model_dump(),
                ) from exc

        if on_label_saved is not None:
            on_label_saved()

        if span.extend_forward:
            await bus.publish(
                {
                    "type": "label_created",
                    "label": span.label,
                    "confidence": span.confidence
                    if span.confidence is not None
                    else 1.0,
                    "ts": _utc_iso(span.end_ts),
                    "start_ts": _utc_iso(span.start_ts),
                    "extend_forward": True,
                }
            )
        await publish_labels_changed("created")

        return LabelResponse(
            start_ts=_utc_iso(span.start_ts),
            end_ts=_utc_iso(span.end_ts),
            label=span.label,
            provenance=span.provenance,
            user_id=span.user_id,
            confidence=span.confidence,
            extend_forward=span.extend_forward,
        )

    @app.put("/api/labels")
    async def api_op_labels_put(body: LabelUpdateRequest) -> LabelResponse:
        prior_span: LabelSpan | None = None
        existing_spans: list[LabelSpan] = []
        try:
            start = _ensure_utc(dt.datetime.fromisoformat(body.start_ts))
            end = _ensure_utc(dt.datetime.fromisoformat(body.end_ts))
            new_start = (
                _ensure_utc(dt.datetime.fromisoformat(body.new_start_ts))
                if body.new_start_ts
                else None
            )
            new_end = (
                _ensure_utc(dt.datetime.fromisoformat(body.new_end_ts))
                if body.new_end_ts
                else None
            )
        except (ValueError, TypeError) as exc:
            raise HTTPException(status_code=422, detail=str(exc)) from exc
        if labels_path.exists():
            existing_spans = read_label_spans(labels_path)
            for existing in existing_spans:
                if existing.start_ts == start and existing.end_ts == end:
                    prior_span = existing
                    break
        try:
            span = update_label_span(
                start,
                end,
                body.label,
                labels_path,
                new_start_ts=new_start,
                new_end_ts=new_end,
                new_extend_forward=body.extend_forward,
            )
        except ValueError as exc:
            raise HTTPException(status_code=404, detail=str(exc)) from exc
        was_current = False
        if prior_span is not None:
            for i, existing in enumerate(existing_spans):
                if (
                    existing.start_ts == prior_span.start_ts
                    and existing.end_ts == prior_span.end_ts
                    and existing.label == prior_span.label
                    and existing.user_id == prior_span.user_id
                ):
                    was_current = _label_span_is_current(existing_spans, i)
                    break
        is_closed = not span.extend_forward and span.end_ts > span.start_ts
        if was_current and is_closed:
            await bus.publish(
                LabelStoppedEventResponse(ts=_utc_iso(span.end_ts)).model_dump()
            )
        await publish_labels_changed("updated")
        return _label_response_from_span(span)

    @app.delete("/api/labels")
    async def api_op_labels_delete(body: LabelDeleteRequest) -> dict[str, str]:
        try:
            start = _ensure_utc(dt.datetime.fromisoformat(body.start_ts))
            end = _ensure_utc(dt.datetime.fromisoformat(body.end_ts))
        except (ValueError, TypeError) as exc:
            raise HTTPException(status_code=422, detail=str(exc)) from exc
        try:
            delete_label_span(start, end, labels_path)
        except ValueError as exc:
            raise HTTPException(status_code=404, detail=str(exc)) from exc
        await publish_labels_changed("deleted")
        return {"status": "deleted"}

    @app.get("/api/labels/export")
    async def api_op_labels_export_get() -> StreamingResponse:
        """Download all label spans as a CSV file."""
        import io
        import tempfile

        if not labels_path.exists():
            raise HTTPException(status_code=404, detail="No labels file found")

        with tempfile.TemporaryDirectory() as tmpdir:
            csv_path = Path(tmpdir) / "labels_export.csv"
            try:
                export_labels_to_csv(labels_path, csv_path)
            except ValueError as exc:
                raise HTTPException(status_code=404, detail=str(exc)) from exc
            csv_bytes = csv_path.read_bytes()

        return StreamingResponse(
            io.BytesIO(csv_bytes),
            media_type="text/csv",
            headers={"Content-Disposition": "attachment; filename=labels_export.csv"},
        )

    @app.get("/api/labels/stats")
    async def label_stats(
        date: str | None = Query(
            None, description="ISO-8601 date (defaults to today UTC)"
        ),
    ) -> LabelStatsResponse:
        """Return labeling stats for a given day."""
        target = (
            dt.date.fromisoformat(date)
            if date
            else dt.datetime.now(dt.timezone.utc).date()
        )
        if not labels_path.exists():
            return LabelStatsResponse(
                date=target.isoformat(),
                count=0,
                total_minutes=0.0,
                breakdown={},
            )
        spans = read_label_spans(labels_path)
        day_spans = [s for s in spans if s.start_ts.date() == target]
        breakdown: dict[str, float] = {}
        for s in day_spans:
            mins = round((s.end_ts - s.start_ts).total_seconds() / 60, 1)
            breakdown[s.label] = round(breakdown.get(s.label, 0) + mins, 1)
        total = round(sum(breakdown.values()), 1)
        return LabelStatsResponse(
            date=target.isoformat(),
            count=len(day_spans),
            total_minutes=total,
            breakdown=breakdown,
        )
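The stats endpoint's breakdown rounds each span to tenths of a minute before accumulating, then rounds the running total again. A self-contained sketch with plain tuples in place of `LabelSpan`:

```python
import datetime as dt

UTC = dt.timezone.utc
spans = [
    ("coding", dt.datetime(2026, 2, 26, 9, 0, tzinfo=UTC),
     dt.datetime(2026, 2, 26, 9, 45, tzinfo=UTC)),
    ("email", dt.datetime(2026, 2, 26, 9, 45, tzinfo=UTC),
     dt.datetime(2026, 2, 26, 10, 0, tzinfo=UTC)),
    ("coding", dt.datetime(2026, 2, 26, 10, 0, tzinfo=UTC),
     dt.datetime(2026, 2, 26, 10, 30, tzinfo=UTC)),
]

breakdown: dict[str, float] = {}
for label, start, end in spans:
    mins = round((end - start).total_seconds() / 60, 1)
    breakdown[label] = round(breakdown.get(label, 0) + mins, 1)

print(breakdown)  # {'coding': 75.0, 'email': 15.0}
print(round(sum(breakdown.values()), 1))  # 90.0
```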

    @app.post("/api/labels/import")
    async def api_op_labels_import_post(
        file: UploadFile,
        strategy: str = Form("merge"),
    ) -> LabelImportResponse:
        """Import label spans from an uploaded CSV file.

        Accepts ``strategy`` of ``"merge"`` (deduplicate and combine
        with existing labels) or ``"overwrite"`` (replace all labels).
        """
        import tempfile

        if strategy not in ("merge", "overwrite"):
            raise HTTPException(
                status_code=422,
                detail=f"Invalid strategy {strategy!r}; must be 'merge' or 'overwrite'",
            )

        with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as tmp:
            tmp.write(await file.read())
            tmp_path = Path(tmp.name)

        try:
            imported = import_labels_from_csv(tmp_path)
        except Exception as exc:
            raise HTTPException(status_code=422, detail=str(exc)) from exc
        finally:
            tmp_path.unlink(missing_ok=True)

        if strategy == "overwrite":
            labels_path.parent.mkdir(parents=True, exist_ok=True)
            write_label_spans(imported, labels_path)
            total = len(imported)
        else:
            existing: list = []
            if labels_path.exists():
                existing = read_label_spans(labels_path)
            try:
                merged = merge_label_spans(existing, imported)
            except ValueError as exc:
                raise HTTPException(status_code=409, detail=str(exc)) from exc
            labels_path.parent.mkdir(parents=True, exist_ok=True)
            write_label_spans(merged, labels_path)
            total = len(merged)

        await publish_labels_changed("imported")

        return LabelImportResponse(
            status="ok",
            imported=len(imported),
            total=total,
            strategy=strategy,
        )

    # -- REST: queue ----------------------------------------------------------

    @app.get("/api/queue")
    async def api_op_queue_get(
        limit: int = Query(20, ge=1, le=100),
    ) -> list[QueueItemResponse]:
        if not queue_path.exists():
            return []
        queue = ActiveLabelingQueue(queue_path)
        pending = queue.get_pending(limit=limit)
        return [
            QueueItemResponse(
                request_id=r.request_id,
                user_id=r.user_id,
                bucket_start_ts=_utc_iso(r.bucket_start_ts),
                bucket_end_ts=_utc_iso(r.bucket_end_ts),
                reason=r.reason,
                confidence=r.confidence,
                predicted_label=r.predicted_label,
                status=r.status,
            )
            for r in pending
        ]

    @app.post("/api/queue/{request_id}/done")
    async def api_op_queue_done_post(
        request_id: str, body: MarkDoneRequest
    ) -> dict[str, str]:
        if not queue_path.exists():
            return {"status": "not_found"}
        queue = ActiveLabelingQueue(queue_path)
        result = queue.mark_done(request_id, status=body.status)
        if result is None:
            return {"status": "not_found"}
        return {"status": result.status}

    # -- REST: features -------------------------------------------------------

    @app.get("/api/features/summary")
    async def feature_summary(
        start: str = Query(..., description="ISO-8601 start"),
        end: str = Query(..., description="ISO-8601 end"),
    ) -> FeatureSummaryResponse:
        start_ts = _ensure_utc(dt.datetime.fromisoformat(start))
        end_ts = _ensure_utc(dt.datetime.fromisoformat(end))
        return _feature_summary_for_range(start_ts, end_ts)

    @app.get("/api/activity/summary")
    async def activity_summary(
        start: str = Query(..., description="ISO-8601 start"),
        end: str = Query(..., description="ISO-8601 end"),
    ) -> ActivitySummaryResponse:
        start_ts = _ensure_utc(dt.datetime.fromisoformat(start))
        end_ts = _ensure_utc(dt.datetime.fromisoformat(end))

        provider_status = _activity_provider_status_snapshot()
        empty_summary = _empty_feature_summary()

        if provider_status.state == "setup_required":
            return ActivitySummaryResponse(
                **empty_summary.model_dump(),
                activity_provider=provider_status,
                recent_apps=[],
                range_state="provider_unavailable",
                message=provider_status.setup_message,
            )

        try:
            provider_snapshot, recent_apps = activity_provider.recent_app_summary(
                start_ts,
                end_ts,
                source_id=provider_status.source_id,
                timeout_seconds=min(
                    2,
                    int(
                        user_config.as_dict().get(
                            "aw_timeout_seconds",
                            DEFAULT_AW_TIMEOUT_SECONDS,
                        )
                    ),
                ),
            )
            provider_status = ActivityProviderStatusResponse.model_validate(
                provider_snapshot.to_payload()
            )
        except ActivityProviderUnavailableError:
            provider_status = ActivityProviderStatusResponse.model_validate(
                activity_provider.setup_required_status(
                    source_id=provider_status.source_id,
                    last_sample_count=provider_status.last_sample_count,
                    last_sample_breakdown=provider_status.last_sample_breakdown,
                ).to_payload()
            )
            return ActivitySummaryResponse(
                **empty_summary.model_dump(),
                activity_provider=provider_status,
                recent_apps=[],
                range_state="provider_unavailable",
                message=provider_status.setup_message,
            )

        feature_payload = _feature_summary_for_range(start_ts, end_ts)
        has_context = bool(recent_apps) or feature_payload.total_buckets > 0
        range_state: Literal["ok", "no_data", "provider_unavailable"] = (
            "ok" if has_context else "no_data"
        )
        message = None if has_context else "No activity data for this window"

        return ActivitySummaryResponse(
            **feature_payload.model_dump(),
            activity_provider=provider_status,
            recent_apps=[
                AWLiveEntry(app=entry.app, events=entry.events) for entry in recent_apps
            ],
            range_state=range_state,
            message=message,
        )

    # -- REST: ActivityWatch live proxy ---------------------------------------

    @app.get("/api/aw/live")
    async def aw_live_summary(
        start: str = Query(...),
        end: str = Query(...),
    ) -> list[AWLiveEntry]:
        try:
            start_ts = _ensure_utc(dt.datetime.fromisoformat(start))
            end_ts = _ensure_utc(dt.datetime.fromisoformat(end))
            _, recent_apps = activity_provider.recent_app_summary(
                start_ts,
                end_ts,
                timeout_seconds=min(
                    2,
                    int(
                        user_config.as_dict().get(
                            "aw_timeout_seconds",
                            DEFAULT_AW_TIMEOUT_SECONDS,
                        )
                    ),
                ),
            )
            return [
                AWLiveEntry(app=entry.app, events=entry.events) for entry in recent_apps
            ]
        except Exception:
            logger.debug("AW live summary unavailable", exc_info=True)
            return []

    # -- REST: config ---------------------------------------------------------

    @app.get("/api/config/labels")
    async def config_labels() -> list[str]:
        return [cl.value for cl in CoreLabel]

    def _suggestion_banner_ttl_seconds() -> int:
        raw = user_config.as_dict().get("suggestion_banner_ttl_seconds", 0)
        try:
            n = int(raw)
        except (TypeError, ValueError):
            return 0
        return max(0, n)
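The TTL helper clamps malformed or negative config values to zero. The same defensive coercion as a standalone sketch (hypothetical name):

```python
def ttl_seconds(raw) -> int:
    """Coerce a config value to a non-negative int, defaulting to 0."""
    try:
        n = int(raw)
    except (TypeError, ValueError):
        return 0
    return max(0, n)


print(ttl_seconds("30"))    # 30
print(ttl_seconds(None))    # 0
print(ttl_seconds(-5))      # 0
print(ttl_seconds("oops"))  # 0
```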

    @app.get("/api/config/user")
    async def api_op_config_user_get() -> UserConfigResponse:
        return UserConfigResponse(
            user_id=user_config.user_id,
            username=user_config.username,
            suggestion_banner_ttl_seconds=_suggestion_banner_ttl_seconds(),
        )

    @app.put("/api/config/user")
    async def api_op_config_user_put(
        body: UserConfigUpdateRequest,
    ) -> UserConfigResponse:
        patch = {k: v for k, v in body.model_dump().items() if v is not None}
        if patch:
            try:
                user_config.update(patch)
            except ValueError as exc:
                raise HTTPException(status_code=422, detail=str(exc)) from exc
        return UserConfigResponse(
            user_id=user_config.user_id,
            username=user_config.username,
            suggestion_banner_ttl_seconds=_suggestion_banner_ttl_seconds(),
        )

    # -- REST: window control -------------------------------------------------

    @app.post("/api/window/toggle")
    async def window_toggle() -> dict[str, Any]:
        if window_api is None:
            return {"status": "no_window", "visible": False}
        window_api.window_toggle()
        return {"status": "ok", "visible": window_api.visible}

    @app.get("/api/window/state")
    async def window_state() -> dict[str, Any]:
        if window_api is None:
            return {"available": False, "visible": False}
        return {"available": True, "visible": window_api.visible}

    @app.post("/api/window/show-label-grid")
    async def window_label_grid_show() -> dict[str, str]:
        if window_api is not None:
            window_api.label_grid_show()
        await bus.publish({"type": "label_grid_show"})
        return {"status": "ok"}

    # -- REST: notification actions -------------------------------------------

    @app.post("/api/notification/skip")
    async def notification_skip() -> dict[str, str]:
        await bus.publish({"type": "suggestion_cleared", "reason": "skipped"})
        logger.info("Notification skipped by user (no label change needed)")
        return {"status": "skipped"}

    @app.post("/api/notification/accept")
    async def notification_accept(body: NotificationAcceptRequest) -> LabelResponse:
        uid = user_config.user_id
        try:
            span = LabelSpan(
                start_ts=_ensure_utc(dt.datetime.fromisoformat(body.block_start)),
                end_ts=_ensure_utc(dt.datetime.fromisoformat(body.block_end)),
                label=body.label,
                provenance="suggestion",
                user_id=uid,
            )
        except (ValueError, TypeError) as exc:
            raise HTTPException(status_code=422, detail=str(exc)) from exc
        existing = read_label_spans(labels_path) if labels_path.exists() else []
        auto_resume_active = _extend_forward_coverage_contains(existing, span)

        if body.overwrite or auto_resume_active:
            overwrite_label_span(span, labels_path)
        else:
            try:
                append_label_span(
                    span,
                    labels_path,
                    allow_overlap=body.allow_overlap,
                )
            except ValueError as exc:
                existing = read_label_spans(labels_path) if labels_path.exists() else []
                detail = _parse_overlap_error(str(exc), span, existing)
                raise HTTPException(
                    status_code=409,
                    detail=detail.model_dump(),
                ) from exc

        if on_label_saved is not None:
            on_label_saved()

        await bus.publish({"type": "suggestion_cleared", "reason": "label_saved"})
        await publish_labels_changed("suggestion_accepted")

        if on_suggestion_accepted is not None:
            on_suggestion_accepted()

        logger.info(
            "Accepted suggested label: %s (%s -> %s)",
            body.label,
            body.block_start,
            body.block_end,
        )
        return LabelResponse(
            start_ts=_utc_iso(span.start_ts),
            end_ts=_utc_iso(span.end_ts),
            label=span.label,
            provenance=span.provenance,
            user_id=span.user_id,
            confidence=span.confidence,
            extend_forward=span.extend_forward,
        )

    # -- REST: tray control ---------------------------------------------------

    @app.post("/api/tray/pause")
    async def tray_pause_toggle() -> dict[str, Any]:
        if pause_toggle is None:
            return {"status": "unavailable", "paused": False}
        paused = pause_toggle()
        return {"status": "ok", "paused": paused}

    @app.get("/api/tray/state")
    async def tray_state() -> dict[str, Any]:
        if get_tray_state is not None:
            state = get_tray_state()
            state["available"] = True
            return state
        if is_paused is None:
            return {"available": False, "paused": False}
        return {"available": True, "paused": is_paused()}

    @app.post("/api/tray/action/{action}")
    async def tray_action(
        action: str, body: TrayActionRequest | None = None
    ) -> dict[str, Any]:
        if tray_actions is None or action not in tray_actions:
            raise HTTPException(
                status_code=404,
                detail=f"Action {action} not found or no tray actions configured",
            )

        try:
            if action == "switch_model" and body and body.model_id:
                from taskclf.model_registry import list_bundles

                bundles = list_bundles(effective_models_dir)
                target = next((b for b in bundles if b.model_id == body.model_id), None)
                if not target:
                    raise HTTPException(
                        status_code=404, detail=f"Model {body.model_id} not found"
                    )
                tray_actions[action](target.path)
            else:
                tray_actions[action]()
            return {"status": "ok"}
        except HTTPException:
            # Re-raise intentional HTTP errors (e.g. 404 for an unknown model)
            # instead of collapsing them into a generic 500 below.
            raise
        except Exception as exc:
            logger.warning("Tray action %s failed: %s", action, exc, exc_info=True)
            raise HTTPException(status_code=500, detail=str(exc)) from exc

    # -- REST: training -------------------------------------------------------

    def _run_training_pipeline(
        job: _TrainJob,
        *,
        date_from: str,
        date_to: str,
        num_boost_round: int,
        class_weight: str,
        synthetic: bool,
    ) -> None:
        """Background thread: load data, train, save bundle, publish progress."""
        import pandas as pd

        try:
            start = dt.date.fromisoformat(date_from)
            end = dt.date.fromisoformat(date_to)

            def _update(step: str, pct: int, msg: str) -> None:
                job.step = step
                job.progress_pct = pct
                job.message = msg
                bus.publish_threadsafe(
                    {
                        "type": "train_progress",
                        "job_id": job.job_id,
                        "step": step,
                        "progress_pct": pct,
                        "message": msg,
                    }
                )

            if job._cancel.is_set():
                raise InterruptedError("Cancelled")

            _update("loading_features", 10, "Loading features…")

            from taskclf.core.store import read_parquet as _read_pq
            from taskclf.features.build import generate_dummy_features
            from taskclf.labels.store import generate_dummy_labels

            all_features: list[pd.DataFrame] = []
            all_labels: list = []
            current = start

            if not synthetic:
                lp = data_dir / "labels_v1" / "labels.parquet"
                if lp.exists():
                    from taskclf.labels.store import read_label_spans as _read_ls

                    all_spans = _read_ls(lp)
                    start_dt = pd.Timestamp(
                        year=start.year,
                        month=start.month,
                        day=start.day,
                        tz="UTC",
                    )
                    end_dt = pd.Timestamp(
                        year=end.year,
                        month=end.month,
                        day=end.day,
                        hour=23,
                        minute=59,
                        second=59,
                        tz="UTC",
                    )

                    def _to_utc(ts: dt.datetime) -> pd.Timestamp:
                        t = pd.Timestamp(ts)
                        if t.tzinfo is None:
                            return t.tz_localize("UTC")
                        return t.tz_convert("UTC")

                    all_labels = [
                        s
                        for s in all_spans
                        if _to_utc(s.end_ts) >= start_dt
                        and _to_utc(s.start_ts) <= end_dt
                    ]

            while current <= end:
                if job._cancel.is_set():
                    raise InterruptedError("Cancelled")
                if synthetic:
                    rows = generate_dummy_features(current, n_rows=60)
                    df = pd.DataFrame([r.model_dump() for r in rows])
                    labels = generate_dummy_labels(current, n_rows=60)
                    all_labels.extend(labels)
                else:
                    fp = _feature_parquet_for_date(data_dir, current)
                    if fp is not None and fp.exists():
                        df = _read_pq(fp)
                    else:
                        current += dt.timedelta(days=1)
                        continue
                all_features.append(df)
                current += dt.timedelta(days=1)

            if not all_features:
                raise ValueError("No feature data found for the given date range")

            features_df = pd.concat(all_features, ignore_index=True)

            if features_df.empty or "bucket_start_ts" not in features_df.columns:
                raise ValueError(
                    "Feature files exist but contain 0 rows — "
                    "ActivityWatch may not be running or has no data for the selected range"
                )

            if not all_labels:
                raise ValueError(
                    "No label spans overlap the selected date range — "
                    "create or import labels before training"
                )

            if job._cancel.is_set():
                raise InterruptedError("Cancelled")

            _update(
                "projecting_labels",
                30,
                f"Projecting {len(all_labels)} labels onto {len(features_df)} rows…",
            )

            from taskclf.labels.projection import project_blocks_to_windows

            labeled_df = project_blocks_to_windows(features_df, all_labels)
            if labeled_df.empty:
                raise ValueError(
                    "No labeled rows after projection — label spans may not "
                    "temporally overlap any feature windows in the selected range"
                )

            if job._cancel.is_set():
                raise InterruptedError("Cancelled")

            _update("splitting", 40, f"Splitting {len(labeled_df)} labeled rows…")

            from taskclf.train.dataset import split_by_time

            splits = split_by_time(labeled_df)
            train_df = labeled_df.iloc[splits["train"]].reset_index(drop=True)
            val_df = labeled_df.iloc[splits["val"]].reset_index(drop=True)

            if job._cancel.is_set():
                raise InterruptedError("Cancelled")

            _update(
                "training",
                50,
                f"Training LightGBM ({num_boost_round} rounds, "
                f"{len(train_df)} train / {len(val_df)} val)…",
            )

            from taskclf.train.lgbm import train_lgbm as _train

            cw: Literal["balanced", "none"] = (
                "none" if class_weight == "none" else "balanced"
            )
            model, metrics, cm_df, params, cat_encoders = _train(
                train_df,
                val_df,
                num_boost_round=num_boost_round,
                class_weight=cw,
            )

            if job._cancel.is_set():
                raise InterruptedError("Cancelled")

            _update(
                "saving", 85, f"Saving bundle (macro_f1={metrics['macro_f1']:.3f})…"
            )

            from taskclf.core.model_io import build_metadata, save_model_bundle
            from taskclf.train.retrain import compute_dataset_hash

            dataset_hash = compute_dataset_hash(features_df, all_labels)
            metadata = build_metadata(
                label_set=metrics["label_names"],
                train_date_from=start,
                train_date_to=end,
                params=params,
                dataset_hash=dataset_hash,
                data_provenance="synthetic" if synthetic else "real",
                unknown_category_freq_threshold=params.get(
                    "unknown_category_freq_threshold"
                ),
                unknown_category_mask_rate=params.get("unknown_category_mask_rate"),
            )

            run_dir = save_model_bundle(
                model=model,
                metadata=metadata,
                metrics=metrics,
                confusion_df=cm_df,
                base_dir=effective_models_dir,
                cat_encoders=cat_encoders,
            )

            job.metrics = metrics
            job.model_dir = str(run_dir)
            job.status = "complete"
            job.finished_at = dt.datetime.now(dt.timezone.utc).isoformat()
            _update("done", 100, f"Model saved to {run_dir.name}")

            bus.publish_threadsafe(
                {
                    "type": "train_complete",
                    "job_id": job.job_id,
                    "metrics": {
                        "macro_f1": metrics.get("macro_f1"),
                        "weighted_f1": metrics.get("weighted_f1"),
                    },
                    "model_dir": str(run_dir),
                }
            )

            if on_model_trained is not None:
                try:
                    on_model_trained(str(run_dir))
                except Exception:
                    logger.debug("on_model_trained callback failed", exc_info=True)

        except InterruptedError:
            job.status = "failed"
            job.error = "Cancelled by user"
            job.finished_at = dt.datetime.now(dt.timezone.utc).isoformat()
            bus.publish_threadsafe(
                {
                    "type": "train_failed",
                    "job_id": job.job_id,
                    "error": "Cancelled by user",
                }
            )
        except Exception as exc:
            logger.warning("Training failed: %s", exc, exc_info=True)
            job.status = "failed"
            job.error = str(exc)
            job.finished_at = dt.datetime.now(dt.timezone.utc).isoformat()
            bus.publish_threadsafe(
                {
                    "type": "train_failed",
                    "job_id": job.job_id,
                    "error": str(exc),
                }
            )

    def _run_feature_build(
        job: _TrainJob,
        *,
        date_from: str,
        date_to: str,
    ) -> None:
        """Background thread: build features for each date in the range."""
        try:
            from taskclf.features.build import build_features_for_date

            start = dt.date.fromisoformat(date_from)
            end = dt.date.fromisoformat(date_to)
            total_days = (end - start).days + 1
            current = start
            built = 0

            while current <= end:
                if job._cancel.is_set():
                    raise InterruptedError("Cancelled")
                pct = int((built / total_days) * 100)
                job.step = "building_features"
                job.progress_pct = pct
                job.message = (
                    f"Building features for {current} ({built + 1}/{total_days})…"
                )
                bus.publish_threadsafe(
                    {
                        "type": "train_progress",
                        "job_id": job.job_id,
                        "step": "building_features",
                        "progress_pct": pct,
                        "message": job.message,
                    }
                )

                build_features_for_date(
                    current,
                    data_dir,
                    aw_host=aw_host,
                    title_salt=title_salt,
                    user_id=user_config.user_id,
                )
                built += 1
                current += dt.timedelta(days=1)

            job.status = "complete"
            job.step = "done"
            job.progress_pct = 100
            job.message = f"Built features for {built} day(s)"
            job.finished_at = dt.datetime.now(dt.timezone.utc).isoformat()
            bus.publish_threadsafe(
                {
                    "type": "train_complete",
                    "job_id": job.job_id,
                    "metrics": None,
                    "model_dir": None,
                }
            )
        except InterruptedError:
            job.status = "failed"
            job.error = "Cancelled by user"
            job.finished_at = dt.datetime.now(dt.timezone.utc).isoformat()
            bus.publish_threadsafe(
                {
                    "type": "train_failed",
                    "job_id": job.job_id,
                    "error": "Cancelled by user",
                }
            )
        except Exception as exc:
            logger.warning("Feature build failed: %s", exc, exc_info=True)
            job.status = "failed"
            job.error = str(exc)
            job.finished_at = dt.datetime.now(dt.timezone.utc).isoformat()
            bus.publish_threadsafe(
                {
                    "type": "train_failed",
                    "job_id": job.job_id,
                    "error": str(exc),
                }
            )

    @app.post("/api/train/start", status_code=202)
    async def train_start(body: TrainStartRequest) -> TrainStatusResponse:
        with train_job._lock:
            if train_job.status == "running":
                raise HTTPException(
                    status_code=409,
                    detail="A training job is already running",
                )
            train_job.job_id = uuid.uuid4().hex[:12]
            train_job.status = "running"
            train_job.step = "initializing"
            train_job.progress_pct = 0
            train_job.message = "Starting…"
            train_job.error = None
            train_job.metrics = None
            train_job.model_dir = None
            train_job.started_at = dt.datetime.now(dt.timezone.utc).isoformat()
            train_job.finished_at = None
            train_job._cancel.clear()

        thread = threading.Thread(
            target=_run_training_pipeline,
            args=(train_job,),
            kwargs={
                "date_from": body.date_from,
                "date_to": body.date_to,
                "num_boost_round": body.num_boost_round,
                "class_weight": body.class_weight,
                "synthetic": body.synthetic,
            },
            daemon=True,
        )
        thread.start()
        return train_job.to_response()

    @app.post("/api/train/build-features", status_code=202)
    async def train_build_features(body: BuildFeaturesRequest) -> TrainStatusResponse:
        with train_job._lock:
            if train_job.status == "running":
                raise HTTPException(
                    status_code=409,
                    detail="A training job is already running",
                )
            train_job.job_id = uuid.uuid4().hex[:12]
            train_job.status = "running"
            train_job.step = "building_features"
            train_job.progress_pct = 0
            train_job.message = "Starting feature build…"
            train_job.error = None
            train_job.metrics = None
            train_job.model_dir = None
            train_job.started_at = dt.datetime.now(dt.timezone.utc).isoformat()
            train_job.finished_at = None
            train_job._cancel.clear()

        thread = threading.Thread(
            target=_run_feature_build,
            args=(train_job,),
            kwargs={
                "date_from": body.date_from,
                "date_to": body.date_to,
            },
            daemon=True,
        )
        thread.start()
        return train_job.to_response()

    @app.get("/api/train/status")
    async def train_status() -> TrainStatusResponse:
        return train_job.to_response()

    @app.post("/api/train/cancel")
    async def train_cancel() -> TrainStatusResponse:
        with train_job._lock:
            if train_job.status != "running":
                raise HTTPException(status_code=409, detail="No running job to cancel")
            train_job._cancel.set()
        return train_job.to_response()

    @app.get("/api/train/models")
    async def train_list_models() -> list[ModelBundleResponse]:
        from taskclf.model_registry import list_bundles

        bundles = list_bundles(effective_models_dir)
        return [
            ModelBundleResponse(
                model_id=b.model_id,
                path=str(b.path),
                valid=b.valid,
                invalid_reason=b.invalid_reason,
                macro_f1=b.metrics.macro_f1 if b.metrics else None,
                weighted_f1=b.metrics.weighted_f1 if b.metrics else None,
                created_at=b.created_at.isoformat() if b.created_at else None,
            )
            for b in bundles
        ]

    @app.get("/api/train/models/current/inspect")
    async def train_current_model_bundle_inspect() -> dict[str, Any]:
        """Bundle-saved validation metrics for the model currently loaded in the tray."""
        if get_tray_state is None:
            return {"loaded": False, "reason": "tray_state_unavailable"}
        state = get_tray_state()
        md = state.get("model_dir")
        if not md:
            return {"loaded": False, "reason": "no_model_loaded"}
        path = Path(str(md)).resolve()
        if not path.is_dir():
            return {"loaded": False, "reason": "model_dir_missing"}
        try:
            payload = _bundle_inspect_bundle_only_payload(path)
        except (FileNotFoundError, OSError, ValueError, KeyError) as exc:
            logger.warning("Bundle inspect failed for %s: %s", path, exc)
            raise HTTPException(
                status_code=500,
                detail=f"Could not read bundle inspection: {exc}",
            ) from exc
        return {"loaded": True, **payload}

    @app.get("/api/train/models/{model_id}/inspect")
    async def train_model_bundle_inspect_by_id(model_id: str) -> dict[str, Any]:
        """Bundle-saved validation metrics for a known model bundle under models_dir."""
        from taskclf.model_registry import list_bundles

        bundles = list_bundles(effective_models_dir)
        target = next((b for b in bundles if b.model_id == model_id), None)
        if target is None:
            raise HTTPException(status_code=404, detail=f"Model {model_id} not found")
        if not target.valid:
            raise HTTPException(
                status_code=422,
                detail=target.invalid_reason or "invalid bundle",
            )
        try:
            return _bundle_inspect_bundle_only_payload(target.path)
        except (FileNotFoundError, OSError, ValueError, KeyError) as exc:
            logger.warning("Bundle inspect failed for %s: %s", target.path, exc)
            raise HTTPException(
                status_code=500,
                detail=f"Could not read bundle inspection: {exc}",
            ) from exc

    @app.get("/api/train/data-check")
    async def train_data_check(
        date_from: str = Query(..., description="Start date (YYYY-MM-DD)"),
        date_to: str = Query(..., description="End date (YYYY-MM-DD)"),
    ) -> DataCheckResponse:
        from taskclf.core.store import read_parquet
        from taskclf.features.build import build_features_for_date

        try:
            start = dt.date.fromisoformat(date_from)
            end = dt.date.fromisoformat(date_to)
        except ValueError as exc:
            raise HTTPException(
                status_code=422, detail=f"Invalid date: {exc}"
            ) from exc

        dates_built: list[str] = []
        build_errors: list[str] = []

        current = start
        while current <= end:
            fp = _feature_parquet_for_date(data_dir, current)
            if fp is None or not fp.exists():
                try:
                    build_features_for_date(
                        current,
                        data_dir,
                        aw_host=aw_host,
                        title_salt=title_salt,
                        user_id=user_config.user_id,
                    )
                    dates_built.append(current.isoformat())
                except Exception as exc:
                    build_errors.append(f"{current}: {exc}")
            current += dt.timedelta(days=1)

        current = start
        dates_with: list[str] = []
        dates_missing: list[str] = []
        total_rows = 0

        while current <= end:
            fp = _feature_parquet_for_date(data_dir, current)
            if fp is not None and fp.exists():
                try:
                    df = read_parquet(fp)
                    n = len(df)
                except Exception:
                    n = 0
                if n > 0:
                    dates_with.append(current.isoformat())
                    total_rows += n
                else:
                    dates_missing.append(current.isoformat())
            else:
                dates_missing.append(current.isoformat())
            current += dt.timedelta(days=1)

        label_count = 0
        matching_spans: list = []
        lp = data_dir / "labels_v1" / "labels.parquet"
        if lp.exists():
            try:
                spans = read_label_spans(lp)
                start_dt = dt.datetime(
                    start.year, start.month, start.day, tzinfo=dt.timezone.utc
                )
                end_dt = dt.datetime(
                    end.year, end.month, end.day, 23, 59, 59, tzinfo=dt.timezone.utc
                )
                matching_spans = [
                    s for s in spans if s.end_ts >= start_dt and s.start_ts <= end_dt
                ]
                label_count = len(matching_spans)
            except Exception:
                pass

        trainable_rows = 0
        trainable_labels: list[str] = []
        if total_rows > 0 and matching_spans:
            try:
                import pandas as pd
                from taskclf.labels.projection import project_blocks_to_windows

                feature_frames = []
                cur = start
                while cur <= end:
                    fp = _feature_parquet_for_date(data_dir, cur)
                    if fp is not None and fp.exists():
                        frame = read_parquet(fp)
                        if not frame.empty:
                            feature_frames.append(frame)
                    cur += dt.timedelta(days=1)

                if feature_frames:
                    features_df = pd.concat(feature_frames, ignore_index=True)
                    projected = project_blocks_to_windows(features_df, matching_spans)
                    trainable_rows = len(projected)
                    if not projected.empty and "label" in projected.columns:
                        trainable_labels = sorted(projected["label"].unique().tolist())
            except Exception:
                pass

        return DataCheckResponse(
            date_from=date_from,
            date_to=date_to,
            dates_with_features=dates_with,
            dates_missing_features=dates_missing,
            total_feature_rows=total_rows,
            label_span_count=label_count,
            trainable_rows=trainable_rows,
            trainable_labels=trainable_labels,
            dates_built=dates_built,
            build_errors=build_errors,
        )

    # -- WebSocket ------------------------------------------------------------

    @app.get("/api/ws/snapshot")
    async def ws_snapshot() -> dict[str, Any]:
        """Return the latest event for each type so reconnecting clients
        can hydrate their store without waiting for the next push."""
        return bus.snapshot()

    @app.websocket("/ws/predictions")
    async def ws_predictions(websocket: WebSocket) -> None:
        await websocket.accept()
        try:
            async with bus.subscribe() as queue:
                while True:
                    event = await queue.get()
                    await websocket.send_json(event)
        except WebSocketDisconnect:
            pass
        except Exception:
            logger.debug("WebSocket error", exc_info=True)

    # -- Static files (SPA) ---------------------------------------------------

    if _STATIC_DIR.is_dir():
        app.mount(
            "/assets", StaticFiles(directory=_STATIC_DIR / "assets"), name="assets"
        )

        @app.get("/{path:path}")
        async def spa_fallback(path: str) -> FileResponse:
            # Resolve and confine to the static dir to block path traversal.
            file_path = (_STATIC_DIR / path).resolve()
            if file_path.is_file() and file_path.is_relative_to(_STATIC_DIR.resolve()):
                return FileResponse(file_path)
            return FileResponse(_STATIC_DIR / "index.html")

    return app
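
The training and feature-build jobs above cancel cooperatively: POST /api/train/cancel only sets a threading.Event, and the worker thread checks it between steps, raising InterruptedError to unwind into the "Cancelled by user" failure path. A minimal sketch of that pattern, with a hypothetical Job class standing in for _TrainJob:

```python
import threading


class Job:
    """Hypothetical stand-in for the server's _TrainJob."""

    def __init__(self) -> None:
        self.status = "running"
        self.error: str | None = None
        self._cancel = threading.Event()


def run_pipeline(job: Job, steps: list[str]) -> None:
    """Check the cancel flag between steps, as _run_training_pipeline does."""
    try:
        for step in steps:
            if job._cancel.is_set():
                raise InterruptedError("Cancelled")
            # ... real work for `step` would happen here ...
        job.status = "complete"
    except InterruptedError:
        job.status = "failed"
        job.error = "Cancelled by user"
```

Because the flag is only polled between steps, a long-running step (such as the LightGBM fit) still finishes before the cancellation takes effect.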

ui.tray

System tray labeling app for continuous background labeling.

Launch

taskclf tray
taskclf tray --model-dir models/run_20260226
taskclf tray --username alice
taskclf tray --retrain-config configs/retrain.yaml
taskclf tray --dev

Features

  • System tray icon -- runs persistently in the background via pystray.
  • In-process web UI server -- the FastAPI server always runs in-process, sharing the tray's EventBus. In --browser mode the dashboard opens in the default browser; otherwise a lightweight pywebview subprocess provides a native floating window. Both modes receive the same live events (status, tray_state, prompt_label, suggest_label, prediction) because the server and the tray publish/subscribe on the same EventBus instance. To keep cold starts responsive, the HTTP server is started before optional model loading finishes, so the dashboard can appear while suggestions are still warming up. Browser-only startup imports the shared taskclf.ui.runtime helpers instead of the tray icon module, so it does not pull Pillow/pystray into that import path. Parquet/pandas I/O is loaded lazily inside label, feature-summary, and training-related handlers so the initial import path stays lighter than the full data stack.
  • Activity transition detection -- polls ActivityWatch and detects when the dominant foreground app changes. A transition fires when the new app persists for >= --transition-minutes (default matches DEFAULT_TRANSITION_MINUTES, currently 3 minutes). On the first poll, an initial_app event is published so the UI can prompt labeling for the pre-start period.
  • Pause/resume -- monitoring can be paused via the tray menu ("Pause"/"Resume") or the POST /api/tray/pause REST endpoint. When paused, polling and transition detection are skipped but session state (poll count, transitions) is preserved. The status event emits state: "paused" and the tray_state event includes paused: true.
  • Transition notifications -- on each transition, a notification prompts the user to label the completed block. Plain browser mode uses the Web Notifications API, driven by the prompt_label WebSocket event; permission is requested when the route mounts, and on supported runtimes prompts request persistent display (requireInteraction) so they stay visible until dismissed or clicked instead of auto-closing. Native shells stay on native notification paths. The Electron shell shows an OS notification with action buttons on supported platforms: when a model suggestion exists it offers Accept, Review, and Skip, and clicking the notification body also opens the labeler; without a suggestion, the native action is Review. The legacy pywebview shell forwards the same prompt_label event through WindowAPI.show_transition_notification(), so suggestions still raise a desktop notification even when the embedded webview does not expose the browser Notification API; its copy is privacy-safe (suggestion_text when present, otherwise "Activity changed") and never exposes raw app names. Notification bodies include an exact local start/end range line derived from block_start / block_end for precise review; user-facing times are shown in the local timezone while structured event timestamps remain UTC. Standalone label/panel windows reuse the same prompt event path as the compact shell, and the frontend de-dupes each transition by block range so multiple subscribed windows do not alert twice for the same prompt. By default, app names are redacted for privacy (privacy_notifications=True); set privacy_notifications=False to show raw app identifiers, or disable desktop fallback notifications entirely with notifications_enabled=False.
  • Label suggestions -- when --model-dir is provided, the app predicts a label and includes it in the notification. Without a model, all 8 core labels are shown. On startup, model loading is deferred until after the embedded UI server begins listening, so tray_state.model_loaded may briefly remain false during cold-start while the suggester loads in the background. The shared _LabelSuggester runtime helper propagates the stable config-backed user_id (from UserConfig) to build_features_from_aw_events so the model receives the same personalization signal used during training. When no config is available, falls back to "default-user". Input events (keyboard/mouse statistics from aw-watcher-input) are also fetched and passed to feature building so that input-derived features (keys_per_min, clicks_per_min, etc.) are populated rather than left as None.
  • Quick labeling -- preset durations and the label grid live in the web UI (for example the Recent panel), not in the pystray right-click menu. Use Toggle Dashboard to open the UI.
  • Today's Labels -- tray menu action that shows a desktop notification summarizing today's labeling progress: total label count, total time, and per-label breakdown (e.g. "Today: 5 labels, 1h 35m -- Build 1h 5m, Debug 20m, Write 10m"). Counts use the UTC calendar day. Also available via GET /api/labels/stats for programmatic access.
  • Import Labels -- tray menu action that imports label spans from a CSV file. Opens a file-open dialog (via tkinter) to choose the source CSV, then prompts the user to merge with existing labels or overwrite them. Falls back to native macOS file dialogs (via osascript) when tkinter is unavailable or fails (e.g. threading conflicts with the pystray event loop). Merge deduplicates by (start_ts, end_ts, user_id) and rejects overlapping spans; overwrite replaces all labels. Also available via POST /api/labels/import for programmatic access.
  • Export Labels -- tray menu action that exports all label spans to a CSV file. Opens a save-file dialog (via tkinter) to choose the destination; falls back to <data_dir>/labels_v1/labels_export.csv when tkinter is unavailable. Also available via GET /api/labels/export for programmatic access.
  • Show Status -- tray menu action that shows a desktop notification with connection and session status: ActivityWatch connection state, poll count, transition count, saved labels count, and loaded model name.
  • Open Data Folder -- tray menu action that opens the data directory in the OS file manager (Finder on macOS, xdg-open on Linux). Falls back to a notification showing the path if the file manager cannot be launched.
  • Edit Config -- tray menu action that opens config.toml in the default text editor. On first run, if the file is missing, taskclf writes a full commented starter template once (all supported keys); existing files are not regenerated on later startups. Resolved runtime settings are not rewritten on every launch; values from this file are read at startup, and explicit non-default CLI flags override file values for that run. Changes from the web UI (for example username or suggestion banner TTL via /api/config/user) merge into config.toml. See User config template and configs/user_config.template.toml. Existing config.json files are auto-migrated to TOML when config.toml is absent.
  • Edit Inference Policy -- under the tray Advanced submenu; opens models/inference_policy.json in the default text editor (disabled when --models-dir is not set). If the file is missing, it is created first only when the currently loaded/resolved model bundle can seed it, reusing metadata.json's advisory reject_threshold and auto-attaching a matching artifacts/calibrator_store when its store.json is explicitly bound to that model. If no model can be resolved, the tray does not write a placeholder file; instead it notifies you to use Prediction Model or Open Data Folder (the models/ folder sits next to your data directory), and mentions the optional CLI command taskclf policy create --model-dir models/<run_id> for users who have the CLI installed. The canonical starter shape still lives in configs/inference_policy.template.json; see Inference policy template. Unlike config.toml, this file is not auto-created on first run; it is created only when you use this action with a resolvable model or when you create it via the CLI. Invalid hand-edits may cause inference to fall back to active.json resolution until the file parses again.
  • Report Issue -- tray menu action that opens the GitHub issue tracker (https://github.com/fruitiecutiepie/taskclf/issues/new) in the default browser, pre-filled with the bug_report.yml template and version/OS diagnostics as query parameters. The user controls what information is submitted; no data is sent automatically.
  • Prediction Model submenu -- lists all valid model bundles found in --models-dir (default models/). The submenu auto-refreshes on every menu open by re-scanning the models directory, so new bundles created by retraining appear immediately without restarting the tray. The currently loaded model shows a check mark (radio-button effect). Clicking a different bundle hot-swaps the in-memory suggester without restarting. A No Model entry unloads the model entirely. The submenu also contains Refresh Model (reloads via inference policy when configured, otherwise re-reads the active bundle from disk) and Retrain Status to run the retrain eligibility check. When --models-dir is missing or empty, a disabled No Models Found placeholder is shown.
  • Retrain Status -- tray menu action (inside the Prediction Model submenu) that checks whether retraining is due based on the latest model's age and the configured cadence. Shows a notification with "Retrain recommended" (with model name and creation date) or "Model is current". Uses --retrain-config to load a custom retrain YAML config; falls back to defaults when not provided. Disabled when --models-dir is not set.
  • Tray menu order (pystray) -- Toggle Dashboard, Pause / Resume, Show Status; then Today's Labels, Import Labels, Export Labels; then Prediction Model (submenu), Open Data Folder, Edit Config, Advanced (submenu: Edit Inference Policy), Report Issue; then Quit. Left-click opens or toggles the dashboard depending on mode; right-click shows this menu.
  • Shell UI parity -- The compact badge route in a normal browser tab uses a light solid page background and right-aligned in-page label/panel popups with hover, pin, and 300 ms delayed hide so it feels closer to the packaged Electron shell. Host.invoke is still a no-op without a host bridge. The ?view=label and ?view=panel routes match native popup markup. Use the pywebview floating window, taskclf electron, or Electron dev with the sidecar backend for full multi-window behavior.
  • Gap-fill surface -- tracks unlabeled time since the last confirmed label and publishes unlabeled_time events every poll cycle (passive badge). Active gap_fill_prompt events are published only at idle return (>5 min), session start, or immediately after accepting a transition suggestion. When unlabeled time exceeds gap_fill_escalation_minutes (default 480), the tray icon changes to orange (gap_fill_escalated event). No popup or notification is sent on escalation.
  • Event broadcasting -- publishes status, tray_state, initial_app, prediction, label_created, labels_changed, suggest_label, unlabeled_time, gap_fill_prompt, and gap_fill_escalated events to the shared EventBus for connected WebSocket clients.
  • Frontend log channel -- in frontend dev mode (--dev), debug/error messages emitted in the SolidJS app can be forwarded through window.pywebview.api.frontend_debug_log(...) and window.pywebview.api.frontend_error_log(...). Debug lines are written at DEBUG level (requires DEBUG logging enabled, e.g. global --verbose), while error lines are written at ERROR level. The app also installs global window.onerror and unhandledrejection handlers in dev mode so uncaught frontend failures are captured and forwarded via the same error channel.
  • Crash handler -- TrayLabeler.run() wraps the main loop in a top-level try/except. On unhandled exceptions, a crash report is written to <TASKCLF_HOME>/logs/crash_<YYYYMMDD_HHMMSS>.txt and a desktop notification is attempted with the crash file path. See core.crash for details.
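
Settings precedence (explicit non-default CLI flags over config.toml values over built-in defaults) can be sketched as a standalone function; `resolve` here is an illustrative stand-in, not the tray's actual API:

```python
def resolve(saved: dict, key: str, cli_val, default):
    """Sketch of the settings precedence: a CLI value that differs from the
    default is treated as explicitly set and wins; otherwise the value
    persisted in config.toml; otherwise the built-in default."""
    if cli_val != default:
        return cli_val
    return saved.get(key, default)

# config.toml persisted {"poll_seconds": 30}; the CLI flag was left at its default (60)
assert resolve({"poll_seconds": 30}, "poll_seconds", 60, 60) == 30
# an explicit --poll-seconds 15 overrides the persisted value for this run
assert resolve({"poll_seconds": 30}, "poll_seconds", 15, 60) == 15
```

One caveat of this scheme: a CLI value that happens to equal the default is indistinguishable from "not set", so it cannot override a persisted value.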

Surface Architecture

The tray implements three distinct UI surfaces with separate code paths, interaction patterns, and confidence profiles (Decision 6):

| Surface | Method | Event type | Copy function | Trigger |
| --- | --- | --- | --- | --- |
| Transition suggestion | _handle_transition | prompt_label | transition_suggestion_text | App transition |
| Live status | _publish_live_status | live_status | live_status_text | Every poll cycle |
| Gap-fill indicator | _publish_unlabeled_time | unlabeled_time | gap_fill_prompt | Every poll cycle (passive) |
| Gap-fill prompt | _publish_gap_fill_prompt | gap_fill_prompt | gap_fill_prompt | Idle return / session start / post-acceptance |
| Gap-fill escalation | _check_escalation | gap_fill_escalated | (none) | Unlabeled time exceeds threshold |
  • Transition suggestions aggregate all buckets in the completed interval via infer.aggregation.aggregate_interval and display an action-oriented prompt with a concrete local-time range (e.g. "Was this Coding? 12:00–12:47").
  • Live status predicts only the current single bucket and publishes a passive present-tense label ("Now: Coding").
  • Gap-fill indicator is a passive badge showing total unlabeled time since the last confirmed label. Published every poll cycle as an unlabeled_time event. Does not interrupt the user.
  • Gap-fill prompt is an active prompt published only at three defined trigger points: idle return (>5 min idle), session start, or immediately after the user accepts a transition suggestion.
  • Gap-fill escalation fires when unlabeled time exceeds a configurable threshold (gap_fill_escalation_minutes, default 480 = one active day). Changes the tray icon color to orange. No popup or notification is sent.
  • Numeric confidence is never shown to the user on transition or live status surfaces.
  • All user-facing copy strings are centralized in ui.copy.
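
The transition-suggestion copy pairs the aggregated label with a concrete local-time range and no numeric confidence. A minimal sketch of that formatting (the real strings live in ui.copy; this helper name and signature are illustrative):

```python
import datetime as dt

def transition_suggestion_text(label: str, start: dt.datetime, end: dt.datetime) -> str:
    # Illustrative stand-in for the action-oriented prompt: the predicted
    # label plus a local-time range, with no numeric confidence shown.
    return f"Was this {label}? {start:%H:%M}\u2013{end:%H:%M}"

text = transition_suggestion_text(
    "Coding",
    dt.datetime(2026, 2, 26, 12, 0),
    dt.datetime(2026, 2, 26, 12, 47),
)
# -> "Was this Coding? 12:00–12:47"
```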

Gap-fill events

Three WebSocket event types support the gap-fill surface:

  • unlabeled_time — published every poll cycle when unlabeled time exists: unlabeled_minutes, text (human-readable badge text), last_label_end, ts.
  • gap_fill_prompt — published at idle return, session start, or post-acceptance: trigger ("idle_return", "session_start", or "post_acceptance"), unlabeled_minutes, text, last_label_end, ts.
  • gap_fill_escalated — published when unlabeled time exceeds the escalation threshold: unlabeled_minutes, threshold_minutes.

The gap_fill_escalation_minutes setting (default 480) controls when escalation fires. Unlike the reject threshold, it is a user-facing setting.
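
A WebSocket client can branch on each event's type field; the payload keys below match the event list above, while the handler itself is an illustrative sketch:

```python
def describe_gap_fill_event(event: dict) -> str:
    """Illustrative dispatcher over the three gap-fill event payloads."""
    kind = event["type"]
    if kind == "unlabeled_time":
        return f"badge: {event['text']} ({event['unlabeled_minutes']}m)"
    if kind == "gap_fill_prompt":
        return f"prompt ({event['trigger']}): {event['text']}"
    if kind == "gap_fill_escalated":
        return (
            f"escalated: {event['unlabeled_minutes']}m "
            f"exceeds {event['threshold_minutes']}m"
        )
    raise ValueError(f"not a gap-fill event: {kind}")

msg = describe_gap_fill_event(
    {"type": "gap_fill_escalated", "unlabeled_minutes": 485.0, "threshold_minutes": 480}
)
# -> "escalated: 485.0m exceeds 480m"
```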

Privacy

Same guarantees as the web UI: no raw window titles or keystrokes are displayed or stored. Desktop notifications redact app names by default.

Shared monitor and suggestion helpers are documented in ui.runtime.

taskclf.ui.tray

System tray labeling app: persistent background labeler with activity transition detection.

Launch with:

taskclf tray
taskclf tray --model-dir models/run_20260226

Runs a pystray icon in the system tray that:

  • Polls ActivityWatch for the current foreground app
  • Detects activity transitions (dominant app changes persisting >= N minutes)
  • Sends desktop notifications prompting the user to label completed blocks
  • Left-click opens the web dashboard; labeling is done through the web UI
  • Publishes prediction/suggestion events to a shared EventBus for the web UI

TrayLabeler dataclass

System tray icon with labeling menus and notification support.

Parameters:

Name Type Description Default
data_dir Path

Path to the processed data directory (for label storage).

(lambda: Path(DEFAULT_DATA_DIR))()
model_dir Path | None

Optional path to a model bundle for label suggestions.

None
models_dir Path | None

Optional path to the directory containing all model bundles. When set, the tray builds a dynamic "Prediction Model" submenu listing available bundles for hot-swapping.

None
aw_host str

ActivityWatch server URL.

DEFAULT_AW_HOST
title_salt str

Salt for hashing window titles.

DEFAULT_TITLE_SALT
poll_seconds int

Seconds between AW polls.

DEFAULT_POLL_SECONDS
transition_minutes int

Minutes for transition detection threshold.

DEFAULT_TRANSITION_MINUTES
event_bus EventBus | None

Optional shared event bus for broadcasting events.

None
ui_port int

Port for the embedded UI server.

8741
open_browser bool

When True, browser-mode startup opens the UI in the default browser immediately. When False, the server starts without launching a browser tab.

True
Source code in src/taskclf/ui/tray.py
@dataclass(kw_only=True, eq=False)
class TrayLabeler:
    """System tray icon with labeling menus and notification support.

    Args:
        data_dir: Path to the processed data directory (for label storage).
        model_dir: Optional path to a model bundle for label suggestions.
        models_dir: Optional path to the directory containing all model
            bundles.  When set, the tray builds a dynamic "Prediction Model"
            submenu listing available bundles for hot-swapping.
        aw_host: ActivityWatch server URL.
        title_salt: Salt for hashing window titles.
        poll_seconds: Seconds between AW polls.
        transition_minutes: Minutes for transition detection threshold.
        event_bus: Optional shared event bus for broadcasting events.
        ui_port: Port for the embedded UI server.
        open_browser: When ``True``, browser-mode startup opens the UI in
            the default browser immediately.  When ``False``, the server
            starts without launching a browser tab.
    """

    data_dir: Path = field(default_factory=lambda: Path(DEFAULT_DATA_DIR))
    model_dir: Path | None = None
    models_dir: Path | None = None
    aw_host: str = DEFAULT_AW_HOST
    title_salt: str = DEFAULT_TITLE_SALT
    poll_seconds: int = DEFAULT_POLL_SECONDS
    aw_timeout_seconds: int = DEFAULT_AW_TIMEOUT_SECONDS
    transition_minutes: int = DEFAULT_TRANSITION_MINUTES
    event_bus: EventBus | None = None
    ui_port: int = 8741
    dev: bool = False
    browser: bool = False
    no_tray: bool = False
    open_browser: bool = True
    username: str | None = None
    notifications_enabled: bool = True
    privacy_notifications: bool = True
    retrain_config: Path | None = None
    gap_fill_escalation_minutes: int = 480
    _data_dir: Path = field(init=False)
    _model_dir: Path | None = field(init=False, default=None)
    _models_dir: Path | None = field(init=False, default=None)
    _retrain_config: Path | None = field(init=False, default=None)
    _labels_path: Path = field(init=False)
    _config: UserConfig = field(init=False)
    _notifications_enabled: bool = field(init=False, default=True)
    _privacy_notifications: bool = field(init=False, default=True)
    _current_app: str = field(init=False, default="unknown")
    _suggested_label: str | None = field(init=False, default=None)
    _suggested_confidence: float | None = field(init=False, default=None)
    _ui_port: int = field(init=False, default=8741)
    _ui_server_running: bool = field(init=False, default=False)
    _ui_proc: Any = field(init=False, default=None)
    _vite_proc: Any = field(init=False, default=None)
    _aw_host: str = field(init=False, default=DEFAULT_AW_HOST)
    _title_salt: str = field(init=False, default=DEFAULT_TITLE_SALT)
    _dev: bool = field(init=False, default=False)
    _browser: bool = field(init=False, default=False)
    _no_tray: bool = field(init=False, default=False)
    _open_browser: bool = field(init=False, default=True)
    _transition_count: int = field(init=False, default=0)
    _last_transition: dict[str, Any] | None = field(init=False, default=None)
    _labels_saved_count: int = field(init=False, default=0)
    _model_schema_hash: str | None = field(init=False, default=None)
    _event_bus: EventBus = field(init=False)
    _suggester: _LabelSuggester | None = field(init=False, default=None)
    _initial_model_load_started: bool = field(init=False, default=False)
    _monitor: ActivityMonitor = field(init=False)
    _icon: Any = field(init=False, default=None)
    _unlabeled_minutes: float = field(init=False, default=0.0)
    _last_label_end_cache: dt.datetime | None = field(init=False, default=None)
    _last_label_cache_count: int = field(init=False, default=-1)
    _escalated: bool = field(init=False, default=False)
    _gap_fill_escalation_minutes: int = field(init=False, default=480)

    def __post_init__(self) -> None:
        self._data_dir = self.data_dir
        self._model_dir = self.model_dir
        self._models_dir = self.models_dir
        self._retrain_config = self.retrain_config
        self._labels_path = self.data_dir / "labels_v1" / "labels.parquet"
        self._config = UserConfig(self.data_dir)
        if self.username is not None:
            self._config.username = self.username

        saved = self._config.as_dict()
        notifications_enabled = self._resolve(
            saved,
            "notifications_enabled",
            self.notifications_enabled,
            True,
        )
        privacy_notifications = self._resolve(
            saved,
            "privacy_notifications",
            self.privacy_notifications,
            True,
        )
        poll_seconds = self._resolve(
            saved,
            "poll_seconds",
            self.poll_seconds,
            DEFAULT_POLL_SECONDS,
        )
        aw_timeout_seconds = self._resolve(
            saved,
            "aw_timeout_seconds",
            self.aw_timeout_seconds,
            DEFAULT_AW_TIMEOUT_SECONDS,
        )
        transition_minutes = self._resolve(
            saved,
            "transition_minutes",
            self.transition_minutes,
            DEFAULT_TRANSITION_MINUTES,
        )
        aw_host = self._resolve(saved, "aw_host", self.aw_host, DEFAULT_AW_HOST)
        title_salt = self._resolve(
            saved, "title_salt", self.title_salt, DEFAULT_TITLE_SALT
        )
        ui_port = self._resolve(saved, "ui_port", self.ui_port, 8741)

        # Resolved values are applied at runtime only; do not rewrite config.toml on
        # every startup (starter template is written once when the file is missing).

        self._notifications_enabled = notifications_enabled
        self._privacy_notifications = privacy_notifications
        self._current_app: str = "unknown"
        self._suggested_label: str | None = None
        self._suggested_confidence: float | None = None
        self._ui_port = ui_port
        self._ui_server_running = False
        self._ui_proc: Any = None
        self._vite_proc: Any = None
        self._aw_host = aw_host
        self._title_salt = title_salt
        self._dev = self.dev
        self._browser = self.browser
        self._no_tray = self.no_tray
        self._open_browser = self.open_browser

        # Electron-spawned sidecar: never use pystray (can return immediately
        # without a GUI context). CLI users can still use --browser --no-open-browser
        # with a tray icon unless TASKCLF_ELECTRON_SHELL=1 is set by Electron.
        if (
            os.environ.get("TASKCLF_ELECTRON_SHELL") == "1"
            and self._browser
            and not self._open_browser
        ):
            self._no_tray = True

        self._transition_count: int = 0
        self._last_transition: dict[str, Any] | None = None
        self._labels_saved_count: int = 0
        self._model_schema_hash: str | None = None

        self._gap_fill_escalation_minutes = self._resolve(
            saved,
            "gap_fill_escalation_minutes",
            self.gap_fill_escalation_minutes,
            480,
        )

        self._event_bus = self.event_bus if self.event_bus is not None else EventBus()

        self._suggester: _LabelSuggester | None = None
        self._initial_model_load_started = False

        idle_transition_minutes = self._resolve(
            saved,
            "idle_transition_minutes",
            DEFAULT_IDLE_TRANSITION_MINUTES,
            DEFAULT_IDLE_TRANSITION_MINUTES,
        )

        self._monitor = ActivityMonitor(
            aw_host=aw_host,
            title_salt=title_salt,
            poll_seconds=poll_seconds,
            aw_timeout_seconds=aw_timeout_seconds,
            transition_minutes=transition_minutes,
            idle_transition_minutes=idle_transition_minutes,
            on_transition=self._handle_transition,
            on_poll=self._handle_poll,
            on_initial_app=self._handle_initial_app,
            event_bus=self._event_bus,
        )

    @staticmethod
    def _resolve(saved: dict[str, Any], key: str, cli_val: Any, default: Any) -> Any:
        """Return *cli_val* when it was explicitly set, else the persisted value."""
        if cli_val != default:
            return cli_val
        return saved.get(key, default)

    def _get_last_label_end(self) -> dt.datetime | None:
        """Return the latest label ``end_ts``, using a cache keyed on save count."""
        if self._last_label_cache_count == self._labels_saved_count:
            return self._last_label_end_cache

        self._last_label_cache_count = self._labels_saved_count

        if not self._labels_path.exists():
            self._last_label_end_cache = None
            return None

        try:
            from taskclf.labels.store import read_label_spans

            spans = read_label_spans(self._labels_path)
            if not spans:
                self._last_label_end_cache = None
                return None
            latest = max(s.end_ts for s in spans)
            if latest.tzinfo is None:
                latest = latest.replace(tzinfo=dt.timezone.utc)
            self._last_label_end_cache = latest
            return latest
        except Exception:
            logger.debug("Could not read labels for gap-fill", exc_info=True)
            self._last_label_end_cache = None
            return None

    @staticmethod
    def _format_duration(minutes: float) -> str:
        """Format a duration in minutes to a human-readable string like '2h 30m'."""
        total = int(minutes)
        if total < 1:
            return "0m"
        hours, mins = divmod(total, 60)
        if hours and mins:
            return f"{hours}h {mins}m"
        if hours:
            return f"{hours}h"
        return f"{mins}m"

    def _publish_unlabeled_time(self) -> None:
        """Compute unlabeled time and publish an ``unlabeled_time`` event."""
        if self._event_bus is None:
            return

        last_end = self._get_last_label_end()
        now = dt.datetime.now(dt.timezone.utc)

        if last_end is None:
            self._unlabeled_minutes = 0.0
            return

        delta = (now - last_end).total_seconds() / 60.0
        self._unlabeled_minutes = max(0.0, delta)

        if self._unlabeled_minutes <= 0:
            return

        from taskclf.ui.copy import gap_fill_prompt

        duration_str = self._format_duration(self._unlabeled_minutes)
        self._event_bus.publish_threadsafe(
            {
                "type": "unlabeled_time",
                "unlabeled_minutes": round(self._unlabeled_minutes, 1),
                "text": gap_fill_prompt(duration_str),
                "last_label_end": last_end.isoformat(),
                "ts": now.isoformat(),
            }
        )

        self._check_escalation()

    def _check_escalation(self) -> None:
        """Publish ``gap_fill_escalated`` and update icon when threshold is exceeded."""
        should_escalate = self._unlabeled_minutes >= self._gap_fill_escalation_minutes
        if should_escalate and not self._escalated:
            self._escalated = True
            if self._event_bus is not None:
                self._event_bus.publish_threadsafe(
                    {
                        "type": "gap_fill_escalated",
                        "unlabeled_minutes": round(self._unlabeled_minutes, 1),
                        "threshold_minutes": self._gap_fill_escalation_minutes,
                    }
                )
            if self._icon is not None:
                self._icon.icon = _make_icon_image(color="#FF9800")
        elif not should_escalate and self._escalated:
            self._escalated = False
            if self._icon is not None:
                self._icon.icon = _make_icon_image()

    def _publish_gap_fill_prompt(self, trigger: str) -> None:
        """Publish a ``gap_fill_prompt`` event if unlabeled time exists.

        Args:
            trigger: One of ``"idle_return"``, ``"session_start"``,
                or ``"post_acceptance"``.
        """
        if self._event_bus is None:
            return

        last_end = self._get_last_label_end()
        now = dt.datetime.now(dt.timezone.utc)
        if last_end is None:
            return

        minutes = max(0.0, (now - last_end).total_seconds() / 60.0)
        if minutes <= 0:
            return

        self._unlabeled_minutes = minutes

        from taskclf.ui.copy import gap_fill_prompt

        duration_str = self._format_duration(minutes)
        self._event_bus.publish_threadsafe(
            {
                "type": "gap_fill_prompt",
                "trigger": trigger,
                "unlabeled_minutes": round(minutes, 1),
                "text": gap_fill_prompt(duration_str),
                "last_label_end": last_end.isoformat(),
                "ts": now.isoformat(),
            }
        )

    def _on_suggestion_accepted(self) -> None:
        """Called when the user accepts a transition suggestion.

        Publishes a ``gap_fill_prompt`` event if adjacent unlabeled time
        exists, piggybacking on the user's labeling attention.
        """
        self._last_label_cache_count = -1
        self._publish_gap_fill_prompt("post_acceptance")

    def _handle_initial_app(self, app: str, ts: dt.datetime) -> None:
        """Publish an initial_app event so the UI can prompt for the pre-start period."""
        if self._event_bus is not None:
            self._event_bus.publish_threadsafe(
                {
                    "type": "initial_app",
                    "app": app,
                    "ts": ts.isoformat(),
                }
            )
        self._publish_gap_fill_prompt("session_start")

    def _on_label_saved(self) -> None:
        """Increment the saved-label counter (called by the embedded server)."""
        self._labels_saved_count += 1

    def _on_model_trained(self, model_dir_str: str) -> None:
        """Auto-reload the model when training completes via the web UI."""
        model_path = Path(model_dir_str)
        if not model_path.is_dir():
            return

        if self._models_dir is not None:
            try:
                new_suggester = _LabelSuggester.from_policy(self._models_dir)
                new_suggester._aw_host = self._aw_host
                new_suggester._title_salt = self._title_salt
                new_suggester._user_id = self._config.user_id
                self._suggester = new_suggester
                self._model_dir = model_path
                self._model_schema_hash = new_suggester._predictor.metadata.schema_hash
                logger.info("Auto-loaded via inference policy after training")
                return
            except Exception:
                logger.debug(
                    "Policy load failed after training; using bundle directly",
                    exc_info=True,
                )

        try:
            new_suggester = _LabelSuggester(model_path)
            new_suggester._aw_host = self._aw_host
            new_suggester._title_salt = self._title_salt
            new_suggester._user_id = self._config.user_id
            self._suggester = new_suggester
            self._model_dir = model_path
            self._model_schema_hash = new_suggester._predictor.metadata.schema_hash
            logger.info("Auto-loaded newly trained model from %s", model_path)
        except Exception:
            logger.warning(
                "Could not auto-load trained model from %s", model_path, exc_info=True
            )

    def _tray_state_event(self) -> dict[str, Any]:
        """Build the latest tray state payload for WebSocket and snapshot clients."""
        return {
            "type": "tray_state",
            "model_loaded": self._suggester is not None,
            "model_dir": str(self._model_dir) if self._model_dir else None,
            "model_schema_hash": self._model_schema_hash,
            "suggested_label": self._suggested_label,
            "suggested_confidence": self._suggested_confidence,
            "transition_count": self._transition_count,
            "last_transition": self._last_transition,
            "labels_saved_count": self._labels_saved_count,
            "data_dir": str(self._data_dir),
            "ui_port": self._ui_port,
            "dev_mode": self._dev,
            "paused": self._monitor.is_paused,
        }

    def _publish_tray_state(self) -> None:
        """Publish the current tray state when the shared EventBus is ready."""
        if self._event_bus is None:
            return
        self._event_bus.publish_threadsafe(self._tray_state_event())

    def _model_configured(self) -> bool:
        """Return True when startup should try to load a suggester."""
        return self._models_dir is not None or self._model_dir is not None

    def _load_initial_suggester(self) -> None:
        """Load the optional suggester after UI startup to reduce cold-start latency."""
        if self._models_dir is not None:
            try:
                self._suggester = _LabelSuggester.from_policy(self._models_dir)
                self._suggester._aw_host = self._aw_host
                self._suggester._title_salt = self._title_salt
                self._suggester._user_id = self._config.user_id
                self._model_schema_hash = (
                    self._suggester._predictor.metadata.schema_hash
                )
                logger.info("Model loaded via inference policy")
            except Exception:
                logger.debug(
                    "No inference policy; trying model_dir fallback", exc_info=True
                )
                self._suggester = None

        if self._suggester is None and self._model_dir is not None:
            try:
                self._suggester = _LabelSuggester(self._model_dir)
                self._suggester._aw_host = self._aw_host
                self._suggester._title_salt = self._title_salt
                self._suggester._user_id = self._config.user_id
                self._model_schema_hash = (
                    self._suggester._predictor.metadata.schema_hash
                )
                logger.info("Model loaded from %s", self._model_dir)
            except Exception:
                logger.warning(
                    "Could not load model from %s", self._model_dir, exc_info=True
                )

        if self._event_bus is not None and self._event_bus.wait_ready(timeout=30):
            self._publish_tray_state()

    def _start_initial_model_load(self) -> None:
        """Start lazy model loading once the UI server launch path has been kicked off."""
        if self._initial_model_load_started or not self._model_configured():
            return
        self._initial_model_load_started = True
        threading.Thread(
            target=self._load_initial_suggester,
            daemon=True,
            name="taskclf-model-load",
        ).start()

    def _toggle_pause(self) -> bool:
        """Toggle pause state on the monitor. Returns new paused state."""
        if self._monitor.is_paused:
            self._monitor.resume()
        else:
            self._monitor.pause()
        return self._monitor.is_paused

    def _handle_poll(self, dominant_app: str) -> None:
        self._current_app = dominant_app
        self._publish_tray_state()
        self._publish_live_status()
        self._publish_unlabeled_time()

    def _publish_live_status(self) -> None:
        """Predict the current bucket and publish a ``live_status`` event.

        This is a passive, glanceable status separate from transition
        suggestions.  It uses only the latest single bucket (SEM-002).
        """
        if self._suggester is None or self._event_bus is None:
            return

        now = dt.datetime.now(dt.timezone.utc)
        bucket_start = now.replace(second=0, microsecond=0)
        bucket_end = now

        result = self._suggester.suggest(bucket_start, bucket_end)
        if result is None:
            return

        label, _confidence = result

        from taskclf.ui.copy import live_status_text

        self._event_bus.publish_threadsafe(
            {
                "type": "live_status",
                "label": label,
                "text": live_status_text(label),
                "ts": now.isoformat(),
            }
        )

    def _is_breakidle_block(self, prev_app: str) -> bool:
        """Return True when the completed block should be auto-labeled BreakIdle.

        A block qualifies when:
        - ``prev_app`` is a known lockscreen/screensaver app ID, OR
        - the model suggested ``BreakIdle`` for this block.

        ``prev_app`` is a normalized reverse-domain app ID (e.g.
        ``com.apple.loginwindow``) as returned by
        :func:`~taskclf.adapters.activitywatch.mapping.normalize_app`.
        """
        if prev_app in _LOCKSCREEN_APP_IDS:
            return True
        if self._suggested_label == "BreakIdle":
            return True
        return False

    def _auto_save_breakidle(
        self,
        block_start: dt.datetime,
        block_end: dt.datetime,
    ) -> None:
        """Write a BreakIdle label span directly without user confirmation."""
        from taskclf.core.types import LabelSpan
        from taskclf.labels.store import overwrite_label_span

        uid = self._config.user_id
        span = LabelSpan(
            start_ts=block_start,
            end_ts=block_end,
            label="BreakIdle",
            provenance="auto_idle",
            user_id=uid,
            confidence=1.0,
        )
        try:
            overwrite_label_span(span, self._labels_path)
            self._on_label_saved()
            logger.info(
                "Auto-saved BreakIdle label: %s%s",
                block_start.isoformat(),
                block_end.isoformat(),
            )
        except Exception:
            logger.warning("Failed to auto-save BreakIdle label", exc_info=True)

        if self._event_bus is not None:
            self._event_bus.publish_threadsafe(
                {
                    "type": "label_created",
                    "label": "BreakIdle",
                    "confidence": 1.0,
                    "ts": block_end.isoformat(),
                    "start_ts": block_start.isoformat(),
                    "extend_forward": False,
                }
            )
            self._event_bus.publish_threadsafe(
                {"type": "suggestion_cleared", "reason": "auto_saved_breakidle"}
            )
            self._event_bus.publish_threadsafe(
                {
                    "type": "labels_changed",
                    "reason": "auto_saved_breakidle",
                    "ts": block_end.isoformat(),
                }
            )

    def _handle_transition(
        self,
        prev_app: str,
        new_app: str,
        block_start: dt.datetime,
        block_end: dt.datetime,
    ) -> None:
        self._transition_count += 1
        self._last_transition = {
            "prev_app": prev_app,
            "new_app": new_app,
            "block_start": block_start.isoformat(),
            "block_end": block_end.isoformat(),
            "fired_at": dt.datetime.now(dt.timezone.utc).isoformat(),
        }

        suggestion = None
        if self._suggester is not None:
            suggestion = self._suggester.suggest(block_start, block_end)

        if suggestion is not None:
            self._suggested_label, self._suggested_confidence = suggestion
        else:
            self._suggested_label = None
            self._suggested_confidence = None

        is_lockscreen = prev_app in _LOCKSCREEN_APP_IDS
        is_breakidle = self._is_breakidle_block(prev_app)
        logger.debug(
            "DEBUG transition: prev_app=%r -> new_app=%r, "
            "is_lockscreen=%s, suggested_label=%r, is_breakidle=%s",
            prev_app,
            new_app,
            is_lockscreen,
            self._suggested_label,
            is_breakidle,
        )
        if is_breakidle:
            self._auto_save_breakidle(block_start, block_end)
            idle_duration_min = (block_end - block_start).total_seconds() / 60.0
            if is_lockscreen and idle_duration_min > 5:
                self._last_label_cache_count = -1
                self._publish_gap_fill_prompt("idle_return")
            return

        if self._event_bus is None or not self._event_bus.has_subscribers:
            self._send_notification(prev_app, new_app, block_start, block_end)

        if self._event_bus is not None:
            from taskclf.ui.copy import transition_suggestion_text

            start_str = _display_clock_time_local(block_start)
            end_str = _display_clock_time_local(block_end)
            suggestion_text = (
                transition_suggestion_text(self._suggested_label, start_str, end_str)
                if self._suggested_label is not None
                else None
            )
            self._event_bus.publish_threadsafe(
                {
                    "type": "prompt_label",
                    "prev_app": prev_app,
                    "new_app": new_app,
                    "block_start": block_start.isoformat(),
                    "block_end": block_end.isoformat(),
                    "duration_min": max(
                        1, int((block_end - block_start).total_seconds() / 60)
                    ),
                    "suggested_label": self._suggested_label,
                    "suggestion_text": suggestion_text,
                }
            )
            if (
                self._suggested_label is not None
                and self._suggested_confidence is not None
            ):
                self._event_bus.publish_threadsafe(
                    {
                        "type": "suggest_label",
                        "reason": "app_switch",
                        "old_label": prev_app,
                        "suggested": self._suggested_label,
                        "confidence": self._suggested_confidence,
                        "block_start": block_start.isoformat(),
                        "block_end": block_end.isoformat(),
                    }
                )
            else:
                self._event_bus.publish_threadsafe(
                    {
                        "type": "no_model_transition",
                        "current_app": new_app,
                        "ts": block_end.isoformat(),
                        "block_start": block_start.isoformat(),
                        "block_end": block_end.isoformat(),
                    }
                )

    def _send_notification(
        self,
        prev_app: str,
        new_app: str,
        block_start: dt.datetime,
        block_end: dt.datetime,
    ) -> None:
        if not self._notifications_enabled:
            return

        from taskclf.ui.copy import transition_suggestion_text

        title = "taskclf — Activity changed"
        start_str = _display_clock_time_local(block_start)
        end_str = _display_clock_time_local(block_end)
        range_str = _display_time_range_exact_local(block_start, block_end)

        if self._suggested_label is not None:
            message = (
                f"{transition_suggestion_text(self._suggested_label, start_str, end_str)}"
                f"\n{range_str}"
            )
        elif self._privacy_notifications:
            message = f"Activity changed\n{range_str}"
        else:
            message = f"{prev_app} \u2192 {new_app}\n{range_str}"

        _send_desktop_notification(title, message, timeout=10)

    def _build_menu_items(self) -> tuple["pystray.MenuItem", ...]:
        """Return top-level menu items.

        Used as a callable by ``pystray.Menu`` so the menu is rebuilt
        (including a fresh Prediction Model submenu scan) on every right-click.
        """
        import pystray

        return (
            pystray.MenuItem(
                "Toggle Dashboard",
                self._open_dashboard,
                default=True,
            ),
            pystray.MenuItem(
                lambda _: "Resume" if self._monitor.is_paused else "Pause",
                self._on_pause_menu,
            ),
            pystray.MenuItem("Show Status", self._show_status),
            pystray.Menu.SEPARATOR,
            pystray.MenuItem("Today's Labels", self._label_stats),
            pystray.MenuItem("Import Labels", self._import_labels),
            pystray.MenuItem("Export Labels", self._export_labels),
            pystray.Menu.SEPARATOR,
            pystray.MenuItem("Prediction Model", self._build_model_submenu()),
            pystray.MenuItem("Open Data Folder", self._open_data_dir),
            pystray.MenuItem("Edit Config", self._edit_config),
            pystray.MenuItem("Advanced", self._build_advanced_submenu()),
            pystray.MenuItem("Report Issue", self._report_issue),
            pystray.Menu.SEPARATOR,
            pystray.MenuItem("Quit", self._quit),
        )

    def _build_menu(self) -> "pystray.Menu":
        """Build a static snapshot of the menu (used by tests)."""
        import pystray

        return pystray.Menu(*self._build_menu_items())

    def _on_pause_menu(self, *_args: Any) -> None:
        paused = self._toggle_pause()
        state = "paused" if paused else "resumed"
        self._notify(f"Monitoring {state}")

    def _export_labels(self, *_args: Any) -> None:
        from taskclf.labels.store import export_labels_to_csv

        csv_path: Path | None = None
        try:
            import tkinter as tk
            from tkinter import filedialog

            root = tk.Tk()
            root.withdraw()
            chosen = filedialog.asksaveasfilename(
                defaultextension=".csv",
                filetypes=[("CSV files", "*.csv"), ("All files", "*.*")],
                initialfile="labels_export.csv",
                title="Export Labels",
            )
            root.destroy()
            if not chosen:
                return
            csv_path = Path(chosen)
        except Exception:
            logger.debug("tkinter unavailable, using default export path")
            csv_path = self._data_dir / "labels_v1" / "labels_export.csv"

        try:
            export_labels_to_csv(self._labels_path, csv_path)
            self._notify_with_reveal(
                f"Labels exported to {csv_path.name}",
                csv_path,
            )
            logger.info("Labels exported to %s", csv_path)
        except ValueError as exc:
            self._notify(f"Export failed: {exc}")
            logger.warning("Label export failed: %s", exc)

    def _import_labels(self, *_args: Any) -> None:
        from taskclf.labels.store import (
            import_labels_from_csv,
            merge_label_spans,
            read_label_spans,
            write_label_spans,
        )

        csv_path: Path | None = None
        strategy: str | None = None
        try:
            import tkinter as tk
            from tkinter import filedialog, messagebox

            root = tk.Tk()
            root.withdraw()
            chosen = filedialog.askopenfilename(
                filetypes=[("CSV files", "*.csv"), ("All files", "*.*")],
                title="Import Labels",
            )
            if not chosen:
                root.destroy()
                return
            csv_path = Path(chosen)

            answer = messagebox.askyesnocancel(
                "Import Strategy",
                "Merge with existing labels?\n\n"
                "Yes = merge (keep existing, add new)\n"
                "No = overwrite (replace all labels)",
                parent=root,
            )
            root.destroy()
            if answer is None:
                return
            strategy = "merge" if answer else "overwrite"
        except Exception:
            logger.debug("tkinter unavailable for import dialog, trying osascript")
            csv_path, strategy = self._import_labels_osascript()
            if csv_path is None:
                return

        try:
            imported = import_labels_from_csv(csv_path)
        except Exception as exc:
            self._notify(f"Import failed: {exc}")
            logger.warning("Label import failed: %s", exc)
            return

        try:
            if strategy == "overwrite":
                write_label_spans(imported, self._labels_path)
            else:
                existing: list = []
                if self._labels_path.exists():
                    existing = read_label_spans(self._labels_path)
                merged = merge_label_spans(existing, imported)
                write_label_spans(merged, self._labels_path)

            self._notify(f"Imported {len(imported)} labels from {csv_path.name}")
            logger.info(
                "Imported %d labels from %s (strategy=%s)",
                len(imported),
                csv_path,
                strategy,
            )
        except ValueError as exc:
            self._notify(f"Import failed: {exc}")
            logger.warning("Label import failed: %s", exc)

    def _import_labels_osascript(self) -> tuple[Path | None, str | None]:
        """macOS fallback for import file dialog using osascript."""
        if platform.system() != "Darwin":
            self._notify("Import failed: no file dialog available")
            return None, None
        try:
            result = subprocess.run(
                [
                    "osascript",
                    "-e",
                    'POSIX path of (choose file of type {"csv"}'
                    ' with prompt "Import Labels")',
                ],
                capture_output=True,
                text=True,
                timeout=120,
            )
            if result.returncode != 0 or not result.stdout.strip():
                return None, None
            csv_path = Path(result.stdout.strip())

            btn = subprocess.run(
                [
                    "osascript",
                    "-e",
                    "button returned of (display dialog"
                    ' "Merge with existing labels?\\n\\n'
                    "Merge = keep existing, add new\\n"
                    'Overwrite = replace all labels"'
                    ' buttons {"Cancel","Overwrite","Merge"}'
                    ' default button "Merge")',
                ],
                capture_output=True,
                text=True,
                timeout=120,
            )
            if btn.returncode != 0 or not btn.stdout.strip():
                return None, None
            strategy = "merge" if btn.stdout.strip() == "Merge" else "overwrite"
            return csv_path, strategy
        except Exception as exc:
            logger.debug("osascript import dialog failed: %s", exc)
            self._notify("Import failed: no file dialog available")
            return None, None

    def _label_stats(self, *_args: Any) -> None:
        """Show a notification with today's labeling progress."""
        from taskclf.labels.store import read_label_spans

        if not self._labels_path.exists():
            self._notify("No labels yet")
            return

        try:
            spans = read_label_spans(self._labels_path)
        except Exception as exc:
            self._notify(f"Could not read labels: {exc}")
            return

        today = dt.datetime.now(dt.timezone.utc).date()
        today_spans = [s for s in spans if s.start_ts.date() == today]

        if not today_spans:
            self._notify("Today: no labels yet")
            return

        breakdown: dict[str, float] = {}
        for s in today_spans:
            mins = (s.end_ts - s.start_ts).total_seconds() / 60
            breakdown[s.label] = breakdown.get(s.label, 0) + mins

        total_min = sum(breakdown.values())
        hours = int(total_min // 60)
        mins = int(total_min % 60)
        time_str = f"{hours}h {mins}m" if hours else f"{mins}m"

        parts = [
            f"{label} {self._format_duration(m)}"
            for label, m in sorted(
                breakdown.items(),
                key=lambda x: x[1],
                reverse=True,
            )
        ]
        summary = f"Today: {len(today_spans)} labels, {time_str}{', '.join(parts)}"
        self._notify(summary)
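The per-label minutes aggregation above can be sketched in isolation. `label_minutes` is a hypothetical stand-alone helper that takes plain `(start, end, label)` tuples rather than the project's `LabelSpan` objects:

```python
import datetime as dt


def label_minutes(
    spans: list[tuple[dt.datetime, dt.datetime, str]],
) -> dict[str, float]:
    """Sum minutes per label, mirroring the breakdown loop in _label_stats."""
    breakdown: dict[str, float] = {}
    for start, end, label in spans:
        mins = (end - start).total_seconds() / 60
        breakdown[label] = breakdown.get(label, 0.0) + mins
    return breakdown


t0 = dt.datetime(2026, 1, 1, 9, 0, tzinfo=dt.timezone.utc)
spans = [
    (t0, t0 + dt.timedelta(minutes=30), "Coding"),
    (t0 + dt.timedelta(minutes=30), t0 + dt.timedelta(minutes=45), "Email"),
    (t0 + dt.timedelta(minutes=45), t0 + dt.timedelta(minutes=75), "Coding"),
]
print(label_minutes(spans))  # {'Coding': 60.0, 'Email': 15.0}
```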

    def _open_data_dir(self, *_args: Any) -> None:
        """Open the data directory in the OS file manager."""
        system = platform.system()
        try:
            if system == "Darwin":
                subprocess.Popen(["open", str(self._data_dir)])
            else:
                subprocess.Popen(["xdg-open", str(self._data_dir)])
        except Exception:
            logger.debug("Could not open data directory", exc_info=True)
            self._notify(f"Data dir: {self._data_dir}")

    def _edit_config(self, *_args: Any) -> None:
        """Open ``config.toml`` in the default text editor."""
        config_path = self._config._path
        system = platform.system()
        try:
            if system == "Darwin":
                subprocess.Popen(["open", "-t", str(config_path)])
            else:
                subprocess.Popen(["xdg-open", str(config_path)])
        except Exception:
            logger.debug("Could not open config file", exc_info=True)
            self._notify(f"Config: {config_path}")

    def _candidate_calibrator_store_dirs(self, base: Path) -> list[Path]:
        """Return candidate calibrator-store directories under ``artifacts/``."""
        artifacts_dir = base / "artifacts"
        if not artifacts_dir.is_dir():
            return []

        default_store = artifacts_dir / "calibrator_store"
        candidates: list[Path] = []
        seen: set[Path] = set()

        for store_dir in [
            default_store,
            *(p.parent for p in artifacts_dir.rglob("store.json")),
        ]:
            if store_dir in seen:
                continue
            seen.add(store_dir)
            if not (store_dir / "store.json").is_file():
                continue
            if not (store_dir / "global.json").is_file():
                continue
            candidates.append(store_dir)
        return candidates
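The scan above (default store dir first, then the parent of every `store.json` found under `artifacts/`, deduplicated while preserving order) can be exercised against a throwaway directory tree; the directory names here are purely illustrative:

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    artifacts = Path(tmp) / "artifacts"
    # Two stores: the default-named one and one nested under a run dir.
    for name in ("calibrator_store", "run_1/cal"):
        d = artifacts / name
        d.mkdir(parents=True)
        (d / "store.json").write_text("{}")
        (d / "global.json").write_text("{}")

    default = artifacts / "calibrator_store"
    seen: set[Path] = set()
    candidates: list[Path] = []
    # Default first, then every store.json parent; the seen-set skips the
    # default when rglob finds it again.
    for store_dir in [default, *(p.parent for p in artifacts.rglob("store.json"))]:
        if store_dir in seen:
            continue
        seen.add(store_dir)
        if (store_dir / "store.json").is_file() and (store_dir / "global.json").is_file():
            candidates.append(store_dir)

    print(sorted(c.name for c in candidates))  # ['cal', 'calibrator_store']
```

Prepending the default directory guarantees it ranks first in the candidate list even though `rglob` yields matches in an unspecified order.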

    def _find_matching_calibrator_store(
        self,
        *,
        models_dir: Path,
        model_bundle: Path,
        model_schema_hash: str,
    ) -> tuple[str | None, str | None]:
        """Return ``(relative_store_dir, method)`` for a matching store.

        Only stores with explicit model binding metadata are auto-selected,
        which avoids guessing across unrelated calibration outputs.
        """
        base = models_dir.parent
        matches: list[tuple[int, int, int, str, Path, str | None]] = []

        for store_dir in self._candidate_calibrator_store_dirs(base):
            try:
                store_meta = json.loads((store_dir / "store.json").read_text())
            except (json.JSONDecodeError, OSError):
                logger.debug(
                    "Could not inspect calibrator store metadata at %s",
                    store_dir,
                    exc_info=True,
                )
                continue

            store_bundle_id = store_meta.get("model_bundle_id")
            store_schema_hash = store_meta.get("model_schema_hash")
            if store_bundle_id is None and store_schema_hash is None:
                continue
            if (
                store_bundle_id is not None
                and str(store_bundle_id) != model_bundle.name
            ):
                continue
            if (
                store_schema_hash is not None
                and str(store_schema_hash) != model_schema_hash
            ):
                continue

            method_raw = store_meta.get("method")
            method = method_raw if isinstance(method_raw, str) else None
            created_at_raw = store_meta.get("created_at")
            created_at = created_at_raw if isinstance(created_at_raw, str) else ""
            matches.append(
                (
                    1 if store_bundle_id == model_bundle.name else 0,
                    1 if store_schema_hash == model_schema_hash else 0,
                    1 if store_dir.name == "calibrator_store" else 0,
                    created_at,
                    store_dir,
                    method,
                )
            )

        if not matches:
            return None, None

        _, _, _, _, best_dir, best_method = max(matches, key=lambda item: item[:4])
        return str(best_dir.relative_to(base)), best_method
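The ranking above relies on Python comparing tuples lexicographically: `max(matches, key=lambda item: item[:4])` prefers an explicit bundle-id match over a schema-hash match, then the default store name, then the newest `created_at` string. A minimal sketch with made-up candidates:

```python
# Each entry mirrors the match tuple built above:
# (bundle_match, schema_match, is_default_name, created_at, store_name).
matches = [
    (0, 1, 1, "2026-01-01", "store_a"),
    (1, 1, 0, "2025-12-01", "store_b"),  # bundle-id match outranks newer dates
    (0, 1, 0, "2026-02-01", "store_c"),
]
# Tuples compare field by field, so the leading bundle_match flag dominates.
best = max(matches, key=lambda item: item[:4])
print(best[4])  # store_b
```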

    def _ensure_inference_policy_file_for_editing(
        self,
    ) -> tuple[Path | None, str | None]:
        """Return ``(policy_path, notice)`` for editing.

        When ``inference_policy.json`` is missing, creates it only when a
        real model bundle can be resolved and seeded.

        Returns:
            ``(None, None)`` when ``models_dir`` is not configured.
            ``(path, None)`` when a policy file is ready to edit.
            ``(None, notice)`` when no resolved model is available; *notice*
            explains in-app steps first, with an optional CLI hint.
        """
        if self._models_dir is None:
            return None, None

        models_dir = self._models_dir
        policy_path = models_dir / DEFAULT_INFERENCE_POLICY_FILE
        if policy_path.is_file():
            return policy_path, None

        from taskclf.core.inference_policy import (
            build_inference_policy,
            PolicyValidationError,
            save_inference_policy,
            validate_policy,
        )
        from taskclf.infer.resolve import ModelResolutionError, resolve_model_dir

        models_dir.mkdir(parents=True, exist_ok=True)

        model_bundle: Path | None = None
        if (
            self._model_dir is not None
            and (self._model_dir / "metadata.json").is_file()
        ):
            model_bundle = self._model_dir
        else:
            try:
                model_bundle = resolve_model_dir(None, models_dir)
            except ModelResolutionError:
                model_bundle = None

        if model_bundle is not None:
            meta_path = model_bundle / "metadata.json"
            try:
                meta = json.loads(meta_path.read_text())
                model_schema_hash = str(meta["schema_hash"])
                model_label_set = list(meta["label_set"])
            except (KeyError, TypeError, ValueError, json.JSONDecodeError, OSError):
                logger.debug(
                    "Could not seed inference policy from %s; use CLI instead",
                    model_bundle,
                    exc_info=True,
                )
            else:
                raw_threshold = meta.get("reject_threshold")
                try:
                    reject_threshold = (
                        float(raw_threshold)
                        if raw_threshold is not None
                        else DEFAULT_REJECT_THRESHOLD
                    )
                except (TypeError, ValueError):
                    reject_threshold = DEFAULT_REJECT_THRESHOLD

                cal_store_rel, cal_method = self._find_matching_calibrator_store(
                    models_dir=models_dir,
                    model_bundle=model_bundle,
                    model_schema_hash=model_schema_hash,
                )
                policy = build_inference_policy(
                    model_dir=os.path.relpath(model_bundle, models_dir.parent),
                    model_schema_hash=model_schema_hash,
                    model_label_set=model_label_set,
                    reject_threshold=reject_threshold,
                    calibrator_store_dir=cal_store_rel,
                    calibration_method=cal_method,
                    source="tray-edit",
                )
                if cal_store_rel is not None:
                    try:
                        validate_policy(policy, models_dir)
                    except PolicyValidationError:
                        logger.debug(
                            "Ignoring detected calibrator store %s for starter policy",
                            cal_store_rel,
                            exc_info=True,
                        )
                        policy = policy.model_copy(
                            update={
                                "calibrator_store_dir": None,
                                "calibration_method": None,
                            }
                        )
                written = save_inference_policy(policy, models_dir)
                return written, None

        return (
            None,
            "No model available to seed inference_policy.json. "
            "Use Prediction Model or Open Data Folder (models/ next to your data folder). "
            "If you have the CLI: taskclf policy create --model-dir models/<run_id>",
        )

    def _edit_inference_policy(self, *_args: Any) -> None:
        """Open ``inference_policy.json`` in the default text editor.

        Creates the file when missing only if a resolved model can seed it.
        Otherwise, notifies the user with in-app guidance and an optional CLI hint.
        """
        if self._models_dir is None:
            self._notify("No models directory configured")
            return

        policy_path, notice = self._ensure_inference_policy_file_for_editing()
        if notice is not None:
            self._notify(notice)
        if policy_path is None:
            return

        system = platform.system()
        try:
            if system == "Darwin":
                subprocess.Popen(["open", "-t", str(policy_path)])
            else:
                subprocess.Popen(["xdg-open", str(policy_path)])
        except Exception:
            logger.debug("Could not open inference policy file", exc_info=True)
            self._notify(f"Inference policy: {policy_path}")

    def _reload_model(self, *_args: Any) -> None:
        """Re-read the model bundle from disk without restarting."""
        if self._models_dir is not None:
            try:
                new_suggester = _LabelSuggester.from_policy(self._models_dir)
                new_suggester._aw_host = self._aw_host
                new_suggester._title_salt = self._title_salt
                new_suggester._user_id = self._config.user_id
                self._suggester = new_suggester
                self._model_schema_hash = new_suggester._predictor.metadata.schema_hash
                self._notify("Config reloaded via inference policy")
                logger.info("Config reloaded via inference policy")
                return
            except Exception:
                logger.debug("Policy reload failed; trying model_dir", exc_info=True)

        if self._model_dir is None:
            self._notify("No model directory configured")
            return
        try:
            new_suggester = _LabelSuggester(self._model_dir)
            new_suggester._aw_host = self._aw_host
            new_suggester._title_salt = self._title_salt
            new_suggester._user_id = self._config.user_id
            self._suggester = new_suggester
            self._model_schema_hash = new_suggester._predictor.metadata.schema_hash
            self._notify(f"Model reloaded from {self._model_dir.name}")
            logger.info("Model reloaded from %s", self._model_dir)
        except Exception as exc:
            self._notify(f"Reload failed: {exc}")
            logger.warning("Model reload failed: %s", exc, exc_info=True)

    def _check_retrain(self, *_args: Any) -> None:
        """Check whether retraining is due and show a notification."""
        if self._models_dir is None:
            self._notify("No models directory configured")
            return

        try:
            import json

            from taskclf.train.retrain import (
                RetrainConfig,
                check_retrain_due,
                find_latest_model,
                load_retrain_config,
            )

            config = (
                load_retrain_config(self._retrain_config)
                if self._retrain_config is not None and self._retrain_config.is_file()
                else RetrainConfig()
            )

            latest = find_latest_model(self._models_dir)
            due = check_retrain_due(
                self._models_dir,
                config.global_retrain_cadence_days,
            )

            if latest is not None:
                raw = json.loads((latest / "metadata.json").read_text())
                created = raw.get("created_at", "unknown")
                if due:
                    self._notify(
                        f"Retrain recommended "
                        f"(cadence: {config.global_retrain_cadence_days}d, "
                        f"last: {latest.name} created {created})"
                    )
                else:
                    self._notify(f"Model is current ({latest.name}, created {created})")
            else:
                self._notify("Retrain recommended: no models found")
        except Exception as exc:
            self._notify(f"Check failed: {exc}")
            logger.warning("Retrain check failed: %s", exc, exc_info=True)

    def _build_model_submenu(self) -> "pystray.Menu":
        """Build a dynamic submenu listing available model bundles."""
        import pystray

        from taskclf.model_registry import list_bundles

        items: list[pystray.MenuItem] = []

        bundles = list_bundles(self._models_dir) if self._models_dir is not None else []
        valid_bundles = [b for b in bundles if b.valid]

        if valid_bundles:
            for bundle in valid_bundles:
                model_path = bundle.path

                def make_switch_cb(p: Path) -> Callable[..., None]:
                    def cb(*_a: Any) -> None:
                        self._switch_model(p)

                    return cb

                def make_checked(p: Path) -> Callable[..., bool]:
                    return lambda _item: (
                        self._model_dir is not None
                        and self._model_dir.resolve() == p.resolve()
                    )

                items.append(
                    pystray.MenuItem(
                        bundle.model_id,
                        make_switch_cb(model_path),
                        checked=make_checked(model_path),
                    )
                )

            items.append(
                pystray.MenuItem(
                    "No Model",
                    self._unload_model,
                    checked=lambda _item: self._model_dir is None,
                )
            )
            items.append(pystray.Menu.SEPARATOR)
            items.append(
                pystray.MenuItem(
                    "Refresh Model",
                    self._reload_model,
                    enabled=lambda _: self._model_dir is not None,
                )
            )
            items.append(
                pystray.MenuItem(
                    "Retrain Status",
                    self._check_retrain,
                    enabled=lambda _: self._models_dir is not None,
                )
            )
        else:
            items.append(
                pystray.MenuItem(
                    "No Models Found",
                    None,
                    enabled=False,
                )
            )
            items.append(pystray.Menu.SEPARATOR)
            items.append(
                pystray.MenuItem(
                    "Refresh Model",
                    self._reload_model,
                    enabled=lambda _: self._model_dir is not None,
                )
            )
            items.append(
                pystray.MenuItem(
                    "Retrain Status",
                    self._check_retrain,
                    enabled=lambda _: self._models_dir is not None,
                )
            )

        return pystray.Menu(*items)

    def _build_advanced_submenu(self) -> "pystray.Menu":
        """Power-user actions (inference policy, etc.)."""
        import pystray

        return pystray.Menu(
            pystray.MenuItem(
                "Edit Inference Policy",
                self._edit_inference_policy,
                enabled=lambda _: self._models_dir is not None,
            ),
        )

    def _switch_model(self, model_path: Path) -> None:
        """Hot-swap the active model to a different bundle."""
        if (
            self._model_dir is not None
            and self._model_dir.resolve() == model_path.resolve()
        ):
            return

        try:
            new_suggester = _LabelSuggester(model_path)
            new_suggester._aw_host = self._aw_host
            new_suggester._title_salt = self._title_salt
            new_suggester._user_id = self._config.user_id
            self._suggester = new_suggester
            self._model_dir = model_path
            self._model_schema_hash = new_suggester._predictor.metadata.schema_hash
            self._notify(f"Switched to model {model_path.name}")
            logger.info("Switched to model %s", model_path)
        except Exception as exc:
            self._notify(f"Switch failed: {exc}")
            logger.warning(
                "Model switch to %s failed: %s", model_path, exc, exc_info=True
            )

    def _unload_model(self, *_args: Any) -> None:
        """Unload the current model entirely."""
        self._suggester = None
        self._model_dir = None
        self._model_schema_hash = None
        self._suggested_label = None
        self._suggested_confidence = None
        self._notify("Model unloaded")
        logger.info("Model unloaded")

    def _show_status(self, *_args: Any) -> None:
        """Show a notification with connection and session status."""
        aw_status = (
            "connected" if self._monitor._bucket_id is not None else "disconnected"
        )
        paused = " (paused)" if self._monitor.is_paused else ""
        model_name = self._model_dir.name if self._model_dir else "none"

        parts = [
            f"AW: {aw_status}{paused}",
            f"Polls: {self._monitor._poll_count}",
            f"Transitions: {self._transition_count}",
            f"Labels: {self._labels_saved_count}",
            f"Model: {model_name}",
        ]
        self._notify(" | ".join(parts))

    def _open_dashboard(self, *_args: Any) -> None:
        if self._browser:
            import webbrowser

            ui_port = (
                _VITE_DEV_PORT
                if (
                    self._dev
                    and self._vite_proc is not None
                    and self._vite_proc.poll() is None
                )
                else self._ui_port
            )
            webbrowser.open(f"http://127.0.0.1:{ui_port}")
            return

        if self._ui_proc is not None and self._ui_proc.poll() is None:
            logger.debug("Sending toggle to UI process (pid=%s)", self._ui_proc.pid)
            try:
                self._ui_proc.stdin.write(b"toggle\n")
                self._ui_proc.stdin.flush()
            except (BrokenPipeError, OSError):
                logger.debug("Could not send toggle to UI process", exc_info=True)
            return

        logger.debug("No running UI process — spawning new window")
        self._spawn_window()

    def _quit(self, *_args: Any) -> None:
        self._monitor.stop()
        self._cleanup_ui()
        if self._icon is not None:
            self._icon.stop()

    def _notify(self, message: str) -> None:
        _send_desktop_notification("taskclf", message, timeout=5)

    def _notify_with_reveal(self, message: str, path: Path) -> None:
        """Show a notification with an option to reveal *path* in the file manager.

        On macOS an AppleScript dialog with a "Show in Finder" button is
        displayed (auto-dismisses after 10 s).  On other platforms the
        containing folder is opened automatically alongside the notification.
        """
        folder = path.parent if path.is_file() else path
        if platform.system() == "Darwin":
            safe_msg = (
                message.replace("\\", "\\\\").replace('"', '\\"').replace("\n", " ")
            )
            safe_folder = str(folder).replace("\\", "\\\\").replace('"', '\\"')
            script = (
                f'set theResult to display dialog "{safe_msg}" '
                f'buttons {{"OK", "Show in Finder"}} default button "OK" '
                f"giving up after 10\n"
                f'if button returned of theResult is "Show in Finder" then\n'
                f'    do shell script "open \\"{safe_folder}\\""\n'
                f"end if"
            )
            try:
                subprocess.run(
                    ["osascript", "-e", script],
                    capture_output=True,
                    timeout=15,
                    check=False,
                )
                return
            except Exception:
                logger.debug("osascript dialog failed, falling back", exc_info=True)

        self._notify(message)
        self._reveal_in_file_manager(folder)

    def _reveal_in_file_manager(self, path: Path) -> None:
        """Open *path* in the platform file manager."""
        system = platform.system()
        try:
            if system == "Darwin":
                subprocess.Popen(["open", str(path)])
            else:
                subprocess.Popen(["xdg-open", str(path)])
        except Exception:
            logger.debug("Could not open folder", exc_info=True)

    _MAX_ISSUE_URL_LEN = 8000

    def _build_report_issue_url(self) -> str:
        """Build a GitHub new-issue URL pre-filled with diagnostics and logs.

        Automatically runs the equivalent of ``taskclf diagnostics`` and
        reads the sanitized log tail from the user's data directory so
        the bug report template fields are pre-populated.
        """
        from urllib.parse import urlencode

        from taskclf.core.crash import _read_log_tail
        from taskclf.core.diagnostics import (
            collect_diagnostics,
            format_diagnostics_text,
        )
        from taskclf.core.paths import taskclf_home

        home = taskclf_home()
        models_dir = str(self._models_dir) if self._models_dir else str(home / "models")

        try:
            info = collect_diagnostics(
                aw_host=self._config.as_dict().get("aw_host", DEFAULT_AW_HOST),
                data_dir=str(self._data_dir),
                models_dir=models_dir,
                include_logs=False,
            )
            diagnostics_text = format_diagnostics_text(info)
        except Exception:
            logger.debug("Failed to collect diagnostics", exc_info=True)
            diagnostics_text = "<unable to collect diagnostics>"

        try:
            log_tail = _read_log_tail(home / "logs", 30)
            logs_text = "\n".join(log_tail) if log_tail else ""
        except Exception:
            logger.debug("Failed to read log tail", exc_info=True)
            logs_text = ""

        params: dict[str, str] = {
            "template": "bug_report.yml",
            "title": "[Bug]: ",
            "diagnostics": diagnostics_text,
        }
        if logs_text:
            params["logs"] = logs_text

        base = "https://github.com/fruitiecutiepie/taskclf/issues/new"
        url = f"{base}?{urlencode(params)}"

        if len(url) > self._MAX_ISSUE_URL_LEN:
            params.pop("logs", None)
            url = f"{base}?{urlencode(params)}"

        return url

    def _report_issue(self, *_args: Any) -> None:
        """Open the GitHub issue tracker in the default browser."""
        import webbrowser

        url = self._build_report_issue_url()
        webbrowser.open(url)

    def _start_server(self) -> int:
        """Start FastAPI + uvicorn in-process, sharing the tray's EventBus.

        Both ``--browser`` and native-window modes call this so that tray
        events (status, tray_state, suggest_label, prediction) are always
        visible to WebSocket clients.

        Returns:
            The effective UI port (may differ from ``self._ui_port`` when
            ``--dev`` starts a Vite dev server on ``_VITE_DEV_PORT``).
        """
        import os

        import uvicorn

        from taskclf.ui.server import create_app

        tray_actions: dict[str, Callable[..., Any]] = {
            "open_dashboard": self._open_dashboard,
            "pause_toggle": self._on_pause_menu,
            "label_stats": self._label_stats,
            "import_labels": self._import_labels,
            "export_labels": self._export_labels,
            "switch_model": self._switch_model,
            "unload_model": self._unload_model,
            "reload_model": self._reload_model,
            "check_retrain": self._check_retrain,
            "show_status": self._show_status,
            "open_data_dir": self._open_data_dir,
            "edit_config": self._edit_config,
            "edit_inference_policy": self._edit_inference_policy,
            "report_issue": self._report_issue,
            "quit": self._quit,
        }

        def get_tray_state() -> dict[str, Any]:
            return {
                "paused": self._monitor.is_paused,
                "model_dir": str(self._model_dir.resolve())
                if self._model_dir
                else None,
                "models_dir": str(self._models_dir.resolve())
                if self._models_dir
                else None,
            }

        fastapi_app = create_app(
            data_dir=self._data_dir,
            models_dir=self._models_dir,
            aw_host=self._aw_host,
            title_salt=self._title_salt,
            event_bus=self._event_bus,
            on_label_saved=self._on_label_saved,
            on_model_trained=self._on_model_trained,
            on_suggestion_accepted=self._on_suggestion_accepted,
            pause_toggle=self._toggle_pause,
            is_paused=lambda: self._monitor.is_paused,
            tray_actions=tray_actions,
            get_tray_state=get_tray_state,
            get_activity_provider_status=lambda: self._monitor.activity_provider_status,
        )

        uvicorn_config = uvicorn.Config(
            fastapi_app,
            host="127.0.0.1",
            port=self._ui_port,
            log_level="warning",
            ws_ping_interval=30,
            ws_ping_timeout=30,
        )
        server = uvicorn.Server(uvicorn_config)
        server_thread = threading.Thread(target=server.run, daemon=True)
        server_thread.start()
        self._ui_server_running = True

        print(f"taskclf API on http://127.0.0.1:{self._ui_port}", flush=True)

        ui_port = self._ui_port

        if self._dev:
            import taskclf.ui.server as _ui_srv

            frontend_dir = Path(_ui_srv.__file__).resolve().parent / "frontend"
            if not frontend_dir.is_dir():
                print(
                    "Warning: frontend source not found (--dev requires a repo checkout)"
                )
                return ui_port

            if not (frontend_dir / "node_modules").is_dir():
                print("Installing frontend dependencies…")
                subprocess.run(["pnpm", "install"], cwd=frontend_dir, check=True)

            vite_env = {
                **os.environ,
                "TASKCLF_PORT": str(self._ui_port),
                "TASKCLF_DEV_PORT": str(_VITE_DEV_PORT),
            }
            self._vite_proc = subprocess.Popen(
                ["pnpm", "run", "dev"],
                cwd=frontend_dir,
                env=vite_env,
            )
            ui_port = _VITE_DEV_PORT
            print(f"Vite dev server → http://127.0.0.1:{ui_port} (hot reload)")

            import urllib.request

            for _attempt in range(30):
                try:
                    urllib.request.urlopen(f"http://127.0.0.1:{ui_port}", timeout=1)
                    break
                except Exception:
                    if self._vite_proc.poll() is not None:
                        print("Warning: Vite dev server exited unexpectedly")
                        return ui_port
                    time.sleep(0.5)
            else:
                print("Warning: Vite dev server not responding, opening anyway")

        return ui_port

    def _start_ui_subprocess(self) -> None:
        """Run FastAPI in-process and spawn a pywebview window subprocess.

        The server runs in-process so the tray's ``EventBus`` is shared
        with WebSocket clients.  Only the native window shell runs in a
        child process (no duplicate ``ActivityMonitor`` or ``EventBus``).
        """
        self._start_server()
        self._spawn_window()

    def _spawn_window(self) -> None:
        """Spawn a pywebview-only subprocess pointing at the in-process server."""
        import sys

        from taskclf.ui import window_run as _window_module

        try:
            cmd = [
                sys.executable,
                "-m",
                _window_module.__name__,
                "--port",
                str(self._ui_port),
            ]
            self._ui_proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
            mode = " (dev)" if self._dev else ""
            print(
                f"UI window launched{mode} (pid={self._ui_proc.pid}, port={self._ui_port})"
            )
        except Exception:
            logger.warning("Could not start UI window subprocess", exc_info=True)
            print(
                f"Warning: UI window failed to start. Dashboard at http://127.0.0.1:{self._ui_port}"
            )

    def _start_ui_embedded(self) -> None:
        """Run FastAPI in-process and optionally open the dashboard in a browser."""
        ui_port = self._start_server()
        mode = " (dev)" if self._dev else ""
        if self._open_browser:
            import webbrowser

            webbrowser.open(f"http://127.0.0.1:{ui_port}")
            print(f"UI opened in browser{mode} (port={ui_port})")
            return

        print(f"UI server ready{mode} (port={ui_port})")

    def _cleanup_ui(self) -> None:
        """Terminate UI and Vite subprocesses if still running."""
        if self._ui_proc is not None and self._ui_proc.poll() is None:
            self._ui_proc.terminate()
            try:
                self._ui_proc.wait(timeout=5)
            except Exception:
                logger.debug(
                    "UI process did not exit gracefully, killing", exc_info=True
                )
                self._ui_proc.kill()
        if self._vite_proc is not None and self._vite_proc.poll() is None:
            self._vite_proc.terminate()
            try:
                self._vite_proc.wait(timeout=5)
            except Exception:
                logger.debug(
                    "Vite process did not exit gracefully, killing", exc_info=True
                )
                self._vite_proc.kill()

    def run(self) -> None:
        """Start the tray icon and background monitor. Blocks until quit."""
        try:
            self._run_inner()
        except (SystemExit, KeyboardInterrupt):
            raise
        except Exception as exc:
            from taskclf.core.crash import write_crash_report

            try:
                path = write_crash_report(exc)
                _send_desktop_notification(
                    "taskclf crashed",
                    f"Details saved to {path}",
                    timeout=10,
                )
            except Exception:
                logger.debug("Could not write crash report", exc_info=True)
            raise

    def _run_inner(self) -> None:
        """Actual run logic, separated so ``run()`` can wrap it."""
        import atexit

        from taskclf.core.logging import setup_file_logging

        setup_file_logging()

        if self._browser:
            self._start_ui_embedded()
        else:
            self._start_ui_subprocess()
        atexit.register(self._cleanup_ui)
        self._start_initial_model_load()

        monitor_thread = threading.Thread(
            target=self._monitor.run,
            daemon=True,
        )
        monitor_thread.start()

        if self._suggester is not None:
            mode = "with model suggestions"
        elif self._model_configured():
            mode = "loading model suggestions"
        else:
            mode = "label-only (no model)"

        if self._no_tray:
            # Duplicate to stderr so headless / frozen sidecars still show lines if
            # stdout is not attached (Electron spawn, some PyInstaller configs).
            for line in (
                f"taskclf running ({mode}), no tray icon.",
                f"UI available at http://127.0.0.1:{self._ui_port}",
                "Press Ctrl+C to quit.",
            ):
                print(line)
                try:
                    print(line, file=sys.stderr, flush=True)
                except Exception:
                    pass
            # threading.Event.wait() can return spuriously on some platforms; keep
            # the Electron sidecar alive until interrupt or process exit.
            try:
                while True:
                    time.sleep(86400.0)
            except KeyboardInterrupt:
                pass
            finally:
                self._monitor.stop()
                self._cleanup_ui()
            return

        import pystray

        icon_image = _make_icon_image()
        self._icon = pystray.Icon(
            "taskclf",
            icon_image,
            "taskclf",
            menu=pystray.Menu(self._build_menu_items),
        )

        print(f"taskclf tray started ({mode})")
        print(
            "Click the tray icon to open the dashboard. Press Ctrl+C or Quit to exit."
        )

        try:
            self._icon.run()
        finally:
            self._cleanup_ui()

run()

Start the tray icon and background monitor. Blocks until quit.

Source code in src/taskclf/ui/tray.py
def run(self) -> None:
    """Start the tray icon and background monitor. Blocks until quit."""
    try:
        self._run_inner()
    except (SystemExit, KeyboardInterrupt):
        raise
    except Exception as exc:
        from taskclf.core.crash import write_crash_report

        try:
            path = write_crash_report(exc)
            _send_desktop_notification(
                "taskclf crashed",
                f"Details saved to {path}",
                timeout=10,
            )
        except Exception:
            logger.debug("Could not write crash report", exc_info=True)
        raise

run_tray(*, model_dir=None, models_dir=None, aw_host=DEFAULT_AW_HOST, poll_seconds=DEFAULT_POLL_SECONDS, aw_timeout_seconds=DEFAULT_AW_TIMEOUT_SECONDS, title_salt=DEFAULT_TITLE_SALT, data_dir=Path(DEFAULT_DATA_DIR), transition_minutes=DEFAULT_TRANSITION_MINUTES, event_bus=None, ui_port=8741, dev=False, browser=False, no_tray=False, open_browser=True, username=None, notifications_enabled=True, privacy_notifications=True, retrain_config=None)

Launch the system tray labeling app.

Always starts the FastAPI server in-process so the tray's EventBus is shared with WebSocket clients. In browser mode the dashboard opens in the default browser; otherwise a lightweight pywebview subprocess provides the native floating window.

Parameters:

Name Type Description Default
model_dir Path | None

Optional path to a trained model bundle. When provided, the tray suggests labels on activity transitions.

None
models_dir Path | None

Optional path to the directory containing all model bundles. Enables the "Prediction Model" submenu for hot-swapping.

None
aw_host str

ActivityWatch server URL.

DEFAULT_AW_HOST
poll_seconds int

Seconds between AW polling cycles.

DEFAULT_POLL_SECONDS
aw_timeout_seconds int

Seconds to wait for AW API responses.

DEFAULT_AW_TIMEOUT_SECONDS
title_salt str

Salt for hashing window titles.

DEFAULT_TITLE_SALT
data_dir Path

Processed data directory (labels stored here).

Path(DEFAULT_DATA_DIR)
transition_minutes int

Minutes a new dominant app must persist before a transition notification fires.

DEFAULT_TRANSITION_MINUTES
event_bus EventBus | None

Optional shared event bus for broadcasting events to connected WebSocket clients.

None
ui_port int

Port for the embedded web UI server.

8741
dev bool

When True, the spawned UI subprocess starts a Vite dev server for frontend hot reload.

False
browser bool

When True, the spawned UI subprocess opens in the default browser instead of a native window.

False
no_tray bool

When True, skip the native tray icon entirely. The main thread blocks until interrupted. Useful with --browser for a fully browser-based workflow.

False
open_browser bool

When True, browser mode launches the default browser automatically. Set to False when another host shell (for example Electron) will render the web UI.

True
username str | None

Display name to persist in config.json. Does not affect label identity (labels use the stable auto-generated UUID user_id).

None
notifications_enabled bool

When False, desktop notifications are suppressed entirely.

True
privacy_notifications bool

When True (the default), app names are redacted from desktop notifications to protect privacy. Set to False to show raw app identifiers.

True
retrain_config Path | None

Optional path to a retrain YAML config. Enables the "Retrain Status" item in the Prediction Model submenu.

None
Source code in src/taskclf/ui/tray.py
def run_tray(
    *,
    model_dir: Path | None = None,
    models_dir: Path | None = None,
    aw_host: str = DEFAULT_AW_HOST,
    poll_seconds: int = DEFAULT_POLL_SECONDS,
    aw_timeout_seconds: int = DEFAULT_AW_TIMEOUT_SECONDS,
    title_salt: str = DEFAULT_TITLE_SALT,
    data_dir: Path = Path(DEFAULT_DATA_DIR),
    transition_minutes: int = DEFAULT_TRANSITION_MINUTES,
    event_bus: EventBus | None = None,
    ui_port: int = 8741,
    dev: bool = False,
    browser: bool = False,
    no_tray: bool = False,
    open_browser: bool = True,
    username: str | None = None,
    notifications_enabled: bool = True,
    privacy_notifications: bool = True,
    retrain_config: Path | None = None,
) -> None:
    """Launch the system tray labeling app.

    Always starts the FastAPI server in-process so the tray's
    ``EventBus`` is shared with WebSocket clients.  In browser mode
    the dashboard opens in the default browser; otherwise a lightweight
    pywebview subprocess provides the native floating window.

    Args:
        model_dir: Optional path to a trained model bundle.  When
            provided, the tray suggests labels on activity transitions.
        models_dir: Optional path to the directory containing all model
            bundles.  Enables the "Prediction Model" submenu for hot-swapping.
        aw_host: ActivityWatch server URL.
        poll_seconds: Seconds between AW polling cycles.
        aw_timeout_seconds: Seconds to wait for AW API responses.
        title_salt: Salt for hashing window titles.
        data_dir: Processed data directory (labels stored here).
        transition_minutes: Minutes a new dominant app must persist
            before a transition notification fires.
        event_bus: Optional shared event bus for broadcasting events
            to connected WebSocket clients.
        ui_port: Port for the embedded web UI server.
        dev: When ``True``, the spawned UI subprocess starts a Vite
            dev server for frontend hot reload.
        browser: When ``True``, the spawned UI subprocess opens in the
            default browser instead of a native window.
        no_tray: When ``True``, skip the native tray icon entirely.
            The main thread blocks until interrupted.  Useful with
            ``--browser`` for a fully browser-based workflow.
        open_browser: When ``True``, browser mode launches the default
            browser automatically.  Set to ``False`` when another host
            shell (for example Electron) will render the web UI.
        username: Display name to persist in ``config.json``.  Does not
            affect label identity (labels use the stable auto-generated
            UUID ``user_id``).
        notifications_enabled: When ``False``, desktop notifications
            are suppressed entirely.
        privacy_notifications: When ``True`` (the default), app names
            are redacted from desktop notifications to protect privacy.
            Set to ``False`` to show raw app identifiers.
        retrain_config: Optional path to a retrain YAML config.
            Enables the "Retrain Status" item in the Prediction Model submenu.
    """
    tray = TrayLabeler(
        data_dir=data_dir,
        model_dir=model_dir,
        models_dir=models_dir,
        aw_host=aw_host,
        title_salt=title_salt,
        poll_seconds=poll_seconds,
        aw_timeout_seconds=aw_timeout_seconds,
        transition_minutes=transition_minutes,
        event_bus=event_bus,
        ui_port=ui_port,
        dev=dev,
        browser=browser,
        no_tray=no_tray,
        open_browser=open_browser,
        username=username,
        notifications_enabled=notifications_enabled,
        privacy_notifications=privacy_notifications,
        retrain_config=retrain_config,
    )
    tray.run()
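A minimal programmatic launch sketch, assuming `taskclf` is importable. The paths and port are illustrative, and since `run_tray()` blocks until quit, the call itself is shown commented out:

```python
from pathlib import Path

# Illustrative launch configuration; all paths are examples, not defaults
# required by run_tray().
config = dict(
    model_dir=Path("models/run_20260226"),  # bundle for live predictions
    models_dir=Path("models"),              # enables the model submenu
    data_dir=Path("data/processed"),        # labels are stored here
    ui_port=8741,
    browser=True,   # dashboard in the default browser instead of pywebview
    no_tray=True,   # headless: skip the tray icon, Ctrl+C to quit
)

# Blocks until quit, so it is left commented here:
# from taskclf.ui.tray import run_tray
# run_tray(**config)
```

With `no_tray=True` and `browser=True` this mirrors the fully browser-based workflow the `no_tray` parameter documents above.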