Compare commits

22 commits: 03970bf17f..master

7fd98ffcb9, cfac95ca03, 10bfb6090c, c8ccf31583, 6e971941f8, 794f42b42b,
b18d860aca, 1771b43bba, 0485410285, aba77a23cb, c505e117d1, 1b48386132,
0f1f5b418a, 60732f2986, 633983ed98, 1c160d5ccb, b7e539de3b, e6997a76c4,
6d9cfaf8be, 66a32c751a, d2a3e64b89, e6b8583868
@@ -21,7 +21,7 @@ highest-confidence result wins.
 1. **Local** — sidecar `.lrc` files or embedded audio metadata (FLAC, MP3)
 2. **Cache Search** — fuzzy cross-album lookup in local cache
 3. **Spotify** — synced lyrics via Spotify's API
-   (requires `SPOTIFY_SP_DC` and Spotify trackid)
+   (requires `credentials.spotify_sp_dc` and Spotify trackid)
 4. **LRCLIB** — exact match from [lrclib.net](https://lrclib.net)
    (requires full metadata)
 5. **Musixmatch (Spotify)** — Musixmatch API with Spotify trackid
@@ -30,7 +30,7 @@ highest-confidence result wins.
 7. **Musixmatch** — Musixmatch API with metadata search (requires at least a title)
 8. **Netease** — Netease Cloud Music public API
 9. **QQ Music** — QQ Music via self-hosted API proxy
-   (requires `QQ_MUSIC_API_URL` that provides the same interface as [tooplick/qq-music-api](https://github.com/tooplick/qq-music-api))
+   (requires `credentials.qq_music_api_url`; compatible with [tooplick/qq-music-api](https://github.com/tooplick/qq-music-api))

 > I'm aware that Spotify's lyrics are provided by Musixmatch, but the fact is
 > that Musixmatch's own search will yield different (and more) results than
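The priority chain above can be sketched as a highest-confidence fold over the source list. This is an illustrative model only; `Result`, `resolve`, and the confidence numbers are hypothetical, not the actual lrx-cli API:

```python
# Hypothetical sketch of the source-priority resolution described above.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Result:
    source: str
    confidence: int  # higher wins
    lyrics: str


def resolve(sources: list[tuple[str, Callable[[], Optional[Result]]]]) -> Optional[Result]:
    """Try each source in priority order; keep the highest-confidence hit."""
    best: Optional[Result] = None
    for name, fetch in sources:
        result = fetch()
        if result and (best is None or result.confidence > best.confidence):
            best = result
    return best


# Example: a stub chain where the cache outranks a weaker remote match.
chain = [
    ("local", lambda: None),  # no sidecar .lrc found
    ("cache", lambda: Result("cache", 90, "[00:01.00] ...")),
    ("lrclib", lambda: Result("lrclib", 70, "[00:01.20] ...")),
]
print(resolve(chain).source)  # → cache
```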
@@ -46,7 +46,7 @@ See `lrx --help` for full command reference. Common use cases:
 lrx fetch
 ```

-targeting a specific player and a source to fetch from:
+targeting a specific player and source:

 ```bash
 lrx fetch --player mpd --method lrclib-search
@@ -73,6 +73,21 @@ See `lrx --help` for full command reference. Common use cases:
 lrx export --output /path/to/lyrics.lrc
 ```

+- Watch active player and stream lyrics continuously to stdout:
+
+  ```bash
+  lrx watch pipe
+  lrx watch pipe --before 1 --after 2  # show context lines
+  ```
+
+  Control a running watch session:
+
+  ```bash
+  lrx watch ctl status       # print session status as JSON
+  lrx watch ctl offset +200  # shift lyrics forward 200 ms
+  lrx watch ctl offset -150
+  ```
+
 - Cache management:

 ```bash
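Conceptually, `lrx watch ctl offset` shifts every LRC timestamp by a signed millisecond amount. A minimal sketch of that operation (hypothetical helper, not lrx-cli code; the tool's sign convention may differ):

```python
# Shift all [mm:ss.cc] timestamps in an LRC string by offset_ms milliseconds,
# clamping at zero so timestamps never go negative.
import re

LINE = re.compile(r"\[(\d+):(\d+)\.(\d+)\]")


def shift_lrc(text: str, offset_ms: int) -> str:
    def repl(m: re.Match) -> str:
        minutes, seconds, centis = int(m[1]), int(m[2]), int(m[3])
        total = max(0, minutes * 60_000 + seconds * 1000 + centis * 10 + offset_ms)
        return f"[{total // 60_000:02d}:{total % 60_000 // 1000:02d}.{total % 1000 // 10:02d}]"

    return LINE.sub(repl, text)


print(shift_lrc("[00:12.50] hello", 200))  # → [00:12.70] hello
```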
@@ -83,47 +98,54 @@ See `lrx --help` for full command reference. Common use cases:
 lrx cache confidence spotify 100 # manually set confidence for a source
 ```

-## Configuration
-
-Set credentials via environment variable or `.env` file:
-
-- `~/.config/lrx/.env` — user-level
-- `.env` in working directory — project-local
-- Shell environment — highest priority
-
-```env
-SPOTIFY_SP_DC=your_cookie_value
-MUSIXMATCH_USERTOKEN=your_musixmatch_usertoken
-QQ_MUSIC_API_URL=https://api.example.com
-PREFERRED_PLAYER=spotify
-```
-
-- `SPOTIFY_SP_DC` — required for Spotify source. Defaults to empty
-  (disabled Spotify source).
-- `MUSIXMATCH_USERTOKEN` — optional for Musixmatch sources
-  ([Curators Settings Page](https://curators.musixmatch.com/settings)
-  -> Login (if required)
-  -> "Copy debug info").
-  If not set, an anonymous token will be fetched at runtime.
-- `QQ_MUSIC_API_URL` — required for QQ Music source. Defaults to empty
-  (disabled QQ Music source).
-- `PREFERRED_PLAYER` — preferred MPRIS player when multiple are active.
-  Defaults to `spotify`. Only used when no `--player` flag is given and more
-  than one player (or none of them) is currently playing.
-
 Shell completion (zsh/fish/bash):

 ```bash
 lrx --install-completion
 ```

+## Configuration
+
+Configuration is read from `~/.config/lrx-cli/config.toml`. The file is
+optional; all values have defaults. Unknown keys are rejected with an error.
+
+```toml
+[general]
+preferred_player = "" # preferred MPRIS player when multiple are active
+player_blacklist = ["firefox", "zen", "chrome", "chromium", "vivaldi", "edge", "opera", "mpv"] # bypassed by --player/-p
+http_timeout = 10.0 # seconds
+
+[credentials]
+spotify_sp_dc = "" # required for Spotify source
+musixmatch_usertoken = "" # optional; anonymous token fetched if empty
+qq_music_api_url = "" # required for QQ Music source
+
+[watch]
+debounce_ms = 400 # ms to wait after a track change before fetching
+calibration_interval_s = 3.0 # seconds between full MPRIS position recalibrations
+position_tick_ms = 50 # ms between local position ticks
+socket_path = "" # Unix socket path; defaults to <cache_dir>/watch.sock
+```
+
+**Credentials:**
+
+- `spotify_sp_dc` — `SP_DC` cookie from a logged-in Spotify web session. Required
+  for the Spotify source; leave empty to disable it.
+- `musixmatch_usertoken` — found at
+  [Curators Settings Page](https://curators.musixmatch.com/settings) → Login → "Copy debug info".
+  If empty, an anonymous token will be fetched at runtime, which is more likely
+  to hit rate limits.
+- `qq_music_api_url` — base URL of a self-hosted
+  [qq-music-api](https://github.com/tooplick/qq-music-api) (compatible) instance. Required
+  for the QQ Music source; leave empty to disable it.
+
 ## Development

 Clone this repository:

 ```bash
-git clone https://github.com/Uyanide/LRX-CLI.git
-cd LRX-CLI
+git clone https://github.com/Uyanide/lrx-cli.git
+cd lrx-cli
 ```

 Create a virtual environment and install dependencies (for example, using uv):
@@ -133,16 +155,25 @@ uv venv .venv
 uv sync
 ```

-Run tests without network calls
+Run tests (without network access):

 ```bash
-uv run pytest -m "not network"
+uv run poe test
 ```

-or full tests:
+Run tests including **REAL EXTERNAL** API calls. Some of them are skipped if
+the required credentials are not configured as described [above](#configuration).
+This is useful for verifying that the lyric sources are still valid and working
+as expected:

 ```bash
-uv run pytest
+uv run poe test-api
+```
+
+Other unified tasks:
+
+```bash
+uv run poe fmt   # ruff format
+uv run poe lint  # ruff check + pyright
 ```

 Run the CLI:
@@ -0,0 +1,2 @@
+*
+!.gitignore
@@ -0,0 +1,343 @@

```python
from __future__ import annotations

import argparse
import asyncio
import json
import traceback
from dataclasses import asdict
from pathlib import Path
from typing import Any, Awaitable, Callable

import httpx

from lrx_cli.authenticators import create_authenticators
from lrx_cli.cache import CacheEngine
from lrx_cli.config import AppConfig, load_config
from lrx_cli.fetchers import (
    create_fetchers,
    LrclibFetcher,
    LrclibSearchFetcher,
    NeteaseFetcher,
    SpotifyFetcher,
    QQMusicFetcher,
    MusixmatchFetcher,
    MusixmatchSpotifyFetcher,
)
from lrx_cli.models import TrackMeta


SAMPLE_TRACK = TrackMeta(
    title="One Last Kiss",
    artist="Hikaru Utada",
    album="One Last Kiss",
    length=252026,
    trackid="5RhWszHMSKzb7KiXk4Ae0M",
    url="https://open.spotify.com/track/5RhWszHMSKzb7KiXk4Ae0M",
)


def _jsonable(value: Any) -> Any:
    if isinstance(value, (str, int, float, bool)) or value is None:
        return value
    if isinstance(value, dict):
        return {str(k): _jsonable(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [_jsonable(v) for v in value]
    if isinstance(value, bytes):
        try:
            return value.decode("utf-8")
        except Exception:
            return value.hex()
    if hasattr(value, "model_dump"):
        return _jsonable(value.model_dump())
    if hasattr(value, "__dict__"):
        return _jsonable(vars(value))
    return repr(value)


def _write_json(path: Path, payload: Any) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        json.dumps(_jsonable(payload), ensure_ascii=False, indent=2) + "\n",
        encoding="utf-8",
    )


def _clear_output_files(out_dir: Path) -> None:
    for pattern in ("*.json", "*.db"):
        for path in out_dir.glob(pattern):
            if path.is_file():
                path.unlink()


def _new_runtime(config: AppConfig, db_path: Path):
    cache = CacheEngine(str(db_path))
    authenticators = create_authenticators(cache, config)
    fetchers = create_fetchers(cache, authenticators, config)
    return fetchers, authenticators


async def _response_dump(resp: httpx.Response) -> dict[str, Any]:
    out: dict[str, Any] = {
        "status_code": resp.status_code,
        "headers": dict(resp.headers),
        "url": str(resp.request.url),
        "method": resp.request.method,
    }
    try:
        out["json"] = resp.json()
    except Exception:
        out["text"] = resp.text
    return out


def _decode_body(content: bytes) -> str:
    if not content:
        return ""
    try:
        return content.decode("utf-8")
    except Exception:
        return content.hex()


def _dump_request(req: httpx.Request) -> dict[str, Any]:
    query_params = {k: v for k, v in req.url.params.multi_items()}
    return {
        "method": req.method,
        "url": str(req.url),
        "headers": dict(req.headers),
        "query_params": query_params,
        "body": _decode_body(req.content),
    }


async def run_capture(out_dir: Path, timeout: float, strict: bool) -> int:
    out_dir.mkdir(parents=True, exist_ok=True)
    _clear_output_files(out_dir)

    # Use isolated cache DBs to avoid polluting normal runtime cache.
    anon_fetchers, _ = _new_runtime(AppConfig(), out_dir / ".capture-anon.db")
    cred_fetchers, _ = _new_runtime(load_config(), out_dir / ".capture-cred.db")

    calls: list[tuple[str, dict[str, Any], Callable[[], Awaitable[Any]]]] = []

    captured_requests: list[dict[str, Any]] = []
    original_send = httpx.AsyncClient.send

    async def _patched_send(
        self: httpx.AsyncClient,
        request: httpx.Request,
        *args: Any,
        **kwargs: Any,
    ) -> httpx.Response:
        captured_requests.append(_dump_request(request))
        return await original_send(self, request, *args, **kwargs)

    httpx.AsyncClient.send = _patched_send  # type: ignore[method-assign]

    async with httpx.AsyncClient(timeout=timeout) as client:
        # LRCLIB
        lrclib = anon_fetchers["lrclib"]
        assert isinstance(lrclib, LrclibFetcher)
        calls.append(
            (
                "lrclib_get",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: lrclib._api_get(client, SAMPLE_TRACK),
            )
        )

        lrclib_search = anon_fetchers["lrclib-search"]
        assert isinstance(lrclib_search, LrclibSearchFetcher)
        calls.append(
            (
                "lrclib_search_candidates",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: lrclib_search._api_candidates(client, SAMPLE_TRACK),
            )
        )

        # Netease
        netease = anon_fetchers["netease"]
        assert isinstance(netease, NeteaseFetcher)
        calls.append(
            (
                "netease_search_track",
                {"track": asdict(SAMPLE_TRACK), "limit": 5},
                lambda: netease._api_search_track(client, SAMPLE_TRACK, 5),
            )
        )
        calls.append(
            (
                "netease_lyric_track",
                {"track": asdict(SAMPLE_TRACK), "limit": 5},
                lambda: netease._api_lyric_track(client, SAMPLE_TRACK, 5),
            )
        )

        # Spotify (credentialed runtime)
        spotify = cred_fetchers["spotify"]
        assert isinstance(spotify, SpotifyFetcher)
        calls.append(
            (
                "spotify_lyrics",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: spotify._api_lyrics(SAMPLE_TRACK),
            )
        )

        # QQMusic (credentialed runtime)
        qq = cred_fetchers["qqmusic"]
        assert isinstance(qq, QQMusicFetcher)
        calls.append(
            (
                "qqmusic_search_track",
                {"track": asdict(SAMPLE_TRACK), "limit": 10},
                lambda: qq._api_search(SAMPLE_TRACK, 10),
            )
        )
        calls.append(
            (
                "qqmusic_lyric_track",
                {"track": asdict(SAMPLE_TRACK), "limit": 10},
                lambda: qq._api_lyric_track(SAMPLE_TRACK, 10),
            )
        )

        # Musixmatch anonymous
        mxm_anon = anon_fetchers["musixmatch"]
        mxm_sp_anon = anon_fetchers["musixmatch-spotify"]
        assert isinstance(mxm_anon, MusixmatchFetcher)
        assert isinstance(mxm_sp_anon, MusixmatchSpotifyFetcher)
        calls.append(
            (
                "musixmatch_anonymous_search_track",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: mxm_anon._api_search_track(SAMPLE_TRACK),
            )
        )
        calls.append(
            (
                "musixmatch_anonymous_macro_track",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: mxm_anon._api_macro_track(SAMPLE_TRACK),
            )
        )
        calls.append(
            (
                "musixmatch_spotify_anonymous_macro_track",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: mxm_sp_anon._api_macro_track(SAMPLE_TRACK),
            )
        )

        # Musixmatch credentialed (if token configured, this uses it)
        mxm_cred = cred_fetchers["musixmatch"]
        mxm_sp_cred = cred_fetchers["musixmatch-spotify"]
        assert isinstance(mxm_cred, MusixmatchFetcher)
        assert isinstance(mxm_sp_cred, MusixmatchSpotifyFetcher)
        calls.append(
            (
                "musixmatch_token_search_track",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: mxm_cred._api_search_track(SAMPLE_TRACK),
            )
        )
        calls.append(
            (
                "musixmatch_token_macro_track",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: mxm_cred._api_macro_track(SAMPLE_TRACK),
            )
        )
        calls.append(
            (
                "musixmatch_spotify_token_macro_track",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: mxm_sp_cred._api_macro_track(SAMPLE_TRACK),
            )
        )

        failures = 0
        try:
            for idx, (name, request_payload, fn) in enumerate(calls, start=1):
                stem = f"{idx:03d}_{name}"
                req_path = out_dir / f"{stem}.request.json"
                resp_path = out_dir / f"{stem}.response.json"

                captured_requests.clear()

                try:
                    result = await fn()
                    if isinstance(result, httpx.Response):
                        payload = await _response_dump(result)
                    else:
                        payload = _jsonable(result)
                    _write_json(
                        req_path,
                        {
                            "call": name,
                            "input": request_payload,
                            "http_requests": _jsonable(captured_requests),
                        },
                    )
                    _write_json(resp_path, {"ok": True, "response": payload})
                except Exception as exc:
                    failures += 1
                    _write_json(
                        req_path,
                        {
                            "call": name,
                            "input": request_payload,
                            "http_requests": _jsonable(captured_requests),
                        },
                    )
                    _write_json(
                        resp_path,
                        {
                            "ok": False,
                            "error": str(exc),
                            "traceback": traceback.format_exc(),
                        },
                    )
                    if strict:
                        break
        finally:
            httpx.AsyncClient.send = original_send  # type: ignore[method-assign]

    return failures


def main() -> int:
    parser = argparse.ArgumentParser(
        description=(
            "Call external provider APIs with sample data and save request/response "
            "pairs for API reference."
        )
    )
    parser.add_argument(
        "--out-dir",
        type=Path,
        default=Path("misc/api_ref"),
        help="Output directory for request/response files.",
    )
    parser.add_argument(
        "--timeout",
        type=float,
        default=20.0,
        help="HTTP timeout in seconds.",
    )
    parser.add_argument(
        "--strict",
        action="store_true",
        help="Stop on first failed call.",
    )
    args = parser.parse_args()

    failures = asyncio.run(run_capture(args.out_dir, args.timeout, args.strict))
    print(f"capture finished: failures={failures}, out_dir={args.out_dir}")
    return 1 if (args.strict and failures > 0) else 0


if __name__ == "__main__":
    raise SystemExit(main())
```
+19 -4

@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

 [project]
 name = "lrx-cli"
-version = "0.6.4"
+version = "0.7.9"
 description = "Fetch line-synced lyrics for your music player."
 readme = "README.md"
 requires-python = ">=3.13"
@@ -14,8 +14,7 @@ dependencies = [
     "httpx>=0.28.1",
     "loguru>=0.7.3",
     "mutagen>=1.47.0",
-    "platformdirs>=4.9.4",
-    "python-dotenv>=1.2.2",
+    "platformdirs>=4.9.6",
 ]

 [project.scripts]
@@ -25,4 +24,20 @@ lrx = "lrx_cli.cli:run"
 ignore = ["E402"] # Since there are headers

 [dependency-groups]
-dev = ["pytest>=9.0.2", "ruff>=0.15.8"]
+dev = [
+    "poethepoet>=0.44.0",
+    "pyright>=1.1.406",
+    "pytest>=9.0.2",
+    "ruff>=0.15.8",
+]
+
+[tool.poe.tasks]
+fmt = "ruff format ."
+lint = { shell = "ruff check . && pyright" }
+test = "pytest"
+test-api = "pytest -m 'network or not network'"
+
+[tool.pyright]
+pythonVersion = "3.13"
+include = ["src", "tests", "misc"]
+typeCheckingMode = "standard"
@@ -1,2 +1,3 @@
 [pytest]
+addopts = -m "not network"
 markers = network: marks tests that require real network access to external APIs
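With `addopts = -m "not network"`, tests carrying the `network` marker are deselected by default and run only when the marker expression is overridden, as the `test-api` task's `-m 'network or not network'` does. A marked test looks like this (illustrative test bodies, not from the repository):

```python
# Tests tagged with the custom `network` marker from pytest.ini are skipped
# by the default addopts and selected again by `poe test-api`.
import pytest


@pytest.mark.network
def test_talks_to_real_api():
    ...  # real httpx calls against an external lyrics API go here


def test_pure_logic():
    # Always runs: no marker, so `-m "not network"` keeps it selected.
    assert 1 + 1 == 2
```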
+80 -35

@@ -21,9 +21,9 @@ colorama==0.4.6 ; sys_platform == 'win32' \
     # via
     #   loguru
     #   pytest
-cyclopts==4.10.1 \
-    --hash=sha256:35f37257139380a386d9fe4475e1e7c87ca7795765ef4f31abba579fcfcb6ecd \
-    --hash=sha256:ad4e4bb90576412d32276b14a76f55d43353753d16217f2c3cd5bdceba7f15a0
+cyclopts==4.10.2 \
+    --hash=sha256:a1f2d6f8f7afac9456b48f75a40b36658778ddc9c6d406b520d017ae32c990fe \
+    --hash=sha256:d7b950457ef2563596d56331f80cbbbf86a2772535fb8b315c4f03bc7e6127f1
     # via lrx-cli
 dbus-next==0.2.3 \
     --hash=sha256:58948f9aff9db08316734c0be2a120f6dc502124d9642f55e90ac82ffb16a18b \
@@ -75,31 +75,72 @@ mutagen==1.47.0 \
     --hash=sha256:719fadef0a978c31b4cf3c956261b3c58b6948b32023078a2117b1de09f0fc99 \
     --hash=sha256:edd96f50c5907a9539d8e5bba7245f62c9f520aef333d13392a79a4f70aca719
     # via lrx-cli
+nodeenv==1.10.0 \
+    --hash=sha256:5bb13e3eed2923615535339b3c620e76779af4cb4c6a90deccc9e36b274d3827 \
+    --hash=sha256:996c191ad80897d076bdfba80a41994c2b47c68e224c542b48feba42ba00f8bb
+    # via pyright
 packaging==26.0 \
     --hash=sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4 \
     --hash=sha256:b36f1fef9334a5588b4166f8bcd26a14e521f2b55e6b9de3aaa80d3ff7a37529
     # via pytest
-platformdirs==4.9.4 \
-    --hash=sha256:1ec356301b7dc906d83f371c8f487070e99d3ccf9e501686456394622a01a934 \
-    --hash=sha256:68a9a4619a666ea6439f2ff250c12a853cd1cbd5158d258bd824a7df6be2f868
+pastel==0.2.1 \
+    --hash=sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364 \
+    --hash=sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d
+    # via poethepoet
+platformdirs==4.9.6 \
+    --hash=sha256:3bfa75b0ad0db84096ae777218481852c0ebc6c727b3168c1b9e0118e458cf0a \
+    --hash=sha256:e61adb1d5e5cb3441b4b7710bea7e4c12250ca49439228cc1021c00dcfac0917
     # via lrx-cli
 pluggy==1.6.0 \
     --hash=sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3 \
     --hash=sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746
     # via pytest
-pygments==2.19.2 \
-    --hash=sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887 \
-    --hash=sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b
+poethepoet==0.44.0 \
+    --hash=sha256:36d3d834708ed069ac1e4f8ed77915c55265b7b6e01aeb2fe617c9fe9cfd524a \
+    --hash=sha256:c2667b513621788fb46482e371cdf81c0b04344e0e0bcb7aa8af45f84c2fce7b
+pygments==2.20.0 \
+    --hash=sha256:6757cd03768053ff99f3039c1a36d6c0aa0b263438fcab17520b30a303a82b5f \
+    --hash=sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176
     # via
     #   pytest
     #   rich
-pytest==9.0.2 \
-    --hash=sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b \
-    --hash=sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11
-python-dotenv==1.2.2 \
-    --hash=sha256:1d8214789a24de455a8b8bd8ae6fe3c6b69a5e3d64aa8a8e5d68e694bbcb285a \
-    --hash=sha256:2c371a91fbd7ba082c2c1dc1f8bf89ca22564a087c2c287cd9b662adde799cf3
-    # via lrx-cli
+pyright==1.1.408 \
+    --hash=sha256:090b32865f4fdb1e0e6cd82bf5618480d48eecd2eb2e70f960982a3d9a4c17c1 \
+    --hash=sha256:f28f2321f96852fa50b5829ea492f6adb0e6954568d1caa3f3af3a5f555eb684
+pytest==9.0.3 \
+    --hash=sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9 \
+    --hash=sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c
+pyyaml==6.0.3 \
+    --hash=sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c \
+    --hash=sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3 \
+    --hash=sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6 \
+    --hash=sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65 \
+    --hash=sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1 \
+    --hash=sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310 \
+    --hash=sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac \
+    --hash=sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9 \
+    --hash=sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7 \
+    --hash=sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35 \
+    --hash=sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb \
+    --hash=sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065 \
+    --hash=sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c \
+    --hash=sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c \
+    --hash=sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764 \
+    --hash=sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac \
+    --hash=sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8 \
+    --hash=sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3 \
+    --hash=sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5 \
+    --hash=sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702 \
+    --hash=sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788 \
+    --hash=sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba \
+    --hash=sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5 \
+    --hash=sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26 \
+    --hash=sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f \
+    --hash=sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b \
+    --hash=sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be \
+    --hash=sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c \
+    --hash=sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6
+    # via poethepoet
 rich==14.3.3 \
     --hash=sha256:793431c1f8619afa7d3b52b2cdec859562b950ea0d4b6b505397612db8d5362d \
     --hash=sha256:b8daa0b9e4eef54dd8cf7c86c03713f53241884e814f4e2f5fb342fe520f639b
@@ -110,25 +151,29 @@ rich-rst==1.3.2 \
     --hash=sha256:a1196fdddf1e364b02ec68a05e8ff8f6914fee10fbca2e6b6735f166bb0da8d4 \
     --hash=sha256:a99b4907cbe118cf9d18b0b44de272efa61f15117c61e39ebdc431baf5df722a
     # via cyclopts
-ruff==0.15.8 \
-    --hash=sha256:04f79eff02a72db209d47d665ba7ebcad609d8918a134f86cb13dd132159fc89 \
-    --hash=sha256:0f29b989a55572fb885b77464cf24af05500806ab4edf9a0fd8977f9759d85b1 \
-    --hash=sha256:12e617fc01a95e5821648a6df341d80456bd627bfab8a829f7cfc26a14a4b4a3 \
-    --hash=sha256:2033f963c43949d51e6fdccd3946633c6b37c484f5f98c3035f49c27395a8ab8 \
-    --hash=sha256:432701303b26416d22ba696c39f2c6f12499b89093b61360abc34bcc9bf07762 \
-    --hash=sha256:6ee3ae5c65a42f273f126686353f2e08ff29927b7b7e203b711514370d500de3 \
-    --hash=sha256:75e5cd06b1cf3f47a3996cfc999226b19aa92e7cce682dcd62f80d7035f98f49 \
-    --hash=sha256:8d9a5b8ea13f26ae90838afc33f91b547e61b794865374f114f349e9036835fb \
-    --hash=sha256:995f11f63597ee362130d1d5a327a87cb6f3f5eae3094c620bcc632329a4d26e \
-    --hash=sha256:ac51d486bf457cdc985a412fb1801b2dfd1bd8838372fc55de64b1510eff4bec \
-    --hash=sha256:bc1f0a51254ba21767bfa9a8b5013ca8149dcf38092e6a9eb704d876de94dc34 \
-    --hash=sha256:c2a33a529fb3cbc23a7124b5c6ff121e4d6228029cba374777bd7649cc8598b8 \
-    --hash=sha256:c9861eb959edab053c10ad62c278835ee69ca527b6dcd72b47d5c1e5648964f6 \
-    --hash=sha256:cbe05adeba76d58162762d6b239c9056f1a15a55bd4b346cfd21e26cd6ad7bc7 \
-    --hash=sha256:cf891fa8e3bb430c0e7fac93851a5978fc99c8fa2c053b57b118972866f8e5f2 \
-    --hash=sha256:d3e3d0b6ba8dca1b7ef9ab80a28e840a20070c4b62e56d675c24f366ef330570 \
-    --hash=sha256:d910ae974b7a06a33a057cb87d2a10792a3b2b3b35e33d2699fdf63ec8f6b17a \
-    --hash=sha256:fdce027ada77baa448077ccc6ebb2fa9c3c62fd110d8659d601cf2f475858d94
+ruff==0.15.10 \
+    --hash=sha256:0744e31482f8f7d0d10a11fcbf897af272fefdfcb10f5af907b18c2813ff4d5f \
+    --hash=sha256:0ee3ef42dab7078bda5ff6a1bcba8539e9857deb447132ad5566a038674540d0 \
+    --hash=sha256:136c00ca2f47b0018b073f28cb5c1506642a830ea941a60354b0e8bc8076b151 \
+    --hash=sha256:28cb32d53203242d403d819fd6983152489b12e4a3ae44993543d6fe62ab42ed \
+    --hash=sha256:51cb8cc943e891ba99989dd92d61e29b1d231e14811db9be6440ecf25d5c1609 \
+    --hash=sha256:601d1610a9e1f1c2165a4f561eeaa2e2ea1e97f3287c5aa258d3dab8b57c6188 \
+    --hash=sha256:8154d43684e4333360fedd11aaa40b1b08a4e37d8ffa9d95fee6fa5b37b6fab1 \
+    --hash=sha256:83e1dd04312997c99ea6965df66a14fb4f03ba978564574ffc68b0d61fd3989e \
+    --hash=sha256:8ab88715f3a6deb6bde6c227f3a123410bec7b855c3ae331b4c006189e895cef \
+    --hash=sha256:8b80a2f3c9c8a950d6237f2ca12b206bccff626139be9fa005f14feb881a1ae8 \
+    --hash=sha256:93cc06a19e5155b4441dd72808fdf84290d84ad8a39ca3b0f994363ade4cebb1 \
+    --hash=sha256:a768ff5969b4f44c349d48edf4ab4f91eddb27fd9d77799598e130fb628aa158 \
+    --hash=sha256:b0c52744cf9f143a393e284125d2576140b68264a93c6716464e129a3e9adb48 \
+    --hash=sha256:b1e7c16ea0ff5a53b7c2df52d947e685973049be1cdfe2b59a9c43601897b22e \
+    --hash=sha256:d1f86e67ebfdef88e00faefa1552b5e510e1d35f3be7d423dc7e84e63788c94e \
+    --hash=sha256:d4272e87e801e9a27a2e8df7b21011c909d9ddd82f4f3281d269b6ba19789ca5 \
+    --hash=sha256:e3e53c588164dc025b671c9df2462429d60357ea91af7e92e9d56c565a9f1b07 \
+    --hash=sha256:e59c9bdc056a320fb9ea1700a8d591718b8faf78af065484e801258d3a76bc3f
+typing-extensions==4.15.0 \
+    --hash=sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466 \
+    --hash=sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548
+    # via pyright
 win32-setctime==1.2.0 ; sys_platform == 'win32' \
     --hash=sha256:95d644c4e708aba81dc3704a116d8cbc974d70b3bdb8be1d150e36be6e9d1390 \
--hash=sha256:ae1fdf948f5640aae05c511ade119313fb6a30d7eabe25fef9764dca5873c4c0
|
--hash=sha256:ae1fdf948f5640aae05c511ade119313fb6a30d7eabe25fef9764dca5873c4c0
|
||||||
|
|||||||
@@ -0,0 +1,21 @@
+from .config import AppConfig, GeneralConfig, CredentialConfig, load_config
+from .core import LrcManager
+from .models import CacheStatus, TrackMeta, LyricResult
+from .lrc import LRCData, LyricLine
+from .fetchers import FetcherMethodType
+from .utils import get_sidecar_path
+
+__all__ = [
+    "AppConfig",
+    "GeneralConfig",
+    "CredentialConfig",
+    "load_config",
+    "LrcManager",
+    "CacheStatus",
+    "TrackMeta",
+    "LRCData",
+    "LyricLine",
+    "LyricResult",
+    "FetcherMethodType",
+    "get_sidecar_path",
+]

@@ -4,6 +4,8 @@ Date: 2026-04-06 08:19:54
 Description: The entry point.
 """
 
+from __future__ import annotations
+
 from .cli import run
 
 if __name__ == "__main__":

@@ -4,12 +4,15 @@ Date: 2026-04-06 08:21:01
 Description: Credential authenticators for third-party provider APIs
 """
 
+from __future__ import annotations
+
 from lrx_cli.authenticators.qqmusic import QQMusicAuthenticator
 
 from .base import BaseAuthenticator
 from .spotify import SpotifyAuthenticator
 from .musixmatch import MusixmatchAuthenticator
 from .dummy import DummyAuthenticator
+from ..config import AppConfig
 
 __all__ = [
     "BaseAuthenticator",
@@ -20,11 +23,13 @@ __all__ = [
 ]
 
 
-def create_authenticators(cache) -> dict[str, BaseAuthenticator]:
-    """Factory function to create authenticators with cache access."""
+def create_authenticators(cache, config: AppConfig) -> dict[str, BaseAuthenticator]:
+    """Factory function to create authenticators with injected config."""
     return {
-        "dummy": DummyAuthenticator(),
-        "spotify": SpotifyAuthenticator(cache),
-        "musixmatch": MusixmatchAuthenticator(cache),
-        "qqmusic": QQMusicAuthenticator(),
+        "dummy": DummyAuthenticator(cache, config.credentials, config.general),
+        "spotify": SpotifyAuthenticator(cache, config.credentials, config.general),
+        "musixmatch": MusixmatchAuthenticator(
+            cache, config.credentials, config.general
+        ),
+        "qqmusic": QQMusicAuthenticator(cache, config.credentials, config.general),
     }

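The factory change above replaces module-level credential globals with constructor injection: every authenticator now receives the cache plus `config.credentials` and `config.general`. A minimal, self-contained sketch of that pattern (the class and field names here are illustrative stand-ins, not the project's real definitions):

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class GeneralConfig:
    http_timeout: float = 10.0


@dataclass
class CredentialConfig:
    spotify_sp_dc: str = ""


class BaseAuthenticator:
    """Dependencies are handed in at construction, never read from globals."""

    def __init__(
        self, cache: Any, credentials: CredentialConfig, general: GeneralConfig
    ) -> None:
        self._cache = cache
        self._credentials = credentials
        self._general = general


class SpotifyAuthenticator(BaseAuthenticator):
    def is_configured(self) -> bool:
        return bool(self._credentials.spotify_sp_dc)


def create_authenticators(
    cache: Any, credentials: CredentialConfig, general: GeneralConfig
) -> dict[str, BaseAuthenticator]:
    # Mirrors the diff's factory shape: one injected config, one dict of providers.
    return {"spotify": SpotifyAuthenticator(cache, credentials, general)}
```

The payoff is testability: a test can pass a throwaway `CredentialConfig` instead of patching environment variables.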
@@ -4,13 +4,25 @@ Date: 2026-04-05 03:18:14
 Description: Base class for credential authenticators.
 """
 
+from __future__ import annotations
+
 from abc import ABC, abstractmethod
 from typing import Optional
 
+from ..cache import CacheEngine
+from ..config import CredentialConfig, GeneralConfig
+
 
 class BaseAuthenticator(ABC):
     """Manages obtaining, caching, and refreshing a credential for one provider."""
 
+    def __init__(
+        self, cache: CacheEngine, credentials: CredentialConfig, general: GeneralConfig
+    ) -> None:
+        self._cache = cache
+        self._credentials = credentials
+        self._general = general
+
     @property
     @abstractmethod
     def name(self) -> str: ...

@@ -4,6 +4,8 @@ Date: 2026-04-05 03:36:44
 Description: A dummy authenticator that does nothing and always reports as configured.
 """
 
+from __future__ import annotations
+
 from .base import BaseAuthenticator
 
 

@@ -4,6 +4,8 @@ Date: 2026-04-05 03:27:56
 Description: Musixmatch authenticator — token management, 401 retry, and cooldown.
 """
 
+from __future__ import annotations
+
 import time
 from typing import Optional
 from urllib.parse import urlencode
@@ -12,7 +14,7 @@ from loguru import logger
 
 from .base import BaseAuthenticator
 from ..cache import CacheEngine
-from ..config import HTTP_TIMEOUT, MUSIXMATCH_COOLDOWN_MS, credentials
+from ..config import CredentialConfig, GeneralConfig, MUSIXMATCH_COOLDOWN_MS
 
 _MUSIXMATCH_TOKEN_URL = "https://apic-desktop.musixmatch.com/ws/1.1/token.get"
 
@@ -23,9 +25,18 @@ _MXM_BASE_PARAMS = {
 }
 
 
+def _new_mxm_client(timeout: float) -> httpx.AsyncClient:
+    """Build Musixmatch client without httpx default User-Agent header."""
+    client = httpx.AsyncClient(timeout=timeout, headers=_MXM_HEADERS)
+    client.headers.pop("User-Agent", None)
+    return client
+
+
 class MusixmatchAuthenticator(BaseAuthenticator):
-    def __init__(self, cache: CacheEngine) -> None:
-        self._cache = cache
+    def __init__(
+        self, cache: CacheEngine, credentials: CredentialConfig, general: GeneralConfig
+    ) -> None:
+        super().__init__(cache, credentials, general)
         self._cached_token: Optional[str] = None
         self._cooldown_until_ms: int = 0
 
@@ -77,8 +88,8 @@ class MusixmatchAuthenticator(BaseAuthenticator):
         logger.debug("Musixmatch: fetching anonymous token")
 
         try:
-            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
-                resp = await client.get(url, headers=_MXM_HEADERS)
+            async with _new_mxm_client(self._general.http_timeout) as client:
+                resp = await client.get(url)
                 resp.raise_for_status()
                 data = resp.json()
         except Exception as e:
@@ -102,8 +113,8 @@ class MusixmatchAuthenticator(BaseAuthenticator):
 
     async def _get_token(self) -> Optional[str]:
         """Return a valid token: env var > memory > DB > fresh fetch."""
-        if credentials.MUSIXMATCH_USERTOKEN:
-            return credentials.MUSIXMATCH_USERTOKEN
+        if self._credentials.musixmatch_usertoken:
+            return self._credentials.musixmatch_usertoken
 
         if self._cached_token:
             return self._cached_token
@@ -139,9 +150,9 @@ class MusixmatchAuthenticator(BaseAuthenticator):
             self._set_cooldown()
             return None
 
-        async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
+        async with _new_mxm_client(self._general.http_timeout) as client:
             url = f"{url_base}?{urlencode({**_MXM_BASE_PARAMS, **params, 'usertoken': token})}"
-            resp = await client.get(url, headers=_MXM_HEADERS)
+            resp = await client.get(url)
 
             if resp.status_code == 401:
                 logger.debug("Musixmatch: 401 received, refreshing token")
@@ -151,7 +162,7 @@ class MusixmatchAuthenticator(BaseAuthenticator):
                     self._set_cooldown()
                     return None
                 url = f"{url_base}?{urlencode({**_MXM_BASE_PARAMS, **params, 'usertoken': token})}"
-                resp = await client.get(url, headers=_MXM_HEADERS)
+                resp = await client.get(url)
 
             resp.raise_for_status()
             return resp.json()

@@ -4,22 +4,71 @@ Date: 2026-04-05 03:47:30
 Description: QQ Music API authenticator - currently only a proxy.
 """
 
+from __future__ import annotations
+
 from typing import Optional
 
+import httpx
+from loguru import logger
+
 from .base import BaseAuthenticator
-from ..config import credentials
+from ..cache import CacheEngine
+from ..config import CredentialConfig, GeneralConfig
 
 
 class QQMusicAuthenticator(BaseAuthenticator):
-    def __init__(self) -> None:
-        pass
+    def __init__(
+        self, cache: CacheEngine, credentials: CredentialConfig, general: GeneralConfig
+    ) -> None:
+        super().__init__(cache, credentials, general)
 
     @property
     def name(self) -> str:
         return "qqmusic"
 
     def is_configured(self) -> bool:
-        return bool(credentials.QQ_MUSIC_API_URL)
+        return bool(self._credentials.qq_music_api_url)
 
     async def authenticate(self) -> Optional[str]:
-        return credentials.QQ_MUSIC_API_URL
+        return self._credentials.qq_music_api_url.rstrip("/") or None
+
+    async def search(self, keyword: str, num: int) -> dict | None:
+        """Call qq-music-api search endpoint and return raw JSON payload."""
+        base_url = await self.authenticate()
+        if not base_url:
+            return None
+
+        try:
+            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
+                resp = await client.get(
+                    f"{base_url}/api/search",
+                    params={"keyword": keyword, "type": "song", "num": num},
+                )
+                resp.raise_for_status()
+                data = resp.json()
+                if not isinstance(data, dict):
+                    return None
+                return data
+        except Exception as e:
+            logger.error(f"QQMusic: search request failed: {e}")
+            return None
+
+    async def get_lyric(self, mid: str) -> dict | None:
+        """Call qq-music-api lyric endpoint and return raw JSON payload."""
+        base_url = await self.authenticate()
+        if not base_url:
+            return None
+
+        try:
+            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
+                resp = await client.get(
+                    f"{base_url}/api/lyric",
+                    params={"mid": mid},
+                )
+                resp.raise_for_status()
+                data = resp.json()
+                if not isinstance(data, dict):
+                    return None
+                return data
+        except Exception as e:
+            logger.error(f"QQMusic: lyric request failed for mid={mid}: {e}")
+            return None

@@ -4,6 +4,8 @@ Date: 2026-04-05 03:18:14
 Description: Spotify authenticator — TOTP-based access token via SP_DC cookie.
 """
 
+from __future__ import annotations
+
 import hashlib
 import hmac
 import struct
@@ -14,10 +16,11 @@ from loguru import logger
 
 from .base import BaseAuthenticator
 from ..cache import CacheEngine
-from ..config import HTTP_TIMEOUT, UA_BROWSER, credentials
+from ..config import CredentialConfig, GeneralConfig, UA_BROWSER
 
 _SPOTIFY_TOKEN_URL = "https://open.spotify.com/api/token"
 _SPOTIFY_SERVER_TIME_URL = "https://open.spotify.com/api/server-time"
+_SPOTIFY_LYRICS_URL = "https://spclient.wg.spotify.com/color-lyrics/v2/track/"
 _SPOTIFY_SECRET_URL = (
     "https://raw.githubusercontent.com/xyloflake/spot-secrets-go"
     "/refs/heads/main/secrets/secrets.json"
@@ -32,8 +35,10 @@ SPOTIFY_BASE_HEADERS = {
 
 
 class SpotifyAuthenticator(BaseAuthenticator):
-    def __init__(self, cache: CacheEngine) -> None:
-        self._cache = cache
+    def __init__(
+        self, cache: CacheEngine, credentials: CredentialConfig, general: GeneralConfig
+    ) -> None:
+        super().__init__(cache, credentials, general)
         self._cached_secret: Optional[Tuple[str, int]] = None
         self._cached_token: Optional[str] = None
         self._token_expires_at: float = 0.0
@@ -43,7 +48,7 @@ class SpotifyAuthenticator(BaseAuthenticator):
         return "spotify"
 
     def is_configured(self) -> bool:
-        return bool(credentials.SPOTIFY_SP_DC)
+        return bool(self._credentials.spotify_sp_dc)
 
     @staticmethod
     def _generate_totp(server_time_s: int, secret: str) -> str:
@@ -82,7 +87,9 @@ class SpotifyAuthenticator(BaseAuthenticator):
 
     async def _get_server_time(self, client: httpx.AsyncClient) -> Optional[int]:
         try:
-            res = await client.get(_SPOTIFY_SERVER_TIME_URL, timeout=HTTP_TIMEOUT)
+            res = await client.get(
+                _SPOTIFY_SERVER_TIME_URL, timeout=self._general.http_timeout
+            )
             res.raise_for_status()
             data = res.json()
             if not isinstance(data, dict) or "serverTime" not in data:
@@ -100,7 +107,9 @@ class SpotifyAuthenticator(BaseAuthenticator):
             logger.debug("Spotify: using cached TOTP secret")
             return self._cached_secret
         try:
-            res = await client.get(_SPOTIFY_SECRET_URL, timeout=HTTP_TIMEOUT)
+            res = await client.get(
+                _SPOTIFY_SECRET_URL, timeout=self._general.http_timeout
+            )
             res.raise_for_status()
             data = res.json()
             if not isinstance(data, list) or len(data) == 0:
@@ -133,13 +142,13 @@ class SpotifyAuthenticator(BaseAuthenticator):
         if db_token and time.time() < self._token_expires_at - 30:
             return db_token
 
-        if not credentials.SPOTIFY_SP_DC:
-            logger.error("Spotify: SPOTIFY_SP_DC env var not set — cannot authenticate")
+        if not self._credentials.spotify_sp_dc:
+            logger.error("Spotify: spotify_sp_dc not configured — cannot authenticate")
             return None
 
         headers = {
             "Accept": "*/*",
-            "Cookie": f"sp_dc={credentials.SPOTIFY_SP_DC}",
+            "Cookie": f"sp_dc={self._credentials.spotify_sp_dc}",
             **SPOTIFY_BASE_HEADERS,
         }
 
@@ -166,7 +175,9 @@ class SpotifyAuthenticator(BaseAuthenticator):
 
         try:
             res = await client.get(
-                _SPOTIFY_TOKEN_URL, params=params, timeout=HTTP_TIMEOUT
+                _SPOTIFY_TOKEN_URL,
+                params=params,
+                timeout=self._general.http_timeout,
             )
             if res.status_code != 200:
                 logger.error(f"Spotify: token request returned {res.status_code}")
@@ -200,3 +211,35 @@ class SpotifyAuthenticator(BaseAuthenticator):
         except Exception as e:
             logger.error(f"Spotify: token request failed: {e}")
             return None
+
+    async def get_lyrics(self, track_id: str) -> dict | None:
+        """Fetch raw lyrics JSON payload for a Spotify track."""
+        token = await self.authenticate()
+        if not token:
+            return None
+
+        url = (
+            f"{_SPOTIFY_LYRICS_URL}{track_id}"
+            "?format=json&vocalRemoval=false&market=from_token"
+        )
+        headers = {
+            "Accept": "application/json",
+            "Authorization": f"Bearer {token}",
+            **SPOTIFY_BASE_HEADERS,
+        }
+
+        try:
+            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
+                res = await client.get(url, headers=headers)
+                if res.status_code == 404:
+                    return None
+                if res.status_code != 200:
+                    logger.error(f"Spotify: lyrics API returned {res.status_code}")
+                    return None
+                data = res.json()
+                if not isinstance(data, dict):
+                    return None
+                return data
+        except Exception as e:
+            logger.error(f"Spotify: lyrics fetch failed: {e}")
+            return None

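The hunks above wire config into the Spotify authenticator but never show `_generate_totp` itself (it uses the `hashlib`, `hmac`, and `struct` imports at the top of the file). For reference, standard RFC 6238 TOTP with the stdlib looks like the sketch below; Spotify's scheme differs in how it obtains and encodes the secret, so treat this as the generic algorithm, not the project's exact code:

```python
import hashlib
import hmac
import struct


def generate_totp(secret: bytes, server_time_s: int, period: int = 30, digits: int = 8) -> str:
    counter = server_time_s // period              # time step index
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret `b"12345678901234567890"` and a server time of 59 seconds this yields the published vector `94287082`.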
+23 -15
@@ -5,6 +5,8 @@ Description: SQLite-based lyric cache with per-source slot rows, TTL expiration,
 and schema migrations (confidence versioning + slot migration).
 """
 
+from __future__ import annotations
+
 import json
 import sqlite3
 import hashlib
@@ -22,7 +24,7 @@ from .config import (
     SLOT_UNSYNCED,
 )
 from .models import TrackMeta, LyricResult, CacheStatus
-from .ranking import is_positive_status, select_best_positive
+from .utils import is_positive_status, select_best_positive
 
 
 _ALL_SLOTS = (SLOT_SYNCED, SLOT_UNSYNCED)
@@ -85,9 +87,15 @@ class CacheEngine:
         self.db_path = db_path
         self._init_db()
 
+    def _connect(self) -> sqlite3.Connection:
+        conn = sqlite3.connect(self.db_path)
+        conn.execute("PRAGMA journal_mode=WAL")
+        conn.execute("PRAGMA busy_timeout=5000")
+        return conn
+
     def _init_db(self) -> None:
         """Create cache tables and run one-time slot/cache migrations."""
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             conn.execute("""
                 CREATE TABLE IF NOT EXISTS credentials (
                     name TEXT PRIMARY KEY,
@@ -256,7 +264,7 @@ class CacheEngine:
             return []
 
         now = int(time.time())
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             conn.execute(
                 "DELETE FROM cache WHERE key = ? AND expires_at IS NOT NULL AND expires_at < ?",
                 (key, now),
@@ -353,7 +361,7 @@ class CacheEngine:
         # Convenience for callers that still pass a single negative result.
         kinds = [SLOT_SYNCED, SLOT_UNSYNCED]
 
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             for kind in kinds:
                 conn.execute(
                     """INSERT OR REPLACE INTO cache
@@ -386,7 +394,7 @@ class CacheEngine:
 
     def clear_all(self) -> None:
         """Remove every entry from the cache."""
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             conn.execute("DELETE FROM cache")
             conn.commit()
         logger.info("Cache cleared.")
@@ -396,7 +404,7 @@ class CacheEngine:
         if not self._track_has_meta(track):
             logger.info(f"No cache entries found for {track.display_name()}.")
             return
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             cur = conn.execute(
                 f"DELETE FROM cache WHERE {_TRACK_WHERE}",
                 _track_where_params(track),
@@ -411,7 +419,7 @@ class CacheEngine:
 
     def prune(self) -> int:
         """Remove all expired entries. Returns the number of rows deleted."""
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             cur = conn.execute(
                 "DELETE FROM cache WHERE expires_at IS NOT NULL AND expires_at < ?",
                 (int(time.time()),),
@@ -439,7 +447,7 @@ class CacheEngine:
             return None
 
         now = int(time.time())
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             conn.row_factory = sqlite3.Row
             rows = conn.execute(
                 f"SELECT status, lyrics, source, confidence FROM cache"
@@ -495,7 +503,7 @@ class CacheEngine:
             return []
 
         now = int(time.time())
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             conn.row_factory = sqlite3.Row
             rows = conn.execute(
                 """SELECT * FROM cache
@@ -557,7 +565,7 @@ class CacheEngine:
         """
         if not self._track_has_meta(track):
             return 0
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             cur = conn.execute(
                 f"UPDATE cache SET confidence = ? WHERE {_TRACK_WHERE} AND source = ?",
                 [confidence] + _track_where_params(track) + [source],
@@ -571,7 +579,7 @@ class CacheEngine:
         """Return all cached rows for a given track (across all sources)."""
         if not self._track_has_meta(track):
             return []
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             conn.row_factory = sqlite3.Row
             return [
                 dict(r)
@@ -586,7 +594,7 @@ class CacheEngine:
     def get_credential(self, name: str) -> Optional[dict]:
         """Return cached credential data if present and not expired."""
        now_ms = int(time.time() * 1000)
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             conn.row_factory = sqlite3.Row
             row = conn.execute(
                 "SELECT data FROM credentials WHERE name = ? AND (expires_at IS NULL OR expires_at > ?)",
@@ -603,7 +611,7 @@ class CacheEngine:
         self, name: str, data: dict, expires_at_ms: Optional[int] = None
     ) -> None:
         """Persist credential data, optionally with an expiry timestamp (Unix ms)."""
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             conn.execute(
                 "INSERT OR REPLACE INTO credentials (name, data, expires_at) VALUES (?, ?, ?)",
                 (name, json.dumps(data), expires_at_ms),
@@ -612,14 +620,14 @@ class CacheEngine:
 
     def query_all(self) -> list[dict]:
         """Return every row in the cache table."""
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             conn.row_factory = sqlite3.Row
             return [dict(r) for r in conn.execute("SELECT * FROM cache").fetchall()]
 
     def stats(self) -> dict:
         """Return aggregate cache statistics."""
         now = int(time.time())
-        with sqlite3.connect(self.db_path) as conn:
+        with self._connect() as conn:
             total = conn.execute("SELECT COUNT(*) FROM cache").fetchone()[0]
             expired = conn.execute(
                 "SELECT COUNT(*) FROM cache WHERE expires_at IS NOT NULL AND expires_at < ?",

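The new `_connect` helper centralizes two SQLite pragmas: WAL journaling lets readers proceed while a writer holds the log, and `busy_timeout=5000` makes a blocked connection wait up to 5 seconds instead of immediately raising `sqlite3.OperationalError: database is locked`. `busy_timeout` is per-connection (the journal mode persists in the database file, but re-issuing it is harmless), which is why both belong in a connection factory rather than in `__init__`. A runnable sketch of the same helper:

```python
import sqlite3
import tempfile


def connect(db_path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA journal_mode=WAL")   # readers don't block on writers
    conn.execute("PRAGMA busy_timeout=5000")  # wait up to 5000 ms on a locked db
    return conn


# WAL needs a real file; for ":memory:" databases the mode stays "memory".
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as f:
    db_path = f.name
conn = connect(db_path)
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
```

Querying the pragma back confirms the mode took effect, which is a cheap sanity check to keep in a test suite.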
+164 -12
@@ -4,21 +4,34 @@ Date: 2026-03-26 02:04:39
 Description: CLI interface.
 """
 
+from __future__ import annotations
+
 import sys
 import time
 import os
+import asyncio
+import json
 from pathlib import Path
 from typing import Annotated
 from urllib.parse import quote
 import cyclopts
 from loguru import logger
 
-from .config import DB_PATH, enable_debug
+from .config import (
+    DB_PATH,
+    AppConfig,
+    load_config,
+    enable_debug,
+)
+from .utils import get_sidecar_path
 from .models import TrackMeta
 from .mpris import get_current_track
 from .core import LrcManager
 from .fetchers import FetcherMethodType
-from .lrc import get_sidecar_path
+from .watch import WatchCoordinator
+from .watch.control import ControlClient, parse_delta
+from .watch.view.pipe import PipeOutput
+from .watch.view.print import PrintOutput
 
 
 app = cyclopts.App(
@@ -29,10 +42,17 @@ app.register_install_completion_command()
 cache_app = cyclopts.App(name="cache", help="Manage the local SQLite cache.")
 app.command(cache_app)
 
+watch_app = cyclopts.App(name="watch", help="Watch MPRIS and output lyrics.")
+app.command(watch_app)
+
+ctl_app = cyclopts.App(name="ctl", help="Control a running watch session.")
+watch_app.command(ctl_app)
+
 
 # Global state set by the meta launcher
 _player: str | None = None
 _db_path: str | None = None
+_app_config: AppConfig = AppConfig()
 
 # Will be initialized before any command runs, safe to set to None here
 manager: LrcManager = None  # type: ignore
@@ -51,7 +71,7 @@ def launcher(
         str | None,
         cyclopts.Parameter(
             name=["--player", "-p"],
-            help="Target a specific MPRIS player using its DBus name or a portion thereof.",
+            help="Target a specific MPRIS player using its DBus name or a portion thereof. Bypasses player_blacklist.",
         ),
     ] = None,
|
||||||
db_path: Annotated[
|
db_path: Annotated[
|
||||||
@@ -62,13 +82,13 @@ def launcher(
|
|||||||
),
|
),
|
||||||
] = None,
|
] = None,
|
||||||
):
|
):
|
||||||
global _player, _db_path
|
global _player, _db_path, _app_config, manager
|
||||||
if debug:
|
if debug:
|
||||||
enable_debug()
|
enable_debug()
|
||||||
_player = player
|
_player = player
|
||||||
_db_path = str(Path(db_path).resolve()) if db_path else DB_PATH
|
_db_path = str(Path(db_path).resolve()) if db_path else DB_PATH
|
||||||
global manager
|
_app_config = load_config()
|
||||||
manager = LrcManager(db_path=_db_path)
|
manager = LrcManager(db_path=_db_path, config=_app_config)
|
||||||
app(tokens)
|
app(tokens)
|
||||||
|
|
||||||
|
|
||||||
@@ -114,7 +134,11 @@ def fetch(
|
|||||||
] = False,
|
] = False,
|
||||||
):
|
):
|
||||||
"""Fetch and print lyrics for the currently playing track."""
|
"""Fetch and print lyrics for the currently playing track."""
|
||||||
track = get_current_track(_player)
|
track = get_current_track(
|
||||||
|
_player,
|
||||||
|
preferred_player=_app_config.general.preferred_player,
|
||||||
|
player_blacklist=_app_config.general.player_blacklist,
|
||||||
|
)
|
||||||
|
|
||||||
if not track:
|
if not track:
|
||||||
logger.error("No active playing track found.")
|
logger.error("No active playing track found.")
|
||||||
@@ -298,7 +322,11 @@ def export(
|
|||||||
] = False,
|
] = False,
|
||||||
):
|
):
|
||||||
"""Export lyrics of the current track to a .lrc file."""
|
"""Export lyrics of the current track to a .lrc file."""
|
||||||
track = get_current_track(_player)
|
track = get_current_track(
|
||||||
|
_player,
|
||||||
|
preferred_player=_app_config.general.preferred_player,
|
||||||
|
player_blacklist=_app_config.general.player_blacklist,
|
||||||
|
)
|
||||||
if not track:
|
if not track:
|
||||||
logger.error("No active playing track found.")
|
logger.error("No active playing track found.")
|
||||||
sys.exit(1)
|
sys.exit(1)
|
||||||
@@ -357,6 +385,114 @@ def export(
|
|||||||
sys.exit(1)
|
sys.exit(1)
|
||||||
|
|
||||||
|
|
||||||
|
# watch subcommands
|
||||||
|
|
||||||
|
|
||||||
|
@watch_app.command
|
||||||
|
def pipe(
|
||||||
|
before: Annotated[
|
||||||
|
int,
|
||||||
|
cyclopts.Parameter(
|
||||||
|
name=["--before", "-b"],
|
||||||
|
help="Number of lyric lines to show before current line.",
|
||||||
|
),
|
||||||
|
] = 0,
|
||||||
|
after: Annotated[
|
||||||
|
int,
|
||||||
|
cyclopts.Parameter(
|
||||||
|
name=["--after", "-a"],
|
||||||
|
help="Number of lyric lines to show after current line.",
|
||||||
|
),
|
||||||
|
] = 0,
|
||||||
|
no_newline: Annotated[
|
||||||
|
bool,
|
||||||
|
cyclopts.Parameter(
|
||||||
|
name=["--no-newline", "-n"],
|
||||||
|
negative="",
|
||||||
|
help="Do not append a new line after the lyric output.",
|
||||||
|
),
|
||||||
|
] = False,
|
||||||
|
):
|
||||||
|
"""Watch active player and continuously print lyric window to stdout."""
|
||||||
|
logger.info(
|
||||||
|
"Starting watch pipe (player filter: {})",
|
||||||
|
_player or "<none>",
|
||||||
|
)
|
||||||
|
output = PipeOutput(
|
||||||
|
before=max(0, before), after=max(0, after), no_newline=no_newline
|
||||||
|
)
|
||||||
|
try:
|
||||||
|
session = WatchCoordinator(
|
||||||
|
manager,
|
||||||
|
output,
|
||||||
|
player_hint=_player,
|
||||||
|
config=_app_config,
|
||||||
|
)
|
||||||
|
success = asyncio.run(session.run())
|
||||||
|
if not success:
|
||||||
|
sys.exit(1)
|
||||||
|
except KeyboardInterrupt:
|
||||||
|
logger.info("Watch stopped.")
|
||||||
|
|
||||||
|
|
||||||
|
@watch_app.command(name="print")
|
||||||
|
def watch_print(
|
||||||
|
plain: Annotated[
|
||||||
|
bool,
|
||||||
|
cyclopts.Parameter(
|
||||||
|
name="--plain",
|
||||||
|
negative="",
|
||||||
|
help="Output plain text (strips all tags). Takes priority over --normalize.",
|
||||||
|
),
|
||||||
|
] = False,
|
||||||
|
) -> None:
|
||||||
|
"""Watch active player and print all lyrics to stdout once per track change."""
|
||||||
|
logger.info(
|
||||||
|
"Starting watch print (player filter: {})",
|
||||||
|
_player or "<none>",
|
||||||
|
)
|
||||||
|
output = PrintOutput(plain=plain)
|
||||||
|
try:
|
||||||
|
session = WatchCoordinator(
|
||||||
|
manager,
|
||||||
|
output,
|
||||||
|
player_hint=_player,
|
||||||
|
config=_app_config,
|
||||||
|
)
|
||||||
|
success = asyncio.run(session.run())
|
||||||
|
if not success:
|
||||||
|
sys.exit(1)
|
||||||
|
except KeyboardInterrupt:
|
||||||
|
logger.info("Watch stopped.")
|
||||||
|
|
||||||
|
|
||||||
|
@ctl_app.command
|
||||||
|
def offset(delta: str) -> None:
|
||||||
|
"""Adjust watch offset. Examples: +200, -200, 0."""
|
||||||
|
parsed_ok, parsed_delta, parse_error = parse_delta(delta)
|
||||||
|
if not parsed_ok or parsed_delta is None:
|
||||||
|
logger.error(parse_error or "Invalid offset delta")
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
response = ControlClient(_app_config.watch.socket_path).send(
|
||||||
|
{"cmd": "offset", "delta": parsed_delta}
|
||||||
|
)
|
||||||
|
if not response.get("ok"):
|
||||||
|
logger.error(response.get("error", "Unknown error"))
|
||||||
|
sys.exit(1)
|
||||||
|
print(json.dumps(response, indent=2, ensure_ascii=False))
|
||||||
|
|
||||||
|
|
||||||
|
@ctl_app.command
|
||||||
|
def status() -> None:
|
||||||
|
"""Print current watch session status as JSON."""
|
||||||
|
response = ControlClient(_app_config.watch.socket_path).send({"cmd": "status"})
|
||||||
|
if not response.get("ok"):
|
||||||
|
logger.error(response.get("error", "Unknown error"))
|
||||||
|
sys.exit(1)
|
||||||
|
print(json.dumps(response, indent=2, ensure_ascii=False))
|
||||||
|
|
||||||
|
|
||||||
# cache subcommands
|
# cache subcommands
|
||||||
|
|
||||||
|
|
||||||
@@ -379,7 +515,11 @@ def query(
|
|||||||
print()
|
print()
|
||||||
return
|
return
|
||||||
|
|
||||||
track = get_current_track(_player)
|
track = get_current_track(
|
||||||
|
_player,
|
||||||
|
preferred_player=_app_config.general.preferred_player,
|
||||||
|
player_blacklist=_app_config.general.player_blacklist,
|
||||||
|
)
|
||||||
if not track:
|
if not track:
|
||||||
logger.error("No active playing track found.")
|
logger.error("No active playing track found.")
|
||||||
sys.exit(1)
|
sys.exit(1)
|
||||||
@@ -399,7 +539,11 @@ def clear(
|
|||||||
manager.cache.clear_all()
|
manager.cache.clear_all()
|
||||||
return
|
return
|
||||||
|
|
||||||
track = get_current_track(_player)
|
track = get_current_track(
|
||||||
|
_player,
|
||||||
|
preferred_player=_app_config.general.preferred_player,
|
||||||
|
player_blacklist=_app_config.general.player_blacklist,
|
||||||
|
)
|
||||||
if not track:
|
if not track:
|
||||||
logger.error("No active playing track found.")
|
logger.error("No active playing track found.")
|
||||||
sys.exit(1)
|
sys.exit(1)
|
||||||
@@ -489,7 +633,11 @@ def confidence(
|
|||||||
logger.error("Score must be between 0 and 100.")
|
logger.error("Score must be between 0 and 100.")
|
||||||
sys.exit(1)
|
sys.exit(1)
|
||||||
|
|
||||||
track = get_current_track(_player)
|
track = get_current_track(
|
||||||
|
_player,
|
||||||
|
preferred_player=_app_config.general.preferred_player,
|
||||||
|
player_blacklist=_app_config.general.player_blacklist,
|
||||||
|
)
|
||||||
if not track:
|
if not track:
|
||||||
logger.error("No active playing track found.")
|
logger.error("No active playing track found.")
|
||||||
sys.exit(1)
|
sys.exit(1)
|
||||||
@@ -513,7 +661,11 @@ def insert(
|
|||||||
] = None,
|
] = None,
|
||||||
):
|
):
|
||||||
"""Manually insert lyrics into the cache for the current track."""
|
"""Manually insert lyrics into the cache for the current track."""
|
||||||
track = get_current_track(_player)
|
track = get_current_track(
|
||||||
|
_player,
|
||||||
|
preferred_player=_app_config.general.preferred_player,
|
||||||
|
player_blacklist=_app_config.general.player_blacklist,
|
||||||
|
)
|
||||||
if not track:
|
if not track:
|
||||||
logger.error("No active playing track found.")
|
logger.error("No active playing track found.")
|
||||||
sys.exit(1)
|
sys.exit(1)
|
||||||
+132 -37
@@ -1,14 +1,20 @@
 """
 Author: Uyanide pywang0608@foxmail.com
 Date: 2026-03-25 10:17:56
-Description: Global configuration constants and logger setup.
+Description: Global configuration constants, typed config dataclasses, and logger setup.
 """

+from __future__ import annotations
+
+import dataclasses
 import os
 import sys
+import tomllib
+from dataclasses import dataclass, field
 from pathlib import Path
+from typing import Any, get_type_hints

 from platformdirs import user_cache_dir, user_config_dir
-from dotenv import load_dotenv
 from loguru import logger
 from importlib.metadata import version
@@ -24,13 +30,7 @@ DB_PATH = os.path.join(CACHE_DIR, "cache.db")
 SLOT_SYNCED = "SYNCED"
 SLOT_UNSYNCED = "UNSYNCED"

-# .env loading
-_config_env = Path(user_config_dir(APP_NAME, APP_AUTHOR)) / ".env"
-load_dotenv(_config_env)  # ~/.config/lrx-cli/.env
-load_dotenv()  # .env in cwd (does NOT override existing vars)
-
-# HTTP
-HTTP_TIMEOUT = 10.0
+_WATCH_SOCKET_PATH = str(Path(CACHE_DIR) / "watch.sock")

 # Cache TTLs (seconds)
 TTL_SYNCED = None  # never expires
@@ -66,36 +66,131 @@ UA_LRX = f"LRX-CLI {APP_VERSION} (https://github.com/Uyanide/lrx-cli)"

 MUSIXMATCH_COOLDOWN_MS = 600_000  # 10 minutes

-# Player preference (used when multiple MPRIS players are active)
-PREFERRED_PLAYER = os.environ.get("PREFERRED_PLAYER", "spotify")
-
-
-class _Credentials:
-    """Credential config with lazy os.environ reads.
-
-    Stable constants live as module-level names above.
-    Credentials are @property so monkeypatch.setenv / monkeypatch.delenv
-    affect them without needing to patch each consumer separately.
-    """
-
-    @property
-    def SPOTIFY_SP_DC(self) -> str:
-        return os.environ.get("SPOTIFY_SP_DC", "")
-
-    @property
-    def QQ_MUSIC_API_URL(self) -> str:
-        return os.environ.get("QQ_MUSIC_API_URL", "").rstrip("/")
-
-    @property
-    def MUSIXMATCH_USERTOKEN(self) -> str:
-        return os.environ.get("MUSIXMATCH_USERTOKEN", "")
-
-
-credentials = _Credentials()
-
 os.makedirs(CACHE_DIR, exist_ok=True)

-# Logger
+DEFAULT_PREFERRED_PLAYER = ""
+DEFAULT_PLAYER_BLACKLIST: tuple[str, ...] = (
+    "firefox",
+    "zen",
+    "chrome",
+    "chromium",
+    "vivaldi",
+    "edge",
+    "opera",
+    "mpv",
+)
+
+
+@dataclass(frozen=True)
+class GeneralConfig:
+    preferred_player: str = DEFAULT_PREFERRED_PLAYER
+    player_blacklist: tuple[str, ...] = DEFAULT_PLAYER_BLACKLIST
+    http_timeout: float = 10.0
+
+
+@dataclass(frozen=True)
+class CredentialConfig:
+    spotify_sp_dc: str = ""
+    musixmatch_usertoken: str = ""
+    qq_music_api_url: str = ""
+
+
+@dataclass(frozen=True)
+class WatchConfig:
+    debounce_ms: int = 400
+    calibration_interval_s: float = 3.0
+    position_tick_ms: int = 50
+    socket_path: str = field(default_factory=lambda: _WATCH_SOCKET_PATH)
+
+
+@dataclass(frozen=True)
+class AppConfig:
+    general: GeneralConfig = field(default_factory=GeneralConfig)
+    credentials: CredentialConfig = field(default_factory=CredentialConfig)
+    watch: WatchConfig = field(default_factory=WatchConfig)
+
+
+_CONFIG_PATH = Path(user_config_dir(APP_NAME, APP_AUTHOR)) / "config.toml"
+
+
+def _coerce(val: Any, hint: Any, section: str, name: str) -> Any:
+    """Coerce and validate one TOML value against its declared field type."""
+    if hint is str:
+        if not isinstance(val, str):
+            raise ValueError(
+                f"[{section}].{name}: expected str, got {type(val).__name__}"
+            )
+        return val
+    if hint is int:
+        if not isinstance(val, int) or isinstance(val, bool):
+            raise ValueError(
+                f"[{section}].{name}: expected int, got {type(val).__name__}"
+            )
+        return val
+    if hint is float:
+        if isinstance(val, bool):
+            raise ValueError(f"[{section}].{name}: expected float, got bool")
+        if isinstance(val, (int, float)):
+            return float(val)
+        raise ValueError(
+            f"[{section}].{name}: expected float, got {type(val).__name__}"
+        )
+    origin = getattr(hint, "__origin__", None)
+    if origin is tuple:
+        if not isinstance(val, list):
+            raise ValueError(
+                f"[{section}].{name}: expected array, got {type(val).__name__}"
+            )
+        for i, item in enumerate(val):
+            if not isinstance(item, str):
+                raise ValueError(
+                    f"[{section}].{name}[{i}]: expected str, got {type(item).__name__}"
+                )
+        return tuple(val)
+    raise ValueError(f"[{section}].{name}: unsupported field type {hint!r}")
+
+
+def _parse_section(raw: dict[str, Any], cls: type, section: str) -> Any:
+    """Parse one TOML section dict into a frozen dataclass, rejecting unknown keys."""
+    fields_map = {f.name: f for f in dataclasses.fields(cls)}
+    hints = get_type_hints(cls)
+
+    unknown = set(raw) - set(fields_map)
+    if unknown:
+        raise ValueError(
+            f"Unknown config keys in [{section}]: {', '.join(sorted(unknown))}"
+        )
+
+    kwargs: dict[str, Any] = {}
+    for name, f in fields_map.items():
+        if name not in raw:
+            if f.default is not dataclasses.MISSING:
+                kwargs[name] = f.default
+            elif f.default_factory is not dataclasses.MISSING:  # type: ignore[misc]
+                kwargs[name] = f.default_factory()
+            continue
+        kwargs[name] = _coerce(raw[name], hints[name], section, name)

+    return cls(**kwargs)
+
+
+def load_config(path: Path | None = None) -> AppConfig:
+    """Load AppConfig from TOML file; return all-defaults when file is absent."""
+    resolved = path or _CONFIG_PATH
+    if not resolved.exists():
+        return AppConfig()
+    with open(resolved, "rb") as f:
+        data = tomllib.load(f)
+    return AppConfig(
+        general=_parse_section(data.get("general", {}), GeneralConfig, "general"),
+        credentials=_parse_section(
+            data.get("credentials", {}), CredentialConfig, "credentials"
+        ),
+        watch=_parse_section(data.get("watch", {}), WatchConfig, "watch"),
+    )
+
+
 _LOG_FORMAT = (
     "<green>{time:YYYY-MM-DD HH:mm:ss}</green> | "
     "<level>{level: <8}</level> | "
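The new `load_config` reads `config.toml` from the user config directory and maps each TOML table onto the frozen dataclasses above, rejecting unknown keys and mistyped values. A config touching every recognized key might look like this (all values illustrative, not recommendations; omitted keys fall back to the dataclass defaults):

```toml
[general]
preferred_player = "spotify"
player_blacklist = ["firefox", "chromium"]
http_timeout = 10.0

[credentials]
spotify_sp_dc = "..."
musixmatch_usertoken = "..."
qq_music_api_url = "http://localhost:3200"

[watch]
debounce_ms = 400
calibration_interval_s = 3.0
position_tick_ms = 50
# socket_path defaults to <cache dir>/watch.sock
```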
+7 -4
@@ -5,6 +5,8 @@ Description: Core orchestrator — coordinates fetchers with cache-aware fallbac
 Also handles enrichers & authenticators & …
 """

+from __future__ import annotations
+
 import asyncio
 from typing import Optional
 from loguru import logger
@@ -22,10 +24,11 @@ from .config import (
     HIGH_CONFIDENCE,
     SLOT_SYNCED,
     SLOT_UNSYNCED,
+    AppConfig,
 )
 from .models import TrackMeta, LyricResult, CacheStatus
 from .enrichers import create_enrichers, enrich_track
-from .ranking import is_better_result, select_best_positive
+from .utils import is_better_result, select_best_positive


 # Maps CacheStatus to the default TTL used when storing results
@@ -92,10 +95,10 @@ def _has_negative_for_both_slots(cached_rows: list[LyricResult]) -> bool:
 class LrcManager:
     """Main entry point for fetching lyrics with caching."""

-    def __init__(self, db_path: str) -> None:
+    def __init__(self, db_path: str, config: AppConfig = AppConfig()) -> None:
         self.cache = CacheEngine(db_path=db_path)
-        self.authenticators = create_authenticators(self.cache)
-        self.fetchers = create_fetchers(self.cache, self.authenticators)
+        self.authenticators = create_authenticators(self.cache, config)
+        self.fetchers = create_fetchers(self.cache, self.authenticators, config)
         self.enrichers = create_enrichers(self.authenticators)

     async def _run_group(
@@ -4,6 +4,8 @@ Date: 2026-03-31 06:09:11
 Description: Metadata enrichment pipeline
 """

+from __future__ import annotations
+
 from loguru import logger

 from .base import BaseEnricher
@@ -4,13 +4,15 @@ Date: 2026-03-31 06:11:27
 Description: Enricher that reads metadata from audio file tags.
 """

+from __future__ import annotations
+
 from typing import Optional
 from loguru import logger
 from mutagen._file import File, FileType

 from .base import BaseEnricher
 from ..models import TrackMeta
-from ..lrc import get_audio_path
+from ..utils import get_audio_path


 class AudioTagEnricher(BaseEnricher):
@@ -4,6 +4,8 @@ Date: 2026-03-31 06:08:16
 Description: Base class for metadata enrichers.
 """

+from __future__ import annotations
+
 from abc import ABC, abstractmethod
 from typing import Optional
@@ -4,13 +4,15 @@ Date: 2026-03-31 06:08:44
 Description: Enricher that parses metadata from the audio file path.
 """

+from __future__ import annotations
+
 import re
 from typing import Optional
 from loguru import logger

 from .base import BaseEnricher
 from ..models import TrackMeta
-from ..lrc import get_audio_path
+from ..utils import get_audio_path


 # Common track-number prefixes: "01 - ", "01. ", "1 - ", etc.
@@ -4,8 +4,9 @@ Date: 2026-04-05 02:13:49
 Description: Musixmatch metadata enricher (matcher.track.get by Spotify track ID).
 """

-from typing import Optional
+from __future__ import annotations
+
+from typing import Optional
 from loguru import logger

 from .base import BaseEnricher
@@ -4,6 +4,8 @@ Date: 2026-03-25 02:33:26
 Description: Fetcher pipeline — registry and types.
 """

+from __future__ import annotations
+
 from typing import Literal, Optional
 from loguru import logger

@@ -23,6 +25,7 @@ from ..authenticators import (
     QQMusicAuthenticator,
 )
 from ..cache import CacheEngine
+from ..config import AppConfig
 from ..models import TrackMeta

 FetcherMethodType = Literal[
@@ -52,26 +55,27 @@ _FETCHER_GROUPS: list[list[FetcherMethodType]] = [
 def create_fetchers(
     cache: CacheEngine,
     authenticators: dict[str, BaseAuthenticator],
+    config: AppConfig,
 ) -> dict[FetcherMethodType, BaseFetcher]:
     """Instantiate all fetchers. Returns a dict keyed by source name."""
     spotify_auth = authenticators["spotify"]
     mxm_auth = authenticators["musixmatch"]
-    qqmusic_auth = authenticators.get("qqmusic")
+    qqmusic_auth = authenticators["qqmusic"]
     assert isinstance(spotify_auth, SpotifyAuthenticator)
     assert isinstance(mxm_auth, MusixmatchAuthenticator)
     assert isinstance(qqmusic_auth, QQMusicAuthenticator)
-    fetchers: dict[FetcherMethodType, BaseFetcher] = {
-        "local": LocalFetcher(),
+    g = config.general
+    return {
+        "local": LocalFetcher(g),
         "cache-search": CacheSearchFetcher(cache),
-        "spotify": SpotifyFetcher(spotify_auth),
-        "lrclib": LrclibFetcher(),
-        "musixmatch-spotify": MusixmatchSpotifyFetcher(mxm_auth),
-        "lrclib-search": LrclibSearchFetcher(),
-        "netease": NeteaseFetcher(),
-        "qqmusic": QQMusicFetcher(qqmusic_auth),
-        "musixmatch": MusixmatchFetcher(mxm_auth),
+        "spotify": SpotifyFetcher(g, spotify_auth),
+        "lrclib": LrclibFetcher(g),
+        "musixmatch-spotify": MusixmatchSpotifyFetcher(g, mxm_auth),
+        "lrclib-search": LrclibSearchFetcher(g),
+        "netease": NeteaseFetcher(g),
+        "qqmusic": QQMusicFetcher(g, qqmusic_auth),
+        "musixmatch": MusixmatchFetcher(g, mxm_auth),
     }
-    return fetchers


 def build_plan(
@@ -4,10 +4,14 @@ Date: 2026-03-25 02:33:26
 Description: Base fetcher class and common interfaces.
 """

+from __future__ import annotations
+
 from abc import ABC, abstractmethod
 from typing import Optional
 from dataclasses import dataclass

+from ..authenticators.base import BaseAuthenticator
+from ..config import GeneralConfig
 from ..models import CacheStatus, TrackMeta, LyricResult


@@ -38,6 +42,12 @@ class FetchResult:


 class BaseFetcher(ABC):
+    def __init__(
+        self, general: GeneralConfig, auth: Optional[BaseAuthenticator] = None
+    ) -> None:
+        self._general = general
+        self._auth = auth
+
     @property
     @abstractmethod
     def source_name(self) -> str:
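With configuration injected through `BaseFetcher.__init__`, each fetcher reads per-request tunables such as `http_timeout` from `self._general` instead of the old module-level `HTTP_TIMEOUT`. A reduced sketch of the injection pattern (class names here are stand-ins, not the project's real classes):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class GeneralConfig:
    """Stand-in for the config dataclass carrying HTTP tunables."""
    http_timeout: float = 10.0


class DemoFetcher:
    """Stand-in for a BaseFetcher subclass holding injected config."""

    def __init__(self, general: GeneralConfig) -> None:
        self._general = general

    def timeout(self) -> float:
        # Read the tunable from the injected config, not a module constant
        return self._general.http_timeout
```

Because the config object is frozen, tests can construct a fetcher with a custom timeout without monkeypatching module state.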
@@ -8,10 +8,11 @@ Description: Cache-search fetcher — cross-album fuzzy lookup in the local cach
     albums or is played from different players.
 """

+from __future__ import annotations
+
 from typing import Optional
 from loguru import logger

 from .base import BaseFetcher, FetchResult
 from .selection import SearchCandidate, select_best
 from ..models import TrackMeta, LyricResult, CacheStatus
@@ -7,6 +7,8 @@ Description: Local fetcher — reads lyrics from .lrc sidecar files or embedded
 2. Embedded lyrics in audio metadata (FLAC, MP3 USLT/SYLT tags)
 """

+from __future__ import annotations
+
 from typing import Optional
 from loguru import logger
 from mutagen._file import File
@@ -14,7 +16,8 @@ from mutagen.flac import FLAC

 from .base import BaseFetcher, FetchResult
 from ..models import CacheStatus, TrackMeta, LyricResult
-from ..lrc import get_audio_path, get_sidecar_path, LRCData
+from ..lrc import LRCData
+from ..utils import get_audio_path, get_sidecar_path


 class LocalFetcher(BaseFetcher):
@@ -5,6 +5,8 @@ Description: LRCLIB fetcher — queries lrclib.net for synced/plain lyrics.
 Requires complete track metadata (artist, title, album, duration).
 """

+from __future__ import annotations
+
 import httpx
 from loguru import logger
 from urllib.parse import urlencode
@@ -13,7 +15,6 @@ from .base import BaseFetcher, FetchResult
 from ..models import TrackMeta, LyricResult, CacheStatus
 from ..lrc import LRCData
 from ..config import (
-    HTTP_TIMEOUT,
     TTL_UNSYNCED,
     TTL_NOT_FOUND,
     UA_LRX,
@@ -22,6 +23,38 @@ from ..config import (
 _LRCLIB_API_URL = "https://lrclib.net/api/get"


+def _parse_lrclib_response(data: dict) -> FetchResult:
+    """Parse LRCLIB JSON response into synced/unsynced fetch result."""
+    synced = data.get("syncedLyrics")
+    unsynced = data.get("plainLyrics")
+
+    res_synced: LyricResult = LyricResult(
+        status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
+    )
+    res_unsynced: LyricResult = LyricResult(
+        status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
+    )
+
+    if isinstance(synced, str) and synced.strip():
+        lyrics = LRCData(synced)
+        res_synced = LyricResult(
+            status=CacheStatus.SUCCESS_SYNCED,
+            lyrics=lyrics,
+            source="lrclib",
+        )
+
+    if isinstance(unsynced, str) and unsynced.strip():
+        lyrics = LRCData(unsynced)
+        res_unsynced = LyricResult(
+            status=CacheStatus.SUCCESS_UNSYNCED,
+            lyrics=lyrics,
+            source="lrclib",
+            ttl=TTL_UNSYNCED,
+        )
+
+    return FetchResult(synced=res_synced, unsynced=res_unsynced)
+
+
 class LrclibFetcher(BaseFetcher):
     @property
     def source_name(self) -> str:
@@ -30,12 +63,12 @@ class LrclibFetcher(BaseFetcher):
     def is_available(self, track: TrackMeta) -> bool:
         return track.is_complete

-    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
-        """Fetch lyrics from LRCLIB. Requires complete metadata."""
-        if not track.is_complete:
-            logger.debug("LRCLIB: skipped — incomplete metadata")
-            return FetchResult()
+    async def _api_get(
+        self,
+        client: httpx.AsyncClient,
+        track: TrackMeta,
+    ) -> httpx.Response:
+        """Issue one LRCLIB get request using the same path as production fetch."""
         params = {
             "track_name": track.title,
             "artist_name": track.artist,
@@ -43,11 +76,19 @@ class LrclibFetcher(BaseFetcher):
             "duration": track.length / 1000.0 if track.length else 0,
         }
         url = f"{_LRCLIB_API_URL}?{urlencode(params)}"
+        return await client.get(url, headers={"User-Agent": UA_LRX})
+
+    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
+        """Fetch lyrics from LRCLIB. Requires complete metadata."""
+        if not track.is_complete:
+            logger.debug("LRCLIB: skipped — incomplete metadata")
+            return FetchResult()

         logger.info(f"LRCLIB: fetching lyrics for {track.display_name()}")

         try:
-            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
+            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
                 resp = await client.get(url, headers={"User-Agent": UA_LRX})
|
resp = await self._api_get(client, track)
|
||||||
|
|
||||||
if resp.status_code == 404:
|
if resp.status_code == 404:
|
||||||
logger.debug(f"LRCLIB: not found for {track.display_name()}")
|
logger.debug(f"LRCLIB: not found for {track.display_name()}")
|
||||||
@@ -61,37 +102,16 @@ class LrclibFetcher(BaseFetcher):
|
|||||||
if not isinstance(data, dict):
|
if not isinstance(data, dict):
|
||||||
logger.error(f"LRCLIB: unexpected response type: {type(data).__name__}")
|
logger.error(f"LRCLIB: unexpected response type: {type(data).__name__}")
|
||||||
return FetchResult.from_network_error()
|
return FetchResult.from_network_error()
|
||||||
|
result = _parse_lrclib_response(data)
|
||||||
synced = data.get("syncedLyrics")
|
if result.synced and result.synced.lyrics:
|
||||||
unsynced = data.get("plainLyrics")
|
logger.info(
|
||||||
|
f"LRCLIB: got synced lyrics ({len(result.synced.lyrics)} lines)"
|
||||||
res_synced: LyricResult = LyricResult(
|
|
||||||
status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
|
|
||||||
)
|
)
|
||||||
res_unsynced: LyricResult = LyricResult(
|
if result.unsynced and result.unsynced.lyrics:
|
||||||
status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
|
logger.info(
|
||||||
|
f"LRCLIB: got unsynced lyrics ({len(result.unsynced.lyrics)} lines)"
|
||||||
)
|
)
|
||||||
|
return result
|
||||||
if isinstance(synced, str) and synced.strip():
|
|
||||||
lyrics = LRCData(synced)
|
|
||||||
logger.info(f"LRCLIB: got synced lyrics ({len(lyrics)} lines)")
|
|
||||||
res_synced = LyricResult(
|
|
||||||
status=CacheStatus.SUCCESS_SYNCED,
|
|
||||||
lyrics=lyrics,
|
|
||||||
source=self.source_name,
|
|
||||||
)
|
|
||||||
|
|
||||||
if isinstance(unsynced, str) and unsynced.strip():
|
|
||||||
lyrics = LRCData(unsynced)
|
|
||||||
logger.info(f"LRCLIB: got unsynced lyrics ({len(lyrics)} lines)")
|
|
||||||
res_unsynced = LyricResult(
|
|
||||||
status=CacheStatus.SUCCESS_UNSYNCED,
|
|
||||||
lyrics=lyrics,
|
|
||||||
source=self.source_name,
|
|
||||||
ttl=TTL_UNSYNCED,
|
|
||||||
)
|
|
||||||
|
|
||||||
return FetchResult(synced=res_synced, unsynced=res_unsynced)
|
|
||||||
|
|
||||||
except httpx.HTTPError as e:
|
except httpx.HTTPError as e:
|
||||||
logger.error(f"LRCLIB: HTTP error: {e}")
|
logger.error(f"LRCLIB: HTTP error: {e}")
|
||||||
|
|||||||
@@ -5,6 +5,8 @@ Description: LRCLIB search fetcher — fuzzy search via lrclib.net /api/search.
 Used when metadata is incomplete (no album or duration) but title is available.
 """
+
+from __future__ import annotations

 import asyncio
 import httpx
 from loguru import logger
@@ -15,7 +17,6 @@ from .selection import SearchCandidate, select_best
 from ..models import TrackMeta, LyricResult, CacheStatus
 from ..lrc import LRCData
 from ..config import (
-    HTTP_TIMEOUT,
     TTL_UNSYNCED,
     TTL_NOT_FOUND,
     UA_LRX,
@@ -24,6 +25,24 @@ from ..config import (
 _LRCLIB_SEARCH_URL = "https://lrclib.net/api/search"
+
+
+def _parse_lrclib_search_results(items: list[dict]) -> list[SearchCandidate[dict]]:
+    """Map LRCLIB search JSON items to normalized SearchCandidate entries."""
+    return [
+        SearchCandidate(
+            item=item,
+            duration_ms=item["duration"] * 1000
+            if isinstance(item.get("duration"), (int, float))
+            else None,
+            is_synced=isinstance(item.get("syncedLyrics"), str)
+            and bool(item["syncedLyrics"].strip()),
+            title=item.get("trackName"),
+            artist=item.get("artistName"),
+            album=item.get("albumName"),
+        )
+        for item in items
+    ]
+
+
 class LrclibSearchFetcher(BaseFetcher):
     @property
     def source_name(self) -> str:
@@ -60,22 +79,12 @@ class LrclibSearchFetcher(BaseFetcher):

         return queries

-    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
-        if not track.title:
-            logger.debug("LRCLIB-search: skipped — no title")
-            return FetchResult()
-
-        queries = self._build_queries(track)
-        logger.info(f"LRCLIB-search: searching for {track.display_name()}")
-
-        seen_ids: set[int] = set()
-        candidates: list[dict] = []
-        had_error = False
-
-        try:
-            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
-
-                async def _query(params: dict[str, str]) -> tuple[list[dict], bool]:
+    async def _api_query(
+        self,
+        client: httpx.AsyncClient,
+        params: dict[str, str],
+    ) -> tuple[list[dict], bool]:
+        """Issue one LRCLIB search query using production request path."""
         url = f"{_LRCLIB_SEARCH_URL}?{urlencode(params)}"
         logger.debug(f"LRCLIB-search: query {params}")
         try:
@@ -91,8 +100,20 @@ class LrclibSearchFetcher(BaseFetcher):
             return [], False
         return [item for item in data if isinstance(item, dict)], False

-        all_results = await asyncio.gather(*(_query(p) for p in queries))
+    async def _api_candidates(
+        self,
+        client: httpx.AsyncClient,
+        track: TrackMeta,
+    ) -> tuple[list[dict], bool]:
+        """Request and merge LRCLIB-search candidates using built-in query strategy."""
+        queries = self._build_queries(track)
+        all_results = await asyncio.gather(
+            *(self._api_query(client, p) for p in queries)
+        )
+
+        seen_ids: set[int] = set()
+        candidates: list[dict] = []
+        had_error = False
         for items, err in all_results:
             if err:
                 had_error = True
@@ -103,6 +124,18 @@ class LrclibSearchFetcher(BaseFetcher):
                 if item_id is not None:
                     seen_ids.add(item_id)
                 candidates.append(item)
+        return candidates, had_error
+
+    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
+        if not track.title:
+            logger.debug("LRCLIB-search: skipped — no title")
+            return FetchResult()
+
+        logger.info(f"LRCLIB-search: searching for {track.display_name()}")
+
+        try:
+            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
+                candidates, had_error = await self._api_candidates(client, track)

                 if not candidates:
                     if had_error:
@@ -112,23 +145,10 @@ class LrclibSearchFetcher(BaseFetcher):

                 logger.debug(
                     f"LRCLIB-search: got {len(candidates)} unique candidates "
-                    f"from {len(queries)} queries"
+                    f"from {len(self._build_queries(track))} queries"
                 )

-                mapped = [
-                    SearchCandidate(
-                        item=item,
-                        duration_ms=item["duration"] * 1000
-                        if isinstance(item.get("duration"), (int, float))
-                        else None,
-                        is_synced=isinstance(item.get("syncedLyrics"), str)
-                        and bool(item["syncedLyrics"].strip()),
-                        title=item.get("trackName"),
-                        artist=item.get("artistName"),
-                        album=item.get("albumName"),
-                    )
-                    for item in candidates
-                ]
+                mapped = _parse_lrclib_search_results(candidates)
                 best, confidence = select_best(
                     mapped,
                     track.length,
@@ -11,6 +11,8 @@ Description: Musixmatch fetchers (desktop API, anonymous or usertoken auth).
     musixmatch — metadata search + best-candidate fallback
 """
+
+from __future__ import annotations

 import json
 from typing import Optional
 from loguru import logger
@@ -18,6 +20,7 @@ from loguru import logger
 from .base import BaseFetcher, FetchResult
 from .selection import SearchCandidate, select_best
 from ..authenticators.musixmatch import MusixmatchAuthenticator
+from ..config import GeneralConfig
 from ..lrc import LRCData
 from ..models import CacheStatus, LyricResult, TrackMeta
@@ -82,21 +85,8 @@ def _parse_subtitle(body: str) -> Optional[str]:
     return None


-async def _fetch_macro(
-    auth: MusixmatchAuthenticator,
-    params: dict,
-) -> Optional[LRCData]:
-    """Call macro.subtitles.get via auth.get_json.
-
-    Returns LRCData (richsync preferred over subtitle), or None when no usable
-    lyrics are found. Raises on HTTP/network errors.
-    """
-    logger.debug(f"Musixmatch: macro call with {list(params.keys())}")
-    data = await auth.get_json(_MUSIXMATCH_MACRO_URL, {**_MXM_MACRO_PARAMS, **params})
-    if data is None:
-        return None
-
-    # Musixmatch returns body=[] (not {}) when the track is not found
+def _parse_mxm_macro(data: dict) -> LRCData | None:
+    """Parse macro.subtitles.get payload into LRCData (richsync preferred)."""
     body = data.get("message", {}).get("body", {})
     if not isinstance(body, dict):
         return None
@@ -104,7 +94,6 @@ def _parse_mxm_macro(data: dict) -> LRCData | None:
     if not isinstance(macro_calls, dict):
         return None

-    # Prefer richsync (word-level timing)
     richsync_msg = macro_calls.get("track.richsync.get", {}).get("message", {})
     if (
         isinstance(richsync_msg, dict)
@@ -118,10 +107,8 @@ def _parse_mxm_macro(data: dict) -> LRCData | None:
         if lrc_text:
             lrc = LRCData(lrc_text)
             if lrc:
-                logger.debug("Musixmatch: got richsync lyrics")
                 return lrc

-    # Fall back to subtitle (line-level timing)
     subtitle_msg = macro_calls.get("track.subtitles.get", {}).get("message", {})
     if (
         isinstance(subtitle_msg, dict)
@@ -135,34 +122,81 @@ def _parse_mxm_macro(data: dict) -> LRCData | None:
         if lrc_text:
             lrc = LRCData(lrc_text)
             if lrc:
-                logger.debug("Musixmatch: got subtitle lyrics")
                 return lrc

-    logger.debug("Musixmatch: no usable lyrics in macro response")
     return None


+def _parse_mxm_search(data: dict) -> list[SearchCandidate[int]]:
+    """Parse track.search payload to normalized candidates."""
+    track_list = data.get("message", {}).get("body", {}).get("track_list", [])
+    if not isinstance(track_list, list) or not track_list:
+        return []
+
+    return [
+        SearchCandidate(
+            item=int(t["commontrack_id"]),
+            duration_ms=(
+                float(t["track_length"]) * 1000 if t.get("track_length") else None
+            ),
+            is_synced=bool(t.get("has_subtitles") or t.get("has_richsync")),
+            title=t.get("track_name"),
+            artist=t.get("artist_name"),
+            album=t.get("album_name"),
+        )
+        for item in track_list
+        if isinstance(item, dict)
+        and isinstance(t := item.get("track", {}), dict)
+        and isinstance(t.get("commontrack_id"), int)
+        and not t.get("instrumental")
+    ]
+
+
 class MusixmatchSpotifyFetcher(BaseFetcher):
     """Direct lookup by Spotify track ID — no search, single request."""

-    def __init__(self, auth: MusixmatchAuthenticator) -> None:
-        self.auth = auth
+    _auth: MusixmatchAuthenticator
+
+    def __init__(self, general: GeneralConfig, auth: MusixmatchAuthenticator) -> None:
+        super().__init__(general, auth)

     @property
     def source_name(self) -> str:
         return "musixmatch-spotify"

     def is_available(self, track: TrackMeta) -> bool:
-        return bool(track.trackid) and not self.auth.is_cooldown()
+        return bool(track.trackid) and not self._auth.is_cooldown()
+
+    async def _api_macro(self, params: dict) -> dict | None:
+        """Request macro payload through authenticator using production path."""
+        return await self._auth.get_json(
+            _MUSIXMATCH_MACRO_URL, {**_MXM_MACRO_PARAMS, **params}
+        )
+
+    async def _api_macro_track(self, track: TrackMeta) -> dict | None:
+        """Request macro payload for one track using Spotify ID lookup path."""
+        if not track.trackid:
+            return None
+        return await self._api_macro({"track_spotify_id": track.trackid})
+
+    async def _fetch_macro(self, params: dict) -> LRCData | None:
+        """Request and parse Musixmatch macro lyrics payload."""
+        logger.debug(f"Musixmatch: macro call with {list(params.keys())}")
+        data = await self._api_macro(params)
+        if data is None:
+            return None
+        lrc = _parse_mxm_macro(data)
+        if lrc is None:
+            logger.debug("Musixmatch: no usable lyrics in macro response")
+            return None
+        logger.debug("Musixmatch: parsed macro lyrics")
+        return lrc

     async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
         logger.info(f"Musixmatch-Spotify: fetching lyrics for {track.display_name()}")

         try:
-            lrc = await _fetch_macro(
-                self.auth,
-                {"track_spotify_id": track.trackid},  # type: ignore[dict-item]
-            )
+            lrc = await self._fetch_macro({"track_spotify_id": track.trackid})  # type: ignore[dict-item]
         except AttributeError:
             return FetchResult.from_not_found()
         except Exception as e:
@@ -191,8 +225,10 @@ class MusixmatchSpotifyFetcher(BaseFetcher):
 class MusixmatchFetcher(BaseFetcher):
     """Metadata search + best-candidate lyric fetch."""

-    def __init__(self, auth: MusixmatchAuthenticator) -> None:
-        self.auth = auth
+    _auth: MusixmatchAuthenticator
+
+    def __init__(self, general: GeneralConfig, auth: MusixmatchAuthenticator) -> None:
+        super().__init__(general, auth)

     @property
     def source_name(self) -> str:
@@ -203,11 +239,15 @@ class MusixmatchFetcher(BaseFetcher):
         return "musixmatch"

     def is_available(self, track: TrackMeta) -> bool:
-        return bool(track.title) and not self.auth.is_cooldown()
+        return bool(track.title) and not self._auth.is_cooldown()

-    async def _search(self, track: TrackMeta) -> tuple[Optional[int], float]:
-        """Search for track metadata. Raises on network/HTTP errors."""
-        params: dict = {
+    async def _api_search(self, params: dict) -> dict | None:
+        """Request search payload through authenticator using production path."""
+        return await self._auth.get_json(_MUSIXMATCH_SEARCH_URL, params)
+
+    def _build_search_params(self, track: TrackMeta) -> dict[str, str]:
+        """Build Musixmatch search params for one track."""
+        params: dict[str, str] = {
             "q_track": track.title or "",
             "page_size": "10",
             "f_has_lyrics": "1",
@@ -216,36 +256,66 @@ class MusixmatchFetcher(BaseFetcher):
             params["q_artist"] = track.artist
         if track.album:
             params["q_album"] = track.album
+        return params
+
+    async def _api_search_track(self, track: TrackMeta) -> dict | None:
+        """Request search payload for one track using production path."""
+        return await self._api_search(self._build_search_params(track))
+
+    async def _api_macro(self, params: dict) -> dict | None:
+        """Request macro payload through authenticator using production path."""
+        return await self._auth.get_json(
+            _MUSIXMATCH_MACRO_URL, {**_MXM_MACRO_PARAMS, **params}
+        )
+
+    async def _api_macro_track(self, track: TrackMeta) -> dict | None:
+        """Request macro payload for top-ranked search candidate of one track."""
+        search_data = await self._api_search_track(track)
+        if search_data is None:
+            return None
+
+        candidates = _parse_mxm_search(search_data)
+        if not candidates:
+            return None
+
+        commontrack_id, _confidence = select_best(
+            candidates,
+            track.length,
+            title=track.title,
+            artist=track.artist,
+            album=track.album,
+        )
+        if commontrack_id is None:
+            return None
+
+        return await self._api_macro({"commontrack_id": str(commontrack_id)})
+
+    async def _fetch_macro(self, params: dict) -> LRCData | None:
+        """Request and parse Musixmatch macro lyrics payload."""
+        logger.debug(f"Musixmatch: macro call with {list(params.keys())}")
+        data = await self._api_macro(params)
+        if data is None:
+            return None
+        lrc = _parse_mxm_macro(data)
+        if lrc is None:
+            logger.debug("Musixmatch: no usable lyrics in macro response")
+            return None
+        logger.debug("Musixmatch: parsed macro lyrics")
+        return lrc
+
+    async def _search(self, track: TrackMeta) -> tuple[Optional[int], float]:
+        """Search for track metadata. Raises on network/HTTP errors."""
         logger.debug(f"Musixmatch: searching for '{track.display_name()}'")
-        data = await self.auth.get_json(_MUSIXMATCH_SEARCH_URL, params)
+        data = await self._api_search_track(track)
         if data is None:
             return None, 0.0

-        track_list = data.get("message", {}).get("body", {}).get("track_list", [])
-        if not isinstance(track_list, list) or not track_list:
+        candidates = _parse_mxm_search(data)
+        if not candidates:
             logger.debug("Musixmatch: search returned 0 results")
             return None, 0.0

-        logger.debug(f"Musixmatch: search returned {len(track_list)} candidates")
-
-        candidates = [
-            SearchCandidate(
-                item=int(t["commontrack_id"]),
-                duration_ms=(
-                    float(t["track_length"]) * 1000 if t.get("track_length") else None
-                ),
-                is_synced=bool(t.get("has_subtitles") or t.get("has_richsync")),
-                title=t.get("track_name"),
-                artist=t.get("artist_name"),
-                album=t.get("album_name"),
-            )
-            for item in track_list
-            if isinstance(item, dict)
-            and isinstance(t := item.get("track", {}), dict)
-            and isinstance(t.get("commontrack_id"), int)
-            and not t.get("instrumental")
-        ]
-
+        logger.debug(f"Musixmatch: search returned {len(candidates)} candidates")
+
         best_id, confidence = select_best(
             candidates,
@@ -269,10 +339,7 @@ class MusixmatchFetcher(BaseFetcher):
             logger.debug(f"Musixmatch: no match found for {track.display_name()}")
             return FetchResult.from_not_found()

-            lrc = await _fetch_macro(
-                self.auth,
-                {"commontrack_id": str(commontrack_id)},
-            )
+            lrc = await self._fetch_macro({"commontrack_id": str(commontrack_id)})
         except AttributeError:
             return FetchResult.from_not_found()
         except Exception as e:
+133
-69
@@ -7,6 +7,8 @@ Description: Netease Cloud Music fetcher.
|
|||||||
retrieving lyrics. No authentication required.
|
retrieving lyrics. No authentication required.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
import asyncio
|
import asyncio
|
||||||
import httpx
|
import httpx
|
||||||
from loguru import logger
|
from loguru import logger
|
||||||
@@ -16,7 +18,6 @@ from .selection import SearchCandidate, select_ranked
|
|||||||
from ..models import TrackMeta, LyricResult, CacheStatus
|
from ..models import TrackMeta, LyricResult, CacheStatus
|
||||||
from ..lrc import LRCData
|
from ..lrc import LRCData
|
||||||
from ..config import (
|
from ..config import (
|
||||||
HTTP_TIMEOUT,
|
|
||||||
TTL_NOT_FOUND,
|
TTL_NOT_FOUND,
|
||||||
MULTI_CANDIDATE_DELAY_S,
|
MULTI_CANDIDATE_DELAY_S,
|
||||||
UA_BROWSER,
|
UA_BROWSER,
|
||||||
@@ -31,6 +32,42 @@ _NETEASE_BASE_HEADERS = {
|
|||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def _parse_netease_search(data: dict) -> list[SearchCandidate[int]]:
|
||||||
|
"""Parse Netease search response into scored candidates."""
|
||||||
|
result_body = data.get("result")
|
||||||
|
if not isinstance(result_body, dict):
|
||||||
|
return []
|
||||||
|
|
||||||
|
songs = result_body.get("songs")
|
||||||
|
if not isinstance(songs, list) or len(songs) == 0:
|
||||||
|
return []
|
||||||
|
|
||||||
|
return [
|
||||||
|
SearchCandidate(
|
||||||
|
item=song_id,
|
||||||
|
duration_ms=float(song["dt"]) if isinstance(song.get("dt"), int) else None,
|
||||||
|
title=song.get("name"),
|
||||||
|
artist=", ".join(a.get("name", "") for a in song.get("ar", [])) or None,
|
||||||
|
album=(song.get("al") or {}).get("name"),
|
||||||
|
)
|
||||||
|
for song in songs
|
||||||
|
if isinstance(song, dict) and isinstance(song_id := song.get("id"), int)
|
||||||
|
]
|
||||||
|
|
||||||
|
|
||||||
|
def _parse_netease_lyrics(data: dict) -> LRCData | None:
|
||||||
|
"""Parse Netease lyric response to LRCData."""
|
||||||
|
lrc_obj = data.get("lrc")
|
||||||
|
if not isinstance(lrc_obj, dict):
|
||||||
|
return None
|
||||||
|
|
||||||
|
lrc = lrc_obj.get("lyric", "")
|
||||||
|
if not isinstance(lrc, str) or not lrc.strip():
|
||||||
|
return None
|
||||||
|
|
||||||
|
return LRCData(lrc)
|
||||||
|
|
||||||
|
|
||||||
class NeteaseFetcher(BaseFetcher):
|
class NeteaseFetcher(BaseFetcher):
|
||||||
@property
|
@property
|
||||||
def source_name(self) -> str:
|
def source_name(self) -> str:
|
||||||
@@ -39,6 +76,88 @@ class NeteaseFetcher(BaseFetcher):
|
|||||||
def is_available(self, track: TrackMeta) -> bool:
|
def is_available(self, track: TrackMeta) -> bool:
|
||||||
return bool(track.title)
|
return bool(track.title)
|
||||||
|
|
||||||
|
async def _api_search(
|
||||||
|
self,
|
||||||
|
client: httpx.AsyncClient,
|
||||||
|
query: str,
|
||||||
|
limit: int,
|
||||||
|
) -> dict | None:
|
||||||
|
"""Issue one Netease search request and return JSON payload."""
|
||||||
|
resp = await client.post(
|
||||||
|
_NETEASE_SEARCH_URL,
|
||||||
|
headers=_NETEASE_BASE_HEADERS,
|
||||||
|
data={"s": query, "type": "1", "limit": str(limit), "offset": "0"},
|
||||||
|
)
|
||||||
|
resp.raise_for_status()
|
||||||
|
data = resp.json()
|
||||||
|
if not isinstance(data, dict):
|
||||||
|
return None
|
||||||
|
return data
|
||||||
|
|
||||||
|
async def _api_search_track(
|
||||||
|
self,
|
||||||
|
client: httpx.AsyncClient,
|
||||||
|
track: TrackMeta,
|
||||||
|
limit: int,
|
||||||
|
) -> dict | None:
|
||||||
|
"""Request Netease search payload for one track using production query strategy."""
|
||||||
|
query = f"{track.artist or ''} {track.title or ''}".strip()
|
||||||
|
if not query:
|
||||||
|
return None
|
||||||
|
return await self._api_search(client, query, limit)
|
||||||
|
|
||||||
|
async def _api_lyric(
|
||||||
|
self,
|
||||||
|
client: httpx.AsyncClient,
|
||||||
|
song_id: int,
|
||||||
|
) -> dict | None:
|
||||||
|
"""Issue one Netease lyric request and return JSON payload."""
|
||||||
|
resp = await client.post(
|
||||||
|
_NETEASE_LYRIC_URL,
|
||||||
|
headers=_NETEASE_BASE_HEADERS,
|
||||||
|
data={
|
||||||
|
"id": str(song_id),
|
||||||
|
"cp": "false",
|
||||||
|
"tv": "0",
|
||||||
|
"lv": "0",
|
||||||
|
"rv": "0",
|
||||||
|
"kv": "0",
|
||||||
|
"yv": "0",
|
||||||
|
"ytv": "0",
|
||||||
|
"yrv": "0",
|
||||||
|
},
|
||||||
|
)
|
||||||
|
resp.raise_for_status()
|
||||||
|
data = resp.json()
|
||||||
|
if not isinstance(data, dict):
|
||||||
|
return None
|
||||||
|
return data
|
||||||
|
|
||||||
|
async def _api_lyric_track(
|
||||||
|
self,
|
||||||
|
client: httpx.AsyncClient,
|
||||||
|
track: TrackMeta,
|
||||||
|
limit: int,
|
||||||
|
) -> dict | None:
|
||||||
|
"""Request lyric payload for top-ranked candidate of a track."""
|
||||||
|
search_data = await self._api_search_track(client, track, limit)
|
||||||
|
if search_data is None:
|
||||||
|
return None
|
||||||
|
candidates = _parse_netease_search(search_data)
|
||||||
|
if not candidates:
|
||||||
|
return None
|
||||||
|
ranked = select_ranked(
|
||||||
|
candidates,
|
||||||
|
track.length,
|
||||||
|
title=track.title,
|
||||||
|
artist=track.artist,
|
||||||
|
album=track.album,
|
||||||
|
)
|
||||||
|
if not ranked:
|
||||||
|
return None
|
||||||
|
top_song_id = ranked[0][0]
|
||||||
|
return await self._api_lyric(client, top_song_id)
|
||||||
|
|
||||||
async def _search(
|
async def _search(
|
||||||
self, track: TrackMeta, limit: int = 10
|
self, track: TrackMeta, limit: int = 10
|
||||||
) -> list[tuple[int, float]]:
|
) -> list[tuple[int, float]]:
|
||||||
@@ -49,47 +168,19 @@ class NeteaseFetcher(BaseFetcher):
         logger.debug(f"Netease: searching for '{query}' (limit={limit})")
 
         try:
-            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
-                resp = await client.post(
-                    _NETEASE_SEARCH_URL,
-                    headers=_NETEASE_BASE_HEADERS,
-                    data={"s": query, "type": "1", "limit": str(limit), "offset": "0"},
-                )
-                resp.raise_for_status()
-                result = resp.json()
+            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
+                result = await self._api_search_track(client, track, limit)
 
-                if not isinstance(result, dict):
-                    logger.error(
-                        f"Netease: search returned non-dict: {type(result).__name__}"
-                    )
+                if result is None:
+                    logger.error("Netease: search returned non-dict payload")
                     return []
 
-                result_body = result.get("result")
-                if not isinstance(result_body, dict):
-                    logger.debug("Netease: search 'result' field missing or invalid")
-                    return []
-
-                songs = result_body.get("songs")
-                if not isinstance(songs, list) or len(songs) == 0:
+                candidates = _parse_netease_search(result)
+                if not candidates:
                     logger.debug("Netease: search returned 0 results")
                     return []
 
-                logger.debug(f"Netease: search returned {len(songs)} candidates")
-
-                candidates = [
-                    SearchCandidate(
-                        item=song_id,
-                        duration_ms=float(song["dt"])
-                        if isinstance(song.get("dt"), int)
-                        else None,
-                        title=song.get("name"),
-                        artist=", ".join(a.get("name", "") for a in song.get("ar", []))
-                        or None,
-                        album=(song.get("al") or {}).get("name"),
-                    )
-                    for song in songs
-                    if isinstance(song, dict) and isinstance(song_id := song.get("id"), int)
-                ]
+                logger.debug(f"Netease: search returned {len(candidates)} candidates")
+
                 ranked = select_ranked(
                     candidates,
                     track.length,
@@ -114,44 +205,17 @@ class NeteaseFetcher(BaseFetcher):
         logger.debug(f"Netease: fetching lyrics for song_id={song_id}")
 
         try:
-            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
-                resp = await client.post(
-                    _NETEASE_LYRIC_URL,
-                    headers=_NETEASE_BASE_HEADERS,
-                    data={
-                        "id": str(song_id),
-                        "cp": "false",
-                        "tv": "0",
-                        "lv": "0",
-                        "rv": "0",
-                        "kv": "0",
-                        "yv": "0",
-                        "ytv": "0",
-                        "yrv": "0",
-                    },
-                )
-                resp.raise_for_status()
-                data = resp.json()
+            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
+                data = await self._api_lyric(client, song_id)
 
-                if not isinstance(data, dict):
-                    logger.error(
-                        f"Netease: lyric response is not dict: {type(data).__name__}"
-                    )
+                if data is None:
+                    logger.error("Netease: lyric response is not dict")
                     return FetchResult.from_network_error()
 
-                lrc_obj = data.get("lrc")
-                if not isinstance(lrc_obj, dict):
-                    logger.debug(
-                        f"Netease: no 'lrc' object in response for song_id={song_id}"
-                    )
-                    return FetchResult.from_not_found()
-
-                lrc: str = lrc_obj.get("lyric", "")
-                if not isinstance(lrc, str) or not lrc.strip():
+                lrcdata = _parse_netease_lyrics(data)
+                if lrcdata is None:
                     logger.debug(f"Netease: empty lyrics for song_id={song_id}")
                     return FetchResult.from_not_found()
 
-                lrcdata = LRCData(lrc)
                 status = lrcdata.detect_sync_status()
                 logger.info(
                     f"Netease: got {status.value} lyrics for song_id={song_id} "

+107 -67
@@ -9,79 +9,138 @@ Description: QQ Music fetcher via self-hosted API proxy.
 Search → pick best match → fetch LRC lyrics.
 """
 
+from __future__ import annotations
+
 import asyncio
-import httpx
 from loguru import logger
 
 from .base import BaseFetcher, FetchResult
 from .selection import SearchCandidate, select_ranked
+from ..authenticators import QQMusicAuthenticator
 from ..models import TrackMeta, LyricResult, CacheStatus
 from ..lrc import LRCData
 from ..config import (
-    HTTP_TIMEOUT,
+    GeneralConfig,
     TTL_NOT_FOUND,
     MULTI_CANDIDATE_DELAY_S,
 )
 
-_QQ_MUSIC_API_SEARCH_ENDPOINT = "/api/search"
-_QQ_MUSIC_API_LYRIC_ENDPOINT = "/api/lyric"
-from ..authenticators import QQMusicAuthenticator
-
-
-class QQMusicFetcher(BaseFetcher):
-    def __init__(self, auth: QQMusicAuthenticator) -> None:
-        self.auth = auth
-
-    @property
-    def source_name(self) -> str:
-        return "qqmusic"
-
-    def is_available(self, track: TrackMeta) -> bool:
-        return bool(track.title) and self.auth.is_configured()
-
-    async def _search(
-        self, track: TrackMeta, limit: int = 10
-    ) -> list[tuple[str, float]]:
-        query = f"{track.artist or ''} {track.title or ''}".strip()
-        if not query:
-            return []
-
-        logger.debug(f"QQMusic: searching for '{query}' (limit={limit})")
-
-        try:
-            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
-                resp = await client.get(
-                    f"{await self.auth.authenticate()}{_QQ_MUSIC_API_SEARCH_ENDPOINT}",
-                    params={"keyword": query, "type": "song", "num": limit},
-                )
-                resp.raise_for_status()
-                data = resp.json()
-
+def _parse_qq_search(data: dict) -> list[SearchCandidate[str]]:
+    """Parse QQMusic search response into normalized candidates."""
     if data.get("code") != 0:
-        logger.error(f"QQMusic: search API error: {data}")
         return []
 
     songs = data.get("data", {}).get("list", [])
-    if not songs:
-        logger.debug("QQMusic: search returned 0 results")
+    if not isinstance(songs, list):
         return []
 
-    logger.debug(f"QQMusic: search returned {len(songs)} candidates")
-
-    candidates = [
+    return [
         SearchCandidate(
             item=mid,
             duration_ms=float(song["interval"]) * 1000
             if isinstance(song.get("interval"), int)
             else None,
             title=song.get("name"),
-            artist=", ".join(s.get("name", "") for s in song.get("singer", []))
-            or None,
+            artist=", ".join(s.get("name", "") for s in song.get("singer", [])) or None,
             album=(song.get("album") or {}).get("name"),
         )
         for song in songs
         if isinstance(song, dict) and isinstance(mid := song.get("mid"), str)
     ]
 
 
+def _parse_qq_lyrics(data: dict) -> LRCData | None:
+    """Parse QQMusic lyric response to LRCData."""
+    if data.get("code") != 0:
+        return None
+
+    lrc = data.get("data", {}).get("lyric", "")
+    if not isinstance(lrc, str) or not lrc.strip():
+        return None
+    return LRCData(lrc)
+
+
+class QQMusicFetcher(BaseFetcher):
+    _auth: QQMusicAuthenticator
+
+    def __init__(self, general: GeneralConfig, auth: QQMusicAuthenticator) -> None:
+        super().__init__(general, auth)
+
+    @property
+    def source_name(self) -> str:
+        return "qqmusic"
+
+    def is_available(self, track: TrackMeta) -> bool:
+        return bool(track.title) and self._auth.is_configured()
+
+    async def _api_search(
+        self,
+        track: TrackMeta,
+        limit: int,
+    ) -> dict | None:
+        """Return raw QQMusic search payload for one track."""
+        query = f"{track.artist or ''} {track.title or ''}".strip()
+        if not query:
+            return None
+        data = await self._auth.search(query, limit)
+        if not isinstance(data, dict):
+            return None
+        return data
+
+    async def _api_lyric(
+        self,
+        mid: str,
+    ) -> dict | None:
+        """Return raw QQMusic lyric payload for one song MID."""
+        data = await self._auth.get_lyric(mid)
+        if not isinstance(data, dict):
+            return None
+        return data
+
+    async def _api_lyric_track(
+        self,
+        track: TrackMeta,
+        limit: int,
+    ) -> dict | None:
+        """Return raw QQMusic lyric payload for top-ranked search candidate."""
+        search_data = await self._api_search(track, limit)
+        if search_data is None:
+            return None
+
+        candidates = _parse_qq_search(search_data)
+        if not candidates:
+            return None
+
+        ranked = select_ranked(
+            candidates,
+            track.length,
+            title=track.title,
+            artist=track.artist,
+            album=track.album,
+        )
+        if not ranked:
+            return None
+
+        mid = ranked[0][0]
+        return await self._api_lyric(mid)
+
+    async def _search(
+        self, track: TrackMeta, limit: int = 10
+    ) -> list[tuple[str, float]]:
+        search_data = await self._api_search(track, limit)
+        if search_data is None:
+            return []
+
+        query = f"{track.artist or ''} {track.title or ''}".strip()
+        logger.debug(f"QQMusic: searching for '{query}' (limit={limit})")
+
+        candidates = _parse_qq_search(search_data)
+        if not candidates:
+            logger.debug("QQMusic: search returned 0 results")
+            return []
+
+        logger.debug(f"QQMusic: search returned {len(candidates)} candidates")
         ranked = select_ranked(
             candidates,
             track.length,
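The hunk above splits response parsing out of the fetcher into pure helpers (`_parse_qq_search`, `_parse_qq_lyrics`) that can be unit-tested without network I/O or an authenticator. A minimal standalone sketch of that pattern; the dataclass below is a simplified stand-in for the project's generic `SearchCandidate`, and the sample payload is invented for illustration:

```python
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class SearchCandidate:
    # simplified stand-in for the project's generic SearchCandidate
    item: str
    duration_ms: float | None
    title: str | None
    artist: str | None


def parse_qq_search(data: dict) -> list[SearchCandidate]:
    """Turn a raw QQMusic-style search payload into normalized candidates."""
    if data.get("code") != 0:
        return []
    songs = data.get("data", {}).get("list", [])
    if not isinstance(songs, list):
        return []
    return [
        SearchCandidate(
            item=mid,
            # API reports seconds; normalize to milliseconds for ranking
            duration_ms=float(song["interval"]) * 1000
            if isinstance(song.get("interval"), int)
            else None,
            title=song.get("name"),
            artist=", ".join(s.get("name", "") for s in song.get("singer", [])) or None,
        )
        # the walrus in the filter binds `mid` for use in the element expression
        for song in songs
        if isinstance(song, dict) and isinstance(mid := song.get("mid"), str)
    ]


payload = {
    "code": 0,
    "data": {
        "list": [
            {"mid": "003abc", "name": "Song", "interval": 213,
             "singer": [{"name": "Artist"}]}
        ]
    },
}
print(parse_qq_search(payload)[0].item)  # 003abc
```

Because the helper takes a plain `dict` and returns plain values, malformed payloads (wrong `code`, missing `list`, non-string `mid`) can be covered by tests without mocking HTTP.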
@@ -98,32 +157,17 @@ class QQMusicFetcher(BaseFetcher):
             logger.debug("QQMusic: no suitable candidate found")
         return ranked
 
-        except Exception as e:
-            logger.error(f"QQMusic: search failed: {e}")
-            return []
-
     async def _get_lyric(self, mid: str, confidence: float = 0.0) -> FetchResult:
         logger.debug(f"QQMusic: fetching lyrics for mid={mid}")
-        try:
-            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
-                resp = await client.get(
-                    f"{await self.auth.authenticate()}{_QQ_MUSIC_API_LYRIC_ENDPOINT}",
-                    params={"mid": mid},
-                )
-                resp.raise_for_status()
-                data = resp.json()
-
-                if data.get("code") != 0:
-                    logger.error(f"QQMusic: lyric API error: {data}")
-                    return FetchResult.from_network_error()
+        data = await self._api_lyric(mid)
+        if data is None:
+            return FetchResult.from_network_error()
 
-                lrc = data.get("data", {}).get("lyric", "")
-                if not isinstance(lrc, str) or not lrc.strip():
+        lrcdata = _parse_qq_lyrics(data)
+        if lrcdata is None:
             logger.debug(f"QQMusic: empty lyrics for mid={mid}")
             return FetchResult.from_not_found()
 
-        lrcdata = LRCData(lrc)
         status = lrcdata.detect_sync_status()
         logger.info(
             f"QQMusic: got {status.value} lyrics for mid={mid} ({len(lrcdata)} lines)"
@@ -149,12 +193,8 @@ class QQMusicFetcher(BaseFetcher):
             ),
         )
 
-        except Exception as e:
-            logger.error(f"QQMusic: lyric fetch failed for mid={mid}: {e}")
-            return FetchResult.from_network_error()
-
     async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
-        if not self.auth.is_configured():
+        if not self._auth.is_configured():
             logger.debug("QQMusic: skipped — Auth not configured")
             return FetchResult()
 
@@ -8,6 +8,8 @@ Description: Shared candidate-selection logic for search-based fetchers.
 proximity, and sync status.
 """
 
+from __future__ import annotations
+
 from dataclasses import dataclass
 from typing import Generic, Optional, TypeVar
 
@@ -4,37 +4,24 @@ Date: 2026-03-25 10:43:21
 Description: Spotify fetcher — obtains synced lyrics via Spotify's internal color-lyrics API.
 """
 
-import httpx
+from __future__ import annotations
 
 from loguru import logger
 
 from .base import BaseFetcher, FetchResult
-from ..authenticators.spotify import SpotifyAuthenticator, SPOTIFY_BASE_HEADERS
+from ..authenticators.spotify import SpotifyAuthenticator
 from ..models import TrackMeta, LyricResult, CacheStatus
 from ..lrc import LRCData
-from ..config import HTTP_TIMEOUT, TTL_NOT_FOUND
+from ..config import GeneralConfig, TTL_NOT_FOUND
 
-_SPOTIFY_LYRICS_URL = "https://spclient.wg.spotify.com/color-lyrics/v2/track/"
-
-
-class SpotifyFetcher(BaseFetcher):
-    def __init__(self, auth: SpotifyAuthenticator) -> None:
-        self.auth = auth
-
-    @property
-    def source_name(self) -> str:
-        return "spotify"
-
-    def is_available(self, track: TrackMeta) -> bool:
-        return bool(track.trackid) and self.auth.is_configured()
-
-    @staticmethod
 def _format_lrc_line(start_ms: int, words: str) -> str:
     minutes = start_ms // 60000
     seconds = (start_ms // 1000) % 60
     centiseconds = round((start_ms % 1000) / 10.0)
     return f"[{minutes:02d}:{seconds:02d}.{centiseconds:02.0f}]{words}"
 
-    @staticmethod
 def _is_truly_synced(lines: list[dict]) -> bool:
     for line in lines:
         try:
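`_format_lrc_line`, which this hunk demotes from a `@staticmethod` to a module-level function without changing its body, packs a millisecond offset into an `[mm:ss.xx]` LRC tag. Reproduced standalone to show the arithmetic:

```python
def format_lrc_line(start_ms: int, words: str) -> str:
    # same arithmetic as _format_lrc_line in the hunk above
    minutes = start_ms // 60000          # whole minutes
    seconds = (start_ms // 1000) % 60    # remaining whole seconds
    centiseconds = round((start_ms % 1000) / 10.0)  # hundredths of a second
    return f"[{minutes:02d}:{seconds:02d}.{centiseconds:02.0f}]{words}"


print(format_lrc_line(83450, "hello"))  # [01:23.45]hello
```

One edge worth noting: for `start_ms % 1000` of 995–999, `round(.../10.0)` yields 100, so the centisecond field widens to three digits instead of carrying into the seconds; whether that matters depends on how strict the LRC consumer is.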
@@ -45,55 +32,24 @@ class SpotifyFetcher(BaseFetcher):
             continue
     return False
 
-    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
-        if not track.trackid:
-            logger.debug("Spotify: skipped — no trackid in metadata")
-            return FetchResult()
-
-        logger.info(f"Spotify: fetching lyrics for trackid={track.trackid}")
+def _parse_spotify_lyrics(data: dict) -> LRCData | None:
+    """Parse Spotify color-lyrics payload to LRCData."""
+    lyrics_data = data.get("lyrics")
+    if not isinstance(lyrics_data, dict):
+        return None
 
-        token = await self.auth.authenticate()
-        if not token:
-            logger.error("Spotify: cannot fetch lyrics without a token")
-            return FetchResult.from_network_error()
-
-        url = f"{_SPOTIFY_LYRICS_URL}{track.trackid}?format=json&vocalRemoval=false&market=from_token"
-        headers = {
-            "Accept": "application/json",
-            "Authorization": f"Bearer {token}",
-            **SPOTIFY_BASE_HEADERS,
-        }
-
-        try:
-            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
-                res = await client.get(url, headers=headers)
-
-                if res.status_code == 404:
-                    logger.debug(f"Spotify: 404 for trackid={track.trackid}")
-                    return FetchResult.from_not_found()
-
-                if res.status_code != 200:
-                    logger.error(f"Spotify: lyrics API returned {res.status_code}")
-                    return FetchResult.from_network_error()
-
-                data = res.json()
-
-                if not isinstance(data, dict) or "lyrics" not in data:
-                    logger.error("Spotify: unexpected lyrics response structure")
-                    return FetchResult.from_network_error()
-
-                lyrics_data = data["lyrics"]
     sync_type = lyrics_data.get("syncType", "")
     lines = lyrics_data.get("lines", [])
 
     if not isinstance(lines, list) or len(lines) == 0:
-        logger.debug("Spotify: response contained no lyric lines")
-        return FetchResult.from_not_found()
+        return None
 
-    is_synced = sync_type == "LINE_SYNCED" and self._is_truly_synced(lines)
+    is_synced = sync_type == "LINE_SYNCED" and _is_truly_synced(lines)
 
     lrc_lines: list[str] = []
     for line in lines:
+        if not isinstance(line, dict):
+            continue
         words = line.get("words", "")
         if not isinstance(words, str):
             continue
@@ -103,20 +59,58 @@ class SpotifyFetcher(BaseFetcher):
             ms = 0
 
         if is_synced:
-            lrc_lines.append(self._format_lrc_line(ms, words))
+            lrc_lines.append(_format_lrc_line(ms, words))
         else:
             lrc_lines.append(f"[00:00.00]{words}")
 
-        content = LRCData("\n".join(lrc_lines))
-        status = (
-            CacheStatus.SUCCESS_SYNCED
-            if is_synced
-            else CacheStatus.SUCCESS_UNSYNCED
-        )
-
-        logger.info(f"Spotify: got {status.value} lyrics ({len(lrc_lines)} lines)")
+    if not lrc_lines:
+        return None
+    return LRCData("\n".join(lrc_lines))
+
+
+class SpotifyFetcher(BaseFetcher):
+    def __init__(self, general: GeneralConfig, auth: SpotifyAuthenticator) -> None:
+        super().__init__(general, auth)
+
+    _auth: SpotifyAuthenticator
+
+    @property
+    def source_name(self) -> str:
+        return "spotify"
+
+    def is_available(self, track: TrackMeta) -> bool:
+        return bool(track.trackid) and self._auth.is_configured()
+
+    async def _api_lyrics(self, track: TrackMeta) -> dict | None:
+        """Return raw Spotify lyrics payload for one track using production auth path."""
+        if not track.trackid:
+            return None
+        data = await self._auth.get_lyrics(track.trackid)
+        if not isinstance(data, dict):
+            return None
+        return data
+
+    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
+        if not track.trackid:
+            logger.debug("Spotify: skipped — no trackid in metadata")
+            return FetchResult()
+
+        logger.info(f"Spotify: fetching lyrics for trackid={track.trackid}")
+
+        data = await self._api_lyrics(track)
+        if data is None:
+            logger.debug(f"Spotify: no lyrics payload for trackid={track.trackid}")
+            return FetchResult.from_not_found()
+
+        content = _parse_spotify_lyrics(data)
+        if content is None:
+            logger.debug("Spotify: response contained no parseable lyric lines")
+            return FetchResult.from_not_found()
+
+        status = content.detect_sync_status()
+        logger.info(f"Spotify: got {status.value} lyrics ({len(content)} lines)")
         not_found = LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
-        if is_synced:
+        if status == CacheStatus.SUCCESS_SYNCED:
             return FetchResult(
                 synced=LyricResult(
                     status=CacheStatus.SUCCESS_SYNCED,
@@ -133,7 +127,3 @@ class SpotifyFetcher(BaseFetcher):
                     source=self.source_name,
                 ),
             )
 
-        except Exception as e:
-            logger.error(f"Spotify: lyrics fetch failed: {e}")
-            return FetchResult.from_network_error()

+2 -33
@@ -4,12 +4,12 @@ Date: 2026-03-25 21:54:01
 Description: LRC parsing, modeling, and serialization helpers.
 """
 
+from __future__ import annotations
+
 from abc import ABC, abstractmethod
 from dataclasses import dataclass, field
 import re
-from pathlib import Path
 from typing import Optional
-from urllib.parse import unquote
 
 from .models import CacheStatus
 
@@ -463,34 +463,3 @@ class LRCData:
         """
         normalized = self.normalize()
         return self._serialize_lines(normalized._lines, include_word_sync=False)
-
-
-def get_audio_path(audio_url: str, ensure_exists: bool = False) -> Optional[Path]:
-    """Convert file:// URL to Path, return None if invalid or (if ensure_exists) file doesn't exist."""
-    if not audio_url.startswith("file://"):
-        return None
-    file_path = unquote(audio_url.replace("file://", "", 1))
-    path = Path(file_path)
-    if ensure_exists and not path.exists():
-        return None
-    return path
-
-
-def get_sidecar_path(
-    audio_url: str,
-    ensure_audio_exists: bool = False,
-    ensure_exists: bool = False,
-    extension: str = ".lrc",
-) -> Optional[Path]:
-    """Given a file:// URL, return the corresponding .lrc sidecar path.
-
-    If ensure_audio_exists is True, return None if the audio file does not exist.
-    If ensure_exists is True, return None if the .lrc file does not exist.
-    """
-    audio_path = get_audio_path(audio_url, ensure_exists=ensure_audio_exists)
-    if not audio_path:
-        return None
-    lrc_path = audio_path.with_suffix(extension)
-    if ensure_exists and not lrc_path.exists():
-        return None
-    return lrc_path

+72 -29
@@ -4,18 +4,21 @@ Date: 2026-03-25 04:44:15
 Description: MPRIS integration for fetching track metadata.
 """
 
+from __future__ import annotations
+
 import asyncio
 from dbus_next.aio.message_bus import MessageBus
 from dbus_next.constants import BusType
 from dbus_next.message import Message
-from lrx_cli.models import TrackMeta
-from lrx_cli.config import PREFERRED_PLAYER
 from loguru import logger
 from typing import Optional, List, Any
 
+from .config import DEFAULT_PLAYER_BLACKLIST, DEFAULT_PREFERRED_PLAYER
+from .models import TrackMeta
+
+
 async def _list_mpris_players(bus: MessageBus) -> List[str]:
-    """List all MPRIS player bus names."""
+    """List all MPRIS player bus names without any filtering."""
     try:
         reply = await bus.call(
             Message(
@@ -52,47 +55,79 @@ async def _get_playback_status(bus: MessageBus, player_name: str) -> Optional[str]:
         return None
 
 
+def pick_active_player(
+    all_names: list[str],
+    playing: list[str],
+    preferred: str,
+    last_active: str | None = None,
+) -> str | None:
+    """Select the best MPRIS player by play state, preferred keyword, and continuity.
+
+    Priority: single playing > preferred keyword among playing > preferred keyword
+    among all candidates > last active > first candidate.
+    """
+    if not all_names:
+        return None
+    if len(playing) == 1:
+        return playing[0]
+    candidates = playing if playing else all_names
+    preferred_lower = preferred.lower().strip()
+    if preferred_lower:
+        for name in candidates:
+            if preferred_lower in name.lower():
+                return name
+    if last_active and last_active in all_names:
+        return last_active
+    return candidates[0] if candidates else None
+
+
 async def _select_player(
-    bus: MessageBus, specific_player: Optional[str] = None
+    bus: MessageBus,
+    specific_player: Optional[str],
+    preferred_player: str,
+    player_blacklist: tuple[str, ...],
 ) -> Optional[str]:
     """Select the best MPRIS player.
 
-    When specific_player is given, filter by name match.
+    When specific_player is given, it bypasses player_blacklist and filters by name.
     Otherwise: prefer the currently playing player. If multiple are playing,
-    prefer the one matching PREFERRED_PLAYER env var (default: spotify).
+    prefer the one matching preferred_player (default: spotify).
     """
-    players = await _list_mpris_players(bus)
-    if not players:
+    all_names = await _list_mpris_players(bus)
+    if not all_names:
         return None
 
     if specific_player:
-        players = [p for p in players if specific_player.lower() in p.lower()]
-        return players[0] if players else None
+        # --player bypasses player_blacklist so the user can target any player
+        matched = [p for p in all_names if specific_player.lower() in p.lower()]
+        return matched[0] if matched else None
 
-    # Check playback status for each player
-    playing = []
-    for p in players:
+    # auto-selection: apply blacklist before choosing
+    # candidates = []
+    # for p in all_names:
+    #     if any(x.lower() in p.lower() for x in player_blacklist):
+    #         logger.info(f"Excluding blacklisted player: {p}")
+    #     else:
+    #         candidates.append(p)
+    candidates = [
+        p
+        for p in all_names
+        if not any(x.lower() in p.lower() for x in player_blacklist)
+    ]
+    playing: list[str] = []
+    for p in candidates:
         status = await _get_playback_status(bus, p)
         logger.debug(f"Player {p}: {status}")
         if status == "Playing":
             playing.append(p)
 
-    candidates = playing if playing else players
-
-    if len(candidates) == 1:
-        return candidates[0]
-
-    # Multiple candidates: prefer PREFERRED_PLAYER
-    preferred = PREFERRED_PLAYER.lower()
-    if preferred:
-        for p in candidates:
-            if preferred in p.lower():
-                return p
-    return candidates[0]
+    return pick_active_player(candidates, playing, preferred_player)
 
 
 async def _fetch_metadata_dbus(
-    specific_player: Optional[str] = None,
+    specific_player: Optional[str],
+    preferred_player: str,
+    player_blacklist: tuple[str, ...],
 ) -> Optional[TrackMeta]:
     bus = None
     try:
@@ -102,7 +137,9 @@ async def _fetch_metadata_dbus(
         return None
 
     try:
-        player_name = await _select_player(bus, specific_player)
+        player_name = await _select_player(
+            bus, specific_player, preferred_player, player_blacklist
+        )
         if not player_name:
             logger.debug(
                 f"No active MPRIS players found via DBus{' for ' + specific_player if specific_player else ''}."
@@ -182,9 +219,15 @@ async def _fetch_metadata_dbus(
         bus.disconnect()
 
 
-def get_current_track(player_name: Optional[str] = None) -> Optional[TrackMeta]:
+def get_current_track(
+    player_name: Optional[str] = None,
+    preferred_player: str = DEFAULT_PREFERRED_PLAYER,
+    player_blacklist: tuple[str, ...] = DEFAULT_PLAYER_BLACKLIST,
+) -> Optional[TrackMeta]:
     try:
-        return asyncio.run(_fetch_metadata_dbus(player_name))
+        return asyncio.run(
+            _fetch_metadata_dbus(player_name, preferred_player, player_blacklist)
+        )
     except Exception as e:
         logger.error(f"DBus async loop failed: {e}")
         return None
@@ -5,6 +5,8 @@ Description: Shared text normalization utilities for fuzzy matching.
 Used by cache key generation, cache search, and candidate selection scoring.
 """
 
+from __future__ import annotations
+
 import re
 import unicodedata
 
@@ -1,14 +1,56 @@
-"""Shared ranking rules for LyricResult selection.
-
-This module centralizes how positive lyric results are compared so cache/core
-and other callers use the same precedence and edge-case handling.
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-04-10 17:06:37
+Description: Utility functions
 """

 from __future__ import annotations

-from typing import Optional
+from typing import TYPE_CHECKING, Optional
+from urllib.parse import unquote
+from pathlib import Path

-from .models import CacheStatus, LyricResult
+from .models import CacheStatus
+
+if TYPE_CHECKING:
+    from .models import LyricResult
+
+
+# Paths
+
+
+def get_audio_path(audio_url: str, ensure_exists: bool = False) -> Optional[Path]:
+    """Convert file:// URL to Path, return None if invalid or (if ensure_exists) file doesn't exist."""
+    if not audio_url.startswith("file://"):
+        return None
+    file_path = unquote(audio_url.replace("file://", "", 1))
+    path = Path(file_path)
+    if ensure_exists and not path.exists():
+        return None
+    return path
+
+
+def get_sidecar_path(
+    audio_url: str,
+    ensure_audio_exists: bool = False,
+    ensure_exists: bool = False,
+    extension: str = ".lrc",
+) -> Optional[Path]:
+    """Given a file:// URL, return the corresponding .lrc sidecar path.
+
+    If ensure_audio_exists is True, return None if the audio file does not exist.
+    If ensure_exists is True, return None if the .lrc file does not exist.
+    """
+    audio_path = get_audio_path(audio_url, ensure_exists=ensure_audio_exists)
+    if not audio_path:
+        return None
+    lrc_path = audio_path.with_suffix(extension)
+    if ensure_exists and not lrc_path.exists():
+        return None
+    return lrc_path
+
+
+# Ranking


 def is_positive_status(status: CacheStatus) -> bool:
@@ -0,0 +1,5 @@
+from __future__ import annotations
+
+from .session import WatchCoordinator
+
+__all__ = ["WatchCoordinator"]
@@ -0,0 +1,154 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-04-10 08:14:58
+Description: Unix-socket control channel for communicating with a running watch session.
+"""
+
+from __future__ import annotations
+
+
+import asyncio
+import json
+from pathlib import Path
+from typing import TYPE_CHECKING
+
+from loguru import logger
+
+if TYPE_CHECKING:
+    from .session import WatchCoordinator
+
+
+class ControlServer:
+    """Control server that handles offset/status commands over a Unix socket."""
+
+    _socket_path: Path
+    _server: asyncio.AbstractServer | None
+
+    def __init__(self, socket_path: str) -> None:
+        """Initialize control server with socket path from config or explicit override."""
+        self._socket_path = Path(socket_path)
+        self._server: asyncio.AbstractServer | None = None
+
+    async def start(self, session: "WatchCoordinator") -> bool:
+        """Start listening for control requests and bind session handlers."""
+        if not await self._prepare_socket_path():
+            return False
+
+        self._socket_path.parent.mkdir(parents=True, exist_ok=True)
+        self._server = await asyncio.start_unix_server(
+            lambda r, w: self._handle(session, r, w),
+            path=str(self._socket_path),
+        )
+        return True
+
+    async def _prepare_socket_path(self) -> bool:
+        """Ensure socket path is usable and reject when another session is active."""
+        if not self._socket_path.exists():
+            return True
+
+        try:
+            # probe the socket to distinguish a live session from a stale socket file
+            reader, writer = await asyncio.open_unix_connection(str(self._socket_path))
+            writer.close()
+            await writer.wait_closed()
+            # connection succeeded → another watch session is actively listening
+            logger.error(
+                "A watch session is already running. Use 'lrx watch ctl status'."
+            )
+            return False
+        except Exception:
+            # connection refused / file is stale → safe to remove and reuse
+            try:
+                self._socket_path.unlink(missing_ok=True)
+            except Exception:
+                pass
+            return True
+
+    async def stop(self) -> None:
+        """Stop control server and remove stale socket path."""
+        if self._server is not None:
+            self._server.close()
+            await self._server.wait_closed()
+            self._server = None
+        try:
+            self._socket_path.unlink(missing_ok=True)
+        except Exception:
+            pass
+
+    async def _handle(
+        self,
+        session: "WatchCoordinator",
+        reader: asyncio.StreamReader,
+        writer: asyncio.StreamWriter,
+    ) -> None:
+        """Handle one control request and send JSON response."""
+        resp: dict[str, object] = {"ok": False, "error": "internal error"}
+        try:
+            line = await reader.readline()
+            if not line:
+                resp = {"ok": False, "error": "empty request"}
+            else:
+                req = json.loads(line.decode("utf-8"))
+                cmd = req.get("cmd")
+                if cmd == "offset":
+                    delta = int(req.get("delta", 0))
+                    resp = session.handle_offset(delta)
+                elif cmd == "status":
+                    resp = session.handle_status()
+                else:
+                    resp = {"ok": False, "error": "unknown command"}
+        except Exception as e:
+            resp = {"ok": False, "error": str(e)}
+        finally:
+            writer.write((json.dumps(resp) + "\n").encode("utf-8"))
+            await writer.drain()
+            writer.close()
+            await writer.wait_closed()
+
+
+class ControlClient:
+    """Control client used by CLI commands to talk to active watch session."""
+
+    _socket_path: Path
+
+    def __init__(self, socket_path: str) -> None:
+        """Initialize control client with socket path from config or explicit override."""
+        self._socket_path = Path(socket_path)
+
+    async def _send_async(self, cmd: dict[str, object]) -> dict[str, object]:
+        """Send one JSON command to control server and return JSON response."""
+        if not self._socket_path.exists():
+            return {"ok": False, "error": "No watch session running."}
+
+        try:
+            reader, writer = await asyncio.open_unix_connection(str(self._socket_path))
+        except Exception:
+            return {"ok": False, "error": "No watch session running."}
+
+        writer.write((json.dumps(cmd) + "\n").encode("utf-8"))
+        await writer.drain()
+        line = await reader.readline()
+        writer.close()
+        await writer.wait_closed()
+        if not line:
+            return {"ok": False, "error": "Empty response."}
+        return json.loads(line.decode("utf-8"))
+
+    def send(self, cmd: dict[str, object]) -> dict[str, object]:
+        """Synchronous wrapper around async control request."""
+        return asyncio.run(self._send_async(cmd))
+
+
+def parse_delta(raw: str) -> tuple[bool, int | None, str | None]:
+    """Parse signed millisecond offset delta string for ctl offset command."""
+    value = raw.strip()
+    try:
+        if value.startswith("+"):
+            return True, int(value[1:]), None
+        if value.startswith("-"):
+            # keep the sign by negating; bare int() would accept "-123" too but
+            # explicit split is clearer about intent and avoids double-negative edge cases
+            return True, -int(value[1:]), None
+        return True, int(value), None
+    except ValueError:
+        return False, None, f"Invalid offset delta: {raw}"
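The wire format here is deliberately simple: one newline-terminated JSON object per request, one per response. A self-contained sketch of that framing over a throwaway Unix socket (the `handle` stand-in and its `offset_ms` field are illustrative, not the session's real handlers):

```python
import asyncio
import json
import tempfile
from pathlib import Path


def handle(req: dict) -> dict:
    # hypothetical stand-in for session.handle_offset / handle_status
    if req.get("cmd") == "offset":
        return {"ok": True, "offset_ms": int(req.get("delta", 0))}
    return {"ok": False, "error": "unknown command"}


async def serve(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    line = await reader.readline()  # one JSON object per line
    resp = handle(json.loads(line))
    writer.write((json.dumps(resp) + "\n").encode("utf-8"))
    await writer.drain()
    writer.close()


async def main() -> dict:
    sock = Path(tempfile.mkdtemp()) / "watch.sock"
    server = await asyncio.start_unix_server(serve, path=str(sock))
    reader, writer = await asyncio.open_unix_connection(str(sock))
    writer.write((json.dumps({"cmd": "offset", "delta": 250}) + "\n").encode("utf-8"))
    await writer.drain()
    resp = json.loads(await reader.readline())
    writer.close()
    server.close()
    await server.wait_closed()
    return resp


print(asyncio.run(main()))
# → {'ok': True, 'offset_ms': 250}
```

Line-delimited JSON keeps both ends trivially parseable with `readline()`, which is why the server can treat an empty read as "empty request" rather than buffering partial frames.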
@@ -0,0 +1,89 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-04-10 08:14:41
+Description: Debounced lyric fetch orchestration for watch session.
+"""
+
+from __future__ import annotations
+
+
+import asyncio
+from typing import Awaitable, Callable, Optional
+
+from ..lrc import LRCData
+from ..models import TrackMeta
+
+
+class LyricFetcher:
+    """Debounces track updates and runs at most one lyric fetch task at a time."""
+
+    _watch_debounce_ms: int
+    _fetch_func: Callable[[TrackMeta], Awaitable[Optional[LRCData]]]
+    _on_fetching: Callable[[], Awaitable[None] | None]
+    _on_result: Callable[[Optional[LRCData]], Awaitable[None] | None]
+    _debounce_task: asyncio.Task | None
+    _fetch_task: asyncio.Task | None
+    _pending_track: TrackMeta | None
+
+    def __init__(
+        self,
+        fetch_func: Callable[[TrackMeta], Awaitable[Optional[LRCData]]],
+        on_fetching: Callable[[], Awaitable[None] | None],
+        on_result: Callable[[Optional[LRCData]], Awaitable[None] | None],
+        watch_debounce_ms: int,
+    ) -> None:
+        """Initialize fetch callbacks and runtime options."""
+        self._watch_debounce_ms = watch_debounce_ms
+        self._fetch_func = fetch_func
+        self._on_fetching = on_fetching
+        self._on_result = on_result
+        self._debounce_task: asyncio.Task | None = None
+        self._fetch_task: asyncio.Task | None = None
+        self._pending_track: TrackMeta | None = None
+
+    async def stop(self) -> None:
+        """Cancel and await all in-flight debounce/fetch tasks."""
+        for task in (self._debounce_task, self._fetch_task):
+            if task is not None:
+                task.cancel()
+        await asyncio.gather(
+            *[t for t in (self._debounce_task, self._fetch_task) if t is not None],
+            return_exceptions=True,
+        )
+        self._debounce_task = None
+        self._fetch_task = None
+
+    def request(self, track: TrackMeta) -> None:
+        """Request lyrics for track with debounce collapsing."""
+        self._pending_track = track
+        if self._debounce_task is not None:
+            # cancel any pending debounce window — the new request supersedes it
+            self._debounce_task.cancel()
+        self._debounce_task = asyncio.create_task(self._debounce_then_fetch())
+
+    async def _debounce_then_fetch(self) -> None:
+        """Wait debounce window then start a fresh fetch task for latest pending track."""
+        await asyncio.sleep(self._watch_debounce_ms / 1000.0)
+        track = self._pending_track
+        if track is None:
+            return
+
+        if self._fetch_task is not None:
+            # abort any in-flight fetch for a previous track before starting the new one
+            self._fetch_task.cancel()
+            await asyncio.gather(self._fetch_task, return_exceptions=True)
+
+        self._fetch_task = asyncio.create_task(self._do_fetch(track))
+
+    async def _do_fetch(self, track: TrackMeta) -> None:
+        """Execute fetch lifecycle callbacks and fetch lyrics for a track."""
+        # callbacks may be plain functions or coroutines — handle both
+        fetching_callback_result = self._on_fetching()
+        if asyncio.iscoroutine(fetching_callback_result):
+            await fetching_callback_result
+
+        lyrics = await self._fetch_func(track)
+
+        result_callback_result = self._on_result(lyrics)
+        if asyncio.iscoroutine(result_callback_result):
+            await result_callback_result
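The debounce-collapse behaviour of `LyricFetcher` can be shown in isolation: each new request cancels the pending timer task, so a burst of track-change events produces a single fetch. A simplified sketch (the `Debouncer` class and its names are illustrative, not part of the codebase):

```python
import asyncio

fired: list[str] = []


class Debouncer:
    """Stripped-down version of the LyricFetcher debounce window."""

    def __init__(self, delay_ms: int) -> None:
        self._delay = delay_ms / 1000.0
        self._task: asyncio.Task | None = None
        self._pending: str | None = None

    def request(self, value: str) -> None:
        self._pending = value
        if self._task is not None:
            # the new request supersedes the pending window
            self._task.cancel()
        self._task = asyncio.create_task(self._fire())

    async def _fire(self) -> None:
        await asyncio.sleep(self._delay)
        fired.append(self._pending)


async def main() -> None:
    d = Debouncer(20)
    for track in ("track-a", "track-b", "track-c"):
        d.request(track)      # three rapid "track changed" events
    await asyncio.sleep(0.1)  # let the last window expire


asyncio.run(main())
print(fired)
# → ['track-c']
```

The cancelled tasks never reach `fired.append`, which is exactly why the real fetcher can safely hammer `request()` on every MPRIS metadata signal.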
@@ -0,0 +1,402 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-04-10 08:14:27
+Description: Player discovery, state monitoring, and active-player selection for watch mode.
+"""
+
+from __future__ import annotations
+
+
+from dataclasses import dataclass
+from typing import Callable, Optional
+import asyncio
+
+from dbus_next.aio.message_bus import MessageBus
+from dbus_next.constants import BusType
+from dbus_next.message import Message
+from loguru import logger
+
+from ..models import TrackMeta
+from ..mpris import pick_active_player
+
+
+def _variant_value(item: object) -> object | None:
+    """Extract .value from DBus variant-like objects when available."""
+    if hasattr(item, "value"):
+        return getattr(item, "value")
+    return None
+
+
+@dataclass(slots=True)
+class PlayerState:
+    """Current observable state for one MPRIS player."""
+
+    bus_name: str
+    status: str
+    track: Optional[TrackMeta]
+
+
+@dataclass(frozen=True, slots=True)
+class PlayerTarget:
+    """Constraint for choosing which players are visible to watch."""
+
+    hint: Optional[str] = None
+
+    @property
+    def normalized_hint(self) -> str:
+        """Return normalized lowercase player hint string."""
+        return (self.hint or "").strip().lower()
+
+    def allows(self, bus_name: str) -> bool:
+        """Return whether given MPRIS bus name passes this target constraint."""
+        normalized_hint = self.normalized_hint
+        if not normalized_hint:
+            return True
+        return _keyword_match(bus_name, normalized_hint)
+
+
+def _keyword_match(text: str, keyword: str) -> bool:
+    """Return True when keyword exists in text, case-insensitively."""
+    return keyword.strip().lower() in text.lower()
+
+
+class PlayerMonitor:
+    """Tracks MPRIS players and forwards signal-driven state updates to session callbacks."""
+
+    _player_blacklist: tuple[str, ...]
+    _on_players_changed: Callable[[], None]
+    _on_seeked: Callable[[str, int], None]
+    _on_playback_status: Callable[[str, str], None]
+    _target: PlayerTarget
+    players: dict[str, PlayerState]
+    _bus: MessageBus | None
+    _props_cache: dict[str, object]
+
+    def __init__(
+        self,
+        on_players_changed: Callable[[], None],
+        on_seeked: Callable[[str, int], None],
+        on_playback_status: Callable[[str, str], None],
+        player_blacklist: tuple[str, ...],
+        target: Optional[PlayerTarget] = None,
+    ) -> None:
+        """Initialize monitor callbacks, runtime options, and player target filter."""
+        self._player_blacklist = player_blacklist
+        self._on_players_changed = on_players_changed
+        self._on_seeked = on_seeked
+        self._on_playback_status = on_playback_status
+        self._target = target or PlayerTarget()
+        self.players: dict[str, PlayerState] = {}
+        self._bus: MessageBus | None = None
+        self._props_cache: dict[str, object] = {}
+
+    async def start(self) -> None:
+        """Start DBus monitoring and populate initial player snapshot."""
+        self._bus = await MessageBus(bus_type=BusType.SESSION).connect()
+        self._bus.add_message_handler(self._on_message)
+        await self._add_match_rules()
+        await self.refresh()
+
+    async def close(self) -> None:
+        """Stop DBus monitoring and close bus connection."""
+        self._props_cache.clear()
+        if self._bus:
+            self._bus.disconnect()
+            self._bus = None
+
+    async def _get_player_props(self, bus_name: str) -> object | None:
+        """Return cached DBus Properties interface for player, creating it if missing."""
+        if not self._bus:
+            return None
+        if bus_name in self._props_cache:
+            return self._props_cache[bus_name]
+
+        try:
+            introspection = await self._bus.introspect(
+                bus_name, "/org/mpris/MediaPlayer2"
+            )
+            proxy = self._bus.get_proxy_object(
+                bus_name, "/org/mpris/MediaPlayer2", introspection
+            )
+            props = proxy.get_interface("org.freedesktop.DBus.Properties")
+            self._props_cache[bus_name] = props
+            return props
+        except Exception as e:
+            logger.debug(f"Failed to prepare DBus props for {bus_name}: {e}")
+            self._props_cache.pop(bus_name, None)
+            return None
+
+    async def _add_match_rules(self) -> None:
+        """Register signal subscriptions needed by monitor."""
+        if not self._bus:
+            return
+        rules = [
+            "type='signal',interface='org.freedesktop.DBus',member='NameOwnerChanged'",
+            "type='signal',interface='org.freedesktop.DBus.Properties',member='PropertiesChanged'",
+            "type='signal',interface='org.mpris.MediaPlayer2.Player',member='Seeked'",
+        ]
+        for rule in rules:
+            try:
+                await self._bus.call(
+                    Message(
+                        destination="org.freedesktop.DBus",
+                        path="/org/freedesktop/DBus",
+                        interface="org.freedesktop.DBus",
+                        member="AddMatch",
+                        signature="s",
+                        body=[rule],
+                    )
+                )
+            except Exception as e:
+                logger.debug(f"Failed to add DBus match rule {rule}: {e}")
+
+    async def _list_mpris_players(self) -> list[str]:
+        """List visible MPRIS players after applying target filter and optional blacklist.
+
+        The blacklist is skipped when an explicit player hint is active so that
+        ``--player`` can target any player regardless of PLAYER_BLACKLIST.
+        """
+        if not self._bus:
+            return []
+        try:
+            reply = await self._bus.call(
+                Message(
+                    destination="org.freedesktop.DBus",
+                    path="/org/freedesktop/DBus",
+                    interface="org.freedesktop.DBus",
+                    member="ListNames",
+                )
+            )
+            if not reply or not reply.body:
+                return []
+            out: list[str] = []
+            hint_active = bool(self._target.normalized_hint)
+            for name in reply.body[0]:
+                if not name.startswith("org.mpris.MediaPlayer2."):
+                    continue
+                # --player bypasses the blacklist; only filter when no hint is given
+                if not hint_active and any(
+                    x.lower() in name.lower() for x in self._player_blacklist
+                ):
+                    continue
+                if not self._target.allows(name):
+                    continue
+                out.append(name)
+            return out
+        except Exception as e:
+            logger.debug(f"Failed to list mpris players: {e}")
+            return []
+
+    async def _fetch_player_state(self, bus_name: str) -> Optional[PlayerState]:
+        """Read current playback status and metadata from one player service."""
+        props = await self._get_player_props(bus_name)
+        if props is None:
+            return None
+        try:
+            status_var = await getattr(props, "call_get")(
+                "org.mpris.MediaPlayer2.Player", "PlaybackStatus"
+            )
+            metadata_var = await getattr(props, "call_get")(
+                "org.mpris.MediaPlayer2.Player", "Metadata"
+            )
+            status = status_var.value if status_var else "Stopped"
+            track = self._track_from_metadata(
+                metadata_var.value if metadata_var else {}
+            )
+            return PlayerState(bus_name=bus_name, status=status, track=track)
+        except Exception as e:
+            logger.debug(f"Failed to read state for {bus_name}: {e}")
+            self._props_cache.pop(bus_name, None)
+            return None
+
+    def _track_from_metadata(self, metadata: dict[str, object]) -> Optional[TrackMeta]:
+        """Build TrackMeta object from MPRIS metadata map."""
+        if not metadata:
+            return None
+        trackid = metadata.get("mpris:trackid")
+        if trackid is not None:
+            trackid = _variant_value(trackid)
+        # normalize Spotify track IDs — the raw MPRIS value varies by client version
+        if isinstance(trackid, str) and trackid.startswith("spotify:track:"):
+            trackid = trackid.removeprefix("spotify:track:")
+        elif isinstance(trackid, str) and trackid.startswith("/com/spotify/track/"):
+            trackid = trackid.removeprefix("/com/spotify/track/")
+        elif not isinstance(trackid, str):
+            trackid = None
+
+        length = metadata.get("mpris:length")
+        length_ms = None
+        length_value = _variant_value(length) if length is not None else None
+        if isinstance(length_value, int):
+            # MPRIS reports length in microseconds; convert to milliseconds
+            length_ms = length_value // 1000
+
+        artist = metadata.get("xesam:artist")
+        artist_v = None
+        artist_value = _variant_value(artist) if artist is not None else None
+        if isinstance(artist_value, list) and artist_value:
+            # xesam:artist is a list; take the first entry as primary artist
+            artist_v = artist_value[0]
+
+        title = metadata.get("xesam:title")
+        album = metadata.get("xesam:album")
+        url = metadata.get("xesam:url")
+
+        title_value = _variant_value(title) if title is not None else None
+        album_value = _variant_value(album) if album is not None else None
+        url_value = _variant_value(url) if url is not None else None
+
+        return TrackMeta(
+            trackid=trackid,
+            length=length_ms,
+            album=album_value if isinstance(album_value, str) else None,
+            artist=artist_v,
+            title=title_value if isinstance(title_value, str) else None,
+            url=url_value if isinstance(url_value, str) else None,
+        )
+
+    async def refresh(self) -> None:
+        """Refresh full player snapshot and notify session when visible set changes."""
+        players = await self._list_mpris_players()
+        updated: dict[str, PlayerState] = {}
+        for bus_name in players:
+            st = await self._fetch_player_state(bus_name)
+            if st is not None:
+                updated[bus_name] = st
+
+        before = set(self.players.keys())
+        after = set(updated.keys())
+        added = sorted(after - before)
+        removed = sorted(before - after)
+
+        for bus_name in removed:
+            self._props_cache.pop(bus_name, None)
+
+        self.players = updated
+
+        if added or removed:
+            logger.info(
+                "MPRIS players updated: added={}, removed={}",
+                added,
+                removed,
+            )
+
+        self._on_players_changed()
+
+    async def _resolve_well_known_name(self, unique_sender: str) -> str | None:
+        """Map a DBus unique sender (e.g. :1.42) to a tracked MPRIS bus name."""
+        if unique_sender in self.players:
+            # sender is already a well-known name we track (unlikely but fast path)
+            return unique_sender
+        if not self._bus:
+            return None
+
+        # Seeked signals arrive with the unique connection name (:1.N), not the
+        # well-known bus name (org.mpris.MediaPlayer2.X). Ask D-Bus which
+        # well-known name owns that unique name.
+        for bus_name in self.players:
+            try:
+                reply = await self._bus.call(
+                    Message(
+                        destination="org.freedesktop.DBus",
+                        path="/org/freedesktop/DBus",
+                        interface="org.freedesktop.DBus",
+                        member="GetNameOwner",
+                        signature="s",
+                        body=[bus_name],
+                    )
+                )
+                if reply and reply.body and str(reply.body[0]) == unique_sender:
+                    return bus_name
+            except Exception:
+                continue
+        return None
+
+    async def _handle_seeked_signal(self, sender: str, position_ms: int) -> None:
+        """Route Seeked signal to session using well-known bus name when possible."""
+        bus_name = await self._resolve_well_known_name(sender)
+        if bus_name is not None:
+            self._on_seeked(bus_name, position_ms)
+            return
+
+        # If we cannot map sender reliably, force a state refresh to converge.
+        await self.refresh()
+
+    def _on_message(self, message: Message) -> bool:
+        """Low-level DBus signal handler for player lifecycle/status/seek events."""
+        try:
+            if (
+                message.interface == "org.freedesktop.DBus"
+                and message.member == "NameOwnerChanged"
+            ):
+                # a player appeared or disappeared — rescan the full player list
+                if message.body and str(message.body[0]).startswith(
+                    "org.mpris.MediaPlayer2."
+                ):
+                    asyncio.create_task(self.refresh())
+                return False
+
+            if (
+                message.interface == "org.freedesktop.DBus.Properties"
+                and message.member == "PropertiesChanged"
+            ):
+                # message.sender is a unique connection name, not the well-known bus
+                # name, so we can't filter by sender here — match by object path and
+                # interface instead to scope it to MPRIS Player properties only
+                path_ok = message.path == "/org/mpris/MediaPlayer2"
+                iface = message.body[0] if message.body else None
+                if path_ok and iface == "org.mpris.MediaPlayer2.Player":
+                    asyncio.create_task(self.refresh())
+                return False
+
+            if (
+                message.interface == "org.mpris.MediaPlayer2.Player"
+                and message.member == "Seeked"
+            ):
+                sender = message.sender or ""
+                if sender and message.body:
+                    # MPRIS Seeked position is in microseconds; convert to ms
+                    position_us = int(message.body[0])
+                    asyncio.create_task(
+                        self._handle_seeked_signal(
+                            sender,
+                            max(0, position_us // 1000),
+                        )
+                    )
+                return False
+        except Exception as e:
+            logger.debug(f"PlayerMonitor signal handling error: {e}")
+        return False
+
+    async def get_position_ms(self, bus_name: str) -> Optional[int]:
+        """Read player-reported position in milliseconds."""
+        props = await self._get_player_props(bus_name)
+        if props is None:
+            return None
+        try:
+            position_var = await getattr(props, "call_get")(
+                "org.mpris.MediaPlayer2.Player", "Position"
+            )
+            if position_var is None:
+                return None
+            return max(0, int(position_var.value) // 1000)
+        except Exception as e:
+            logger.debug(f"Failed to read position from {bus_name}: {e}")
+            self._props_cache.pop(bus_name, None)
+            return None
+
+
+class ActivePlayerSelector:
+    @staticmethod
+    def select(
+        players: dict[str, PlayerState],
+        last_active: str | None,
+        preferred_player: str,
+    ) -> str | None:
+        """Select active player by playing state, preferred keyword, and continuity."""
+        if not players:
+            return None
+        all_names = list(players.keys())
+        playing = [name for name, st in players.items() if st.status == "Playing"]
+        return pick_active_player(all_names, playing, preferred_player, last_active)
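Two unit details in the monitor above are easy to trip over: MPRIS reports `mpris:length`, `Position`, and `Seeked` in microseconds (hence the repeated `// 1000` conversions to milliseconds), and Spotify's `mpris:trackid` arrives in two different shapes depending on the client version. A small standalone sketch of both normalizations (the sample track ID is just an example value):

```python
def normalize_spotify_trackid(trackid: str) -> str:
    """Strip either known Spotify prefix, mirroring _track_from_metadata."""
    if trackid.startswith("spotify:track:"):
        return trackid.removeprefix("spotify:track:")
    if trackid.startswith("/com/spotify/track/"):
        return trackid.removeprefix("/com/spotify/track/")
    return trackid


def us_to_ms(position_us: int) -> int:
    """MPRIS microseconds to milliseconds, clamped to non-negative."""
    return max(0, position_us // 1000)


print(normalize_spotify_trackid("spotify:track:6rqhFgbbKwnb9MLmUQDhG6"))
# → 6rqhFgbbKwnb9MLmUQDhG6
print(normalize_spotify_trackid("/com/spotify/track/6rqhFgbbKwnb9MLmUQDhG6"))
# → 6rqhFgbbKwnb9MLmUQDhG6
print(us_to_ms(187_000_000))
# → 187000
```

Normalizing both forms to the bare base62 ID is what lets the Spotify and Musixmatch providers use the MPRIS trackid directly.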
@@ -0,0 +1,390 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-04-10 08:10:52
+Description: Watch orchestration with explicit MVVM role boundaries.
+
+- Model: WatchModel stores domain state.
+- ViewModel: WatchViewModel projects model to output-facing state/signature.
+- Coordinator: WatchCoordinator wires services and drives async workflows.
+"""
+
+from __future__ import annotations
+
+import asyncio
+from dataclasses import asdict
+from typing import Optional
+
+from loguru import logger
+
+from ..core import LrcManager
+from ..lrc import LRCData
+from ..models import TrackMeta
+from .control import ControlServer
+from .fetcher import LyricFetcher
+from ..config import AppConfig
+from .view import BaseOutput, LyricView, WatchState, WatchStatus
+from .player import ActivePlayerSelector, PlayerMonitor, PlayerTarget
+from .tracker import PositionTracker
+
+
+class WatchModel:
+    """Model layer that owns watch state and lyric timeline representation."""
+
+    offset_ms: int
+    active_player: str | None
+    active_track_key: str | None
+    status: WatchStatus
+    lyrics: LyricView | None
+
+    def __init__(self) -> None:
+        self.offset_ms = 0
+        self.active_player: str | None = None
+        self.active_track_key: str | None = None
+        self.status: WatchStatus = WatchStatus.IDLE
+        self.lyrics: LyricView | None = None
+
+    def set_lyrics(self, lyrics: LRCData | None) -> None:
+        """Update lyrics and rebuild projection once per lyric object change."""
+        if lyrics is None:
+            self.lyrics = None
+            return
+
+        self.lyrics = LyricView.from_lrc(lyrics)
+
+    def state_signature(self, track: TrackMeta | None, position_ms: int) -> tuple:
+        """Build dedupe signature from model state and current lyric cursor."""
+        # prefer trackid when available; fall back to display name for players
+        # that don't expose a stable ID (e.g. some MPRIS implementations)
+        track_key = (
+            track.trackid
+            if track and track.trackid
+            else track.display_name()
+            if track
+            else None
+        )
+
+        if self.status != WatchStatus.OK or self.lyrics is None:
+            # non-OK states don't have cursor position — discriminate by status alone
+            return ("status", self.status, self.active_player, track_key)
+        at_ms = position_ms + self.offset_ms
+        cursor = self.lyrics.signature_cursor(at_ms)
+        return ("lyrics", self.active_player, track_key, cursor)
+
+
+class WatchViewModel:
+    """ViewModel that projects WatchModel into view-consumable snapshots."""
+
+    _model: WatchModel
+
+    def __init__(self, model: WatchModel) -> None:
+        self._model = model
+
+    def signature(self, track: TrackMeta | None, position_ms: int) -> tuple:
+        """Build dedupe signature for current projected state."""
+        return self._model.state_signature(track, position_ms)
+
+    def state(self, track: TrackMeta | None, position_ms: int) -> WatchState:
+        """Project model values into immutable WatchState payload."""
+        return WatchState(
+            track=track,
+            lyrics=self._model.lyrics,
+            position_ms=position_ms,
+            offset_ms=self._model.offset_ms,
+            status=self._model.status,
+        )
+
+
+class WatchCoordinator:
+    """Application/service orchestration layer for watch runtime."""
+
+    _manager: LrcManager
+    _output: BaseOutput
+    _config: AppConfig
+    _model: WatchModel
+    _view_model: WatchViewModel
+    _player_hint: str | None
+    _last_emit_signature: tuple | None
+    _target: PlayerTarget
+    _control: ControlServer
+    _player_monitor: PlayerMonitor
+    _tracker: PositionTracker
+    _fetcher: LyricFetcher
+    _emit_scheduled: bool
+    _calibration_task: asyncio.Task | None
+
+    def __init__(
+        self,
+        manager: LrcManager,
+        output: BaseOutput,
+        player_hint: str | None,
+        config: AppConfig,
+    ) -> None:
+        self._manager = manager
+        self._output = output
+        self._config = config
+        self._model = WatchModel()
+        self._view_model = WatchViewModel(self._model)
+        self._player_hint = player_hint
+        self._last_emit_signature: tuple | None = None
+        self._emit_scheduled = False
+        self._calibration_task = None
+
+        self._target = PlayerTarget(hint=player_hint)
|
||||||
|
|
||||||
|
self._control = ControlServer(socket_path=config.watch.socket_path)
|
||||||
|
self._player_monitor = PlayerMonitor(
|
||||||
|
on_players_changed=self._on_player_change,
|
||||||
|
on_seeked=self._on_seeked,
|
||||||
|
on_playback_status=self._on_playback_status,
|
||||||
|
player_blacklist=self._config.general.player_blacklist,
|
||||||
|
target=self._target,
|
||||||
|
)
|
||||||
|
self._tracker = PositionTracker(
|
||||||
|
poll_position_ms=self._player_monitor.get_position_ms,
|
||||||
|
config=self._config,
|
||||||
|
on_tick=self._on_tracker_tick,
|
||||||
|
)
|
||||||
|
self._fetcher = LyricFetcher(
|
||||||
|
fetch_func=self._fetch_lyrics,
|
||||||
|
on_fetching=self._on_fetching,
|
||||||
|
on_result=self._on_lyrics_update,
|
||||||
|
watch_debounce_ms=self._config.watch.debounce_ms,
|
||||||
|
)
|
||||||
|
|
||||||
|
async def run(self) -> bool:
|
||||||
|
"""Run watch workflow and return success flag."""
|
||||||
|
logger.info(
|
||||||
|
"watch session starting (player filter: {})",
|
||||||
|
self._player_hint or "<none>",
|
||||||
|
)
|
||||||
|
|
||||||
|
if not await self._control.start(self):
|
||||||
|
return False
|
||||||
|
try:
|
||||||
|
await self._player_monitor.start()
|
||||||
|
await self._tracker.start()
|
||||||
|
self._calibration_task = asyncio.create_task(self._calibration_loop())
|
||||||
|
# emit once at startup so outputs don't sit blank until the first event
|
||||||
|
self._schedule_emit()
|
||||||
|
# block forever; CancelledError from signal handler exits the loop cleanly
|
||||||
|
await asyncio.Event().wait()
|
||||||
|
return True
|
||||||
|
except asyncio.CancelledError:
|
||||||
|
return True
|
||||||
|
except Exception as exc:
|
||||||
|
logger.exception("watch runtime error: {}", exc)
|
||||||
|
return False
|
||||||
|
finally:
|
||||||
|
logger.info("watch session stopping")
|
||||||
|
if self._calibration_task is not None:
|
||||||
|
self._calibration_task.cancel()
|
||||||
|
await asyncio.gather(self._calibration_task, return_exceptions=True)
|
||||||
|
self._calibration_task = None
|
||||||
|
await self._fetcher.stop()
|
||||||
|
await self._tracker.stop()
|
||||||
|
await self._player_monitor.close()
|
||||||
|
await self._control.stop()
|
||||||
|
|
||||||
|
async def _calibration_loop(self) -> None:
|
||||||
|
"""Periodically refresh full MPRIS snapshot as fallback calibration."""
|
||||||
|
interval = max(0.1, self._config.watch.calibration_interval_s)
|
||||||
|
while True:
|
||||||
|
await asyncio.sleep(interval)
|
||||||
|
try:
|
||||||
|
await self._player_monitor.refresh()
|
||||||
|
except asyncio.CancelledError:
|
||||||
|
raise
|
||||||
|
except Exception as exc:
|
||||||
|
logger.debug("mpris calibration refresh failed: {}", exc)
|
||||||
|
|
||||||
|
def _active_track(self) -> TrackMeta | None:
|
||||||
|
"""Return active track metadata from selected player."""
|
||||||
|
player = self._player_monitor.players.get(self._model.active_player or "")
|
||||||
|
return player.track if player else None
|
||||||
|
|
||||||
|
def _request_fetch_for_active_track(self, reason: str) -> bool:
|
||||||
|
"""Trigger lyric fetch for active track when needed."""
|
||||||
|
track = self._active_track()
|
||||||
|
if track is None:
|
||||||
|
return False
|
||||||
|
if self._model.lyrics is not None:
|
||||||
|
# lyrics already loaded — nothing to fetch
|
||||||
|
return False
|
||||||
|
if self._model.status == WatchStatus.FETCHING:
|
||||||
|
# a fetch is already in flight — don't queue another
|
||||||
|
return False
|
||||||
|
logger.info("fetching lyrics for track ({}): {}", reason, track.display_name())
|
||||||
|
self._fetcher.request(track)
|
||||||
|
return True
|
||||||
|
|
||||||
|
async def _fetch_lyrics(self, track: TrackMeta) -> Optional[LRCData]:
|
||||||
|
"""Fetch lyrics in worker thread."""
|
||||||
|
result = await asyncio.to_thread(
|
||||||
|
self._manager.fetch_for_track,
|
||||||
|
track,
|
||||||
|
None,
|
||||||
|
False,
|
||||||
|
False,
|
||||||
|
)
|
||||||
|
if result and result.lyrics:
|
||||||
|
return result.lyrics
|
||||||
|
return None
|
||||||
|
|
||||||
|
def _on_player_change(self) -> None:
|
||||||
|
"""React to monitor player snapshot change."""
|
||||||
|
prev_player = self._model.active_player
|
||||||
|
prev_track_key = self._model.active_track_key
|
||||||
|
|
||||||
|
selected = ActivePlayerSelector.select(
|
||||||
|
self._player_monitor.players,
|
||||||
|
self._model.active_player,
|
||||||
|
self._config.general.preferred_player,
|
||||||
|
)
|
||||||
|
self._model.active_player = selected
|
||||||
|
|
||||||
|
if selected != prev_player:
|
||||||
|
logger.info(
|
||||||
|
"active player changed: {} -> {}",
|
||||||
|
prev_player or "<none>",
|
||||||
|
selected or "<none>",
|
||||||
|
)
|
||||||
|
|
||||||
|
if selected is None:
|
||||||
|
self._model.status = WatchStatus.IDLE
|
||||||
|
self._model.active_track_key = None
|
||||||
|
self._model.set_lyrics(None)
|
||||||
|
self._schedule_emit()
|
||||||
|
return
|
||||||
|
|
||||||
|
state = self._player_monitor.players.get(selected)
|
||||||
|
if state is None:
|
||||||
|
self._model.status = WatchStatus.IDLE
|
||||||
|
self._model.active_track_key = None
|
||||||
|
self._model.set_lyrics(None)
|
||||||
|
self._schedule_emit()
|
||||||
|
return
|
||||||
|
|
||||||
|
track = state.track
|
||||||
|
track_key = (
|
||||||
|
track.trackid
|
||||||
|
if track and track.trackid
|
||||||
|
else track.display_name()
|
||||||
|
if track
|
||||||
|
else None
|
||||||
|
)
|
||||||
|
|
||||||
|
track_changed = track_key != prev_track_key
|
||||||
|
player_changed = selected != prev_player
|
||||||
|
if track_changed or player_changed:
|
||||||
|
# clear stale lyrics immediately so the old track's lines don't flash
|
||||||
|
self._model.set_lyrics(None)
|
||||||
|
|
||||||
|
self._model.active_track_key = track_key
|
||||||
|
|
||||||
|
asyncio.create_task(
|
||||||
|
self._tracker.set_active_player(
|
||||||
|
selected,
|
||||||
|
state.status,
|
||||||
|
track_key,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
# only fetch on identity change — calibration ticks must not re-trigger fetches
|
||||||
|
started_fetch = False
|
||||||
|
if track is not None and (player_changed or track_changed):
|
||||||
|
started_fetch = self._request_fetch_for_active_track("track-changed")
|
||||||
|
|
||||||
|
# derive status from what actually happened this tick; preserve FETCHING
|
||||||
|
# if an in-flight request was started before this snapshot arrived
|
||||||
|
if self._model.lyrics is not None:
|
||||||
|
self._model.status = WatchStatus.OK
|
||||||
|
elif started_fetch:
|
||||||
|
self._model.status = WatchStatus.FETCHING
|
||||||
|
elif self._model.status != WatchStatus.FETCHING:
|
||||||
|
# don't overwrite FETCHING with NO_LYRICS while a request is in flight
|
||||||
|
self._model.status = WatchStatus.NO_LYRICS
|
||||||
|
self._schedule_emit()
|
||||||
|
|
||||||
|
def _on_seeked(self, bus_name: str, position_ms: int) -> None:
|
||||||
|
"""Forward seek event to tracker."""
|
||||||
|
asyncio.create_task(self._tracker.on_seeked(bus_name, position_ms))
|
||||||
|
|
||||||
|
def _on_playback_status(self, bus_name: str, status: str) -> None:
|
||||||
|
"""Forward playback status change to position tracker."""
|
||||||
|
asyncio.create_task(self._tracker.on_playback_status(bus_name, status))
|
||||||
|
|
||||||
|
def _on_tracker_tick(self) -> None:
|
||||||
|
"""Emit updates from tracker tick only while lyrics are actively rendering."""
|
||||||
|
if self._model.status == WatchStatus.OK and self._output.position_sensitive:
|
||||||
|
self._schedule_emit()
|
||||||
|
|
||||||
|
def _schedule_emit(self) -> None:
|
||||||
|
"""Coalesce frequent events into at most one in-flight emit task."""
|
||||||
|
if self._emit_scheduled:
|
||||||
|
# a task is already queued; it will pick up the latest model state when it runs
|
||||||
|
return
|
||||||
|
self._emit_scheduled = True
|
||||||
|
asyncio.create_task(self._run_scheduled_emit())
|
||||||
|
|
||||||
|
async def _run_scheduled_emit(self) -> None:
|
||||||
|
"""Run one coalesced emit and release scheduler gate."""
|
||||||
|
try:
|
||||||
|
await self._emit_state()
|
||||||
|
finally:
|
||||||
|
# release the gate even on error so future events can still schedule
|
||||||
|
self._emit_scheduled = False
|
||||||
|
|
||||||
|
async def _on_fetching(self) -> None:
|
||||||
|
"""Mark model as fetching and emit state."""
|
||||||
|
self._model.status = WatchStatus.FETCHING
|
||||||
|
await self._emit_state()
|
||||||
|
|
||||||
|
async def _on_lyrics_update(self, lyrics: Optional[LRCData]) -> None:
|
||||||
|
"""Update model with fetched lyrics and emit state."""
|
||||||
|
self._model.set_lyrics(lyrics)
|
||||||
|
self._model.status = (
|
||||||
|
WatchStatus.OK if lyrics is not None else WatchStatus.NO_LYRICS
|
||||||
|
)
|
||||||
|
logger.info(
|
||||||
|
"lyrics update result: {}",
|
||||||
|
"found" if lyrics is not None else "not found",
|
||||||
|
)
|
||||||
|
await self._emit_state()
|
||||||
|
|
||||||
|
async def _emit_state(self) -> None:
|
||||||
|
"""Emit output state only when semantic signature changes."""
|
||||||
|
player = self._player_monitor.players.get(self._model.active_player or "")
|
||||||
|
track = player.track if player else None
|
||||||
|
# position=0 for non-position-sensitive outputs so the signature is stable
|
||||||
|
# across ticks and on_state fires at most once per track+status transition
|
||||||
|
position = (
|
||||||
|
await self._tracker.get_position_ms()
|
||||||
|
if self._output.position_sensitive
|
||||||
|
else 0
|
||||||
|
)
|
||||||
|
|
||||||
|
signature = self._view_model.signature(track, position)
|
||||||
|
if signature == self._last_emit_signature:
|
||||||
|
# state hasn't changed semantically — skip redundant render
|
||||||
|
return
|
||||||
|
self._last_emit_signature = signature
|
||||||
|
state = self._view_model.state(track, position)
|
||||||
|
await self._output.on_state(state)
|
||||||
|
|
||||||
|
def handle_offset(self, delta: int) -> dict:
|
||||||
|
"""Apply offset update requested by control channel."""
|
||||||
|
self._model.offset_ms += delta
|
||||||
|
return {"ok": True, "offset_ms": self._model.offset_ms}
|
||||||
|
|
||||||
|
def handle_status(self) -> dict:
|
||||||
|
"""Return status payload for control channel."""
|
||||||
|
player = self._player_monitor.players.get(self._model.active_player or "")
|
||||||
|
track = asdict(player.track) if player and player.track else None
|
||||||
|
return {
|
||||||
|
"ok": True,
|
||||||
|
"offset_ms": self._model.offset_ms,
|
||||||
|
"player": self._model.active_player,
|
||||||
|
"track": track,
|
||||||
|
"position_ms": self._tracker.peek_position_ms(),
|
||||||
|
"lyrics_status": self._model.status,
|
||||||
|
}
|
||||||
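The `_schedule_emit`/`_run_scheduled_emit` pair implements a small coalescing gate: bursts of change events collapse into at most one queued render, and the gate is released in a `finally` so errors cannot wedge it. A standalone sketch of the same pattern (names are illustrative, not from this codebase):

```python
import asyncio


class Coalescer:
    """Collapse bursts of notifications into at most one in-flight render task."""

    def __init__(self) -> None:
        self._scheduled = False
        self.renders = 0

    def schedule(self) -> None:
        if self._scheduled:
            # a task is already queued; it will read the latest state when it runs
            return
        self._scheduled = True
        asyncio.ensure_future(self._run())

    async def _run(self) -> None:
        try:
            self.renders += 1  # stands in for the real emit/render work
        finally:
            # release the gate even on error so future events can still schedule
            self._scheduled = False


async def main() -> int:
    c = Coalescer()
    for _ in range(100):
        c.schedule()  # 100 synchronous events before the loop yields
    await asyncio.sleep(0)  # yield once: the single queued task runs here
    return c.renders


print(asyncio.run(main()))  # 1
```

The key property is that dedupe happens at scheduling time, while the queued task always sees the freshest model state at execution time.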
@@ -0,0 +1,156 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:13:35
Description: Playback position tracking utilities for watch mode.
"""

from __future__ import annotations

import asyncio
import time
from typing import Awaitable, Callable, Optional

from ..config import AppConfig


class PositionTracker:
    """Maintains an estimated playback position from seek/status events plus local clock."""

    _config: AppConfig
    _poll_position_ms: Callable[[str], Awaitable[Optional[int]]]
    _active_player: str | None
    _is_playing: bool
    _track_key: str | None
    _position_ms: int
    _last_tick: float
    _fast_task: asyncio.Task | None
    _on_tick: Callable[[], None] | None
    _lock: asyncio.Lock

    def __init__(
        self,
        poll_position_ms: Callable[[str], Awaitable[Optional[int]]],
        config: AppConfig,
        on_tick: Callable[[], None] | None = None,
    ) -> None:
        """Initialize tracker with position polling callback and runtime options."""
        self._config = config
        self._poll_position_ms = poll_position_ms
        self._on_tick = on_tick
        self._active_player: str | None = None
        self._is_playing = False
        self._track_key: str | None = None
        self._position_ms = 0
        self._last_tick = time.monotonic()
        self._fast_task: asyncio.Task | None = None
        self._lock = asyncio.Lock()

    async def start(self) -> None:
        """Start local monotonic position ticking task."""
        self._last_tick = time.monotonic()
        self._fast_task = asyncio.create_task(self._fast_loop())

    async def stop(self) -> None:
        """Stop tracker tasks and await clean cancellation."""
        tasks = [t for t in (self._fast_task,) if t is not None]
        for task in tasks:
            task.cancel()
        if tasks:
            await asyncio.gather(*tasks, return_exceptions=True)
        self._fast_task = None

    async def set_active_player(
        self,
        bus_name: str | None,
        playback_status: str,
        track_key: str | None,
    ) -> None:
        """Switch active source and calibrate position once when entering a new playing track."""
        should_calibrate_now = False
        async with self._lock:
            player_changed = self._active_player != bus_name
            track_changed = self._track_key != track_key
            was_playing = self._is_playing
            self._active_player = bus_name
            self._is_playing = playback_status == "Playing"
            status_changed_to_playing = self._is_playing and not was_playing
            if player_changed or track_changed:
                # reset to 0 so stale position from a previous track doesn't bleed through
                self._position_ms = 0
            # poll MPRIS on any identity change (player, track, or resume) so a paused
            # mid-song player gets its position anchored immediately; calibration-loop
            # ticks are excluded because they pass the same player/track/status
            should_calibrate_now = bool(self._active_player) and (
                player_changed or track_changed or status_changed_to_playing
            )
            self._track_key = track_key
            self._last_tick = time.monotonic()

        if should_calibrate_now and self._active_player:
            await self._calibrate_once(self._active_player)

    async def on_seeked(self, bus_name: str, position_ms: int) -> None:
        """Apply explicit seek position update for active player."""
        async with self._lock:
            if bus_name != self._active_player:
                return
            self._position_ms = max(0, position_ms)
            self._last_tick = time.monotonic()

    async def on_playback_status(self, bus_name: str, playback_status: str) -> None:
        """Update playing state and calibrate once on paused-to-playing transition."""
        should_calibrate_now = False
        async with self._lock:
            if bus_name != self._active_player:
                return
            was_playing = self._is_playing
            self._is_playing = playback_status == "Playing"
            # re-anchor last_tick when resuming so the gap while paused isn't counted
            should_calibrate_now = self._is_playing and not was_playing
            self._last_tick = time.monotonic()

        if should_calibrate_now:
            await self._calibrate_once(bus_name)

    async def _fast_loop(self) -> None:
        """Advance position by monotonic clock while active player is playing."""
        interval = self._config.watch.position_tick_ms / 1000.0
        while True:
            await asyncio.sleep(interval)
            should_notify = False
            async with self._lock:
                now = time.monotonic()
                if self._is_playing and self._active_player:
                    # accumulate elapsed wall-clock time as playback position;
                    # seek events and calibration snapshots correct drift periodically
                    delta_ms = int((now - self._last_tick) * 1000)
                    if delta_ms > 0:
                        self._position_ms += delta_ms
                        should_notify = True
                # always update last_tick so paused time isn't counted on resume
                self._last_tick = now

            if should_notify and self._on_tick is not None:
                self._on_tick()

    async def _calibrate_once(self, bus_name: str) -> None:
        """Poll player-reported position once and synchronize local tracker state."""
        polled = await self._poll_position_ms(bus_name)
        if polled is None:
            return
        async with self._lock:
            if bus_name != self._active_player:
                return
            # Drift correction is signal-assisted; polling is fallback.
            self._position_ms = max(0, polled)
            self._last_tick = time.monotonic()

    async def get_position_ms(self) -> int:
        """Return current tracked position in milliseconds."""
        async with self._lock:
            return max(0, int(self._position_ms))

    def peek_position_ms(self) -> int:
        """Return current tracked position without awaiting lock (best-effort snapshot)."""
        return max(0, int(self._position_ms))
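The tracker's core arithmetic (advance by the monotonic-clock delta while playing, re-anchor on seek or poll, never count paused time) can be exercised in isolation. This is a simplified sketch of that arithmetic, not the class above:

```python
import time


class PositionEstimate:
    """Estimate playback position from a polled anchor plus the local monotonic clock."""

    def __init__(self) -> None:
        self.position_ms = 0
        self.playing = False
        self._last = time.monotonic()

    def anchor(self, polled_ms: int) -> None:
        # a seek event or player poll re-anchors the estimate and resets the clock
        self.position_ms = max(0, polled_ms)
        self._last = time.monotonic()

    def tick(self) -> int:
        now = time.monotonic()
        if self.playing:
            self.position_ms += int((now - self._last) * 1000)
        # always advance _last so paused time is never counted on resume
        self._last = now
        return self.position_ms


est = PositionEstimate()
est.anchor(90_000)           # player reports 1:30
est.playing = True
time.sleep(0.05)
print(est.tick() >= 90_050)  # at least ~50 ms elapsed while playing: True
```

Because `time.sleep` is guaranteed to sleep at least the requested duration, the estimate can only overshoot the anchor, never undershoot it, between calibrations.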
@@ -0,0 +1,102 @@
"""Output abstraction types for watch mode rendering."""

from __future__ import annotations

from abc import ABC, abstractmethod
from bisect import bisect_right
from dataclasses import dataclass
from enum import Enum
from typing import Optional

from ...lrc import LRCData, LyricLine
from ...models import TrackMeta


class WatchStatus(str, Enum):
    IDLE = "idle"
    FETCHING = "fetching"
    OK = "ok"
    NO_LYRICS = "no_lyrics"


@dataclass(slots=True, frozen=True)
class LyricView:
    """View-ready immutable lyric data projected from one normalized LRC object."""

    normalized: LRCData
    lines: tuple[str, ...]
    timed_line_entries: tuple[tuple[int, int], ...]
    timestamps: tuple[int, ...]

    @staticmethod
    def from_lrc(lyrics: LRCData) -> "LyricView":
        """Build a view projection once from normalized lyrics."""
        normalized = lyrics.normalize()

        lines: list[str] = []
        entries: list[tuple[int, int]] = []

        line_index = 0
        for line in normalized.lines:
            if not isinstance(line, LyricLine):
                # skip metadata/tag lines that carry no renderable text
                continue
            text = line.text
            lines.append(text)
            # use first timestamp; clamp to 0 so bisect always works with non-negative ms
            timestamp = line.line_times_ms[0] if line.line_times_ms else 0
            entries.append((max(0, timestamp), line_index))
            line_index += 1

        # extract timestamps into a flat tuple so bisect_right can binary-search it
        timestamps = tuple(timestamp for timestamp, _ in entries)
        return LyricView(
            normalized=normalized,
            lines=tuple(lines),
            timed_line_entries=tuple(entries),
            timestamps=timestamps,
        )

    def signature_cursor(self, at_ms: int) -> tuple:
        """Build a stable cursor signature for dedupe decisions."""
        if not self.timed_line_entries:
            # untimed lyrics: signature is the full line set — changes only on track change
            return ("plain", self.lines)

        first_ts = self.timed_line_entries[0][0]
        if at_ms < first_ts:
            # playback hasn't reached the first lyric yet; hold until it does
            return ("before_first", first_ts)

        # bisect_right gives the insertion point after equal timestamps, so -1 gives
        # the last line whose timestamp <= at_ms (i.e. the currently active line)
        idx = bisect_right(self.timestamps, at_ms) - 1
        if idx < 0:
            idx = 0

        ts, line_idx = self.timed_line_entries[idx]
        text = self.lines[line_idx] if line_idx < len(self.lines) else ""
        return ("ok", idx, ts, text)


@dataclass(slots=True)
class WatchState:
    """Immutable snapshot payload delivered from session to output implementations."""

    track: Optional[TrackMeta]
    lyrics: Optional[LyricView]
    position_ms: int
    offset_ms: int
    status: WatchStatus


class BaseOutput(ABC):
    # When False, the coordinator passes position=0 for signature computation and
    # skips tracker-tick-driven emits, so on_state fires at most once per
    # track+status transition rather than on every lyric cursor advance.
    position_sensitive: bool = True

    @abstractmethod
    async def on_state(self, state: WatchState) -> None:
        """Render or deliver one watch state frame."""
        ...
@@ -0,0 +1,95 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:15:17
Description: Pipe output implementation for watch mode.
"""

from __future__ import annotations

import sys
from bisect import bisect_right
from dataclasses import dataclass

from . import BaseOutput, WatchState, WatchStatus


@dataclass(slots=True)
class PipeOutput(BaseOutput):
    """Render a fixed lyric context window to stdout for streaming/pipe usage."""

    before: int = 0
    after: int = 0
    no_newline: bool = False

    def _window_size(self) -> int:
        """Return rendered lyric window size."""
        return self.before + 1 + self.after

    def _render_status(self, message: str) -> list[str]:
        """Render centered status line in fixed-size window."""
        lines = [""] * self._window_size()
        lines[self.before] = message
        return lines

    def _render_lyrics(self, state: WatchState) -> list[str]:
        """Render context lines centered on current timed lyric entry."""
        if state.lyrics is None:
            return self._render_status("[no lyrics]")

        all_lines = state.lyrics.lines
        if not all_lines:
            return self._render_status("[no lyrics]")
        entries = state.lyrics.timed_line_entries

        effective_ms = state.position_ms + state.offset_ms
        current_line_idx: int | None
        if entries and effective_ms < entries[0][0]:
            # playback hasn't reached the first lyric yet; treat current slot as empty
            # so the after-window can show upcoming lines without a "current" anchor
            current_line_idx = None
        else:
            if not entries:
                current_line_idx = 0
            else:
                # bisect_right - 1 gives the last entry whose timestamp <= effective_ms
                current_entry_idx = (
                    bisect_right(state.lyrics.timestamps, effective_ms) - 1
                )
                if current_entry_idx < 0:
                    current_entry_idx = 0
                current_line_idx = entries[current_entry_idx][1]

        out: list[str] = []
        for rel in range(-self.before, self.after + 1):
            if current_line_idx is None:
                # before-first-timestamp: before/current slots are empty; after slots
                # show lines starting from index 0 (rel=1 → line 0, rel=2 → line 1, …)
                if rel <= 0:
                    out.append("")
                    continue
                line_idx = rel - 1
            else:
                line_idx = current_line_idx + rel

            if 0 <= line_idx < len(all_lines):
                out.append(all_lines[line_idx])
            else:
                out.append("")

        return out

    async def on_state(self, state: WatchState) -> None:
        """Render and flush one frame for the latest watch state."""
        if state.status == WatchStatus.FETCHING:
            lines = self._render_status("[fetching...]")
        elif state.status == WatchStatus.NO_LYRICS:
            lines = self._render_status("[no lyrics]")
        elif state.status == WatchStatus.IDLE:
            lines = self._render_status("[idle]")
        else:
            lines = self._render_lyrics(state)

        for line in lines:
            # no_newline mode lets callers use \r to overwrite the previous frame in-place
            sys.stdout.write(line + ("\n" if not self.no_newline else ""))
        sys.stdout.flush()
@@ -0,0 +1,46 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:15:31
Description: Print output implementation for watch mode — one shot per track.
"""

from __future__ import annotations

import sys

from . import BaseOutput, WatchState, WatchStatus


class PrintOutput(BaseOutput):
    """Emit full lyrics to stdout once per track transition, then stay silent.

    Deduplication is delegated to the coordinator via position_sensitive=False:
    the coordinator uses a fixed position for signatures, so on_state fires at
    most once per (status, track_key) transition rather than on every tick.
    """

    # fixed position=0 in signatures → coordinator calls on_state only on
    # track/status transitions, never on lyric cursor advances
    position_sensitive = False

    plain: bool

    def __init__(self, plain: bool = False) -> None:
        self.plain = plain

    async def on_state(self, state: WatchState) -> None:
        if state.status in (WatchStatus.FETCHING, WatchStatus.IDLE):
            return

        if state.status == WatchStatus.NO_LYRICS:
            # emit a blank line as a machine-readable sentinel for "track changed, no lyrics"
            sys.stdout.write("\n")
            sys.stdout.flush()
        elif state.status == WatchStatus.OK and state.lyrics is not None:
            lrc = state.lyrics.normalized
            text = lrc.to_plain() if self.plain else str(lrc)
            sys.stdout.write(text + "\n")
            sys.stdout.flush()
@@ -1,13 +1,3 @@
-import pytest
-
 from lrx_cli.config import enable_debug
 
 enable_debug()
-
-
-@pytest.fixture
-def no_credentials(monkeypatch):
-    """Clear all credential env vars so only anonymous fetchers are active."""
-    monkeypatch.delenv("SPOTIFY_SP_DC", raising=False)
-    monkeypatch.delenv("QQ_MUSIC_API_URL", raising=False)
-    monkeypatch.delenv("MUSIXMATCH_USERTOKEN", raising=False)
@@ -0,0 +1,4 @@
{
  "syncedLyrics": "[00:01.00]s1\n[00:02.00]s2",
  "plainLyrics": "p1\np2"
}
@@ -0,0 +1,20 @@
[
  {
    "id": 1,
    "trackName": "My Love",
    "artistName": "Westlife",
    "albumName": "Coast To Coast",
    "duration": 231.847,
    "syncedLyrics": "[00:01.00]hello",
    "plainLyrics": "hello"
  },
  {
    "id": 2,
    "trackName": "My Love (Live)",
    "artistName": "Westlife",
    "albumName": "Live",
    "duration": 262.0,
    "syncedLyrics": "",
    "plainLyrics": "hello"
  }
]
@@ -0,0 +1,28 @@
{
  "message": {
    "body": {
      "macro_calls": {
        "track.richsync.get": {
          "message": {
            "header": {
              "status_code": 200
            },
            "body": {
              "richsync": {
                "richsync_body": "[{\"ts\": 1.2, \"x\": \"hello\"}, {\"ts\": 2.34, \"x\": \"world\"}]"
              }
            }
          }
        },
        "track.subtitles.get": {
          "message": {
            "header": {
              "status_code": 404
            },
            "body": {}
          }
        }
      }
    }
  }
}
@@ -0,0 +1,32 @@
{
  "message": {
    "body": {
      "macro_calls": {
        "track.richsync.get": {
          "message": {
            "header": {
              "status_code": 404
            },
            "body": {}
          }
        },
        "track.subtitles.get": {
          "message": {
            "header": {
              "status_code": 200
            },
            "body": {
              "subtitle_list": [
                {
                  "subtitle": {
                    "subtitle_body": "[{\"text\": \"hello\", \"time\": {\"total\": 1.1}}, {\"text\": \"world\", \"time\": {\"total\": 2.22}}]"
                  }
                }
              ]
            }
          }
        }
      }
    }
  }
}
+20
@@ -0,0 +1,20 @@
{
  "message": {
    "body": {
      "track_list": [
        {
          "track": {
            "commontrack_id": 123,
            "track_length": 232,
            "has_subtitles": 1,
            "has_richsync": 0,
            "track_name": "My Love",
            "artist_name": "Westlife",
            "album_name": "Coast To Coast",
            "instrumental": 0
          }
        }
      ]
    }
  }
}
+5
@@ -0,0 +1,5 @@
{
  "lrc": {
    "lyric": "[00:01.00]line1\n[00:02.00]line2"
  }
}
+32
@@ -0,0 +1,32 @@
{
  "result": {
    "songs": [
      {
        "id": 2080607,
        "name": "My Love",
        "dt": 231941,
        "ar": [
          {
            "name": "Westlife"
          }
        ],
        "al": {
          "name": "Unbreakable"
        }
      },
      {
        "id": 572412968,
        "name": "My Love",
        "dt": 231000,
        "ar": [
          {
            "name": "Westlife"
          }
        ],
        "al": {
          "name": "Pure... Love"
        }
      }
    ]
  }
}
+6
@@ -0,0 +1,6 @@
{
  "code": 0,
  "data": {
    "lyric": "[00:01.00]hello\n[00:02.00]world"
  }
}
+33
@@ -0,0 +1,33 @@
{
  "code": 0,
  "data": {
    "list": [
      {
        "mid": "mid1",
        "interval": 232,
        "name": "My Love",
        "singer": [
          {
            "name": "Westlife"
          }
        ],
        "album": {
          "name": "Coast To Coast"
        }
      },
      {
        "mid": "mid2",
        "interval": 248,
        "name": "My Love (Album Version)",
        "singer": [
          {
            "name": "Little Texas"
          }
        ],
        "album": {
          "name": "Greatest Hits"
        }
      }
    ]
  }
}
+9
@@ -0,0 +1,9 @@
{
  "lyrics": {
    "syncType": "LINE_SYNCED",
    "lines": [
      {"startTimeMs": "1000", "words": "hello"},
      {"startTimeMs": "2500", "words": "world"}
    ]
  }
}
@@ -0,0 +1,9 @@
{
  "lyrics": {
    "syncType": "UNSYNCED",
    "lines": [
      {"startTimeMs": "0", "words": "plain one"},
      {"startTimeMs": "0", "words": "plain two"}
    ]
  }
}
+10
-8
@@ -1,16 +1,18 @@
-import os
-
 import pytest
 
+from lrx_cli.config import load_config
+
+_credentials = load_config().credentials
+
 requires_spotify = pytest.mark.skipif(
-    not os.environ.get("SPOTIFY_SP_DC"),
-    reason="requires SPOTIFY_SP_DC",
+    not _credentials.spotify_sp_dc,
+    reason="requires credentials.spotify_sp_dc in config.toml",
 )
 requires_qq_music = pytest.mark.skipif(
-    not os.environ.get("QQ_MUSIC_API_URL"),
-    reason="requires QQ_MUSIC_API_URL",
+    not _credentials.qq_music_api_url,
+    reason="requires credentials.qq_music_api_url in config.toml",
 )
 requires_musixmatch_token = pytest.mark.skipif(
-    not os.environ.get("MUSIXMATCH_USERTOKEN"),
-    reason="requires MUSIXMATCH_USERTOKEN",
+    not _credentials.musixmatch_usertoken,
+    reason="requires credentials.musixmatch_usertoken in config.toml",
 )
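The reworked markers above gate network tests on values loaded from `config.toml` instead of environment variables. A minimal standalone sketch of the same pattern — note that the `Credentials` dataclass here is a stand-in for illustration, not lrx_cli's actual `CredentialConfig`:

```python
# Sketch (assumed names): skip a test when a config-derived credential is empty.
from dataclasses import dataclass

import pytest


@dataclass(frozen=True)
class Credentials:
    spotify_sp_dc: str = ""  # empty -> tests carrying the marker are skipped


creds = Credentials()

requires_spotify = pytest.mark.skipif(
    not creds.spotify_sp_dc,
    reason="requires credentials.spotify_sp_dc in config.toml",
)


@requires_spotify
def test_needs_spotify():
    # Only collected and run when spotify_sp_dc is set.
    assert True
```

Because `skipif` evaluates its condition at import time, the config is read once at module load, which is exactly why the diff hoists `_credentials = load_config().credentials` to module scope.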
@@ -0,0 +1,61 @@
from __future__ import annotations

import pytest

from lrx_cli.config import AppConfig, CredentialConfig, WatchConfig, load_config


def test_missing_file_returns_defaults(tmp_path):
    assert load_config(tmp_path / "nonexistent.toml") == AppConfig()


def test_empty_file_returns_defaults(tmp_path):
    p = tmp_path / "config.toml"
    p.write_text("")
    assert load_config(p) == AppConfig()


def test_partial_section_keeps_other_defaults(tmp_path):
    p = tmp_path / "config.toml"
    p.write_bytes(b"[watch]\ndebounce_ms = 200\n")
    cfg = load_config(p)
    assert cfg.watch.debounce_ms == 200
    assert cfg.watch.calibration_interval_s == WatchConfig().calibration_interval_s


def test_credentials_roundtrip(tmp_path):
    p = tmp_path / "config.toml"
    p.write_bytes(
        b"[credentials]\n"
        b'spotify_sp_dc = "abc"\n'
        b'qq_music_api_url = "http://localhost:3000"\n'
    )
    assert load_config(p).credentials == CredentialConfig(
        spotify_sp_dc="abc", qq_music_api_url="http://localhost:3000"
    )


def test_int_coerced_to_float(tmp_path):
    p = tmp_path / "config.toml"
    p.write_bytes(b"[general]\nhttp_timeout = 5\n")
    assert load_config(p).general.http_timeout == 5.0


def test_unknown_key_raises(tmp_path):
    p = tmp_path / "config.toml"
    p.write_bytes(b"[general]\ntypo_key = 1\n")
    with pytest.raises(ValueError, match="Unknown config keys"):
        load_config(p)


def test_wrong_type_raises(tmp_path):
    p = tmp_path / "config.toml"
    p.write_bytes(b"[watch]\ndebounce_ms = true\n")
    with pytest.raises(ValueError, match="expected int"):
        load_config(p)


def test_app_config_is_frozen():
    cfg = AppConfig()
    with pytest.raises(Exception):
        cfg.general = None  # type: ignore[misc]
+471
-79
@@ -1,17 +1,42 @@
-from pathlib import Path
-import pytest
+from __future__ import annotations
+
 from dataclasses import replace
+import asyncio
+import json
+from pathlib import Path
+from typing import Callable
 
-from lrx_cli.fetchers import FetcherMethodType
-from lrx_cli.models import TrackMeta
+import httpx
+import pytest
+
+from lrx_cli.authenticators import create_authenticators
+from lrx_cli.cache import CacheEngine
+from lrx_cli.config import AppConfig, load_config
 from lrx_cli.core import LrcManager
-from tests.marks import (
-    requires_spotify,
-    requires_qq_music,
-    requires_musixmatch_token,
-)
+from lrx_cli.fetchers import FetcherMethodType, create_fetchers
+from lrx_cli.fetchers.lrclib import LrclibFetcher, _parse_lrclib_response
+from lrx_cli.fetchers.lrclib_search import (
+    LrclibSearchFetcher,
+    _parse_lrclib_search_results,
+)
+from lrx_cli.fetchers.musixmatch import (
+    MusixmatchFetcher,
+    MusixmatchSpotifyFetcher,
+    _parse_mxm_macro,
+    _parse_mxm_search,
+)
+from lrx_cli.fetchers.netease import (
+    NeteaseFetcher,
+    _parse_netease_lyrics,
+    _parse_netease_search,
+)
+from lrx_cli.fetchers.qqmusic import QQMusicFetcher, _parse_qq_lyrics, _parse_qq_search
+from lrx_cli.fetchers.spotify import SpotifyFetcher, _parse_spotify_lyrics
+from lrx_cli.lrc import LRCData
+from lrx_cli.models import CacheStatus, TrackMeta
+from tests.marks import requires_musixmatch_token, requires_qq_music, requires_spotify
 
-SAMPLE_SPOTIFY_TRACK: TrackMeta = TrackMeta(
+SAMPLE_TRACK = TrackMeta(
     title="One Last Kiss",
     artist="Hikaru Utada",
     album="One Last Kiss",
@@ -20,79 +45,152 @@ SAMPLE_SPOTIFY_TRACK: TrackMeta = TrackMeta(
     url="https://open.spotify.com/track/5RhWszHMSKzb7KiXk4Ae0M",
 )
 
-SAMPLE_SPOTIFY_TRACK_ALBUM_MODIFIED = replace(SAMPLE_SPOTIFY_TRACK, album="BADモード")
-SAMPLE_SPOTIFY_TRACK_ARTIST_MODIFIED = replace(
-    SAMPLE_SPOTIFY_TRACK, artist="宇多田ヒカル"
-)
-SAMPLE_SPOTIFY_TRACK_ALBUM_ARTIST_MODIFIED = replace(
-    SAMPLE_SPOTIFY_TRACK, artist="宇多田ヒカル", album="BADモード"
-)
+SAMPLE_TRACK_ALBUM_MODIFIED = replace(SAMPLE_TRACK, album="BADモード")
+SAMPLE_TRACK_ARTIST_MODIFIED = replace(SAMPLE_TRACK, artist="宇多田ヒカル")
+SAMPLE_TRACK_ALBUM_ARTIST_MODIFIED = replace(
+    SAMPLE_TRACK,
+    artist="宇多田ヒカル",
+    album="BADモード",
+)
+
+_FIXTURE_DIR = Path(__file__).parent / "fixtures" / "fetchers"
+_NETWORK_TIMEOUT = 20.0
+
+ParserFunc = Callable[[dict], LRCData | None]
 
 
 @pytest.fixture
 def lrc_manager(tmp_path: Path) -> LrcManager:
-    return LrcManager(str(tmp_path / "cache.db"))
+    return LrcManager(str(tmp_path / "cache.db"), AppConfig())
+
+
+@pytest.fixture
+def cred_lrc_manager(tmp_path: Path) -> LrcManager:
+    return LrcManager(str(tmp_path / "cache.db"), load_config())
+
+
+@pytest.fixture
+def fetcher_runtime_anonymous(tmp_path: Path):
+    cfg = AppConfig()
+    cache = CacheEngine(str(tmp_path / "network-anon-cache.db"))
+    authenticators = create_authenticators(cache, cfg)
+    fetchers = create_fetchers(cache, authenticators, cfg)
+    return fetchers, cfg
+
+
+@pytest.fixture
+def fetcher_runtime_credentialed(tmp_path: Path):
+    cfg = load_config()
+    cache = CacheEngine(str(tmp_path / "network-cred-cache.db"))
+    authenticators = create_authenticators(cache, cfg)
+    fetchers = create_fetchers(cache, authenticators, cfg)
+    return fetchers, cfg
+
+
+def _load_fixture(name: str) -> dict | list:
+    return json.loads((_FIXTURE_DIR / name).read_text(encoding="utf-8"))
+
+
+def _assert_shape(actual: object, fixture: object) -> None:
+    """Assert actual payload contains fixture structure recursively.
+
+    - dict: all fixture keys must exist with matching nested shape
+    - list: actual must contain at least fixture length and each indexed shape must match
+    - scalar: runtime type must match fixture type
+    """
+    if isinstance(fixture, dict):
+        assert isinstance(actual, dict)
+        for key, value in fixture.items():
+            assert key in actual
+            _assert_shape(actual[key], value)
+        return
+
+    if isinstance(fixture, list):
+        assert isinstance(actual, list)
+        assert len(actual) >= len(fixture)
+        for idx, value in enumerate(fixture):
+            _assert_shape(actual[idx], value)
+        return
+
+    if fixture is None:
+        return
+
+    assert isinstance(actual, type(fixture))
 
 
-def _fetch_and_assert(
+def _fetch_with_method(
     lrc_manager: LrcManager,
     method: FetcherMethodType,
-    expect_fail: bool = False,
-    bypass_cache: bool = True,
-) -> None:
-    result = lrc_manager.fetch_for_track(
-        SAMPLE_SPOTIFY_TRACK, force_method=method, bypass_cache=bypass_cache
-    )
-    if expect_fail:
-        assert result is None
-    else:
-        assert result is not None
-        assert result.status == "SUCCESS_SYNCED"
-        assert result.lyrics is not None
-
-
-def test_cache_search_fetcher_without_cache(lrc_manager: LrcManager):
-    _fetch_and_assert(lrc_manager, "cache-search", expect_fail=True, bypass_cache=False)
+    *,
+    bypass_cache: bool = False,
+):
+    return lrc_manager.fetch_for_track(
+        SAMPLE_TRACK,
+        force_method=method,
+        bypass_cache=bypass_cache,
+    )
+
+
+# Cache-search fetcher behavior
+
+
+def test_cache_search_no_cache_fails(lrc_manager: LrcManager):
+    result = _fetch_with_method(lrc_manager, "cache-search", bypass_cache=False)
+    assert result is None
+
+
+def test_cache_search_exact_hit(lrc_manager: LrcManager):
+    expected = "[00:00.01]lyrics"
+    lrc_manager.manual_insert(SAMPLE_TRACK, expected)
+
+    result = lrc_manager.fetch_for_track(
+        SAMPLE_TRACK,
+        force_method="cache-search",
+        bypass_cache=False,
+    )
+
+    assert result is not None
+    assert result.lyrics is not None
+    assert result.lyrics.to_text() == expected
 
 
 @pytest.mark.parametrize(
     "query_track",
     [
-        pytest.param(SAMPLE_SPOTIFY_TRACK, id="exact_match"),
-        pytest.param(SAMPLE_SPOTIFY_TRACK_ARTIST_MODIFIED, id="artist_modified"),
-        pytest.param(SAMPLE_SPOTIFY_TRACK_ALBUM_MODIFIED, id="album_modified"),
-        pytest.param(
-            SAMPLE_SPOTIFY_TRACK_ALBUM_ARTIST_MODIFIED, id="album_artist_modified"
-        ),
+        pytest.param(SAMPLE_TRACK_ARTIST_MODIFIED, id="artist_modified"),
+        pytest.param(SAMPLE_TRACK_ALBUM_MODIFIED, id="album_modified"),
+        pytest.param(SAMPLE_TRACK_ALBUM_ARTIST_MODIFIED, id="album_artist_modified"),
     ],
 )
-def test_cache_search_fetcher_with_fuzzy_metadata(
-    lrc_manager: LrcManager, query_track: TrackMeta
-):
-    expected_lrc = "[00:00.01]lyrics"
-    lrc_manager.manual_insert(SAMPLE_SPOTIFY_TRACK, expected_lrc)
+def test_cache_search_fuzzy_hit(lrc_manager: LrcManager, query_track: TrackMeta):
+    expected = "[00:00.01]lyrics"
+    lrc_manager.manual_insert(SAMPLE_TRACK, expected)
+
     result = lrc_manager.fetch_for_track(
-        query_track, force_method="cache-search", bypass_cache=False
+        query_track,
+        force_method="cache-search",
+        bypass_cache=False,
     )
+
     assert result is not None
     assert result.lyrics is not None
-    assert result.lyrics.to_text() == expected_lrc
+    assert result.lyrics.to_text() == expected
 
 
-def test_cache_search_fetcher_prefer_better_match(lrc_manager: LrcManager):
+def test_cache_search_prefer_better_match(lrc_manager: LrcManager):
     lrc_manager.manual_insert(
-        SAMPLE_SPOTIFY_TRACK_ARTIST_MODIFIED, "[00:00.01]artist modified"
+        SAMPLE_TRACK_ARTIST_MODIFIED,
+        "[00:00.01]artist modified",
     )
     lrc_manager.manual_insert(
-        SAMPLE_SPOTIFY_TRACK_ALBUM_ARTIST_MODIFIED, "[00:00.01]artist+album modified"
+        SAMPLE_TRACK_ALBUM_ARTIST_MODIFIED,
+        "[00:00.01]artist+album modified",
     )
+
     result = lrc_manager.fetch_for_track(
-        SAMPLE_SPOTIFY_TRACK, force_method="cache-search", bypass_cache=False
+        SAMPLE_TRACK,
+        force_method="cache-search",
+        bypass_cache=False,
    )
+
     assert result is not None
@@ -100,53 +198,347 @@ def test_cache_search_fetcher_prefer_better_match(lrc_manager: LrcManager):
     assert result.lyrics.to_text() == "[00:00.01]artist modified"
 
 
+# API response format for every fetcher
+
+
 @pytest.mark.network
-@pytest.mark.parametrize(
-    "method, expect_fail",
-    [
-        ("lrclib", False),
-        ("lrclib-search", False),
-        ("netease", False),
-        ("spotify", True),  # requires auth
-        ("qqmusic", True),  # requires api
-    ],
-)
-def test_anonymous_remote_fetchers(
-    no_credentials,
-    lrc_manager: LrcManager,
-    method: FetcherMethodType,
-    expect_fail: bool,
-):
-    _fetch_and_assert(lrc_manager, method, expect_fail)
+def test_api_lrclib_response_shape(fetcher_runtime_anonymous):
+    fetchers, _cfg = fetcher_runtime_anonymous
+    fetcher = fetchers["lrclib"]
+    assert isinstance(fetcher, LrclibFetcher)
+
+    async def _run() -> dict:
+        async with httpx.AsyncClient(timeout=_NETWORK_TIMEOUT) as client:
+            response = await fetcher._api_get(client, SAMPLE_TRACK)
+        assert response.status_code == 200
+        payload = response.json()
+        assert isinstance(payload, dict)
+        return payload
+
+    payload = asyncio.run(_run())
+    _assert_shape(payload, _load_fixture("lrclib_response.json"))
+
+
+@pytest.mark.network
+def test_api_lrclib_search_response_shape(fetcher_runtime_anonymous):
+    fetchers, _cfg = fetcher_runtime_anonymous
+    fetcher = fetchers["lrclib-search"]
+    assert isinstance(fetcher, LrclibSearchFetcher)
+
+    async def _run() -> list[dict]:
+        async with httpx.AsyncClient(timeout=_NETWORK_TIMEOUT) as client:
+            items, had_error = await fetcher._api_candidates(client, SAMPLE_TRACK)
+        assert had_error is False
+        return items
+
+    payload = asyncio.run(_run())
+    _assert_shape(payload, _load_fixture("lrclib_search_results.json"))
+
+
+@pytest.mark.network
+def test_api_netease_response_shape(fetcher_runtime_anonymous):
+    fetchers, _cfg = fetcher_runtime_anonymous
+    fetcher = fetchers["netease"]
+    assert isinstance(fetcher, NeteaseFetcher)
+
+    async def _run() -> tuple[dict, dict]:
+        async with httpx.AsyncClient(timeout=_NETWORK_TIMEOUT) as client:
+            search = await fetcher._api_search_track(client, SAMPLE_TRACK, 5)
+            lyric = await fetcher._api_lyric_track(client, SAMPLE_TRACK, 5)
+        assert isinstance(search, dict)
+        assert isinstance(lyric, dict)
+        return search, lyric
+
+    search_payload, lyric_payload = asyncio.run(_run())
+    _assert_shape(search_payload, _load_fixture("netease_search.json"))
+    _assert_shape(lyric_payload, _load_fixture("netease_lyrics.json"))
 
 
 @pytest.mark.network
 @requires_spotify
-def test_spotify_fetcher(lrc_manager: LrcManager):
-    _fetch_and_assert(lrc_manager, "spotify")
+def test_api_spotify_response_shape(fetcher_runtime_credentialed):
+    fetchers, _cfg = fetcher_runtime_credentialed
+    fetcher = fetchers["spotify"]
+    assert isinstance(fetcher, SpotifyFetcher)
+
+    async def _run() -> dict:
+        payload = await fetcher._api_lyrics(SAMPLE_TRACK)
+        assert isinstance(payload, dict)
+        return payload
+
+    payload = asyncio.run(_run())
+    _assert_shape(payload, _load_fixture("spotify_synced.json"))
 
 
 @pytest.mark.network
 @requires_qq_music
-def test_qqmusic_fetcher(lrc_manager: LrcManager):
-    _fetch_and_assert(lrc_manager, "qqmusic")
+def test_api_qqmusic_response_shape(fetcher_runtime_credentialed):
+    fetchers, _cfg = fetcher_runtime_credentialed
+    fetcher = fetchers["qqmusic"]
+    assert isinstance(fetcher, QQMusicFetcher)
+
+    async def _run() -> tuple[dict, dict]:
+        search = await fetcher._api_search(SAMPLE_TRACK, 10)
+        lyric = await fetcher._api_lyric_track(SAMPLE_TRACK, 10)
+        assert isinstance(search, dict)
+        assert isinstance(lyric, dict)
+        return search, lyric
+
+    search_payload, lyric_payload = asyncio.run(_run())
+    _assert_shape(search_payload, _load_fixture("qq_search.json"))
+    _assert_shape(lyric_payload, _load_fixture("qq_lyrics.json"))
 
 
 @pytest.mark.network
-def test_musixmatch_anonymous_fetcher(no_credentials, lrc_manager: LrcManager):
-    # These fetchers should be tested in a single test to share the same usertoken
-    # Otherwise the second may fail due to rate limits
-    _fetch_and_assert(lrc_manager, "musixmatch", expect_fail=False)
-    _fetch_and_assert(lrc_manager, "musixmatch-spotify", expect_fail=False)
+def test_api_musixmatch_anonymous_response_shape(fetcher_runtime_anonymous):
+    """Anonymous musixmatch calls must share one cache/auth context in this test."""
+    fetchers, _cfg = fetcher_runtime_anonymous
+    search_fetcher = fetchers["musixmatch"]
+    spotify_fetcher = fetchers["musixmatch-spotify"]
+    assert isinstance(search_fetcher, MusixmatchFetcher)
+    assert isinstance(spotify_fetcher, MusixmatchSpotifyFetcher)
+
+    async def _run() -> tuple[dict, dict, dict]:
+        search = await search_fetcher._api_search_track(SAMPLE_TRACK)
+        macro_from_search = await search_fetcher._api_macro_track(SAMPLE_TRACK)
+        macro_from_spotify = await spotify_fetcher._api_macro_track(SAMPLE_TRACK)
+        assert isinstance(search, dict)
+        assert isinstance(macro_from_search, dict)
+        assert isinstance(macro_from_spotify, dict)
+        return search, macro_from_search, macro_from_spotify
+
+    search_payload, macro_payload, spotify_macro_payload = asyncio.run(_run())
+    _assert_shape(search_payload, _load_fixture("musixmatch_search.json"))
+    _assert_shape(macro_payload, _load_fixture("musixmatch_macro_richsync.json"))
+    _assert_shape(
+        spotify_macro_payload, _load_fixture("musixmatch_macro_richsync.json")
+    )
 
 
 @pytest.mark.network
 @requires_musixmatch_token
-def test_musixmatch_fetcher(lrc_manager: LrcManager):
-    _fetch_and_assert(lrc_manager, "musixmatch")
-    _fetch_and_assert(lrc_manager, "musixmatch-spotify")
+def test_api_musixmatch_token_response_shape(fetcher_runtime_credentialed):
+    fetchers, _cfg = fetcher_runtime_credentialed
+    search_fetcher = fetchers["musixmatch"]
+    spotify_fetcher = fetchers["musixmatch-spotify"]
+    assert isinstance(search_fetcher, MusixmatchFetcher)
+    assert isinstance(spotify_fetcher, MusixmatchSpotifyFetcher)
+
+    async def _run() -> tuple[dict, dict, dict]:
+        search = await search_fetcher._api_search_track(SAMPLE_TRACK)
+        macro_from_search = await search_fetcher._api_macro_track(SAMPLE_TRACK)
+        macro_from_spotify = await spotify_fetcher._api_macro_track(SAMPLE_TRACK)
+        assert isinstance(search, dict)
+        assert isinstance(macro_from_search, dict)
+        assert isinstance(macro_from_spotify, dict)
+        return search, macro_from_search, macro_from_spotify
+
+    search_payload, macro_payload, spotify_macro_payload = asyncio.run(_run())
+    _assert_shape(search_payload, _load_fixture("musixmatch_search.json"))
+    _assert_shape(macro_payload, _load_fixture("musixmatch_macro_richsync.json"))
+    _assert_shape(
+        spotify_macro_payload, _load_fixture("musixmatch_macro_richsync.json")
+    )
 
 
-def test_local_fetcher(lrc_manager: LrcManager):
-    # Since this is not a local track
-    _fetch_and_assert(lrc_manager, "local", True)
+# Parse fixture JSON into real data structures
+
+
+@pytest.mark.parametrize(
+    "fixture_name,parser,expected_status",
+    [
+        pytest.param(
+            "spotify_synced.json",
+            _parse_spotify_lyrics,
+            "SUCCESS_SYNCED",
+            id="spotify-synced",
+        ),
+        pytest.param(
+            "spotify_unsynced.json",
+            _parse_spotify_lyrics,
+            "SUCCESS_UNSYNCED",
+            id="spotify-unsynced",
+        ),
+    ],
+)
+def test_parse_spotify_fixture(
+    fixture_name: str,
+    parser: ParserFunc,
+    expected_status: str,
+):
+    payload = _load_fixture(fixture_name)
+    assert isinstance(payload, dict)
+    parsed = parser(payload)
+    assert parsed is not None
+    assert parsed.detect_sync_status().value == expected_status
+    if expected_status == "SUCCESS_SYNCED":
+        assert parsed.to_text() == "[00:01.00]hello\n[00:02.50]world"
+    else:
+        assert parsed.to_text() == "[00:00.00]plain one\n[00:00.00]plain two"
+
+
+def test_parse_qq_search_fixture() -> None:
+    payload = _load_fixture("qq_search.json")
+    assert isinstance(payload, dict)
+    parsed = _parse_qq_search(payload)
+    assert len(parsed) == 2
+
+    assert parsed[0].item == "mid1"
+    assert parsed[0].title == "My Love"
+    assert parsed[0].artist == "Westlife"
+    assert parsed[0].duration_ms == 232000.0
+    assert parsed[0].album == "Coast To Coast"
+
+    assert parsed[1].item == "mid2"
+    assert parsed[1].title == "My Love (Album Version)"
+    assert parsed[1].artist == "Little Texas"
+    assert parsed[1].duration_ms == 248000.0
+    assert parsed[1].album == "Greatest Hits"
+
+
+def test_parse_qq_lyrics_fixture() -> None:
+    payload = _load_fixture("qq_lyrics.json")
+    assert isinstance(payload, dict)
+    parsed = _parse_qq_lyrics(payload)
+    assert parsed is not None
+    assert len(parsed) == 2
+    assert parsed.detect_sync_status() == CacheStatus.SUCCESS_SYNCED
+
+
+def test_parse_lrclib_response_fixture() -> None:
+    payload = _load_fixture("lrclib_response.json")
+    assert isinstance(payload, dict)
+    parsed = _parse_lrclib_response(payload)
+    assert parsed.synced is not None and parsed.synced.lyrics is not None
+    assert parsed.unsynced is not None and parsed.unsynced.lyrics is not None
+    assert parsed.synced.status == CacheStatus.SUCCESS_SYNCED
+    assert parsed.unsynced.status == CacheStatus.SUCCESS_UNSYNCED
+    assert parsed.synced.lyrics.to_text() == "[00:01.00]s1\n[00:02.00]s2"
+    assert parsed.unsynced.lyrics.to_text() == "[00:00.00]p1\n[00:00.00]p2"
+
+
+def test_parse_lrclib_search_results_fixture() -> None:
+    payload = _load_fixture("lrclib_search_results.json")
+    assert isinstance(payload, list)
+    parsed = _parse_lrclib_search_results(payload)
+    assert len(parsed) == 2
+
+    assert parsed[0].item.get("id") == 1
+    assert parsed[0].duration_ms == 231847.0
+    assert parsed[0].is_synced is True
+    assert parsed[0].title == "My Love"
+    assert parsed[0].artist == "Westlife"
+    assert parsed[0].album == "Coast To Coast"
+
+    assert parsed[1].item.get("id") == 2
+    assert parsed[1].duration_ms == 262000.0
+    assert parsed[1].is_synced is False
+    assert parsed[1].title == "My Love (Live)"
+    assert parsed[1].artist == "Westlife"
+    assert parsed[1].album == "Live"
+
+
+def test_parse_netease_search_fixture() -> None:
+    payload = _load_fixture("netease_search.json")
+    assert isinstance(payload, dict)
+    parsed = _parse_netease_search(payload)
+    assert len(parsed) == 2
+    assert parsed[0].item == 2080607
+    assert parsed[0].title == "My Love"
+    assert parsed[0].artist == "Westlife"
+    assert parsed[0].duration_ms == 231941.0
+    assert parsed[0].album == "Unbreakable"
+
+    assert parsed[1].item == 572412968
+    assert parsed[1].artist == "Westlife"
+    assert parsed[1].duration_ms == 231000.0
+
+
+def test_parse_netease_lyrics_fixture() -> None:
+    payload = _load_fixture("netease_lyrics.json")
+    assert isinstance(payload, dict)
+    parsed = _parse_netease_lyrics(payload)
+    assert parsed is not None
+    assert len(parsed) == 2
+    assert parsed.detect_sync_status() == CacheStatus.SUCCESS_SYNCED
+    assert parsed.to_text() == "[00:01.00]line1\n[00:02.00]line2"
+
+
+def test_parse_musixmatch_search_fixture() -> None:
+    payload = _load_fixture("musixmatch_search.json")
+    assert isinstance(payload, dict)
+    parsed = _parse_mxm_search(payload)
+    assert len(parsed) == 1
+    assert parsed[0].item == 123
+    assert parsed[0].is_synced is True
+    assert parsed[0].title == "My Love"
+    assert parsed[0].artist == "Westlife"
+    assert parsed[0].duration_ms == 232000.0
+    assert parsed[0].album == "Coast To Coast"
+
+
+def test_parse_musixmatch_macro_fixture() -> None:
+    payload = _load_fixture("musixmatch_macro_richsync.json")
+    assert isinstance(payload, dict)
+    parsed = _parse_mxm_macro(payload)
+    assert parsed is not None
+    assert len(parsed) == 2
+    assert parsed.detect_sync_status() == CacheStatus.SUCCESS_SYNCED
+
+
+def test_parse_musixmatch_macro_subtitle_fallback_fixture() -> None:
+    payload = _load_fixture("musixmatch_macro_subtitle.json")
+    assert isinstance(payload, dict)
+    parsed = _parse_mxm_macro(payload)
+    assert parsed is not None
+    assert len(parsed) == 2
+    assert parsed.detect_sync_status() == CacheStatus.SUCCESS_SYNCED
+    assert parsed.to_text() == "[00:01.10]hello\n[00:02.22]world"
+
+
+# Empty / partial-error response handling
+
+
+def test_parse_spotify_empty_or_invalid() -> None:
+    assert _parse_spotify_lyrics({}) is None
+    assert _parse_spotify_lyrics({"lyrics": {"lines": []}}) is None
+
+
+def test_parse_qq_search_empty_or_error() -> None:
+    assert _parse_qq_search({}) == []
+    assert _parse_qq_search({"code": 1}) == []
+    assert _parse_qq_search({"code": 0, "data": {"list": []}}) == []
+
+
+def test_parse_qq_lyrics_empty_or_error() -> None:
+    assert _parse_qq_lyrics({}) is None
+    assert _parse_qq_lyrics({"code": 1}) is None
+    assert _parse_qq_lyrics({"code": 0, "data": {"lyric": ""}}) is None
+
+
+def test_parse_lrclib_response_empty_or_partial() -> None:
+    parsed = _parse_lrclib_response({})
+    assert parsed.synced is not None
+    assert parsed.unsynced is not None
+    assert parsed.synced.lyrics is None
+    assert parsed.unsynced.lyrics is None
+
+    parsed_partial = _parse_lrclib_response({"syncedLyrics": "[00:01.00]line"})
+    assert (
+        parsed_partial.synced is not None and parsed_partial.synced.lyrics is not None
+    )
+    assert parsed_partial.unsynced is not None
+
+
+def test_parse_netease_empty_or_partial() -> None:
+    assert _parse_netease_search({}) == []
+    assert _parse_netease_search({"result": {"songs": []}}) == []
+    assert _parse_netease_lyrics({}) is None
+    assert _parse_netease_lyrics({"lrc": {"lyric": ""}}) is None
+
+
+def test_parse_musixmatch_empty_or_partial() -> None:
+    assert _parse_mxm_search({}) == []
+    assert _parse_mxm_search({"message": {"body": {"track_list": []}}}) == []
+    assert _parse_mxm_macro({}) is None
+    assert _parse_mxm_macro({"message": {"body": []}}) is None
@@ -0,0 +1,123 @@
from __future__ import annotations

import asyncio
from pathlib import Path

from lrx_cli.config import AppConfig
from lrx_cli.enrichers.audio_tag import AudioTagEnricher
from lrx_cli.enrichers.file_name import FileNameEnricher
from lrx_cli.models import CacheStatus, TrackMeta
from lrx_cli.fetchers.local import LocalFetcher


_GENERAL = AppConfig().general


def _local_track(path: Path) -> TrackMeta:
    return TrackMeta(url=f"file://{path}")


def test_local_fetcher_unavailable_for_non_local_track():
    fetcher = LocalFetcher(_GENERAL)
    assert not fetcher.is_available(TrackMeta(title="Song", artist="Artist"))


def test_local_fetcher_available_for_local_track(tmp_path):
    fetcher = LocalFetcher(_GENERAL)
    assert fetcher.is_available(_local_track(tmp_path / "song.flac"))


def test_local_fetcher_returns_empty_for_non_file_url():
    fetcher = LocalFetcher(_GENERAL)
    track = TrackMeta(url="https://example.com/song.mp3")
    result = asyncio.run(fetcher.fetch(track))
    assert result.synced is None and result.unsynced is None


def test_local_fetcher_reads_synced_sidecar(tmp_path):
    audio = tmp_path / "song.flac"
    lrc = audio.with_suffix(".lrc")
    lrc.write_text("[00:01.00]Hello\n[00:03.00]World\n")

    fetcher = LocalFetcher(_GENERAL)
    result = asyncio.run(fetcher.fetch(_local_track(audio)))

    assert result.synced is not None
    assert result.synced.status == CacheStatus.SUCCESS_SYNCED
    assert result.synced.source is not None
    assert "sidecar" in result.synced.source


def test_local_fetcher_reads_unsynced_sidecar(tmp_path):
    audio = tmp_path / "song.flac"
    lrc = audio.with_suffix(".lrc")
    lrc.write_text("Hello\nWorld\n")

    fetcher = LocalFetcher(_GENERAL)
    result = asyncio.run(fetcher.fetch(_local_track(audio)))

    assert result.unsynced is not None
    assert result.synced is None


def test_local_fetcher_empty_sidecar_ignored(tmp_path):
    audio = tmp_path / "song.flac"
    (audio.with_suffix(".lrc")).write_text(" ")

    fetcher = LocalFetcher(_GENERAL)
    result = asyncio.run(fetcher.fetch(_local_track(audio)))

    assert result.synced is None and result.unsynced is None


def _enrich(path: str, **existing) -> dict | None:
    enricher = FileNameEnricher()
    track = TrackMeta(url=f"file://{path}", **existing)
    return asyncio.run(enricher.enrich(track))


def test_filename_enricher_artist_title_split(tmp_path):
    result = _enrich(str(tmp_path / "Utada Hikaru - First Love.flac"))
    assert result == {
        "artist": "Utada Hikaru",
        "title": "First Love",
        "album": tmp_path.name,
    }


def test_filename_enricher_track_number_prefix(tmp_path):
    # "01. Title" — no " - " separator, regex strips leading "01. "
    result = _enrich(str(tmp_path / "01. First Love.flac"))
    assert result and result.get("title") == "First Love"
    assert "artist" not in result


def test_filename_enricher_title_only(tmp_path):
    result = _enrich(str(tmp_path / "First Love.flac"))
    assert result and result.get("title") == "First Love"


def test_filename_enricher_does_not_overwrite_existing_fields(tmp_path):
    result = _enrich(
        str(tmp_path / "Artist - Title.flac"),
        artist="Existing Artist",
        title="Existing Title",
    )
    assert result is None or ("artist" not in result and "title" not in result)


def test_filename_enricher_non_local_returns_none():
    enricher = FileNameEnricher()
    track = TrackMeta(title="Song", artist="Artist")
    assert asyncio.run(enricher.enrich(track)) is None


def test_audio_tag_enricher_non_local_returns_none():
    enricher = AudioTagEnricher()
    track = TrackMeta(title="Song", artist="Artist")
    assert asyncio.run(enricher.enrich(track)) is None


def test_audio_tag_enricher_missing_file_returns_none(tmp_path):
    enricher = AudioTagEnricher()
    track = _local_track(tmp_path / "nonexistent.flac")
    assert asyncio.run(enricher.enrich(track)) is None
@@ -277,3 +277,53 @@ def test_unsynced_cache_only_still_fetches_when_unsynced_disallowed(tmp_path):
    assert fetcher.called
    assert result is not None
    assert result.status == CacheStatus.SUCCESS_SYNCED


# manual_insert


def test_manual_insert_synced_stored_with_correct_status(tmp_path):
    manager = make_manager(tmp_path)
    manager.manual_insert(_track(), "[00:01.00]Hello\n[00:03.00]World\n")

    rows = manager.cache.query_track(_track())
    assert any(r["status"] == CacheStatus.SUCCESS_SYNCED.value for r in rows)


def test_manual_insert_unsynced_stored_with_correct_status(tmp_path):
    manager = make_manager(tmp_path)
    manager.manual_insert(_track(), "Hello\nWorld\n")

    rows = manager.cache.query_track(_track())
    assert any(r["status"] == CacheStatus.SUCCESS_UNSYNCED.value for r in rows)


def test_manual_insert_source_and_ttl(tmp_path):
    manager = make_manager(tmp_path)
    manager.manual_insert(_track(), "[00:01.00]line\n")

    rows = manager.cache.query_track(_track())
    assert all(r["source"] == "manual" for r in rows)
    assert all(r["expires_at"] is None for r in rows)


def test_manual_insert_overwrites_previous_entry(tmp_path):
    manager = make_manager(tmp_path)
    track = _track()
    manager.manual_insert(track, "[00:01.00]old\n")
    manager.manual_insert(track, "[00:01.00]new\n")

    best = manager.cache.get_best(track, ["manual"])
    assert best is not None
    assert str(best.lyrics) == "[00:01.00]new"


def test_manual_insert_is_returned_by_fetch(tmp_path):
    manager = make_manager(tmp_path)
    track = _track()
    manager.manual_insert(track, "[00:01.00]cached\n")

    result = manager.fetch_for_track(track)
    assert result is not None
    assert result.lyrics is not None
    assert str(result.lyrics) == "[00:01.00]cached"
@@ -0,0 +1,684 @@
from __future__ import annotations

import asyncio
from pathlib import Path
from typing import Optional

from lrx_cli.lrc import LRCData
from lrx_cli.models import TrackMeta
from lrx_cli.watch.control import ControlClient, ControlServer, parse_delta
from lrx_cli.watch.view import BaseOutput, LyricView, WatchState, WatchStatus
from lrx_cli.watch.view.pipe import PipeOutput
from lrx_cli.watch.view.print import PrintOutput
from lrx_cli.watch.player import ActivePlayerSelector, PlayerState, PlayerTarget
from lrx_cli.config import AppConfig
from lrx_cli.watch.tracker import PositionTracker
from lrx_cli.watch.session import WatchCoordinator


TEST_CONFIG = AppConfig()
BUS = "org.mpris.MediaPlayer2.spotify"


def test_parse_delta_supports_plus_minus_and_reset() -> None:
    assert parse_delta("+200") == (True, 200, None)
    assert parse_delta("-150") == (True, -150, None)
    assert parse_delta("0") == (True, 0, None)


# PlayerTarget


def test_player_target_allows_all_when_hint_empty() -> None:
    target = PlayerTarget()
    assert target.allows("org.mpris.MediaPlayer2.spotify") is True
    assert target.allows("org.mpris.MediaPlayer2.mpd") is True


def test_player_target_filters_by_case_insensitive_substring() -> None:
    target = PlayerTarget("Spot")
    assert target.allows("org.mpris.MediaPlayer2.spotify") is True
    assert target.allows("org.mpris.MediaPlayer2.mpd") is False


def test_player_target_hint_allows_regardless_of_blacklist() -> None:
    # --player bypasses PLAYER_BLACKLIST; PlayerTarget.allows() reflects the hint only
    target = PlayerTarget("spot")
    assert target.allows("org.mpris.MediaPlayer2.spotify") is True


# ActivePlayerSelector


def _ps(bus: str, status: str = "Playing") -> PlayerState:
    return PlayerState(bus_name=bus, status=status, track=TrackMeta(title="T"))


def test_active_player_selector_returns_none_when_no_players() -> None:
    assert ActivePlayerSelector.select({}, None, "spotify") is None


def test_active_player_selector_prefers_single_playing() -> None:
    players = {
        "org.mpris.MediaPlayer2.foo": _ps("org.mpris.MediaPlayer2.foo", "Paused"),
        "org.mpris.MediaPlayer2.bar": _ps("org.mpris.MediaPlayer2.bar", "Playing"),
    }
    assert (
        ActivePlayerSelector.select(players, None, "spotify")
        == "org.mpris.MediaPlayer2.bar"
    )


def test_active_player_selector_prefers_keyword_among_multiple_playing() -> None:
    players = {
        "org.mpris.MediaPlayer2.foo": _ps("org.mpris.MediaPlayer2.foo"),
        "org.mpris.MediaPlayer2.spotify": _ps("org.mpris.MediaPlayer2.spotify"),
    }
    assert (
        ActivePlayerSelector.select(players, None, "spotify")
        == "org.mpris.MediaPlayer2.spotify"
    )


def test_active_player_selector_uses_last_active_when_no_playing() -> None:
    players = {
        "org.mpris.MediaPlayer2.foo": _ps("org.mpris.MediaPlayer2.foo", "Paused"),
        "org.mpris.MediaPlayer2.bar": _ps("org.mpris.MediaPlayer2.bar", "Stopped"),
    }
    assert (
        ActivePlayerSelector.select(players, "org.mpris.MediaPlayer2.bar", "spotify")
        == "org.mpris.MediaPlayer2.bar"
    )


def test_active_player_selector_falls_back_to_first_when_no_preference() -> None:
    players = {
        "org.mpris.MediaPlayer2.foo": _ps("org.mpris.MediaPlayer2.foo", "Paused"),
    }
    result = ActivePlayerSelector.select(players, None, "")
    assert result == "org.mpris.MediaPlayer2.foo"
# PositionTracker


def test_position_tracker_seeked_calibrates_immediately() -> None:
    async def _run() -> None:
        tracker = PositionTracker(lambda _: asyncio.sleep(0, result=1200), TEST_CONFIG)
        await tracker.start()
        await tracker.set_active_player(BUS, "Playing", "track-A")
        await tracker.on_seeked(BUS, 3500)
        pos = await tracker.get_position_ms()
        await tracker.stop()
        assert pos >= 3500

    asyncio.run(_run())


def test_position_tracker_pause_stops_position_growth() -> None:
    async def _run() -> None:
        tracker = PositionTracker(lambda _: asyncio.sleep(0, result=0), TEST_CONFIG)
        await tracker.start()
        await tracker.set_active_player(BUS, "Playing", "track-A")
        await asyncio.sleep(0.08)
        before = await tracker.get_position_ms()
        await tracker.on_playback_status(BUS, "Paused")
        await asyncio.sleep(0.08)
        after = await tracker.get_position_ms()
        await tracker.stop()
        assert before > 0
        assert after - before < 20

    asyncio.run(_run())


def test_position_tracker_resume_via_playback_status_calibrates() -> None:
    async def _run() -> None:
        tracker = PositionTracker(lambda _: asyncio.sleep(0, result=50000), TEST_CONFIG)
        await tracker.start()
        await tracker.set_active_player(BUS, "Paused", "track-A")
        await tracker.on_playback_status(BUS, "Playing")
        pos = await tracker.get_position_ms()
        await tracker.stop()
        assert pos >= 50000

    asyncio.run(_run())


def test_position_tracker_paused_start_calibrates_initial_position() -> None:
    """set_active_player with Paused must still calibrate position — player may be mid-song."""

    async def _run() -> None:
        tracker = PositionTracker(lambda _: asyncio.sleep(0, result=45000), TEST_CONFIG)
        await tracker.start()
        await tracker.set_active_player(BUS, "Paused", "track-A")
        pos = await tracker.get_position_ms()
        await tracker.stop()
        assert pos >= 45000

    asyncio.run(_run())


def test_position_tracker_resume_via_set_active_player_calibrates() -> None:
    async def _run() -> None:
        tracker = PositionTracker(lambda _: asyncio.sleep(0, result=42000), TEST_CONFIG)
        await tracker.start()
        await tracker.set_active_player(BUS, "Paused", "track-A")
        await tracker.set_active_player(BUS, "Playing", "track-A")
        pos = await tracker.get_position_ms()
        await tracker.stop()
        assert pos >= 42000

    asyncio.run(_run())
# ControlServer and ControlClient


def test_control_server_and_client_roundtrip(tmp_path: Path) -> None:
    async def _run() -> None:
        class _Session:
            def __init__(self) -> None:
                self.offset = 0

            def handle_offset(self, delta: int) -> dict:
                self.offset += delta
                return {"ok": True, "offset_ms": self.offset}

            def handle_status(self) -> dict:
                return {"ok": True, "offset_ms": self.offset, "lyrics_status": "idle"}

        socket_path = tmp_path / "watch.sock"
        server = ControlServer(socket_path=str(socket_path))
        await server.start(_Session())  # type: ignore
        client = ControlClient(socket_path=str(socket_path))
        r1 = await client._send_async({"cmd": "offset", "delta": 200})
        r2 = await client._send_async({"cmd": "status"})
        await server.stop()
        assert r1 == {"ok": True, "offset_ms": 200}
        assert r2["ok"] is True
        assert r2["offset_ms"] == 200

    asyncio.run(_run())
# PipeOutput


def _pipe_state(
    status: WatchStatus,
    lyrics: Optional[LRCData] = None,
    position_ms: int = 0,
    offset_ms: int = 0,
    track: Optional[TrackMeta] = None,
) -> WatchState:
    return WatchState(
        track=track,
        lyrics=LyricView.from_lrc(lyrics) if lyrics else None,
        position_ms=position_ms,
        offset_ms=offset_ms,
        status=status,
    )


def test_pipe_output_fetching_renders_status_window(capsys) -> None:
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(_pipe_state(WatchStatus.FETCHING))
    )
    assert capsys.readouterr().out == "\n[fetching...]\n\n"


def test_pipe_output_no_lyrics_renders_status_window(capsys) -> None:
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(_pipe_state(WatchStatus.NO_LYRICS))
    )
    assert capsys.readouterr().out == "\n[no lyrics]\n\n"


def test_pipe_output_idle_renders_status_window(capsys) -> None:
    asyncio.run(PipeOutput(before=1, after=1).on_state(_pipe_state(WatchStatus.IDLE)))
    assert capsys.readouterr().out == "\n[idle]\n\n"


def test_pipe_output_no_newline_mode(capsys) -> None:
    asyncio.run(
        PipeOutput(before=0, after=0, no_newline=True).on_state(
            _pipe_state(WatchStatus.FETCHING)
        )
    )
    assert capsys.readouterr().out == "[fetching...]"


def test_pipe_output_default_window_shows_current_line(capsys) -> None:
    lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
    asyncio.run(
        PipeOutput().on_state(_pipe_state(WatchStatus.OK, lrc, position_ms=2100))
    )
    assert capsys.readouterr().out == "b\n"


def test_pipe_output_context_window(capsys) -> None:
    lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(
            _pipe_state(WatchStatus.OK, lrc, position_ms=2100)
        )
    )
    assert capsys.readouterr().out == "a\nb\nc\n"


def test_pipe_output_before_region_empty_at_first_line(capsys) -> None:
    lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(
            _pipe_state(WatchStatus.OK, lrc, position_ms=1100)
        )
    )
    assert capsys.readouterr().out == "\na\nb\n"


def test_pipe_output_after_region_empty_at_last_line(capsys) -> None:
    lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(
            _pipe_state(WatchStatus.OK, lrc, position_ms=3100)
        )
    )
    assert capsys.readouterr().out == "b\nc\n\n"


def test_pipe_output_upcoming_lines_before_first_timestamp(capsys) -> None:
    lrc = LRCData("[00:02.00]a\n[00:03.00]b")
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(
            _pipe_state(WatchStatus.OK, lrc, position_ms=0)
        )
    )
    assert capsys.readouterr().out == "\n\na\n"


def test_pipe_output_offset_ms_shifts_effective_position(capsys) -> None:
    lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
    asyncio.run(
        PipeOutput().on_state(
            _pipe_state(WatchStatus.OK, lrc, position_ms=1000, offset_ms=1500)
        )
    )
    # effective = 2500 ms → line b
    assert capsys.readouterr().out == "b\n"


def test_pipe_output_repeated_text_uses_correct_timed_occurrence(capsys) -> None:
    lrc = LRCData("[00:01.00]A\n[00:02.00]X\n[00:03.00]B\n[00:04.00]X\n[00:05.00]C")
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(
            _pipe_state(WatchStatus.OK, lrc, position_ms=4100)
        )
    )
    assert capsys.readouterr().out == "B\nX\nC\n"
# PrintOutput


def _ok_state(lyrics: LRCData, track: Optional[TrackMeta] = None) -> WatchState:
    return WatchState(
        track=track or TrackMeta(title="Song", artist="Artist"),
        lyrics=LyricView.from_lrc(lyrics),
        position_ms=0,
        offset_ms=0,
        status=WatchStatus.OK,
    )


def _status_state(status: WatchStatus, track: Optional[TrackMeta] = None) -> WatchState:
    return WatchState(
        track=track or TrackMeta(title="Song", artist="Artist"),
        lyrics=None,
        position_ms=0,
        offset_ms=0,
        status=status,
    )


def test_print_output_emits_lrc_on_ok(capsys) -> None:
    asyncio.run(
        PrintOutput().on_state(_ok_state(LRCData("[00:01.00]Hello\n[00:02.00]World")))
    )
    assert capsys.readouterr().out.startswith("[00:01.00]")


def test_print_output_plain_strips_tags(capsys) -> None:
    asyncio.run(
        PrintOutput(plain=True).on_state(
            _ok_state(LRCData("[00:01.00]Hello\n[00:02.00]World"))
        )
    )
    out = capsys.readouterr().out
    assert "[" not in out
    assert "Hello" in out


def test_print_output_plain_with_unsynced_lyrics(capsys) -> None:
    asyncio.run(PrintOutput(plain=True).on_state(_ok_state(LRCData("Hello\nWorld"))))
    out = capsys.readouterr().out
    assert "Hello" in out
    assert "[" not in out


def test_print_output_no_lyrics_emits_blank_line(capsys) -> None:
    asyncio.run(PrintOutput().on_state(_status_state(WatchStatus.NO_LYRICS)))
    assert capsys.readouterr().out == "\n"


def test_print_output_fetching_emits_nothing(capsys) -> None:
    asyncio.run(PrintOutput().on_state(_status_state(WatchStatus.FETCHING)))
    assert capsys.readouterr().out == ""


def test_print_output_idle_emits_nothing(capsys) -> None:
    asyncio.run(PrintOutput().on_state(_status_state(WatchStatus.IDLE)))
    assert capsys.readouterr().out == ""


def test_print_output_is_stateless(capsys) -> None:
    """View has no internal deduplication — emits on every call."""
    output = PrintOutput()
    state = _ok_state(LRCData("[00:01.00]Hello"))
    asyncio.run(output.on_state(state))
    asyncio.run(output.on_state(state))
    lines = [ln for ln in capsys.readouterr().out.splitlines() if ln]
    assert len(lines) == 2


def test_print_output_position_sensitive_is_false() -> None:
    assert PrintOutput.position_sensitive is False
# WatchCoordinator


class _CaptureFetcher:
    def __init__(self) -> None:
        self.requested: list[str] = []

    def request(self, track: TrackMeta) -> None:
        self.requested.append(track.display_name())

    async def stop(self) -> None:
        pass


def _make_coordinator(output: Optional[BaseOutput] = None) -> WatchCoordinator:
    class _Manager:
        def fetch_for_track(self, *_a, **_kw):
            return None

    class _NullOutput(BaseOutput):
        async def on_state(self, state: WatchState) -> None:
            pass

    session = WatchCoordinator(
        _Manager(),  # type: ignore
        output or _NullOutput(),
        player_hint=None,
        config=TEST_CONFIG,
    )
    session._tracker = PositionTracker(
        lambda _bus: asyncio.sleep(0, result=0),
        TEST_CONFIG,
    )
    return session


def _pstate(status: str = "Playing", title: str = "Song") -> PlayerState:
    return PlayerState(
        bus_name=BUS,
        status=status,
        track=TrackMeta(title=title, artist="Artist"),
    )


def test_coordinator_fetches_on_initial_player() -> None:
    async def _run() -> None:
        session = _make_coordinator()
        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Playing")}
        session._on_player_change()
        await asyncio.sleep(0)
        assert fetcher.requested == ["Artist - Song"]
        assert session._model.status == WatchStatus.FETCHING

    asyncio.run(_run())


def test_coordinator_fetches_while_paused() -> None:
    """Fetch starts immediately even when player is paused — no wait for resume."""

    async def _run() -> None:
        session = _make_coordinator()
        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Paused")}
        session._on_player_change()
        await asyncio.sleep(0)
        assert fetcher.requested == ["Artist - Song"]

    asyncio.run(_run())


def test_coordinator_paused_start_emits_correct_line_after_fetch() -> None:
    """After fetch completes with a mid-song paused player, the current lyric line must render."""

    async def _run() -> None:
        received: list[WatchState] = []

        class _CaptureOutput(BaseOutput):
            position_sensitive = True

            async def on_state(self, state: WatchState) -> None:
                received.append(state)

        class _Manager:
            def fetch_for_track(self, *_a, **_kw):
                return None

        PAUSED_MS = 45000
        lrc = LRCData("[00:43.00]a\n[00:44.00]b\n[00:46.00]c")

        session = WatchCoordinator(
            _Manager(),  # type: ignore
            _CaptureOutput(),
            player_hint=None,
            config=TEST_CONFIG,
        )
        session._tracker = PositionTracker(
            lambda _bus: asyncio.sleep(0, result=PAUSED_MS),
            TEST_CONFIG,
        )
        await session._tracker.start()

        # Calibrate tracker directly (tracker-level behavior already covered by
        # test_position_tracker_paused_start_calibrates_initial_position)
        await session._tracker.set_active_player(BUS, "Paused", "Artist - Song")

        # Put model in the state _on_player_change would have produced
        session._model.active_player = BUS
        session._model.active_track_key = "Artist - Song"
        session._model.status = WatchStatus.FETCHING
        session._player_monitor.players = {BUS: _pstate("Paused")}
        session._last_emit_signature = (
            "status",
            WatchStatus.FETCHING,
            BUS,
            "Artist - Song",
        )

        await session._on_lyrics_update(lrc)

        last_ok = next(
            (s for s in reversed(received) if s.status == WatchStatus.OK), None
        )
        assert last_ok is not None, "no OK state emitted after lyrics loaded"
        assert last_ok.position_ms >= PAUSED_MS

        await session._tracker.stop()

    asyncio.run(_run())


def test_coordinator_fetches_on_track_change() -> None:
    async def _run() -> None:
        session = _make_coordinator()
        session._model.active_player = BUS
        session._model.active_track_key = "Artist - Old Song"
        session._model.set_lyrics(LRCData("[00:01.00]old"))
        session._model.status = WatchStatus.OK

        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Playing", title="New Song")}
        session._on_player_change()
        await asyncio.sleep(0)

        assert fetcher.requested == ["Artist - New Song"]
        assert session._model.lyrics is None

    asyncio.run(_run())


def test_coordinator_no_refetch_on_calibration_no_lyrics() -> None:
    """Calibration with same player/track and no_lyrics must NOT trigger a second fetch."""

    async def _run() -> None:
        session = _make_coordinator()
        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Playing")}
        session._on_player_change()
        await asyncio.sleep(0)
        assert len(fetcher.requested) == 1

        session._model.status = WatchStatus.NO_LYRICS
        session._on_player_change()
        await asyncio.sleep(0)
        assert len(fetcher.requested) == 1

    asyncio.run(_run())


def test_coordinator_no_fetch_when_lyrics_present() -> None:
    async def _run() -> None:
        session = _make_coordinator()
        session._model.active_player = BUS
        session._model.active_track_key = "Artist - Song"
        session._model.set_lyrics(LRCData("[00:01.00]line"))
        session._model.status = WatchStatus.OK

        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Playing")}
        session._on_player_change()
        await asyncio.sleep(0)

        assert fetcher.requested == []
        assert session._model.status == WatchStatus.OK

    asyncio.run(_run())


def test_coordinator_player_disappears_goes_idle() -> None:
    async def _run() -> None:
        session = _make_coordinator()
        session._model.active_player = BUS
        session._model.active_track_key = "Artist - Song"
        session._model.set_lyrics(LRCData("[00:01.00]line"))
        session._model.status = WatchStatus.OK

        session._player_monitor.players = {}
        session._on_player_change()
        await asyncio.sleep(0)

        assert session._model.status == WatchStatus.IDLE
        assert session._model.lyrics is None
        assert session._model.active_player is None

    asyncio.run(_run())


def test_coordinator_no_fetch_when_track_is_none() -> None:
    """Player present but reports no track metadata → no fetch, status NO_LYRICS."""

    async def _run() -> None:
        session = _make_coordinator()
        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {
            BUS: PlayerState(bus_name=BUS, status="Playing", track=None)
        }
        session._on_player_change()
        await asyncio.sleep(0)

        assert fetcher.requested == []
        assert session._model.status == WatchStatus.NO_LYRICS

    asyncio.run(_run())


def test_coordinator_emit_deduplicates_on_same_cursor() -> None:
    async def _run() -> None:
        counts = [0]

        class _CountOutput(BaseOutput):
            async def on_state(self, state: WatchState) -> None:
                counts[0] += 1

        session = _make_coordinator(_CountOutput())
        track = TrackMeta(title="Song", artist="Artist")
        session._model.active_player = BUS
        session._player_monitor.players = {
            BUS: PlayerState(bus_name=BUS, status="Playing", track=track)
        }
        session._model.set_lyrics(LRCData("[00:01.00]a\n[00:03.00]b"))
        session._model.status = WatchStatus.OK
        await session._tracker.set_active_player(BUS, "Playing", "Artist - Song")

        await session._emit_state()  # emits
|
||||||
|
await session._emit_state() # same cursor → no emit
|
||||||
|
assert counts[0] == 1
|
||||||
|
|
||||||
|
await session._tracker.on_seeked(BUS, 3200)
|
||||||
|
await session._emit_state() # cursor advanced → emits
|
||||||
|
assert counts[0] == 2
|
||||||
|
|
||||||
|
asyncio.run(_run())
|
||||||
|
|
||||||
|
|
||||||
|
def test_coordinator_position_insensitive_output_ignores_seeks() -> None:
|
||||||
|
"""With position_sensitive=False, seek events do not trigger re-emit."""
|
||||||
|
|
||||||
|
async def _run() -> None:
|
||||||
|
counts = [0]
|
||||||
|
|
||||||
|
class _CountPrint(PrintOutput):
|
||||||
|
async def on_state(self, state: WatchState) -> None:
|
||||||
|
counts[0] += 1
|
||||||
|
|
||||||
|
session = _make_coordinator(_CountPrint())
|
||||||
|
track = TrackMeta(title="Song", artist="Artist")
|
||||||
|
session._model.active_player = BUS
|
||||||
|
session._player_monitor.players = {
|
||||||
|
BUS: PlayerState(bus_name=BUS, status="Playing", track=track)
|
||||||
|
}
|
||||||
|
session._model.set_lyrics(LRCData("[00:01.00]a\n[00:03.00]b"))
|
||||||
|
session._model.status = WatchStatus.OK
|
||||||
|
|
||||||
|
await session._emit_state() # emits once
|
||||||
|
assert counts[0] == 1
|
||||||
|
|
||||||
|
await session._tracker.on_seeked(BUS, 3200)
|
||||||
|
await session._emit_state() # position fixed at 0 → same signature → no re-emit
|
||||||
|
assert counts[0] == 1
|
||||||
|
|
||||||
|
asyncio.run(_run())
|
||||||
@@ -43,7 +43,7 @@ wheels = [
 
 [[package]]
 name = "cyclopts"
-version = "4.10.1"
+version = "4.10.2"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "attrs" },
@@ -51,9 +51,9 @@ dependencies = [
     { name = "rich" },
     { name = "rich-rst" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/6c/c4/2ce2ca1451487dc7d59f09334c3fa1182c46cfcf0a2d5f19f9b26d53ac74/cyclopts-4.10.1.tar.gz", hash = "sha256:ad4e4bb90576412d32276b14a76f55d43353753d16217f2c3cd5bdceba7f15a0", size = 166623, upload-time = "2026-03-23T14:43:01.098Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/66/2c/fced34890f6e5a93a4b7afb2c71e8eee2a0719fb26193a0abf159ecb714d/cyclopts-4.10.2.tar.gz", hash = "sha256:d7b950457ef2563596d56331f80cbbbf86a2772535fb8b315c4f03bc7e6127f1", size = 166664, upload-time = "2026-04-08T23:57:45.805Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/8a/0b/2261922126b2e50c601fe22d7ff5194e0a4d50e654836260c0665e24d862/cyclopts-4.10.1-py3-none-any.whl", hash = "sha256:35f37257139380a386d9fe4475e1e7c87ca7795765ef4f31abba579fcfcb6ecd", size = 204331, upload-time = "2026-03-23T14:43:02.625Z" },
+    { url = "https://files.pythonhosted.org/packages/b4/bd/05055d8360cef0757d79367157f3b15c0a0715e81e08f86a04018ec045f0/cyclopts-4.10.2-py3-none-any.whl", hash = "sha256:a1f2d6f8f7afac9456b48f75a40b36658778ddc9c6d406b520d017ae32c990fe", size = 204314, upload-time = "2026-04-08T23:57:46.969Z" },
 ]
 
 [[package]]
@@ -153,7 +153,7 @@ wheels = [
 
 [[package]]
 name = "lrx-cli"
-version = "0.6.4"
+version = "0.7.9"
 source = { editable = "." }
 dependencies = [
     { name = "cyclopts" },
@@ -162,11 +162,12 @@ dependencies = [
     { name = "loguru" },
     { name = "mutagen" },
     { name = "platformdirs" },
-    { name = "python-dotenv" },
 ]
 
 [package.dev-dependencies]
 dev = [
+    { name = "poethepoet" },
+    { name = "pyright" },
     { name = "pytest" },
     { name = "ruff" },
 ]
@@ -178,12 +179,13 @@ requires-dist = [
    { name = "httpx", specifier = ">=0.28.1" },
    { name = "loguru", specifier = ">=0.7.3" },
    { name = "mutagen", specifier = ">=1.47.0" },
-    { name = "platformdirs", specifier = ">=4.9.4" },
-    { name = "python-dotenv", specifier = ">=1.2.2" },
+    { name = "platformdirs", specifier = ">=4.9.6" },
 ]
 
 [package.metadata.requires-dev]
 dev = [
+    { name = "poethepoet", specifier = ">=0.44.0" },
+    { name = "pyright", specifier = ">=1.1.406" },
     { name = "pytest", specifier = ">=9.0.2" },
     { name = "ruff", specifier = ">=0.15.8" },
 ]
@@ -218,6 +220,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/b0/7a/620f945b96be1f6ee357d211d5bf74ab1b7fe72a9f1525aafbfe3aee6875/mutagen-1.47.0-py3-none-any.whl", hash = "sha256:edd96f50c5907a9539d8e5bba7245f62c9f520aef333d13392a79a4f70aca719", size = 194391, upload-time = "2023-09-03T16:33:29.955Z" },
 ]
 
+[[package]]
+name = "nodeenv"
+version = "1.10.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/24/bf/d1bda4f6168e0b2e9e5958945e01910052158313224ada5ce1fb2e1113b8/nodeenv-1.10.0.tar.gz", hash = "sha256:996c191ad80897d076bdfba80a41994c2b47c68e224c542b48feba42ba00f8bb", size = 55611, upload-time = "2025-12-20T14:08:54.006Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/88/b2/d0896bdcdc8d28a7fc5717c305f1a861c26e18c05047949fb371034d98bd/nodeenv-1.10.0-py2.py3-none-any.whl", hash = "sha256:5bb13e3eed2923615535339b3c620e76779af4cb4c6a90deccc9e36b274d3827", size = 23438, upload-time = "2025-12-20T14:08:52.782Z" },
+]
+
 [[package]]
 name = "packaging"
 version = "26.0"
@@ -228,12 +239,21 @@ wheels = [
 ]
 
 [[package]]
-name = "platformdirs"
-version = "4.9.4"
+name = "pastel"
+version = "0.2.1"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/19/56/8d4c30c8a1d07013911a8fdbd8f89440ef9f08d07a1b50ab8ca8be5a20f9/platformdirs-4.9.4.tar.gz", hash = "sha256:1ec356301b7dc906d83f371c8f487070e99d3ccf9e501686456394622a01a934", size = 28737, upload-time = "2026-03-05T18:34:13.271Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/76/f1/4594f5e0fcddb6953e5b8fe00da8c317b8b41b547e2b3ae2da7512943c62/pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d", size = 7555, upload-time = "2020-09-16T19:21:12.43Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/63/d7/97f7e3a6abb67d8080dd406fd4df842c2be0efaf712d1c899c32a075027c/platformdirs-4.9.4-py3-none-any.whl", hash = "sha256:68a9a4619a666ea6439f2ff250c12a853cd1cbd5158d258bd824a7df6be2f868", size = 21216, upload-time = "2026-03-05T18:34:12.172Z" },
+    { url = "https://files.pythonhosted.org/packages/aa/18/a8444036c6dd65ba3624c63b734d3ba95ba63ace513078e1580590075d21/pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364", size = 5955, upload-time = "2020-09-16T19:21:11.409Z" },
+]
+
+[[package]]
+name = "platformdirs"
+version = "4.9.6"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/9f/4a/0883b8e3802965322523f0b200ecf33d31f10991d0401162f4b23c698b42/platformdirs-4.9.6.tar.gz", hash = "sha256:3bfa75b0ad0db84096ae777218481852c0ebc6c727b3168c1b9e0118e458cf0a", size = 29400, upload-time = "2026-04-09T00:04:10.812Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/75/a6/a0a304dc33b49145b21f4808d763822111e67d1c3a32b524a1baf947b6e1/platformdirs-4.9.6-py3-none-any.whl", hash = "sha256:e61adb1d5e5cb3441b4b7710bea7e4c12250ca49439228cc1021c00dcfac0917", size = 21348, upload-time = "2026-04-09T00:04:09.463Z" },
 ]
 
 [[package]]
@@ -246,17 +266,43 @@ wheels = [
 ]
 
 [[package]]
-name = "pygments"
-version = "2.19.2"
+name = "poethepoet"
+version = "0.44.0"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
+dependencies = [
+    { name = "pastel" },
+    { name = "pyyaml" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/a1/a4/e487662f12a5ecd2ac4d77f7697e4bda481953bb80032b158e5ab55173d4/poethepoet-0.44.0.tar.gz", hash = "sha256:c2667b513621788fb46482e371cdf81c0b04344e0e0bcb7aa8af45f84c2fce7b", size = 96040, upload-time = "2026-04-06T19:40:58.908Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
+    { url = "https://files.pythonhosted.org/packages/80/b7/503b7d3a51b0de9a329f1323048d166e309a97bb31bdc60e6acd11d2c71f/poethepoet-0.44.0-py3-none-any.whl", hash = "sha256:36d3d834708ed069ac1e4f8ed77915c55265b7b6e01aeb2fe617c9fe9cfd524a", size = 122873, upload-time = "2026-04-06T19:40:57.369Z" },
+]
+
+[[package]]
+name = "pygments"
+version = "2.20.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/c3/b2/bc9c9196916376152d655522fdcebac55e66de6603a76a02bca1b6414f6c/pygments-2.20.0.tar.gz", hash = "sha256:6757cd03768053ff99f3039c1a36d6c0aa0b263438fcab17520b30a303a82b5f", size = 4955991, upload-time = "2026-03-29T13:29:33.898Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/f4/7e/a72dd26f3b0f4f2bf1dd8923c85f7ceb43172af56d63c7383eb62b332364/pygments-2.20.0-py3-none-any.whl", hash = "sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176", size = 1231151, upload-time = "2026-03-29T13:29:30.038Z" },
+]
+
+[[package]]
+name = "pyright"
+version = "1.1.408"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "nodeenv" },
+    { name = "typing-extensions" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/74/b2/5db700e52554b8f025faa9c3c624c59f1f6c8841ba81ab97641b54322f16/pyright-1.1.408.tar.gz", hash = "sha256:f28f2321f96852fa50b5829ea492f6adb0e6954568d1caa3f3af3a5f555eb684", size = 4400578, upload-time = "2026-01-08T08:07:38.795Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/0c/82/a2c93e32800940d9573fb28c346772a14778b84ba7524e691b324620ab89/pyright-1.1.408-py3-none-any.whl", hash = "sha256:090b32865f4fdb1e0e6cd82bf5618480d48eecd2eb2e70f960982a3d9a4c17c1", size = 6399144, upload-time = "2026-01-08T08:07:37.082Z" },
+]
 ]
 
 [[package]]
 name = "pytest"
-version = "9.0.2"
+version = "9.0.3"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "colorama", marker = "sys_platform == 'win32'" },
|
|||||||
{ name = "pluggy" },
|
{ name = "pluggy" },
|
||||||
{ name = "pygments" },
|
{ name = "pygments" },
|
||||||
]
|
]
|
||||||
sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" }
|
sdist = { url = "https://files.pythonhosted.org/packages/7d/0d/549bd94f1a0a402dc8cf64563a117c0f3765662e2e668477624baeec44d5/pytest-9.0.3.tar.gz", hash = "sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c", size = 1572165, upload-time = "2026-04-07T17:16:18.027Z" }
|
||||||
wheels = [
|
wheels = [
|
||||||
{ url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" },
|
{ url = "https://files.pythonhosted.org/packages/d4/24/a372aaf5c9b7208e7112038812994107bc65a84cd00e0354a88c2c77a617/pytest-9.0.3-py3-none-any.whl", hash = "sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9", size = 375249, upload-time = "2026-04-07T17:16:16.13Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "python-dotenv"
|
name = "pyyaml"
|
||||||
version = "1.2.2"
|
version = "6.0.3"
|
||||||
source = { registry = "https://pypi.org/simple" }
|
source = { registry = "https://pypi.org/simple" }
|
||||||
sdist = { url = "https://files.pythonhosted.org/packages/82/ed/0301aeeac3e5353ef3d94b6ec08bbcabd04a72018415dcb29e588514bba8/python_dotenv-1.2.2.tar.gz", hash = "sha256:2c371a91fbd7ba082c2c1dc1f8bf89ca22564a087c2c287cd9b662adde799cf3", size = 50135, upload-time = "2026-03-01T16:00:26.196Z" }
|
sdist = { url = "https://files.pythonhosted.org/packages/05/8e/961c0007c59b8dd7729d542c61a4d537767a59645b82a0b521206e1e25c2/pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f", size = 130960, upload-time = "2025-09-25T21:33:16.546Z" }
|
||||||
wheels = [
|
wheels = [
|
||||||
{ url = "https://files.pythonhosted.org/packages/0b/d7/1959b9648791274998a9c3526f6d0ec8fd2233e4d4acce81bbae76b44b2a/python_dotenv-1.2.2-py3-none-any.whl", hash = "sha256:1d8214789a24de455a8b8bd8ae6fe3c6b69a5e3d64aa8a8e5d68e694bbcb285a", size = 22101, upload-time = "2026-03-01T16:00:25.09Z" },
|
{ url = "https://files.pythonhosted.org/packages/d1/11/0fd08f8192109f7169db964b5707a2f1e8b745d4e239b784a5a1dd80d1db/pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8", size = 181669, upload-time = "2025-09-25T21:32:23.673Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/b1/16/95309993f1d3748cd644e02e38b75d50cbc0d9561d21f390a76242ce073f/pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1", size = 173252, upload-time = "2025-09-25T21:32:25.149Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/50/31/b20f376d3f810b9b2371e72ef5adb33879b25edb7a6d072cb7ca0c486398/pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c", size = 767081, upload-time = "2025-09-25T21:32:26.575Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/49/1e/a55ca81e949270d5d4432fbbd19dfea5321eda7c41a849d443dc92fd1ff7/pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5", size = 841159, upload-time = "2025-09-25T21:32:27.727Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/74/27/e5b8f34d02d9995b80abcef563ea1f8b56d20134d8f4e5e81733b1feceb2/pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6", size = 801626, upload-time = "2025-09-25T21:32:28.878Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f9/11/ba845c23988798f40e52ba45f34849aa8a1f2d4af4b798588010792ebad6/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6", size = 753613, upload-time = "2025-09-25T21:32:30.178Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/3d/e0/7966e1a7bfc0a45bf0a7fb6b98ea03fc9b8d84fa7f2229e9659680b69ee3/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be", size = 794115, upload-time = "2025-09-25T21:32:31.353Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/de/94/980b50a6531b3019e45ddeada0626d45fa85cbe22300844a7983285bed3b/pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26", size = 137427, upload-time = "2025-09-25T21:32:32.58Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/97/c9/39d5b874e8b28845e4ec2202b5da735d0199dbe5b8fb85f91398814a9a46/pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c", size = 154090, upload-time = "2025-09-25T21:32:33.659Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/73/e8/2bdf3ca2090f68bb3d75b44da7bbc71843b19c9f2b9cb9b0f4ab7a5a4329/pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb", size = 140246, upload-time = "2025-09-25T21:32:34.663Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/9d/8c/f4bd7f6465179953d3ac9bc44ac1a8a3e6122cf8ada906b4f96c60172d43/pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac", size = 181814, upload-time = "2025-09-25T21:32:35.712Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/bd/9c/4d95bb87eb2063d20db7b60faa3840c1b18025517ae857371c4dd55a6b3a/pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310", size = 173809, upload-time = "2025-09-25T21:32:36.789Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/92/b5/47e807c2623074914e29dabd16cbbdd4bf5e9b2db9f8090fa64411fc5382/pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7", size = 766454, upload-time = "2025-09-25T21:32:37.966Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/02/9e/e5e9b168be58564121efb3de6859c452fccde0ab093d8438905899a3a483/pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788", size = 836355, upload-time = "2025-09-25T21:32:39.178Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/88/f9/16491d7ed2a919954993e48aa941b200f38040928474c9e85ea9e64222c3/pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5", size = 794175, upload-time = "2025-09-25T21:32:40.865Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/dd/3f/5989debef34dc6397317802b527dbbafb2b4760878a53d4166579111411e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764", size = 755228, upload-time = "2025-09-25T21:32:42.084Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35", size = 789194, upload-time = "2025-09-25T21:32:43.362Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/23/20/bb6982b26a40bb43951265ba29d4c246ef0ff59c9fdcdf0ed04e0687de4d/pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac", size = 156429, upload-time = "2025-09-25T21:32:57.844Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f4/f4/a4541072bb9422c8a883ab55255f918fa378ecf083f5b85e87fc2b4eda1b/pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3", size = 143912, upload-time = "2025-09-25T21:32:59.247Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/7c/f9/07dd09ae774e4616edf6cda684ee78f97777bdd15847253637a6f052a62f/pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3", size = 189108, upload-time = "2025-09-25T21:32:44.377Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/4e/78/8d08c9fb7ce09ad8c38ad533c1191cf27f7ae1effe5bb9400a46d9437fcf/pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba", size = 183641, upload-time = "2025-09-25T21:32:45.407Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/7b/5b/3babb19104a46945cf816d047db2788bcaf8c94527a805610b0289a01c6b/pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c", size = 831901, upload-time = "2025-09-25T21:32:48.83Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/8b/cc/dff0684d8dc44da4d22a13f35f073d558c268780ce3c6ba1b87055bb0b87/pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702", size = 861132, upload-time = "2025-09-25T21:32:50.149Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/b1/5e/f77dc6b9036943e285ba76b49e118d9ea929885becb0a29ba8a7c75e29fe/pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c", size = 839261, upload-time = "2025-09-25T21:32:51.808Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/ce/88/a9db1376aa2a228197c58b37302f284b5617f56a5d959fd1763fb1675ce6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065", size = 805272, upload-time = "2025-09-25T21:32:52.941Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/da/92/1446574745d74df0c92e6aa4a7b0b3130706a4142b2d1a5869f2eaa423c6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65", size = 829923, upload-time = "2025-09-25T21:32:54.537Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f0/7a/1c7270340330e575b92f397352af856a8c06f230aa3e76f86b39d01b416a/pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9", size = 174062, upload-time = "2025-09-25T21:32:55.767Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
@@ -307,27 +380,36 @@ wheels = [
 
 [[package]]
 name = "ruff"
-version = "0.15.8"
+version = "0.15.10"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/14/b0/73cf7550861e2b4824950b8b52eebdcc5adc792a00c514406556c5b80817/ruff-0.15.8.tar.gz", hash = "sha256:995f11f63597ee362130d1d5a327a87cb6f3f5eae3094c620bcc632329a4d26e", size = 4610921, upload-time = "2026-03-26T18:39:38.675Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/e7/d9/aa3f7d59a10ef6b14fe3431706f854dbf03c5976be614a9796d36326810c/ruff-0.15.10.tar.gz", hash = "sha256:d1f86e67ebfdef88e00faefa1552b5e510e1d35f3be7d423dc7e84e63788c94e", size = 4631728, upload-time = "2026-04-09T14:06:09.884Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/4a/92/c445b0cd6da6e7ae51e954939cb69f97e008dbe750cfca89b8cedc081be7/ruff-0.15.8-py3-none-linux_armv6l.whl", hash = "sha256:cbe05adeba76d58162762d6b239c9056f1a15a55bd4b346cfd21e26cd6ad7bc7", size = 10527394, upload-time = "2026-03-26T18:39:41.566Z" },
+    { url = "https://files.pythonhosted.org/packages/eb/00/a1c2fdc9939b2c03691edbda290afcd297f1f389196172826b03d6b6a595/ruff-0.15.10-py3-none-linux_armv6l.whl", hash = "sha256:0744e31482f8f7d0d10a11fcbf897af272fefdfcb10f5af907b18c2813ff4d5f", size = 10563362, upload-time = "2026-04-09T14:06:21.189Z" },
-    { url = "https://files.pythonhosted.org/packages/eb/92/f1c662784d149ad1414cae450b082cf736430c12ca78367f20f5ed569d65/ruff-0.15.8-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:d3e3d0b6ba8dca1b7ef9ab80a28e840a20070c4b62e56d675c24f366ef330570", size = 10905693, upload-time = "2026-03-26T18:39:30.364Z" },
+    { url = "https://files.pythonhosted.org/packages/5c/15/006990029aea0bebe9d33c73c3e28c80c391ebdba408d1b08496f00d422d/ruff-0.15.10-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b1e7c16ea0ff5a53b7c2df52d947e685973049be1cdfe2b59a9c43601897b22e", size = 10951122, upload-time = "2026-04-09T14:06:02.236Z" },
-    { url = "https://files.pythonhosted.org/packages/ca/f2/7a631a8af6d88bcef997eb1bf87cc3da158294c57044aafd3e17030613de/ruff-0.15.8-py3-none-macosx_11_0_arm64.whl", hash = "sha256:6ee3ae5c65a42f273f126686353f2e08ff29927b7b7e203b711514370d500de3", size = 10323044, upload-time = "2026-03-26T18:39:33.37Z" },
+    { url = "https://files.pythonhosted.org/packages/f2/c0/4ac978fe874d0618c7da647862afe697b281c2806f13ce904ad652fa87e4/ruff-0.15.10-py3-none-macosx_11_0_arm64.whl", hash = "sha256:93cc06a19e5155b4441dd72808fdf84290d84ad8a39ca3b0f994363ade4cebb1", size = 10314005, upload-time = "2026-04-09T14:06:00.026Z" },
-    { url = "https://files.pythonhosted.org/packages/67/18/1bf38e20914a05e72ef3b9569b1d5c70a7ef26cd188d69e9ca8ef588d5bf/ruff-0.15.8-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fdce027ada77baa448077ccc6ebb2fa9c3c62fd110d8659d601cf2f475858d94", size = 10629135, upload-time = "2026-03-26T18:39:44.142Z" },
+    { url = "https://files.pythonhosted.org/packages/da/73/c209138a5c98c0d321266372fc4e33ad43d506d7e5dd817dd89b60a8548f/ruff-0.15.10-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83e1dd04312997c99ea6965df66a14fb4f03ba978564574ffc68b0d61fd3989e", size = 10643450, upload-time = "2026-04-09T14:05:42.137Z" },
-    { url = "https://files.pythonhosted.org/packages/d2/e9/138c150ff9af60556121623d41aba18b7b57d95ac032e177b6a53789d279/ruff-0.15.8-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:12e617fc01a95e5821648a6df341d80456bd627bfab8a829f7cfc26a14a4b4a3", size = 10348041, upload-time = "2026-03-26T18:39:52.178Z" },
+    { url = "https://files.pythonhosted.org/packages/ec/76/0deec355d8ec10709653635b1f90856735302cb8e149acfdf6f82a5feb70/ruff-0.15.10-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8154d43684e4333360fedd11aaa40b1b08a4e37d8ffa9d95fee6fa5b37b6fab1", size = 10379597, upload-time = "2026-04-09T14:05:49.984Z" },
-    { url = "https://files.pythonhosted.org/packages/02/f1/5bfb9298d9c323f842c5ddeb85f1f10ef51516ac7a34ba446c9347d898df/ruff-0.15.8-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:432701303b26416d22ba696c39f2c6f12499b89093b61360abc34bcc9bf07762", size = 11121987, upload-time = "2026-03-26T18:39:55.195Z" },
+    { url = "https://files.pythonhosted.org/packages/dc/be/86bba8fc8798c081e28a4b3bb6d143ccad3fd5f6f024f02002b8f08a9fa3/ruff-0.15.10-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ab88715f3a6deb6bde6c227f3a123410bec7b855c3ae331b4c006189e895cef", size = 11146645, upload-time = "2026-04-09T14:06:12.246Z" },
-    { url = "https://files.pythonhosted.org/packages/10/11/6da2e538704e753c04e8d86b1fc55712fdbdcc266af1a1ece7a51fff0d10/ruff-0.15.8-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d910ae974b7a06a33a057cb87d2a10792a3b2b3b35e33d2699fdf63ec8f6b17a", size = 11951057, upload-time = "2026-03-26T18:39:19.18Z" },
+    { url = "https://files.pythonhosted.org/packages/a8/89/140025e65911b281c57be1d385ba1d932c2366ca88ae6663685aed8d4881/ruff-0.15.10-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a768ff5969b4f44c349d48edf4ab4f91eddb27fd9d77799598e130fb628aa158", size = 12030289, upload-time = "2026-04-09T14:06:04.776Z" },
-    { url = "https://files.pythonhosted.org/packages/83/f0/c9208c5fd5101bf87002fed774ff25a96eea313d305f1e5d5744698dc314/ruff-0.15.8-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2033f963c43949d51e6fdccd3946633c6b37c484f5f98c3035f49c27395a8ab8", size = 11464613, upload-time = "2026-03-26T18:40:06.301Z" },
+    { url = "https://files.pythonhosted.org/packages/88/de/ddacca9545a5e01332567db01d44bd8cf725f2db3b3d61a80550b48308ea/ruff-0.15.10-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ee3ef42dab7078bda5ff6a1bcba8539e9857deb447132ad5566a038674540d0", size = 11496266, upload-time = "2026-04-09T14:05:55.485Z" },
-    { url = "https://files.pythonhosted.org/packages/f8/22/d7f2fabdba4fae9f3b570e5605d5eb4500dcb7b770d3217dca4428484b17/ruff-0.15.8-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f29b989a55572fb885b77464cf24af05500806ab4edf9a0fd8977f9759d85b1", size = 11257557, upload-time = "2026-03-26T18:39:57.972Z" },
+    { url = "https://files.pythonhosted.org/packages/bc/bb/7ddb00a83760ff4a83c4e2fc231fd63937cc7317c10c82f583302e0f6586/ruff-0.15.10-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:51cb8cc943e891ba99989dd92d61e29b1d231e14811db9be6440ecf25d5c1609", size = 11256418, upload-time = "2026-04-09T14:05:57.69Z" },
-    { url = "https://files.pythonhosted.org/packages/71/8c/382a9620038cf6906446b23ce8632ab8c0811b8f9d3e764f58bedd0c9a6f/ruff-0.15.8-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:ac51d486bf457cdc985a412fb1801b2dfd1bd8838372fc55de64b1510eff4bec", size = 11169440, upload-time = "2026-03-26T18:39:22.205Z" },
+    { url = "https://files.pythonhosted.org/packages/dc/8d/55de0d35aacf6cd50b6ee91ee0f291672080021896543776f4170fc5c454/ruff-0.15.10-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:e59c9bdc056a320fb9ea1700a8d591718b8faf78af065484e801258d3a76bc3f", size = 11288416, upload-time = "2026-04-09T14:05:44.695Z" },
-    { url = "https://files.pythonhosted.org/packages/4d/0d/0994c802a7eaaf99380085e4e40c845f8e32a562e20a38ec06174b52ef24/ruff-0.15.8-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:c9861eb959edab053c10ad62c278835ee69ca527b6dcd72b47d5c1e5648964f6", size = 10605963, upload-time = "2026-03-26T18:39:46.682Z" },
|
{ url = "https://files.pythonhosted.org/packages/68/cf/9438b1a27426ec46a80e0a718093c7f958ef72f43eb3111862949ead3cc1/ruff-0.15.10-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:136c00ca2f47b0018b073f28cb5c1506642a830ea941a60354b0e8bc8076b151", size = 10621053, upload-time = "2026-04-09T14:05:52.782Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/19/aa/d624b86f5b0aad7cef6bbf9cd47a6a02dfdc4f72c92a337d724e39c9d14b/ruff-0.15.8-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:8d9a5b8ea13f26ae90838afc33f91b547e61b794865374f114f349e9036835fb", size = 10357484, upload-time = "2026-03-26T18:39:49.176Z" },
|
{ url = "https://files.pythonhosted.org/packages/4c/50/e29be6e2c135e9cd4cb15fbade49d6a2717e009dff3766dd080fcb82e251/ruff-0.15.10-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:8b80a2f3c9c8a950d6237f2ca12b206bccff626139be9fa005f14feb881a1ae8", size = 10378302, upload-time = "2026-04-09T14:06:14.361Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/35/c3/e0b7835d23001f7d999f3895c6b569927c4d39912286897f625736e1fd04/ruff-0.15.8-py3-none-musllinux_1_2_i686.whl", hash = "sha256:c2a33a529fb3cbc23a7124b5c6ff121e4d6228029cba374777bd7649cc8598b8", size = 10830426, upload-time = "2026-03-26T18:40:03.702Z" },
|
{ url = "https://files.pythonhosted.org/packages/18/2f/e0b36a6f99c51bb89f3a30239bc7bf97e87a37ae80aa2d6542d6e5150364/ruff-0.15.10-py3-none-musllinux_1_2_i686.whl", hash = "sha256:e3e53c588164dc025b671c9df2462429d60357ea91af7e92e9d56c565a9f1b07", size = 10850074, upload-time = "2026-04-09T14:06:16.581Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/f0/51/ab20b322f637b369383adc341d761eaaa0f0203d6b9a7421cd6e783d81b9/ruff-0.15.8-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:75e5cd06b1cf3f47a3996cfc999226b19aa92e7cce682dcd62f80d7035f98f49", size = 11345125, upload-time = "2026-03-26T18:39:27.799Z" },
|
{ url = "https://files.pythonhosted.org/packages/11/08/874da392558ce087a0f9b709dc6ec0d60cbc694c1c772dab8d5f31efe8cb/ruff-0.15.10-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:b0c52744cf9f143a393e284125d2576140b68264a93c6716464e129a3e9adb48", size = 11358051, upload-time = "2026-04-09T14:06:18.948Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/37/e6/90b2b33419f59d0f2c4c8a48a4b74b460709a557e8e0064cf33ad894f983/ruff-0.15.8-py3-none-win32.whl", hash = "sha256:bc1f0a51254ba21767bfa9a8b5013ca8149dcf38092e6a9eb704d876de94dc34", size = 10571959, upload-time = "2026-03-26T18:39:36.117Z" },
|
{ url = "https://files.pythonhosted.org/packages/e4/46/602938f030adfa043e67112b73821024dc79f3ab4df5474c25fa4c1d2d14/ruff-0.15.10-py3-none-win32.whl", hash = "sha256:d4272e87e801e9a27a2e8df7b21011c909d9ddd82f4f3281d269b6ba19789ca5", size = 10588964, upload-time = "2026-04-09T14:06:07.14Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/1f/a2/ef467cb77099062317154c63f234b8a7baf7cb690b99af760c5b68b9ee7f/ruff-0.15.8-py3-none-win_amd64.whl", hash = "sha256:04f79eff02a72db209d47d665ba7ebcad609d8918a134f86cb13dd132159fc89", size = 11743893, upload-time = "2026-03-26T18:39:25.01Z" },
|
{ url = "https://files.pythonhosted.org/packages/25/b6/261225b875d7a13b33a6d02508c39c28450b2041bb01d0f7f1a83d569512/ruff-0.15.10-py3-none-win_amd64.whl", hash = "sha256:28cb32d53203242d403d819fd6983152489b12e4a3ae44993543d6fe62ab42ed", size = 11745044, upload-time = "2026-04-09T14:05:39.473Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/15/e2/77be4fff062fa78d9b2a4dea85d14785dac5f1d0c1fb58ed52331f0ebe28/ruff-0.15.8-py3-none-win_arm64.whl", hash = "sha256:cf891fa8e3bb430c0e7fac93851a5978fc99c8fa2c053b57b118972866f8e5f2", size = 11048175, upload-time = "2026-03-26T18:40:01.06Z" },
|
{ url = "https://files.pythonhosted.org/packages/58/ed/dea90a65b7d9e69888890fb14c90d7f51bf0c1e82ad800aeb0160e4bacfd/ruff-0.15.10-py3-none-win_arm64.whl", hash = "sha256:601d1610a9e1f1c2165a4f561eeaa2e2ea1e97f3287c5aa258d3dab8b57c6188", size = 11035607, upload-time = "2026-04-09T14:05:47.593Z" },
|
||||||
 ]

 [[package]]
 name = "typing-extensions"
 version = "4.15.0"
 source = { registry = "https://pypi.org/simple" }
 sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" }
 wheels = [
     { url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
 ]

 [[package]]