Compare commits


25 Commits

Author SHA1 Message Date
Uyanide f30a51204f feat: include appversion in UA 2026-03-31 15:13:58 +02:00
Uyanide 8362fa71a4 fix: sp 2026-03-31 15:13:58 +02:00
Uyanide 5b5654f514 refactor: remove unnecessary list of fetcher methods 2026-03-31 06:45:57 +02:00
Uyanide ffd9fd0ea9 feat: add metadata enrichers & refactor 2026-03-31 06:42:30 +02:00
Uyanide 1b83b5933d feat: 'search' command no longer requires 'title' param 2026-03-31 06:42:29 +02:00
Uyanide a3e5c17d9b fix: request param and header of sp fetcher 2026-03-31 04:17:24 +02:00
Uyanide 02abfe636f feat: add player preference configuration and improve MPRIS player selection logic 2026-03-31 03:14:01 +02:00
Uyanide 1a301deb40 feat: export lyrics default to sidecar path 2026-03-31 03:14:01 +02:00
Uyanide 34dfe7d042 chore: bump version to 0.1.3 2026-03-31 02:15:39 +02:00
Uyanide 8c9678bbf2 feat: add qqmusic fetcher 2026-03-31 02:05:06 +02:00
Uyanide cf0cb1ab53 feat: replace typer with cycplots & improve cli 2026-03-30 18:48:42 +02:00
Uyanide bb72623446 update .gitignore 2026-03-30 01:06:54 +02:00
Uyanide cf3fe3d00e feat: enhance fuzzy matching and add artist normalization in cache.py 2026-03-28 07:35:29 +01:00
Uyanide a74bf885a2 feat: add exact metadata match for cache search in CacheSearchFetcher 2026-03-28 06:54:30 +01:00
Uyanide 05d7def249 feat: implement cache-search fetcher for cross-album fuzzy lookup 2026-03-28 06:21:31 +01:00
Uyanide 4182229ae2 🚨 lint 2026-03-27 12:52:45 +01:00
Uyanide 6c0b61e208 fix: URL decoding in local fetcher 2026-03-26 02:32:45 +01:00
Uyanide c07f8e0a82 feat: add offset handling for LRC time tags 2026-03-25 21:55:36 +01:00
Uyanide b9fa6c6705 fix: normalize time tags in fetched lrc (why [00:17:06]?) 2026-03-25 11:16:03 +01:00
Uyanide 6e50352934 feat: persist spo token 2026-03-25 10:53:01 +01:00
Uyanide 4dc4cd62b0 feat: successfully synced lyrics should never expire 2026-03-25 10:24:56 +01:00
Uyanide 9281df0f4c chore: add LICENSE 2026-03-25 06:03:06 +01:00
Uyanide 108084c020 chore: remove .vscode 2026-03-25 05:59:47 +01:00
Uyanide c93b0dce89 chore: add README 2026-03-25 05:58:54 +01:00
Uyanide 72d06e0aa9 init 2026-03-25 05:58:37 +01:00
91 changed files with 2802 additions and 11287 deletions
+2 -3
@@ -10,6 +10,5 @@ wheels/
 !.gitignore
 !.python-version
-TODO.md
-PENDING.md
-SOLVED.md
+*.md
+!README.md
+4 -8
@@ -1,11 +1,7 @@
-Copyright 2026 Uyanide
+Copyright 2026 Uyanide me@uyani.de
-Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
-3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+37 -140
@@ -1,191 +1,91 @@
-# LRX-CLI
+# lrcfetch
 > [!WARNING]
 >
 > This project is primarily provided for educational and experimental purposes.
 > It is not yet ready for production or commercial use and may violate the terms
 > of service (ToS) of third-party music platforms. Use of this software is at
 > your own risk; the authors provide no warranties and accept no liability for any
 > consequences arising from its use.
-A CLI tool for fetching LRC lyrics on Linux. Automatically detects the currently
-playing track via MPRIS/DBus and retrieves the best-matching lyrics from
-multiple sources, ranked by confidence scoring.
+A CLI tool for fetching LRC lyrics on Linux. Automatically detects the currently playing track via MPRIS/DBus and retrieves synced lyrics (or plain lyrics with every time tag set to `[00:00.00]` when no synced result is found) from multiple sources.
 ## Sources
-Sources are queried in order. High-confidence results (exact match or manual
-insert) terminate the pipeline early; otherwise all sources are tried and the
-highest-confidence result wins.
+Lyrics are fetched using a fallback pipeline (first synced result wins):
 1. **Local** — sidecar `.lrc` files or embedded audio metadata (FLAC, MP3)
 2. **Cache Search** — fuzzy cross-album lookup in local cache
-3. **Spotify** — synced lyrics via Spotify's API
-   (requires `credentials.spotify_sp_dc` and Spotify trackid)
-4. **LRCLIB** — exact match from [lrclib.net](https://lrclib.net)
-   (requires full metadata)
-5. **Musixmatch (Spotify)** — Musixmatch API with Spotify trackid
-   (requires Spotify trackid)
-6. **LRCLIB Search** — fuzzy search from lrclib.net (requires at least a title)
-7. **Musixmatch** — Musixmatch API with metadata search (requires at least a title)
-8. **Netease** — Netease Cloud Music public API
-9. **QQ Music** — QQ Music via self-hosted API proxy
-   (requires `credentials.qq_music_api_url`; compatible with [tooplick/qq-music-api](https://github.com/tooplick/qq-music-api))
-> I'm aware that Spotify's lyrics are provided by Musixmatch, but the fact is
-> that Musixmatch's own search will yield different (and more) results than
-> Spotify's, so I treat them as separate sources.
+3. **Spotify** — synced lyrics via Spotify's API (requires `SPOTIFY_SP_DC`)
+4. **LRCLIB** — exact match from [lrclib.net](https://lrclib.net) (requires full metadata)
+5. **LRCLIB Search** — fuzzy search from lrclib.net (requires at least a title)
+6. **Netease** — Netease Cloud Music public API
+7. **QQ Music** — QQ Music via a self-hosted API proxy (requires `QQ_MUSIC_API_URL` pointing at an instance that provides the same interface as [tooplick/qq-music-api](https://github.com/tooplick/qq-music-api))
 ## Usage
-See `lrx --help` for full command reference. Common use cases:
+See `lrcfetch --help` for full command reference. Common use cases:
 - Fetch lyrics for the currently playing track:
 ```bash
-lrx fetch
+lrcfetch fetch
 ```
-  targeting a specific player and source:
+  or using a specific player or source:
 ```bash
-lrx fetch --player mpd --method lrclib-search
+lrcfetch --player mpd fetch --method lrclib-search
 ```
 - Search by metadata (bypasses MPRIS):
 ```bash
-lrx search -t "My Love" -a "Westlife"
-lrx search --trackid "5p0ietGkLNEqx1Z7ijkw5g"
+lrcfetch search -t "My Love" -a "Westlife"
+lrcfetch search --trackid "5p0ietGkLNEqx1Z7ijkw5g"
 ```
-  or by path to a local audio file:
+  or for a local file:
 ```bash
-lrx search --path "/path/to/Westlife - My Love.flac"
+lrcfetch search --path "/path/to/Westlife - My Love.flac"
 ```
-- Export to sidecar `.lrc` file (or `.txt` with `--plain`):
+- Export to sidecar `.lrc` file:
 ```bash
-lrx export
-lrx export --plain
-lrx export --output /path/to/lyrics.lrc
+lrcfetch export
 ```
-- Watch active player and stream lyrics continuously to stdout:
+  or to a custom path:
 ```bash
-lrx watch pipe
-lrx watch pipe --before 1 --after 2 # show context lines
-```
-  Control a running watch session:
-```bash
-lrx watch ctl status # print session status as JSON
-lrx watch ctl offset +200 # shift lyrics forward 200 ms
-lrx watch ctl offset -150
+lrcfetch export --output /path/to/lyrics.lrc
 ```
 - Cache management:
 ```bash
-lrx cache stats # statistics
-lrx cache query # inspect cache entries for current track
-lrx cache clear # clear cache of current track
-lrx cache clear --all # clear entire cache
-lrx cache confidence spotify 100 # manually set confidence for a source
-```
-  Shell completion (zsh/fish/bash):
-```bash
-lrx --install-completion
+lrcfetch cache stats # show cache statistics
+lrcfetch cache query # query cache for current track
+lrcfetch cache clear # clear cache of current track
+lrcfetch cache clear --all # clear entire cache
 ```
 ## Configuration
-Configuration is read from `~/.config/lrx-cli/config.toml`. The file is
-optional; all values have defaults. Unknown keys are rejected with an error.
+Set credentials via environment variables or a `.env` file:
-```toml
-[general]
-preferred_player = "" # preferred MPRIS player when multiple are active
-player_blacklist = ["firefox", "zen", "chrome", "chromium", "vivaldi", "edge", "opera", "mpv"] # bypassed by --player/-p
-http_timeout = 10.0 # seconds
+- `~/.config/lrcfetch/.env` — user-level
+- `.env` in the working directory — project-local
+- Shell environment — highest priority
-[credentials]
-spotify_sp_dc = "" # required for Spotify source
-musixmatch_usertoken = "" # optional; anonymous token fetched if empty
-qq_music_api_url = "" # required for QQ Music source
-[watch]
-debounce_ms = 400 # ms to wait after a track change before fetching
-calibration_interval_s = 3.0 # seconds between full MPRIS position recalibrations
-position_tick_ms = 50 # ms between local position ticks
-socket_path = "" # Unix socket path; defaults to <cache_dir>/watch.sock
+```env
+SPOTIFY_SP_DC=your_cookie_value
+QQ_MUSIC_API_URL=https://api.example.com
+LRCFETCH_PLAYER=spotify
 ```
 **Credentials:**
+- `SPOTIFY_SP_DC` — required for the Spotify source. Defaults to empty (Spotify source disabled).
+- `QQ_MUSIC_API_URL` — required for the QQ Music source. Defaults to empty (QQ Music source disabled).
+- `LRCFETCH_PLAYER` — preferred MPRIS player when multiple are active. Defaults to `spotify`. Only used when no `--player` flag is given and more than one player (or none) is currently playing.
-- `spotify_sp_dc` — `SP_DC` cookie from a logged-in Spotify web session. Required
-  for the Spotify source; leave empty to disable it.
-- `musixmatch_usertoken` — found at
-  [Curators Settings Page](https://curators.musixmatch.com/settings) → Login → "Copy debug info".
-  If empty, an anonymous token will be fetched at runtime, which is more likely to
-  hit rate limits.
-- `qq_music_api_url` — base URL of a self-hosted
-  [qq-music-api](https://github.com/tooplick/qq-music-api) (compatible) instance. Required
-  for the QQ Music source; leave empty to disable it.
-## Development
-Clone this repository:
+Shell completion (zsh/fish/bash):
 ```bash
-git clone https://github.com/Uyanide/lrx-cli.git
-cd lrx-cli
-```
-Create a virtual environment and install dependencies (for example, using uv):
-```bash
-uv venv .venv
-uv sync
-```
-Run tests (without network access):
-```bash
-uv run poe test
-```
-Run tests including **REAL EXTERNAL** API calls. Some of these are skipped
-if the required credentials are not configured as described [above](#configuration); this is useful
-for verifying that the lyric sources are still valid and working as expected:
-```bash
-uv run poe test-api
-```
-Other unified tasks:
-```bash
-uv run poe fmt # ruff format
-uv run poe lint # ruff check + pyright
-```
-Run the CLI:
-```bash
-uv run lrx --help
-```
-Install to user level (optional):
-```bash
-uv tool install .
+lrcfetch --install-completion
 ```
 ## Credits
@@ -195,6 +95,3 @@ uv tool install .
 - [librelyrics-spotify](https://github.com/libre-lyrics/librelyrics-spotify)
 - [NeteaseCloudMusicAPI](https://www.npmjs.com/package/NeteaseCloudMusicApi?activeTab=readme)
 - [qq-music-api](https://github.com/tooplick/qq-music-api)
-- [LyricsMPRIS-Rust](https://github.com/BEST8OY/LyricsMPRIS-Rust)
-- [onetagger](https://github.com/Marekkon5/onetagger)
-- [Rise Media Player](https://github.com/theimpactfulcompany/Rise-Media-Player)
+4
@@ -0,0 +1,4 @@
from lrcfetch.cli import run

if __name__ == "__main__":
    run()
+441
@@ -0,0 +1,441 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 10:18:03
Description: SQLite-based lyric cache with per-source storage and TTL expiration
"""
import re
import sqlite3
import hashlib
import time
import unicodedata
from typing import Optional
from loguru import logger
from .config import DB_PATH, DURATION_TOLERANCE_MS
from .models import TrackMeta, LyricResult, CacheStatus
# Punctuation to strip for fuzzy matching (ASCII + fullwidth + CJK brackets/symbols)
_PUNCT_RE = re.compile(
    r"[~!@#$%^&*()_+\-=\[\]{}|;:'\",.<>?/\\`"
    r"~!@#$%^&*()_+-=【】{}|;:'",。<>?/\`"
    r"「」『』《》〈〉〔〕·•‥…—–]"
)
_SPACE_RE = re.compile(r"\s+")
# feat./ft./featuring and everything after (case-insensitive, word boundary)
_FEAT_RE = re.compile(r"\s*(?:\bfeat\.?\b|\bft\.?\b|\bfeaturing\b).*", re.IGNORECASE)
# Multi-artist separators: /, &, ×, x (surrounded by spaces), ;, 、, vs.
_ARTIST_SEP_RE = re.compile(r"\s*(?:[/&;×、]|\bvs\.?\b|\bx\b)\s*", re.IGNORECASE)
def _normalize_for_match(s: str) -> str:
    """Normalize a string for fuzzy comparison.

    Lowercases, NFKC-normalizes (fullwidth → halfwidth), strips punctuation,
    and collapses whitespace.
    """
    s = unicodedata.normalize("NFKC", s).lower()
    s = _FEAT_RE.sub("", s)
    s = _PUNCT_RE.sub(" ", s)
    s = _SPACE_RE.sub(" ", s).strip()
    return s


def _normalize_artist(s: str) -> str:
    """Normalize an artist string: split by separators, normalize each, sort.

    Splits first (on /, &, ;, ×, 、, vs., x), then strips feat./ft./featuring
    from each part individually, so 'A feat. C / B' → ['a', 'b'] not just ['a'].
    """
    s = unicodedata.normalize("NFKC", s).lower()
    parts = _ARTIST_SEP_RE.split(s)
    normed = sorted(
        {_normalize_for_match(p) for p in parts if _FEAT_RE.sub("", p).strip()}
    )
    return "\0".join(normed) if normed else _normalize_for_match(s)
def _generate_key(track: TrackMeta, source: str) -> str:
    """Generate a unique cache key from track metadata and source.

    The key is scoped by source so that different fetchers can cache
    independently for the same track (e.g. Spotify synced vs Netease unsynced).
    """
    # Spotify tracks always use their track ID as the primary identifier
    if track.trackid and source == "spotify":
        return f"spotify:{track.trackid}"
    parts = []
    if track.artist:
        parts.append(track.artist)
    if track.title:
        parts.append(track.title)
    if track.album:
        parts.append(track.album)
    if track.length:
        parts.append(str(track.length))
    # Fall back to URL for local files
    if not parts and track.url:
        return f"{source}:url:{track.url}"
    if not parts:
        raise ValueError("Insufficient metadata to generate cache key")
    raw = "|".join(parts)
    digest = hashlib.sha256(raw.encode()).hexdigest()
    return f"{source}:{digest}"
class CacheEngine:
    def __init__(self, db_path: str = DB_PATH):
        self.db_path = db_path
        self._init_db()

    def _init_db(self) -> None:
        """Create or migrate the cache table."""
        with sqlite3.connect(self.db_path) as conn:
            conn.execute("""
                CREATE TABLE IF NOT EXISTS cache (
                    key TEXT PRIMARY KEY,
                    source TEXT NOT NULL,
                    status TEXT NOT NULL,
                    lyrics TEXT,
                    created_at INTEGER NOT NULL,
                    expires_at INTEGER,
                    artist TEXT,
                    title TEXT,
                    album TEXT,
                    length INTEGER
                )
            """)
            # Migration: add length column if missing
            cols = {r[1] for r in conn.execute("PRAGMA table_info(cache)").fetchall()}
            if "length" not in cols:
                conn.execute("ALTER TABLE cache ADD COLUMN length INTEGER")
            conn.commit()

    # Read

    def get(self, track: TrackMeta, source: str) -> Optional[LyricResult]:
        """Look up a cached result for *track* from *source*.

        Returns None on cache miss or expiration.
        """
        try:
            key = _generate_key(track, source)
        except ValueError:
            return None
        with sqlite3.connect(self.db_path) as conn:
            row = conn.execute(
                "SELECT status, lyrics, source, expires_at, length FROM cache WHERE key = ?",
                (key,),
            ).fetchone()
            if not row:
                logger.debug(f"Cache miss: {source} / {track.display_name()}")
                return None
            status_str, lyrics, src, expires_at, cached_length = row
            # Check TTL expiration
            if expires_at and expires_at < int(time.time()):
                logger.debug(f"Cache expired: {source} / {track.display_name()}")
                conn.execute("DELETE FROM cache WHERE key = ?", (key,))
                conn.commit()
                return None
            # Backfill length if the cached row is missing it
            if cached_length is None and track.length is not None:
                conn.execute(
                    "UPDATE cache SET length = ? WHERE key = ?",
                    (track.length, key),
                )
                conn.commit()
            remaining = expires_at - int(time.time()) if expires_at else None
            logger.debug(
                f"Cache hit: {source} / {track.display_name()} "
                f"[{status_str}, ttl={remaining}s]"
            )
            return LyricResult(
                status=CacheStatus(status_str),
                lyrics=lyrics,
                source=src,
                ttl=remaining,
            )
    def get_best(self, track: TrackMeta, sources: list[str]) -> Optional[LyricResult]:
        """Return the best cached result across *sources* (synced > unsynced).

        Skips negative statuses (NOT_FOUND, NETWORK_ERROR) — those are only
        consulted per-source to avoid redundant fetches.
        """
        best: Optional[LyricResult] = None
        for src in sources:
            cached = self.get(track, src)
            if not cached:
                continue
            if cached.status == CacheStatus.SUCCESS_SYNCED:
                return cached  # Can't do better
            if cached.status == CacheStatus.SUCCESS_UNSYNCED and best is None:
                best = cached
        return best

    # Write

    def set(
        self,
        track: TrackMeta,
        source: str,
        result: LyricResult,
        ttl_seconds: Optional[int] = None,
    ) -> None:
        """Store a lyric result in the cache."""
        try:
            key = _generate_key(track, source)
        except ValueError:
            logger.warning("Cannot cache: insufficient track metadata.")
            return
        now = int(time.time())
        expires_at = now + ttl_seconds if ttl_seconds else None
        with sqlite3.connect(self.db_path) as conn:
            conn.execute(
                """INSERT OR REPLACE INTO cache
                   (key, source, status, lyrics, created_at, expires_at,
                    artist, title, album, length)
                   VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
                (
                    key,
                    source,
                    result.status.value,
                    result.lyrics,
                    now,
                    expires_at,
                    track.artist,
                    track.title,
                    track.album,
                    track.length,
                ),
            )
            conn.commit()
        logger.debug(
            f"Cached: {source} / {track.display_name()} "
            f"[{result.status.value}, ttl={ttl_seconds}s]"
        )
    # Delete

    def clear_all(self) -> None:
        """Remove every entry from the cache."""
        with sqlite3.connect(self.db_path) as conn:
            conn.execute("DELETE FROM cache")
            conn.commit()
        logger.info("Cache cleared.")

    def clear_track(self, track: TrackMeta) -> None:
        """Remove all cached entries (every source) for a single track."""
        conditions, params = self._track_where(track)
        if not conditions:
            logger.info(f"No cache entries found for {track.display_name()}.")
            return
        where = " AND ".join(conditions)
        with sqlite3.connect(self.db_path) as conn:
            cur = conn.execute(f"DELETE FROM cache WHERE {where}", params)
            conn.commit()
        if cur.rowcount:
            logger.info(
                f"Cleared {cur.rowcount} cache entries for {track.display_name()}."
            )
        else:
            logger.info(f"No cache entries found for {track.display_name()}.")

    def prune(self) -> int:
        """Remove all expired entries. Returns the number of rows deleted."""
        with sqlite3.connect(self.db_path) as conn:
            cur = conn.execute(
                "DELETE FROM cache WHERE expires_at IS NOT NULL AND expires_at < ?",
                (int(time.time()),),
            )
            conn.commit()
        count = cur.rowcount
        logger.info(f"Pruned {count} expired cache entries.")
        return count

    @staticmethod
    def _track_where(track: TrackMeta) -> tuple[list[str], list[str]]:
        """Build WHERE conditions to match a track across all sources."""
        conditions: list[str] = []
        params: list[str] = []
        if track.artist:
            conditions.append("artist = ?")
            params.append(track.artist)
        if track.title:
            conditions.append("title = ?")
            params.append(track.title)
        if track.album:
            conditions.append("album = ?")
            params.append(track.album)
        return conditions, params
    # Exact cross-source search

    def find_best_positive(self, track: TrackMeta) -> Optional[LyricResult]:
        """Find the best positive (synced/unsynced) cache entry for *track*.

        Uses exact metadata match (artist + title + album) across all sources.
        Returns synced if available, otherwise unsynced, or None.
        """
        conditions, params = self._track_where(track)
        if not conditions:
            return None
        now = int(time.time())
        conditions.append("status IN (?, ?)")
        params.extend(
            [CacheStatus.SUCCESS_SYNCED.value, CacheStatus.SUCCESS_UNSYNCED.value]
        )
        conditions.append("(expires_at IS NULL OR expires_at > ?)")
        params.append(str(now))
        where = " AND ".join(conditions)
        with sqlite3.connect(self.db_path) as conn:
            conn.row_factory = sqlite3.Row
            rows = conn.execute(
                f"SELECT status, lyrics, source FROM cache WHERE {where} "
                "ORDER BY CASE status WHEN ? THEN 0 ELSE 1 END LIMIT 1",
                params + [CacheStatus.SUCCESS_SYNCED.value],
            ).fetchall()
        if not rows:
            return None
        row = dict(rows[0])
        return LyricResult(
            status=CacheStatus(row["status"]),
            lyrics=row["lyrics"],
            source="cache-search",
        )

    # Fuzzy search

    def search_by_meta(
        self,
        artist: Optional[str],
        title: Optional[str],
        length: Optional[int] = None,
    ) -> list[dict]:
        """Search cache for lyrics matching artist/title with fuzzy normalization.

        Ignores album and source. Only returns positive results (synced/unsynced)
        that have not expired. When *length* is provided, filters by duration
        tolerance and sorts by closest match.
        """
        if not title:
            return []
        now = int(time.time())
        with sqlite3.connect(self.db_path) as conn:
            conn.row_factory = sqlite3.Row
            rows = conn.execute(
                """SELECT * FROM cache
                   WHERE status IN (?, ?)
                   AND (expires_at IS NULL OR expires_at > ?)""",
                (
                    CacheStatus.SUCCESS_SYNCED.value,
                    CacheStatus.SUCCESS_UNSYNCED.value,
                    now,
                ),
            ).fetchall()
        norm_title = _normalize_for_match(title)
        norm_artist = _normalize_artist(artist) if artist else None
        matches: list[dict] = []
        for row in rows:
            row_dict = dict(row)
            # Title must match
            row_title = row_dict.get("title") or ""
            if _normalize_for_match(row_title) != norm_title:
                continue
            # Artist must match if provided
            if norm_artist:
                row_artist = row_dict.get("artist") or ""
                if _normalize_artist(row_artist) != norm_artist:
                    continue
            matches.append(row_dict)
        # Duration filtering
        if length is not None and matches:
            scored = []
            for m in matches:
                row_len = m.get("length")
                if row_len is not None:
                    diff = abs(row_len - length)
                    if diff <= DURATION_TOLERANCE_MS:
                        scored.append((diff, m))
                else:
                    # No duration info in cache — still a candidate but lower priority
                    scored.append((DURATION_TOLERANCE_MS, m))
            scored.sort(
                key=lambda x: (
                    x[0],
                    x[1].get("status") != CacheStatus.SUCCESS_SYNCED.value,
                )
            )
            matches = [m for _, m in scored]
        return matches
    # Query / inspect

    def query_track(self, track: TrackMeta) -> list[dict]:
        """Return all cached rows for a given track (across all sources)."""
        conditions, params = self._track_where(track)
        if not conditions:
            return []
        where = " AND ".join(conditions)
        with sqlite3.connect(self.db_path) as conn:
            conn.row_factory = sqlite3.Row
            return [
                dict(r)
                for r in conn.execute(
                    f"SELECT * FROM cache WHERE {where}", params
                ).fetchall()
            ]

    def query_all(self) -> list[dict]:
        """Return every row in the cache table."""
        with sqlite3.connect(self.db_path) as conn:
            conn.row_factory = sqlite3.Row
            return [dict(r) for r in conn.execute("SELECT * FROM cache").fetchall()]

    def stats(self) -> dict:
        """Return aggregate cache statistics."""
        now = int(time.time())
        with sqlite3.connect(self.db_path) as conn:
            total = conn.execute("SELECT COUNT(*) FROM cache").fetchone()[0]
            expired = conn.execute(
                "SELECT COUNT(*) FROM cache WHERE expires_at IS NOT NULL AND expires_at < ?",
                (now,),
            ).fetchone()[0]
            by_status = dict(
                conn.execute(
                    "SELECT status, COUNT(*) FROM cache GROUP BY status"
                ).fetchall()
            )
            by_source = dict(
                conn.execute(
                    "SELECT source, COUNT(*) FROM cache GROUP BY source"
                ).fetchall()
            )
        return {
            "total": total,
            "expired": expired,
            "active": total - expired,
            "by_status": by_status,
            "by_source": by_source,
        }
+395
@@ -0,0 +1,395 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-26 02:04:39
Description: CLI interface
"""
import sys
import time
import os
from pathlib import Path
from typing import Annotated
from urllib.parse import quote
import cyclopts
from loguru import logger
from .config import enable_debug
from .models import TrackMeta, CacheStatus
from .mpris import get_current_track
from .core import LrcManager
from .fetchers import FetcherMethodType
from .lrc import get_sidecar_path
app = cyclopts.App(
    help="LRCFetch — Fetch line-synced lyrics for your music player.",
)
app.register_install_completion_command()
cache_app = cyclopts.App(name="cache", help="Manage the local SQLite cache.")
app.command(cache_app)
manager = LrcManager()

# Global state set by the meta launcher
_player: str | None = None


@app.meta.default
def launcher(
    *tokens: Annotated[str, cyclopts.Parameter(show=False, allow_leading_hyphen=True)],
    debug: Annotated[
        bool,
        cyclopts.Parameter(
            name=["--debug", "-d"], negative="", help="Enable debug logging."
        ),
    ] = False,
    player: Annotated[
        str | None,
        cyclopts.Parameter(
            name=["--player", "-p"],
            help="Target a specific MPRIS player using its DBus name or a portion thereof.",
        ),
    ] = None,
):
    global _player
    if debug:
        enable_debug()
    _player = player
    app(tokens)
# fetch
@app.command
def fetch(
    *,
    method: Annotated[
        FetcherMethodType | None,
        cyclopts.Parameter(help="Force a specific source."),
    ] = None,
    no_cache: Annotated[
        bool,
        cyclopts.Parameter(
            name="--no-cache", negative="", help="Bypass the cache for this request."
        ),
    ] = False,
    only_synced: Annotated[
        bool,
        cyclopts.Parameter(
            name="--only-synced", negative="", help="Only accept synced (timed) lyrics."
        ),
    ] = False,
):
    """Fetch and print lyrics for the currently playing track."""
    track = get_current_track(_player)
    if not track:
        logger.error("No active playing track found.")
        sys.exit(1)
    logger.info(f"Track: {track.display_name()}")
    result = manager.fetch_for_track(track, force_method=method, bypass_cache=no_cache)
    if not result or not result.lyrics:
        logger.error("No lyrics found.")
        sys.exit(1)
    if only_synced and result.status != CacheStatus.SUCCESS_SYNCED:
        logger.error("Only unsynced lyrics available (--only-synced requested).")
        sys.exit(1)
    print(result.lyrics)
# search
@app.command
def search(
    *,
    title: Annotated[
        str | None, cyclopts.Parameter(name=["--title", "-t"], help="Track title.")
    ] = None,
    artist: Annotated[
        str | None, cyclopts.Parameter(name=["--artist", "-a"], help="Artist name.")
    ] = None,
    album: Annotated[str | None, cyclopts.Parameter(help="Album name.")] = None,
    trackid: Annotated[str | None, cyclopts.Parameter(help="Spotify track ID.")] = None,
    length: Annotated[
        int | None,
        cyclopts.Parameter(
            name=["--length", "-l"], help="Track duration in milliseconds."
        ),
    ] = None,
    url: Annotated[
        str | None,
        cyclopts.Parameter(
            help="Local file URL (file:///...). Mutually exclusive with --path."
        ),
    ] = None,
    path: Annotated[
        str | None,
        cyclopts.Parameter(
            name=["--path"],
            help="Local audio file path. Mutually exclusive with --url.",
        ),
    ] = None,
    method: Annotated[
        FetcherMethodType | None, cyclopts.Parameter(help="Force a specific source.")
    ] = None,
    no_cache: Annotated[
        bool,
        cyclopts.Parameter(
            name="--no-cache", negative="", help="Bypass the cache for this request."
        ),
    ] = False,
    only_synced: Annotated[
        bool,
        cyclopts.Parameter(
            name="--only-synced", negative="", help="Only accept synced (timed) lyrics."
        ),
    ] = False,
):
    """Search for lyrics by metadata (bypasses MPRIS)."""
    if url and path:
        logger.error("--url and --path are mutually exclusive.")
        sys.exit(1)
    if path:
        resolved = str(Path(path).resolve())
        url = "file://" + quote(resolved, safe="/")
    track = TrackMeta(
        title=title,
        artist=artist,
        album=album,
        trackid=trackid,
        length=length,
        url=url,
    )
    logger.info(f"Track: {track.display_name()}")
    result = manager.fetch_for_track(track, force_method=method, bypass_cache=no_cache)
    if not result or not result.lyrics:
        logger.error("No lyrics found.")
        sys.exit(1)
    if only_synced and result.status != CacheStatus.SUCCESS_SYNCED:
        logger.error("Only unsynced lyrics available (--only-synced requested).")
        sys.exit(1)
    print(result.lyrics)
# export
@app.command
def export(
    *,
    output: Annotated[
        str | None,
        cyclopts.Parameter(
            name=["--output", "-o"],
            help="Output file path (default: same directory as audio file with .lrc extension, or current directory if not available).",
        ),
    ] = None,
    method: Annotated[
        FetcherMethodType | None, cyclopts.Parameter(help="Force a specific source.")
    ] = None,
    no_cache: Annotated[
        bool, cyclopts.Parameter(name="--no-cache", negative="", help="Bypass cache.")
    ] = False,
    overwrite: Annotated[
        bool,
        cyclopts.Parameter(
            name=["--overwrite", "-f"], negative="", help="Overwrite existing file."
        ),
    ] = False,
):
    """Export lyrics of the current track to a .lrc file."""
    track = get_current_track(_player)
    if not track:
        logger.error("No active playing track found.")
        sys.exit(1)
    result = manager.fetch_for_track(track, force_method=method, bypass_cache=no_cache)
    if not result or not result.lyrics:
        logger.error("No lyrics available to export.")
        sys.exit(1)
    # Build default output path
    if not output:
        if track.url:
            lrc_path = get_sidecar_path(track.url, ensure_exists=False)
            if lrc_path:
                output = str(lrc_path)
                logger.info(f"Exporting to sidecar path: {output}")
        # Fallback to current directory with sanitized filename
        if not output:
            filename = (
                f"{track.artist} - {track.title}.lrc"
                if track.artist and track.title
                else "lyrics.lrc"
            )
            # Sanitize filename
            filename = "".join(
                c for c in filename if c.isalpha() or c.isdigit() or c in " -_."
            ).rstrip()
            output = os.path.join(os.getcwd(), filename)
    if os.path.exists(output) and not overwrite:
        logger.error(f"File exists: {output} (use -f to overwrite)")
        sys.exit(1)
    try:
        with open(output, "w", encoding="utf-8") as f:
            f.write(result.lyrics)
        logger.info(f"Exported lyrics to {output}")
    except Exception as e:
        logger.error(f"Failed to write file: {e}")
        sys.exit(1)
# cache subcommands
@cache_app.command
def query(
    *,
    all: Annotated[
        bool,
        cyclopts.Parameter(name="--all", negative="", help="Dump all cache entries."),
    ] = False,
):
    """Show cached entries for the current track."""
    if all:
        rows = manager.cache.query_all()
        if not rows:
            print("Cache is empty.")
            return
        for row in rows:
            _print_cache_row(row)
            print()
        return
    track = get_current_track(_player)
    if not track:
        logger.error("No active playing track found.")
        sys.exit(1)
    _print_track_cache(track)


@cache_app.command
def clear(
    *,
    all: Annotated[
        bool,
        cyclopts.Parameter(name="--all", negative="", help="Clear the entire cache."),
    ] = False,
):
    """Clear cached entries for the current track."""
    if all:
        manager.cache.clear_all()
        return
    track = get_current_track(_player)
    if not track:
        logger.error("No active playing track found.")
        sys.exit(1)
    manager.cache.clear_track(track)


@cache_app.command
def prune():
    """Remove expired cache entries."""
    manager.cache.prune()


@cache_app.command
def stats():
    """Show cache statistics."""
    s = manager.cache.stats()
    print("=== Cache Statistics ===")
    print(f"Total entries : {s['total']}")
    print(f"Active : {s['active']}")
    print(f"Expired : {s['expired']}")
    if s["by_status"]:
        print("\nBy status:")
        for status, count in s["by_status"].items():
            print(f" {status}: {count}")
    if s["by_source"]:
        print("\nBy source:")
        for source, count in s["by_source"].items():
            print(f" {source}: {count}")
# helpers
def _print_track_cache(track: TrackMeta) -> None:
"""Print all cached entries for a given track."""
print(f"Track: {track.display_name()}")
if track.album:
print(f"Album: {track.album}")
if track.length:
secs = track.length / 1000.0
print(f"Duration: {int(secs // 60)}:{secs % 60:05.2f}")
print()
rows = manager.cache.query_track(track)
if not rows:
print(" (no cache entries)")
return
for row in rows:
_print_cache_row(row, indent=" ")
def _print_cache_row(row: dict, indent: str = "") -> None:
"""Pretty-print a single cache row."""
now = int(time.time())
source = row.get("source", "?")
status = row.get("status", "?")
artist = row.get("artist", "")
title = row.get("title", "")
album = row.get("album", "")
created = row.get("created_at", 0)
expires = row.get("expires_at")
lyrics = row.get("lyrics", "")
name = f"{artist} - {title}" if artist and title else row.get("key", "?")
print(f"{indent}[{source}] {name}")
if album:
print(f"{indent} Album : {album}")
print(f"{indent} Status : {status}")
if created:
age = now - created
print(f"{indent} Cached : {age // 3600}h {(age % 3600) // 60}m ago")
if expires:
remaining = expires - now
if remaining > 0:
print(
f"{indent} Expires : in {remaining // 3600}h {(remaining % 3600) // 60}m"
)
else:
print(f"{indent} Expires : EXPIRED")
else:
print(f"{indent} Expires : never")
if lyrics:
line_count = len(lyrics.splitlines())
print(f"{indent} Lyrics : {line_count} lines")
def run():
app.meta()
if __name__ == "__main__":
run()
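The `export` command above falls back to a sanitized filename in the current directory; per the commit history, the preferred default is a `.lrc` sidecar next to the audio file. A minimal sketch of that derivation, assuming a `file://` MPRIS URL (the function name and signature are illustrative, not taken from the codebase):

```python
from pathlib import Path
from typing import Optional
from urllib.parse import unquote, urlparse

def default_export_path(audio_url: Optional[str], fallback: str) -> str:
    """Derive a .lrc sidecar path next to a local audio file, else use fallback."""
    if audio_url and audio_url.startswith("file://"):
        # Decode percent-escapes (%20 etc.) from the MPRIS URL
        local = Path(unquote(urlparse(audio_url).path))
        return str(local.with_suffix(".lrc"))
    return fallback
```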
@@ -0,0 +1,88 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 10:17:56
Description: Global configuration constants and logger setup
"""
import os
import sys
from pathlib import Path
from platformdirs import user_cache_dir, user_config_dir
from dotenv import load_dotenv
from loguru import logger
from importlib.metadata import version
# Application
APP_NAME = "lrcfetch"
APP_AUTHOR = "Uyanide"
APP_VERSION = version(APP_NAME)
# Paths
CACHE_DIR = user_cache_dir(APP_NAME, APP_AUTHOR)
DB_PATH = os.path.join(CACHE_DIR, "cache.db")
# .env loading
_config_env = Path(user_config_dir(APP_NAME, APP_AUTHOR)) / ".env"
load_dotenv(_config_env) # ~/.config/lrcfetch/.env
load_dotenv() # .env in cwd (does NOT override existing vars)
# HTTP
HTTP_TIMEOUT = 10.0
# Cache TTLs (seconds)
TTL_SYNCED = None # never expires
TTL_UNSYNCED = 86400 # 1 day
TTL_NOT_FOUND = 86400 * 3 # 3 days
TTL_NETWORK_ERROR = 3600 # 1 hour
# Search
DURATION_TOLERANCE_MS = 3000 # max duration mismatch for search matching
# Spotify related
SPOTIFY_TOKEN_URL = "https://open.spotify.com/api/token"
SPOTIFY_LYRICS_URL = "https://spclient.wg.spotify.com/color-lyrics/v2/track/"
SPOTIFY_SERVER_TIME_URL = "https://open.spotify.com/api/server-time"
SPOTIFY_SECRET_URL = (
"https://raw.githubusercontent.com/xyloflake/spot-secrets-go"
"/refs/heads/main/secrets/secrets.json"
)
SPOTIFY_SP_DC = os.environ.get("SPOTIFY_SP_DC", "")
SPOTIFY_TOKEN_CACHE_FILE = os.path.join(CACHE_DIR, "spotify_token.json")
SPOTIFY_APP_VERSION = "1.2.87.284.g3ff41c13"
# Netease api
NETEASE_SEARCH_URL = "https://music.163.com/api/cloudsearch/pc"
NETEASE_LYRIC_URL = "https://interface3.music.163.com/api/song/lyric"
# LRCLIB api
LRCLIB_API_URL = "https://lrclib.net/api/get"
LRCLIB_SEARCH_URL = "https://lrclib.net/api/search"
# QQ Music API (self-hosted proxy)
QQ_MUSIC_API_URL = os.environ.get("QQ_MUSIC_API_URL", "").rstrip("/")
# Player preference (used when multiple MPRIS players are active)
PREFERRED_PLAYER = os.environ.get("LRCFETCH_PLAYER", "spotify")
# User-Agents
UA_BROWSER = "Mozilla/5.0 (X11; Linux x86_64; rv:148.0) Gecko/20100101 Firefox/148.0"
UA_LRCFETCH = f"LRCFetch {APP_VERSION} (https://github.com/Uyanide/lrcfetch)"
os.makedirs(CACHE_DIR, exist_ok=True)
# Logger
_LOG_FORMAT = (
"<green>{time:YYYY-MM-DD HH:mm:ss}</green> | "
"<level>{level: <8}</level> | "
"<cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - "
"<level>{message}</level>"
)
logger.remove()
logger.add(sys.stderr, format=_LOG_FORMAT, level="INFO")
def enable_debug() -> None:
"""Switch logger to DEBUG level."""
logger.remove()
logger.add(sys.stderr, format=_LOG_FORMAT, level="DEBUG")
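`APP_VERSION` is resolved from installed package metadata, which raises `PackageNotFoundError` when running from a source tree that has not been installed. A hedged sketch of a more tolerant lookup (the `0.0.0` fallback is an addition for illustration, not present in the module above):

```python
from importlib.metadata import PackageNotFoundError, version

try:
    app_version = version("lrcfetch")  # installed distribution metadata
except PackageNotFoundError:
    app_version = "0.0.0"  # e.g. running from a checkout without `pip install -e .`

user_agent = f"LRCFetch {app_version} (https://github.com/Uyanide/lrcfetch)"
```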
@@ -0,0 +1,198 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 11:09:53
Description: Core orchestrator — coordinates fetchers with cache-aware fallback
Fetch pipeline:
1. Check cache for each source in the fallback sequence
2. For sources without a valid cache hit, call the fetcher
3. Cache every result (success, not-found, or error) per source
4. Return the best result (synced > unsynced > None)
"""
from typing import Optional
from loguru import logger
from .fetchers import FetcherMethodType, create_fetchers
from .fetchers.base import BaseFetcher
from .cache import CacheEngine
from .lrc import LRC_LINE_RE, normalize_tags
from .config import TTL_SYNCED, TTL_UNSYNCED, TTL_NOT_FOUND, TTL_NETWORK_ERROR
from .models import TrackMeta, LyricResult, CacheStatus
from .enrichers import enrich_track
def _normalize_unsynced(lyrics: str) -> str:
"""Normalize unsynced lyrics so every line has a [00:00.00] tag.
- Lines that already have time tags: replace with [00:00.00]
- Lines without time tags: prepend [00:00.00]
- Blank lines are kept as-is
"""
out: list[str] = []
for line in lyrics.splitlines():
stripped = line.strip()
if not stripped:
out.append("")
continue
cleaned = LRC_LINE_RE.sub("", stripped)
while LRC_LINE_RE.match(cleaned):
cleaned = LRC_LINE_RE.sub("", cleaned)
out.append(f"[00:00.00]{cleaned}")
return "\n".join(out)
# Maps CacheStatus to the default TTL used when storing results
_STATUS_TTL: dict[CacheStatus, Optional[int]] = {
CacheStatus.SUCCESS_SYNCED: TTL_SYNCED,
CacheStatus.SUCCESS_UNSYNCED: TTL_UNSYNCED,
CacheStatus.NOT_FOUND: TTL_NOT_FOUND,
CacheStatus.NETWORK_ERROR: TTL_NETWORK_ERROR,
}
class LrcManager:
"""Main entry point for fetching lyrics with caching."""
def __init__(self) -> None:
self.cache = CacheEngine()
self.fetchers = create_fetchers(self.cache)
def _build_sequence(
self, track: TrackMeta, force_method: Optional[FetcherMethodType] = None
) -> list[BaseFetcher]:
"""Determine the ordered list of fetchers to try."""
if force_method:
if force_method not in self.fetchers:
logger.error(f"Unknown method: {force_method}")
return []
return [self.fetchers[force_method]]
sequence: list[BaseFetcher] = []
if track.is_local:
sequence.append(self.fetchers["local"])
if track.title:
sequence.append(self.fetchers["cache-search"])
if track.trackid:
sequence.append(self.fetchers["spotify"])
if track.is_complete:
sequence.append(self.fetchers["lrclib"])
if track.title:
sequence.append(self.fetchers["lrclib-search"])
sequence.append(self.fetchers["netease"])
sequence.append(self.fetchers["qqmusic"])
logger.debug(f"Fallback sequence: {[f.source_name for f in sequence]}")
return sequence
def fetch_for_track(
self,
track: TrackMeta,
force_method: Optional[FetcherMethodType] = None,
bypass_cache: bool = False,
) -> Optional[LyricResult]:
"""Fetch lyrics for *track* using the fallback pipeline.
Each source is checked against the cache independently:
- Cache hit with synced lyrics → return immediately
- Cache hit with negative status (NOT_FOUND / NETWORK_ERROR) → skip source
- Cache miss or unsynced → call fetcher, then cache the result
After all sources are tried, returns the best result found
(synced > unsynced > None).
"""
track = enrich_track(track)
logger.info(f"Fetching lyrics for: {track.display_name()}")
sequence = self._build_sequence(track, force_method)
if not sequence:
return None
# Best result seen so far (synced wins over unsynced)
best_result: Optional[LyricResult] = None
for fetcher in sequence:
source = fetcher.source_name
# Cache check (skip for fetchers that handle their own caching)
if not bypass_cache and not fetcher.self_cached:
cached = self.cache.get(track, source)
if cached:
if cached.status == CacheStatus.SUCCESS_SYNCED:
logger.info(f"[{source}] cache hit: synced lyrics")
return cached
elif cached.status == CacheStatus.SUCCESS_UNSYNCED:
logger.debug(
f"[{source}] cache hit: unsynced lyrics (continuing)"
)
if best_result is None:
best_result = cached
continue # Try next source for synced
elif cached.status in (
CacheStatus.NOT_FOUND,
CacheStatus.NETWORK_ERROR,
):
logger.debug(
f"[{source}] cache hit: {cached.status.value}, skipping"
)
continue
elif not fetcher.self_cached:
logger.debug(f"[{source}] cache bypassed")
# Fetch
logger.debug(f"[{source}] calling fetcher...")
result = fetcher.fetch(track, bypass_cache=bypass_cache)
if not result:
logger.debug(f"[{source}] returned None (no result)")
continue
# Normalize non-standard time tags [mm:ss:cc] → [mm:ss.cc]
if result.lyrics:
result = LyricResult(
status=result.status,
lyrics=normalize_tags(result.lyrics),
source=result.source,
ttl=result.ttl,
)
# Cache the normalized result (skip for self-cached fetchers)
if not fetcher.self_cached:
ttl = result.ttl or _STATUS_TTL.get(result.status, TTL_NOT_FOUND)
self.cache.set(track, source, result, ttl_seconds=ttl)
# Evaluate result
if result.status == CacheStatus.SUCCESS_SYNCED:
logger.info(f"[{source}] got synced lyrics")
return result
if result.status == CacheStatus.SUCCESS_UNSYNCED:
logger.debug(f"[{source}] got unsynced lyrics (continuing)")
if best_result is None:
best_result = result
# NOT_FOUND / NETWORK_ERROR: already cached, try next
# Return best available
if best_result:
# Normalize unsynced lyrics: set all timestamps to [00:00.00]
if (
best_result.status == CacheStatus.SUCCESS_UNSYNCED
and best_result.lyrics
):
best_result = LyricResult(
status=best_result.status,
lyrics=_normalize_unsynced(best_result.lyrics),
source=best_result.source,
ttl=best_result.ttl,
)
logger.info(
f"Returning unsynced lyrics from {best_result.source} "
f"(no synced source found)"
)
else:
logger.info(f"No lyrics found for {track.display_name()}")
return best_result
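`normalize_tags` is imported from `.lrc` and is not shown in this diff; per the commit message about `[00:17:06]`-style tags, it likely rewrites a colon-separated fraction into the standard dot form. A plausible sketch (the regex is an assumption, not the actual implementation):

```python
import re

# Non-standard LRC tags such as [00:17:06] use ':' instead of '.' before the fraction.
_COLON_FRACTION_RE = re.compile(r"\[(\d{1,3}):(\d{2}):(\d{1,3})\]")

def normalize_tags(lyrics: str) -> str:
    """Rewrite [mm:ss:cc] time tags to the standard [mm:ss.cc] form."""
    return _COLON_FRACTION_RE.sub(r"[\1:\2.\3]", lyrics)
```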
@@ -0,0 +1,39 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-31 06:09:11
Description: Metadata enrichment pipeline
"""
from loguru import logger
from .base import BaseEnricher
from .audio_tag import AudioTagEnricher
from .file_name import FileNameEnricher
from ..models import TrackMeta
# Enrichers run in order; earlier ones have higher priority.
_ENRICHERS: list[BaseEnricher] = [
AudioTagEnricher(),
FileNameEnricher(),
]
def enrich_track(track: TrackMeta) -> TrackMeta:
"""Run all enrichers and return a track with missing fields filled in.
Each enricher sees the cumulative state (earlier enrichers' results
are already applied). A field is only set if it is currently None.
"""
for enricher in _ENRICHERS:
try:
result = enricher.enrich(track)
except Exception as e:
logger.warning(f"Enricher {enricher.name} failed: {e}")
continue
if not result:
continue
# Only apply fields that are still None
updates = {k: v for k, v in result.items() if getattr(track, k, None) is None}
if updates:
track = track.model_copy(update=updates)
return track
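`enrich_track` relies on pydantic's `model_copy(update=...)`; the fill-only-missing-fields merge can be shown in isolation with a frozen dataclass standing in for `TrackMeta` (`Track` and `apply_enrichment` are illustrative names, not from the codebase):

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class Track:
    title: Optional[str] = None
    artist: Optional[str] = None
    album: Optional[str] = None

def apply_enrichment(track: Track, result: dict) -> Track:
    # Earlier values win: only fields that are still None get filled.
    updates = {k: v for k, v in result.items() if getattr(track, k, None) is None}
    return replace(track, **updates) if updates else track
```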
@@ -1,18 +1,16 @@
 """
 Author: Uyanide pywang0608@foxmail.com
 Date: 2026-03-31 06:11:27
-Description: Enricher that reads metadata from audio file tags.
+Description: Enricher that reads metadata from audio file tags (mutagen)
 """
 from __future__ import annotations
 from typing import Optional
 from loguru import logger
 from mutagen._file import File, FileType
 from .base import BaseEnricher
 from ..models import TrackMeta
-from ..utils import get_audio_path
+from ..lrc import get_audio_path
 class AudioTagEnricher(BaseEnricher):
@@ -22,11 +20,7 @@ class AudioTagEnricher(BaseEnricher):
     def name(self) -> str:
         return "audio-tag"
-    @property
-    def provides(self) -> set[str]:
-        return {"title", "artist", "album", "length"}
-    async def enrich(self, track: TrackMeta) -> Optional[dict]:
+    def enrich(self, track: TrackMeta) -> Optional[dict]:
         if not track.is_local or not track.url:
             return None
@@ -1,11 +1,9 @@
 """
 Author: Uyanide pywang0608@foxmail.com
 Date: 2026-03-31 06:08:16
-Description: Base class for metadata enrichers.
+Description: Base class for metadata enrichers
 """
 from __future__ import annotations
 from abc import ABC, abstractmethod
 from typing import Optional
@@ -24,12 +22,8 @@ class BaseEnricher(ABC):
     @abstractmethod
     def name(self) -> str: ...
-    @property
-    @abstractmethod
-    def provides(self) -> set[str]: ...
     @abstractmethod
-    async def enrich(self, track: TrackMeta) -> Optional[dict]:
+    def enrich(self, track: TrackMeta) -> Optional[dict]:
         """Return a dict of {field_name: value} for fields this enricher can fill.
         Return None or an empty dict if nothing can be contributed.
@@ -1,18 +1,16 @@
 """
 Author: Uyanide pywang0608@foxmail.com
 Date: 2026-03-31 06:08:44
-Description: Enricher that parses metadata from the audio file path.
+Description: Enricher that parses metadata from the audio file path
 """
 from __future__ import annotations
 import re
 from typing import Optional
 from loguru import logger
 from .base import BaseEnricher
 from ..models import TrackMeta
-from ..utils import get_audio_path
+from ..lrc import get_audio_path
 # Common track-number prefixes: "01 - ", "01. ", "1 - ", etc.
@@ -35,11 +33,7 @@ class FileNameEnricher(BaseEnricher):
     def name(self) -> str:
         return "file-name"
-    @property
-    def provides(self) -> set[str]:
-        return {"artist", "title", "album"}
-    async def enrich(self, track: TrackMeta) -> Optional[dict]:
+    def enrich(self, track: TrackMeta) -> Optional[dict]:
         if not track.is_local or not track.url:
             return None
@@ -66,35 +60,22 @@ class FileNameEnricher(BaseEnricher):
                 # Left was only a track number → right is the title
                 if not track.title:
                     updates["title"] = right
-        # Try "Artist-Title" split (no spaces)
-        elif "-" in stem:
-            left, right = stem.split("-", 1)
-            left = _TRACK_NUM_RE.sub("", left).strip()
-            right = right.strip()
-            if left and right:
-                if not track.artist:
-                    updates["artist"] = left
-                if not track.title:
-                    updates["title"] = right
-            elif right:
-                if not track.title:
-                    updates["title"] = right
-        # No separator: strip track number, remainder is title
         else:
+            # No separator: strip track number, remainder is title
             title_guess = _TRACK_NUM_RE.sub("", stem).strip()
             if title_guess and not track.title:
                 updates["title"] = title_guess
-        # Use parent directory as album fallback
-        if not track.album and "album" not in updates:
+        # Use parent directory as artist fallback
+        # Typical layout: /Music/Artist/Album/01 - Track.flac
+        if not track.artist and "artist" not in updates:
             parents = audio_path.parents
-            if len(parents) >= 1:
+            if len(parents) >= 2:
                 album_dir = parents[0].name
-                if album_dir and album_dir not in (".", "/"):
-                    if not track.album:
+                artist_dir = parents[1].name
+                if artist_dir and artist_dir not in (".", "/"):
+                    updates["artist"] = artist_dir
+                if not track.album and album_dir and album_dir != artist_dir:
+                    updates["album"] = album_dir
         if updates:
@@ -0,0 +1,41 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 02:33:26
Description: Fetcher pipeline — registry and types
"""
from typing import Literal
from .base import BaseFetcher
from .local import LocalFetcher
from .cache_search import CacheSearchFetcher
from .spotify import SpotifyFetcher
from .lrclib import LrclibFetcher
from .lrclib_search import LrclibSearchFetcher
from .netease import NeteaseFetcher
from .qqmusic import QQMusicFetcher
from ..cache import CacheEngine
FetcherMethodType = Literal[
"local",
"cache-search",
"spotify",
"lrclib",
"lrclib-search",
"netease",
"qqmusic",
]
def create_fetchers(cache: CacheEngine) -> dict[FetcherMethodType, BaseFetcher]:
"""Instantiate all fetchers. Returns a dict keyed by source name."""
fetchers: dict[FetcherMethodType, BaseFetcher] = {
"local": LocalFetcher(),
"cache-search": CacheSearchFetcher(cache),
"spotify": SpotifyFetcher(),
"lrclib": LrclibFetcher(),
"lrclib-search": LrclibSearchFetcher(),
"netease": NeteaseFetcher(),
"qqmusic": QQMusicFetcher(),
}
return fetchers
@@ -0,0 +1,30 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 02:33:26
Description: Base fetcher class and common interfaces
"""
from abc import ABC, abstractmethod
from typing import Optional
from ..models import TrackMeta, LyricResult
class BaseFetcher(ABC):
@property
@abstractmethod
def source_name(self) -> str:
"""Name of the fetcher source."""
pass
@property
def self_cached(self) -> bool:
"""True if this fetcher manages its own cache (skip per-source cache check)."""
return False
@abstractmethod
def fetch(
self, track: TrackMeta, bypass_cache: bool = False
) -> Optional[LyricResult]:
"""Fetch lyrics for the given track. Returns None if unable to fetch."""
pass
@@ -0,0 +1,82 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-28 05:57:46
Description: Cache-search fetcher — cross-album fuzzy lookup in the local cache
Searches existing cache entries by artist + title with fuzzy normalization,
ignoring album and source. Useful when the same track appears on different
albums or is played from different players.
"""
from typing import Optional
from loguru import logger
from .base import BaseFetcher
from ..models import TrackMeta, LyricResult, CacheStatus
from ..cache import CacheEngine
class CacheSearchFetcher(BaseFetcher):
def __init__(self, cache: CacheEngine) -> None:
self._cache = cache
@property
def source_name(self) -> str:
return "cache-search"
@property
def self_cached(self) -> bool:
return True
def fetch(
self, track: TrackMeta, bypass_cache: bool = False
) -> Optional[LyricResult]:
if bypass_cache:
logger.debug("Cache-search: bypassed by caller")
return None
if not track.title:
logger.debug("Cache-search: skipped — no title")
return None
# Fast path: exact metadata match (artist+title+album), single SQL query
exact = self._cache.find_best_positive(track)
if exact:
logger.info(f"Cache-search: exact hit ({exact.status.value})")
return exact
# Slow path: fuzzy cross-album search
matches = self._cache.search_by_meta(
artist=track.artist,
title=track.title,
length=track.length,
)
if not matches:
logger.debug(f"Cache-search: no match for {track.display_name()}")
return None
# Pick best: prefer synced, then first available
best = None
for m in matches:
if m.get("status") == CacheStatus.SUCCESS_SYNCED.value:
best = m
break
if best is None:
best = m
if not best or not best.get("lyrics"):
return None
status = CacheStatus(best["status"])
logger.info(
f"Cache-search: fuzzy hit from [{best.get('source')}] "
f"album={best.get('album')!r} ({status.value})"
)
return LyricResult(
status=status,
lyrics=best["lyrics"],
source=self.source_name,
)
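The prefer-synced-else-first selection in the loop above can be isolated as follows (the `synced_value` parameter stands in for `CacheStatus.SUCCESS_SYNCED.value`, whose exact string is not shown in this diff):

```python
from typing import Optional

def pick_best(rows: list[dict], synced_value: str = "synced") -> Optional[dict]:
    """Return the first synced row, else the first row, else None."""
    best: Optional[dict] = None
    for row in rows:
        if row.get("status") == synced_value:
            return row  # synced wins immediately
        if best is None:
            best = row  # remember the first row as fallback
    return best
```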
@@ -0,0 +1,93 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-26 02:08:41
Description: Local fetcher — reads lyrics from .lrc sidecar files or embedded audio metadata
Priority:
1. Same-directory .lrc file (e.g. /path/to/track.lrc)
2. Embedded lyrics in audio metadata (FLAC, MP3 USLT/SYLT tags)
"""
from typing import Optional
from loguru import logger
from mutagen._file import File
from mutagen.flac import FLAC
from .base import BaseFetcher
from ..models import TrackMeta, LyricResult
from ..lrc import detect_sync_status, get_audio_path, get_sidecar_path
class LocalFetcher(BaseFetcher):
@property
def source_name(self) -> str:
return "local"
def fetch(
self, track: TrackMeta, bypass_cache: bool = False
) -> Optional[LyricResult]:
"""Attempt to read lyrics from local filesystem."""
if not track.is_local or not track.url:
return None
audio_path = get_audio_path(track.url, ensure_exists=False)
if not audio_path:
logger.debug(f"Local: audio URL is not a valid file path: {track.url}")
return None
lrc_path = get_sidecar_path(
track.url, ensure_audio_exists=False, ensure_exists=True
)
if lrc_path:
try:
with open(lrc_path, "r", encoding="utf-8") as f:
content = f.read().strip()
if content:
status = detect_sync_status(content)
logger.info(f"Local: found .lrc sidecar ({status.value})")
return LyricResult(
status=status, lyrics=content, source=self.source_name
)
except Exception as e:
logger.error(f"Local: error reading {lrc_path}: {e}")
else:
logger.debug(f"Local: no .lrc sidecar found for {audio_path}")
# Embedded metadata
if not audio_path.exists():
logger.debug(f"Local: audio file does not exist: {audio_path}")
return None
try:
audio = File(audio_path)
if audio is not None:
lyrics = None
if isinstance(audio, FLAC):
# FLAC stores lyrics in vorbis comment tags
lyrics = (
audio.get("lyrics") or audio.get("unsynclyrics") or [None]
)[0]
elif hasattr(audio, "tags") and audio.tags:
# MP3 / other: look for USLT or SYLT ID3 frames
for key in audio.tags.keys():
if key.startswith("USLT") or key.startswith("SYLT"):
lyrics = str(audio.tags[key])
break
if lyrics:
status = detect_sync_status(lyrics)
logger.info(f"Local: found embedded lyrics ({status.value})")
return LyricResult(
status=status,
lyrics=lyrics.strip(),
source=f"{self.source_name} (embedded)",
)
else:
logger.debug("Local: no embedded lyrics found")
except Exception as e:
logger.error(f"Local: error reading metadata for {audio_path}: {e}")
logger.debug(f"Local: no lyrics found for {audio_path}")
return None
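`detect_sync_status` and `is_synced` live in `.lrc` and are not part of this diff; a plausible sketch of the underlying check (the regex is an assumption about what counts as a time tag):

```python
import re

# Lyrics are considered synced if they carry an LRC time tag like [mm:ss.xx].
_TIME_TAG_RE = re.compile(r"\[\d{1,3}:\d{1,2}(?:[.:]\d{1,3})?\]")

def is_synced(lyrics: str) -> bool:
    return bool(_TIME_TAG_RE.search(lyrics))
```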
@@ -0,0 +1,105 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 05:23:38
Description: LRCLIB fetcher — queries lrclib.net for synced/plain lyrics
Requires complete track metadata (artist, title, album, duration).
"""
from typing import Optional
import httpx
from loguru import logger
from urllib.parse import urlencode
from .base import BaseFetcher
from ..models import TrackMeta, LyricResult, CacheStatus
from ..config import (
HTTP_TIMEOUT,
TTL_UNSYNCED,
TTL_NOT_FOUND,
TTL_NETWORK_ERROR,
LRCLIB_API_URL,
UA_LRCFETCH,
)
class LrclibFetcher(BaseFetcher):
@property
def source_name(self) -> str:
return "lrclib"
def fetch(
self, track: TrackMeta, bypass_cache: bool = False
) -> Optional[LyricResult]:
"""Fetch lyrics from LRCLIB. Requires complete metadata."""
if not track.is_complete:
logger.debug("LRCLIB: skipped — incomplete metadata")
return None
params = {
"track_name": track.title,
"artist_name": track.artist,
"album_name": track.album,
"duration": track.length / 1000.0 if track.length else 0,
}
url = f"{LRCLIB_API_URL}?{urlencode(params)}"
logger.info(f"LRCLIB: fetching lyrics for {track.display_name()}")
try:
with httpx.Client(timeout=HTTP_TIMEOUT) as client:
resp = client.get(url, headers={"User-Agent": UA_LRCFETCH})
if resp.status_code == 404:
logger.debug(f"LRCLIB: not found for {track.display_name()}")
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
if resp.status_code != 200:
logger.error(f"LRCLIB: API returned {resp.status_code}")
return LyricResult(
status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
)
data = resp.json()
# Validate response
if not isinstance(data, dict):
logger.error(f"LRCLIB: unexpected response type: {type(data).__name__}")
return LyricResult(
status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
)
synced = data.get("syncedLyrics")
unsynced = data.get("plainLyrics")
if isinstance(synced, str) and synced.strip():
logger.info(
f"LRCLIB: got synced lyrics ({len(synced.splitlines())} lines)"
)
return LyricResult(
status=CacheStatus.SUCCESS_SYNCED,
lyrics=synced.strip(),
source=self.source_name,
)
elif isinstance(unsynced, str) and unsynced.strip():
logger.info(
f"LRCLIB: got unsynced lyrics ({len(unsynced.splitlines())} lines)"
)
return LyricResult(
status=CacheStatus.SUCCESS_UNSYNCED,
lyrics=unsynced.strip(),
source=self.source_name,
ttl=TTL_UNSYNCED,
)
else:
logger.debug(f"LRCLIB: empty response for {track.display_name()}")
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
except httpx.HTTPError as e:
logger.error(f"LRCLIB: HTTP error: {e}")
return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)
except Exception as e:
logger.error(f"LRCLIB: unexpected error: {e}")
return None
@@ -0,0 +1,162 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 05:30:50
Description: LRCLIB search fetcher — fuzzy search via lrclib.net /api/search
Used when metadata is incomplete (no album or duration) but title is available.
Selects the best match by duration when track length is known.
"""
import httpx
from typing import Optional
from loguru import logger
from urllib.parse import urlencode
from .base import BaseFetcher
from ..models import TrackMeta, LyricResult, CacheStatus
from ..config import (
HTTP_TIMEOUT,
TTL_UNSYNCED,
TTL_NOT_FOUND,
TTL_NETWORK_ERROR,
DURATION_TOLERANCE_MS,
LRCLIB_SEARCH_URL,
UA_LRCFETCH,
)
class LrclibSearchFetcher(BaseFetcher):
@property
def source_name(self) -> str:
return "lrclib-search"
def fetch(
self, track: TrackMeta, bypass_cache: bool = False
) -> Optional[LyricResult]:
"""Search LRCLIB for lyrics. Requires at least a title."""
if not track.title:
logger.debug("LRCLIB-search: skipped — no title")
return None
params: dict[str, str] = {"track_name": track.title}
if track.artist:
params["artist_name"] = track.artist
if track.album:
params["album_name"] = track.album
url = f"{LRCLIB_SEARCH_URL}?{urlencode(params)}"
logger.info(f"LRCLIB-search: searching for {track.display_name()}")
try:
with httpx.Client(timeout=HTTP_TIMEOUT) as client:
resp = client.get(url, headers={"User-Agent": UA_LRCFETCH})
if resp.status_code != 200:
logger.error(f"LRCLIB-search: API returned {resp.status_code}")
return LyricResult(
status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
)
data = resp.json()
if not isinstance(data, list) or len(data) == 0:
logger.debug(f"LRCLIB-search: no results for {track.display_name()}")
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
logger.debug(f"LRCLIB-search: got {len(data)} candidates")
# Select best match by duration
best = self._select_best(data, track)
if best is None:
logger.debug("LRCLIB-search: no valid candidate found")
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
# Extract lyrics
synced = best.get("syncedLyrics")
unsynced = best.get("plainLyrics")
if isinstance(synced, str) and synced.strip():
logger.info(
f"LRCLIB-search: got synced lyrics ({len(synced.splitlines())} lines)"
)
return LyricResult(
status=CacheStatus.SUCCESS_SYNCED,
lyrics=synced.strip(),
source=self.source_name,
)
elif isinstance(unsynced, str) and unsynced.strip():
logger.info(
f"LRCLIB-search: got unsynced lyrics ({len(unsynced.splitlines())} lines)"
)
return LyricResult(
status=CacheStatus.SUCCESS_UNSYNCED,
lyrics=unsynced.strip(),
source=self.source_name,
ttl=TTL_UNSYNCED,
)
else:
logger.debug("LRCLIB-search: best candidate has empty lyrics")
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
except httpx.HTTPError as e:
logger.error(f"LRCLIB-search: HTTP error: {e}")
return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)
except Exception as e:
logger.error(f"LRCLIB-search: unexpected error: {e}")
return None
@staticmethod
def _select_best(candidates: list[dict], track: TrackMeta) -> Optional[dict]:
"""Pick the best candidate, preferring synced lyrics and closest duration."""
if track.length is not None:
track_s = track.length / 1000.0
best: Optional[dict] = None
best_diff = float("inf")
for item in candidates:
if not isinstance(item, dict):
continue
duration = item.get("duration")
if not isinstance(duration, (int, float)):
continue
diff = abs(duration - track_s) * 1000 # compare in ms
if diff > DURATION_TOLERANCE_MS:
continue
# Prefer synced over unsynced at similar duration
has_synced = (
isinstance(item.get("syncedLyrics"), str)
and item["syncedLyrics"].strip()
)
best_synced = (
best is not None
and isinstance(best.get("syncedLyrics"), str)
and best["syncedLyrics"].strip()
)
if diff < best_diff or (
diff == best_diff and has_synced and not best_synced
):
best_diff = diff
best = item
if best is not None:
logger.debug(
f"LRCLIB-search: selected id={best.get('id')} (diff={best_diff:.0f}ms)"
)
return best
logger.debug(
f"LRCLIB-search: no candidate within {DURATION_TOLERANCE_MS}ms"
)
return None
# No duration — pick first with synced lyrics, or just first
for item in candidates:
if (
isinstance(item, dict)
and isinstance(item.get("syncedLyrics"), str)
and item["syncedLyrics"].strip()
):
return item
return candidates[0] if isinstance(candidates[0], dict) else None
@@ -0,0 +1,212 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 11:04:51
Description: Netease Cloud Music fetcher
Uses the public cloudsearch API for searching and the song/lyric API for
retrieving lyrics. No authentication required.
Search results are filtered by duration when the track has a known length
to avoid returning lyrics for the wrong version of a song.
"""
from typing import Optional
import httpx
from loguru import logger
from .base import BaseFetcher
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import is_synced
from ..config import (
HTTP_TIMEOUT,
TTL_NOT_FOUND,
TTL_NETWORK_ERROR,
DURATION_TOLERANCE_MS,
NETEASE_SEARCH_URL,
NETEASE_LYRIC_URL,
UA_BROWSER,
)
_HEADERS = {
"User-Agent": UA_BROWSER,
"Referer": "https://music.163.com/",
}
class NeteaseFetcher(BaseFetcher):
@property
def source_name(self) -> str:
return "netease"
def _search(self, track: TrackMeta, limit: int = 10) -> Optional[int]:
"""Search Netease and return the best-matching song ID.
When ``track.length`` is available, candidates are ranked by duration
difference and only accepted if within ``DURATION_TOLERANCE_MS``.
"""
query = f"{track.artist or ''} {track.title or ''}".strip()
if not query:
return None
logger.debug(f"Netease: searching for '{query}' (limit={limit})")
try:
with httpx.Client(timeout=HTTP_TIMEOUT) as client:
resp = client.post(
NETEASE_SEARCH_URL,
headers=_HEADERS,
data={"s": query, "type": "1", "limit": str(limit), "offset": "0"},
)
resp.raise_for_status()
result = resp.json()
# Validate response
if not isinstance(result, dict):
logger.error(
f"Netease: search returned non-dict: {type(result).__name__}"
)
return None
result_body = result.get("result")
if not isinstance(result_body, dict):
logger.debug("Netease: search 'result' field missing or invalid")
return None
songs = result_body.get("songs")
if not isinstance(songs, list) or len(songs) == 0:
logger.debug("Netease: search returned 0 results")
return None
logger.debug(f"Netease: search returned {len(songs)} candidates")
# Duration-based best-match selection
if track.length is not None:
track_ms = track.length
best_id: Optional[int] = None
best_diff = float("inf")
for song in songs:
if not isinstance(song, dict):
continue
sid = song.get("id")
name = song.get("name", "?")
duration = song.get("dt") # milliseconds
if not isinstance(duration, int):
logger.debug(
f" candidate {sid} '{name}': no duration, skipped"
)
continue
diff = abs(duration - track_ms)
logger.debug(
f" candidate {sid} '{name}': "
f"duration={duration}ms, diff={diff}ms"
)
if diff < best_diff:
best_diff = diff
best_id = sid
if best_id is not None and best_diff <= DURATION_TOLERANCE_MS:
logger.debug(f"Netease: selected id={best_id} (diff={best_diff}ms)")
return best_id
logger.debug(
f"Netease: no candidate within {DURATION_TOLERANCE_MS}ms "
f"(best diff={best_diff}ms)"
)
return None
# No duration info — take the first result
first = songs[0]
if not isinstance(first, dict) or "id" not in first:
logger.error("Netease: first search result has no 'id'")
return None
logger.debug(
f"Netease: no duration available, using first result "
f"id={first['id']} '{first.get('name', '?')}'"
)
return first["id"]
except Exception as e:
logger.error(f"Netease: search failed: {e}")
return None
def _get_lyric(self, song_id: int) -> Optional[LyricResult]:
"""Fetch lyrics for a given Netease song ID."""
logger.debug(f"Netease: fetching lyrics for song_id={song_id}")
try:
with httpx.Client(timeout=HTTP_TIMEOUT) as client:
resp = client.post(
NETEASE_LYRIC_URL,
headers=_HEADERS,
data={
"id": str(song_id),
"cp": "false",
"tv": "0",
"lv": "0",
"rv": "0",
"kv": "0",
"yv": "0",
"ytv": "0",
"yrv": "0",
},
)
resp.raise_for_status()
data = resp.json()
# Validate response
if not isinstance(data, dict):
logger.error(
f"Netease: lyric response is not dict: {type(data).__name__}"
)
return LyricResult(
status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
)
lrc_obj = data.get("lrc")
if not isinstance(lrc_obj, dict):
logger.debug(
f"Netease: no 'lrc' object in response for song_id={song_id}"
)
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
lrc: str = lrc_obj.get("lyric", "")
if not isinstance(lrc, str) or not lrc.strip():
logger.debug(f"Netease: empty lyrics for song_id={song_id}")
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
# Determine sync status
synced = is_synced(lrc)
status = (
CacheStatus.SUCCESS_SYNCED if synced else CacheStatus.SUCCESS_UNSYNCED
)
logger.info(
f"Netease: got {status.value} lyrics for song_id={song_id} "
f"({len(lrc.splitlines())} lines)"
)
return LyricResult(
status=status, lyrics=lrc.strip(), source=self.source_name
)
except Exception as e:
logger.error(f"Netease: lyric fetch failed for song_id={song_id}: {e}")
return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)
def fetch(
self, track: TrackMeta, bypass_cache: bool = False
) -> Optional[LyricResult]:
"""Search for the track and fetch its lyrics."""
query = f"{track.artist or ''} {track.title or ''}".strip()
if not query:
logger.debug("Netease: skipped — insufficient metadata")
return None
logger.info(f"Netease: fetching lyrics for {track.display_name()}")
song_id = self._search(track)
if not song_id:
logger.debug(f"Netease: no match found for {track.display_name()}")
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
return self._get_lyric(song_id)
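The duration-based selection above (the same pattern the QQ Music fetcher uses) can be sketched as a standalone helper. Names and the tolerance value here are illustrative, not the project's actual config:

```python
from typing import Optional, Union

DURATION_TOLERANCE_MS = 3000  # illustrative; the real value comes from config


def pick_best_by_duration(
    candidates: list, track_ms: Optional[int], id_key: str = "id"
) -> Optional[Union[int, str]]:
    """Return the id of the candidate whose 'dt' duration is closest to
    track_ms, provided the gap is within tolerance. With no known track
    duration, fall back to the first candidate's id."""
    if track_ms is None:
        first = candidates[0] if candidates else None
        return first.get(id_key) if isinstance(first, dict) else None
    best_id, best_diff = None, float("inf")
    for song in candidates:
        if not isinstance(song, dict) or not isinstance(song.get("dt"), int):
            continue  # skip malformed entries and entries without a duration
        diff = abs(song["dt"] - track_ms)
        if diff < best_diff:
            best_diff, best_id = diff, song.get(id_key)
    return best_id if best_diff <= DURATION_TOLERANCE_MS else None
```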
@@ -0,0 +1,177 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-31 01:54:02
Description: QQ Music fetcher via self-hosted API proxy
"""
"""
Requires a running qq-music-api instance.
The base URL is read from the QQ_MUSIC_API_URL environment variable.
Search → pick best match by duration → fetch LRC lyrics.
"""
from typing import Optional
import httpx
from loguru import logger
from .base import BaseFetcher
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import is_synced
from ..config import (
HTTP_TIMEOUT,
TTL_NOT_FOUND,
TTL_NETWORK_ERROR,
DURATION_TOLERANCE_MS,
QQ_MUSIC_API_URL,
)
class QQMusicFetcher(BaseFetcher):
@property
def source_name(self) -> str:
return "qqmusic"
def _search(self, track: TrackMeta, limit: int = 10) -> Optional[str]:
"""Search QQ Music and return the best-matching song MID."""
query = f"{track.artist or ''} {track.title or ''}".strip()
if not query:
return None
logger.debug(f"QQMusic: searching for '{query}' (limit={limit})")
try:
with httpx.Client(timeout=HTTP_TIMEOUT) as client:
resp = client.get(
f"{QQ_MUSIC_API_URL}/api/search",
params={"keyword": query, "type": "song", "num": limit},
)
resp.raise_for_status()
data = resp.json()
if data.get("code") != 0:
logger.error(f"QQMusic: search API error: {data}")
return None
songs = data.get("data", {}).get("list", [])
if not songs:
logger.debug("QQMusic: search returned 0 results")
return None
logger.debug(f"QQMusic: search returned {len(songs)} candidates")
# Duration-based best-match selection
if track.length is not None:
track_ms = track.length
best_mid: Optional[str] = None
best_diff = float("inf")
for song in songs:
if not isinstance(song, dict):
continue
mid = song.get("mid")
name = song.get("name", "?")
# interval is in seconds
interval = song.get("interval")
if not isinstance(interval, int):
logger.debug(
f" candidate {mid} '{name}': no duration, skipped"
)
continue
duration_ms = interval * 1000
diff = abs(duration_ms - track_ms)
logger.debug(
f" candidate {mid} '{name}': "
f"duration={duration_ms}ms, diff={diff}ms"
)
if diff < best_diff:
best_diff = diff
best_mid = mid
if best_mid is not None and best_diff <= DURATION_TOLERANCE_MS:
logger.debug(
f"QQMusic: selected mid={best_mid} (diff={best_diff}ms)"
)
return best_mid
logger.debug(
f"QQMusic: no candidate within {DURATION_TOLERANCE_MS}ms "
f"(best diff={best_diff}ms)"
)
return None
# No duration info — take the first result
first = songs[0]
if not isinstance(first, dict) or "mid" not in first:
logger.error("QQMusic: first search result has no 'mid'")
return None
logger.debug(
f"QQMusic: no duration available, using first result "
f"mid={first['mid']} '{first.get('name', '?')}'"
)
return first["mid"]
except Exception as e:
logger.error(f"QQMusic: search failed: {e}")
return None
def _get_lyric(self, mid: str) -> Optional[LyricResult]:
"""Fetch lyrics for a given QQ Music song MID."""
logger.debug(f"QQMusic: fetching lyrics for mid={mid}")
try:
with httpx.Client(timeout=HTTP_TIMEOUT) as client:
resp = client.get(
f"{QQ_MUSIC_API_URL}/api/lyric",
params={"mid": mid},
)
resp.raise_for_status()
data = resp.json()
if data.get("code") != 0:
logger.error(f"QQMusic: lyric API error: {data}")
return LyricResult(
status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
)
lrc = data.get("data", {}).get("lyric", "")
if not isinstance(lrc, str) or not lrc.strip():
logger.debug(f"QQMusic: empty lyrics for mid={mid}")
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
synced = is_synced(lrc)
status = (
CacheStatus.SUCCESS_SYNCED if synced else CacheStatus.SUCCESS_UNSYNCED
)
logger.info(
f"QQMusic: got {status.value} lyrics for mid={mid} "
f"({len(lrc.splitlines())} lines)"
)
return LyricResult(
status=status, lyrics=lrc.strip(), source=self.source_name
)
except Exception as e:
logger.error(f"QQMusic: lyric fetch failed for mid={mid}: {e}")
return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)
def fetch(
self, track: TrackMeta, bypass_cache: bool = False
) -> Optional[LyricResult]:
"""Search for the track and fetch its lyrics."""
if not QQ_MUSIC_API_URL:
logger.debug("QQMusic: skipped — QQ_MUSIC_API_URL not configured")
return None
query = f"{track.artist or ''} {track.title or ''}".strip()
if not query:
logger.debug("QQMusic: skipped — insufficient metadata")
return None
logger.info(f"QQMusic: fetching lyrics for {track.display_name()}")
mid = self._search(track)
if not mid:
logger.debug(f"QQMusic: no match found for {track.display_name()}")
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
return self._get_lyric(mid)
@@ -0,0 +1,369 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 10:43:21
Description: Spotify fetcher — obtains synced lyrics via Spotify's internal color-lyrics API.
"""
"""
Authentication flow:
1. Fetch server time from Spotify
2. Fetch TOTP secret
3. Generate a TOTP code and exchange it (with SP_DC cookie) for an access token
4. Request lyrics using the access token
The secret and token are cached on the instance to avoid redundant network
calls within the same session.
Requires SPOTIFY_SP_DC environment variable to be set.
"""
import httpx
import json
import time
import struct
import hmac
import hashlib
from typing import Optional, Tuple
from loguru import logger
from .base import BaseFetcher
from ..models import TrackMeta, LyricResult, CacheStatus
from ..config import (
HTTP_TIMEOUT,
SPOTIFY_APP_VERSION,
TTL_NOT_FOUND,
TTL_NETWORK_ERROR,
SPOTIFY_TOKEN_URL,
SPOTIFY_LYRICS_URL,
SPOTIFY_SERVER_TIME_URL,
SPOTIFY_SECRET_URL,
SPOTIFY_SP_DC,
SPOTIFY_TOKEN_CACHE_FILE,
UA_BROWSER,
)
class SpotifyFetcher(BaseFetcher):
def __init__(self) -> None:
# Session-level caches to avoid refetching within the same run
self._cached_secret: Optional[Tuple[str, int]] = None
self._cached_token: Optional[str] = None
self._token_expires_at: float = 0.0
@property
def source_name(self) -> str:
return "spotify"
# ─── Auth helpers ────────────────────────────────────────────────
def _get_server_time(self, client: httpx.Client) -> Optional[int]:
"""Fetch Spotify's server timestamp (seconds since epoch)."""
try:
res = client.get(SPOTIFY_SERVER_TIME_URL, timeout=HTTP_TIMEOUT)
res.raise_for_status()
data = res.json()
if not isinstance(data, dict) or "serverTime" not in data:
logger.error(f"Spotify: unexpected server-time response: {data}")
return None
server_time = data["serverTime"]
logger.debug(f"Spotify: server time = {server_time}")
return server_time
except Exception as e:
logger.error(f"Spotify: failed to fetch server time: {e}")
return None
def _get_secret(self, client: httpx.Client) -> Optional[Tuple[str, int]]:
"""Fetch and decode the TOTP secret. Cached after first success.
Response format: [{version: int, secret: str}, ...]
Each character in *secret* is XOR-decoded with ``(index % 33) + 9``.
"""
if self._cached_secret is not None:
logger.debug("Spotify: using cached TOTP secret")
return self._cached_secret
try:
res = client.get(SPOTIFY_SECRET_URL, timeout=HTTP_TIMEOUT)
res.raise_for_status()
data = res.json()
if not isinstance(data, list) or len(data) == 0:
logger.error(
f"Spotify: unexpected secrets response (type={type(data).__name__}, len={len(data) if isinstance(data, list) else '?'})"
)
return None
last = data[-1]
if "secret" not in last or "version" not in last:
logger.error(f"Spotify: malformed secret entry: {list(last.keys())}")
return None
secret_raw = last["secret"]
version = last["version"]
# XOR decode
parts = []
for i, char in enumerate(secret_raw):
parts.append(str(ord(char) ^ ((i % 33) + 9)))
secret = "".join(parts)
logger.debug(f"Spotify: decoded secret v{version} (len={len(secret)})")
self._cached_secret = (secret, version)
return self._cached_secret
except Exception as e:
logger.error(f"Spotify: failed to fetch secret: {e}")
return None
@staticmethod
def _generate_totp(server_time_s: int, secret: str) -> str:
"""Generate a 6-digit TOTP code compatible with Spotify's auth.
Uses HMAC-SHA1 with a 30-second period, matching the Go reference.
"""
counter = server_time_s // 30
counter_bytes = struct.pack(">Q", counter)
mac = hmac.new(secret.encode(), counter_bytes, hashlib.sha1).digest()
offset = mac[-1] & 0x0F
binary_code = (
(mac[offset] & 0x7F) << 24
| (mac[offset + 1] & 0xFF) << 16
| (mac[offset + 2] & 0xFF) << 8
| (mac[offset + 3] & 0xFF)
)
code = binary_code % (10**6)
return str(code).zfill(6)
def _load_cached_token(self) -> Optional[str]:
"""Try to load a valid token from the persistent cache file."""
try:
with open(SPOTIFY_TOKEN_CACHE_FILE, "r") as f:
data = json.load(f)
expires_ms = data.get("accessTokenExpirationTimestampMs", 0)
if expires_ms <= int(time.time() * 1000):
logger.debug("Spotify: persisted token expired")
return None
token = data.get("accessToken", "")
if not token:
return None
self._cached_token = token
self._token_expires_at = expires_ms / 1000.0
logger.debug("Spotify: loaded token from cache file")
return token
except (FileNotFoundError, json.JSONDecodeError, KeyError):
return None
def _save_token(self, body: dict) -> None:
"""Persist the token response to disk."""
try:
with open(SPOTIFY_TOKEN_CACHE_FILE, "w") as f:
json.dump(body, f)
logger.debug("Spotify: token saved to cache file")
except Exception as e:
logger.warning(f"Spotify: failed to write token cache: {e}")
def _get_token(self) -> Optional[str]:
"""Obtain a Spotify access token. Cached in memory and on disk.
Requires SP_DC cookie (set via SPOTIFY_SP_DC env var).
"""
# 1. Memory cache
if self._cached_token and time.time() < self._token_expires_at - 30:
logger.debug("Spotify: using in-memory cached token")
return self._cached_token
# 2. Disk cache
disk_token = self._load_cached_token()
if disk_token and time.time() < self._token_expires_at - 30:
return disk_token
# 3. Fetch new token
if not SPOTIFY_SP_DC:
logger.error(
"Spotify: SPOTIFY_SP_DC env var not set — "
"cannot authenticate with Spotify"
)
return None
headers = {
"User-Agent": UA_BROWSER,
"Accept": "*/*",
"Referer": "https://open.spotify.com/",
"Cookie": f"sp_dc={SPOTIFY_SP_DC}",
}
with httpx.Client(headers=headers) as client:
server_time = self._get_server_time(client)
if server_time is None:
return None
secret_data = self._get_secret(client)
if secret_data is None:
return None
secret, version = secret_data
totp = self._generate_totp(server_time, secret)
logger.debug(f"Spotify: generated TOTP v{version}: {totp}")
params = {
"reason": "init",
"productType": "web-player",
"totp": totp,
"totpVer": str(version),
"totpServer": totp,
}
try:
res = client.get(SPOTIFY_TOKEN_URL, params=params, timeout=HTTP_TIMEOUT)
if res.status_code != 200:
logger.error(f"Spotify: token request returned {res.status_code}")
return None
body = res.json()
if not isinstance(body, dict) or "accessToken" not in body:
logger.error(
f"Spotify: unexpected token response keys: {list(body.keys()) if isinstance(body, dict) else type(body).__name__}"
)
return None
token = body["accessToken"]
is_anonymous = body.get("isAnonymous", False)
if is_anonymous:
logger.warning(
"Spotify: received anonymous token — SP_DC may be invalid"
)
expires_ms = body.get("accessTokenExpirationTimestampMs", 0)
if expires_ms and expires_ms > int(time.time() * 1000):
self._token_expires_at = expires_ms / 1000.0
else:
logger.warning("Spotify: token expiry missing or invalid")
self._token_expires_at = time.time() + 3600
self._cached_token = token
# Persist to disk (including anonymous tokens, same as Go ref)
self._save_token(body)
logger.debug("Spotify: obtained access token")
return token
except Exception as e:
logger.error(f"Spotify: token request failed: {e}")
return None
# ─── Lyrics ──────────────────────────────────────────────────────
@staticmethod
def _format_lrc_line(start_ms: int, words: str) -> str:
"""Format a single lyric line as LRC ``[mm:ss.cc]text``."""
minutes = start_ms // 60000
seconds = (start_ms // 1000) % 60
# Truncate to centiseconds; round() could yield 100 and emit an invalid tag
centiseconds = (start_ms % 1000) // 10
return f"[{minutes:02d}:{seconds:02d}.{centiseconds:02d}]{words}"
@staticmethod
def _is_truly_synced(lines: list[dict]) -> bool:
"""Check if lyrics are actually synced (not all timestamps zero)."""
for line in lines:
try:
ms = int(line.get("startTimeMs", "0"))
if ms > 0:
return True
except (ValueError, TypeError):
continue
return False
def fetch(
self, track: TrackMeta, bypass_cache: bool = False
) -> Optional[LyricResult]:
"""Fetch lyrics for a Spotify track by its track ID."""
if not track.trackid:
logger.debug("Spotify: skipped — no trackid in metadata")
return None
logger.info(f"Spotify: fetching lyrics for trackid={track.trackid}")
token = self._get_token()
if not token:
logger.error("Spotify: cannot fetch lyrics without a token")
return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)
url = f"{SPOTIFY_LYRICS_URL}{track.trackid}?format=json&vocalRemoval=false&market=from_token"
headers = {
"User-Agent": UA_BROWSER,
"Accept": "application/json",
"Authorization": f"Bearer {token}",
"Referer": "https://open.spotify.com/",
"App-Platform": "WebPlayer",
"Spotify-App-Version": SPOTIFY_APP_VERSION,
"Origin": "https://open.spotify.com",
}
try:
with httpx.Client(timeout=HTTP_TIMEOUT) as client:
res = client.get(url, headers=headers)
if res.status_code == 404:
logger.debug(f"Spotify: 404 for trackid={track.trackid}")
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
if res.status_code != 200:
logger.error(f"Spotify: lyrics API returned {res.status_code}")
return LyricResult(
status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
)
data = res.json()
# Validate response structure
if not isinstance(data, dict) or "lyrics" not in data:
logger.error("Spotify: unexpected lyrics response structure")
return LyricResult(
status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
)
lyrics_data = data["lyrics"]
sync_type = lyrics_data.get("syncType", "")
lines = lyrics_data.get("lines", [])
if not isinstance(lines, list) or len(lines) == 0:
logger.debug("Spotify: response contained no lyric lines")
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
# Determine sync status
# syncType == "LINE_SYNCED" AND at least one non-zero timestamp
is_synced = sync_type == "LINE_SYNCED" and self._is_truly_synced(lines)
# Convert to LRC
lrc_lines: list[str] = []
for line in lines:
words = line.get("words", "")
if not isinstance(words, str):
continue
try:
ms = int(line.get("startTimeMs", "0"))
except (ValueError, TypeError):
ms = 0
if is_synced:
lrc_lines.append(self._format_lrc_line(ms, words))
else:
# Unsynced: emit with zero timestamps
lrc_lines.append(f"[00:00.00]{words}")
content = "\n".join(lrc_lines)
status = (
CacheStatus.SUCCESS_SYNCED
if is_synced
else CacheStatus.SUCCESS_UNSYNCED
)
logger.info(f"Spotify: got {status.value} lyrics ({len(lrc_lines)} lines)")
return LyricResult(status=status, lyrics=content, source=self.source_name)
except Exception as e:
logger.error(f"Spotify: lyrics fetch failed: {e}")
return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)
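For experimentation, the TOTP step can be reproduced on its own. This follows the standard RFC 4226 truncation over an ASCII secret; the secret below is a made-up placeholder, not a real Spotify value:

```python
import hashlib
import hmac
import struct


def generate_totp(server_time_s: int, secret: str, period: int = 30) -> str:
    """HMAC-SHA1 TOTP with a 30-second period, 6-digit output."""
    counter_bytes = struct.pack(">Q", server_time_s // period)
    mac = hmac.new(secret.encode(), counter_bytes, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset (RFC 4226)
    code = (
        (mac[offset] & 0x7F) << 24
        | mac[offset + 1] << 16
        | mac[offset + 2] << 8
        | mac[offset + 3]
    ) % 10**6
    return str(code).zfill(6)


code = generate_totp(1_700_000_000, "placeholder-secret")
```

Timestamps within the same 30-second window produce the same code, which is why the server time only needs second precision.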
@@ -0,0 +1,126 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 21:54:01
Description: Shared LRC time-tag utilities
"""
import re
from pathlib import Path
from typing import Optional
from urllib.parse import unquote
from .models import CacheStatus
# Standard format: [mm:ss.cc] or [mm:ss.ccc]
_STANDARD_TAG_RE = re.compile(r"\[\d{2}:\d{2}\.\d{2,3}\]")
# Non-standard format: [mm:ss:cc] (two colons instead of dot)
_COLON_TAG_RE = re.compile(r"\[(\d{2}:\d{2}):(\d{2,3})\]")
# Matches any LRC time tag (standard or non-standard) at start of line
LRC_LINE_RE = re.compile(r"^\[(\d{2}:\d{2}[.:]\d{2,3})\]", re.MULTILINE)
# All-zero tags
_ZERO_TAG_RE = re.compile(r"^\[00:00[.:]0{2,3}\]$")
# [offset:+/-xxx] tag — value in milliseconds
_OFFSET_RE = re.compile(r"^\[offset:\s*([+-]?\d+)\]\s*$", re.MULTILINE | re.IGNORECASE)
# Time tag for offset application: captures mm, ss, cc/ccc
_TIME_TAG_RE = re.compile(r"\[(\d{2}):(\d{2})\.(\d{2,3})\]")
def _apply_offset(text: str) -> str:
"""Parse [offset:±ms] tag and shift all time tags accordingly.
Per LRC spec, a positive offset means lyrics appear sooner (subtract
from timestamps), negative means later (add to timestamps).
"""
m = _OFFSET_RE.search(text)
if not m:
return text
offset_ms = int(m.group(1))
if offset_ms == 0:
return _OFFSET_RE.sub("", text).strip("\n")
# Remove the offset tag line
text = _OFFSET_RE.sub("", text)
def _shift(match: re.Match) -> str:
mm, ss, cs = int(match.group(1)), int(match.group(2)), match.group(3)
# Normalize centiseconds to milliseconds
if len(cs) == 2:
ms = int(cs) * 10
fmt_cs = 2
else:
ms = int(cs)
fmt_cs = 3
total_ms = (mm * 60 + ss) * 1000 + ms - offset_ms
total_ms = max(0, total_ms)
new_mm = total_ms // 60000
new_ss = (total_ms % 60000) // 1000
new_cs = total_ms % 1000
if fmt_cs == 2:
new_cs = new_cs // 10
return f"[{new_mm:02d}:{new_ss:02d}.{new_cs:02d}]"
return f"[{new_mm:02d}:{new_ss:02d}.{new_cs:03d}]"
return _TIME_TAG_RE.sub(_shift, text)
def normalize_tags(text: str) -> str:
"""Normalize LRC time tags: colon format → dot format, then apply offset."""
text = _COLON_TAG_RE.sub(r"[\1.\2]", text)
return _apply_offset(text)
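A condensed, self-contained version of the two steps above (restricted to two-digit centisecond tags for brevity; the real module also handles three-digit milliseconds):

```python
import re

_COLON = re.compile(r"\[(\d{2}:\d{2}):(\d{2,3})\]")
_OFFSET = re.compile(r"^\[offset:\s*([+-]?\d+)\]\s*$", re.M | re.I)
_TAG = re.compile(r"\[(\d{2}):(\d{2})\.(\d{2})\]")


def normalize(text: str) -> str:
    """Colon tags -> dot tags, then shift by [offset:±ms] (positive = earlier)."""
    text = _COLON.sub(r"[\1.\2]", text)
    m = _OFFSET.search(text)
    if not m:
        return text
    off = int(m.group(1))
    text = _OFFSET.sub("", text)  # drop the offset tag line

    def shift(t: re.Match) -> str:
        total = (int(t.group(1)) * 60 + int(t.group(2))) * 1000 + int(t.group(3)) * 10 - off
        total = max(0, total)  # clamp so early lines never go negative
        return f"[{total // 60000:02d}:{total % 60000 // 1000:02d}.{total % 1000 // 10:02d}]"

    return _TAG.sub(shift, text)
```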
def is_synced(text: str) -> bool:
"""Check whether text contains actual LRC time tags with non-zero times.
Returns False if no tags exist or all tags are [00:00.00].
Handles both [mm:ss.cc] and [mm:ss:cc] formats.
"""
tags = _STANDARD_TAG_RE.findall(text)
# Also check non-standard format
tags += [f"[{m.group(1)}.{m.group(2)}]" for m in _COLON_TAG_RE.finditer(text)]
if not tags:
return False
for tag in tags:
if not _ZERO_TAG_RE.match(tag):
return True
return False
def detect_sync_status(text: str) -> CacheStatus:
"""Determine whether lyrics contain meaningful LRC time tags."""
return (
CacheStatus.SUCCESS_SYNCED if is_synced(text) else CacheStatus.SUCCESS_UNSYNCED
)
def get_audio_path(audio_url: str, ensure_exists: bool = False) -> Optional[Path]:
"""Convert file:// URL to Path, return None if invalid or (if ensure_exists) file doesn't exist."""
if not audio_url.startswith("file://"):
return None
file_path = unquote(audio_url.replace("file://", "", 1))
path = Path(file_path)
if ensure_exists and not path.exists():
return None
return path
def get_sidecar_path(
audio_url: str, ensure_audio_exists: bool = False, ensure_exists: bool = False
) -> Optional[Path]:
"""Given a file:// URL, return the corresponding .lrc sidecar path.
If ensure_audio_exists is True, return None if the audio file does not exist.
If ensure_exists is True, return None if the .lrc file does not exist.
"""
audio_path = get_audio_path(audio_url, ensure_exists=ensure_audio_exists)
if not audio_path:
return None
lrc_path = audio_path.with_suffix(".lrc")
if ensure_exists and not lrc_path.exists():
return None
return lrc_path
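The URL-to-sidecar mapping can be tried in isolation with a minimal sketch (no existence checks, unlike the helpers above):

```python
from pathlib import Path
from typing import Optional
from urllib.parse import unquote


def sidecar_for(audio_url: str) -> Optional[Path]:
    """file:// URL -> sibling .lrc path; None for non-file URLs."""
    if not audio_url.startswith("file://"):
        return None
    # Decode percent-escapes (e.g. %20) before building the path
    return Path(unquote(audio_url[len("file://"):])).with_suffix(".lrc")


p = sidecar_for("file:///music/My%20Song.flac")
```

Note that `with_suffix` replaces only the last extension, so `track.tar.gz` would map to `track.tar.lrc`.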
@@ -1,17 +1,12 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 04:09:36
Description: Data models.
Description: Data models
"""
from __future__ import annotations
from pydantic import BaseModel, ConfigDict
from enum import Enum
from typing import Optional, TYPE_CHECKING
from dataclasses import dataclass
if TYPE_CHECKING:
from .lrc import LRCData
from typing import Optional
class CacheStatus(str, Enum):
@@ -23,10 +18,11 @@ class CacheStatus(str, Enum):
NETWORK_ERROR = "NETWORK_ERROR"
@dataclass
class TrackMeta:
class TrackMeta(BaseModel):
"""Metadata describing a track obtained from MPRIS or manual input."""
model_config = ConfigDict(strict=True)
trackid: Optional[str] = None # Spotify track ID (without "spotify:track:" prefix)
length: Optional[int] = None # Duration in milliseconds
album: Optional[str] = None
@@ -54,16 +50,12 @@ class TrackMeta:
return " - ".join(parts) if parts else self.trackid or self.url or "(unknown)"
@dataclass
class LyricResult:
class LyricResult(BaseModel):
"""Result of a lyric fetch attempt, also used as cache record."""
model_config = ConfigDict(strict=True)
status: CacheStatus
lyrics: Optional[LRCData] = None
lyrics: Optional[str] = None
source: Optional[str] = None # Which fetcher produced this result
ttl: Optional[int] = None # Hint for cache TTL (seconds)
confidence: float = 100.0 # 0-100 selection confidence (100 = trusted/exact)
# Note: pydantic BaseModel never calls __post_init__; use model_post_init
def model_post_init(self, __context) -> None:
if self.status in (CacheStatus.NOT_FOUND, CacheStatus.NETWORK_ERROR):
self.confidence = 0.0
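One pitfall when porting a `@dataclass` to pydantic: `__post_init__` is never invoked on a `BaseModel`; pydantic v2 runs `model_post_init` after validation instead. A minimal illustration (class and field names here are simplified stand-ins):

```python
from pydantic import BaseModel


class Result(BaseModel):
    status: str
    confidence: float = 100.0

    # pydantic v2 hook, runs after field validation; a leftover
    # __post_init__ from the dataclass days would be silently ignored
    def model_post_init(self, __context) -> None:
        if self.status in ("NOT_FOUND", "NETWORK_ERROR"):
            self.confidence = 0.0
```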
@@ -1,24 +1,21 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 04:44:15
Description: MPRIS integration for fetching track metadata.
Description: MPRIS integration for fetching track metadata
"""
from __future__ import annotations
import asyncio
from dbus_next.aio.message_bus import MessageBus
from dbus_next.constants import BusType
from dbus_next.message import Message
from lrcfetch.models import TrackMeta
from lrcfetch.config import PREFERRED_PLAYER
from loguru import logger
from typing import Optional, List, Any
from .config import DEFAULT_PLAYER_BLACKLIST, DEFAULT_PREFERRED_PLAYER
from .models import TrackMeta
async def _list_mpris_players(bus: MessageBus) -> List[str]:
"""List all MPRIS player bus names without any filtering."""
"""List all MPRIS player bus names."""
try:
reply = await bus.call(
Message(
@@ -55,79 +52,47 @@ async def _get_playback_status(bus: MessageBus, player_name: str) -> Optional[st
return None
def pick_active_player(
all_names: list[str],
playing: list[str],
preferred: str,
last_active: str | None = None,
) -> str | None:
"""Select the best MPRIS player by play state, preferred keyword, and continuity.
Priority: single playing > preferred keyword among playing > preferred keyword
among all candidates > last active > first candidate.
"""
if not all_names:
return None
if len(playing) == 1:
return playing[0]
candidates = playing if playing else all_names
preferred_lower = preferred.lower().strip()
if preferred_lower:
for name in candidates:
if preferred_lower in name.lower():
return name
if last_active and last_active in all_names:
return last_active
return candidates[0] if candidates else None
async def _select_player(
bus: MessageBus,
specific_player: Optional[str],
preferred_player: str,
player_blacklist: tuple[str, ...],
bus: MessageBus, specific_player: Optional[str] = None
) -> Optional[str]:
"""Select the best MPRIS player.
When specific_player is given, it bypasses player_blacklist and filters by name.
When specific_player is given, filter by name match.
Otherwise: prefer the currently playing player. If multiple are playing,
prefer the one matching preferred_player (default: spotify).
prefer the one matching LRCFETCH_PLAYER env var (default: spotify).
"""
all_names = await _list_mpris_players(bus)
if not all_names:
players = await _list_mpris_players(bus)
if not players:
return None
if specific_player:
# --player bypasses player_blacklist so the user can target any player
matched = [p for p in all_names if specific_player.lower() in p.lower()]
return matched[0] if matched else None
players = [p for p in players if specific_player.lower() in p.lower()]
return players[0] if players else None
# auto-selection: apply blacklist before choosing
# candidates = []
# for p in all_names:
# if any(x.lower() in p.lower() for x in player_blacklist):
# logger.info(f"Excluding blacklisted player: {p}")
# else:
# candidates.append(p)
candidates = [
p
for p in all_names
if not any(x.lower() in p.lower() for x in player_blacklist)
]
playing: list[str] = []
for p in candidates:
# Check playback status for each player
playing = []
for p in players:
status = await _get_playback_status(bus, p)
logger.debug(f"Player {p}: {status}")
if status == "Playing":
playing.append(p)
return pick_active_player(candidates, playing, preferred_player)
candidates = playing if playing else players
if len(candidates) == 1:
return candidates[0]
# Multiple candidates: prefer LRCFETCH_PLAYER
preferred = PREFERRED_PLAYER.lower()
if preferred:
for p in candidates:
if preferred in p.lower():
return p
return candidates[0]
async def _fetch_metadata_dbus(
specific_player: Optional[str],
preferred_player: str,
player_blacklist: tuple[str, ...],
specific_player: Optional[str] = None,
) -> Optional[TrackMeta]:
bus = None
try:
@@ -137,9 +102,7 @@ async def _fetch_metadata_dbus(
return None
try:
player_name = await _select_player(
bus, specific_player, preferred_player, player_blacklist
)
player_name = await _select_player(bus, specific_player)
if not player_name:
logger.debug(
f"No active MPRIS players found via DBus{' for ' + specific_player if specific_player else ''}."
@@ -178,8 +141,6 @@ async def _fetch_metadata_dbus(
trackid = trackid.removeprefix("spotify:track:")
elif trackid.startswith("/com/spotify/track/"):
trackid = trackid.removeprefix("/com/spotify/track/")
else:
trackid = None
# Extract length (usually microseconds)
length = metadata.get("mpris:length", None)
@@ -219,15 +180,9 @@ async def _fetch_metadata_dbus(
bus.disconnect()
def get_current_track(
player_name: Optional[str] = None,
preferred_player: str = DEFAULT_PREFERRED_PLAYER,
player_blacklist: tuple[str, ...] = DEFAULT_PLAYER_BLACKLIST,
) -> Optional[TrackMeta]:
def get_current_track(player_name: Optional[str] = None) -> Optional[TrackMeta]:
try:
return asyncio.run(
_fetch_metadata_dbus(player_name, preferred_player, player_blacklist)
)
return asyncio.run(_fetch_metadata_dbus(player_name))
except Exception as e:
logger.error(f"DBus async loop failed: {e}")
return None
@@ -0,0 +1,4 @@
from lrcfetch.cli import run
if __name__ == "__main__":
run()
@@ -1,2 +0,0 @@
*
!.gitignore
@@ -1,343 +0,0 @@
from __future__ import annotations
import argparse
import asyncio
import json
import traceback
from dataclasses import asdict
from pathlib import Path
from typing import Any, Awaitable, Callable
import httpx
from lrx_cli.authenticators import create_authenticators
from lrx_cli.cache import CacheEngine
from lrx_cli.config import AppConfig, load_config
from lrx_cli.fetchers import (
create_fetchers,
LrclibFetcher,
LrclibSearchFetcher,
NeteaseFetcher,
SpotifyFetcher,
QQMusicFetcher,
MusixmatchFetcher,
MusixmatchSpotifyFetcher,
)
from lrx_cli.models import TrackMeta
SAMPLE_TRACK = TrackMeta(
title="One Last Kiss",
artist="Hikaru Utada",
album="One Last Kiss",
length=252026,
trackid="5RhWszHMSKzb7KiXk4Ae0M",
url="https://open.spotify.com/track/5RhWszHMSKzb7KiXk4Ae0M",
)
def _jsonable(value: Any) -> Any:
if isinstance(value, (str, int, float, bool)) or value is None:
return value
if isinstance(value, dict):
return {str(k): _jsonable(v) for k, v in value.items()}
if isinstance(value, (list, tuple)):
return [_jsonable(v) for v in value]
if isinstance(value, bytes):
try:
return value.decode("utf-8")
except Exception:
return value.hex()
if hasattr(value, "model_dump"):
return _jsonable(value.model_dump())
if hasattr(value, "__dict__"):
return _jsonable(vars(value))
return repr(value)
def _write_json(path: Path, payload: Any) -> None:
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(
json.dumps(_jsonable(payload), ensure_ascii=False, indent=2) + "\n",
encoding="utf-8",
)
def _clear_output_files(out_dir: Path) -> None:
for pattern in ("*.json", "*.db"):
for path in out_dir.glob(pattern):
if path.is_file():
path.unlink()
def _new_runtime(config: AppConfig, db_path: Path):
cache = CacheEngine(str(db_path))
authenticators = create_authenticators(cache, config)
fetchers = create_fetchers(cache, authenticators, config)
return fetchers, authenticators
async def _response_dump(resp: httpx.Response) -> dict[str, Any]:
out: dict[str, Any] = {
"status_code": resp.status_code,
"headers": dict(resp.headers),
"url": str(resp.request.url),
"method": resp.request.method,
}
try:
out["json"] = resp.json()
except Exception:
out["text"] = resp.text
return out
def _decode_body(content: bytes) -> str:
if not content:
return ""
try:
return content.decode("utf-8")
except Exception:
return content.hex()
def _dump_request(req: httpx.Request) -> dict[str, Any]:
query_params = {k: v for k, v in req.url.params.multi_items()}
return {
"method": req.method,
"url": str(req.url),
"headers": dict(req.headers),
"query_params": query_params,
"body": _decode_body(req.content),
}
async def run_capture(out_dir: Path, timeout: float, strict: bool) -> int:
out_dir.mkdir(parents=True, exist_ok=True)
_clear_output_files(out_dir)
# Use isolated cache DBs to avoid polluting normal runtime cache.
anon_fetchers, _ = _new_runtime(AppConfig(), out_dir / ".capture-anon.db")
cred_fetchers, _ = _new_runtime(load_config(), out_dir / ".capture-cred.db")
calls: list[tuple[str, dict[str, Any], Callable[[], Awaitable[Any]]]] = []
captured_requests: list[dict[str, Any]] = []
original_send = httpx.AsyncClient.send
async def _patched_send(
self: httpx.AsyncClient,
request: httpx.Request,
*args: Any,
**kwargs: Any,
) -> httpx.Response:
captured_requests.append(_dump_request(request))
return await original_send(self, request, *args, **kwargs)
httpx.AsyncClient.send = _patched_send # type: ignore[method-assign]
async with httpx.AsyncClient(timeout=timeout) as client:
# LRCLIB
lrclib = anon_fetchers["lrclib"]
assert isinstance(lrclib, LrclibFetcher)
calls.append(
(
"lrclib_get",
{"track": asdict(SAMPLE_TRACK)},
lambda: lrclib._api_get(client, SAMPLE_TRACK),
)
)
lrclib_search = anon_fetchers["lrclib-search"]
assert isinstance(lrclib_search, LrclibSearchFetcher)
calls.append(
(
"lrclib_search_candidates",
{"track": asdict(SAMPLE_TRACK)},
lambda: lrclib_search._api_candidates(client, SAMPLE_TRACK),
)
)
# Netease
netease = anon_fetchers["netease"]
assert isinstance(netease, NeteaseFetcher)
calls.append(
(
"netease_search_track",
{"track": asdict(SAMPLE_TRACK), "limit": 5},
lambda: netease._api_search_track(client, SAMPLE_TRACK, 5),
)
)
calls.append(
(
"netease_lyric_track",
{"track": asdict(SAMPLE_TRACK), "limit": 5},
lambda: netease._api_lyric_track(client, SAMPLE_TRACK, 5),
)
)
# Spotify (credentialed runtime)
spotify = cred_fetchers["spotify"]
assert isinstance(spotify, SpotifyFetcher)
calls.append(
(
"spotify_lyrics",
{"track": asdict(SAMPLE_TRACK)},
lambda: spotify._api_lyrics(SAMPLE_TRACK),
)
)
# QQMusic (credentialed runtime)
qq = cred_fetchers["qqmusic"]
assert isinstance(qq, QQMusicFetcher)
calls.append(
(
"qqmusic_search_track",
{"track": asdict(SAMPLE_TRACK), "limit": 10},
lambda: qq._api_search(SAMPLE_TRACK, 10),
)
)
calls.append(
(
"qqmusic_lyric_track",
{"track": asdict(SAMPLE_TRACK), "limit": 10},
lambda: qq._api_lyric_track(SAMPLE_TRACK, 10),
)
)
# Musixmatch anonymous
mxm_anon = anon_fetchers["musixmatch"]
mxm_sp_anon = anon_fetchers["musixmatch-spotify"]
assert isinstance(mxm_anon, MusixmatchFetcher)
assert isinstance(mxm_sp_anon, MusixmatchSpotifyFetcher)
calls.append(
(
"musixmatch_anonymous_search_track",
{"track": asdict(SAMPLE_TRACK)},
lambda: mxm_anon._api_search_track(SAMPLE_TRACK),
)
)
calls.append(
(
"musixmatch_anonymous_macro_track",
{"track": asdict(SAMPLE_TRACK)},
lambda: mxm_anon._api_macro_track(SAMPLE_TRACK),
)
)
calls.append(
(
"musixmatch_spotify_anonymous_macro_track",
{"track": asdict(SAMPLE_TRACK)},
lambda: mxm_sp_anon._api_macro_track(SAMPLE_TRACK),
)
)
# Musixmatch credentialed (if token configured, this uses it)
mxm_cred = cred_fetchers["musixmatch"]
mxm_sp_cred = cred_fetchers["musixmatch-spotify"]
assert isinstance(mxm_cred, MusixmatchFetcher)
assert isinstance(mxm_sp_cred, MusixmatchSpotifyFetcher)
calls.append(
(
"musixmatch_token_search_track",
{"track": asdict(SAMPLE_TRACK)},
lambda: mxm_cred._api_search_track(SAMPLE_TRACK),
)
)
calls.append(
(
"musixmatch_token_macro_track",
{"track": asdict(SAMPLE_TRACK)},
lambda: mxm_cred._api_macro_track(SAMPLE_TRACK),
)
)
calls.append(
(
"musixmatch_spotify_token_macro_track",
{"track": asdict(SAMPLE_TRACK)},
lambda: mxm_sp_cred._api_macro_track(SAMPLE_TRACK),
)
)
failures = 0
try:
for idx, (name, request_payload, fn) in enumerate(calls, start=1):
stem = f"{idx:03d}_{name}"
req_path = out_dir / f"{stem}.request.json"
resp_path = out_dir / f"{stem}.response.json"
captured_requests.clear()
try:
result = await fn()
if isinstance(result, httpx.Response):
payload = await _response_dump(result)
else:
payload = _jsonable(result)
_write_json(
req_path,
{
"call": name,
"input": request_payload,
"http_requests": _jsonable(captured_requests),
},
)
_write_json(resp_path, {"ok": True, "response": payload})
except Exception as exc:
failures += 1
_write_json(
req_path,
{
"call": name,
"input": request_payload,
"http_requests": _jsonable(captured_requests),
},
)
_write_json(
resp_path,
{
"ok": False,
"error": str(exc),
"traceback": traceback.format_exc(),
},
)
if strict:
break
finally:
httpx.AsyncClient.send = original_send # type: ignore[method-assign]
return failures
def main() -> int:
parser = argparse.ArgumentParser(
description=(
"Call external provider APIs with sample data and save request/response "
"pairs for API reference."
)
)
parser.add_argument(
"--out-dir",
type=Path,
default=Path("misc/api_ref"),
help="Output directory for request/response files.",
)
parser.add_argument(
"--timeout",
type=float,
default=20.0,
help="HTTP timeout in seconds.",
)
parser.add_argument(
"--strict",
action="store_true",
help="Stop on first failed call.",
)
args = parser.parse_args()
failures = asyncio.run(run_capture(args.out_dir, args.timeout, args.strict))
print(f"capture finished: failures={failures}, out_dir={args.out_dir}")
return 1 if (args.strict and failures > 0) else 0
if __name__ == "__main__":
raise SystemExit(main())
+8 -22
@@ -3,8 +3,8 @@ requires = ["hatchling"]
 build-backend = "hatchling.build"
 [project]
-name = "lrx-cli"
-version = "0.7.9"
+name = "lrcfetch"
+version = "0.1.5"
 description = "Fetch line-synced lyrics for your music player."
 readme = "README.md"
 requires-python = ">=3.13"
@@ -14,30 +14,16 @@ dependencies = [
 "httpx>=0.28.1",
 "loguru>=0.7.3",
 "mutagen>=1.47.0",
-"platformdirs>=4.9.6",
+"platformdirs>=4.9.4",
 "pydantic>=2.12.5",
 "python-dotenv>=1.2.2",
 ]
 [project.scripts]
-lrx = "lrx_cli.cli:run"
+lrcfetch = "lrcfetch.cli:run"
 [tool.ruff.lint]
-ignore = ["E402"] # Since there are headers
+ignore = ["E402"]
 [dependency-groups]
-dev = [
-"poethepoet>=0.44.0",
-"pyright>=1.1.406",
-"pytest>=9.0.2",
-"ruff>=0.15.8",
-]
-[tool.poe.tasks]
-fmt = "ruff format ."
-lint = { shell = "ruff check . && pyright" }
-test = "pytest"
-test-api = "pytest -m 'network or not network'"
-[tool.pyright]
-pythonVersion = "3.13"
-include = ["src", "tests", "misc"]
-typeCheckingMode = "standard"
+dev = ["ruff>=0.15.8"]
-3
@@ -1,3 +0,0 @@
[pytest]
addopts = -m "not network"
markers = network: marks tests that require real network access to external APIs
-180
@@ -1,180 +0,0 @@
# This file was autogenerated by uv via the following command:
# uv export
-e .
anyio==4.13.0 \
--hash=sha256:08b310f9e24a9594186fd75b4f73f4a4152069e3853f1ed8bfbf58369f4ad708 \
--hash=sha256:334b70e641fd2221c1505b3890c69882fe4a2df910cba14d97019b90b24439dc
# via httpx
attrs==26.1.0 \
--hash=sha256:c647aa4a12dfbad9333ca4e71fe62ddc36f4e63b2d260a37a8b83d2f043ac309 \
--hash=sha256:d03ceb89cb322a8fd706d4fb91940737b6642aa36998fe130a9bc96c985eff32
# via cyclopts
certifi==2026.2.25 \
--hash=sha256:027692e4402ad994f1c42e52a4997a9763c646b73e4096e4d5d6db8af1d6f0fa \
--hash=sha256:e887ab5cee78ea814d3472169153c2d12cd43b14bd03329a39a9c6e2e80bfba7
# via
# httpcore
# httpx
colorama==0.4.6 ; sys_platform == 'win32' \
--hash=sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44 \
--hash=sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6
# via
# loguru
# pytest
cyclopts==4.10.2 \
--hash=sha256:a1f2d6f8f7afac9456b48f75a40b36658778ddc9c6d406b520d017ae32c990fe \
--hash=sha256:d7b950457ef2563596d56331f80cbbbf86a2772535fb8b315c4f03bc7e6127f1
# via lrx-cli
dbus-next==0.2.3 \
--hash=sha256:58948f9aff9db08316734c0be2a120f6dc502124d9642f55e90ac82ffb16a18b \
--hash=sha256:f4eae26909332ada528c0a3549dda8d4f088f9b365153952a408e28023a626a5
# via lrx-cli
docstring-parser==0.17.0 \
--hash=sha256:583de4a309722b3315439bb31d64ba3eebada841f2e2cee23b99df001434c912 \
--hash=sha256:cf2569abd23dce8099b300f9b4fa8191e9582dda731fd533daf54c4551658708
# via cyclopts
docutils==0.22.4 \
--hash=sha256:4db53b1fde9abecbb74d91230d32ab626d94f6badfc575d6db9194a49df29968 \
--hash=sha256:d0013f540772d1420576855455d050a2180186c91c15779301ac2ccb3eeb68de
# via rich-rst
h11==0.16.0 \
--hash=sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1 \
--hash=sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86
# via httpcore
httpcore==1.0.9 \
--hash=sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55 \
--hash=sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8
# via httpx
httpx==0.28.1 \
--hash=sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc \
--hash=sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad
# via lrx-cli
idna==3.11 \
--hash=sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea \
--hash=sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902
# via
# anyio
# httpx
iniconfig==2.3.0 \
--hash=sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730 \
--hash=sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12
# via pytest
loguru==0.7.3 \
--hash=sha256:19480589e77d47b8d85b2c827ad95d49bf31b0dcde16593892eb51dd18706eb6 \
--hash=sha256:31a33c10c8e1e10422bfd431aeb5d351c7cf7fa671e3c4df004162264b28220c
# via lrx-cli
markdown-it-py==4.0.0 \
--hash=sha256:87327c59b172c5011896038353a81343b6754500a08cd7a4973bb48c6d578147 \
--hash=sha256:cb0a2b4aa34f932c007117b194e945bd74e0ec24133ceb5bac59009cda1cb9f3
# via rich
mdurl==0.1.2 \
--hash=sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8 \
--hash=sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba
# via markdown-it-py
mutagen==1.47.0 \
--hash=sha256:719fadef0a978c31b4cf3c956261b3c58b6948b32023078a2117b1de09f0fc99 \
--hash=sha256:edd96f50c5907a9539d8e5bba7245f62c9f520aef333d13392a79a4f70aca719
# via lrx-cli
nodeenv==1.10.0 \
--hash=sha256:5bb13e3eed2923615535339b3c620e76779af4cb4c6a90deccc9e36b274d3827 \
--hash=sha256:996c191ad80897d076bdfba80a41994c2b47c68e224c542b48feba42ba00f8bb
# via pyright
packaging==26.0 \
--hash=sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4 \
--hash=sha256:b36f1fef9334a5588b4166f8bcd26a14e521f2b55e6b9de3aaa80d3ff7a37529
# via pytest
pastel==0.2.1 \
--hash=sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364 \
--hash=sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d
# via poethepoet
platformdirs==4.9.6 \
--hash=sha256:3bfa75b0ad0db84096ae777218481852c0ebc6c727b3168c1b9e0118e458cf0a \
--hash=sha256:e61adb1d5e5cb3441b4b7710bea7e4c12250ca49439228cc1021c00dcfac0917
# via lrx-cli
pluggy==1.6.0 \
--hash=sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3 \
--hash=sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746
# via pytest
poethepoet==0.44.0 \
--hash=sha256:36d3d834708ed069ac1e4f8ed77915c55265b7b6e01aeb2fe617c9fe9cfd524a \
--hash=sha256:c2667b513621788fb46482e371cdf81c0b04344e0e0bcb7aa8af45f84c2fce7b
pygments==2.20.0 \
--hash=sha256:6757cd03768053ff99f3039c1a36d6c0aa0b263438fcab17520b30a303a82b5f \
--hash=sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176
# via
# pytest
# rich
pyright==1.1.408 \
--hash=sha256:090b32865f4fdb1e0e6cd82bf5618480d48eecd2eb2e70f960982a3d9a4c17c1 \
--hash=sha256:f28f2321f96852fa50b5829ea492f6adb0e6954568d1caa3f3af3a5f555eb684
pytest==9.0.3 \
--hash=sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9 \
--hash=sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c
pyyaml==6.0.3 \
--hash=sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c \
--hash=sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3 \
--hash=sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6 \
--hash=sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65 \
--hash=sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1 \
--hash=sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310 \
--hash=sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac \
--hash=sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9 \
--hash=sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7 \
--hash=sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35 \
--hash=sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb \
--hash=sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065 \
--hash=sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c \
--hash=sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c \
--hash=sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764 \
--hash=sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac \
--hash=sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8 \
--hash=sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3 \
--hash=sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5 \
--hash=sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702 \
--hash=sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788 \
--hash=sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba \
--hash=sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5 \
--hash=sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26 \
--hash=sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f \
--hash=sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b \
--hash=sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be \
--hash=sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c \
--hash=sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6
# via poethepoet
rich==14.3.3 \
--hash=sha256:793431c1f8619afa7d3b52b2cdec859562b950ea0d4b6b505397612db8d5362d \
--hash=sha256:b8daa0b9e4eef54dd8cf7c86c03713f53241884e814f4e2f5fb342fe520f639b
# via
# cyclopts
# rich-rst
rich-rst==1.3.2 \
--hash=sha256:a1196fdddf1e364b02ec68a05e8ff8f6914fee10fbca2e6b6735f166bb0da8d4 \
--hash=sha256:a99b4907cbe118cf9d18b0b44de272efa61f15117c61e39ebdc431baf5df722a
# via cyclopts
ruff==0.15.10 \
--hash=sha256:0744e31482f8f7d0d10a11fcbf897af272fefdfcb10f5af907b18c2813ff4d5f \
--hash=sha256:0ee3ef42dab7078bda5ff6a1bcba8539e9857deb447132ad5566a038674540d0 \
--hash=sha256:136c00ca2f47b0018b073f28cb5c1506642a830ea941a60354b0e8bc8076b151 \
--hash=sha256:28cb32d53203242d403d819fd6983152489b12e4a3ae44993543d6fe62ab42ed \
--hash=sha256:51cb8cc943e891ba99989dd92d61e29b1d231e14811db9be6440ecf25d5c1609 \
--hash=sha256:601d1610a9e1f1c2165a4f561eeaa2e2ea1e97f3287c5aa258d3dab8b57c6188 \
--hash=sha256:8154d43684e4333360fedd11aaa40b1b08a4e37d8ffa9d95fee6fa5b37b6fab1 \
--hash=sha256:83e1dd04312997c99ea6965df66a14fb4f03ba978564574ffc68b0d61fd3989e \
--hash=sha256:8ab88715f3a6deb6bde6c227f3a123410bec7b855c3ae331b4c006189e895cef \
--hash=sha256:8b80a2f3c9c8a950d6237f2ca12b206bccff626139be9fa005f14feb881a1ae8 \
--hash=sha256:93cc06a19e5155b4441dd72808fdf84290d84ad8a39ca3b0f994363ade4cebb1 \
--hash=sha256:a768ff5969b4f44c349d48edf4ab4f91eddb27fd9d77799598e130fb628aa158 \
--hash=sha256:b0c52744cf9f143a393e284125d2576140b68264a93c6716464e129a3e9adb48 \
--hash=sha256:b1e7c16ea0ff5a53b7c2df52d947e685973049be1cdfe2b59a9c43601897b22e \
--hash=sha256:d1f86e67ebfdef88e00faefa1552b5e510e1d35f3be7d423dc7e84e63788c94e \
--hash=sha256:d4272e87e801e9a27a2e8df7b21011c909d9ddd82f4f3281d269b6ba19789ca5 \
--hash=sha256:e3e53c588164dc025b671c9df2462429d60357ea91af7e92e9d56c565a9f1b07 \
--hash=sha256:e59c9bdc056a320fb9ea1700a8d591718b8faf78af065484e801258d3a76bc3f
typing-extensions==4.15.0 \
--hash=sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466 \
--hash=sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548
# via pyright
win32-setctime==1.2.0 ; sys_platform == 'win32' \
--hash=sha256:95d644c4e708aba81dc3704a116d8cbc974d70b3bdb8be1d150e36be6e9d1390 \
--hash=sha256:ae1fdf948f5640aae05c511ade119313fb6a30d7eabe25fef9764dca5873c4c0
# via loguru
-21
@@ -1,21 +0,0 @@
from .config import AppConfig, GeneralConfig, CredentialConfig, load_config
from .core import LrcManager
from .models import CacheStatus, TrackMeta, LyricResult
from .lrc import LRCData, LyricLine
from .fetchers import FetcherMethodType
from .utils import get_sidecar_path
__all__ = [
"AppConfig",
"GeneralConfig",
"CredentialConfig",
"load_config",
"LrcManager",
"CacheStatus",
"TrackMeta",
"LRCData",
"LyricLine",
"LyricResult",
"FetcherMethodType",
"get_sidecar_path",
]
-12
@@ -1,12 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-06 08:19:54
Description: The entry point.
"""
from __future__ import annotations
from .cli import run
if __name__ == "__main__":
run()
-35
@@ -1,35 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-06 08:21:01
Description: Credential authenticators for third-party provider APIs
"""
from __future__ import annotations
from .qqmusic import QQMusicAuthenticator
from .base import BaseAuthenticator
from .spotify import SpotifyAuthenticator
from .musixmatch import MusixmatchAuthenticator
from .dummy import DummyAuthenticator
from ..config import AppConfig
__all__ = [
"BaseAuthenticator",
"SpotifyAuthenticator",
"MusixmatchAuthenticator",
"QQMusicAuthenticator",
"DummyAuthenticator",
]
def create_authenticators(cache, config: AppConfig) -> dict[str, BaseAuthenticator]:
"""Factory function to create authenticators with injected config."""
return {
"dummy": DummyAuthenticator(cache, config.credentials, config.general),
"spotify": SpotifyAuthenticator(cache, config.credentials, config.general),
"musixmatch": MusixmatchAuthenticator(
cache, config.credentials, config.general
),
"qqmusic": QQMusicAuthenticator(cache, config.credentials, config.general),
}
-44
@@ -1,44 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-05 03:18:14
Description: Base class for credential authenticators.
"""
from __future__ import annotations
from abc import ABC, abstractmethod
from typing import Optional
from ..cache import CacheEngine
from ..config import CredentialConfig, GeneralConfig
class BaseAuthenticator(ABC):
"""Manages obtaining, caching, and refreshing a credential for one provider."""
def __init__(
self, cache: CacheEngine, credentials: CredentialConfig, general: GeneralConfig
) -> None:
self._cache = cache
self._credentials = credentials
self._general = general
@property
@abstractmethod
def name(self) -> str: ...
def is_configured(self) -> bool:
"""True if the prerequisite config (e.g. env var) is present.
Default is True — authenticators that can obtain credentials anonymously
should not override this.
"""
return True
@abstractmethod
async def authenticate(self) -> Optional[str]:
"""Return current valid credential string, refreshing if needed.
Returns None if unavailable (misconfigured or network failure).
"""
...
-21
@@ -1,21 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-05 03:36:44
Description: A dummy authenticator that does nothing and always reports as configured.
"""
from __future__ import annotations
from .base import BaseAuthenticator
class DummyAuthenticator(BaseAuthenticator):
@property
def name(self) -> str:
return "dummy"
def is_configured(self) -> bool:
return True
async def authenticate(self) -> str | None:
return None
-168
@@ -1,168 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-05 03:27:56
Description: Musixmatch authenticator — token management, 401 retry, and cooldown.
"""
from __future__ import annotations
import time
from typing import Optional
from urllib.parse import urlencode
import httpx
from loguru import logger
from .base import BaseAuthenticator
from ..cache import CacheEngine
from ..config import CredentialConfig, GeneralConfig, MUSIXMATCH_COOLDOWN_MS
_MUSIXMATCH_TOKEN_URL = "https://apic-desktop.musixmatch.com/ws/1.1/token.get"
_MXM_HEADERS = {"Cookie": "x-mxm-token-guid="}
_MXM_BASE_PARAMS = {
"format": "json",
"app_id": "web-desktop-app-v1.0",
}
def _new_mxm_client(timeout: float) -> httpx.AsyncClient:
"""Build Musixmatch client without httpx default User-Agent header."""
client = httpx.AsyncClient(timeout=timeout, headers=_MXM_HEADERS)
client.headers.pop("User-Agent", None)
return client
class MusixmatchAuthenticator(BaseAuthenticator):
def __init__(
self, cache: CacheEngine, credentials: CredentialConfig, general: GeneralConfig
) -> None:
super().__init__(cache, credentials, general)
self._cached_token: Optional[str] = None
self._cooldown_until_ms: int = 0
@property
def name(self) -> str:
return "musixmatch"
def is_configured(self) -> bool:
return True # anonymous token always available
def is_cooldown(self) -> bool:
"""Return True if Musixmatch requests are blocked due to repeated auth failure."""
now_ms = int(time.time() * 1000)
if self._cooldown_until_ms > now_ms:
return True
data = self._cache.get_credential("musixmatch_cooldown")
if data:
until = data.get("until_ms", 0)
if until > now_ms:
self._cooldown_until_ms = until
return True
return False
def _set_cooldown(self) -> None:
now_ms = int(time.time() * 1000)
until_ms = now_ms + MUSIXMATCH_COOLDOWN_MS
self._cooldown_until_ms = until_ms
self._cache.set_credential(
"musixmatch_cooldown",
{"until_ms": until_ms},
expires_at_ms=until_ms,
)
logger.warning("Musixmatch: token unavailable, entering cooldown")
def _invalidate_token(self) -> None:
"""Discard the current token from memory and DB."""
self._cached_token = None
# Store with an already-expired timestamp so get_credential returns None
self._cache.set_credential("musixmatch", {"token": ""}, expires_at_ms=1)
async def _fetch_new_token(self) -> Optional[str]:
"""Call token.get and persist the result. Returns token string or None."""
params = {
**_MXM_BASE_PARAMS,
"user_language": "en",
"t": str(int(time.time() * 1000)),
}
url = f"{_MUSIXMATCH_TOKEN_URL}?{urlencode(params)}"
logger.debug("Musixmatch: fetching anonymous token")
try:
async with _new_mxm_client(self._general.http_timeout) as client:
resp = await client.get(url)
resp.raise_for_status()
data = resp.json()
except Exception as e:
logger.warning(f"Musixmatch: token fetch failed: {e}")
return None
token = (
data.get("message", {}).get("body", {}).get("user_token")
if isinstance(data, dict)
else None
)
if not isinstance(token, str) or not token:
logger.warning("Musixmatch: unexpected token.get response structure")
return None
self._cached_token = token
# No expiry — token is valid until we get a 401
self._cache.set_credential("musixmatch", {"token": token}, expires_at_ms=None)
logger.debug("Musixmatch: obtained anonymous token")
return token
async def _get_token(self) -> Optional[str]:
"""Return a valid token: env var > memory > DB > fresh fetch."""
if self._credentials.musixmatch_usertoken:
return self._credentials.musixmatch_usertoken
if self._cached_token:
return self._cached_token
data = self._cache.get_credential("musixmatch")
if data and isinstance(data.get("token"), str) and data["token"]:
self._cached_token = data["token"]
return self._cached_token
return await self._fetch_new_token()
async def authenticate(self) -> Optional[str]:
if self.is_cooldown():
logger.debug("Musixmatch: authenticate called during cooldown")
return None
return await self._get_token()
async def get_json(self, url_base: str, params: dict) -> Optional[dict]:
"""Authenticated GET to a Musixmatch endpoint.
- Injects format, app_id, and usertoken automatically.
- On 401: invalidates token, fetches a fresh one, retries once.
- On failed token fetch (initial or retry): sets cooldown, returns None.
- On network / HTTP error: raises (callers map this to NETWORK_ERROR).
- Returns None if cooldown is active.
"""
if self.is_cooldown():
logger.debug("Musixmatch: request blocked by cooldown")
return None
token = await self._get_token()
if not token:
self._set_cooldown()
return None
async with _new_mxm_client(self._general.http_timeout) as client:
url = f"{url_base}?{urlencode({**_MXM_BASE_PARAMS, **params, 'usertoken': token})}"
resp = await client.get(url)
if resp.status_code == 401:
logger.debug("Musixmatch: 401 received, refreshing token")
self._invalidate_token()
token = await self._fetch_new_token()
if not token:
self._set_cooldown()
return None
url = f"{url_base}?{urlencode({**_MXM_BASE_PARAMS, **params, 'usertoken': token})}"
resp = await client.get(url)
resp.raise_for_status()
return resp.json()
-74
@@ -1,74 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-05 03:47:30
Description: QQ Music API authenticator - currently only a proxy.
"""
from __future__ import annotations
from typing import Optional
import httpx
from loguru import logger
from .base import BaseAuthenticator
from ..cache import CacheEngine
from ..config import CredentialConfig, GeneralConfig
class QQMusicAuthenticator(BaseAuthenticator):
def __init__(
self, cache: CacheEngine, credentials: CredentialConfig, general: GeneralConfig
) -> None:
super().__init__(cache, credentials, general)
@property
def name(self) -> str:
return "qqmusic"
def is_configured(self) -> bool:
return bool(self._credentials.qq_music_api_url)
async def authenticate(self) -> Optional[str]:
return self._credentials.qq_music_api_url.rstrip("/") or None
async def search(self, keyword: str, num: int) -> dict | None:
"""Call qq-music-api search endpoint and return raw JSON payload."""
base_url = await self.authenticate()
if not base_url:
return None
try:
async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
resp = await client.get(
f"{base_url}/api/search",
params={"keyword": keyword, "type": "song", "num": num},
)
resp.raise_for_status()
data = resp.json()
if not isinstance(data, dict):
return None
return data
except Exception as e:
logger.error(f"QQMusic: search request failed: {e}")
return None
async def get_lyric(self, mid: str) -> dict | None:
"""Call qq-music-api lyric endpoint and return raw JSON payload."""
base_url = await self.authenticate()
if not base_url:
return None
try:
async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
resp = await client.get(
f"{base_url}/api/lyric",
params={"mid": mid},
)
resp.raise_for_status()
data = resp.json()
if not isinstance(data, dict):
return None
return data
except Exception as e:
logger.error(f"QQMusic: lyric request failed for mid={mid}: {e}")
return None
-245
@@ -1,245 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-05 03:18:14
Description: Spotify authenticator — TOTP-based access token via SP_DC cookie.
"""
from __future__ import annotations
import hashlib
import hmac
import struct
import time
from typing import Optional, Tuple
import httpx
from loguru import logger
from .base import BaseAuthenticator
from ..cache import CacheEngine
from ..config import CredentialConfig, GeneralConfig, UA_BROWSER
_SPOTIFY_TOKEN_URL = "https://open.spotify.com/api/token"
_SPOTIFY_SERVER_TIME_URL = "https://open.spotify.com/api/server-time"
_SPOTIFY_LYRICS_URL = "https://spclient.wg.spotify.com/color-lyrics/v2/track/"
_SPOTIFY_SECRET_URL = (
"https://raw.githubusercontent.com/xyloflake/spot-secrets-go"
"/refs/heads/main/secrets/secrets.json"
)
SPOTIFY_BASE_HEADERS = {
"User-Agent": UA_BROWSER,
"Referer": "https://open.spotify.com/",
"Origin": "https://open.spotify.com",
"App-Platform": "WebPlayer",
"Spotify-App-Version": "1.2.88.21.g8e037c8f",
}
class SpotifyAuthenticator(BaseAuthenticator):
def __init__(
self, cache: CacheEngine, credentials: CredentialConfig, general: GeneralConfig
) -> None:
super().__init__(cache, credentials, general)
self._cached_secret: Optional[Tuple[str, int]] = None
self._cached_token: Optional[str] = None
self._token_expires_at: float = 0.0
@property
def name(self) -> str:
return "spotify"
def is_configured(self) -> bool:
return bool(self._credentials.spotify_sp_dc)
@staticmethod
def _generate_totp(server_time_s: int, secret: str) -> str:
counter = server_time_s // 30
counter_bytes = struct.pack(">Q", counter)
mac = hmac.new(secret.encode(), counter_bytes, hashlib.sha1).digest()
offset = mac[-1] & 0x0F
binary_code = (
(mac[offset] & 0x7F) << 24
| (mac[offset + 1] & 0xFF) << 16
| (mac[offset + 2] & 0xFF) << 8
| (mac[offset + 3] & 0xFF)
)
return str(binary_code % (10**6)).zfill(6)
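The truncation above is the standard HOTP dynamic truncation (RFC 4226) applied to a 30-second counter. A standalone copy can be checked against the first RFC 4226 test vector:

```python
import hashlib
import hmac
import struct


def generate_totp(server_time_s: int, secret: str, period: int = 30) -> str:
    """HMAC-SHA1 over the big-endian period counter, then dynamic truncation."""
    counter_bytes = struct.pack(">Q", server_time_s // period)
    mac = hmac.new(secret.encode(), counter_bytes, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the 4-byte window
    code = (
        (mac[offset] & 0x7F) << 24
        | mac[offset + 1] << 16
        | mac[offset + 2] << 8
        | mac[offset + 3]
    )
    return str(code % 10**6).zfill(6)


# Counter 0 with the RFC 4226 ASCII secret yields the documented HOTP value.
assert generate_totp(0, "12345678901234567890") == "755224"
```

Any `server_time_s` in the same 30-second window produces the same code, which is why the method fetches Spotify's server time rather than trusting the local clock.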
def _load_cached_token(self) -> Optional[str]:
data = self._cache.get_credential("spotify")
if not data:
return None
expires_ms = data.get("accessTokenExpirationTimestampMs", 0)
if expires_ms <= int(time.time() * 1000):
logger.debug("Spotify: persisted token expired")
return None
token = data.get("accessToken", "")
if not token:
return None
self._cached_token = token
self._token_expires_at = expires_ms / 1000.0
logger.debug("Spotify: loaded token from DB cache")
return token
def _save_token(self, body: dict) -> None:
expires_ms = body.get("accessTokenExpirationTimestampMs")
self._cache.set_credential("spotify", body, expires_ms)
logger.debug("Spotify: token saved to DB cache")
async def _get_server_time(self, client: httpx.AsyncClient) -> Optional[int]:
try:
res = await client.get(
_SPOTIFY_SERVER_TIME_URL, timeout=self._general.http_timeout
)
res.raise_for_status()
data = res.json()
if not isinstance(data, dict) or "serverTime" not in data:
logger.error(f"Spotify: unexpected server-time response: {data}")
return None
server_time = int(data["serverTime"])  # may arrive as a string; TOTP math needs an int
logger.debug(f"Spotify: server time = {server_time}")
return server_time
except Exception as e:
logger.error(f"Spotify: failed to fetch server time: {e}")
return None
async def _get_secret(self, client: httpx.AsyncClient) -> Optional[Tuple[str, int]]:
if self._cached_secret is not None:
logger.debug("Spotify: using cached TOTP secret")
return self._cached_secret
try:
res = await client.get(
_SPOTIFY_SECRET_URL, timeout=self._general.http_timeout
)
res.raise_for_status()
data = res.json()
if not isinstance(data, list) or len(data) == 0:
logger.error(
f"Spotify: unexpected secrets response (type={type(data).__name__})"
)
return None
last = data[-1]
if "secret" not in last or "version" not in last:
logger.error(f"Spotify: malformed secret entry: {list(last.keys())}")
return None
secret_raw = last["secret"]
version = last["version"]
secret = "".join(
str(ord(c) ^ ((i % 33) + 9)) for i, c in enumerate(secret_raw)
)
logger.debug(f"Spotify: decoded secret v{version} (len={len(secret)})")
self._cached_secret = (secret, version)
return self._cached_secret
except Exception as e:
logger.error(f"Spotify: failed to fetch secret: {e}")
return None
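The decode step in `_get_secret` XORs each character's code point with a position-dependent byte and concatenates the decimal results. A tiny sketch with a made-up input (not a real Spotify secret):

```python
def decode_secret(secret_raw: str) -> str:
    """Per-index XOR with ((i % 33) + 9), decimal results concatenated."""
    return "".join(str(ord(c) ^ ((i % 33) + 9)) for i, c in enumerate(secret_raw))


# Hypothetical input: 'a' (97) ^ 9 -> 104, 'b' (98) ^ 10 -> 104
assert decode_secret("ab") == "104104"
```

Note the output is a digit string, not bytes; it is fed directly to HMAC as the TOTP secret after `.encode()`.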
async def authenticate(self) -> Optional[str]:
if self._cached_token and time.time() < self._token_expires_at - 30:
logger.debug("Spotify: using in-memory cached token")
return self._cached_token
db_token = self._load_cached_token()
if db_token and time.time() < self._token_expires_at - 30:
return db_token
if not self._credentials.spotify_sp_dc:
logger.error("Spotify: spotify_sp_dc not configured — cannot authenticate")
return None
headers = {
"Accept": "*/*",
"Cookie": f"sp_dc={self._credentials.spotify_sp_dc}",
**SPOTIFY_BASE_HEADERS,
}
async with httpx.AsyncClient(headers=headers) as client:
server_time = await self._get_server_time(client)
if server_time is None:
return None
secret_data = await self._get_secret(client)
if secret_data is None:
return None
secret, version = secret_data
totp = self._generate_totp(server_time, secret)
logger.debug(f"Spotify: generated TOTP v{version}: {totp}")
params = {
"reason": "init",
"productType": "web-player",
"totp": totp,
"totpVer": str(version),
"totpServer": totp,
}
try:
res = await client.get(
_SPOTIFY_TOKEN_URL,
params=params,
timeout=self._general.http_timeout,
)
if res.status_code != 200:
logger.error(f"Spotify: token request returned {res.status_code}")
return None
body = res.json()
if not isinstance(body, dict) or "accessToken" not in body:
logger.error(
f"Spotify: unexpected token response keys: {list(body.keys()) if isinstance(body, dict) else type(body).__name__}"
)
return None
token = body["accessToken"]
if body.get("isAnonymous", False):
logger.warning(
"Spotify: received anonymous token — SP_DC may be invalid"
)
expires_ms = body.get("accessTokenExpirationTimestampMs", 0)
if expires_ms and expires_ms > int(time.time() * 1000):
self._token_expires_at = expires_ms / 1000.0
else:
logger.warning("Spotify: token expiry missing or invalid")
self._token_expires_at = time.time() + 3600
self._cached_token = token
self._save_token(body)
logger.debug("Spotify: obtained access token")
return token
except Exception as e:
logger.error(f"Spotify: token request failed: {e}")
return None
async def get_lyrics(self, track_id: str) -> dict | None:
"""Fetch raw lyrics JSON payload for a Spotify track."""
token = await self.authenticate()
if not token:
return None
url = (
f"{_SPOTIFY_LYRICS_URL}{track_id}"
"?format=json&vocalRemoval=false&market=from_token"
)
headers = {
"Accept": "application/json",
"Authorization": f"Bearer {token}",
**SPOTIFY_BASE_HEADERS,
}
try:
async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
res = await client.get(url, headers=headers)
if res.status_code == 404:
return None
if res.status_code != 200:
logger.error(f"Spotify: lyrics API returned {res.status_code}")
return None
data = res.json()
if not isinstance(data, dict):
return None
return data
except Exception as e:
logger.error(f"Spotify: lyrics fetch failed: {e}")
return None
-701
@@ -1,701 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 10:18:03
Description: SQLite-based lyric cache with per-source slot rows, TTL expiration,
and schema migrations (confidence versioning + slot migration).
"""
from __future__ import annotations
import json
import sqlite3
import hashlib
import time
from typing import Optional
from loguru import logger
from .lrc import LRCData
from .normalize import normalize_for_match as _normalize_for_match
from .config import (
DURATION_TOLERANCE_MS,
LEGACY_CONFIDENCE,
CONFIDENCE_ALGO_VERSION,
SLOT_SYNCED,
SLOT_UNSYNCED,
)
from .models import TrackMeta, LyricResult, CacheStatus
from .utils import is_positive_status, select_best_positive
_ALL_SLOTS = (SLOT_SYNCED, SLOT_UNSYNCED)
# Fixed WHERE clause for exact track matching. Column names are hardcoded
# literals; only the values come from user-supplied params — no injection risk.
_TRACK_WHERE = (
"(? IS NULL OR artist = ?) AND "
"(? IS NULL OR title = ?) AND "
"(? IS NULL OR album = ?)"
)
def _track_where_params(track: TrackMeta) -> list:
return [
track.artist,
track.artist,
track.title,
track.title,
track.album,
track.album,
]
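The `(? IS NULL OR col = ?)` pattern used by `_TRACK_WHERE` can be seen in isolation with an in-memory database: binding NULL for a field disables that field's filter instead of matching nothing. The table and data below are made up purely for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (artist TEXT, title TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?)",
    [("A", "x"), ("B", "x"), ("A", "y")],
)
where = "(? IS NULL OR artist = ?) AND (? IS NULL OR title = ?)"
# Filter by title only: the artist params are NULL, so any artist matches.
rows = conn.execute(
    f"SELECT artist FROM t WHERE {where} ORDER BY artist",
    (None, None, "x", "x"),
).fetchall()
# → [('A',), ('B',)]
```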
def _generate_key(track: TrackMeta, source: str) -> str:
"""Generate a unique cache key from track metadata and source.
The key is scoped by source so that different fetchers can cache
independently for the same track (e.g. Spotify synced vs Netease unsynced).
"""
# Spotify tracks always use their track ID as the primary identifier
if track.trackid and source == "spotify":
return f"spotify:{track.trackid}"
parts = []
if track.artist:
parts.append(track.artist)
if track.title:
parts.append(track.title)
if track.album:
parts.append(track.album)
if track.length:
parts.append(str(track.length))
# Fall back to URL for local files
if not parts and track.url:
return f"{source}:url:{track.url}"
if not parts:
raise ValueError("Insufficient metadata to generate cache key")
raw = "|".join(parts)
digest = hashlib.sha256(raw.encode()).hexdigest()
return f"{source}:{digest}"
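The hashing scheme in `_generate_key` can be restated as a simplified standalone helper; this sketch omits the Spotify track-ID and local-URL fast paths and is not the module's actual function, but shows the core property: the digest depends only on the metadata, while the source only scopes the prefix.

```python
import hashlib


def make_cache_key(source: str, *fields) -> str:
    """Join the non-empty metadata fields and scope the SHA-256 digest by source."""
    parts = [f for f in fields if f]
    if not parts:
        raise ValueError("Insufficient metadata to generate cache key")
    digest = hashlib.sha256("|".join(parts).encode()).hexdigest()
    return f"{source}:{digest}"
```

Two sources caching the same track therefore share a digest but never collide on keys.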
class CacheEngine:
def __init__(self, db_path: str):
self.db_path = db_path
self._init_db()
def _connect(self) -> sqlite3.Connection:
conn = sqlite3.connect(self.db_path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA busy_timeout=5000")
return conn
def _init_db(self) -> None:
"""Create cache tables and run one-time slot/cache migrations."""
with self._connect() as conn:
conn.execute("""
CREATE TABLE IF NOT EXISTS credentials (
name TEXT PRIMARY KEY,
data TEXT NOT NULL,
expires_at INTEGER
)
""")
cache_exists = conn.execute(
"SELECT 1 FROM sqlite_master WHERE type='table' AND name='cache'"
).fetchone()
if not cache_exists:
self._create_cache_table(conn)
conn.commit()
return
cols = {r[1] for r in conn.execute("PRAGMA table_info(cache)").fetchall()}
if "positive_kind" not in cols:
# Normalize legacy shape first so migration SQL can safely read all columns.
if "length" not in cols:
conn.execute("ALTER TABLE cache ADD COLUMN length INTEGER")
if "confidence" not in cols:
conn.execute("ALTER TABLE cache ADD COLUMN confidence REAL")
if "confidence_version" not in cols:
conn.execute(
"ALTER TABLE cache ADD COLUMN confidence_version INTEGER"
)
self._migrate_legacy_to_slot_cache(conn)
cols = {
r[1] for r in conn.execute("PRAGMA table_info(cache)").fetchall()
}
if "confidence_version" not in cols:
conn.execute("ALTER TABLE cache ADD COLUMN confidence_version INTEGER")
conn.execute(
"""
UPDATE cache
SET confidence = MIN(100.0, COALESCE(confidence, ?) + 10.0)
WHERE status = ? AND positive_kind = ?
""",
(
LEGACY_CONFIDENCE,
CacheStatus.SUCCESS_UNSYNCED.value,
SLOT_UNSYNCED,
),
)
conn.execute(
"UPDATE cache SET confidence_version = ? WHERE confidence_version IS NULL",
(CONFIDENCE_ALGO_VERSION,),
)
conn.commit()
def _create_cache_table(self, conn: sqlite3.Connection) -> None:
conn.execute("""
CREATE TABLE IF NOT EXISTS cache (
key TEXT NOT NULL,
positive_kind TEXT NOT NULL,
source TEXT NOT NULL,
status TEXT NOT NULL,
lyrics TEXT,
created_at INTEGER NOT NULL,
expires_at INTEGER,
artist TEXT,
title TEXT,
album TEXT,
length INTEGER,
confidence REAL,
confidence_version INTEGER,
PRIMARY KEY (key, positive_kind)
)
""")
def _migrate_legacy_to_slot_cache(self, conn: sqlite3.Connection) -> None:
"""One-time migration from single-row cache to slot-scoped cache rows."""
conn.execute("ALTER TABLE cache RENAME TO cache_legacy")
self._create_cache_table(conn)
positive_statuses = (
CacheStatus.SUCCESS_SYNCED.value,
CacheStatus.SUCCESS_UNSYNCED.value,
)
negative_statuses = (
CacheStatus.NOT_FOUND.value,
CacheStatus.NETWORK_ERROR.value,
)
conn.execute(
"""
INSERT INTO cache (
key, positive_kind, source, status, lyrics, created_at, expires_at,
artist, title, album, length, confidence, confidence_version
)
SELECT
key,
CASE
WHEN status = ? THEN ?
WHEN status = ? THEN ?
ELSE ?
END,
source, status, lyrics, created_at, expires_at, artist, title, album, length,
CASE
WHEN status = ? THEN MIN(100.0, COALESCE(confidence, ?) + 10.0)
WHEN status = ? THEN COALESCE(confidence, ?)
ELSE COALESCE(confidence, 0.0)
END,
COALESCE(confidence_version, ?)
FROM cache_legacy
WHERE status IN (?, ?)
""",
(
CacheStatus.SUCCESS_SYNCED.value,
SLOT_SYNCED,
CacheStatus.SUCCESS_UNSYNCED.value,
SLOT_UNSYNCED,
SLOT_SYNCED,
CacheStatus.SUCCESS_UNSYNCED.value,
LEGACY_CONFIDENCE,
CacheStatus.SUCCESS_SYNCED.value,
LEGACY_CONFIDENCE,
CONFIDENCE_ALGO_VERSION,
positive_statuses[0],
positive_statuses[1],
),
)
for slot in _ALL_SLOTS:
conn.execute(
"""
INSERT INTO cache (
key, positive_kind, source, status, lyrics, created_at, expires_at,
artist, title, album, length, confidence, confidence_version
)
SELECT
key, ?, source, status, lyrics, created_at, expires_at, artist, title,
album, length,
COALESCE(confidence, 0.0),
COALESCE(confidence_version, ?)
FROM cache_legacy
WHERE status IN (?, ?)
""",
(
slot,
CONFIDENCE_ALGO_VERSION,
negative_statuses[0],
negative_statuses[1],
),
)
conn.execute("DROP TABLE cache_legacy")
@staticmethod
def _slot_for_status(status: CacheStatus) -> str:
if status == CacheStatus.SUCCESS_SYNCED:
return SLOT_SYNCED
if status == CacheStatus.SUCCESS_UNSYNCED:
return SLOT_UNSYNCED
raise ValueError(f"Status {status.value} requires explicit slot")
# Read
def get_all(self, track: TrackMeta, source: str) -> list[LyricResult]:
"""Return all non-expired cached slot rows for track/source."""
try:
key = _generate_key(track, source)
except ValueError:
return []
now = int(time.time())
with self._connect() as conn:
conn.execute(
"DELETE FROM cache WHERE key = ? AND expires_at IS NOT NULL AND expires_at < ?",
(key, now),
)
conn.commit()
rows = conn.execute(
"""
SELECT status, lyrics, source, expires_at, length, confidence
FROM cache
WHERE key = ? AND (expires_at IS NULL OR expires_at > ?)
ORDER BY positive_kind
""",
(key, now),
).fetchall()
if not rows:
logger.debug(f"Cache miss: {source} / {track.display_name()}")
return []
# Backfill missing length for all slot rows under the same key.
if track.length is not None:
conn.execute(
"UPDATE cache SET length = ? WHERE key = ? AND length IS NULL",
(track.length, key),
)
conn.commit()
results: list[LyricResult] = []
for status_str, lyrics, src, expires_at, _cached_length, confidence in rows:
remaining = expires_at - now if expires_at else None
status = CacheStatus(status_str)
if confidence is None:
if is_positive_status(status):
confidence = LEGACY_CONFIDENCE
else:
confidence = 0.0
results.append(
LyricResult(
status=status,
lyrics=LRCData(lyrics) if lyrics else None,
source=src,
ttl=remaining,
confidence=confidence,
)
)
return results
def get_best(self, track: TrackMeta, sources: list[str]) -> Optional[LyricResult]:
"""Return best positive cached result across sources.
Negative statuses are ignored by ranking.
"""
positives: list[LyricResult] = []
for src in sources:
rows = self.get_all(track, src)
positives.extend(r for r in rows if is_positive_status(r.status))
return select_best_positive(positives, allow_unsynced=True)
# Write
def set(
self,
track: TrackMeta,
source: str,
result: LyricResult,
ttl_seconds: Optional[int] = None,
positive_kind: Optional[str] = None,
) -> None:
"""Store a lyric result in the cache.
New/updated rows are tagged with the current confidence algorithm
version so future migrations can be applied deterministically.
"""
try:
key = _generate_key(track, source)
except ValueError:
logger.warning("Cannot cache: insufficient track metadata.")
return
now = int(time.time())
expires_at = now + ttl_seconds if ttl_seconds else None
kinds: list[str]
if positive_kind is not None:
kinds = [positive_kind]
elif result.status in (
CacheStatus.SUCCESS_SYNCED,
CacheStatus.SUCCESS_UNSYNCED,
):
kinds = [self._slot_for_status(result.status)]
else:
# Convenience for callers that still pass a single negative result.
kinds = [SLOT_SYNCED, SLOT_UNSYNCED]
with self._connect() as conn:
for kind in kinds:
conn.execute(
"""INSERT OR REPLACE INTO cache
(key, positive_kind, source, status, lyrics, created_at, expires_at,
artist, title, album, length, confidence, confidence_version)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(
key,
kind,
source,
result.status.value,
str(result.lyrics) if result.lyrics else None,
now,
expires_at,
track.artist,
track.title,
track.album,
track.length,
result.confidence,
CONFIDENCE_ALGO_VERSION,
),
)
conn.commit()
logger.debug(
f"Cached: {source} / {track.display_name()} "
f"[{result.status.value}, ttl={ttl_seconds}s]"
)
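The slot-scoped upsert in `set()` relies on the composite primary key `(key, positive_kind)`: `INSERT OR REPLACE` overwrites only the row for the matching slot and leaves the other slot for the same track untouched. A minimal in-memory demo, with the schema trimmed to the relevant columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE cache (
        key TEXT NOT NULL,
        positive_kind TEXT NOT NULL,
        status TEXT,
        PRIMARY KEY (key, positive_kind)
    )"""
)
up = "INSERT OR REPLACE INTO cache VALUES (?, ?, ?)"
conn.execute(up, ("k1", "synced", "SUCCESS_SYNCED"))
conn.execute(up, ("k1", "unsynced", "NOT_FOUND"))
conn.execute(up, ("k1", "unsynced", "SUCCESS_UNSYNCED"))  # replaces only this slot
rows = conn.execute(
    "SELECT positive_kind, status FROM cache ORDER BY positive_kind"
).fetchall()
# → [('synced', 'SUCCESS_SYNCED'), ('unsynced', 'SUCCESS_UNSYNCED')]
```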
# Delete
def clear_all(self) -> None:
"""Remove every entry from the cache."""
with self._connect() as conn:
conn.execute("DELETE FROM cache")
conn.commit()
logger.info("Cache cleared.")
def clear_track(self, track: TrackMeta) -> None:
"""Remove all cached entries (every source) for a single track."""
if not self._track_has_meta(track):
logger.info(f"No cache entries found for {track.display_name()}.")
return
with self._connect() as conn:
cur = conn.execute(
f"DELETE FROM cache WHERE {_TRACK_WHERE}",
_track_where_params(track),
)
conn.commit()
if cur.rowcount:
logger.info(
f"Cleared {cur.rowcount} cache entries for {track.display_name()}."
)
else:
logger.info(f"No cache entries found for {track.display_name()}.")
def prune(self) -> int:
"""Remove all expired entries. Returns the number of rows deleted."""
with self._connect() as conn:
cur = conn.execute(
"DELETE FROM cache WHERE expires_at IS NOT NULL AND expires_at < ?",
(int(time.time()),),
)
conn.commit()
count = cur.rowcount
logger.info(f"Pruned {count} expired cache entries.")
return count
@staticmethod
def _track_has_meta(track: TrackMeta) -> bool:
return bool(track.artist or track.title or track.album)
# Exact cross-source search
def find_best_positive(
self, track: TrackMeta, status: CacheStatus
) -> Optional[LyricResult]:
"""Find the best positive (synced/unsynced) cache entry for track.
Uses exact metadata match (artist + title + album) across all sources.
Returns the highest-confidence entry, or None.
"""
if not self._track_has_meta(track):
return None
now = int(time.time())
with self._connect() as conn:
conn.row_factory = sqlite3.Row
rows = conn.execute(
f"SELECT status, lyrics, source, confidence FROM cache"
f" WHERE {_TRACK_WHERE}"
" AND status = ?"
" AND positive_kind = ?"
" AND (expires_at IS NULL OR expires_at > ?)"
" ORDER BY COALESCE(confidence, ?) DESC,"
" CASE status WHEN ? THEN 0 ELSE 1 END,"
" created_at DESC",
_track_where_params(track)
+ [
status.value,
self._slot_for_status(status),
now,
LEGACY_CONFIDENCE,
CacheStatus.SUCCESS_SYNCED.value,
],
).fetchall()
if not rows:
return None
row = dict(rows[0])
confidence = row["confidence"]
if confidence is None:
confidence = LEGACY_CONFIDENCE
return LyricResult(
status=CacheStatus(row["status"]),
lyrics=LRCData(row["lyrics"]) if row["lyrics"] else None,
source="cache-search",
confidence=confidence,
)
# Fuzzy search
def search_by_meta(
self,
title: Optional[str],
length: Optional[int] = None,
) -> list[dict]:
"""Search cache for lyrics matching title with fuzzy normalization.
Artist is intentionally not filtered here — artist names can differ
significantly across languages (e.g. Japanese romanization vs. kanji),
making hard artist filtering unreliable for cross-language queries.
Ignores artist, album and source. Only returns positive results
(synced/unsynced) that have not expired. When length is provided,
filters by duration tolerance and sorts by closest match.
"""
if not title:
return []
now = int(time.time())
with self._connect() as conn:
conn.row_factory = sqlite3.Row
rows = conn.execute(
"""SELECT * FROM cache
WHERE status IN (?, ?)
AND (expires_at IS NULL OR expires_at > ?)""",
(
CacheStatus.SUCCESS_SYNCED.value,
CacheStatus.SUCCESS_UNSYNCED.value,
now,
),
).fetchall()
norm_title = _normalize_for_match(title)
matches: list[dict] = []
for row in rows:
row_dict = dict(row)
# Title must match
row_title = row_dict.get("title") or ""
if _normalize_for_match(row_title) != norm_title:
continue
matches.append(row_dict)
# Duration filtering
if length is not None and matches:
scored = []
for m in matches:
row_len = m.get("length")
if row_len is not None:
diff = abs(row_len - length)
if diff <= DURATION_TOLERANCE_MS:
scored.append((diff, m))
else:
# No duration info in cache — still a candidate but lower priority
scored.append((DURATION_TOLERANCE_MS, m))
scored.sort(
key=lambda x: (
x[0],
-(x[1].get("confidence") or 0),
x[1].get("status") != CacheStatus.SUCCESS_SYNCED.value,
-(x[1].get("created_at") or 0),
)
)
matches = [m for _, m in scored]
return matches
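The duration scoring above can be restated as a standalone helper. The tolerance value and dict shape here are illustrative (the real code reads `DURATION_TOLERANCE_MS` from config and applies further tie-breaks on status and recency): candidates within the tolerance are ordered by closeness, and rows with unknown duration are kept but penalized to the tolerance boundary.

```python
def rank_by_duration(matches, length, tolerance_ms=3000):
    """Order candidate rows by duration closeness; drop rows too far off."""
    scored = []
    for m in matches:
        row_len = m.get("length")
        if row_len is None:
            # No duration info: still a candidate, but at lowest priority.
            scored.append((tolerance_ms, m))
        elif abs(row_len - length) <= tolerance_ms:
            scored.append((abs(row_len - length), m))
    scored.sort(key=lambda x: (x[0], -(x[1].get("confidence") or 0)))
    return [m for _, m in scored]
```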
# Update
def update_confidence(
self,
track: TrackMeta,
confidence: float,
source: str,
) -> int:
"""Update confidence for a specific source's cache entry matching track.
Returns the number of rows updated.
"""
if not self._track_has_meta(track):
return 0
with self._connect() as conn:
cur = conn.execute(
f"UPDATE cache SET confidence = ? WHERE {_TRACK_WHERE} AND source = ?",
[confidence] + _track_where_params(track) + [source],
)
conn.commit()
return cur.rowcount
# Query / inspect
def query_track(self, track: TrackMeta) -> list[dict]:
"""Return all cached rows for a given track (across all sources)."""
if not self._track_has_meta(track):
return []
with self._connect() as conn:
conn.row_factory = sqlite3.Row
return [
dict(r)
for r in conn.execute(
f"SELECT * FROM cache WHERE {_TRACK_WHERE}",
_track_where_params(track),
).fetchall()
]
# Credentials
def get_credential(self, name: str) -> Optional[dict]:
"""Return cached credential data if present and not expired."""
now_ms = int(time.time() * 1000)
with self._connect() as conn:
conn.row_factory = sqlite3.Row
row = conn.execute(
"SELECT data FROM credentials WHERE name = ? AND (expires_at IS NULL OR expires_at > ?)",
(name, now_ms),
).fetchone()
if row is None:
return None
try:
return json.loads(row["data"])
except (json.JSONDecodeError, KeyError):
return None
def set_credential(
self, name: str, data: dict, expires_at_ms: Optional[int] = None
) -> None:
"""Persist credential data, optionally with an expiry timestamp (Unix ms)."""
with self._connect() as conn:
conn.execute(
"INSERT OR REPLACE INTO credentials (name, data, expires_at) VALUES (?, ?, ?)",
(name, json.dumps(data), expires_at_ms),
)
conn.commit()
def query_all(self) -> list[dict]:
"""Return every row in the cache table."""
with self._connect() as conn:
conn.row_factory = sqlite3.Row
return [dict(r) for r in conn.execute("SELECT * FROM cache").fetchall()]
def stats(self) -> dict:
"""Return aggregate cache statistics."""
now = int(time.time())
with self._connect() as conn:
total = conn.execute("SELECT COUNT(*) FROM cache").fetchone()[0]
expired = conn.execute(
"SELECT COUNT(*) FROM cache WHERE expires_at IS NOT NULL AND expires_at < ?",
(now,),
).fetchone()[0]
by_status = dict(
conn.execute(
"SELECT status, COUNT(*) FROM cache GROUP BY status"
).fetchall()
)
by_source = dict(
conn.execute(
"SELECT source, COUNT(*) FROM cache GROUP BY source"
).fetchall()
)
by_slot = dict(
conn.execute(
"SELECT positive_kind, COUNT(*) FROM cache GROUP BY positive_kind"
).fetchall()
)
# Source × Status cross-tabulation
source_status = conn.execute(
"SELECT source, status, COUNT(*) FROM cache GROUP BY source, status"
).fetchall()
# Confidence buckets (only for positive statuses)
confidence_rows = conn.execute(
"SELECT confidence FROM cache WHERE status IN (?, ?)",
(
CacheStatus.SUCCESS_SYNCED.value,
CacheStatus.SUCCESS_UNSYNCED.value,
),
).fetchall()
# Build source×status table: {source: {status: count}}
source_status_table: dict[str, dict[str, int]] = {}
for src, status, count in source_status:
source_status_table.setdefault(src, {})[status] = count
# Build confidence buckets
buckets = {
"legacy (NULL)": 0,
"0-24": 0,
"25-49": 0,
"50-79": 0,
"80-99": 0,
"100": 0,
}
for (conf,) in confidence_rows:
if conf is None:
buckets["legacy (NULL)"] += 1
elif conf >= 100:
buckets["100"] += 1
elif conf >= 80:
buckets["80-99"] += 1
elif conf >= 50:
buckets["50-79"] += 1
elif conf >= 25:
buckets["25-49"] += 1
else:
buckets["0-24"] += 1
return {
"total": total,
"expired": expired,
"active": total - expired,
"by_status": by_status,
"by_source": by_source,
"by_slot": by_slot,
"source_status": source_status_table,
"confidence_buckets": buckets,
}
@@ -1,755 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-26 02:04:39
Description: CLI interface.
"""
from __future__ import annotations
import sys
import time
import os
import asyncio
import json
from pathlib import Path
from typing import Annotated
from urllib.parse import quote
import cyclopts
from loguru import logger
from .config import (
DB_PATH,
AppConfig,
load_config,
enable_debug,
)
from .utils import get_sidecar_path
from .models import TrackMeta
from .mpris import get_current_track
from .core import LrcManager
from .fetchers import FetcherMethodType
from .watch import WatchCoordinator
from .watch.control import ControlClient, parse_delta
from .watch.view.pipe import PipeOutput
from .watch.view.print import PrintOutput
app = cyclopts.App(
help="LRX-CLI — Fetch line-synced lyrics for your music player.",
)
app.register_install_completion_command()
cache_app = cyclopts.App(name="cache", help="Manage the local SQLite cache.")
app.command(cache_app)
watch_app = cyclopts.App(name="watch", help="Watch MPRIS and output lyrics.")
app.command(watch_app)
ctl_app = cyclopts.App(name="ctl", help="Control a running watch session.")
watch_app.command(ctl_app)
# Global state set by the meta launcher
_player: str | None = None
_db_path: str | None = None
_app_config: AppConfig = AppConfig()
# Will be initialized before any command runs, safe to set to None here
manager: LrcManager = None # type: ignore
@app.meta.default
def launcher(
*tokens: Annotated[str, cyclopts.Parameter(show=False, allow_leading_hyphen=True)],
debug: Annotated[
bool,
cyclopts.Parameter(
name=["--debug", "-d"], negative="", help="Enable debug logging."
),
] = False,
player: Annotated[
str | None,
cyclopts.Parameter(
name=["--player", "-p"],
help="Target a specific MPRIS player using its DBus name or a portion thereof. Bypasses player_blacklist.",
),
] = None,
db_path: Annotated[
str | None,
cyclopts.Parameter(
name=["--db-path", "-c"],
help=f"Custom path for the cache database file (default: {DB_PATH}).",
),
] = None,
):
global _player, _db_path, _app_config, manager
if debug:
enable_debug()
_player = player
_db_path = str(Path(db_path).resolve()) if db_path else DB_PATH
_app_config = load_config()
manager = LrcManager(db_path=_db_path, config=_app_config)
app(tokens)
# fetch
@app.command
def fetch(
*,
method: Annotated[
FetcherMethodType | None,
cyclopts.Parameter(help="Force a specific source."),
] = None,
no_cache: Annotated[
bool,
cyclopts.Parameter(
name="--no-cache", negative="", help="Bypass the cache for this request."
),
] = False,
allow_unsynced: Annotated[
bool,
cyclopts.Parameter(
name="--allow-unsynced",
negative="",
help="Allow unsynced lyrics (will be displayed with all time tags set to [00:00.00]).",
),
] = False,
plain: Annotated[
bool,
cyclopts.Parameter(
name="--plain",
negative="",
help="Output only plain lyrics without tags (highest priority over --normalize).",
),
] = False,
normalize: Annotated[
bool,
cyclopts.Parameter(
name="--normalize",
negative="",
help="Output normalized LRC (ignored when --plain is also set).",
),
] = False,
):
"""Fetch and print lyrics for the currently playing track."""
track = get_current_track(
_player,
preferred_player=_app_config.general.preferred_player,
player_blacklist=_app_config.general.player_blacklist,
)
if not track:
logger.error("No active playing track found.")
sys.exit(1)
logger.info(f"Track: {track.display_name()}")
result = manager.fetch_for_track(
track,
force_method=method,
bypass_cache=no_cache,
allow_unsynced=allow_unsynced,
)
if not result or not result.lyrics:
logger.error("No lyrics found.")
sys.exit(1)
if plain:
print(result.lyrics.to_plain())
elif normalize:
print(result.lyrics.to_normalized_text())
else:
print(result.lyrics.to_text())
# search
@app.command
def search(
*,
title: Annotated[
str | None, cyclopts.Parameter(name=["--title", "-t"], help="Track title.")
] = None,
artist: Annotated[
str | None, cyclopts.Parameter(name=["--artist", "-a"], help="Artist name.")
] = None,
album: Annotated[str | None, cyclopts.Parameter(help="Album name.")] = None,
trackid: Annotated[str | None, cyclopts.Parameter(help="Spotify track ID.")] = None,
length: Annotated[
int | None,
cyclopts.Parameter(
name=["--length", "-l"], help="Track duration in milliseconds."
),
] = None,
url: Annotated[
str | None,
cyclopts.Parameter(
help="Local file URL (file:///...). Mutually exclusive with --path."
),
] = None,
path: Annotated[
str | None,
cyclopts.Parameter(
name=["--path"],
help="Local audio file path. Mutually exclusive with --url.",
),
] = None,
method: Annotated[
FetcherMethodType | None, cyclopts.Parameter(help="Force a specific source.")
] = None,
no_cache: Annotated[
bool,
cyclopts.Parameter(
name="--no-cache", negative="", help="Bypass the cache for this request."
),
] = False,
allow_unsynced: Annotated[
bool,
cyclopts.Parameter(
name="--allow-unsynced",
negative="",
help="Allow unsynced lyrics (will be displayed with all time tags set to [00:00.00]).",
),
] = False,
plain: Annotated[
bool,
cyclopts.Parameter(
name="--plain",
negative="",
help="Output only plain lyrics without tags (highest priority over --normalize).",
),
] = False,
normalize: Annotated[
bool,
cyclopts.Parameter(
name="--normalize",
negative="",
help="Output normalized LRC (ignored when --plain is also set).",
),
] = False,
):
"""Search for lyrics by metadata (bypasses MPRIS)."""
if url and path:
logger.error("--url and --path are mutually exclusive.")
sys.exit(1)
if path:
resolved = str(Path(path).resolve())
url = "file://" + quote(resolved, safe="/")
track = TrackMeta(
title=title,
artist=artist,
album=album,
trackid=trackid,
length=length,
url=url,
)
logger.info(f"Track: {track.display_name()}")
result = manager.fetch_for_track(
track,
force_method=method,
bypass_cache=no_cache,
allow_unsynced=allow_unsynced,
)
if not result or not result.lyrics:
logger.error("No lyrics found.")
sys.exit(1)
if plain:
print(result.lyrics.to_plain())
elif normalize:
print(result.lyrics.to_normalized_text())
else:
print(result.lyrics.to_text())
# export
@app.command
def export(
*,
output: Annotated[
str | None,
cyclopts.Parameter(
name=["--output", "-o"],
help="Output file path (default: same directory as audio file with .lrc extension, or current directory if not available).",
),
] = None,
method: Annotated[
FetcherMethodType | None, cyclopts.Parameter(help="Force a specific source.")
] = None,
no_cache: Annotated[
bool, cyclopts.Parameter(name="--no-cache", negative="", help="Bypass cache.")
] = False,
overwrite: Annotated[
bool,
cyclopts.Parameter(
name=["--overwrite", "-f"], negative="", help="Overwrite existing file."
),
] = False,
allow_unsynced: Annotated[
bool,
cyclopts.Parameter(
name="--allow-unsynced",
negative="",
help="Allow unsynced lyrics (will be exported with all time tags set to [00:00.00] if --plain is not present).",
),
] = False,
plain: Annotated[
bool,
cyclopts.Parameter(
name="--plain",
negative="",
help="Export only plain lyrics (.txt, highest priority over --normalize).",
),
] = False,
normalize: Annotated[
bool,
cyclopts.Parameter(
name="--normalize",
negative="",
help="Export normalized LRC output (ignored when --plain is also set).",
),
] = False,
):
"""Export lyrics of the current track to a .lrc file."""
track = get_current_track(
_player,
preferred_player=_app_config.general.preferred_player,
player_blacklist=_app_config.general.player_blacklist,
)
if not track:
logger.error("No active playing track found.")
sys.exit(1)
result = manager.fetch_for_track(
track,
force_method=method,
bypass_cache=no_cache,
allow_unsynced=allow_unsynced,
)
if not result or not result.lyrics:
logger.error("No lyrics available to export.")
sys.exit(1)
# Output file extension
ext = ".lrc" if not plain else ".txt"
if output and not output.endswith(ext):
output += ext
# Build default output path
if not output:
if track.url:
lrc_path = get_sidecar_path(track.url, ensure_exists=False, extension=ext)
if lrc_path:
output = str(lrc_path)
logger.info(f"Exporting to sidecar path: {output}")
# Fallback to current directory with sanitized filename
if not output:
filename = (
f"{track.artist} - {track.title}{ext}"
if track.artist and track.title
else "lyrics" + ext
)
# Sanitize filename
filename = "".join(
c for c in filename if c.isalpha() or c.isdigit() or c in " -_."
).rstrip()
output = os.path.join(os.getcwd(), filename)
if os.path.exists(output) and not overwrite:
logger.error(f"File exists: {output} (use -f to overwrite)")
sys.exit(1)
try:
with open(output, "w", encoding="utf-8") as f:
if plain:
f.write(result.lyrics.to_plain())
elif normalize:
f.write(result.lyrics.to_normalized_text())
else:
f.write(result.lyrics.to_text())
logger.info(f"Exported lyrics to {output}")
except Exception as e:
logger.error(f"Failed to write file: {e}")
sys.exit(1)
# watch subcommands
@watch_app.command
def pipe(
before: Annotated[
int,
cyclopts.Parameter(
name=["--before", "-b"],
help="Number of lyric lines to show before current line.",
),
] = 0,
after: Annotated[
int,
cyclopts.Parameter(
name=["--after", "-a"],
help="Number of lyric lines to show after current line.",
),
] = 0,
no_newline: Annotated[
bool,
cyclopts.Parameter(
name=["--no-newline", "-n"],
negative="",
help="Do not append a new line after the lyric output.",
),
] = False,
):
"""Watch active player and continuously print lyric window to stdout."""
logger.info(
"Starting watch pipe (player filter: {})",
_player or "<none>",
)
output = PipeOutput(
before=max(0, before), after=max(0, after), no_newline=no_newline
)
try:
session = WatchCoordinator(
manager,
output,
player_hint=_player,
config=_app_config,
)
success = asyncio.run(session.run())
if not success:
sys.exit(1)
except KeyboardInterrupt:
logger.info("Watch stopped.")
@watch_app.command(name="print")
def watch_print(
plain: Annotated[
bool,
cyclopts.Parameter(
name="--plain",
negative="",
help="Output plain text (strips all tags). Takes priority over --normalize.",
),
] = False,
) -> None:
"""Watch active player and print all lyrics to stdout once per track change."""
logger.info(
"Starting watch print (player filter: {})",
_player or "<none>",
)
output = PrintOutput(plain=plain)
try:
session = WatchCoordinator(
manager,
output,
player_hint=_player,
config=_app_config,
)
success = asyncio.run(session.run())
if not success:
sys.exit(1)
except KeyboardInterrupt:
logger.info("Watch stopped.")
@ctl_app.command
def offset(delta: str) -> None:
"""Adjust watch offset. Examples: +200, -200, 0."""
parsed_ok, parsed_delta, parse_error = parse_delta(delta)
if not parsed_ok or parsed_delta is None:
logger.error(parse_error or "Invalid offset delta")
sys.exit(1)
response = ControlClient(_app_config.watch.socket_path).send(
{"cmd": "offset", "delta": parsed_delta}
)
if not response.get("ok"):
logger.error(response.get("error", "Unknown error"))
sys.exit(1)
print(json.dumps(response, indent=2, ensure_ascii=False))
@ctl_app.command
def status() -> None:
"""Print current watch session status as JSON."""
response = ControlClient(_app_config.watch.socket_path).send({"cmd": "status"})
if not response.get("ok"):
logger.error(response.get("error", "Unknown error"))
sys.exit(1)
print(json.dumps(response, indent=2, ensure_ascii=False))
# cache subcommands
@cache_app.command
def query(
*,
all: Annotated[
bool,
cyclopts.Parameter(name="--all", negative="", help="Dump all cache entries."),
] = False,
):
"""Show cached entries for the current track."""
if all:
rows = manager.cache.query_all()
if not rows:
print("Cache is empty.")
return
for row in rows:
_print_cache_row(row)
print()
return
track = get_current_track(
_player,
preferred_player=_app_config.general.preferred_player,
player_blacklist=_app_config.general.player_blacklist,
)
if not track:
logger.error("No active playing track found.")
sys.exit(1)
_print_track_cache(track)
@cache_app.command
def clear(
*,
all: Annotated[
bool,
cyclopts.Parameter(name="--all", negative="", help="Clear the entire cache."),
] = False,
):
"""Clear cached entries for the current track."""
if all:
manager.cache.clear_all()
return
track = get_current_track(
_player,
preferred_player=_app_config.general.preferred_player,
player_blacklist=_app_config.general.player_blacklist,
)
if not track:
logger.error("No active playing track found.")
sys.exit(1)
manager.cache.clear_track(track)
@cache_app.command
def prune():
"""Remove expired cache entries."""
manager.cache.prune()
@cache_app.command
def stats():
"""Show cache statistics."""
s = manager.cache.stats()
print("=== Cache Statistics ===")
print(f"Total entries : {s['total']}")
print(f"Active : {s['active']}")
print(f"Expired : {s['expired']}")
by_slot = s.get("by_slot", {})
if by_slot:
print(
"Slots : "
+ ", ".join(f"{k}={v}" for k, v in sorted(by_slot.items()))
)
# Source × Status table
table = s.get("source_status", {})
if table:
all_statuses = sorted({st for row in table.values() for st in row})
# Short labels for column headers
short = {
"SUCCESS_SYNCED": "synced",
"SUCCESS_UNSYNCED": "unsynced",
"NOT_FOUND": "not_found",
"NETWORK_ERROR": "net_err",
}
headers = [short.get(st, st) for st in all_statuses]
sources = sorted(table.keys())
# Column widths
src_w = max(len(src) for src in sources)
src_w = max(src_w, 6) # min width for "source" header
col_w = [max(len(h) if h else 0, 4) for h in headers]
print(
f"\n{'source':<{src_w}} "
+ " ".join(f"{h:>{w}}" for h, w in zip(headers, col_w))
)
print("-" * src_w + " " + " ".join("-" * w for w in col_w))
for src in sources:
counts = [str(table[src].get(st, 0)) for st in all_statuses]
print(
f"{src:<{src_w}} "
+ " ".join(f"{c:>{w}}" for c, w in zip(counts, col_w))
)
totals = [
str(sum(table[src].get(st, 0) for src in sources)) for st in all_statuses
]
print("-" * src_w + " " + " ".join("-" * w for w in col_w))
print(
f"{'total':<{src_w}} "
+ " ".join(f"{c:>{w}}" for c, w in zip(totals, col_w))
)
# Confidence distribution (positive entries only)
buckets = s.get("confidence_buckets", {})
non_empty = {k: v for k, v in buckets.items() if v > 0}
if non_empty:
label_w = max(len(k) for k in non_empty)
print("\nConfidence distribution (positive entries):")
for label, count in buckets.items():
if count > 0:
print(f" {label:>{label_w}} : {count}")
@cache_app.command
def confidence(
source: Annotated[
str, cyclopts.Parameter(help="Source to update (e.g. spotify, netease).")
],
score: Annotated[float, cyclopts.Parameter(help="Confidence score (0-100).")],
):
"""Set confidence score for the current track's cache entry from a specific source."""
if not 0 <= score <= 100:
logger.error("Score must be between 0 and 100.")
sys.exit(1)
track = get_current_track(
_player,
preferred_player=_app_config.general.preferred_player,
player_blacklist=_app_config.general.player_blacklist,
)
if not track:
logger.error("No active playing track found.")
sys.exit(1)
updated = manager.cache.update_confidence(track, score, source=source)
if updated:
print(f"Updated [{source}] confidence to {score:.0f}.")
else:
print(f"No cache entry found for [{source}].")
@cache_app.command
def insert(
*,
path: Annotated[
str | None,
cyclopts.Parameter(
name=["--path"],
help="Path to a local .lrc file to insert instead of reading from stdin.",
),
] = None,
):
"""Manually insert lyrics into the cache for the current track."""
track = get_current_track(
_player,
preferred_player=_app_config.general.preferred_player,
player_blacklist=_app_config.general.player_blacklist,
)
if not track:
logger.error("No active playing track found.")
sys.exit(1)
if path:
try:
with open(path, "r", encoding="utf-8") as f:
lyrics = f.read()
except Exception as e:
logger.error(f"Failed to read file: {e}")
sys.exit(1)
else:
logger.info("Reading lyrics from stdin (Ctrl+D to finish)...")
lyrics = sys.stdin.read()
manager.manual_insert(track, lyrics)
# helpers
def _print_track_cache(track: TrackMeta) -> None:
"""Print all cached entries for a given track."""
print(f"Track: {track.display_name()}")
if track.album:
print(f"Album: {track.album}")
if track.length:
secs = track.length / 1000.0
print(f"Duration: {int(secs // 60)}:{secs % 60:05.2f}")
print()
rows = manager.cache.query_track(track)
if not rows:
print(" (no cache entries)")
return
for row in rows:
_print_cache_row(row, indent=" ")
def _print_cache_row(row: dict, indent: str = "") -> None:
"""Pretty-print a single cache row."""
now = int(time.time())
source = row.get("source", "?")
slot = row.get("positive_kind", "?")
status = row.get("status", "?")
artist = row.get("artist", "")
title = row.get("title", "")
album = row.get("album", "")
created = row.get("created_at", 0)
expires = row.get("expires_at")
lyrics = row.get("lyrics", "")
confidence = row.get("confidence")
name = f"{artist} - {title}" if artist and title else row.get("key", "?")
print(f"{indent}[{source}/{slot}] {name}")
if album:
print(f"{indent} Album : {album}")
print(f"{indent} Status : {status}")
if created:
age = now - created
print(f"{indent} Cached : {age // 3600}h {(age % 3600) // 60}m ago")
if expires:
remaining = expires - now
if remaining > 0:
print(
f"{indent} Expires : in {remaining // 3600}h {(remaining % 3600) // 60}m"
)
else:
print(f"{indent} Expires : EXPIRED")
else:
print(f"{indent} Expires : never")
if lyrics:
line_count = len(lyrics.splitlines())
print(f"{indent} Lyrics : {line_count} lines")
if confidence is not None:
print(f"{indent} Confidence: {confidence:.0f}")
else:
print(f"{indent} Confidence: (legacy)")
def run():
app.meta()
if __name__ == "__main__":
run()
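The `stats` confidence-distribution listing above right-aligns bucket labels to a width computed from the non-empty buckets. A minimal standalone sketch of that pattern (bucket names and counts are illustrative):

```python
# Illustrative buckets; zero-count labels are excluded before the width calc.
buckets = {"0-40": 0, "40-60": 3, "60-80": 12, "80-100": 7}
non_empty = {k: v for k, v in buckets.items() if v > 0}
label_w = max(len(k) for k in non_empty)
lines = [f"{label:>{label_w}} : {count}" for label, count in non_empty.items()]
print("\n".join(lines))
```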
@@ -1,208 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 10:17:56
Description: Global configuration constants, typed config dataclasses, and logger setup.
"""
from __future__ import annotations
import dataclasses
import os
import sys
import tomllib
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, get_type_hints
from platformdirs import user_cache_dir, user_config_dir
from loguru import logger
from importlib.metadata import version
# Application
APP_NAME = "lrx-cli"
APP_AUTHOR = "Uyanide"
APP_VERSION = version(APP_NAME)
# Paths
CACHE_DIR = user_cache_dir(APP_NAME, APP_AUTHOR)
DB_PATH = os.path.join(CACHE_DIR, "cache.db")
# Slot identifiers used by per-slot cache rows.
SLOT_SYNCED = "SYNCED"
SLOT_UNSYNCED = "UNSYNCED"
_WATCH_SOCKET_PATH = str(Path(CACHE_DIR) / "watch.sock")
# Cache TTLs (seconds)
TTL_SYNCED = None # never expires
TTL_UNSYNCED = None # never expires
TTL_NOT_FOUND = 86400 * 3 # 3 days
TTL_NETWORK_ERROR = 3600 # 1 hour
# Search
DURATION_TOLERANCE_MS = 3000 # max duration mismatch for search matching
# Confidence scoring weights (sum to 100)
SCORE_W_TITLE = 40.0
SCORE_W_ARTIST = 30.0
SCORE_W_ALBUM = 10.0
SCORE_W_DURATION = 10.0
SCORE_W_SYNCED = 10.0
CONFIDENCE_ALGO_VERSION = 1
# Confidence thresholds
MIN_CONFIDENCE = 40.0 # below this, candidate is rejected
HIGH_CONFIDENCE = 80.0 # at or above this, stop searching early
# Multi-candidate fetching
MULTI_CANDIDATE_LIMIT = 3 # max candidates to try per search-based fetcher
MULTI_CANDIDATE_DELAY_S = 0.2 # delay between sequential lyric fetches
# Legacy cache rows (no confidence stored) get a base score by sync status
LEGACY_CONFIDENCE = 50.0
# User-Agents
UA_BROWSER = "Mozilla/5.0 (X11; Linux x86_64; rv:149.0) Gecko/20100101 Firefox/149.0"
UA_LRX = f"LRX-CLI {APP_VERSION} (https://github.com/Uyanide/lrx-cli)"
MUSIXMATCH_COOLDOWN_MS = 600_000 # 10 minutes
os.makedirs(CACHE_DIR, exist_ok=True)
DEFAULT_PREFERRED_PLAYER = ""
DEFAULT_PLAYER_BLACKLIST: tuple[str, ...] = (
"firefox",
"zen",
"chrome",
"chromium",
"vivaldi",
"edge",
"opera",
"mpv",
)
@dataclass(frozen=True)
class GeneralConfig:
preferred_player: str = DEFAULT_PREFERRED_PLAYER
player_blacklist: tuple[str, ...] = DEFAULT_PLAYER_BLACKLIST
http_timeout: float = 10.0
@dataclass(frozen=True)
class CredentialConfig:
spotify_sp_dc: str = ""
musixmatch_usertoken: str = ""
qq_music_api_url: str = ""
@dataclass(frozen=True)
class WatchConfig:
debounce_ms: int = 400
calibration_interval_s: float = 3.0
position_tick_ms: int = 50
socket_path: str = field(default_factory=lambda: _WATCH_SOCKET_PATH)
@dataclass(frozen=True)
class AppConfig:
general: GeneralConfig = field(default_factory=GeneralConfig)
credentials: CredentialConfig = field(default_factory=CredentialConfig)
watch: WatchConfig = field(default_factory=WatchConfig)
_CONFIG_PATH = Path(user_config_dir(APP_NAME, APP_AUTHOR)) / "config.toml"
def _coerce(val: Any, hint: Any, section: str, name: str) -> Any:
"""Coerce and validate one TOML value against its declared field type."""
if hint is str:
if not isinstance(val, str):
raise ValueError(
f"[{section}].{name}: expected str, got {type(val).__name__}"
)
return val
if hint is int:
if not isinstance(val, int) or isinstance(val, bool):
raise ValueError(
f"[{section}].{name}: expected int, got {type(val).__name__}"
)
return val
if hint is float:
if isinstance(val, bool):
raise ValueError(f"[{section}].{name}: expected float, got bool")
if isinstance(val, (int, float)):
return float(val)
raise ValueError(
f"[{section}].{name}: expected float, got {type(val).__name__}"
)
origin = getattr(hint, "__origin__", None)
if origin is tuple:
if not isinstance(val, list):
raise ValueError(
f"[{section}].{name}: expected array, got {type(val).__name__}"
)
for i, item in enumerate(val):
if not isinstance(item, str):
raise ValueError(
f"[{section}].{name}[{i}]: expected str, got {type(item).__name__}"
)
return tuple(val)
raise ValueError(f"[{section}].{name}: unsupported field type {hint!r}")
def _parse_section(raw: dict[str, Any], cls: type, section: str) -> Any:
"""Parse one TOML section dict into a frozen dataclass, rejecting unknown keys."""
fields_map = {f.name: f for f in dataclasses.fields(cls)}
hints = get_type_hints(cls)
unknown = set(raw) - set(fields_map)
if unknown:
raise ValueError(
f"Unknown config keys in [{section}]: {', '.join(sorted(unknown))}"
)
kwargs: dict[str, Any] = {}
for name, f in fields_map.items():
if name not in raw:
if f.default is not dataclasses.MISSING:
kwargs[name] = f.default
elif f.default_factory is not dataclasses.MISSING: # type: ignore[misc]
kwargs[name] = f.default_factory()
continue
kwargs[name] = _coerce(raw[name], hints[name], section, name)
return cls(**kwargs)
def load_config(path: Path | None = None) -> AppConfig:
"""Load AppConfig from TOML file; return all-defaults when file is absent."""
resolved = path or _CONFIG_PATH
if not resolved.exists():
return AppConfig()
with open(resolved, "rb") as f:
data = tomllib.load(f)
return AppConfig(
general=_parse_section(data.get("general", {}), GeneralConfig, "general"),
credentials=_parse_section(
data.get("credentials", {}), CredentialConfig, "credentials"
),
watch=_parse_section(data.get("watch", {}), WatchConfig, "watch"),
)
_LOG_FORMAT = (
"<green>{time:YYYY-MM-DD HH:mm:ss}</green> | "
"<level>{level: <8}</level> | "
"<cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - "
"<level>{message}</level>"
)
logger.remove()
logger.add(sys.stderr, format=_LOG_FORMAT, level="INFO")
def enable_debug() -> None:
"""Switch logger to DEBUG level."""
logger.remove()
logger.add(sys.stderr, format=_LOG_FORMAT, level="DEBUG")
@@ -1,307 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 11:09:53
Description: Core orchestrator — coordinates fetchers with cache-aware fallback.
Also handles enrichers & authenticators & …
"""
from __future__ import annotations
import asyncio
from typing import Optional
from loguru import logger
from .fetchers import FetcherMethodType, build_plan, create_fetchers
from .fetchers.base import BaseFetcher, FetchResult
from .authenticators import create_authenticators
from .cache import CacheEngine
from .lrc import LRCData
from .config import (
TTL_SYNCED,
TTL_UNSYNCED,
TTL_NOT_FOUND,
TTL_NETWORK_ERROR,
HIGH_CONFIDENCE,
SLOT_SYNCED,
SLOT_UNSYNCED,
AppConfig,
)
from .models import TrackMeta, LyricResult, CacheStatus
from .enrichers import create_enrichers, enrich_track
from .utils import is_better_result, select_best_positive
# Maps CacheStatus to the default TTL used when storing results
_STATUS_TTL: dict[CacheStatus, Optional[int]] = {
CacheStatus.SUCCESS_SYNCED: TTL_SYNCED,
CacheStatus.SUCCESS_UNSYNCED: TTL_UNSYNCED,
CacheStatus.NOT_FOUND: TTL_NOT_FOUND,
CacheStatus.NETWORK_ERROR: TTL_NETWORK_ERROR,
}
def _pick_for_return(
result: FetchResult,
allow_unsynced: bool,
) -> Optional[LyricResult]:
"""Pick best positive slot for final selection under current strategy."""
candidates: list[LyricResult] = []
if result.synced and result.synced.status == CacheStatus.SUCCESS_SYNCED:
candidates.append(result.synced)
if (
allow_unsynced
and result.unsynced
and result.unsynced.status == CacheStatus.SUCCESS_UNSYNCED
):
candidates.append(result.unsynced)
return select_best_positive(candidates, allow_unsynced=True)
def _iter_slot_results(result: FetchResult) -> list[tuple[str, LyricResult]]:
"""Return all non-None slot results with their cache slot key."""
out: list[tuple[str, LyricResult]] = []
if result.synced is not None:
out.append((SLOT_SYNCED, result.synced))
if result.unsynced is not None:
out.append((SLOT_UNSYNCED, result.unsynced))
return out
def _pick_cached_for_return(
cached_rows: list[LyricResult],
allow_unsynced: bool,
) -> Optional[LyricResult]:
"""Convert cached slot rows into FetchResult-like view and select return candidate."""
fr = FetchResult()
for row in cached_rows:
if row.status == CacheStatus.SUCCESS_SYNCED:
fr = FetchResult(synced=row, unsynced=fr.unsynced)
elif row.status == CacheStatus.SUCCESS_UNSYNCED:
fr = FetchResult(synced=fr.synced, unsynced=row)
return _pick_for_return(fr, allow_unsynced)
def _has_negative_for_both_slots(cached_rows: list[LyricResult]) -> bool:
"""True when both slot rows are present and both are negative."""
if len(cached_rows) < 2:
return False
return all(
r.status in (CacheStatus.NOT_FOUND, CacheStatus.NETWORK_ERROR)
for r in cached_rows
)
class LrcManager:
"""Main entry point for fetching lyrics with caching."""
def __init__(self, db_path: str, config: AppConfig = AppConfig()) -> None:
self.cache = CacheEngine(db_path=db_path)
self.authenticators = create_authenticators(self.cache, config)
self.fetchers = create_fetchers(self.cache, self.authenticators, config)
self.enrichers = create_enrichers(self.authenticators)
async def _run_group(
self,
group: list[BaseFetcher],
track: TrackMeta,
bypass_cache: bool,
allow_unsynced: bool,
) -> list[tuple[str, LyricResult]]:
"""Run one group with slot-aware cache check then parallel fetch uncached sources."""
cached_results: list[tuple[str, LyricResult]] = []
need_fetch: list[BaseFetcher] = []
for fetcher in group:
source = fetcher.source_name
if not bypass_cache and not fetcher.self_cached:
cached_rows = self.cache.get_all(track, source)
if cached_rows:
if _has_negative_for_both_slots(cached_rows):
logger.debug(
f"[{source}] cache hit: all slots negative, skipping"
)
continue
cached_for_return = _pick_cached_for_return(
cached_rows, allow_unsynced
)
if cached_for_return is not None:
is_trusted = cached_for_return.confidence >= HIGH_CONFIDENCE
logger.info(
f"[{source}] cache hit: {cached_for_return.status.value}"
f" (confidence={cached_for_return.confidence:.0f})"
)
cached_results.append((source, cached_for_return))
# Return immediately on trusted synced cache hit
if (
cached_for_return.status == CacheStatus.SUCCESS_SYNCED
and is_trusted
):
return cached_results
continue
elif not fetcher.self_cached:
logger.debug(f"[{source}] cache bypassed")
need_fetch.append(fetcher)
if need_fetch:
task_map: dict[asyncio.Task, BaseFetcher] = {
asyncio.create_task(f.fetch(track, bypass_cache=bypass_cache)): f
for f in need_fetch
}
pending = set(task_map)
while pending:
done, pending = await asyncio.wait(
pending, return_when=asyncio.FIRST_COMPLETED
)
found_trusted = False
for task in done:
fetcher = task_map[task]
source = fetcher.source_name
try:
result = task.result()
except Exception as e:
logger.error(f"[{source}] fetch raised: {e}")
continue
if result is None:
logger.debug(f"[{source}] returned None")
continue
return_result = _pick_for_return(result, allow_unsynced)
if not fetcher.self_cached and not bypass_cache:
for slot_kind, slot_result in _iter_slot_results(result):
ttl = slot_result.ttl or _STATUS_TTL.get(
slot_result.status, TTL_NOT_FOUND
)
self.cache.set(
track,
source,
slot_result,
ttl_seconds=ttl,
positive_kind=slot_kind,
)
if return_result is not None:
logger.info(
f"[{source}] got {return_result.status.value} lyrics"
f" (confidence={return_result.confidence:.0f})"
)
cached_results.append((source, return_result))
if (
return_result is not None
and return_result.status == CacheStatus.SUCCESS_SYNCED
and return_result.confidence >= HIGH_CONFIDENCE
):
found_trusted = True
if found_trusted:
for t in pending:
t.cancel()
await asyncio.gather(*pending, return_exceptions=True)
break
return cached_results
async def _fetch_for_track(
self,
track: TrackMeta,
force_method: Optional[FetcherMethodType],
bypass_cache: bool,
allow_unsynced: bool,
) -> Optional[LyricResult]:
track = await enrich_track(track, self.enrichers)
logger.info(f"Fetching lyrics for: {track.display_name()}")
plan = build_plan(self.fetchers, track, force_method)
if not plan:
return None
best_result: Optional[LyricResult] = None
for group in plan:
group_results = await self._run_group(
group,
track,
bypass_cache,
allow_unsynced,
)
for source, result in group_results:
if result.status not in (
CacheStatus.SUCCESS_SYNCED,
CacheStatus.SUCCESS_UNSYNCED,
):
continue
is_trusted = result.confidence >= HIGH_CONFIDENCE
# Trusted synced → return immediately
if result.status == CacheStatus.SUCCESS_SYNCED and is_trusted:
logger.info(
f"Returning {result.status.value} lyrics from {source}"
f" (confidence={result.confidence:.0f})"
)
return result
if best_result is None or is_better_result(
result,
best_result,
allow_unsynced=allow_unsynced,
):
best_result = result
if best_result:
if (
best_result.status == CacheStatus.SUCCESS_UNSYNCED
and not allow_unsynced
):
logger.info(
f"Unsynced lyrics found from {best_result.source}, but unsynced results are not allowed"
)
return None
logger.info(
f"Returning {best_result.status.value} lyrics from {best_result.source}"
)
return best_result
logger.info(f"No lyrics found for {track.display_name()}")
return None
def fetch_for_track(
self,
track: TrackMeta,
force_method: Optional[FetcherMethodType] = None,
bypass_cache: bool = False,
allow_unsynced: bool = False,
) -> Optional[LyricResult]:
"""Fetch lyrics for track using the group-based parallel pipeline."""
return asyncio.run(
self._fetch_for_track(
track,
force_method,
bypass_cache,
allow_unsynced,
)
)
def manual_insert(
self,
track: TrackMeta,
lyrics: str,
) -> None:
"""Manually insert lyrics into the cache for a track."""
track = asyncio.run(enrich_track(track, self.enrichers))
logger.info(f"Manually inserting lyrics for: {track.display_name()}")
lrc = LRCData(lyrics)
result = LyricResult(
status=lrc.detect_sync_status(),
lyrics=lrc,
source="manual",
ttl=None,
)
self.cache.set(track, "manual", result, ttl_seconds=None)
logger.info("Lyrics inserted into cache.")
@@ -1,60 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-31 06:09:11
Description: Metadata enrichment pipeline
"""
from __future__ import annotations
from loguru import logger
from .base import BaseEnricher
from .audio_tag import AudioTagEnricher
from .file_name import FileNameEnricher
from .musixmatch import MusixmatchSpotifyEnricher
from ..authenticators import BaseAuthenticator, MusixmatchAuthenticator
from ..models import TrackMeta
# Enrichers run in order; earlier ones have higher priority.
# There are only a few of them, so we can just call them sequentially without worrying about async concurrency or batching.
def create_enrichers(
authenticators: dict[str, BaseAuthenticator],
) -> list[BaseEnricher]:
"""Instantiate all enrichers."""
mxm_auth = authenticators["musixmatch"]
assert isinstance(mxm_auth, MusixmatchAuthenticator)
return [
AudioTagEnricher(),
FileNameEnricher(),
MusixmatchSpotifyEnricher(mxm_auth),
]
async def enrich_track(track: TrackMeta, enrichers: list[BaseEnricher]) -> TrackMeta:
"""Run all enrichers and return a track with missing fields filled in.
Each enricher sees the cumulative state (earlier enrichers' results
are already applied). A field is only set if it is currently None.
"""
for enricher in enrichers:
try:
# Skip if all provided fields are already filled
if all(
getattr(track, field, None) is not None for field in enricher.provides
):
continue
result = await enricher.enrich(track)
except Exception as e:
logger.warning(f"Enricher {enricher.name} failed: {e}")
continue
if not result:
continue
# Only apply fields that are still None
updates = {k: v for k, v in result.items() if getattr(track, k, None) is None}
if updates:
for k, v in updates.items():
setattr(track, k, v)
return track
@@ -1,73 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-05 02:13:49
Description: Musixmatch metadata enricher (matcher.track.get by Spotify track ID).
"""
from __future__ import annotations
from typing import Optional
from loguru import logger
from .base import BaseEnricher
from ..authenticators.musixmatch import MusixmatchAuthenticator
from ..models import TrackMeta
_MUSIXMATCH_TRACK_MATCH_URL = (
"https://apic-desktop.musixmatch.com/ws/1.1/matcher.track.get"
)
class MusixmatchSpotifyEnricher(BaseEnricher):
"""Fill title, artist, album, and length from Musixmatch using Spotify track ID."""
def __init__(self, auth: MusixmatchAuthenticator) -> None:
self.auth = auth
@property
def name(self) -> str:
return "musixmatch"
@property
def provides(self) -> set[str]:
return {"title", "artist", "album", "length"}
async def enrich(self, track: TrackMeta) -> Optional[dict]:
if not track.trackid:
return None
logger.debug(f"Musixmatch enricher: looking up trackid={track.trackid}")
try:
data = await self.auth.get_json(
_MUSIXMATCH_TRACK_MATCH_URL,
{"track_spotify_id": track.trackid},
)
except Exception as e:
logger.warning(f"Musixmatch enricher: request failed: {e}")
return None
if data is None:
return None
body = data.get("message", {}).get("body")
t = body.get("track") if isinstance(body, dict) else None
if not isinstance(t, dict):
logger.debug(
f"Musixmatch enricher: no track data for trackid={track.trackid}"
)
return None
updates: dict = {}
if isinstance(t.get("track_name"), str) and t["track_name"]:
updates["title"] = t["track_name"]
if isinstance(t.get("artist_name"), str) and t["artist_name"]:
updates["artist"] = t["artist_name"]
if isinstance(t.get("album_name"), str) and t["album_name"]:
updates["album"] = t["album_name"]
if isinstance(t.get("track_length"), int) and t["track_length"] > 0:
updates["length"] = t["track_length"] * 1000
if updates:
logger.debug(f"Musixmatch enricher: filled {list(updates.keys())}")
return updates or None
@@ -1,104 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 02:33:26
Description: Fetcher pipeline — registry and types.
"""
from __future__ import annotations
from typing import Literal, Optional
from loguru import logger
from .base import BaseFetcher
from .local import LocalFetcher
from .cache_search import CacheSearchFetcher
from .spotify import SpotifyFetcher
from .lrclib import LrclibFetcher
from .lrclib_search import LrclibSearchFetcher
from .musixmatch import MusixmatchFetcher, MusixmatchSpotifyFetcher
from .netease import NeteaseFetcher
from .qqmusic import QQMusicFetcher
from ..authenticators import (
BaseAuthenticator,
SpotifyAuthenticator,
MusixmatchAuthenticator,
QQMusicAuthenticator,
)
from ..cache import CacheEngine
from ..config import AppConfig
from ..models import TrackMeta
FetcherMethodType = Literal[
"local",
"cache-search",
"spotify",
"lrclib",
"musixmatch-spotify",
"lrclib-search",
"netease",
"qqmusic",
"musixmatch",
]
# Fetchers within a group run in parallel; groups run sequentially.
# A group that produces any trusted and synced result stops the pipeline.
_FETCHER_GROUPS: list[list[FetcherMethodType]] = [
["local"],
["cache-search"],
["spotify"],
["lrclib", "musixmatch-spotify"],
["lrclib-search", "musixmatch"],
["netease", "qqmusic"],
]
def create_fetchers(
cache: CacheEngine,
authenticators: dict[str, BaseAuthenticator],
config: AppConfig,
) -> dict[FetcherMethodType, BaseFetcher]:
"""Instantiate all fetchers. Returns a dict keyed by source name."""
spotify_auth = authenticators["spotify"]
mxm_auth = authenticators["musixmatch"]
qqmusic_auth = authenticators["qqmusic"]
assert isinstance(spotify_auth, SpotifyAuthenticator)
assert isinstance(mxm_auth, MusixmatchAuthenticator)
assert isinstance(qqmusic_auth, QQMusicAuthenticator)
g = config.general
return {
"local": LocalFetcher(g),
"cache-search": CacheSearchFetcher(cache),
"spotify": SpotifyFetcher(g, spotify_auth),
"lrclib": LrclibFetcher(g),
"musixmatch-spotify": MusixmatchSpotifyFetcher(g, mxm_auth),
"lrclib-search": LrclibSearchFetcher(g),
"netease": NeteaseFetcher(g),
"qqmusic": QQMusicFetcher(g, qqmusic_auth),
"musixmatch": MusixmatchFetcher(g, mxm_auth),
}
def build_plan(
fetchers: dict[FetcherMethodType, BaseFetcher],
track: TrackMeta,
force_method: Optional[FetcherMethodType] = None,
) -> list[list[BaseFetcher]]:
"""Return the fetch plan as a list of groups (each group runs in parallel)."""
if force_method:
if force_method not in fetchers:
logger.error(f"Unknown method: {force_method}")
return []
return [[fetchers[force_method]]]
plan: list[list[BaseFetcher]] = []
for group_methods in _FETCHER_GROUPS:
group = [
fetchers[m]
for m in group_methods
if m in fetchers and fetchers[m].is_available(track)
]
if group:
plan.append(group)
logger.debug(f"Fetch plan: {[[f.source_name for f in g] for g in plan]}")
return plan
@@ -1,70 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 02:33:26
Description: Base fetcher class and common interfaces.
"""
from __future__ import annotations
from abc import ABC, abstractmethod
from typing import Optional
from dataclasses import dataclass
from ..authenticators.base import BaseAuthenticator
from ..config import GeneralConfig
from ..models import CacheStatus, TrackMeta, LyricResult
@dataclass(frozen=True, slots=True)
class FetchResult:
synced: Optional[LyricResult] = None
unsynced: Optional[LyricResult] = None
@staticmethod
def from_not_found() -> "FetchResult":
return FetchResult(
synced=LyricResult(status=CacheStatus.NOT_FOUND, lyrics=None, source=None),
unsynced=LyricResult(
status=CacheStatus.NOT_FOUND, lyrics=None, source=None
),
)
@staticmethod
def from_network_error() -> "FetchResult":
return FetchResult(
synced=LyricResult(
status=CacheStatus.NETWORK_ERROR, lyrics=None, source=None
),
unsynced=LyricResult(
status=CacheStatus.NETWORK_ERROR, lyrics=None, source=None
),
)
class BaseFetcher(ABC):
def __init__(
self, general: GeneralConfig, auth: Optional[BaseAuthenticator] = None
) -> None:
self._general = general
self._auth = auth
@property
@abstractmethod
def source_name(self) -> str:
"""Name of the fetcher source."""
pass
@property
def self_cached(self) -> bool:
"""True if this fetcher manages its own cache (skip per-source cache check)."""
return False
@abstractmethod
def is_available(self, track: TrackMeta) -> bool:
"""Check if the fetcher is available for the given track (e.g. has required metadata)."""
pass
@abstractmethod
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
"""Fetch lyrics for the given track. Returns None if unable to fetch."""
pass
@@ -1,121 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-28 05:57:46
Description: Cache-search fetcher — cross-album fuzzy lookup in the local cache.
Searches existing cache entries by artist + title with fuzzy normalization,
ignoring album and source. Useful when the same track appears on different
albums or is played from different players.
"""
from __future__ import annotations
from typing import Optional
from loguru import logger
from .base import BaseFetcher, FetchResult
from .selection import SearchCandidate, select_best
from ..models import TrackMeta, LyricResult, CacheStatus
from ..cache import CacheEngine
from ..lrc import LRCData
class CacheSearchFetcher(BaseFetcher):
def __init__(self, cache: CacheEngine) -> None:
self._cache = cache
@property
def source_name(self) -> str:
return "cache-search"
@property
def self_cached(self) -> bool:
return True
def is_available(self, track: TrackMeta) -> bool:
return bool(track.title)
def _get_exact(self, track: TrackMeta, synced: bool) -> Optional[LyricResult]:
exact = self._cache.find_best_positive(
track,
CacheStatus.SUCCESS_SYNCED if synced else CacheStatus.SUCCESS_UNSYNCED,
)
if exact and exact.lyrics is not None:
logger.info(
f"Cache-search: exact {'synced' if synced else 'unsynced'} hit ({exact.status.value})"
)
return exact
return None
def _get_fuzzy(
self, matches: list, track: TrackMeta, synced: bool
) -> Optional[LyricResult]:
filtered = [
SearchCandidate(
item=m,
duration_ms=float(m["length"]) if m.get("length") else None,
is_synced=synced,
title=m.get("title"),
artist=m.get("artist"),
album=m.get("album"),
)
for m in matches
if m.get("lyrics")
and (
(synced and m.get("status") == CacheStatus.SUCCESS_SYNCED.value)
or (not synced and m.get("status") == CacheStatus.SUCCESS_UNSYNCED.value)
)
]
best, confidence = select_best(
filtered,
track.length,
title=track.title,
artist=track.artist,
album=track.album,
)
if best and best.get("lyrics") is not None:
status = (
CacheStatus.SUCCESS_SYNCED if synced else CacheStatus.SUCCESS_UNSYNCED
)
logger.info(
f"Cache-search: fuzzy {'synced' if synced else 'unsynced'} hit from "
f"[{best.get('source')}] album={best.get('album')!r} (confidence={confidence:.0f})"
)
return LyricResult(
status=status,
lyrics=LRCData(best["lyrics"]),
source=self.source_name,
confidence=confidence,
)
return None
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
if bypass_cache:
logger.debug("Cache-search: bypassed by caller")
return FetchResult()
if not track.title:
logger.debug("Cache-search: skipped — no title")
return FetchResult()
res_synced: Optional[LyricResult] = None
res_unsynced: Optional[LyricResult] = None
# Fast path: exact metadata match (artist+title+album), single SQL query
res_synced = self._get_exact(track, synced=True)
res_unsynced = self._get_exact(track, synced=False)
if res_synced and res_unsynced:
return FetchResult(synced=res_synced, unsynced=res_unsynced)
# Slow path: fuzzy cross-album search
matches = self._cache.search_by_meta(title=track.title, length=track.length)
if not matches:
logger.debug(f"Cache-search: no match for {track.display_name()}")
return FetchResult(synced=res_synced, unsynced=res_unsynced)
if not res_synced:
res_synced = self._get_fuzzy(matches, track, synced=True)
if not res_unsynced:
res_unsynced = self._get_fuzzy(matches, track, synced=False)
return FetchResult(synced=res_synced, unsynced=res_unsynced)
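The slot filter in `_get_fuzzy` comes down to: a candidate needs lyrics AND the status matching the requested slot. A standalone sketch with illustrative row dicts (status values here are the enum's `.value` strings):

```python
def filter_candidates(matches: list[dict], synced: bool) -> list[dict]:
    want = "SUCCESS_SYNCED" if synced else "SUCCESS_UNSYNCED"
    # A candidate must have lyrics AND the status matching the requested slot.
    return [m for m in matches if m.get("lyrics") and m.get("status") == want]

rows = [
    {"lyrics": "[00:01.00] hi", "status": "SUCCESS_SYNCED"},
    {"lyrics": "hi", "status": "SUCCESS_UNSYNCED"},
    {"lyrics": None, "status": "SUCCESS_UNSYNCED"},
]
print(len(filter_candidates(rows, synced=True)),
      len(filter_candidates(rows, synced=False)))  # 1 1
```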
@@ -1,119 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-26 02:08:41
Description: Local fetcher — reads lyrics from .lrc sidecar files or embedded audio metadata.
Priority:
1. Same-directory .lrc file (e.g. /path/to/track.lrc)
2. Embedded lyrics in audio metadata (FLAC, MP3 USLT/SYLT tags)
"""
from __future__ import annotations
from typing import Optional
from loguru import logger
from mutagen._file import File
from mutagen.flac import FLAC
from .base import BaseFetcher, FetchResult
from ..models import CacheStatus, TrackMeta, LyricResult
from ..lrc import LRCData
from ..utils import get_audio_path, get_sidecar_path
class LocalFetcher(BaseFetcher):
@property
def source_name(self) -> str:
return "local"
def is_available(self, track: TrackMeta) -> bool:
return track.is_local
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
"""Attempt to read lyrics from local filesystem."""
if not track.is_local or not track.url:
return FetchResult()
audio_path = get_audio_path(track.url, ensure_exists=False)
if not audio_path:
logger.debug(f"Local: audio URL is not a valid file path: {track.url}")
return FetchResult()
synced_result: Optional[LyricResult] = None
unsynced_result: Optional[LyricResult] = None
lrc_path = get_sidecar_path(
track.url, ensure_audio_exists=False, ensure_exists=True
)
if lrc_path:
try:
with open(lrc_path, "r", encoding="utf-8") as f:
content = f.read().strip()
if content:
lrc = LRCData(content)
status = lrc.detect_sync_status()
logger.info(
f"Local: found .lrc sidecar ({status.value}) for {audio_path.name}"
)
if status == CacheStatus.SUCCESS_SYNCED:
synced_result = LyricResult(
status=status,
lyrics=lrc,
source=f"{self.source_name} (sidecar)",
)
else:
unsynced_result = LyricResult(
status=status,
lyrics=lrc,
source=f"{self.source_name} (sidecar)",
)
except Exception as e:
logger.error(f"Local: error reading {lrc_path}: {e}")
else:
logger.debug(f"Local: no .lrc sidecar found for {audio_path}")
# Embedded metadata
if not audio_path.exists():
logger.debug(f"Local: audio file does not exist: {audio_path}")
else:
try:
audio = File(audio_path)
if audio is not None:
lyrics = None
if isinstance(audio, FLAC):
# FLAC stores lyrics in vorbis comment tags
lyrics = (
audio.get("lyrics") or audio.get("unsynclyrics") or [None]
)[0]
elif hasattr(audio, "tags") and audio.tags:
# MP3 / other: look for USLT or SYLT ID3 frames
for key in audio.tags.keys():
if key.startswith("USLT") or key.startswith("SYLT"):
lyrics = str(audio.tags[key])
break
if lyrics:
lrc = LRCData(lyrics)
status = lrc.detect_sync_status()
logger.info(
f"Local: found embedded lyrics ({status.value}) for {audio_path.name}"
)
if status == CacheStatus.SUCCESS_SYNCED and not synced_result:
synced_result = LyricResult(
status=status,
lyrics=lrc,
source=f"{self.source_name} (embedded)",
)
elif not unsynced_result:
unsynced_result = LyricResult(
status=status,
lyrics=lrc,
source=f"{self.source_name} (embedded)",
)
else:
logger.debug("Local: no embedded lyrics found")
except Exception as e:
logger.error(f"Local: error reading metadata for {audio_path}: {e}")
return FetchResult(synced=synced_result, unsynced=unsynced_result)
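The sidecar lookup above relies on the project's `get_sidecar_path` helper; the core derivation (decode the `file://` URL, swap the extension for `.lrc`) can be sketched as a hypothetical mirror, not the helper's actual signature:

```python
from pathlib import PurePosixPath
from urllib.parse import unquote, urlparse

def sidecar_path(audio_url: str) -> PurePosixPath:
    # Hypothetical mirror of the project's get_sidecar_path helper:
    # decode the file:// URL, then swap the audio extension for .lrc.
    p = PurePosixPath(unquote(urlparse(audio_url).path))
    return p.with_suffix(".lrc")

print(sidecar_path("file:///music/My%20Song.flac"))  # /music/My Song.lrc
```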
@@ -1,121 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 05:23:38
Description: LRCLIB fetcher — queries lrclib.net for synced/plain lyrics.
Requires complete track metadata (artist, title, album, duration).
"""
from __future__ import annotations
import httpx
from loguru import logger
from urllib.parse import urlencode
from .base import BaseFetcher, FetchResult
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import LRCData
from ..config import (
TTL_UNSYNCED,
TTL_NOT_FOUND,
UA_LRX,
)
_LRCLIB_API_URL = "https://lrclib.net/api/get"
def _parse_lrclib_response(data: dict) -> FetchResult:
"""Parse LRCLIB JSON response into synced/unsynced fetch result."""
synced = data.get("syncedLyrics")
unsynced = data.get("plainLyrics")
res_synced: LyricResult = LyricResult(
status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
)
res_unsynced: LyricResult = LyricResult(
status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
)
if isinstance(synced, str) and synced.strip():
lyrics = LRCData(synced)
res_synced = LyricResult(
status=CacheStatus.SUCCESS_SYNCED,
lyrics=lyrics,
source="lrclib",
)
if isinstance(unsynced, str) and unsynced.strip():
lyrics = LRCData(unsynced)
res_unsynced = LyricResult(
status=CacheStatus.SUCCESS_UNSYNCED,
lyrics=lyrics,
source="lrclib",
ttl=TTL_UNSYNCED,
)
return FetchResult(synced=res_synced, unsynced=res_unsynced)
class LrclibFetcher(BaseFetcher):
@property
def source_name(self) -> str:
return "lrclib"
def is_available(self, track: TrackMeta) -> bool:
return track.is_complete
async def _api_get(
self,
client: httpx.AsyncClient,
track: TrackMeta,
) -> httpx.Response:
"""Issue one LRCLIB get request using the same path as production fetch."""
params = {
"track_name": track.title,
"artist_name": track.artist,
"album_name": track.album,
"duration": track.length / 1000.0 if track.length else 0,
}
url = f"{_LRCLIB_API_URL}?{urlencode(params)}"
return await client.get(url, headers={"User-Agent": UA_LRX})
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
"""Fetch lyrics from LRCLIB. Requires complete metadata."""
if not track.is_complete:
logger.debug("LRCLIB: skipped — incomplete metadata")
return FetchResult()
logger.info(f"LRCLIB: fetching lyrics for {track.display_name()}")
try:
async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
resp = await self._api_get(client, track)
if resp.status_code == 404:
logger.debug(f"LRCLIB: not found for {track.display_name()}")
return FetchResult.from_not_found()
if resp.status_code != 200:
logger.error(f"LRCLIB: API returned {resp.status_code}")
return FetchResult.from_network_error()
data = resp.json()
if not isinstance(data, dict):
logger.error(f"LRCLIB: unexpected response type: {type(data).__name__}")
return FetchResult.from_network_error()
result = _parse_lrclib_response(data)
if result.synced and result.synced.lyrics:
logger.info(
f"LRCLIB: got synced lyrics ({len(result.synced.lyrics)} lines)"
)
if result.unsynced and result.unsynced.lyrics:
logger.info(
f"LRCLIB: got unsynced lyrics ({len(result.unsynced.lyrics)} lines)"
)
return result
except httpx.HTTPError as e:
logger.error(f"LRCLIB: HTTP error: {e}")
return FetchResult.from_network_error()
except Exception as e:
logger.error(f"LRCLIB: unexpected error: {e}")
return FetchResult()
@@ -1,205 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 05:30:50
Description: LRCLIB search fetcher — fuzzy search via lrclib.net /api/search.
Used when metadata is incomplete (no album or duration) but title is available.
"""
from __future__ import annotations
import asyncio
import httpx
from loguru import logger
from urllib.parse import urlencode
from .base import BaseFetcher, FetchResult
from .selection import SearchCandidate, select_best
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import LRCData
from ..config import (
TTL_UNSYNCED,
TTL_NOT_FOUND,
UA_LRX,
)
_LRCLIB_SEARCH_URL = "https://lrclib.net/api/search"
def _parse_lrclib_search_results(items: list[dict]) -> list[SearchCandidate[dict]]:
"""Map LRCLIB search JSON items to normalized SearchCandidate entries."""
return [
SearchCandidate(
item=item,
duration_ms=item["duration"] * 1000
if isinstance(item.get("duration"), (int, float))
else None,
is_synced=isinstance(item.get("syncedLyrics"), str)
and bool(item["syncedLyrics"].strip()),
title=item.get("trackName"),
artist=item.get("artistName"),
album=item.get("albumName"),
)
for item in items
]
class LrclibSearchFetcher(BaseFetcher):
@property
def source_name(self) -> str:
return "lrclib-search"
def is_available(self, track: TrackMeta) -> bool:
return bool(track.title)
def _build_queries(self, track: TrackMeta) -> list[dict[str, str]]:
"""Build up to 4 query param sets, from most specific to least.
1. title + artist + album (if all present)
2. title + artist (if artist present)
3. title + album (if album present)
4. title only
"""
assert track.title is not None
title = track.title
queries: list[dict[str, str]] = []
if track.artist and track.album:
queries.append(
{
"track_name": title,
"artist_name": track.artist,
"album_name": track.album,
}
)
if track.artist:
queries.append({"track_name": title, "artist_name": track.artist})
if track.album:
queries.append({"track_name": title, "album_name": track.album})
queries.append({"track_name": title})
return queries
async def _api_query(
self,
client: httpx.AsyncClient,
params: dict[str, str],
) -> tuple[list[dict], bool]:
"""Issue one LRCLIB search query using production request path."""
url = f"{_LRCLIB_SEARCH_URL}?{urlencode(params)}"
logger.debug(f"LRCLIB-search: query {params}")
try:
resp = await client.get(url, headers={"User-Agent": UA_LRX})
except httpx.HTTPError as e:
logger.error(f"LRCLIB-search: HTTP error: {e}")
return [], True
if resp.status_code != 200:
logger.error(f"LRCLIB-search: API returned {resp.status_code}")
return [], True
data = resp.json()
if not isinstance(data, list):
return [], False
return [item for item in data if isinstance(item, dict)], False
async def _api_candidates(
self,
client: httpx.AsyncClient,
track: TrackMeta,
) -> tuple[list[dict], bool]:
"""Request and merge LRCLIB-search candidates using built-in query strategy."""
queries = self._build_queries(track)
all_results = await asyncio.gather(
*(self._api_query(client, p) for p in queries)
)
seen_ids: set[int] = set()
candidates: list[dict] = []
had_error = False
for items, err in all_results:
if err:
had_error = True
for item in items:
item_id = item.get("id")
if item_id is not None and item_id in seen_ids:
continue
if item_id is not None:
seen_ids.add(item_id)
candidates.append(item)
return candidates, had_error
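The id-based merge in `_api_candidates` can be sketched in isolation (toy data with hypothetical ids; same rule: keep the first occurrence of each id, pass through items without an id):

```python
# Merge results from several queries, deduplicating by "id" while keeping
# insertion order; items lacking an id are always kept.
results = [[{"id": 1}, {"id": 2}], [{"id": 2}, {"id": 3}, {"name": "no-id"}]]
seen: set[int] = set()
merged: list[dict] = []
for items in results:
    for item in items:
        item_id = item.get("id")
        if item_id is not None and item_id in seen:
            continue
        if item_id is not None:
            seen.add(item_id)
        merged.append(item)
print([i.get("id") for i in merged])  # [1, 2, 3, None]
```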
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
if not track.title:
logger.debug("LRCLIB-search: skipped — no title")
return FetchResult()
logger.info(f"LRCLIB-search: searching for {track.display_name()}")
try:
async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
candidates, had_error = await self._api_candidates(client, track)
if not candidates:
if had_error:
return FetchResult.from_network_error()
logger.debug(f"LRCLIB-search: no results for {track.display_name()}")
return FetchResult.from_not_found()
logger.debug(
f"LRCLIB-search: got {len(candidates)} unique candidates "
f"from {len(self._build_queries(track))} queries"
)
mapped = _parse_lrclib_search_results(candidates)
best, confidence = select_best(
mapped,
track.length,
title=track.title,
artist=track.artist,
album=track.album,
)
if best is None:
logger.debug("LRCLIB-search: no valid candidate found")
return FetchResult.from_not_found()
synced = best.get("syncedLyrics")
unsynced = best.get("plainLyrics")
res_synced: LyricResult = LyricResult(
status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
)
res_unsynced: LyricResult = LyricResult(
status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
)
if isinstance(synced, str) and synced.strip():
lyrics = LRCData(synced)
logger.info(
f"LRCLIB-search: got synced lyrics ({len(lyrics)} lines, confidence={confidence:.0f})"
)
res_synced = LyricResult(
status=CacheStatus.SUCCESS_SYNCED,
lyrics=lyrics,
source=self.source_name,
confidence=confidence,
)
if isinstance(unsynced, str) and unsynced.strip():
lyrics = LRCData(unsynced)
logger.info(
f"LRCLIB-search: got unsynced lyrics ({len(lyrics)} lines, confidence={confidence:.0f})"
)
res_unsynced = LyricResult(
status=CacheStatus.SUCCESS_UNSYNCED,
lyrics=lyrics,
source=self.source_name,
ttl=TTL_UNSYNCED,
confidence=confidence,
)
return FetchResult(synced=res_synced, unsynced=res_unsynced)
except httpx.HTTPError as e:
logger.error(f"LRCLIB-search: HTTP error: {e}")
return FetchResult.from_network_error()
except Exception as e:
logger.error(f"LRCLIB-search: unexpected error: {e}")
return FetchResult()
@@ -1,366 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-04 15:28:34
Description: Musixmatch fetchers (desktop API, anonymous or usertoken auth).
Uses the Musixmatch desktop API (apic-desktop.musixmatch.com).
Token and all HTTP calls are managed by MusixmatchAuthenticator.
Two fetchers:
musixmatch-spotify — direct lookup by Spotify track ID (exact, no search)
musixmatch — metadata search + best-candidate fallback
"""
from __future__ import annotations
import json
from typing import Optional
from loguru import logger
from .base import BaseFetcher, FetchResult
from .selection import SearchCandidate, select_best
from ..authenticators.musixmatch import MusixmatchAuthenticator
from ..config import GeneralConfig
from ..lrc import LRCData
from ..models import CacheStatus, LyricResult, TrackMeta
_MUSIXMATCH_MACRO_URL = "https://apic-desktop.musixmatch.com/ws/1.1/macro.subtitles.get"
_MUSIXMATCH_SEARCH_URL = "https://apic-desktop.musixmatch.com/ws/1.1/track.search"
# Macro-specific params (format/app_id injected by authenticator)
_MXM_MACRO_PARAMS = {
"namespace": "lyrics_richsynched",
"subtitle_format": "mxm",
"optional_calls": "track.richsync",
}
def _format_ts(s: float) -> str:
mm = int(s) // 60
ss = int(s) % 60
cs = min(round((s % 1) * 100), 99)
return f"[{mm:02d}:{ss:02d}.{cs:02d}]"
def _parse_richsync(body: str) -> Optional[str]:
"""Parse richsync JSON body → LRC text. Each entry: {"ts": float, "x": str}."""
try:
data = json.loads(body)
if not isinstance(data, list):
return None
lines = []
for entry in data:
if not isinstance(entry, dict):
continue
ts = entry.get("ts")
x = entry.get("x")
if not isinstance(ts, (int, float)) or not isinstance(x, str):
continue
lines.append(f"{_format_ts(float(ts))}{x}")
return "\n".join(lines) if lines else None
except Exception:
return None
def _parse_subtitle(body: str) -> Optional[str]:
"""Parse subtitle JSON body → LRC text. Each entry: {"text": str, "time": {"total": float}}."""
try:
data = json.loads(body)
if not isinstance(data, list):
return None
lines = []
for entry in data:
if not isinstance(entry, dict):
continue
text = entry.get("text")
time_obj = entry.get("time")
if not isinstance(text, str) or not isinstance(time_obj, dict):
continue
total = time_obj.get("total")
if not isinstance(total, (int, float)):
continue
lines.append(f"{_format_ts(float(total))}{text}")
return "\n".join(lines) if lines else None
except Exception:
return None
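Both parsers above rely on the timestamp math in `_format_ts`. A standalone sketch of the richsync-to-LRC conversion (minimal reimplementation for illustration, not the production code path):

```python
import json

def format_ts(s: float) -> str:
    # Same math as _format_ts: minutes, seconds, centiseconds clamped to 99
    mm = int(s) // 60
    ss = int(s) % 60
    cs = min(round((s % 1) * 100), 99)
    return f"[{mm:02d}:{ss:02d}.{cs:02d}]"

body = json.dumps([{"ts": 12.5, "x": "Hello"}, {"ts": 75.04, "x": "World"}])
lines = [f"{format_ts(e['ts'])}{e['x']}" for e in json.loads(body)]
print("\n".join(lines))  # [00:12.50]Hello then [01:15.04]World
```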
def _parse_mxm_macro(data: dict) -> LRCData | None:
"""Parse macro.subtitles.get payload into LRCData (richsync preferred)."""
body = data.get("message", {}).get("body", {})
if not isinstance(body, dict):
return None
macro_calls = body.get("macro_calls", {})
if not isinstance(macro_calls, dict):
return None
richsync_msg = macro_calls.get("track.richsync.get", {}).get("message", {})
if (
isinstance(richsync_msg, dict)
and richsync_msg.get("header", {}).get("status_code") == 200
):
richsync_body = (
richsync_msg.get("body", {}).get("richsync", {}).get("richsync_body")
)
if isinstance(richsync_body, str):
lrc_text = _parse_richsync(richsync_body)
if lrc_text:
lrc = LRCData(lrc_text)
if lrc:
return lrc
subtitle_msg = macro_calls.get("track.subtitles.get", {}).get("message", {})
if (
isinstance(subtitle_msg, dict)
and subtitle_msg.get("header", {}).get("status_code") == 200
):
subtitle_list = subtitle_msg.get("body", {}).get("subtitle_list", [])
if isinstance(subtitle_list, list) and subtitle_list:
subtitle_body = subtitle_list[0].get("subtitle", {}).get("subtitle_body")
if isinstance(subtitle_body, str):
lrc_text = _parse_subtitle(subtitle_body)
if lrc_text:
lrc = LRCData(lrc_text)
if lrc:
return lrc
return None
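For reference, the nesting that `_parse_mxm_macro` walks on the subtitle fallback path, exercised against a minimal fabricated payload (field names taken from the code above; the lyric body itself is made up):

```python
# Fabricated macro.subtitles.get payload with only the subtitle branch populated
payload = {
    "message": {"body": {"macro_calls": {
        "track.subtitles.get": {"message": {
            "header": {"status_code": 200},
            "body": {"subtitle_list": [
                {"subtitle": {"subtitle_body": '[{"text": "Hi", "time": {"total": 1.5}}]'}}
            ]},
        }},
    }}}
}
macro_calls = payload["message"]["body"]["macro_calls"]
msg = macro_calls["track.subtitles.get"]["message"]
assert msg["header"]["status_code"] == 200
subtitle_body = msg["body"]["subtitle_list"][0]["subtitle"]["subtitle_body"]
print(subtitle_body)  # raw JSON string, handed to _parse_subtitle downstream
```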
def _parse_mxm_search(data: dict) -> list[SearchCandidate[int]]:
"""Parse track.search payload to normalized candidates."""
track_list = data.get("message", {}).get("body", {}).get("track_list", [])
if not isinstance(track_list, list) or not track_list:
return []
return [
SearchCandidate(
item=int(t["commontrack_id"]),
duration_ms=(
float(t["track_length"]) * 1000 if t.get("track_length") else None
),
is_synced=bool(t.get("has_subtitles") or t.get("has_richsync")),
title=t.get("track_name"),
artist=t.get("artist_name"),
album=t.get("album_name"),
)
for item in track_list
if isinstance(item, dict)
and isinstance(t := item.get("track", {}), dict)
and isinstance(t.get("commontrack_id"), int)
and not t.get("instrumental")
]
class MusixmatchSpotifyFetcher(BaseFetcher):
"""Direct lookup by Spotify track ID — no search, single request."""
_auth: MusixmatchAuthenticator
def __init__(self, general: GeneralConfig, auth: MusixmatchAuthenticator) -> None:
super().__init__(general, auth)
@property
def source_name(self) -> str:
return "musixmatch-spotify"
def is_available(self, track: TrackMeta) -> bool:
return bool(track.trackid) and not self._auth.is_cooldown()
async def _api_macro(self, params: dict) -> dict | None:
"""Request macro payload through authenticator using production path."""
return await self._auth.get_json(
_MUSIXMATCH_MACRO_URL, {**_MXM_MACRO_PARAMS, **params}
)
async def _api_macro_track(self, track: TrackMeta) -> dict | None:
"""Request macro payload for one track using Spotify ID lookup path."""
if not track.trackid:
return None
return await self._api_macro({"track_spotify_id": track.trackid})
async def _fetch_macro(self, params: dict) -> LRCData | None:
"""Request and parse Musixmatch macro lyrics payload."""
logger.debug(f"Musixmatch: macro call with {list(params.keys())}")
data = await self._api_macro(params)
if data is None:
return None
lrc = _parse_mxm_macro(data)
if lrc is None:
logger.debug("Musixmatch: no usable lyrics in macro response")
return None
logger.debug("Musixmatch: parsed macro lyrics")
return lrc
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
logger.info(f"Musixmatch-Spotify: fetching lyrics for {track.display_name()}")
try:
lrc = await self._fetch_macro({"track_spotify_id": track.trackid}) # type: ignore[dict-item]
except AttributeError:
return FetchResult.from_not_found()
except Exception as e:
logger.error(f"Musixmatch-Spotify: fetch failed: {e}")
return FetchResult.from_network_error()
if lrc is None:
logger.debug(
f"Musixmatch-Spotify: no lyrics found for {track.display_name()}"
)
return FetchResult.from_not_found()
logger.info(f"Musixmatch-Spotify: got SUCCESS_SYNCED lyrics ({len(lrc)} lines)")
return FetchResult(
synced=LyricResult(
status=CacheStatus.SUCCESS_SYNCED,
lyrics=lrc,
source=self.source_name,
),
# This endpoint cannot return unsynced lyrics, so there are no repeated
# failed attempts to suppress and no reason to cache NOT_FOUND here
unsynced=None,
)
class MusixmatchFetcher(BaseFetcher):
"""Metadata search + best-candidate lyric fetch."""
_auth: MusixmatchAuthenticator
def __init__(self, general: GeneralConfig, auth: MusixmatchAuthenticator) -> None:
super().__init__(general, auth)
@property
def source_name(self) -> str:
return "musixmatch"
@property
def requires_auth(self) -> str:
return "musixmatch"
def is_available(self, track: TrackMeta) -> bool:
return bool(track.title) and not self._auth.is_cooldown()
async def _api_search(self, params: dict) -> dict | None:
"""Request search payload through authenticator using production path."""
return await self._auth.get_json(_MUSIXMATCH_SEARCH_URL, params)
def _build_search_params(self, track: TrackMeta) -> dict[str, str]:
"""Build Musixmatch search params for one track."""
params: dict[str, str] = {
"q_track": track.title or "",
"page_size": "10",
"f_has_lyrics": "1",
}
if track.artist:
params["q_artist"] = track.artist
if track.album:
params["q_album"] = track.album
return params
async def _api_search_track(self, track: TrackMeta) -> dict | None:
"""Request search payload for one track using production path."""
return await self._api_search(self._build_search_params(track))
async def _api_macro(self, params: dict) -> dict | None:
"""Request macro payload through authenticator using production path."""
return await self._auth.get_json(
_MUSIXMATCH_MACRO_URL, {**_MXM_MACRO_PARAMS, **params}
)
async def _api_macro_track(self, track: TrackMeta) -> dict | None:
"""Request macro payload for top-ranked search candidate of one track."""
search_data = await self._api_search_track(track)
if search_data is None:
return None
candidates = _parse_mxm_search(search_data)
if not candidates:
return None
commontrack_id, _confidence = select_best(
candidates,
track.length,
title=track.title,
artist=track.artist,
album=track.album,
)
if commontrack_id is None:
return None
return await self._api_macro({"commontrack_id": str(commontrack_id)})
async def _fetch_macro(self, params: dict) -> LRCData | None:
"""Request and parse Musixmatch macro lyrics payload."""
logger.debug(f"Musixmatch: macro call with {list(params.keys())}")
data = await self._api_macro(params)
if data is None:
return None
lrc = _parse_mxm_macro(data)
if lrc is None:
logger.debug("Musixmatch: no usable lyrics in macro response")
return None
logger.debug("Musixmatch: parsed macro lyrics")
return lrc
async def _search(self, track: TrackMeta) -> tuple[Optional[int], float]:
"""Search for track metadata. Raises on network/HTTP errors."""
logger.debug(f"Musixmatch: searching for '{track.display_name()}'")
data = await self._api_search_track(track)
if data is None:
return None, 0.0
candidates = _parse_mxm_search(data)
if not candidates:
logger.debug("Musixmatch: search returned 0 results")
return None, 0.0
logger.debug(f"Musixmatch: search returned {len(candidates)} candidates")
best_id, confidence = select_best(
candidates,
track.length,
title=track.title,
artist=track.artist,
album=track.album,
)
if best_id is not None:
logger.debug(f"Musixmatch: best candidate id={best_id} ({confidence:.0f})")
else:
logger.debug("Musixmatch: no suitable candidate found")
return best_id, confidence
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
logger.info(f"Musixmatch: fetching lyrics for {track.display_name()}")
try:
commontrack_id, confidence = await self._search(track)
if commontrack_id is None:
logger.debug(f"Musixmatch: no match found for {track.display_name()}")
return FetchResult.from_not_found()
lrc = await self._fetch_macro({"commontrack_id": str(commontrack_id)})
except AttributeError:
return FetchResult.from_not_found()
except Exception as e:
logger.error(f"Musixmatch: fetch failed: {e}")
return FetchResult.from_network_error()
if lrc is None:
logger.debug(f"Musixmatch: no lyrics for commontrack_id={commontrack_id}")
return FetchResult.from_not_found()
logger.info(
f"Musixmatch: got SUCCESS_SYNCED lyrics "
f"for commontrack_id={commontrack_id} ({len(lrc)} lines)"
)
return FetchResult(
synced=LyricResult(
status=CacheStatus.SUCCESS_SYNCED,
lyrics=lrc,
source=self.source_name,
confidence=confidence,
),
# Same as above
unsynced=None,
)
@@ -1,298 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 11:04:51
Description: Netease Cloud Music fetcher.
Uses the public cloudsearch API for searching and the song/lyric API for
retrieving lyrics. No authentication required.
"""
from __future__ import annotations
import asyncio
import httpx
from loguru import logger
from .base import BaseFetcher, FetchResult
from .selection import SearchCandidate, select_ranked
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import LRCData
from ..config import (
TTL_NOT_FOUND,
MULTI_CANDIDATE_DELAY_S,
UA_BROWSER,
)
_NETEASE_SEARCH_URL = "https://music.163.com/api/cloudsearch/pc"
_NETEASE_LYRIC_URL = "https://interface3.music.163.com/api/song/lyric"
_NETEASE_BASE_HEADERS = {
"User-Agent": UA_BROWSER,
"Referer": "https://music.163.com/",
"Origin": "https://music.163.com",
}
def _parse_netease_search(data: dict) -> list[SearchCandidate[int]]:
"""Parse Netease search response into scored candidates."""
result_body = data.get("result")
if not isinstance(result_body, dict):
return []
songs = result_body.get("songs")
if not isinstance(songs, list) or len(songs) == 0:
return []
return [
SearchCandidate(
item=song_id,
duration_ms=float(song["dt"]) if isinstance(song.get("dt"), int) else None,
title=song.get("name"),
artist=", ".join(a.get("name", "") for a in song.get("ar", [])) or None,
album=(song.get("al") or {}).get("name"),
)
for song in songs
if isinstance(song, dict) and isinstance(song_id := song.get("id"), int)
]
def _parse_netease_lyrics(data: dict) -> LRCData | None:
"""Parse Netease lyric response to LRCData."""
lrc_obj = data.get("lrc")
if not isinstance(lrc_obj, dict):
return None
lrc = lrc_obj.get("lyric", "")
if not isinstance(lrc, str) or not lrc.strip():
return None
return LRCData(lrc)
class NeteaseFetcher(BaseFetcher):
@property
def source_name(self) -> str:
return "netease"
def is_available(self, track: TrackMeta) -> bool:
return bool(track.title)
async def _api_search(
self,
client: httpx.AsyncClient,
query: str,
limit: int,
) -> dict | None:
"""Issue one Netease search request and return JSON payload."""
resp = await client.post(
_NETEASE_SEARCH_URL,
headers=_NETEASE_BASE_HEADERS,
data={"s": query, "type": "1", "limit": str(limit), "offset": "0"},
)
resp.raise_for_status()
data = resp.json()
if not isinstance(data, dict):
return None
return data
async def _api_search_track(
self,
client: httpx.AsyncClient,
track: TrackMeta,
limit: int,
) -> dict | None:
"""Request Netease search payload for one track using production query strategy."""
query = f"{track.artist or ''} {track.title or ''}".strip()
if not query:
return None
return await self._api_search(client, query, limit)
async def _api_lyric(
self,
client: httpx.AsyncClient,
song_id: int,
) -> dict | None:
"""Issue one Netease lyric request and return JSON payload."""
resp = await client.post(
_NETEASE_LYRIC_URL,
headers=_NETEASE_BASE_HEADERS,
data={
"id": str(song_id),
"cp": "false",
"tv": "0",
"lv": "0",
"rv": "0",
"kv": "0",
"yv": "0",
"ytv": "0",
"yrv": "0",
},
)
resp.raise_for_status()
data = resp.json()
if not isinstance(data, dict):
return None
return data
async def _api_lyric_track(
self,
client: httpx.AsyncClient,
track: TrackMeta,
limit: int,
) -> dict | None:
"""Request lyric payload for top-ranked candidate of a track."""
search_data = await self._api_search_track(client, track, limit)
if search_data is None:
return None
candidates = _parse_netease_search(search_data)
if not candidates:
return None
ranked = select_ranked(
candidates,
track.length,
title=track.title,
artist=track.artist,
album=track.album,
)
if not ranked:
return None
top_song_id = ranked[0][0]
return await self._api_lyric(client, top_song_id)
async def _search(
self, track: TrackMeta, limit: int = 10
) -> list[tuple[int, float]]:
query = f"{track.artist or ''} {track.title or ''}".strip()
if not query:
return []
logger.debug(f"Netease: searching for '{query}' (limit={limit})")
try:
async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
result = await self._api_search_track(client, track, limit)
if result is None:
logger.error("Netease: search returned non-dict payload")
return []
candidates = _parse_netease_search(result)
if not candidates:
logger.debug("Netease: search returned 0 results")
return []
logger.debug(f"Netease: search returned {len(candidates)} candidates")
ranked = select_ranked(
candidates,
track.length,
title=track.title,
artist=track.artist,
album=track.album,
)
if ranked:
logger.debug(
"Netease: top candidates: "
+ ", ".join(f"id={i} ({c:.0f})" for i, c in ranked)
)
else:
logger.debug("Netease: no suitable candidate found")
return ranked
except Exception as e:
logger.error(f"Netease: search failed: {e}")
return []
async def _get_lyric(self, song_id: int, confidence: float = 0.0) -> FetchResult:
logger.debug(f"Netease: fetching lyrics for song_id={song_id}")
try:
async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
data = await self._api_lyric(client, song_id)
if data is None:
logger.error("Netease: lyric response is not a dict")
return FetchResult.from_network_error()
lrcdata = _parse_netease_lyrics(data)
if lrcdata is None:
logger.debug(f"Netease: empty lyrics for song_id={song_id}")
return FetchResult.from_not_found()
status = lrcdata.detect_sync_status()
logger.info(
f"Netease: got {status.value} lyrics for song_id={song_id} "
f"({len(lrcdata)} lines)"
)
not_found = LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
if status == CacheStatus.SUCCESS_SYNCED:
return FetchResult(
synced=LyricResult(
status=CacheStatus.SUCCESS_SYNCED,
lyrics=lrcdata,
source=self.source_name,
confidence=confidence,
),
unsynced=not_found,
)
return FetchResult(
synced=not_found,
unsynced=LyricResult(
status=CacheStatus.SUCCESS_UNSYNCED,
lyrics=lrcdata,
source=self.source_name,
confidence=confidence,
),
)
except Exception as e:
logger.error(f"Netease: lyric fetch failed for song_id={song_id}: {e}")
return FetchResult.from_network_error()
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
query = f"{track.artist or ''} {track.title or ''}".strip()
if not query:
logger.debug("Netease: skipped — insufficient metadata")
return FetchResult()
logger.info(f"Netease: fetching lyrics for {track.display_name()}")
candidates = await self._search(track)
if not candidates:
logger.debug(f"Netease: no match found for {track.display_name()}")
return FetchResult.from_not_found()
res_synced: LyricResult = LyricResult(
status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
)
res_unsynced: LyricResult = LyricResult(
status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
)
for i, (song_id, confidence) in enumerate(candidates):
if i > 0:
await asyncio.sleep(MULTI_CANDIDATE_DELAY_S)
result = await self._get_lyric(song_id, confidence=confidence)
if result.synced and result.synced.status == CacheStatus.NETWORK_ERROR:
return result
if result.unsynced and result.unsynced.status == CacheStatus.NETWORK_ERROR:
return result
if (
res_synced.status == CacheStatus.NOT_FOUND
and result.synced
and result.synced.status == CacheStatus.SUCCESS_SYNCED
):
res_synced = result.synced
if (
res_unsynced.status == CacheStatus.NOT_FOUND
and result.unsynced
and result.unsynced.status == CacheStatus.SUCCESS_UNSYNCED
):
res_unsynced = result.unsynced
# Netease API is quite expensive, so we stop after finding synced lyrics,
# instead of trying to find both synced and unsynced versions
if (
res_synced.status == CacheStatus.SUCCESS_SYNCED
# and res_unsynced.status == CacheStatus.SUCCESS_UNSYNCED
):
break
return FetchResult(synced=res_synced, unsynced=res_unsynced)
@@ -1,249 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-31 01:54:02
Description: QQ Music fetcher via self-hosted API proxy.
Requires a running qq-music-api instance.
The base URL is read from the QQ_MUSIC_API_URL environment variable.
Search → pick best match → fetch LRC lyrics.
"""
from __future__ import annotations
import asyncio
from loguru import logger
from .base import BaseFetcher, FetchResult
from .selection import SearchCandidate, select_ranked
from ..authenticators import QQMusicAuthenticator
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import LRCData
from ..config import (
GeneralConfig,
TTL_NOT_FOUND,
MULTI_CANDIDATE_DELAY_S,
)
def _parse_qq_search(data: dict) -> list[SearchCandidate[str]]:
"""Parse QQMusic search response into normalized candidates."""
if data.get("code") != 0:
return []
songs = data.get("data", {}).get("list", [])
if not isinstance(songs, list):
return []
return [
SearchCandidate(
item=mid,
duration_ms=float(song["interval"]) * 1000
if isinstance(song.get("interval"), int)
else None,
title=song.get("name"),
artist=", ".join(s.get("name", "") for s in song.get("singer", [])) or None,
album=(song.get("album") or {}).get("name"),
)
for song in songs
if isinstance(song, dict) and isinstance(mid := song.get("mid"), str)
]
def _parse_qq_lyrics(data: dict) -> LRCData | None:
"""Parse QQMusic lyric response to LRCData."""
if data.get("code") != 0:
return None
lrc = data.get("data", {}).get("lyric", "")
if not isinstance(lrc, str) or not lrc.strip():
return None
return LRCData(lrc)
class QQMusicFetcher(BaseFetcher):
_auth: QQMusicAuthenticator
def __init__(self, general: GeneralConfig, auth: QQMusicAuthenticator) -> None:
super().__init__(general, auth)
@property
def source_name(self) -> str:
return "qqmusic"
def is_available(self, track: TrackMeta) -> bool:
return bool(track.title) and self._auth.is_configured()
async def _api_search(
self,
track: TrackMeta,
limit: int,
) -> dict | None:
"""Return raw QQMusic search payload for one track."""
query = f"{track.artist or ''} {track.title or ''}".strip()
if not query:
return None
data = await self._auth.search(query, limit)
if not isinstance(data, dict):
return None
return data
async def _api_lyric(
self,
mid: str,
) -> dict | None:
"""Return raw QQMusic lyric payload for one song MID."""
data = await self._auth.get_lyric(mid)
if not isinstance(data, dict):
return None
return data
async def _api_lyric_track(
self,
track: TrackMeta,
limit: int,
) -> dict | None:
"""Return raw QQMusic lyric payload for top-ranked search candidate."""
search_data = await self._api_search(track, limit)
if search_data is None:
return None
candidates = _parse_qq_search(search_data)
if not candidates:
return None
ranked = select_ranked(
candidates,
track.length,
title=track.title,
artist=track.artist,
album=track.album,
)
if not ranked:
return None
mid = ranked[0][0]
return await self._api_lyric(mid)
async def _search(
self, track: TrackMeta, limit: int = 10
) -> list[tuple[str, float]]:
query = f"{track.artist or ''} {track.title or ''}".strip()
logger.debug(f"QQMusic: searching for '{query}' (limit={limit})")
search_data = await self._api_search(track, limit)
if search_data is None:
return []
candidates = _parse_qq_search(search_data)
if not candidates:
logger.debug("QQMusic: search returned 0 results")
return []
logger.debug(f"QQMusic: search returned {len(candidates)} candidates")
ranked = select_ranked(
candidates,
track.length,
title=track.title,
artist=track.artist,
album=track.album,
)
if ranked:
logger.debug(
"QQMusic: top candidates: "
+ ", ".join(f"mid={m} ({c:.0f})" for m, c in ranked)
)
else:
logger.debug("QQMusic: no suitable candidate found")
return ranked
async def _get_lyric(self, mid: str, confidence: float = 0.0) -> FetchResult:
logger.debug(f"QQMusic: fetching lyrics for mid={mid}")
data = await self._api_lyric(mid)
if data is None:
return FetchResult.from_network_error()
lrcdata = _parse_qq_lyrics(data)
if lrcdata is None:
logger.debug(f"QQMusic: empty lyrics for mid={mid}")
return FetchResult.from_not_found()
status = lrcdata.detect_sync_status()
logger.info(
f"QQMusic: got {status.value} lyrics for mid={mid} ({len(lrcdata)} lines)"
)
not_found = LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
if status == CacheStatus.SUCCESS_SYNCED:
return FetchResult(
synced=LyricResult(
status=CacheStatus.SUCCESS_SYNCED,
lyrics=lrcdata,
source=self.source_name,
confidence=confidence,
),
unsynced=not_found,
)
return FetchResult(
synced=not_found,
unsynced=LyricResult(
status=CacheStatus.SUCCESS_UNSYNCED,
lyrics=lrcdata,
source=self.source_name,
confidence=confidence,
),
)
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
if not self._auth.is_configured():
logger.debug("QQMusic: skipped — auth not configured")
return FetchResult()
query = f"{track.artist or ''} {track.title or ''}".strip()
if not query:
logger.debug("QQMusic: skipped — insufficient metadata")
return FetchResult()
logger.info(f"QQMusic: fetching lyrics for {track.display_name()}")
candidates = await self._search(track)
if not candidates:
logger.debug(f"QQMusic: no match found for {track.display_name()}")
return FetchResult.from_not_found()
res_synced: LyricResult = LyricResult(
status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
)
res_unsynced: LyricResult = LyricResult(
status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
)
for i, (mid, confidence) in enumerate(candidates):
if i > 0:
await asyncio.sleep(MULTI_CANDIDATE_DELAY_S)
result = await self._get_lyric(mid, confidence=confidence)
if result.synced and result.synced.status == CacheStatus.NETWORK_ERROR:
return result
if result.unsynced and result.unsynced.status == CacheStatus.NETWORK_ERROR:
return result
if (
res_synced.status == CacheStatus.NOT_FOUND
and result.synced
and result.synced.status == CacheStatus.SUCCESS_SYNCED
):
res_synced = result.synced
if (
res_unsynced.status == CacheStatus.NOT_FOUND
and result.unsynced
and result.unsynced.status == CacheStatus.SUCCESS_UNSYNCED
):
res_unsynced = result.unsynced
# QQMusic API is quite expensive, so we stop after finding synced lyrics,
# instead of trying to find both synced and unsynced versions
if (
res_synced.status == CacheStatus.SUCCESS_SYNCED
# and res_unsynced.status == CacheStatus.SUCCESS_UNSYNCED
):
break
return FetchResult(synced=res_synced, unsynced=res_unsynced)
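The candidate loop above rate-limits follow-up requests and stops as soon as a synced result lands, keeping the first unsynced hit as a fallback. A minimal self-contained sketch of that stop-early pattern (function names, result shapes, and the delay value are illustrative, not the project's API):

```python
import asyncio

CANDIDATE_DELAY_S = 0.01  # assumed small delay for the sketch


async def first_synced(candidates, get_lyric):
    """Try candidates in ranked order; pause between requests and
    stop as soon as a synced result is found."""
    best_unsynced = None
    for i, cand in enumerate(candidates):
        if i > 0:
            await asyncio.sleep(CANDIDATE_DELAY_S)  # rate-limit follow-up calls
        kind, lyrics = await get_lyric(cand)
        if kind == "synced":
            return kind, lyrics
        if kind == "unsynced" and best_unsynced is None:
            best_unsynced = lyrics
    return ("unsynced", best_unsynced) if best_unsynced else ("not_found", None)


async def demo():
    calls = []

    async def fake(mid):
        calls.append(mid)
        return ("synced", f"lrc:{mid}") if mid == "b" else ("unsynced", f"txt:{mid}")

    result = await first_synced(["a", "b", "c"], fake)
    return calls, result


calls, result = asyncio.run(demo())
print(calls, result)  # candidate "c" is never fetched
```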
@@ -1,214 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-04 11:32:23
Description: Shared candidate-selection logic for search-based fetchers.
Each fetcher maps its API-specific results to SearchCandidate, then calls
select_best() which scores candidates by metadata similarity, duration
proximity, and sync status.
"""
from __future__ import annotations
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar
from ..config import (
DURATION_TOLERANCE_MS,
MULTI_CANDIDATE_LIMIT,
SCORE_W_TITLE as _W_TITLE,
SCORE_W_ARTIST as _W_ARTIST,
SCORE_W_ALBUM as _W_ALBUM,
SCORE_W_DURATION as _W_DURATION,
SCORE_W_SYNCED as _W_SYNCED,
MIN_CONFIDENCE,
)
from ..normalize import normalize_for_match, normalize_artist
T = TypeVar("T")
@dataclass
class SearchCandidate(Generic[T]):
"""A normalized search result for best-match selection.
Attributes:
item: The original API-specific object (dict, ID, etc.)
duration_ms: Track duration in milliseconds, or None if unknown.
is_synced: Whether this candidate is known to have synced lyrics.
title: Candidate track title for similarity scoring.
artist: Candidate artist name for similarity scoring.
album: Candidate album name for similarity scoring.
"""
item: T
duration_ms: Optional[float] = None
is_synced: bool = False
title: Optional[str] = None
artist: Optional[str] = None
album: Optional[str] = None
def _text_similarity(a: str, b: str) -> float:
"""Compare two normalized strings. Returns 0.0-1.0."""
if a == b:
return 1.0
if not a or not b:
return 0.0
# Containment: one is a substring of the other (e.g. "My Love" vs "My Love (Album Version)")
if a in b or b in a:
return min(len(a), len(b)) / max(len(a), len(b))
return 0.0
def _score_candidate(
c: SearchCandidate[T],
ref_title: Optional[str],
ref_artist: Optional[str],
ref_album: Optional[str],
ref_length_ms: Optional[int],
) -> float:
"""Score a candidate from 0-100 based on metadata match quality.
Scoring works in two tiers:
Metadata score computed from fields available on both sides,
then rescaled to fill the 0-90 range so that missing fields don't
inflate the score. Fields missing on both sides are simply excluded
from the calculation (neutral). Fields present on only one side
contribute 0 to the numerator but their weight still counts in the
denominator (penalty for asymmetric absence).
Field weights (before rescaling):
- Title: 40
- Artist: 30
- Album: 10
- Duration: 10 (only when reference track has duration; hard mismatch is
pre-filtered before scoring)
"""
raw = 0.0
available_weight = 0.0
# Title
if ref_title is not None or c.title is not None:
available_weight += _W_TITLE
if ref_title is not None and c.title is not None:
raw += _W_TITLE * _text_similarity(
normalize_for_match(ref_title), normalize_for_match(c.title)
)
# else both None → excluded
# Artist
if ref_artist is not None or c.artist is not None:
available_weight += _W_ARTIST
if ref_artist is not None and c.artist is not None:
na = normalize_artist(ref_artist)
nb = normalize_artist(c.artist)
if na == nb:
raw += _W_ARTIST
else:
raw += _W_ARTIST * _text_similarity(
normalize_for_match(ref_artist), normalize_for_match(c.artist)
)
# Album
if ref_album is not None or c.album is not None:
available_weight += _W_ALBUM
if ref_album is not None and c.album is not None:
raw += _W_ALBUM * _text_similarity(
normalize_for_match(ref_album), normalize_for_match(c.album)
)
# Duration — only counted when the reference track has duration.
# If the candidate also has duration, it contributes positively when matching
# (hard mismatch is already filtered upstream in select_best).
# If the candidate lacks duration, it contributes 0 to raw but still counts
# in available_weight (penalty for missing verifiable info).
# If the reference has no duration, duration is excluded entirely (neutral).
if ref_length_ms is not None:
available_weight += _W_DURATION
if c.duration_ms is not None:
diff = abs(c.duration_ms - ref_length_ms)
if diff <= DURATION_TOLERANCE_MS:
raw += _W_DURATION * (1.0 - diff / DURATION_TOLERANCE_MS)
# Rescale metadata to 0-90 range
_MAX_METADATA = _W_TITLE + _W_ARTIST + _W_ALBUM + _W_DURATION # 90
if available_weight > 0:
metadata_score = (raw / available_weight) * _MAX_METADATA
else:
# No comparable fields at all — only synced bonus matters
metadata_score = 0.0
# Synced bonus (always 10 pts, independent of metadata)
# synced_score = _W_SYNCED if c.is_synced else 0.0
# EDIT: synced or not should not affect the score that indicates metadata similarity.
# Always apply synced bonus regardless of is_synced.
synced_score = _W_SYNCED
return metadata_score + synced_score
def select_ranked(
candidates: list[SearchCandidate[T]],
track_length_ms: Optional[int] = None,
*,
title: Optional[str] = None,
artist: Optional[str] = None,
album: Optional[str] = None,
min_confidence: float = MIN_CONFIDENCE,
max_results: int = MULTI_CANDIDATE_LIMIT,
) -> list[tuple[T, float]]:
"""Score candidates and return top max_results above min_confidence, sorted by score descending."""
scored: list[tuple[T, float]] = []
for c in candidates:
if (
track_length_ms is not None
and c.duration_ms is not None
and abs(c.duration_ms - track_length_ms) > DURATION_TOLERANCE_MS
):
continue
s = _score_candidate(c, title, artist, album, track_length_ms)
if s >= min_confidence:
scored.append((c.item, s))
scored.sort(key=lambda x: x[1], reverse=True)
return scored[:max_results]
def select_best(
candidates: list[SearchCandidate[T]],
track_length_ms: Optional[int] = None,
*,
title: Optional[str] = None,
artist: Optional[str] = None,
album: Optional[str] = None,
min_confidence: float = MIN_CONFIDENCE,
) -> tuple[Optional[T], float]:
"""Pick the best candidate by confidence scoring.
Returns (item, score). Item is None if no candidate scores above min_confidence.
"""
if not candidates:
return None, 0.0
best_item: Optional[T] = None
best_score = -1.0
for c in candidates:
# Hard duration filter: both sides have duration but they don't match → skip.
if (
track_length_ms is not None
and c.duration_ms is not None
and abs(c.duration_ms - track_length_ms) > DURATION_TOLERANCE_MS
):
continue
s = _score_candidate(c, title, artist, album, track_length_ms)
if s > best_score:
best_score = s
best_item = c.item
if best_score < min_confidence:
return None, best_score
return best_item, best_score
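As a worked example of the two-tier scoring described in `_score_candidate`, here is a condensed stand-in using the documented weights (40/30/10/10 plus the flat 10-point bonus); the helper names and sample strings are illustrative only:

```python
def text_similarity(a: str, b: str) -> float:
    """Containment similarity, as in _text_similarity above."""
    if a == b:
        return 1.0
    if not a or not b:
        return 0.0
    if a in b or b in a:
        return min(len(a), len(b)) / max(len(a), len(b))
    return 0.0


def score(title_sim=None, artist_sim=None, album_sim=None, duration_sim=None):
    """Rescale the fields available on both sides into 0-90, then add the
    flat 10-point bonus (see the EDIT note in _score_candidate)."""
    raw = avail = 0.0
    for weight, sim in ((40, title_sim), (30, artist_sim), (10, album_sim), (10, duration_sim)):
        if sim is None:
            continue  # missing on both sides -> excluded (neutral)
        avail += weight
        raw += weight * sim
    return (raw / avail) * 90 + 10 if avail else 10.0


# exact title+artist, album/duration unknown on both sides -> still a perfect 100
full = score(title_sim=1.0, artist_sim=1.0)
# "my love" vs "my love album version": containment scales by length ratio
partial = score(text_similarity("my love", "my love album version"), artist_sim=1.0)
```

The rescaling is what keeps a candidate with fewer comparable fields from being unfairly penalized, while asymmetric absence (one side has the field, the other does not) still drags the score down.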
@@ -1,129 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 10:43:21
Description: Spotify fetcher obtains synced lyrics via Spotify's internal color-lyrics API.
"""
from __future__ import annotations
from loguru import logger
from .base import BaseFetcher, FetchResult
from ..authenticators.spotify import SpotifyAuthenticator
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import LRCData
from ..config import GeneralConfig, TTL_NOT_FOUND
def _format_lrc_line(start_ms: int, words: str) -> str:
minutes = start_ms // 60000
seconds = (start_ms // 1000) % 60
centiseconds = min(round((start_ms % 1000) / 10), 99)
return f"[{minutes:02d}:{seconds:02d}.{centiseconds:02d}]{words}"
def _is_truly_synced(lines: list[dict]) -> bool:
for line in lines:
try:
ms = int(line.get("startTimeMs", "0"))
if ms > 0:
return True
except (ValueError, TypeError):
continue
return False
def _parse_spotify_lyrics(data: dict) -> LRCData | None:
"""Parse Spotify color-lyrics payload to LRCData."""
lyrics_data = data.get("lyrics")
if not isinstance(lyrics_data, dict):
return None
sync_type = lyrics_data.get("syncType", "")
lines = lyrics_data.get("lines", [])
if not isinstance(lines, list) or len(lines) == 0:
return None
is_synced = sync_type == "LINE_SYNCED" and _is_truly_synced(lines)
lrc_lines: list[str] = []
for line in lines:
if not isinstance(line, dict):
continue
words = line.get("words", "")
if not isinstance(words, str):
continue
try:
ms = int(line.get("startTimeMs", "0"))
except (ValueError, TypeError):
ms = 0
if is_synced:
lrc_lines.append(_format_lrc_line(ms, words))
else:
lrc_lines.append(f"[00:00.00]{words}")
if not lrc_lines:
return None
return LRCData("\n".join(lrc_lines))
class SpotifyFetcher(BaseFetcher):
def __init__(self, general: GeneralConfig, auth: SpotifyAuthenticator) -> None:
super().__init__(general, auth)
_auth: SpotifyAuthenticator
@property
def source_name(self) -> str:
return "spotify"
def is_available(self, track: TrackMeta) -> bool:
return bool(track.trackid) and self._auth.is_configured()
async def _api_lyrics(self, track: TrackMeta) -> dict | None:
"""Return raw Spotify lyrics payload for one track using production auth path."""
if not track.trackid:
return None
data = await self._auth.get_lyrics(track.trackid)
if not isinstance(data, dict):
return None
return data
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
if not track.trackid:
logger.debug("Spotify: skipped — no trackid in metadata")
return FetchResult()
logger.info(f"Spotify: fetching lyrics for trackid={track.trackid}")
data = await self._api_lyrics(track)
if data is None:
logger.debug(f"Spotify: no lyrics payload for trackid={track.trackid}")
return FetchResult.from_not_found()
content = _parse_spotify_lyrics(data)
if content is None:
logger.debug("Spotify: response contained no parseable lyric lines")
return FetchResult.from_not_found()
status = content.detect_sync_status()
logger.info(f"Spotify: got {status.value} lyrics ({len(content)} lines)")
not_found = LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
if status == CacheStatus.SUCCESS_SYNCED:
return FetchResult(
synced=LyricResult(
status=CacheStatus.SUCCESS_SYNCED,
lyrics=content,
source=self.source_name,
),
unsynced=not_found,
)
return FetchResult(
synced=not_found,
unsynced=LyricResult(
status=CacheStatus.SUCCESS_UNSYNCED,
lyrics=content,
source=self.source_name,
),
)
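A self-contained sketch of the payload-to-LRC conversion this fetcher performs. The payload shape follows what `_parse_spotify_lyrics` reads (`lyrics.syncType`, `lyrics.lines[].startTimeMs/words`); the field values are made up:

```python
def format_lrc_line(start_ms: int, words: str) -> str:
    minutes, rem = divmod(start_ms, 60000)
    seconds, ms = divmod(rem, 1000)
    cs = min(round(ms / 10), 99)  # clamp so 995+ ms cannot overflow two digits
    return f"[{minutes:02d}:{seconds:02d}.{cs:02d}]{words}"


payload = {  # shape assumed from the parser above; values are illustrative
    "lyrics": {
        "syncType": "LINE_SYNCED",
        "lines": [
            {"startTimeMs": "0", "words": "First line"},
            {"startTimeMs": "12340", "words": "Second line"},
        ],
    }
}

lines = payload["lyrics"]["lines"]
# LINE_SYNCED alone is not trusted: at least one non-zero timestamp is required
synced = payload["lyrics"]["syncType"] == "LINE_SYNCED" and any(
    int(line["startTimeMs"]) > 0 for line in lines
)
lrc = "\n".join(format_lrc_line(int(line["startTimeMs"]), line["words"]) for line in lines)
print(lrc)
```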
@@ -1,465 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 21:54:01
Description: LRC parsing, modeling, and serialization helpers.
"""
from __future__ import annotations
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
import re
from typing import Optional
from .models import CacheStatus
# Parses any time tag input format:
# [mm:ss], [mm:ss.c], [mm:ss.cc], [mm:ss.ccc], [mm:ss:cc], …
_RAW_TAG_RE = re.compile(r"\[(\d{2,}):(\d{2})(?:[.:](\d{1,3}))?\]")
# One or more leading bracket tags at line start.
# Used to strip start tags in plain-mode fallback.
_LINE_START_TAGS_RE = re.compile(r"^(?:\[[^\]]*\])+", re.MULTILINE)
# Timed word-sync tags: <mm:ss>, <mm:ss.c>, <mm:ss.cc>, <mm:ss:cc>
_WORD_SYNC_TAG_RE = re.compile(r"<(\d{2,}):(\d{2})(?:[.:](\d{1,3}))?>")
# A single doc-level tag line: [key:value].
# Disallow nested [] in value so multi-tag lines are not treated as doc tags.
_DOC_TAG_RE = re.compile(r"^\[([^:\]\[]+):([^\[\]]*)\]$")
# QRC uses a different format and is intentionally out of scope here.
def _remove_pattern(text: str, pattern: re.Pattern) -> str:
"""Remove all occurrences of pattern from text, then strip leading/trailing whitespace."""
return pattern.sub("", text).strip()
def _raw_tag_to_ms(mm: str, ss: str, frac: Optional[str]) -> int:
"""Convert parsed time tag components to total milliseconds."""
if frac is None:
ms = 0
else:
n = len(frac)
if n == 1:
ms = int(frac) * 100
elif n == 2:
ms = int(frac) * 10
else:
ms = int(frac)
return (int(mm) * 60 + int(ss)) * 1000 + ms
def _ms_to_std_tag(total_ms: int) -> str:
mm = max(0, total_ms) // 60000
ss = (max(0, total_ms) % 60000) // 1000
cs = min(round((max(0, total_ms) % 1000) / 10), 99)
return f"[{mm:02d}:{ss:02d}.{cs:02d}]"
def _ms_to_word_tag(total_ms: int) -> str:
mm = max(0, total_ms) // 60000
ss = (max(0, total_ms) % 60000) // 1000
cs = min(round((max(0, total_ms) % 1000) / 10), 99)
return f"<{mm:02d}:{ss:02d}.{cs:02d}>"
@dataclass(frozen=True)
class LrcWordSegment:
text: str
time_ms: Optional[int] = None
duration_ms: Optional[int] = None
class BaseLine(ABC):
"""Common line interface for rendering and text extraction."""
@property
@abstractmethod
def text(self) -> str:
"""Return plain text content for this line."""
@abstractmethod
def to_text(self, include_word_sync: bool) -> str:
"""Return full serialized line text."""
@abstractmethod
def to_plain_unsynced(self) -> Optional[str]:
"""Return this line's plain-text contribution in unsynced mode."""
@abstractmethod
def timed_plain_entries(self) -> list[tuple[int, str]]:
"""Return (timestamp_ms, text) entries for synced plain-mode output."""
def has_nonzero_timestamp(self) -> bool:
return any(ts > 0 for ts, _ in self.timed_plain_entries())
@dataclass
class DocTagLine(BaseLine):
"""Represents a single doc tag line like [ar:Artist]."""
key: str
value: str
@property
def text(self) -> str:
return f"[{self.key}:{self.value}]"
def to_text(self, include_word_sync: bool) -> str:
return self.text
def to_plain_unsynced(self) -> Optional[str]:
return None
def timed_plain_entries(self) -> list[tuple[int, str]]:
return []
@dataclass
class LyricLine(BaseLine):
"""Lyric line with optional line-level timestamps."""
line_times_ms: list[int] = field(default_factory=list)
words: list[LrcWordSegment] = field(default_factory=list)
@property
def text(self) -> str:
return "".join(seg.text for seg in self.words)
def to_text(self, include_word_sync: bool) -> str:
prefix = "".join(_ms_to_std_tag(ms) for ms in self.line_times_ms)
return prefix + self.text
def to_plain_unsynced(self) -> Optional[str]:
return _remove_pattern(self.text, _LINE_START_TAGS_RE)
def timed_plain_entries(self) -> list[tuple[int, str]]:
return [(tag_ms, self.text) for tag_ms in self.line_times_ms]
@dataclass
class WordSyncLyricLine(LyricLine):
"""Lyric line that can render per-word sync tags when requested."""
def to_text(self, include_word_sync: bool) -> str:
prefix = "".join(_ms_to_std_tag(ms) for ms in self.line_times_ms)
if not include_word_sync:
return prefix + self.text
parts: list[str] = []
for seg in self.words:
if seg.time_ms is not None:
parts.append(_ms_to_word_tag(seg.time_ms))
parts.append(seg.text)
return prefix + "".join(parts)
def _split_trimmed_lines(text: str) -> list[str]:
"""Split text into lines, strip each line, and drop outer blank lines."""
lines = [line.strip() for line in text.splitlines()]
while lines and not lines[0].strip():
lines.pop(0)
while lines and not lines[-1].strip():
lines.pop()
return lines
def _extract_leading_line_tags(line: str) -> tuple[list[int], str]:
"""Parse leading line-sync tags and return (times_ms, lyric_part).
Spaces between consecutive leading tags are dropped. If non-space text
appears, parsing of leading tags stops and the remainder is lyric text.
"""
pos = 0
tags_ms: list[int] = []
while True:
m = _RAW_TAG_RE.match(line, pos)
if not m:
break
tags_ms.append(_raw_tag_to_ms(m.group(1), m.group(2), m.group(3)))
pos = m.end()
# Allow spaces only between consecutive leading tags.
# We only check for '[' here; the next loop decides whether it is a valid time tag.
scan = pos
while scan < len(line) and line[scan].isspace():
scan += 1
if scan < len(line) and line[scan] == "[":
pos = scan
continue
pos = scan
break
return tags_ms, line[pos:]
def _parse_word_segments(lyric_part: str) -> tuple[list[LrcWordSegment], bool]:
"""Parse timed word-sync tags while preserving all lyric text exactly."""
segments: list[LrcWordSegment] = []
cursor = 0
current_time: Optional[int] = None
has_word_sync = False
for m in _WORD_SYNC_TAG_RE.finditer(lyric_part):
piece = lyric_part[cursor : m.start()]
if piece:
segments.append(LrcWordSegment(text=piece, time_ms=current_time))
current_time = _raw_tag_to_ms(m.group(1), m.group(2), m.group(3))
has_word_sync = True
cursor = m.end()
tail = lyric_part[cursor:]
if tail or not segments:
segments.append(
LrcWordSegment(
text=tail,
time_ms=current_time if has_word_sync else None,
)
)
return segments, has_word_sync
def _is_single_doc_tag_line(line: str) -> Optional[tuple[str, str]]:
"""Return (key, value) only for standalone single doc-tag lines."""
if _RAW_TAG_RE.fullmatch(line):
return None
m = _DOC_TAG_RE.fullmatch(line)
if not m:
return None
key = m.group(1).strip()
value = m.group(2).strip()
return key, value
def _parse_offset_value(value: str) -> Optional[int]:
"""Parse doc offset value in milliseconds, returning None for invalid values."""
try:
return int(value.strip())
except ValueError:
return None
class LRCData:
_lines: list[BaseLine]
_doc_tags: dict[str, str]
def __init__(self, text: Optional[str] = None) -> None:
self._doc_tags = {}
if not text:
self._lines = []
return
raw_lines = _split_trimmed_lines(text)
parsed: list[BaseLine] = []
for raw in raw_lines:
maybe_tag = _is_single_doc_tag_line(raw)
if maybe_tag is not None:
key, value = maybe_tag
self._doc_tags[key] = value
parsed.append(DocTagLine(key=key, value=value))
continue
tags_ms, lyric_part = _extract_leading_line_tags(raw)
words, has_word_sync = _parse_word_segments(lyric_part if tags_ms else raw)
if has_word_sync:
parsed.append(WordSyncLyricLine(line_times_ms=tags_ms, words=words))
else:
parsed.append(LyricLine(line_times_ms=tags_ms, words=words))
self._lines = parsed
def __str__(self) -> str:
return self._serialize_lines(self._lines, include_word_sync=True)
def __repr__(self) -> str:
return f"LRCData(doc_tags={self._doc_tags!r}, lines={self._lines!r})"
def __len__(self) -> int:
return len(self._lines)
@property
def tags(self) -> dict[str, str]:
return self._doc_tags
@property
def lines(self) -> list[BaseLine]:
return self._lines
def is_synced(self) -> bool:
"""Return True if any lyric line contains a non-zero line timestamp."""
return any(line.has_nonzero_timestamp() for line in self._lines)
def detect_sync_status(self) -> CacheStatus:
"""Map sync detection result to cache status."""
return (
CacheStatus.SUCCESS_SYNCED
if self.is_synced()
else CacheStatus.SUCCESS_UNSYNCED
)
def normalize_unsynced(self) -> "LRCData":
"""Convert lyrics into unsynced LRC form with [00:00.00] tags.
- Leading blank lyric lines are skipped.
- Middle blank lyric lines are preserved as empty synced lines.
- Doc-tag lines are preserved unchanged.
"""
out: list[BaseLine] = []
first = True
for line in self._lines:
if isinstance(line, DocTagLine):
out.append(DocTagLine(key=line.key, value=line.value))
continue
assert isinstance(line, LyricLine)
stripped = line.text.strip()
if not stripped and not first:
out.append(
LyricLine(line_times_ms=[0], words=[LrcWordSegment(text="")])
)
continue
elif not stripped:
continue
first = False
out.append(
LyricLine(
line_times_ms=[0],
words=[LrcWordSegment(text=line.text)],
)
)
ret = LRCData()
ret._lines = out
ret._doc_tags = dict(self._doc_tags)
return ret
def normalize(self) -> "LRCData":
"""Normalize LRC for decode/export oriented output.
Rules:
- Move all doc tags to the beginning, preserving line order and duplicates.
- Keep doc tags unchanged except removing all offset tags.
- Remove word-sync tags.
- Convert untagged non-empty lyric lines to [00:00.00] lyrics.
- Drop empty lyric lines.
- Expand lyric lines with multiple time tags into one line per tag.
- Apply offset (ms) to lyric timestamps and sort by timestamp.
"""
out_doc_tags: list[DocTagLine] = []
lyric_entries: list[tuple[int, str]] = []
offset_ms = 0
# Resolve offset first so it applies to all lyric lines, independent of tag position.
for line in self._lines:
if isinstance(line, DocTagLine) and line.key.strip().lower() == "offset":
parsed_offset = _parse_offset_value(line.value)
if parsed_offset is not None:
offset_ms = parsed_offset
for line in self._lines:
if isinstance(line, DocTagLine):
if line.key.strip().lower() == "offset":
continue
out_doc_tags.append(DocTagLine(key=line.key, value=line.value))
continue
assert isinstance(line, LyricLine)
lyric_text = line.text
if not lyric_text.strip():
continue
line_times = line.line_times_ms if line.line_times_ms else [0]
for time_ms in line_times:
shifted = max(0, time_ms + offset_ms)
lyric_entries.append((shifted, lyric_text))
# Sort by timestamp; original index as tiebreaker so equal-time entries
# retain the order they appeared in the input.
lyric_entries = [
e
for _, e in sorted(enumerate(lyric_entries), key=lambda x: (x[1][0], x[0]))
]
out_lyrics: list[LyricLine] = [
LyricLine(line_times_ms=[time_ms], words=[LrcWordSegment(text=text)])
for time_ms, text in lyric_entries
]
ret = LRCData()
ret._lines = [*out_doc_tags, *out_lyrics]
ret._doc_tags = {line.key: line.value for line in out_doc_tags}
return ret
def to_plain(
self,
deduplicate: bool = False,
) -> str:
"""Convert lyrics to plain text with all tags stripped.
If synced, output is sorted by line timestamp and duplicated for multi-tag lines.
If not synced, leading bracket tags are stripped per line and original order is kept.
If deduplicate is True, only consecutive duplicate plain lines are collapsed.
"""
if not self.is_synced():
plain_lines = [
text
for text in (line.to_plain_unsynced() for line in self._lines)
if text is not None
]
return "\n".join(plain_lines).strip("\n")
tagged_lines: list[tuple[int, str]] = []
for line in self._lines:
tagged_lines.extend(line.timed_plain_entries())
sorted_lines = [
lyric
for _, (_, lyric) in sorted(
enumerate(tagged_lines), key=lambda x: (x[1][0], x[0])
)
]
if deduplicate:
# Remove consecutive duplicates
deduped_lines = []
prev_line = None
for line in sorted_lines:
if line != prev_line:
deduped_lines.append(line)
prev_line = line
sorted_lines = deduped_lines
return "\n".join(sorted_lines).strip()
@staticmethod
def _serialize_lines(lines: list[BaseLine], include_word_sync: bool) -> str:
return "\n".join(
line.to_text(include_word_sync=include_word_sync) for line in lines
)
def to_text(
self,
include_word_sync: bool = False,
) -> str:
"""Serialize to non-normalized LRC text.
- Unsynced lyrics are converted to [00:00.00]-tagged form.
- include_word_sync only controls rendering of per-word tags.
- This method does not apply normalize() rules.
"""
res = self if self.is_synced() else self.normalize_unsynced()
return self._serialize_lines(res._lines, include_word_sync=include_word_sync)
def to_normalized_text(self) -> str:
"""Serialize using normalize() rules.
Normalized output always strips word-sync tags.
"""
normalized = self.normalize()
return self._serialize_lines(normalized._lines, include_word_sync=False)
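The normalize() rules above (offset resolution, multi-tag expansion, timestamp sort) condense to roughly the sketch below. The time-tag regex mirrors `_RAW_TAG_RE`; everything else is simplified, including a single-pass offset that only handles a leading `[offset:...]` tag:

```python
import re

TAG = re.compile(r"\[(\d{2,}):(\d{2})(?:[.:](\d{1,3}))?\]")


def tag_to_ms(mm, ss, frac):
    # 1 fractional digit -> x100, 2 -> x10, 3 -> x1, as in _raw_tag_to_ms
    ms = 0 if frac is None else int(frac) * 10 ** (3 - len(frac))
    return (int(mm) * 60 + int(ss)) * 1000 + ms


def ms_to_tag(ms):
    mm, rem = divmod(max(0, ms), 60000)
    ss, frac = divmod(rem, 1000)
    return f"[{mm:02d}:{ss:02d}.{min(round(frac / 10), 99):02d}]"


def normalize(text):
    offset, entries = 0, []
    for line in text.splitlines():
        m = re.fullmatch(r"\[offset:([+-]?\d+)\]", line.strip())
        if m:
            offset = int(m.group(1))  # offset tag is consumed, not emitted
            continue
        tags = list(TAG.finditer(line))
        lyric = line[tags[-1].end():] if tags else line
        if not lyric.strip():
            continue  # drop empty lyric lines
        for ms in [tag_to_ms(*t.groups()) for t in tags] or [0]:
            entries.append((max(0, ms + offset), lyric))  # expand multi-tag lines
    entries.sort(key=lambda e: e[0])  # stable sort keeps input order on ties
    return "\n".join(ms_to_tag(ms) + words for ms, words in entries)


src = "[offset:-500]\n[00:10.00][00:20.00]Chorus\n[00:05.00]Intro"
out = normalize(src)
print(out)
```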
@@ -1,50 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-02 05:24:27
Description: Shared text normalization utilities for fuzzy matching.
Used by cache key generation, cache search, and candidate selection scoring.
"""
from __future__ import annotations
import re
import unicodedata
# Punctuation to strip for fuzzy matching (ASCII + fullwidth + CJK brackets/symbols)
_PUNCT_RE = re.compile(
r"[~!@#$%^&*()_+\-=\[\]{}|;:'\",.<>?/\\`"
r"~!@#$%^&*()_+-=【】{}|;:'",。<>?/\`"
r"「」『』《》〈〉〔〕·•‥…—–]"
)
_SPACE_RE = re.compile(r"\s+")
# feat./ft./featuring and everything after (case-insensitive, word boundary)
_FEAT_RE = re.compile(r"\s*(?:\bfeat\.?\b|\bft\.?\b|\bfeaturing\b).*", re.IGNORECASE)
# Multi-artist separators: /, &, ×, x (surrounded by spaces), ;, 、, vs.
_ARTIST_SEP_RE = re.compile(r"\s*(?:[/&;×、]|\bvs\.?\b|\bx\b)\s*", re.IGNORECASE)
def normalize_for_match(s: str) -> str:
"""Normalize a string for fuzzy comparison.
Lowercases, NFKC-normalizes (fullwidth → halfwidth), strips punctuation,
and collapses whitespace.
"""
s = unicodedata.normalize("NFKC", s).lower()
s = _FEAT_RE.sub("", s)
s = _PUNCT_RE.sub(" ", s)
s = _SPACE_RE.sub(" ", s).strip()
return s
def normalize_artist(s: str) -> str:
"""Normalize an artist string: split by separators, normalize each, sort.
Splits first (on /, &, ;, ×, 、, vs., x), then strips feat./ft./featuring
from each part individually, so 'A feat. C / B' → ['a', 'b'] rather than just ['a'].
"""
s = unicodedata.normalize("NFKC", s).lower()
parts = _ARTIST_SEP_RE.split(s)
normed = sorted(
{normalize_for_match(p) for p in parts if _FEAT_RE.sub("", p).strip()}
)
return "\0".join(normed) if normed else normalize_for_match(s)
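The split-then-strip order matters: splitting first keeps artists listed after a "feat." on another part of the string. A simplified, self-contained sketch of that behavior, assuming an ASCII-oriented punctuation strip (the real `_PUNCT_RE` covers fullwidth and CJK symbols too):

```python
import re
import unicodedata

FEAT = re.compile(r"\s*(?:\bfeat\.?\b|\bft\.?\b|\bfeaturing\b).*", re.IGNORECASE)
SEP = re.compile(r"\s*(?:[/&;×、]|\bvs\.?\b|\bx\b)\s*", re.IGNORECASE)


def norm(s):
    s = unicodedata.normalize("NFKC", s).lower()
    s = FEAT.sub("", s)
    s = re.sub(r"[^\w\s]", " ", s)  # reduced stand-in for _PUNCT_RE
    return re.sub(r"\s+", " ", s).strip()


def norm_artist(s):
    # split on separators first, then strip feat./ft. per part, then sort
    parts = SEP.split(unicodedata.normalize("NFKC", s).lower())
    normed = sorted({norm(p) for p in parts if norm(p)})
    return "\0".join(normed) if normed else norm(s)


a = norm_artist("A feat. C / B")
b = norm_artist("B & A")
print(a == b)  # order and separator style no longer matter
```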
@@ -1,111 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 17:06:37
Description: Utility functions
"""
from __future__ import annotations
from typing import TYPE_CHECKING, Optional
from urllib.parse import unquote
from pathlib import Path
from .models import CacheStatus
if TYPE_CHECKING:
from .models import LyricResult
# Paths
def get_audio_path(audio_url: str, ensure_exists: bool = False) -> Optional[Path]:
"""Convert file:// URL to Path, return None if invalid or (if ensure_exists) file doesn't exist."""
if not audio_url.startswith("file://"):
return None
file_path = unquote(audio_url.replace("file://", "", 1))
path = Path(file_path)
if ensure_exists and not path.exists():
return None
return path
def get_sidecar_path(
audio_url: str,
ensure_audio_exists: bool = False,
ensure_exists: bool = False,
extension: str = ".lrc",
) -> Optional[Path]:
"""Given a file:// URL, return the corresponding .lrc sidecar path.
If ensure_audio_exists is True, return None if the audio file does not exist.
If ensure_exists is True, return None if the .lrc file does not exist.
"""
audio_path = get_audio_path(audio_url, ensure_exists=ensure_audio_exists)
if not audio_path:
return None
lrc_path = audio_path.with_suffix(extension)
if ensure_exists and not lrc_path.exists():
return None
return lrc_path
# Ranking
def is_positive_status(status: CacheStatus) -> bool:
return status in (CacheStatus.SUCCESS_SYNCED, CacheStatus.SUCCESS_UNSYNCED)
def is_better_result(
new: LyricResult,
old: LyricResult,
*,
allow_unsynced: bool,
) -> bool:
"""Return True when new should rank above old.
Ordering rules (highest first):
1) Positive statuses always beat negative statuses.
2) When allow_unsynced=False, SUCCESS_SYNCED always beats SUCCESS_UNSYNCED.
3) Higher confidence beats lower confidence.
4) On equal confidence, SUCCESS_SYNCED beats SUCCESS_UNSYNCED.
"""
new_positive = is_positive_status(new.status)
old_positive = is_positive_status(old.status)
if not new_positive:
return False
if not old_positive:
return True
new_synced = new.status == CacheStatus.SUCCESS_SYNCED
old_synced = old.status == CacheStatus.SUCCESS_SYNCED
if not allow_unsynced and new_synced != old_synced:
return new_synced
if new.confidence != old.confidence:
return new.confidence > old.confidence
return new_synced and not old_synced
def select_best_positive(
candidates: list[LyricResult],
*,
allow_unsynced: bool,
) -> Optional[LyricResult]:
"""Pick best positive LyricResult from candidates.
Negative statuses are ignored.
"""
positives = [c for c in candidates if is_positive_status(c.status)]
if not positives:
return None
best = positives[0]
for c in positives[1:]:
if is_better_result(c, best, allow_unsynced=allow_unsynced):
best = c
return best
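The four ordering rules can be exercised with minimal stand-ins for the project's `CacheStatus` and `LyricResult` (both stubs below carry only what the ranking reads; they are not the real models):

```python
from dataclasses import dataclass
from enum import Enum, auto


class CacheStatus(Enum):  # minimal stand-in for the project's enum
    SUCCESS_SYNCED = auto()
    SUCCESS_UNSYNCED = auto()
    NOT_FOUND = auto()


@dataclass
class LyricResult:  # stand-in carrying just the fields the ranking uses
    status: CacheStatus
    confidence: float = 0.0


POSITIVE = (CacheStatus.SUCCESS_SYNCED, CacheStatus.SUCCESS_UNSYNCED)


def is_better(new, old, *, allow_unsynced):
    """Mirrors is_better_result above."""
    if new.status not in POSITIVE:
        return False
    if old.status not in POSITIVE:
        return True
    new_synced = new.status is CacheStatus.SUCCESS_SYNCED
    old_synced = old.status is CacheStatus.SUCCESS_SYNCED
    if not allow_unsynced and new_synced != old_synced:
        return new_synced  # sync status dominates when unsynced is not allowed
    if new.confidence != old.confidence:
        return new.confidence > old.confidence
    return new_synced and not old_synced  # synced wins ties


synced_low = LyricResult(CacheStatus.SUCCESS_SYNCED, 60.0)
unsynced_high = LyricResult(CacheStatus.SUCCESS_UNSYNCED, 95.0)
strict = is_better(synced_low, unsynced_high, allow_unsynced=False)   # sync wins
lenient = is_better(unsynced_high, synced_low, allow_unsynced=True)   # confidence wins
```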
@@ -1,5 +0,0 @@
from __future__ import annotations
from .session import WatchCoordinator
__all__ = ["WatchCoordinator"]
@@ -1,154 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:14:58
Description: Unix-socket control channel for communicating with a running watch session.
"""
from __future__ import annotations
import asyncio
import json
from pathlib import Path
from typing import TYPE_CHECKING
from loguru import logger
if TYPE_CHECKING:
from .session import WatchCoordinator
class ControlServer:
"""Control server that handles offset/status commands over a Unix socket."""
_socket_path: Path
_server: asyncio.AbstractServer | None
def __init__(self, socket_path: str) -> None:
"""Initialize control server with socket path from config or explicit override."""
self._socket_path = Path(socket_path)
self._server: asyncio.AbstractServer | None = None
async def start(self, session: "WatchCoordinator") -> bool:
"""Start listening for control requests and bind session handlers."""
if not await self._prepare_socket_path():
return False
self._socket_path.parent.mkdir(parents=True, exist_ok=True)
self._server = await asyncio.start_unix_server(
lambda r, w: self._handle(session, r, w),
path=str(self._socket_path),
)
return True
async def _prepare_socket_path(self) -> bool:
"""Ensure socket path is usable and reject when another session is active."""
if not self._socket_path.exists():
return True
try:
# probe the socket to distinguish a live session from a stale socket file
reader, writer = await asyncio.open_unix_connection(str(self._socket_path))
writer.close()
await writer.wait_closed()
# connection succeeded → another watch session is actively listening
logger.error(
"A watch session is already running. Use 'lrx watch ctl status'."
)
return False
except Exception:
# connection refused / file is stale → safe to remove and reuse
try:
self._socket_path.unlink(missing_ok=True)
except Exception:
pass
return True
async def stop(self) -> None:
"""Stop control server and remove stale socket path."""
if self._server is not None:
self._server.close()
await self._server.wait_closed()
self._server = None
try:
self._socket_path.unlink(missing_ok=True)
except Exception:
pass
async def _handle(
self,
session: "WatchCoordinator",
reader: asyncio.StreamReader,
writer: asyncio.StreamWriter,
) -> None:
"""Handle one control request and send JSON response."""
resp: dict[str, object] = {"ok": False, "error": "internal error"}
try:
line = await reader.readline()
if not line:
resp = {"ok": False, "error": "empty request"}
else:
req = json.loads(line.decode("utf-8"))
cmd = req.get("cmd")
if cmd == "offset":
delta = int(req.get("delta", 0))
resp = session.handle_offset(delta)
elif cmd == "status":
resp = session.handle_status()
else:
resp = {"ok": False, "error": "unknown command"}
except Exception as e:
resp = {"ok": False, "error": str(e)}
finally:
writer.write((json.dumps(resp) + "\n").encode("utf-8"))
await writer.drain()
writer.close()
await writer.wait_closed()
class ControlClient:
"""Control client used by CLI commands to talk to active watch session."""
_socket_path: Path
def __init__(self, socket_path: str) -> None:
"""Initialize control client with socket path from config or explicit override."""
self._socket_path = Path(socket_path)
async def _send_async(self, cmd: dict[str, object]) -> dict[str, object]:
"""Send one JSON command to control server and return JSON response."""
if not self._socket_path.exists():
return {"ok": False, "error": "No watch session running."}
try:
reader, writer = await asyncio.open_unix_connection(str(self._socket_path))
except Exception:
return {"ok": False, "error": "No watch session running."}
writer.write((json.dumps(cmd) + "\n").encode("utf-8"))
await writer.drain()
line = await reader.readline()
writer.close()
await writer.wait_closed()
if not line:
return {"ok": False, "error": "Empty response."}
return json.loads(line.decode("utf-8"))
def send(self, cmd: dict[str, object]) -> dict[str, object]:
"""Synchronous wrapper around async control request."""
return asyncio.run(self._send_async(cmd))
def parse_delta(raw: str) -> tuple[bool, int | None, str | None]:
"""Parse signed millisecond offset delta string for ctl offset command."""
value = raw.strip()
try:
if value.startswith("+"):
return True, int(value[1:]), None
if value.startswith("-"):
# keep the sign by negating; bare int() would accept "-123" too but
# explicit split is clearer about intent and avoids double-negative edge cases
return True, -int(value[1:]), None
return True, int(value), None
except ValueError:
return False, None, f"Invalid offset delta: {raw}"
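The wire format both classes share is newline-delimited JSON over a Unix socket: one request line in, one response line out, then close. A self-contained round-trip of that framing (POSIX-only; the socket path and command set here are illustrative, not the project's):

```python
import asyncio
import json
import os
import tempfile


async def main():
    async def handle(reader, writer):
        # one newline-terminated JSON request in, one JSON response out
        req = json.loads((await reader.readline()).decode("utf-8"))
        if req.get("cmd") == "status":
            resp = {"ok": True, "cmd": "status"}
        else:
            resp = {"ok": False, "error": "unknown command"}
        writer.write((json.dumps(resp) + "\n").encode("utf-8"))
        await writer.drain()
        writer.close()
        await writer.wait_closed()

    path = os.path.join(tempfile.mkdtemp(), "ctl.sock")
    server = await asyncio.start_unix_server(handle, path=path)

    # client side, as ControlClient._send_async does
    reader, writer = await asyncio.open_unix_connection(path)
    writer.write((json.dumps({"cmd": "status"}) + "\n").encode("utf-8"))
    await writer.drain()
    resp = json.loads((await reader.readline()).decode("utf-8"))
    writer.close()
    await writer.wait_closed()

    server.close()
    await server.wait_closed()
    return resp


resp = asyncio.run(main())
print(resp)
```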
@@ -1,89 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:14:41
Description: Debounced lyric fetch orchestration for watch session.
"""
from __future__ import annotations
import asyncio
from typing import Awaitable, Callable, Optional
from ..lrc import LRCData
from ..models import TrackMeta
class LyricFetcher:
"""Debounces track updates and runs at most one lyric fetch task at a time."""
_watch_debounce_ms: int
_fetch_func: Callable[[TrackMeta], Awaitable[Optional[LRCData]]]
_on_fetching: Callable[[], Awaitable[None] | None]
_on_result: Callable[[Optional[LRCData]], Awaitable[None] | None]
_debounce_task: asyncio.Task | None
_fetch_task: asyncio.Task | None
_pending_track: TrackMeta | None
def __init__(
self,
fetch_func: Callable[[TrackMeta], Awaitable[Optional[LRCData]]],
on_fetching: Callable[[], Awaitable[None] | None],
on_result: Callable[[Optional[LRCData]], Awaitable[None] | None],
watch_debounce_ms: int,
) -> None:
"""Initialize fetch callbacks and runtime options."""
self._watch_debounce_ms = watch_debounce_ms
self._fetch_func = fetch_func
self._on_fetching = on_fetching
self._on_result = on_result
self._debounce_task: asyncio.Task | None = None
self._fetch_task: asyncio.Task | None = None
self._pending_track: TrackMeta | None = None
async def stop(self) -> None:
"""Cancel and await all in-flight debounce/fetch tasks."""
for task in (self._debounce_task, self._fetch_task):
if task is not None:
task.cancel()
await asyncio.gather(
*[t for t in (self._debounce_task, self._fetch_task) if t is not None],
return_exceptions=True,
)
self._debounce_task = None
self._fetch_task = None
def request(self, track: TrackMeta) -> None:
"""Request lyrics for track with debounce collapsing."""
self._pending_track = track
if self._debounce_task is not None:
# cancel any pending debounce window — the new request supersedes it
self._debounce_task.cancel()
self._debounce_task = asyncio.create_task(self._debounce_then_fetch())
async def _debounce_then_fetch(self) -> None:
"""Wait debounce window then start a fresh fetch task for latest pending track."""
await asyncio.sleep(self._watch_debounce_ms / 1000.0)
track = self._pending_track
if track is None:
return
if self._fetch_task is not None:
# abort any in-flight fetch for a previous track before starting the new one
self._fetch_task.cancel()
await asyncio.gather(self._fetch_task, return_exceptions=True)
self._fetch_task = asyncio.create_task(self._do_fetch(track))
async def _do_fetch(self, track: TrackMeta) -> None:
"""Execute fetch lifecycle callbacks and fetch lyrics for a track."""
# callbacks may be plain functions or coroutines — handle both
fetching_callback_result = self._on_fetching()
if asyncio.iscoroutine(fetching_callback_result):
await fetching_callback_result
lyrics = await self._fetch_func(track)
result_callback_result = self._on_result(lyrics)
if asyncio.iscoroutine(result_callback_result):
await result_callback_result
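The debounce-and-supersede pattern in `LyricFetcher` can be shown with a stripped-down sketch: rapid requests cancel the pending window, so only the last value is fetched after things go quiet. This toy drops the callbacks and the in-flight-fetch cancellation to isolate the debounce step:

```python
import asyncio


class DebouncedFetcher:
    """Toy version of the debounce logic: rapid requests collapse so only
    the last pending value is fetched after the quiet window."""

    def __init__(self, window_ms: int) -> None:
        self._window_ms = window_ms
        self._task: asyncio.Task | None = None
        self._pending: str | None = None
        self.fetched: list[str] = []

    def request(self, track: str) -> None:
        self._pending = track
        if self._task is not None:
            self._task.cancel()  # new request supersedes the pending window
        self._task = asyncio.create_task(self._debounce_then_fetch())

    async def _debounce_then_fetch(self) -> None:
        await asyncio.sleep(self._window_ms / 1000.0)
        if self._pending is not None:
            self.fetched.append(self._pending)  # stand-in for the real fetch


async def main() -> list[str]:
    fetcher = DebouncedFetcher(window_ms=50)
    for title in ("track-a", "track-b", "track-c"):
        fetcher.request(title)  # burst: each call cancels the previous window
    await asyncio.sleep(0.2)
    return fetcher.fetched


print(asyncio.run(main()))  # ['track-c']
```

Cancelling the previous task before its sleep completes means the callback body never runs for superseded requests, which is exactly why a track skipped past quickly never triggers a network fetch.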
@@ -1,402 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:14:27
Description: Player discovery, state monitoring, and active-player selection for watch mode.
"""
from __future__ import annotations
from dataclasses import dataclass
from typing import Callable, Optional
import asyncio
from dbus_next.aio.message_bus import MessageBus
from dbus_next.constants import BusType
from dbus_next.message import Message
from loguru import logger
from ..models import TrackMeta
from ..mpris import pick_active_player
def _variant_value(item: object) -> object | None:
"""Extract .value from DBus variant-like objects when available."""
if hasattr(item, "value"):
return getattr(item, "value")
return None
@dataclass(slots=True)
class PlayerState:
"""Current observable state for one MPRIS player."""
bus_name: str
status: str
track: Optional[TrackMeta]
@dataclass(frozen=True, slots=True)
class PlayerTarget:
"""Constraint for choosing which players are visible to watch."""
hint: Optional[str] = None
@property
def normalized_hint(self) -> str:
"""Return normalized lowercase player hint string."""
return (self.hint or "").strip().lower()
def allows(self, bus_name: str) -> bool:
"""Return whether given MPRIS bus name passes this target constraint."""
normalized_hint = self.normalized_hint
if not normalized_hint:
return True
return _keyword_match(bus_name, normalized_hint)
def _keyword_match(text: str, keyword: str) -> bool:
"""Return True when keyword exists in text, case-insensitively."""
return keyword.strip().lower() in text.lower()
class PlayerMonitor:
"""Tracks MPRIS players and forwards signal-driven state updates to session callbacks."""
_player_blacklist: tuple[str, ...]
_on_players_changed: Callable[[], None]
_on_seeked: Callable[[str, int], None]
_on_playback_status: Callable[[str, str], None]
_target: PlayerTarget
players: dict[str, PlayerState]
_bus: MessageBus | None
_props_cache: dict[str, object]
def __init__(
self,
on_players_changed: Callable[[], None],
on_seeked: Callable[[str, int], None],
on_playback_status: Callable[[str, str], None],
player_blacklist: tuple[str, ...],
target: Optional[PlayerTarget] = None,
) -> None:
"""Initialize monitor callbacks, runtime options, and player target filter."""
self._player_blacklist = player_blacklist
self._on_players_changed = on_players_changed
self._on_seeked = on_seeked
self._on_playback_status = on_playback_status
self._target = target or PlayerTarget()
self.players: dict[str, PlayerState] = {}
self._bus: MessageBus | None = None
self._props_cache: dict[str, object] = {}
async def start(self) -> None:
"""Start DBus monitoring and populate initial player snapshot."""
self._bus = await MessageBus(bus_type=BusType.SESSION).connect()
self._bus.add_message_handler(self._on_message)
await self._add_match_rules()
await self.refresh()
async def close(self) -> None:
"""Stop DBus monitoring and close bus connection."""
self._props_cache.clear()
if self._bus:
self._bus.disconnect()
self._bus = None
async def _get_player_props(self, bus_name: str) -> object | None:
"""Return cached DBus Properties interface for player, creating it if missing."""
if not self._bus:
return None
if bus_name in self._props_cache:
return self._props_cache[bus_name]
try:
introspection = await self._bus.introspect(
bus_name, "/org/mpris/MediaPlayer2"
)
proxy = self._bus.get_proxy_object(
bus_name, "/org/mpris/MediaPlayer2", introspection
)
props = proxy.get_interface("org.freedesktop.DBus.Properties")
self._props_cache[bus_name] = props
return props
except Exception as e:
logger.debug(f"Failed to prepare DBus props for {bus_name}: {e}")
self._props_cache.pop(bus_name, None)
return None
async def _add_match_rules(self) -> None:
"""Register signal subscriptions needed by monitor."""
if not self._bus:
return
rules = [
"type='signal',interface='org.freedesktop.DBus',member='NameOwnerChanged'",
"type='signal',interface='org.freedesktop.DBus.Properties',member='PropertiesChanged'",
"type='signal',interface='org.mpris.MediaPlayer2.Player',member='Seeked'",
]
for rule in rules:
try:
await self._bus.call(
Message(
destination="org.freedesktop.DBus",
path="/org/freedesktop/DBus",
interface="org.freedesktop.DBus",
member="AddMatch",
signature="s",
body=[rule],
)
)
except Exception as e:
logger.debug(f"Failed to add DBus match rule {rule}: {e}")
async def _list_mpris_players(self) -> list[str]:
"""List visible MPRIS players after applying target filter and optional blacklist.
The blacklist is skipped when an explicit player hint is active so that
``--player`` can target any player regardless of PLAYER_BLACKLIST.
"""
if not self._bus:
return []
try:
reply = await self._bus.call(
Message(
destination="org.freedesktop.DBus",
path="/org/freedesktop/DBus",
interface="org.freedesktop.DBus",
member="ListNames",
)
)
if not reply or not reply.body:
return []
out: list[str] = []
hint_active = bool(self._target.normalized_hint)
for name in reply.body[0]:
if not name.startswith("org.mpris.MediaPlayer2."):
continue
# --player bypasses the blacklist; only filter when no hint is given
if not hint_active and any(
x.lower() in name.lower() for x in self._player_blacklist
):
# logger.info(f"Excluding blacklisted player: {name}")
continue
if not self._target.allows(name):
continue
out.append(name)
return out
except Exception as e:
logger.debug(f"Failed to list mpris players: {e}")
return []
async def _fetch_player_state(self, bus_name: str) -> Optional[PlayerState]:
"""Read current playback status and metadata from one player service."""
props = await self._get_player_props(bus_name)
if props is None:
return None
try:
status_var = await getattr(props, "call_get")(
"org.mpris.MediaPlayer2.Player", "PlaybackStatus"
)
metadata_var = await getattr(props, "call_get")(
"org.mpris.MediaPlayer2.Player", "Metadata"
)
status = status_var.value if status_var else "Stopped"
track = self._track_from_metadata(
metadata_var.value if metadata_var else {}
)
return PlayerState(bus_name=bus_name, status=status, track=track)
except Exception as e:
logger.debug(f"Failed to read state for {bus_name}: {e}")
self._props_cache.pop(bus_name, None)
return None
def _track_from_metadata(self, metadata: dict[str, object]) -> Optional[TrackMeta]:
"""Build TrackMeta object from MPRIS metadata map."""
if not metadata:
return None
trackid = metadata.get("mpris:trackid")
if trackid is not None:
trackid = _variant_value(trackid)
# normalize Spotify track IDs — the raw MPRIS value varies by client version
if isinstance(trackid, str) and trackid.startswith("spotify:track:"):
trackid = trackid.removeprefix("spotify:track:")
elif isinstance(trackid, str) and trackid.startswith("/com/spotify/track/"):
trackid = trackid.removeprefix("/com/spotify/track/")
elif not isinstance(trackid, str):
trackid = None
length = metadata.get("mpris:length")
length_ms = None
length_value = _variant_value(length) if length is not None else None
if isinstance(length_value, int):
# MPRIS reports length in microseconds; convert to milliseconds
length_ms = length_value // 1000
artist = metadata.get("xesam:artist")
artist_v = None
artist_value = _variant_value(artist) if artist is not None else None
if isinstance(artist_value, list) and artist_value:
# xesam:artist is a list; take the first entry as primary artist
artist_v = artist_value[0]
title = metadata.get("xesam:title")
album = metadata.get("xesam:album")
url = metadata.get("xesam:url")
title_value = _variant_value(title) if title is not None else None
album_value = _variant_value(album) if album is not None else None
url_value = _variant_value(url) if url is not None else None
return TrackMeta(
trackid=trackid,
length=length_ms,
album=album_value if isinstance(album_value, str) else None,
artist=artist_v,
title=title_value if isinstance(title_value, str) else None,
url=url_value if isinstance(url_value, str) else None,
)
async def refresh(self) -> None:
"""Refresh full player snapshot and notify session when visible set changes."""
players = await self._list_mpris_players()
updated: dict[str, PlayerState] = {}
for bus_name in players:
st = await self._fetch_player_state(bus_name)
if st is not None:
updated[bus_name] = st
before = set(self.players.keys())
after = set(updated.keys())
added = sorted(after - before)
removed = sorted(before - after)
for bus_name in removed:
self._props_cache.pop(bus_name, None)
self.players = updated
if added or removed:
logger.info(
"MPRIS players updated: added={}, removed={}",
added,
removed,
)
self._on_players_changed()
async def _resolve_well_known_name(self, unique_sender: str) -> str | None:
"""Map a DBus unique sender (e.g. :1.42) to a tracked MPRIS bus name."""
if unique_sender in self.players:
# sender is already a well-known name we track (unlikely but fast path)
return unique_sender
if not self._bus:
return None
# Seeked signals arrive with the unique connection name (:1.N), not the
# well-known bus name (org.mpris.MediaPlayer2.X). Ask D-Bus which
# well-known name owns that unique name.
for bus_name in self.players:
try:
reply = await self._bus.call(
Message(
destination="org.freedesktop.DBus",
path="/org/freedesktop/DBus",
interface="org.freedesktop.DBus",
member="GetNameOwner",
signature="s",
body=[bus_name],
)
)
if reply and reply.body and str(reply.body[0]) == unique_sender:
return bus_name
except Exception:
continue
return None
async def _handle_seeked_signal(self, sender: str, position_ms: int) -> None:
"""Route Seeked signal to session using well-known bus name when possible."""
bus_name = await self._resolve_well_known_name(sender)
if bus_name is not None:
self._on_seeked(bus_name, position_ms)
return
# If we cannot map sender reliably, force a state refresh to converge.
await self.refresh()
def _on_message(self, message: Message) -> bool:
"""Low-level DBus signal handler for player lifecycle/status/seek events."""
try:
if (
message.interface == "org.freedesktop.DBus"
and message.member == "NameOwnerChanged"
):
# a player appeared or disappeared — rescan the full player list
if message.body and str(message.body[0]).startswith(
"org.mpris.MediaPlayer2."
):
asyncio.create_task(self.refresh())
return False
if (
message.interface == "org.freedesktop.DBus.Properties"
and message.member == "PropertiesChanged"
):
# message.sender is a unique connection name, not the well-known bus
# name, so we can't filter by sender here — match by object path and
# interface instead to scope it to MPRIS Player properties only
path_ok = message.path == "/org/mpris/MediaPlayer2"
iface = message.body[0] if message.body else None
if path_ok and iface == "org.mpris.MediaPlayer2.Player":
asyncio.create_task(self.refresh())
return False
if (
message.interface == "org.mpris.MediaPlayer2.Player"
and message.member == "Seeked"
):
sender = message.sender or ""
if sender and message.body:
# MPRIS Seeked position is in microseconds; convert to ms
position_us = int(message.body[0])
asyncio.create_task(
self._handle_seeked_signal(
sender,
max(0, position_us // 1000),
)
)
return False
except Exception as e:
logger.debug(f"PlayerMonitor signal handling error: {e}")
return False
async def get_position_ms(self, bus_name: str) -> Optional[int]:
"""Read player-reported position in milliseconds."""
props = await self._get_player_props(bus_name)
if props is None:
return None
try:
position_var = await getattr(props, "call_get")(
"org.mpris.MediaPlayer2.Player", "Position"
)
if position_var is None:
return None
return max(0, int(position_var.value) // 1000)
except Exception as e:
logger.debug(f"Failed to read position from {bus_name}: {e}")
self._props_cache.pop(bus_name, None)
return None
class ActivePlayerSelector:
@staticmethod
def select(
players: dict[str, PlayerState],
last_active: str | None,
preferred_player: str,
) -> str | None:
"""Select active player by playing state, preferred keyword, and continuity."""
if not players:
return None
all_names = list(players.keys())
playing = [name for name, st in players.items() if st.status == "Playing"]
return pick_active_player(all_names, playing, preferred_player, last_active)
@@ -1,390 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:10:52
Description: Watch orchestration with explicit MVVM role boundaries.
- Model: WatchModel stores domain state.
- ViewModel: WatchViewModel projects model to output-facing state/signature.
- Coordinator: WatchCoordinator wires services and drives async workflows.
"""
from __future__ import annotations
import asyncio
from dataclasses import asdict
from typing import Optional
from loguru import logger
from ..core import LrcManager
from ..lrc import LRCData
from ..models import TrackMeta
from .control import ControlServer
from .fetcher import LyricFetcher
from ..config import AppConfig
from .view import BaseOutput, LyricView, WatchState, WatchStatus
from .player import ActivePlayerSelector, PlayerMonitor, PlayerTarget
from .tracker import PositionTracker
class WatchModel:
"""Model layer that owns watch state and lyric timeline representation."""
offset_ms: int
active_player: str | None
active_track_key: str | None
status: WatchStatus
lyrics: LyricView | None
def __init__(self) -> None:
self.offset_ms = 0
self.active_player: str | None = None
self.active_track_key: str | None = None
self.status: WatchStatus = WatchStatus.IDLE
self.lyrics: LyricView | None = None
def set_lyrics(self, lyrics: LRCData | None) -> None:
"""Update lyrics and rebuild projection once per lyric object change."""
if lyrics is None:
self.lyrics = None
return
self.lyrics = LyricView.from_lrc(lyrics)
def state_signature(self, track: TrackMeta | None, position_ms: int) -> tuple:
"""Build dedupe signature from model state and current lyric cursor."""
# prefer trackid when available; fall back to display name for players
# that don't expose a stable ID (e.g. some MPRIS implementations)
track_key = (
track.trackid
if track and track.trackid
else track.display_name()
if track
else None
)
if self.status != WatchStatus.OK or self.lyrics is None:
# non-OK states don't have cursor position — discriminate by status alone
return ("status", self.status, self.active_player, track_key)
at_ms = position_ms + self.offset_ms
cursor = self.lyrics.signature_cursor(at_ms)
return ("lyrics", self.active_player, track_key, cursor)
class WatchViewModel:
"""ViewModel that projects WatchModel into view-consumable snapshots."""
_model: WatchModel
def __init__(self, model: WatchModel) -> None:
self._model = model
def signature(self, track: TrackMeta | None, position_ms: int) -> tuple:
"""Build dedupe signature for current projected state."""
return self._model.state_signature(track, position_ms)
def state(self, track: TrackMeta | None, position_ms: int) -> WatchState:
"""Project model values into immutable WatchState payload."""
return WatchState(
track=track,
lyrics=self._model.lyrics,
position_ms=position_ms,
offset_ms=self._model.offset_ms,
status=self._model.status,
)
class WatchCoordinator:
"""Application/service orchestration layer for watch runtime."""
_manager: LrcManager
_output: BaseOutput
_config: AppConfig
_model: WatchModel
_view_model: WatchViewModel
_player_hint: str | None
_last_emit_signature: tuple | None
_target: PlayerTarget
_control: ControlServer
_player_monitor: PlayerMonitor
_tracker: PositionTracker
_fetcher: LyricFetcher
_emit_scheduled: bool
_calibration_task: asyncio.Task | None
def __init__(
self,
manager: LrcManager,
output: BaseOutput,
player_hint: str | None,
config: AppConfig,
) -> None:
self._manager = manager
self._output = output
self._config = config
self._model = WatchModel()
self._view_model = WatchViewModel(self._model)
self._player_hint = player_hint
self._last_emit_signature: tuple | None = None
self._emit_scheduled = False
self._calibration_task = None
self._target = PlayerTarget(hint=player_hint)
self._control = ControlServer(socket_path=config.watch.socket_path)
self._player_monitor = PlayerMonitor(
on_players_changed=self._on_player_change,
on_seeked=self._on_seeked,
on_playback_status=self._on_playback_status,
player_blacklist=self._config.general.player_blacklist,
target=self._target,
)
self._tracker = PositionTracker(
poll_position_ms=self._player_monitor.get_position_ms,
config=self._config,
on_tick=self._on_tracker_tick,
)
self._fetcher = LyricFetcher(
fetch_func=self._fetch_lyrics,
on_fetching=self._on_fetching,
on_result=self._on_lyrics_update,
watch_debounce_ms=self._config.watch.debounce_ms,
)
async def run(self) -> bool:
"""Run watch workflow and return success flag."""
logger.info(
"watch session starting (player filter: {})",
self._player_hint or "<none>",
)
if not await self._control.start(self):
return False
try:
await self._player_monitor.start()
await self._tracker.start()
self._calibration_task = asyncio.create_task(self._calibration_loop())
# emit once at startup so outputs don't sit blank until the first event
self._schedule_emit()
# block forever; CancelledError from signal handler exits the loop cleanly
await asyncio.Event().wait()
return True
except asyncio.CancelledError:
return True
except Exception as exc:
logger.exception("watch runtime error: {}", exc)
return False
finally:
logger.info("watch session stopping")
if self._calibration_task is not None:
self._calibration_task.cancel()
await asyncio.gather(self._calibration_task, return_exceptions=True)
self._calibration_task = None
await self._fetcher.stop()
await self._tracker.stop()
await self._player_monitor.close()
await self._control.stop()
async def _calibration_loop(self) -> None:
"""Periodically refresh full MPRIS snapshot as fallback calibration."""
interval = max(0.1, self._config.watch.calibration_interval_s)
while True:
await asyncio.sleep(interval)
try:
await self._player_monitor.refresh()
except asyncio.CancelledError:
raise
except Exception as exc:
logger.debug("mpris calibration refresh failed: {}", exc)
def _active_track(self) -> TrackMeta | None:
"""Return active track metadata from selected player."""
player = self._player_monitor.players.get(self._model.active_player or "")
return player.track if player else None
def _request_fetch_for_active_track(self, reason: str) -> bool:
"""Trigger lyric fetch for active track when needed."""
track = self._active_track()
if track is None:
return False
if self._model.lyrics is not None:
# lyrics already loaded — nothing to fetch
return False
if self._model.status == WatchStatus.FETCHING:
# a fetch is already in flight — don't queue another
return False
logger.info("fetching lyrics for track ({}): {}", reason, track.display_name())
self._fetcher.request(track)
return True
async def _fetch_lyrics(self, track: TrackMeta) -> Optional[LRCData]:
"""Fetch lyrics in worker thread."""
result = await asyncio.to_thread(
self._manager.fetch_for_track,
track,
None,
False,
False,
)
if result and result.lyrics:
return result.lyrics
return None
def _on_player_change(self) -> None:
"""React to monitor player snapshot change."""
prev_player = self._model.active_player
prev_track_key = self._model.active_track_key
selected = ActivePlayerSelector.select(
self._player_monitor.players,
self._model.active_player,
self._config.general.preferred_player,
)
self._model.active_player = selected
if selected != prev_player:
logger.info(
"active player changed: {} -> {}",
prev_player or "<none>",
selected or "<none>",
)
if selected is None:
self._model.status = WatchStatus.IDLE
self._model.active_track_key = None
self._model.set_lyrics(None)
self._schedule_emit()
return
state = self._player_monitor.players.get(selected)
if state is None:
self._model.status = WatchStatus.IDLE
self._model.active_track_key = None
self._model.set_lyrics(None)
self._schedule_emit()
return
track = state.track
track_key = (
track.trackid
if track and track.trackid
else track.display_name()
if track
else None
)
track_changed = track_key != prev_track_key
player_changed = selected != prev_player
if track_changed or player_changed:
# clear stale lyrics immediately so the old track's lines don't flash
self._model.set_lyrics(None)
self._model.active_track_key = track_key
asyncio.create_task(
self._tracker.set_active_player(
selected,
state.status,
track_key,
)
)
# only fetch on identity change — calibration ticks must not re-trigger fetches
started_fetch = False
if track is not None and (player_changed or track_changed):
started_fetch = self._request_fetch_for_active_track("track-changed")
# derive status from what actually happened this tick; preserve FETCHING
# if an in-flight request was started before this snapshot arrived
if self._model.lyrics is not None:
self._model.status = WatchStatus.OK
elif started_fetch:
self._model.status = WatchStatus.FETCHING
elif self._model.status != WatchStatus.FETCHING:
# don't overwrite FETCHING with NO_LYRICS while a request is in flight
self._model.status = WatchStatus.NO_LYRICS
self._schedule_emit()
def _on_seeked(self, bus_name: str, position_ms: int) -> None:
"""Forward seek event to tracker."""
asyncio.create_task(self._tracker.on_seeked(bus_name, position_ms))
def _on_playback_status(self, bus_name: str, status: str) -> None:
"""Forward playback status change to position tracker."""
asyncio.create_task(self._tracker.on_playback_status(bus_name, status))
def _on_tracker_tick(self) -> None:
"""Emit updates from tracker tick only while lyrics are actively rendering."""
if self._model.status == WatchStatus.OK and self._output.position_sensitive:
self._schedule_emit()
def _schedule_emit(self) -> None:
"""Coalesce frequent events into at most one in-flight emit task."""
if self._emit_scheduled:
# a task is already queued; it will pick up the latest model state when it runs
return
self._emit_scheduled = True
asyncio.create_task(self._run_scheduled_emit())
async def _run_scheduled_emit(self) -> None:
"""Run one coalesced emit and release scheduler gate."""
try:
await self._emit_state()
finally:
# release the gate even on error so future events can still schedule
self._emit_scheduled = False
async def _on_fetching(self) -> None:
"""Mark model as fetching and emit state."""
self._model.status = WatchStatus.FETCHING
await self._emit_state()
async def _on_lyrics_update(self, lyrics: Optional[LRCData]) -> None:
"""Update model with fetched lyrics and emit state."""
self._model.set_lyrics(lyrics)
self._model.status = (
WatchStatus.OK if lyrics is not None else WatchStatus.NO_LYRICS
)
logger.info(
"lyrics update result: {}",
"found" if lyrics is not None else "not found",
)
await self._emit_state()
async def _emit_state(self) -> None:
"""Emit output state only when semantic signature changes."""
player = self._player_monitor.players.get(self._model.active_player or "")
track = player.track if player else None
# position=0 for non-position-sensitive outputs so the signature is stable
# across ticks and on_state fires at most once per track+status transition
position = (
await self._tracker.get_position_ms()
if self._output.position_sensitive
else 0
)
signature = self._view_model.signature(track, position)
if signature == self._last_emit_signature:
# state hasn't changed semantically — skip redundant render
return
self._last_emit_signature = signature
state = self._view_model.state(track, position)
await self._output.on_state(state)
def handle_offset(self, delta: int) -> dict:
"""Apply offset update requested by control channel."""
self._model.offset_ms += delta
return {"ok": True, "offset_ms": self._model.offset_ms}
def handle_status(self) -> dict:
"""Return status payload for control channel."""
player = self._player_monitor.players.get(self._model.active_player or "")
track = asdict(player.track) if player and player.track else None
return {
"ok": True,
"offset_ms": self._model.offset_ms,
"player": self._model.active_player,
"track": track,
"position_ms": self._tracker.peek_position_ms(),
"lyrics_status": self._model.status,
}
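The `_emit_scheduled` gate in the coordinator is a small but important pattern: any burst of events within one event-loop tick collapses into a single emit, and the queued task reads the latest model state when it finally runs. A minimal sketch of just that gate:

```python
import asyncio


class EmitCoalescer:
    """Toy version of the _schedule_emit gate: many schedule() calls in one
    event-loop tick collapse into a single emit."""

    def __init__(self) -> None:
        self._scheduled = False
        self.emits = 0

    def schedule(self) -> None:
        if self._scheduled:
            return  # a task is already queued; it will see the latest state
        self._scheduled = True
        asyncio.create_task(self._run())

    async def _run(self) -> None:
        try:
            self.emits += 1  # stand-in for the real render/emit
        finally:
            self._scheduled = False  # release gate even on error


async def main() -> int:
    coalescer = EmitCoalescer()
    for _ in range(5):
        coalescer.schedule()  # burst of events in one tick
    await asyncio.sleep(0)  # yield so the queued task runs
    return coalescer.emits


print(asyncio.run(main()))  # 1
```

Releasing the gate in `finally` matters: if an emit raises, future events can still schedule instead of being silently dropped forever.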
@@ -1,156 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:13:35
Description: Playback position tracking utilities for watch mode.
"""
from __future__ import annotations
import asyncio
import time
from typing import Awaitable, Callable, Optional
from ..config import AppConfig
class PositionTracker:
"""Maintains an estimated playback position from seek/status events plus local clock."""
_config: AppConfig
_poll_position_ms: Callable[[str], Awaitable[Optional[int]]]
_active_player: str | None
_is_playing: bool
_track_key: str | None
_position_ms: int
_last_tick: float
_fast_task: asyncio.Task | None
_on_tick: Callable[[], None] | None
_lock: asyncio.Lock
def __init__(
self,
poll_position_ms: Callable[[str], Awaitable[Optional[int]]],
config: AppConfig,
on_tick: Callable[[], None] | None = None,
) -> None:
"""Initialize tracker with position polling callback and runtime options."""
self._config = config
self._poll_position_ms = poll_position_ms
self._on_tick = on_tick
self._active_player: str | None = None
self._is_playing = False
self._track_key: str | None = None
self._position_ms = 0
self._last_tick = time.monotonic()
self._fast_task: asyncio.Task | None = None
self._lock = asyncio.Lock()
async def start(self) -> None:
"""Start local monotonic position ticking task."""
self._last_tick = time.monotonic()
self._fast_task = asyncio.create_task(self._fast_loop())
async def stop(self) -> None:
"""Stop tracker tasks and await clean cancellation."""
tasks = [t for t in (self._fast_task,) if t is not None]
for task in tasks:
task.cancel()
if tasks:
await asyncio.gather(*tasks, return_exceptions=True)
self._fast_task = None
async def set_active_player(
self,
bus_name: str | None,
playback_status: str,
track_key: str | None,
) -> None:
"""Switch active source and calibrate position once when entering a new playing track."""
should_calibrate_now = False
async with self._lock:
player_changed = self._active_player != bus_name
track_changed = self._track_key != track_key
was_playing = self._is_playing
self._active_player = bus_name
self._is_playing = playback_status == "Playing"
status_changed_to_playing = self._is_playing and not was_playing
if player_changed or track_changed:
# reset to 0 so stale position from a previous track doesn't bleed through
self._position_ms = 0
# poll MPRIS on any identity change (player, track, or resume) so a paused
# mid-song player gets its position anchored immediately; calibration-loop
# ticks are excluded because they pass the same player/track/status
should_calibrate_now = bool(self._active_player) and (
player_changed or track_changed or status_changed_to_playing
)
self._track_key = track_key
self._last_tick = time.monotonic()
if should_calibrate_now and self._active_player:
await self._calibrate_once(self._active_player)
async def on_seeked(self, bus_name: str, position_ms: int) -> None:
"""Apply explicit seek position update for active player."""
async with self._lock:
if bus_name != self._active_player:
return
self._position_ms = max(0, position_ms)
self._last_tick = time.monotonic()
async def on_playback_status(self, bus_name: str, playback_status: str) -> None:
"""Update playing state and calibrate once on paused-to-playing transition."""
should_calibrate_now = False
async with self._lock:
if bus_name != self._active_player:
return
was_playing = self._is_playing
self._is_playing = playback_status == "Playing"
# re-anchor last_tick when resuming so the gap while paused isn't counted
should_calibrate_now = self._is_playing and not was_playing
self._last_tick = time.monotonic()
if should_calibrate_now:
await self._calibrate_once(bus_name)
async def _fast_loop(self) -> None:
"""Advance position by monotonic clock while active player is playing."""
interval = self._config.watch.position_tick_ms / 1000.0
while True:
await asyncio.sleep(interval)
should_notify = False
async with self._lock:
now = time.monotonic()
if self._is_playing and self._active_player:
# accumulate elapsed wall-clock time as playback position;
# seek events and calibration snapshots correct drift periodically
delta_ms = int((now - self._last_tick) * 1000)
if delta_ms > 0:
self._position_ms += delta_ms
should_notify = True
# always update last_tick so paused time isn't counted on resume
self._last_tick = now
if should_notify and self._on_tick is not None:
self._on_tick()
async def _calibrate_once(self, bus_name: str) -> None:
"""Poll player-reported position once and synchronize local tracker state."""
polled = await self._poll_position_ms(bus_name)
if polled is None:
return
async with self._lock:
if bus_name != self._active_player:
return
# Drift correction is signal-assisted; polling is fallback.
self._position_ms = max(0, polled)
self._last_tick = time.monotonic()
async def get_position_ms(self) -> int:
"""Return current tracked position in milliseconds."""
async with self._lock:
return max(0, int(self._position_ms))
def peek_position_ms(self) -> int:
"""Return current tracked position without awaiting lock (best-effort snapshot)."""
return max(0, int(self._position_ms))
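The clock math in `_fast_loop` reduces to a simple rule: accumulate elapsed monotonic time into the position only while playing, and always re-anchor `last_tick` so paused intervals are never counted on resume. A deterministic, synchronous sketch of that rule using injected timestamps instead of `time.monotonic()`:

```python
class PositionEstimator:
    """Synchronous sketch of the tracker's clock math: advance position by
    elapsed time only while playing; always re-anchor last_tick so paused
    time is never counted."""

    def __init__(self, now: float) -> None:
        self.position_ms = 0
        self.playing = False
        self._last_tick = now

    def tick(self, now: float) -> None:
        if self.playing:
            self.position_ms += int((now - self._last_tick) * 1000)
        self._last_tick = now  # paused intervals are discarded here


est = PositionEstimator(now=0.0)
est.playing = True
est.tick(now=1.5)   # 1.5 s of playback
est.playing = False
est.tick(now=4.0)   # 2.5 s paused: not counted
est.playing = True
est.tick(now=5.0)   # 1.0 s of playback after resume
print(est.position_ms)  # 2500
```

In the real tracker, seek events and one-shot MPRIS `Position` polls overwrite `position_ms` to correct the drift this estimate inevitably accumulates.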
@@ -1,102 +0,0 @@
"""Output abstraction types for watch mode rendering."""
from __future__ import annotations
from abc import ABC, abstractmethod
from bisect import bisect_right
from dataclasses import dataclass
from enum import Enum
from typing import Optional
from ...lrc import LRCData, LyricLine
from ...models import TrackMeta
class WatchStatus(str, Enum):
IDLE = "idle"
FETCHING = "fetching"
OK = "ok"
NO_LYRICS = "no_lyrics"
@dataclass(slots=True, frozen=True)
class LyricView:
"""View-ready immutable lyric data projected from one normalized LRC object."""
normalized: LRCData
lines: tuple[str, ...]
timed_line_entries: tuple[tuple[int, int], ...]
timestamps: tuple[int, ...]
@staticmethod
def from_lrc(lyrics: LRCData) -> "LyricView":
"""Build a view projection once from normalized lyrics."""
normalized = lyrics.normalize()
lines: list[str] = []
entries: list[tuple[int, int]] = []
line_index = 0
for line in normalized.lines:
if not isinstance(line, LyricLine):
# skip metadata/tag lines that carry no renderable text
continue
text = line.text
lines.append(text)
# use first timestamp; clamp to 0 so bisect always works with non-negative ms
timestamp = line.line_times_ms[0] if line.line_times_ms else 0
entries.append((max(0, timestamp), line_index))
line_index += 1
# extract timestamps into a flat tuple so bisect_right can binary-search it
timestamps = tuple(timestamp for timestamp, _ in entries)
return LyricView(
normalized=normalized,
lines=tuple(lines),
timed_line_entries=tuple(entries),
timestamps=timestamps,
)
def signature_cursor(self, at_ms: int) -> tuple:
"""Build a stable cursor signature for dedupe decisions."""
if not self.timed_line_entries:
# untimed lyrics: signature is the full line set — changes only on track change
return ("plain", self.lines)
first_ts = self.timed_line_entries[0][0]
if at_ms < first_ts:
# playback hasn't reached the first lyric yet; hold until it does
return ("before_first", first_ts)
# bisect_right gives the insertion point after equal timestamps, so -1 gives
# the last line whose timestamp <= at_ms (i.e. the currently active line)
idx = bisect_right(self.timestamps, at_ms) - 1
if idx < 0:
idx = 0
ts, line_idx = self.timed_line_entries[idx]
text = self.lines[line_idx] if line_idx < len(self.lines) else ""
return ("ok", idx, ts, text)
@dataclass(slots=True)
class WatchState:
"""Immutable snapshot payload delivered from session to output implementations."""
track: Optional[TrackMeta]
lyrics: Optional[LyricView]
position_ms: int
offset_ms: int
status: WatchStatus
class BaseOutput(ABC):
# When False, the coordinator passes position=0 for signature computation and
# skips tracker-tick-driven emits, so on_state fires at most once per
# track+status transition rather than on every lyric cursor advance.
position_sensitive: bool = True
@abstractmethod
async def on_state(self, state: WatchState) -> None:
"""Render or deliver one watch state frame."""
...
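The `bisect_right - 1` pattern that `signature_cursor` relies on can be shown in isolation. The timestamps and lines below are made-up illustration data, not project fixtures:

```python
from bisect import bisect_right

timestamps = (0, 1000, 2500, 4000)  # ms, sorted ascending
lines = ("intro", "hello", "world", "outro")


def current_line(at_ms: int) -> str:
    # bisect_right returns the insertion point *after* equal timestamps,
    # so -1 yields the last line whose timestamp <= at_ms
    idx = bisect_right(timestamps, at_ms) - 1
    return lines[max(0, idx)]
```

At an exact timestamp the line becomes active immediately, which is why `bisect_right` is used instead of `bisect_left`.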
@@ -1,95 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:15:17
Description: Pipe output implementation for watch mode.
"""
from __future__ import annotations
from bisect import bisect_right
from dataclasses import dataclass
import sys
from . import BaseOutput, WatchState, WatchStatus
@dataclass(slots=True)
class PipeOutput(BaseOutput):
"""Render a fixed lyric context window to stdout for streaming/pipe usage."""
before: int = 0
after: int = 0
no_newline: bool = False
def _window_size(self) -> int:
"""Return rendered lyric window size."""
return self.before + 1 + self.after
def _render_status(self, message: str) -> list[str]:
"""Render centered status line in fixed-size window."""
lines = [""] * self._window_size()
lines[self.before] = message
return lines
def _render_lyrics(self, state: WatchState) -> list[str]:
"""Render context lines centered on current timed lyric entry."""
if state.lyrics is None:
return self._render_status("[no lyrics]")
all_lines = state.lyrics.lines
if not all_lines:
return self._render_status("[no lyrics]")
entries = state.lyrics.timed_line_entries
effective_ms = state.position_ms + state.offset_ms
current_line_idx: int | None
if entries and effective_ms < entries[0][0]:
# playback hasn't reached the first lyric yet; treat current slot as empty
# so the after-window can show upcoming lines without a "current" anchor
current_line_idx = None
else:
if not entries:
current_line_idx = 0
else:
# bisect_right - 1 gives the last entry whose timestamp <= effective_ms
current_entry_idx = (
bisect_right(state.lyrics.timestamps, effective_ms) - 1
)
if current_entry_idx < 0:
current_entry_idx = 0
current_line_idx = entries[current_entry_idx][1]
out: list[str] = []
for rel in range(-self.before, self.after + 1):
if current_line_idx is None:
# before-first-timestamp: before/current slots are empty; after slots
# show lines starting from index 0 (rel=1 → line 0, rel=2 → line 1, …)
if rel <= 0:
out.append("")
continue
line_idx = rel - 1
else:
line_idx = current_line_idx + rel
if 0 <= line_idx < len(all_lines):
out.append(all_lines[line_idx])
else:
out.append("")
return out
async def on_state(self, state: WatchState) -> None:
"""Render and flush one frame for the latest watch state."""
if state.status == WatchStatus.FETCHING:
lines = self._render_status("[fetching...]")
elif state.status == WatchStatus.NO_LYRICS:
lines = self._render_status("[no lyrics]")
elif state.status == WatchStatus.IDLE:
lines = self._render_status("[idle]")
else:
lines = self._render_lyrics(state)
for line in lines:
# no_newline mode lets callers use \r to overwrite the previous frame in-place
sys.stdout.write(line + ("\n" if not self.no_newline else ""))
sys.stdout.flush()
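The fixed-size context window in `_render_lyrics` boils down to one loop; here is a standalone sketch (hypothetical `window` helper) of how before/after slots are filled and padded with empty strings at the edges:

```python
def window(lines: list[str], current: int, before: int, after: int) -> list[str]:
    """Fixed-size context window around the current line, padded with ""."""
    out: list[str] = []
    for rel in range(-before, after + 1):
        idx = current + rel
        # out-of-range slots render as empty lines so the window size is stable
        out.append(lines[idx] if 0 <= idx < len(lines) else "")
    return out
```

A constant window size matters for pipe consumers that overwrite frames in place.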
@@ -1,46 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:15:31
Description: Print output implementation for watch mode (one shot per track).
"""
from __future__ import annotations
import sys
from . import BaseOutput, WatchState, WatchStatus
class PrintOutput(BaseOutput):
"""Emit full lyrics to stdout once per track transition, then stay silent.
Deduplication is delegated to the coordinator via position_sensitive=False:
the coordinator uses a fixed position for signatures, so on_state fires at
most once per (status, track_key) transition rather than on every tick.
"""
# fixed position=0 in signatures → coordinator calls on_state only on
# track/status transitions, never on lyric cursor advances
position_sensitive = False
plain: bool
def __init__(self, plain: bool = False) -> None:
self.plain = plain
async def on_state(self, state: WatchState) -> None:
if state.status in (WatchStatus.FETCHING, WatchStatus.IDLE):
return
if state.status == WatchStatus.NO_LYRICS:
# emit a blank line as a machine-readable sentinel for "track changed, no lyrics"
sys.stdout.write("\n")
sys.stdout.flush()
elif state.status == WatchStatus.OK and state.lyrics is not None:
lrc = state.lyrics.normalized
if self.plain:
text = lrc.to_plain()
else:
text = str(lrc)
sys.stdout.write(text + "\n")
sys.stdout.flush()
@@ -1,3 +0,0 @@
from lrx_cli.config import enable_debug
enable_debug()
@@ -1,4 +0,0 @@
{
"syncedLyrics": "[00:01.00]s1\n[00:02.00]s2",
"plainLyrics": "p1\np2"
}
@@ -1,20 +0,0 @@
[
{
"id": 1,
"trackName": "My Love",
"artistName": "Westlife",
"albumName": "Coast To Coast",
"duration": 231.847,
"syncedLyrics": "[00:01.00]hello",
"plainLyrics": "hello"
},
{
"id": 2,
"trackName": "My Love (Live)",
"artistName": "Westlife",
"albumName": "Live",
"duration": 262.0,
"syncedLyrics": "",
"plainLyrics": "hello"
}
]
@@ -1,28 +0,0 @@
{
"message": {
"body": {
"macro_calls": {
"track.richsync.get": {
"message": {
"header": {
"status_code": 200
},
"body": {
"richsync": {
"richsync_body": "[{\"ts\": 1.2, \"x\": \"hello\"}, {\"ts\": 2.34, \"x\": \"world\"}]"
}
}
}
},
"track.subtitles.get": {
"message": {
"header": {
"status_code": 404
},
"body": {}
}
}
}
}
}
}
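The `richsync_body` field is itself a JSON-encoded string of `{"ts": seconds, "x": text}` entries. Converting it to LRC time tags might look like the sketch below (function name is illustrative, not the project's parser):

```python
import json


def richsync_to_lrc(richsync_body: str) -> str:
    """Convert a Musixmatch richsync body string to LRC-tagged lines."""
    entries = json.loads(richsync_body)
    out = []
    for entry in entries:
        total = float(entry["ts"])  # timestamp in seconds
        minutes, seconds = divmod(total, 60)
        out.append(f"[{int(minutes):02d}:{seconds:05.2f}]{entry['x']}")
    return "\n".join(out)
```

The subtitles fixture below uses a different shape (`text` plus `time.total`) but the same seconds-to-`[mm:ss.xx]` conversion applies.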
@@ -1,32 +0,0 @@
{
"message": {
"body": {
"macro_calls": {
"track.richsync.get": {
"message": {
"header": {
"status_code": 404
},
"body": {}
}
},
"track.subtitles.get": {
"message": {
"header": {
"status_code": 200
},
"body": {
"subtitle_list": [
{
"subtitle": {
"subtitle_body": "[{\"text\": \"hello\", \"time\": {\"total\": 1.1}}, {\"text\": \"world\", \"time\": {\"total\": 2.22}}]"
}
}
]
}
}
}
}
}
}
}
@@ -1,20 +0,0 @@
{
"message": {
"body": {
"track_list": [
{
"track": {
"commontrack_id": 123,
"track_length": 232,
"has_subtitles": 1,
"has_richsync": 0,
"track_name": "My Love",
"artist_name": "Westlife",
"album_name": "Coast To Coast",
"instrumental": 0
}
}
]
}
}
}
@@ -1,5 +0,0 @@
{
"lrc": {
"lyric": "[00:01.00]line1\n[00:02.00]line2"
}
}
@@ -1,32 +0,0 @@
{
"result": {
"songs": [
{
"id": 2080607,
"name": "My Love",
"dt": 231941,
"ar": [
{
"name": "Westlife"
}
],
"al": {
"name": "Unbreakable"
}
},
{
"id": 572412968,
"name": "My Love",
"dt": 231000,
"ar": [
{
"name": "Westlife"
}
],
"al": {
"name": "Pure... Love"
}
}
]
}
}
@@ -1,6 +0,0 @@
{
"code": 0,
"data": {
"lyric": "[00:01.00]hello\n[00:02.00]world"
}
}
@@ -1,33 +0,0 @@
{
"code": 0,
"data": {
"list": [
{
"mid": "mid1",
"interval": 232,
"name": "My Love",
"singer": [
{
"name": "Westlife"
}
],
"album": {
"name": "Coast To Coast"
}
},
{
"mid": "mid2",
"interval": 248,
"name": "My Love (Album Version)",
"singer": [
{
"name": "Little Texas"
}
],
"album": {
"name": "Greatest Hits"
}
}
]
}
}
@@ -1,9 +0,0 @@
{
"lyrics": {
"syncType": "LINE_SYNCED",
"lines": [
{"startTimeMs": "1000", "words": "hello"},
{"startTimeMs": "2500", "words": "world"}
]
}
}
@@ -1,9 +0,0 @@
{
"lyrics": {
"syncType": "UNSYNCED",
"lines": [
{"startTimeMs": "0", "words": "plain one"},
{"startTimeMs": "0", "words": "plain two"}
]
}
}
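The two Spotify fixtures differ only in `syncType`: the payload carries `startTimeMs` as strings either way, and the flag decides whether those timestamps are meaningful. A parser sketch distinguishing the two shapes (function name is illustrative, not the project's `_parse_spotify_lyrics`):

```python
def spotify_lines_to_text(payload: dict) -> str:
    """Render a Spotify lyrics payload as LRC (synced) or plain text (unsynced)."""
    lyrics = payload["lyrics"]
    lines = lyrics["lines"]
    if lyrics["syncType"] != "LINE_SYNCED":
        # unsynced: startTimeMs values are meaningless, keep plain text only
        return "\n".join(line["words"] for line in lines)
    out = []
    for line in lines:
        ms = int(line["startTimeMs"])  # note: the API serializes this as a string
        minutes, rem_ms = divmod(ms, 60_000)
        out.append(f"[{minutes:02d}:{rem_ms / 1000:05.2f}]{line['words']}")
    return "\n".join(out)
```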
@@ -1,18 +0,0 @@
import pytest
from lrx_cli.config import load_config
_credentials = load_config().credentials
requires_spotify = pytest.mark.skipif(
not _credentials.spotify_sp_dc,
reason="requires credentials.spotify_sp_dc in config.toml",
)
requires_qq_music = pytest.mark.skipif(
not _credentials.qq_music_api_url,
reason="requires credentials.qq_music_api_url in config.toml",
)
requires_musixmatch_token = pytest.mark.skipif(
not _credentials.musixmatch_usertoken,
reason="requires credentials.musixmatch_usertoken in config.toml",
)
@@ -1,559 +0,0 @@
from __future__ import annotations
import sqlite3
from pathlib import Path
import pytest
from lrx_cli.cache import (
CacheEngine,
SLOT_SYNCED,
SLOT_UNSYNCED,
_generate_key,
)
from lrx_cli.config import DURATION_TOLERANCE_MS
from lrx_cli.models import CacheStatus, LyricResult, TrackMeta
from lrx_cli.lrc import LRCData
def _track(
*,
artist: str | None = "Artist",
title: str | None = "Song",
album: str | None = "Album",
length: int | None = 180000,
trackid: str | None = None,
url: str | None = None,
) -> TrackMeta:
return TrackMeta(
artist=artist,
title=title,
album=album,
length=length,
trackid=trackid,
url=url,
)
def _result(
status: CacheStatus,
lyrics: str | None,
source: str,
) -> LyricResult:
return LyricResult(status=status, lyrics=LRCData(lyrics), source=source)
@pytest.fixture
def cache_db(tmp_path: Path) -> CacheEngine:
db_path = tmp_path / "cache.db"
return CacheEngine(str(db_path))
def test_generate_key_uses_spotify_trackid_and_url_fallback() -> None:
spotify_track = _track(
trackid="abc123", artist=None, title=None, album=None, length=None
)
local_track = _track(
artist=None, title=None, album=None, length=None, url="file:///x.flac"
)
assert _generate_key(spotify_track, "spotify") == "spotify:abc123"
assert _generate_key(local_track, "local") == "local:url:file:///x.flac"
def test_generate_key_raises_when_metadata_missing() -> None:
with pytest.raises(ValueError):
_generate_key(
_track(artist=None, title=None, album=None, length=None, url=None), "lrclib"
)
def test_migrate_adds_confidence_version_and_boosts_unsynced(tmp_path: Path) -> None:
"""Legacy single-row cache is migrated to slot rows.
Expected behavior:
- add positive_kind and confidence_version
- boost SUCCESS_UNSYNCED confidence by +10 with cap at 100
- keep SUCCESS_SYNCED confidence unchanged
"""
db_path = tmp_path / "legacy-cache.db"
with sqlite3.connect(db_path) as conn:
conn.execute(
"""
CREATE TABLE cache (
key TEXT PRIMARY KEY,
source TEXT NOT NULL,
status TEXT NOT NULL,
lyrics TEXT,
created_at INTEGER NOT NULL,
expires_at INTEGER,
artist TEXT,
title TEXT,
album TEXT,
length INTEGER,
confidence REAL
)
"""
)
conn.execute(
"""
INSERT INTO cache
(key, source, status, lyrics, created_at, expires_at, artist, title, album, length, confidence)
VALUES
('u1', 's1', 'SUCCESS_UNSYNCED', 'u1', 1, NULL, 'A', 'T', 'AL', 180000, 85.0),
('u2', 's2', 'SUCCESS_UNSYNCED', 'u2', 1, NULL, 'A', 'T', 'AL', 180000, 98.0),
('s1', 's3', 'SUCCESS_SYNCED', 's1', 1, NULL, 'A', 'T', 'AL', 180000, 70.0)
"""
)
conn.commit()
CacheEngine(str(db_path))
with sqlite3.connect(db_path) as conn:
cols = {r[1] for r in conn.execute("PRAGMA table_info(cache)").fetchall()}
rows = conn.execute(
"SELECT key, positive_kind, status, confidence, confidence_version FROM cache ORDER BY key, positive_kind"
).fetchall()
assert "positive_kind" in cols
assert "confidence_version" in cols
by_key = {
(k, slot): (status, confidence, version)
for k, slot, status, confidence, version in rows
}
assert by_key[("u1", SLOT_UNSYNCED)] == ("SUCCESS_UNSYNCED", 95.0, 1)
assert by_key[("u2", SLOT_UNSYNCED)] == ("SUCCESS_UNSYNCED", 100.0, 1)
assert by_key[("s1", SLOT_SYNCED)] == ("SUCCESS_SYNCED", 70.0, 1)
def test_migrate_negative_row_splits_into_two_slot_rows(tmp_path: Path) -> None:
db_path = tmp_path / "legacy-negative.db"
with sqlite3.connect(db_path) as conn:
conn.execute(
"""
CREATE TABLE cache (
key TEXT PRIMARY KEY,
source TEXT NOT NULL,
status TEXT NOT NULL,
lyrics TEXT,
created_at INTEGER NOT NULL,
expires_at INTEGER,
artist TEXT,
title TEXT,
album TEXT,
length INTEGER,
confidence REAL
)
"""
)
conn.execute(
"""
INSERT INTO cache
(key, source, status, lyrics, created_at, expires_at, artist, title, album, length, confidence)
VALUES
('n1', 's1', 'NOT_FOUND', NULL, 1, NULL, 'A', 'T', 'AL', 180000, 0.0)
"""
)
conn.commit()
CacheEngine(str(db_path))
with sqlite3.connect(db_path) as conn:
rows = conn.execute(
"SELECT key, positive_kind, status FROM cache ORDER BY positive_kind"
).fetchall()
assert rows == [
("n1", SLOT_SYNCED, "NOT_FOUND"),
("n1", SLOT_UNSYNCED, "NOT_FOUND"),
]
def test_set_and_get_roundtrip_with_ttl(
monkeypatch: pytest.MonkeyPatch, cache_db: CacheEngine
) -> None:
monkeypatch.setattr("lrx_cli.cache.time.time", lambda: 1_000_000)
track = _track()
cache_db.set(
track,
"lrclib",
_result(CacheStatus.SUCCESS_SYNCED, "[00:01.00]line", "lrclib"),
ttl_seconds=120,
)
cached_rows = cache_db.get_all(track, "lrclib")
assert len(cached_rows) == 1
cached = cached_rows[0]
assert cached.status is CacheStatus.SUCCESS_SYNCED
assert str(cached.lyrics) == "[00:01.00]line"
assert cached.source == "lrclib"
assert cached.ttl == 120
def test_get_expired_entry_returns_none_and_removes_row(
monkeypatch: pytest.MonkeyPatch, cache_db: CacheEngine
) -> None:
track = _track()
monkeypatch.setattr("lrx_cli.cache.time.time", lambda: 2_000_000)
cache_db.set(
track,
"netease",
_result(CacheStatus.SUCCESS_UNSYNCED, "line", "netease"),
ttl_seconds=10,
)
monkeypatch.setattr("lrx_cli.cache.time.time", lambda: 2_000_020)
cached_rows = cache_db.get_all(track, "netease")
assert cached_rows == []
assert cache_db.query_all() == []
def test_set_negative_without_slot_writes_both_slots(cache_db: CacheEngine) -> None:
track = _track()
cache_db.set(
track, "src", _result(CacheStatus.NOT_FOUND, None, "src"), ttl_seconds=60
)
with sqlite3.connect(cache_db.db_path) as conn:
rows = conn.execute(
"SELECT positive_kind, status FROM cache ORDER BY positive_kind"
).fetchall()
assert rows == [
(SLOT_SYNCED, CacheStatus.NOT_FOUND.value),
(SLOT_UNSYNCED, CacheStatus.NOT_FOUND.value),
]
def test_get_backfills_missing_length_when_track_provides_it(
cache_db: CacheEngine,
) -> None:
track_without_length = _track(
trackid="spotify-track-1",
artist=None,
title=None,
album=None,
length=None,
)
cache_db.set(
track_without_length,
"spotify",
_result(CacheStatus.SUCCESS_SYNCED, "line", "spotify"),
)
track_with_length = _track(
trackid="spotify-track-1",
artist=None,
title=None,
album=None,
length=200000,
)
cached_rows = cache_db.get_all(track_with_length, "spotify")
assert cached_rows
with sqlite3.connect(cache_db.db_path) as conn:
row = conn.execute("SELECT length FROM cache LIMIT 1").fetchone()
assert row is not None
assert row[0] == 200000
def test_get_best_prefers_synced_and_skips_negative(
cache_db: CacheEngine,
) -> None:
track = _track()
cache_db.set(
track,
"source-a",
_result(CacheStatus.NOT_FOUND, None, "source-a"),
)
cache_db.set(
track,
"source-b",
_result(CacheStatus.SUCCESS_UNSYNCED, "unsynced", "source-b"),
)
cache_db.set(
track,
"source-c",
_result(CacheStatus.SUCCESS_SYNCED, "synced", "source-c"),
)
best = cache_db.get_best(track, ["source-a", "source-b", "source-c"])
assert best is not None
assert best.status is CacheStatus.SUCCESS_SYNCED
assert str(best.lyrics) == "synced"
def test_clear_track_and_clear_all_affect_expected_rows(cache_db: CacheEngine) -> None:
track_a = _track(artist="A", title="T", album="X")
track_b = _track(artist="B", title="T", album="X")
cache_db.set(track_a, "s1", _result(CacheStatus.SUCCESS_SYNCED, "a1", "s1"))
cache_db.set(track_a, "s2", _result(CacheStatus.SUCCESS_UNSYNCED, "a2", "s2"))
cache_db.set(track_b, "s1", _result(CacheStatus.SUCCESS_SYNCED, "b1", "s1"))
cache_db.clear_track(track_a)
rows_after_track_clear = cache_db.query_all()
assert len(rows_after_track_clear) == 1
assert rows_after_track_clear[0]["artist"] == "B"
cache_db.clear_all()
assert cache_db.query_all() == []
def test_prune_removes_only_expired_rows(
monkeypatch: pytest.MonkeyPatch, cache_db: CacheEngine
) -> None:
track = _track()
monkeypatch.setattr("lrx_cli.cache.time.time", lambda: 3_000_000)
cache_db.set(
track,
"s-expired",
_result(CacheStatus.SUCCESS_SYNCED, "x", "s-expired"),
ttl_seconds=1,
)
cache_db.set(
track,
"s-active",
_result(CacheStatus.SUCCESS_SYNCED, "y", "s-active"),
ttl_seconds=100,
)
monkeypatch.setattr("lrx_cli.cache.time.time", lambda: 3_000_010)
deleted = cache_db.prune()
assert deleted == 1
rows = cache_db.query_all()
assert len(rows) == 1
assert rows[0]["source"] == "s-active"
def test_find_best_positive_returns_status_specific_results(
cache_db: CacheEngine,
) -> None:
track = _track(artist="Artist", title="Song", album="Album")
cache_db.set(track, "u-high", _result(CacheStatus.SUCCESS_UNSYNCED, "u", "u-high"))
cache_db.set(track, "s-low", _result(CacheStatus.SUCCESS_SYNCED, "s", "s-low"))
cache_db.update_confidence(track, 95.0, "u-high")
cache_db.update_confidence(track, 70.0, "s-low")
best_synced = cache_db.find_best_positive(track, CacheStatus.SUCCESS_SYNCED)
assert best_synced is not None
assert best_synced.status is CacheStatus.SUCCESS_SYNCED
assert str(best_synced.lyrics) == "s"
assert best_synced.source == "cache-search"
best_unsynced = cache_db.find_best_positive(track, CacheStatus.SUCCESS_UNSYNCED)
assert best_unsynced is not None
assert best_unsynced.status is CacheStatus.SUCCESS_UNSYNCED
assert str(best_unsynced.lyrics) == "u"
def test_search_by_meta_fuzzy_rules_and_duration_sorting(cache_db: CacheEngine) -> None:
# Same logical title/artist after normalization, different length quality.
base = _track(
artist="A / B",
title="HelloWorld!",
album="Album",
length=200000,
)
close_synced = _track(
artist="B vs. A",
title="hello world",
album="Else",
length=200500,
)
close_unsynced = _track(
artist="A feat. C / B",
title="HELLO WORLD",
album="Else2",
length=201000,
)
unknown_len = _track(
artist="A & B",
title="Hello World",
album="Else3",
length=None,
)
far_len = _track(
artist="A / B",
title="Hello World",
album="Else4",
length=200000 + DURATION_TOLERANCE_MS + 1,
)
cache_db.set(base, "seed", _result(CacheStatus.SUCCESS_SYNCED, "seed", "seed"))
cache_db.set(
close_synced,
"close-synced",
_result(CacheStatus.SUCCESS_SYNCED, "cs", "close-synced"),
)
cache_db.set(
close_unsynced,
"close-unsynced",
_result(CacheStatus.SUCCESS_UNSYNCED, "cu", "close-unsynced"),
)
cache_db.set(
unknown_len,
"unknown-len",
_result(CacheStatus.SUCCESS_SYNCED, "ul", "unknown-len"),
)
cache_db.set(
far_len,
"far-len",
_result(CacheStatus.SUCCESS_SYNCED, "fl", "far-len"),
)
# Negative status should never appear in search results.
cache_db.set(
_track(artist="A / B", title="Hello World", album="Else5", length=200000),
"negative",
_result(CacheStatus.NOT_FOUND, None, "negative"),
)
rows = cache_db.search_by_meta(
title=" hello world ",
length=200000,
)
sources = [r["source"] for r in rows]
assert "negative" not in sources
assert "far-len" not in sources
assert "close-unsynced" in sources
# Sorted by duration diff, then confidence for equal diff.
assert sources[0] == "seed"
assert sources[1] == "close-synced"
assert sources[2] == "close-unsynced"
# Unknown length remains a candidate with fallback distance priority.
assert sources[-1] == "unknown-len"
def test_update_confidence_targets_specific_source(cache_db: CacheEngine) -> None:
track = _track(artist="A", title="T", album="AL")
cache_db.set(track, "s1", _result(CacheStatus.SUCCESS_SYNCED, "x", "s1"))
cache_db.set(track, "s2", _result(CacheStatus.SUCCESS_UNSYNCED, "y", "s2"))
updated = cache_db.update_confidence(track, 75.0, "s1")
assert updated == 1
rows = {r["source"]: r for r in cache_db.query_track(track)}
assert rows["s1"]["confidence"] == 75.0
assert rows["s2"]["confidence"] == 100.0 # unchanged default
def test_update_confidence_updates_both_slots_for_same_source(
cache_db: CacheEngine,
) -> None:
track = _track(artist="A", title="T", album="AL")
cache_db.set(
track,
"src",
_result(CacheStatus.SUCCESS_SYNCED, "sync", "src"),
positive_kind=SLOT_SYNCED,
)
cache_db.set(
track,
"src",
_result(CacheStatus.SUCCESS_UNSYNCED, "unsync", "src"),
positive_kind=SLOT_UNSYNCED,
)
updated = cache_db.update_confidence(track, 66.0, "src")
assert updated == 2
with sqlite3.connect(cache_db.db_path) as conn:
rows = conn.execute(
"SELECT positive_kind, confidence FROM cache WHERE source = 'src' ORDER BY positive_kind"
).fetchall()
assert rows == [(SLOT_SYNCED, 66.0), (SLOT_UNSYNCED, 66.0)]
def test_update_confidence_returns_zero_for_missing_source(
cache_db: CacheEngine,
) -> None:
track = _track(artist="A", title="T", album="AL")
cache_db.set(track, "s1", _result(CacheStatus.SUCCESS_SYNCED, "x", "s1"))
assert cache_db.update_confidence(track, 50.0, "nonexistent") == 0
def test_update_confidence_returns_zero_for_empty_track(
cache_db: CacheEngine,
) -> None:
empty = _track(artist=None, title=None, album=None, length=None)
assert cache_db.update_confidence(empty, 50.0, "s1") == 0
def test_credential_set_and_get_roundtrip(cache_db: CacheEngine) -> None:
cache_db.set_credential("spotify", {"access_token": "tok", "expires_in": 3600})
result = cache_db.get_credential("spotify")
assert result == {"access_token": "tok", "expires_in": 3600}
def test_credential_get_returns_none_on_miss(cache_db: CacheEngine) -> None:
assert cache_db.get_credential("nonexistent") is None
def test_credential_expires_at_respected(
monkeypatch: pytest.MonkeyPatch, cache_db: CacheEngine
) -> None:
# Store with expiry 1000 ms in the future
now_ms = 5_000_000_000
monkeypatch.setattr("lrx_cli.cache.time.time", lambda: now_ms / 1000)
cache_db.set_credential(
"musixmatch", {"user_token": "abc"}, expires_at_ms=now_ms + 1000
)
# Still valid
assert cache_db.get_credential("musixmatch") == {"user_token": "abc"}
# Advance past expiry
monkeypatch.setattr("lrx_cli.cache.time.time", lambda: (now_ms + 2000) / 1000)
assert cache_db.get_credential("musixmatch") is None
def test_credential_no_expiry_never_expires(
monkeypatch: pytest.MonkeyPatch, cache_db: CacheEngine
) -> None:
cache_db.set_credential("spotify", {"token": "forever"}, expires_at_ms=None)
monkeypatch.setattr("lrx_cli.cache.time.time", lambda: 9_999_999_999.0)
assert cache_db.get_credential("spotify") == {"token": "forever"}
def test_credential_set_overwrites_existing(cache_db: CacheEngine) -> None:
cache_db.set_credential("spotify", {"token": "old"})
cache_db.set_credential("spotify", {"token": "new"})
assert cache_db.get_credential("spotify") == {"token": "new"}
def test_query_track_and_stats_return_expected_aggregates(
cache_db: CacheEngine,
) -> None:
cache_db.set(
_track(artist="A", title="T", album="AL"),
"s1",
_result(CacheStatus.SUCCESS_SYNCED, "x", "s1"),
)
cache_db.set(
_track(artist="A", title="T", album="AL"),
"s2",
_result(CacheStatus.SUCCESS_UNSYNCED, "y", "s2"),
)
rows = cache_db.query_track(_track(artist="A", title="T", album="AL"))
stats = cache_db.stats()
assert len(rows) == 2
assert stats["total"] == 2
assert stats["active"] == 2
assert stats["expired"] == 0
assert stats["by_status"][CacheStatus.SUCCESS_SYNCED.value] == 1
assert stats["by_status"][CacheStatus.SUCCESS_UNSYNCED.value] == 1
assert stats["by_slot"][SLOT_SYNCED] == 1
assert stats["by_slot"][SLOT_UNSYNCED] == 1
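The first two tests above pin down the cache key format: trackid-bearing sources get `source:trackid`, URL-only local tracks get `source:url:<url>`, and a track with no usable metadata raises `ValueError`. The sketch below is reconstructed from those test expectations only; the metadata fallback format is an assumption, not the real `_generate_key`:

```python
def generate_key(track, source: str) -> str:
    """Reconstruction from test expectations, not the project's implementation."""
    if getattr(track, "trackid", None):
        return f"{source}:{track.trackid}"
    if getattr(track, "url", None):
        return f"{source}:url:{track.url}"
    meta = (track.artist, track.title, track.album, track.length)
    if not any(m is not None for m in meta):
        raise ValueError("track has no usable metadata for cache key")
    # hypothetical metadata-based fallback, format not specified by the tests
    return f"{source}:" + "|".join(str(m) for m in meta)
```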
@@ -1,61 +0,0 @@
from __future__ import annotations
import pytest
from lrx_cli.config import AppConfig, CredentialConfig, WatchConfig, load_config
def test_missing_file_returns_defaults(tmp_path):
assert load_config(tmp_path / "nonexistent.toml") == AppConfig()
def test_empty_file_returns_defaults(tmp_path):
p = tmp_path / "config.toml"
p.write_text("")
assert load_config(p) == AppConfig()
def test_partial_section_keeps_other_defaults(tmp_path):
p = tmp_path / "config.toml"
p.write_bytes(b"[watch]\ndebounce_ms = 200\n")
cfg = load_config(p)
assert cfg.watch.debounce_ms == 200
assert cfg.watch.calibration_interval_s == WatchConfig().calibration_interval_s
def test_credentials_roundtrip(tmp_path):
p = tmp_path / "config.toml"
p.write_bytes(
b"[credentials]\n"
b'spotify_sp_dc = "abc"\n'
b'qq_music_api_url = "http://localhost:3000"\n'
)
assert load_config(p).credentials == CredentialConfig(
spotify_sp_dc="abc", qq_music_api_url="http://localhost:3000"
)
def test_int_coerced_to_float(tmp_path):
p = tmp_path / "config.toml"
p.write_bytes(b"[general]\nhttp_timeout = 5\n")
assert load_config(p).general.http_timeout == 5.0
def test_unknown_key_raises(tmp_path):
p = tmp_path / "config.toml"
p.write_bytes(b"[general]\ntypo_key = 1\n")
with pytest.raises(ValueError, match="Unknown config keys"):
load_config(p)
def test_wrong_type_raises(tmp_path):
p = tmp_path / "config.toml"
p.write_bytes(b"[watch]\ndebounce_ms = true\n")
with pytest.raises(ValueError, match="expected int"):
load_config(p)
def test_app_config_is_frozen():
cfg = AppConfig()
with pytest.raises(Exception):
cfg.general = None # type: ignore[misc]
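The config tests above imply a loader that fills missing sections with frozen defaults, rejects unknown keys ("Unknown config keys"), and rejects wrong types ("expected int", including `bool`, which TOML parses as a distinct type but Python treats as an `int` subclass). A minimal sketch of that pattern for one section; the default values here are illustrative, not the project's:

```python
from dataclasses import dataclass, fields, replace


@dataclass(frozen=True)
class WatchConfig:
    debounce_ms: int = 100          # illustrative default
    calibration_interval_s: int = 10  # illustrative default


def load_watch_section(raw: dict) -> WatchConfig:
    allowed = {f.name for f in fields(WatchConfig)}
    unknown = set(raw) - allowed
    if unknown:
        raise ValueError(f"Unknown config keys: {sorted(unknown)}")
    for key, value in raw.items():
        # bool is a subclass of int, so reject it explicitly for int fields
        if isinstance(value, bool) or not isinstance(value, int):
            raise ValueError(f"{key}: expected int, got {type(value).__name__}")
    return replace(WatchConfig(), **raw)
```

Partial sections work for free: keys absent from `raw` keep their dataclass defaults.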
@@ -1,544 +0,0 @@
from __future__ import annotations
from dataclasses import replace
import asyncio
import json
from pathlib import Path
from typing import Callable
import httpx
import pytest
from lrx_cli.authenticators import create_authenticators
from lrx_cli.cache import CacheEngine
from lrx_cli.config import AppConfig, load_config
from lrx_cli.core import LrcManager
from lrx_cli.fetchers import FetcherMethodType, create_fetchers
from lrx_cli.fetchers.lrclib import LrclibFetcher, _parse_lrclib_response
from lrx_cli.fetchers.lrclib_search import (
LrclibSearchFetcher,
_parse_lrclib_search_results,
)
from lrx_cli.fetchers.musixmatch import (
MusixmatchFetcher,
MusixmatchSpotifyFetcher,
_parse_mxm_macro,
_parse_mxm_search,
)
from lrx_cli.fetchers.netease import (
NeteaseFetcher,
_parse_netease_lyrics,
_parse_netease_search,
)
from lrx_cli.fetchers.qqmusic import QQMusicFetcher, _parse_qq_lyrics, _parse_qq_search
from lrx_cli.fetchers.spotify import SpotifyFetcher, _parse_spotify_lyrics
from lrx_cli.lrc import LRCData
from lrx_cli.models import CacheStatus, TrackMeta
from tests.marks import requires_musixmatch_token, requires_qq_music, requires_spotify
SAMPLE_TRACK = TrackMeta(
title="One Last Kiss",
artist="Hikaru Utada",
album="One Last Kiss",
length=252026,
trackid="5RhWszHMSKzb7KiXk4Ae0M",
url="https://open.spotify.com/track/5RhWszHMSKzb7KiXk4Ae0M",
)
SAMPLE_TRACK_ALBUM_MODIFIED = replace(SAMPLE_TRACK, album="BADモード")
SAMPLE_TRACK_ARTIST_MODIFIED = replace(SAMPLE_TRACK, artist="宇多田ヒカル")
SAMPLE_TRACK_ALBUM_ARTIST_MODIFIED = replace(
SAMPLE_TRACK,
artist="宇多田ヒカル",
album="BADモード",
)
_FIXTURE_DIR = Path(__file__).parent / "fixtures" / "fetchers"
_NETWORK_TIMEOUT = 20.0
ParserFunc = Callable[[dict], LRCData | None]
@pytest.fixture
def lrc_manager(tmp_path: Path) -> LrcManager:
return LrcManager(str(tmp_path / "cache.db"), AppConfig())
@pytest.fixture
def cred_lrc_manager(tmp_path: Path) -> LrcManager:
return LrcManager(str(tmp_path / "cache.db"), load_config())
@pytest.fixture
def fetcher_runtime_anonymous(tmp_path: Path):
cfg = AppConfig()
cache = CacheEngine(str(tmp_path / "network-anon-cache.db"))
authenticators = create_authenticators(cache, cfg)
fetchers = create_fetchers(cache, authenticators, cfg)
return fetchers, cfg
@pytest.fixture
def fetcher_runtime_credentialed(tmp_path: Path):
cfg = load_config()
cache = CacheEngine(str(tmp_path / "network-cred-cache.db"))
authenticators = create_authenticators(cache, cfg)
fetchers = create_fetchers(cache, authenticators, cfg)
return fetchers, cfg
def _load_fixture(name: str) -> dict | list:
return json.loads((_FIXTURE_DIR / name).read_text(encoding="utf-8"))
def _assert_shape(actual: object, fixture: object) -> None:
"""Assert actual payload contains fixture structure recursively.
- dict: all fixture keys must exist with matching nested shape
- list: actual must contain at least fixture length and each indexed shape must match
- scalar: runtime type must match fixture type
"""
if isinstance(fixture, dict):
assert isinstance(actual, dict)
for key, value in fixture.items():
assert key in actual
_assert_shape(actual[key], value)
return
if isinstance(fixture, list):
assert isinstance(actual, list)
assert len(actual) >= len(fixture)
for idx, value in enumerate(fixture):
_assert_shape(actual[idx], value)
return
if fixture is None:
return
assert isinstance(actual, type(fixture))
def _fetch_with_method(
lrc_manager: LrcManager,
method: FetcherMethodType,
*,
bypass_cache: bool = False,
):
return lrc_manager.fetch_for_track(
SAMPLE_TRACK,
force_method=method,
bypass_cache=bypass_cache,
)
# Cache-search fetcher behavior
def test_cache_search_no_cache_fails(lrc_manager: LrcManager):
result = _fetch_with_method(lrc_manager, "cache-search", bypass_cache=False)
assert result is None
def test_cache_search_exact_hit(lrc_manager: LrcManager):
expected = "[00:00.01]lyrics"
lrc_manager.manual_insert(SAMPLE_TRACK, expected)
result = lrc_manager.fetch_for_track(
SAMPLE_TRACK,
force_method="cache-search",
bypass_cache=False,
)
assert result is not None
assert result.lyrics is not None
assert result.lyrics.to_text() == expected
@pytest.mark.parametrize(
"query_track",
[
pytest.param(SAMPLE_TRACK_ARTIST_MODIFIED, id="artist_modified"),
pytest.param(SAMPLE_TRACK_ALBUM_MODIFIED, id="album_modified"),
pytest.param(SAMPLE_TRACK_ALBUM_ARTIST_MODIFIED, id="album_artist_modified"),
],
)
def test_cache_search_fuzzy_hit(lrc_manager: LrcManager, query_track: TrackMeta):
expected = "[00:00.01]lyrics"
lrc_manager.manual_insert(SAMPLE_TRACK, expected)
result = lrc_manager.fetch_for_track(
query_track,
force_method="cache-search",
bypass_cache=False,
)
assert result is not None
assert result.lyrics is not None
assert result.lyrics.to_text() == expected
def test_cache_search_prefer_better_match(lrc_manager: LrcManager):
lrc_manager.manual_insert(
SAMPLE_TRACK_ARTIST_MODIFIED,
"[00:00.01]artist modified",
)
lrc_manager.manual_insert(
SAMPLE_TRACK_ALBUM_ARTIST_MODIFIED,
"[00:00.01]artist+album modified",
)
result = lrc_manager.fetch_for_track(
SAMPLE_TRACK,
force_method="cache-search",
bypass_cache=False,
)
assert result is not None
assert result.lyrics is not None
assert result.lyrics.to_text() == "[00:00.01]artist modified"
# API response format for every fetcher
@pytest.mark.network
def test_api_lrclib_response_shape(fetcher_runtime_anonymous):
fetchers, _cfg = fetcher_runtime_anonymous
fetcher = fetchers["lrclib"]
assert isinstance(fetcher, LrclibFetcher)
async def _run() -> dict:
async with httpx.AsyncClient(timeout=_NETWORK_TIMEOUT) as client:
response = await fetcher._api_get(client, SAMPLE_TRACK)
assert response.status_code == 200
payload = response.json()
assert isinstance(payload, dict)
return payload
payload = asyncio.run(_run())
_assert_shape(payload, _load_fixture("lrclib_response.json"))
@pytest.mark.network
def test_api_lrclib_search_response_shape(fetcher_runtime_anonymous):
fetchers, _cfg = fetcher_runtime_anonymous
fetcher = fetchers["lrclib-search"]
assert isinstance(fetcher, LrclibSearchFetcher)
async def _run() -> list[dict]:
async with httpx.AsyncClient(timeout=_NETWORK_TIMEOUT) as client:
items, had_error = await fetcher._api_candidates(client, SAMPLE_TRACK)
assert had_error is False
return items
payload = asyncio.run(_run())
_assert_shape(payload, _load_fixture("lrclib_search_results.json"))
@pytest.mark.network
def test_api_netease_response_shape(fetcher_runtime_anonymous):
fetchers, _cfg = fetcher_runtime_anonymous
fetcher = fetchers["netease"]
assert isinstance(fetcher, NeteaseFetcher)
async def _run() -> tuple[dict, dict]:
async with httpx.AsyncClient(timeout=_NETWORK_TIMEOUT) as client:
search = await fetcher._api_search_track(client, SAMPLE_TRACK, 5)
lyric = await fetcher._api_lyric_track(client, SAMPLE_TRACK, 5)
assert isinstance(search, dict)
assert isinstance(lyric, dict)
return search, lyric
search_payload, lyric_payload = asyncio.run(_run())
_assert_shape(search_payload, _load_fixture("netease_search.json"))
_assert_shape(lyric_payload, _load_fixture("netease_lyrics.json"))
@pytest.mark.network
@requires_spotify
def test_api_spotify_response_shape(fetcher_runtime_credentialed):
fetchers, _cfg = fetcher_runtime_credentialed
fetcher = fetchers["spotify"]
assert isinstance(fetcher, SpotifyFetcher)
async def _run() -> dict:
payload = await fetcher._api_lyrics(SAMPLE_TRACK)
assert isinstance(payload, dict)
return payload
payload = asyncio.run(_run())
_assert_shape(payload, _load_fixture("spotify_synced.json"))
@pytest.mark.network
@requires_qq_music
def test_api_qqmusic_response_shape(fetcher_runtime_credentialed):
fetchers, _cfg = fetcher_runtime_credentialed
fetcher = fetchers["qqmusic"]
assert isinstance(fetcher, QQMusicFetcher)
async def _run() -> tuple[dict, dict]:
search = await fetcher._api_search(SAMPLE_TRACK, 10)
lyric = await fetcher._api_lyric_track(SAMPLE_TRACK, 10)
assert isinstance(search, dict)
assert isinstance(lyric, dict)
return search, lyric
search_payload, lyric_payload = asyncio.run(_run())
_assert_shape(search_payload, _load_fixture("qq_search.json"))
_assert_shape(lyric_payload, _load_fixture("qq_lyrics.json"))
@pytest.mark.network
def test_api_musixmatch_anonymous_response_shape(fetcher_runtime_anonymous):
"""Anonymous musixmatch calls must share one cache/auth context in this test."""
fetchers, _cfg = fetcher_runtime_anonymous
search_fetcher = fetchers["musixmatch"]
spotify_fetcher = fetchers["musixmatch-spotify"]
assert isinstance(search_fetcher, MusixmatchFetcher)
assert isinstance(spotify_fetcher, MusixmatchSpotifyFetcher)
async def _run() -> tuple[dict, dict, dict]:
search = await search_fetcher._api_search_track(SAMPLE_TRACK)
macro_from_search = await search_fetcher._api_macro_track(SAMPLE_TRACK)
macro_from_spotify = await spotify_fetcher._api_macro_track(SAMPLE_TRACK)
assert isinstance(search, dict)
assert isinstance(macro_from_search, dict)
assert isinstance(macro_from_spotify, dict)
return search, macro_from_search, macro_from_spotify
search_payload, macro_payload, spotify_macro_payload = asyncio.run(_run())
_assert_shape(search_payload, _load_fixture("musixmatch_search.json"))
_assert_shape(macro_payload, _load_fixture("musixmatch_macro_richsync.json"))
_assert_shape(
spotify_macro_payload, _load_fixture("musixmatch_macro_richsync.json")
)
@pytest.mark.network
@requires_musixmatch_token
def test_api_musixmatch_token_response_shape(fetcher_runtime_credentialed):
fetchers, _cfg = fetcher_runtime_credentialed
search_fetcher = fetchers["musixmatch"]
spotify_fetcher = fetchers["musixmatch-spotify"]
assert isinstance(search_fetcher, MusixmatchFetcher)
assert isinstance(spotify_fetcher, MusixmatchSpotifyFetcher)
async def _run() -> tuple[dict, dict, dict]:
search = await search_fetcher._api_search_track(SAMPLE_TRACK)
macro_from_search = await search_fetcher._api_macro_track(SAMPLE_TRACK)
macro_from_spotify = await spotify_fetcher._api_macro_track(SAMPLE_TRACK)
assert isinstance(search, dict)
assert isinstance(macro_from_search, dict)
assert isinstance(macro_from_spotify, dict)
return search, macro_from_search, macro_from_spotify
search_payload, macro_payload, spotify_macro_payload = asyncio.run(_run())
_assert_shape(search_payload, _load_fixture("musixmatch_search.json"))
_assert_shape(macro_payload, _load_fixture("musixmatch_macro_richsync.json"))
_assert_shape(
spotify_macro_payload, _load_fixture("musixmatch_macro_richsync.json")
)
# Parse fixture JSON into real data structures
@pytest.mark.parametrize(
"fixture_name,parser,expected_status",
[
pytest.param(
"spotify_synced.json",
_parse_spotify_lyrics,
"SUCCESS_SYNCED",
id="spotify-synced",
),
pytest.param(
"spotify_unsynced.json",
_parse_spotify_lyrics,
"SUCCESS_UNSYNCED",
id="spotify-unsynced",
),
],
)
def test_parse_spotify_fixture(
fixture_name: str,
parser: ParserFunc,
expected_status: str,
):
payload = _load_fixture(fixture_name)
assert isinstance(payload, dict)
parsed = parser(payload)
assert parsed is not None
assert parsed.detect_sync_status().value == expected_status
if expected_status == "SUCCESS_SYNCED":
assert parsed.to_text() == "[00:01.00]hello\n[00:02.50]world"
else:
assert parsed.to_text() == "[00:00.00]plain one\n[00:00.00]plain two"
def test_parse_qq_search_fixture() -> None:
payload = _load_fixture("qq_search.json")
assert isinstance(payload, dict)
parsed = _parse_qq_search(payload)
assert len(parsed) == 2
assert parsed[0].item == "mid1"
assert parsed[0].title == "My Love"
assert parsed[0].artist == "Westlife"
assert parsed[0].duration_ms == 232000.0
assert parsed[0].album == "Coast To Coast"
assert parsed[1].item == "mid2"
assert parsed[1].title == "My Love (Album Version)"
assert parsed[1].artist == "Little Texas"
assert parsed[1].duration_ms == 248000.0
assert parsed[1].album == "Greatest Hits"
def test_parse_qq_lyrics_fixture() -> None:
payload = _load_fixture("qq_lyrics.json")
assert isinstance(payload, dict)
parsed = _parse_qq_lyrics(payload)
assert parsed is not None
assert len(parsed) == 2
assert parsed.detect_sync_status() == CacheStatus.SUCCESS_SYNCED
def test_parse_lrclib_response_fixture() -> None:
payload = _load_fixture("lrclib_response.json")
assert isinstance(payload, dict)
parsed = _parse_lrclib_response(payload)
assert parsed.synced is not None and parsed.synced.lyrics is not None
assert parsed.unsynced is not None and parsed.unsynced.lyrics is not None
assert parsed.synced.status == CacheStatus.SUCCESS_SYNCED
assert parsed.unsynced.status == CacheStatus.SUCCESS_UNSYNCED
assert parsed.synced.lyrics.to_text() == "[00:01.00]s1\n[00:02.00]s2"
assert parsed.unsynced.lyrics.to_text() == "[00:00.00]p1\n[00:00.00]p2"
def test_parse_lrclib_search_results_fixture() -> None:
payload = _load_fixture("lrclib_search_results.json")
assert isinstance(payload, list)
parsed = _parse_lrclib_search_results(payload)
assert len(parsed) == 2
assert parsed[0].item.get("id") == 1
assert parsed[0].duration_ms == 231847.0
assert parsed[0].is_synced is True
assert parsed[0].title == "My Love"
assert parsed[0].artist == "Westlife"
assert parsed[0].album == "Coast To Coast"
assert parsed[1].item.get("id") == 2
assert parsed[1].duration_ms == 262000.0
assert parsed[1].is_synced is False
assert parsed[1].title == "My Love (Live)"
assert parsed[1].artist == "Westlife"
assert parsed[1].album == "Live"
def test_parse_netease_search_fixture() -> None:
payload = _load_fixture("netease_search.json")
assert isinstance(payload, dict)
parsed = _parse_netease_search(payload)
assert len(parsed) == 2
assert parsed[0].item == 2080607
assert parsed[0].title == "My Love"
assert parsed[0].artist == "Westlife"
assert parsed[0].duration_ms == 231941.0
assert parsed[0].album == "Unbreakable"
assert parsed[1].item == 572412968
assert parsed[1].artist == "Westlife"
assert parsed[1].duration_ms == 231000.0
def test_parse_netease_lyrics_fixture() -> None:
payload = _load_fixture("netease_lyrics.json")
assert isinstance(payload, dict)
parsed = _parse_netease_lyrics(payload)
assert parsed is not None
assert len(parsed) == 2
assert parsed.detect_sync_status() == CacheStatus.SUCCESS_SYNCED
assert parsed.to_text() == "[00:01.00]line1\n[00:02.00]line2"
def test_parse_musixmatch_search_fixture() -> None:
payload = _load_fixture("musixmatch_search.json")
assert isinstance(payload, dict)
parsed = _parse_mxm_search(payload)
assert len(parsed) == 1
assert parsed[0].item == 123
assert parsed[0].is_synced is True
assert parsed[0].title == "My Love"
assert parsed[0].artist == "Westlife"
assert parsed[0].duration_ms == 232000.0
assert parsed[0].album == "Coast To Coast"
def test_parse_musixmatch_macro_fixture() -> None:
payload = _load_fixture("musixmatch_macro_richsync.json")
assert isinstance(payload, dict)
parsed = _parse_mxm_macro(payload)
assert parsed is not None
assert len(parsed) == 2
assert parsed.detect_sync_status() == CacheStatus.SUCCESS_SYNCED
def test_parse_musixmatch_macro_subtitle_fallback_fixture() -> None:
payload = _load_fixture("musixmatch_macro_subtitle.json")
assert isinstance(payload, dict)
parsed = _parse_mxm_macro(payload)
assert parsed is not None
assert len(parsed) == 2
assert parsed.detect_sync_status() == CacheStatus.SUCCESS_SYNCED
assert parsed.to_text() == "[00:01.10]hello\n[00:02.22]world"
# Empty / partial-error response handling
def test_parse_spotify_empty_or_invalid() -> None:
assert _parse_spotify_lyrics({}) is None
assert _parse_spotify_lyrics({"lyrics": {"lines": []}}) is None
def test_parse_qq_search_empty_or_error() -> None:
assert _parse_qq_search({}) == []
assert _parse_qq_search({"code": 1}) == []
assert _parse_qq_search({"code": 0, "data": {"list": []}}) == []
def test_parse_qq_lyrics_empty_or_error() -> None:
assert _parse_qq_lyrics({}) is None
assert _parse_qq_lyrics({"code": 1}) is None
assert _parse_qq_lyrics({"code": 0, "data": {"lyric": ""}}) is None
def test_parse_lrclib_response_empty_or_partial() -> None:
parsed = _parse_lrclib_response({})
assert parsed.synced is not None
assert parsed.unsynced is not None
assert parsed.synced.lyrics is None
assert parsed.unsynced.lyrics is None
parsed_partial = _parse_lrclib_response({"syncedLyrics": "[00:01.00]line"})
assert (
parsed_partial.synced is not None and parsed_partial.synced.lyrics is not None
)
assert parsed_partial.unsynced is not None
def test_parse_netease_empty_or_partial() -> None:
assert _parse_netease_search({}) == []
assert _parse_netease_search({"result": {"songs": []}}) == []
assert _parse_netease_lyrics({}) is None
assert _parse_netease_lyrics({"lrc": {"lyric": ""}}) is None
def test_parse_musixmatch_empty_or_partial() -> None:
assert _parse_mxm_search({}) == []
assert _parse_mxm_search({"message": {"body": {"track_list": []}}}) == []
assert _parse_mxm_macro({}) is None
assert _parse_mxm_macro({"message": {"body": []}}) is None
@@ -1,123 +0,0 @@
from __future__ import annotations
import asyncio
from pathlib import Path
from lrx_cli.config import AppConfig
from lrx_cli.enrichers.audio_tag import AudioTagEnricher
from lrx_cli.enrichers.file_name import FileNameEnricher
from lrx_cli.models import CacheStatus, TrackMeta
from lrx_cli.fetchers.local import LocalFetcher
_GENERAL = AppConfig().general
def _local_track(path: Path) -> TrackMeta:
return TrackMeta(url=f"file://{path}")
def test_local_fetcher_unavailable_for_non_local_track():
fetcher = LocalFetcher(_GENERAL)
assert not fetcher.is_available(TrackMeta(title="Song", artist="Artist"))
def test_local_fetcher_available_for_local_track(tmp_path):
fetcher = LocalFetcher(_GENERAL)
assert fetcher.is_available(_local_track(tmp_path / "song.flac"))
def test_local_fetcher_returns_empty_for_non_file_url():
fetcher = LocalFetcher(_GENERAL)
track = TrackMeta(url="https://example.com/song.mp3")
result = asyncio.run(fetcher.fetch(track))
assert result.synced is None and result.unsynced is None
def test_local_fetcher_reads_synced_sidecar(tmp_path):
audio = tmp_path / "song.flac"
lrc = audio.with_suffix(".lrc")
lrc.write_text("[00:01.00]Hello\n[00:03.00]World\n")
fetcher = LocalFetcher(_GENERAL)
result = asyncio.run(fetcher.fetch(_local_track(audio)))
assert result.synced is not None
assert result.synced.status == CacheStatus.SUCCESS_SYNCED
assert result.synced.source is not None
assert "sidecar" in result.synced.source
def test_local_fetcher_reads_unsynced_sidecar(tmp_path):
audio = tmp_path / "song.flac"
lrc = audio.with_suffix(".lrc")
lrc.write_text("Hello\nWorld\n")
fetcher = LocalFetcher(_GENERAL)
result = asyncio.run(fetcher.fetch(_local_track(audio)))
assert result.unsynced is not None
assert result.synced is None
def test_local_fetcher_empty_sidecar_ignored(tmp_path):
audio = tmp_path / "song.flac"
(audio.with_suffix(".lrc")).write_text(" ")
fetcher = LocalFetcher(_GENERAL)
result = asyncio.run(fetcher.fetch(_local_track(audio)))
assert result.synced is None and result.unsynced is None
def _enrich(path: str, **existing) -> dict | None:
enricher = FileNameEnricher()
track = TrackMeta(url=f"file://{path}", **existing)
return asyncio.run(enricher.enrich(track))
def test_filename_enricher_artist_title_split(tmp_path):
result = _enrich(str(tmp_path / "Utada Hikaru - First Love.flac"))
assert result == {
"artist": "Utada Hikaru",
"title": "First Love",
"album": tmp_path.name,
}
def test_filename_enricher_track_number_prefix(tmp_path):
# "01. Title" — no " - " separator, regex strips leading "01. "
result = _enrich(str(tmp_path / "01. First Love.flac"))
assert result and result.get("title") == "First Love"
assert "artist" not in result
def test_filename_enricher_title_only(tmp_path):
result = _enrich(str(tmp_path / "First Love.flac"))
assert result and result.get("title") == "First Love"
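The three tests above describe the filename heuristics: "Artist - Title" splits on the first " - ", a leading "NN. " track number is stripped, and in the split case the parent directory doubles as the album guess. A sketch with hypothetical names:

```python
import re
from pathlib import Path

_TRACK_NO = re.compile(r"^\d+\.\s+")

def guess_meta(path: str) -> dict:
    # Strip a leading track number, then try the "Artist - Title" split.
    stem = _TRACK_NO.sub("", Path(path).stem)
    if " - " in stem:
        artist, title = stem.split(" - ", 1)
        return {"artist": artist, "title": title, "album": Path(path).parent.name}
    return {"title": stem}
```

The real FileNameEnricher additionally refuses to overwrite fields already present on the track, as the next test checks.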
def test_filename_enricher_does_not_overwrite_existing_fields(tmp_path):
result = _enrich(
str(tmp_path / "Artist - Title.flac"),
artist="Existing Artist",
title="Existing Title",
)
assert result is None or ("artist" not in result and "title" not in result)
def test_filename_enricher_non_local_returns_none():
enricher = FileNameEnricher()
track = TrackMeta(title="Song", artist="Artist")
assert asyncio.run(enricher.enrich(track)) is None
def test_audio_tag_enricher_non_local_returns_none():
enricher = AudioTagEnricher()
track = TrackMeta(title="Song", artist="Artist")
assert asyncio.run(enricher.enrich(track)) is None
def test_audio_tag_enricher_missing_file_returns_none(tmp_path):
enricher = AudioTagEnricher()
track = _local_track(tmp_path / "nonexistent.flac")
assert asyncio.run(enricher.enrich(track)) is None
@@ -1,453 +0,0 @@
from __future__ import annotations
from lrx_cli.lrc import LRCData
from lrx_cli.models import CacheStatus
def _reformat(text: str) -> str:
return str(LRCData(text))
def test_time_tag_formats_are_normalized() -> None:
raw = "\n".join(
[
"[00:01]a",
"[00:02.3]b",
"[00:03.45]c",
"[00:04.678]d",
"[00:05:999]e",
]
)
normalized = _reformat(raw)
assert normalized == "\n".join(
[
"[00:01.00]a",
"[00:02.30]b",
"[00:03.45]c",
"[00:04.68]d",
"[00:05.99]e",
]
)
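A hypothetical sketch of the tag normalization this test exercises (the real LRCData parser is not shown in this diff): accept [mm:ss], one-to-three fractional digits, and the odd [mm:ss:fff] variant, then re-emit everything as [mm:ss.cc], rounding milliseconds to centiseconds and clamping at .99.

```python
import re

_TAG = re.compile(r"\[(\d+):(\d{2})(?:[.:](\d{1,3}))?\]")

def normalize_tag(tag: str) -> str:
    m = _TAG.fullmatch(tag)
    if m is None:
        return tag  # not a time tag: leave untouched
    minutes, seconds = int(m[1]), m[2]
    frac = (m[3] or "").ljust(3, "0")  # pad to milliseconds
    centis = min(99, round(int(frac) / 10))  # round, clamp overflow at .99
    return f"[{minutes:02d}:{seconds}.{centis:02d}]"
```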
def test_non_timed_lines_are_kept_as_lyrics() -> None:
raw = " plain line \n\n other line "
normalized = _reformat(raw)
assert normalized == "plain line\n\nother line"
def test_word_sync_tags_are_parsed_and_export_controlled() -> None:
raw = "[00:01.00]<00:01>he <00:01.50>llo\n[00:02.00]plain"
data = LRCData(raw)
assert data.to_text(include_word_sync=False) == "[00:01.00]he llo\n[00:02.00]plain"
assert (
data.to_text(include_word_sync=True)
== "[00:01.00]<00:01.00>he <00:01.50>llo\n[00:02.00]plain"
)
def test_midline_line_tags_are_kept_as_plain_text() -> None:
raw = "[00:01.00]Lyric [00:02.00]line"
normalized = _reformat(raw)
assert normalized == "[00:01.00]Lyric [00:02.00]line"
def test_space_between_line_tag_and_lyric_is_consumed() -> None:
raw = "[00:01.2] hello"
normalized = _reformat(raw)
assert normalized == "[00:01.20]hello"
def test_consecutive_line_sync_tags_with_spaces_are_parsed_as_one_line() -> None:
raw = "[00:01] [00:02.3] chorus"
data = LRCData(raw)
assert len(data.lines) == 1
assert str(data) == "[00:01.00][00:02.30]chorus"
assert data.to_plain() == "chorus\nchorus"
def test_non_leading_time_like_text_is_plain_lyric() -> None:
raw = "intro [00:01]line"
normalized = _reformat(raw)
assert normalized == "intro [00:01]line"
def test_is_synced_and_detect_sync_status_follow_non_zero_rule() -> None:
plain_text = "just some lyrics\nwithout tags"
unsynced_text = "[00:00.00]a\n[00:00.00]b"
synced_text = "[00:00.00]a\n[00:01.00]b"
assert LRCData(plain_text).is_synced() is False
assert LRCData(plain_text).detect_sync_status() is CacheStatus.SUCCESS_UNSYNCED
assert LRCData(unsynced_text).is_synced() is False
assert LRCData(unsynced_text).detect_sync_status() is CacheStatus.SUCCESS_UNSYNCED
assert LRCData(synced_text).is_synced() is True
assert LRCData(synced_text).detect_sync_status() is CacheStatus.SUCCESS_SYNCED
def test_normalize_unsynced_covers_documented_blank_and_tag_rules() -> None:
lyrics = "\n[00:12.34]first\nsecond\n\n[00:00.00]third"
normalized = str(LRCData(lyrics).normalize_unsynced())
assert normalized == "\n".join(
[
"[00:00.00]first",
"[00:00.00]second",
"[00:00.00]",
"[00:00.00]third",
]
)
def test_normalize_unsynced_preserves_doc_tags_and_middle_blanks() -> None:
text = "\n".join(["[ar:Artist]", "", "[00:03.00]line", "[ti:Song]", "", " tail "])
normalized = LRCData(text).normalize_unsynced()
assert normalized.tags == {"ar": "Artist", "ti": "Song"}
assert str(normalized) == "\n".join(
[
"[ar:Artist]",
"[00:00.00]line",
"[ti:Song]",
"[00:00.00]",
"[00:00.00]tail",
]
)
def test_normalize_unsynced_strips_word_sync_markup_from_lyric_text() -> None:
text = "[00:02.00]<00:01.00>he <00:01.50>llo"
normalized = str(LRCData(text).normalize_unsynced())
assert normalized == "[00:00.00]he llo"
def test_normalize_unsynced_result_is_always_unsynced() -> None:
text = "[00:05.00]a\n[00:10.00]b"
normalized = LRCData(text).normalize_unsynced()
assert normalized.is_synced() is False
assert normalized.detect_sync_status() is CacheStatus.SUCCESS_UNSYNCED
def test_normalize_moves_doc_tags_to_top_and_removes_offset_tag() -> None:
text = "\n".join(
[
"[00:02.00]b",
"[ar:Artist]",
"[offset:500]",
"[00:01.00]a",
"[ti:Song]",
]
)
normalized = LRCData(text).to_normalized_text()
assert normalized == "\n".join(
[
"[ar:Artist]",
"[ti:Song]",
"[00:01.50]a",
"[00:02.50]b",
]
)
def test_normalize_expands_multi_time_tags_and_sorts_lyrics() -> None:
text = "\n".join(
[
"[00:03.00]c",
"[00:02.00][00:01.00]x",
]
)
normalized = LRCData(text).to_normalized_text()
assert normalized == "\n".join(["[00:01.00]x", "[00:02.00]x", "[00:03.00]c"])
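Taken together with the offset test above, the time handling of to_normalized_text boils down to: apply the [offset:...] shift, expand multi-time lines into one line per tag, and stable-sort by timestamp. In sketch form (a hypothetical representation with timestamps in centiseconds, not the real LRCData internals):

```python
def normalize_times(lines, offset_cs: int = 0):
    # lines: [(list_of_timestamps_cs, text), ...]
    expanded = [(t + offset_cs, text) for times, text in lines for t in times]
    expanded.sort(key=lambda pair: pair[0])  # list.sort is stable: ties keep input order
    return expanded
```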
def test_normalize_preserves_input_order_for_equal_timestamps() -> None:
text = "\n".join(
[
"[00:00.00]first",
"[00:00.00]second",
"[00:00.00]third",
"[00:01.00]later",
]
)
normalized = LRCData(text).to_normalized_text()
assert normalized == "\n".join(
["[00:00.00]first", "[00:00.00]second", "[00:00.00]third", "[00:01.00]later"]
)
def test_normalize_converts_unsynced_lines_and_removes_word_sync_tags() -> None:
text = "\n".join(
[
"plain",
"<00:01.00>he <00:01.50>llo",
"[00:02.00]<00:02.20>world",
"",
]
)
normalized = LRCData(text).to_normalized_text()
assert normalized == "\n".join(
[
"[00:00.00]plain",
"[00:00.00]he llo",
"[00:02.00]world",
]
)
def test_to_normalized_text_is_separate_from_plain() -> None:
data = LRCData("[offset:500]\n[00:02.00]b\n[00:01.00]a")
assert data.to_plain() == "a\nb"
assert data.to_normalized_text() == "[00:01.50]a\n[00:02.50]b"
def test_to_text_default_forces_unsynced_tagging() -> None:
data = LRCData("line\nother")
assert data.to_text() == "[00:00.00]line\n[00:00.00]other"
def test_str_is_raw_serializer_while_to_text_converts_unsynced() -> None:
data = LRCData("line\nother")
assert str(data) == "line\nother"
assert data.to_text() == "[00:00.00]line\n[00:00.00]other"
def test_to_plain_duplicates_lines_for_multi_line_times() -> None:
text = "\n".join(
[
"[00:02.00][00:01.00]hello",
"[00:03.00]world",
"no-tag-line",
"[00:00.00]zero-only",
]
)
plain = LRCData(text).to_plain()
# In synced mode, lines with standard tags are kept (including [00:00.00]),
# lines without leading standard tags are ignored, and output is sorted by tag timestamp.
assert plain == "\n".join(["zero-only", "hello", "hello", "world"])
def test_to_plain_sorts_lines_by_timestamp_across_lines() -> None:
text = "\n".join(
[
"[00:05.00]late",
"[00:01.00]early",
"[00:03.00]middle",
]
)
plain = LRCData(text).to_plain()
assert plain == "\n".join(["early", "middle", "late"])
def test_to_plain_preserves_input_order_for_equal_timestamps() -> None:
text = "\n".join(
[
"[00:00.00]first",
"[00:00.00]second",
"[00:00.00]third",
"[00:01.00]later",
]
)
plain = LRCData(text).to_plain()
assert plain == "\n".join(["first", "second", "third", "later"])
def test_to_plain_deduplicate_collapses_only_consecutive_equals() -> None:
text = "\n".join(
[
"[00:01.00][00:02.00]hello",
"[00:03.00]hello",
"[00:04.00]",
"[00:05.00]",
"[00:06.00]world",
"[00:07.00]hello",
]
)
plain = LRCData(text).to_plain(deduplicate=True)
assert plain == "\n".join(["hello", "", "world", "hello"])
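deduplicate=True collapses only consecutive equal lines, which is exactly what itertools.groupby gives for free. A one-line sketch:

```python
from itertools import groupby

def dedupe_consecutive(lines: list[str]) -> list[str]:
    # groupby merges runs of equal adjacent items; keep one key per run.
    return [line for line, _run in groupby(lines)]
```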
def test_to_plain_excludes_doc_tags_and_untagged_lines_in_unsynced_mode() -> None:
text = "\n".join(["[ar:Artist]", "[00:00.00]only-zero", "plain line"])
plain = LRCData(text).to_plain()
assert plain == "only-zero\nplain line"
def test_to_plain_outer_blanks_stripped_and_untagged_lines_excluded_in_synced_mode() -> (
None
):
text = "\n\n[00:01.00]line1\n\n[00:01.00]\n[00:02.00]line2\nline3\n \n"
plain = LRCData(text).to_plain()
assert plain == "line1\n\nline2"
def test_reformat_pipeline_trims_outer_blanks_and_preserves_inner_blanks() -> None:
text = "\n\n[00:01]a\n\n[00:02]b\n\n"
normalized = str(LRCData(text))
assert normalized == "[00:01.00]a\n\n[00:02.00]b"
def test_single_doc_tag_line_is_preserved_and_registered() -> None:
data = LRCData("[ar:Artist]\n[00:01.00]line")
assert data.tags == {"ar": "Artist"}
assert len(data.lines) == 2
assert str(data) == "[ar:Artist]\n[00:01.00]line"
assert data.to_plain() == "line"
def test_multiple_doc_tags_on_one_line_are_plain_lyrics() -> None:
data = LRCData("[ar:Artist][ti:Song]")
assert data.tags == {}
assert len(data.lines) == 1
assert data.lines[0].text == "[ar:Artist][ti:Song]"
def test_doc_tag_after_lyrics_is_still_recognized_as_doc_tag() -> None:
data = LRCData("[00:01.00]line\n[ar:Artist]")
assert data.tags == {"ar": "Artist"}
assert len(data.lines) == 2
assert str(data) == "[00:01.00]line\n[ar:Artist]"
assert data.to_plain() == "line"
def test_unknown_lines_before_lyrics_are_preserved_and_do_not_start_lyrics() -> None:
data = LRCData("comment line\n[ar:Artist]\n[00:01.00]line")
assert data.tags == {"ar": "Artist"}
assert len(data.lines) == 3
assert str(data) == "comment line\n[ar:Artist]\n[00:01.00]line"
assert data.to_plain() == "line"
def test_to_plain_excludes_doc_tags_but_keeps_lyrics() -> None:
data = LRCData("[ar:Artist]\n[00:01.00]line\n[ti:Song]\nplain")
assert data.to_plain() == "line"
def test_non_space_between_line_tags_stops_tag_parsing() -> None:
data = LRCData("[00:01.00]x[00:02.00]tail")
assert len(data.lines) == 1
assert str(data) == "[00:01.00]x[00:02.00]tail"
assert data.to_plain() == "x[00:02.00]tail"
def test_line_only_time_tag_is_valid_empty_lyric() -> None:
data = LRCData("[00:01.00]")
assert len(data.lines) == 1
assert str(data) == "[00:01.00]"
assert data.to_plain() == ""
def test_word_sync_markup_only_changes_output_when_enabled() -> None:
a = LRCData("[00:01.00]<00:00.50>lyric")
b = LRCData("[00:01.00]lyric")
assert a.to_text(include_word_sync=False) == "[00:01.00]lyric"
assert b.to_text(include_word_sync=False) == "[00:01.00]lyric"
assert a.to_text(include_word_sync=True) == "[00:01.00]<00:00.50>lyric"
assert b.to_text(include_word_sync=True) == "[00:01.00]lyric"
def test_str_preserves_word_sync_markup() -> None:
data = LRCData("[00:01.00]<00:00.50>lyric")
assert str(data) == "[00:01.00]<00:00.50>lyric"
def test_str_preserves_offset_tag_and_does_not_apply_it() -> None:
data = LRCData("[offset:500]\n[00:01.00]a")
assert str(data) == "[offset:500]\n[00:01.00]a"
assert data.to_normalized_text() == "[00:01.50]a"
def test_str_preserves_doc_tag_order_and_duplicates_exactly() -> None:
data = LRCData("[ar:First]\n[ti:Song]\n[ar:Second]\n[00:01.00]line")
assert str(data) == "[ar:First]\n[ti:Song]\n[ar:Second]\n[00:01.00]line"
def test_str_does_not_expand_or_sort_multi_time_lines() -> None:
data = LRCData("[00:03.00]c\n[00:02.00][00:01.00]x")
assert str(data) == "[00:03.00]c\n[00:02.00][00:01.00]x"
assert data.to_normalized_text() == "[00:01.00]x\n[00:02.00]x\n[00:03.00]c"
def test_str_preserves_plain_text_lines_without_injecting_time_tags() -> None:
data = LRCData("plain line\n[ar:Artist]\nother line")
assert str(data) == "plain line\n[ar:Artist]\nother line"
assert data.to_text() == "[00:00.00]plain line\n[ar:Artist]\n[00:00.00]other line"
def test_word_sync_line_with_empty_tail_keeps_word_tag_only_when_enabled() -> None:
data = LRCData("[00:01.00]<00:02.00>")
assert data.to_text(include_word_sync=False) == "[00:01.00]"
assert data.to_text(include_word_sync=True) == "[00:01.00]<00:02.00>"
def test_to_plain_for_doc_only_text_is_empty() -> None:
data = LRCData("[ar:Artist]\n[ti:Song]")
assert data.to_plain() == ""
def test_duplicate_doc_tag_key_last_value_wins_but_lines_are_kept() -> None:
data = LRCData("[ar:First]\n[ar:Second]\n[00:01.00]line")
assert data.tags == {"ar": "Second"}
assert len(data.lines) == 3
assert str(data).startswith("[ar:First]\n[ar:Second]\n")
@@ -1,19 +0,0 @@
from __future__ import annotations
from lrx_cli.normalize import normalize_for_match, normalize_artist
def test_normalize_for_match_covers_nfkc_punct_feat_and_whitespace() -> None:
text = " ＴＥＳＴ! feat. SOMEONE "
normalized = normalize_for_match(text)
assert normalized == "test"
def test_normalize_artist_splits_separators_and_sorts_parts() -> None:
artist = "B / A feat. C; D vs. E × F 、 G"
normalized = normalize_artist(artist)
assert normalized == "a\0b\0d\0e\0f\0g"
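A hypothetical reimplementation consistent with that expectation: split on the separators, drop any "feat. …" clause inside each part, then sort (names and the separator set here are illustrative guesses, not the real normalize_artist):

```python
import re

_SPLIT = re.compile(r"\s*(?:/|;|、|×|\bvs\.)\s*", re.IGNORECASE)
_FEAT = re.compile(r"\bfeat\..*$", re.IGNORECASE)

def split_and_sort_artists(raw: str) -> str:
    # NUL-join the sorted, lowercased artist parts so the result is a
    # stable, order-insensitive key.
    parts = (_FEAT.sub("", part).strip().lower() for part in _SPLIT.split(raw))
    return "\0".join(sorted(p for p in parts if p))
```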
@@ -1,329 +0,0 @@
from __future__ import annotations
import asyncio
from unittest.mock import patch
from lrx_cli.config import HIGH_CONFIDENCE
from lrx_cli.cache import SLOT_UNSYNCED
from lrx_cli.core import LrcManager
from lrx_cli.fetchers.base import BaseFetcher, FetchResult
from lrx_cli.lrc import LRCData
from lrx_cli.models import CacheStatus, LyricResult, TrackMeta
# Helpers
def _track(**kwargs) -> TrackMeta:
defaults = dict(artist="Artist", title="Song", album="Album", length=180000)
defaults.update(kwargs)
return TrackMeta(**defaults) # type: ignore
def _synced(source: str, confidence: float = HIGH_CONFIDENCE) -> LyricResult:
return LyricResult(
status=CacheStatus.SUCCESS_SYNCED,
lyrics=LRCData("[00:01.00]lyrics"),
source=source,
confidence=confidence,
)
def _unsynced(source: str, confidence: float = 60.0) -> LyricResult:
return LyricResult(
status=CacheStatus.SUCCESS_UNSYNCED,
lyrics=LRCData("lyrics"),
source=source,
confidence=confidence,
)
def _not_found() -> LyricResult:
return LyricResult(status=CacheStatus.NOT_FOUND)
def _fr(
synced: LyricResult | None = None,
unsynced: LyricResult | None = None,
) -> FetchResult:
return FetchResult(synced=synced, unsynced=unsynced)
class MockFetcher(BaseFetcher):
def __init__(self, name: str, result: FetchResult, delay: float = 0.0):
self._name = name
self._result = result
self._delay = delay
self.called = False
self.completed = False
@property
def source_name(self) -> str:
return self._name
def is_available(self, track: TrackMeta) -> bool:
return True
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
self.called = True
try:
if self._delay:
await asyncio.sleep(self._delay)
self.completed = True
return self._result
except asyncio.CancelledError:
raise
def make_manager(tmp_path) -> LrcManager:
return LrcManager(db_path=str(tmp_path / "cache.db"))
# Between-group termination
def test_unsynced_does_not_stop_next_group(tmp_path):
"""Unsynced result should not stop the pipeline — next group must still run."""
a = MockFetcher("a", _fr(unsynced=_unsynced("a")))
b = MockFetcher("b", _fr(synced=_synced("b")))
manager = make_manager(tmp_path)
with patch("lrx_cli.core.build_plan", return_value=[[a], [b]]):
result = manager.fetch_for_track(_track())
assert b.called
assert result is not None
assert result.source == "b"
def test_trusted_synced_stops_next_group(tmp_path):
"""Trusted synced from group1 must prevent group2 from running."""
a = MockFetcher("a", _fr(synced=_synced("a")))
b = MockFetcher("b", _fr(synced=_synced("b")))
manager = make_manager(tmp_path)
with patch("lrx_cli.core.build_plan", return_value=[[a], [b]]):
result = manager.fetch_for_track(_track())
assert not b.called
assert result is not None
assert result.source == "a"
def test_negative_continues_next_group(tmp_path):
"""NOT_FOUND from group1 must cause group2 to be tried."""
a = MockFetcher("a", _fr(synced=_not_found(), unsynced=_not_found()))
b = MockFetcher("b", _fr(synced=_synced("b")))
manager = make_manager(tmp_path)
with patch("lrx_cli.core.build_plan", return_value=[[a], [b]]):
result = manager.fetch_for_track(_track())
assert a.called
assert b.called
assert result is not None
assert result.source == "b"
# Within-group behaviour
def test_trusted_synced_cancels_sibling(tmp_path):
"""When a fast fetcher returns trusted synced, the slow sibling must be cancelled.
If cancellation is broken this test will block for 10 seconds."""
fast = MockFetcher("fast", _fr(synced=_synced("fast")))
slow = MockFetcher("slow", _fr(synced=_synced("slow")), delay=10.0)
manager = make_manager(tmp_path)
with patch("lrx_cli.core.build_plan", return_value=[[fast, slow]]):
result = manager.fetch_for_track(_track())
assert fast.called
assert slow.called # task was started
assert not slow.completed # but cancelled before finishing
assert result is not None
assert result.source == "fast"
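A hypothetical sketch (not the real LrcManager code) of the within-group race this test exercises: run every fetcher concurrently, return as soon as a trusted result arrives, and cancel the still-running siblings. Non-trusted results are simply discarded here; the real pipeline keeps them as unsynced fallbacks.

```python
import asyncio

async def race(coros, is_trusted):
    tasks = [asyncio.ensure_future(c) for c in coros]
    try:
        for fut in asyncio.as_completed(tasks):
            result = await fut
            if is_trusted(result):
                return result
        return None  # nothing trusted in this group
    finally:
        for task in tasks:
            task.cancel()  # no-op for already-finished tasks
```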
def test_allow_unsynced_true_picks_highest_confidence_unsynced(tmp_path):
"""When allow_unsynced=True and no trusted synced result, highest-confidence unsynced is returned."""
low = MockFetcher("low", _fr(unsynced=_unsynced("low", confidence=40.0)))
high = MockFetcher("high", _fr(unsynced=_unsynced("high", confidence=70.0)))
manager = make_manager(tmp_path)
with patch("lrx_cli.core.build_plan", return_value=[[low, high]]):
result = manager.fetch_for_track(_track(), allow_unsynced=True)
assert result is not None
assert result.source == "high"
def test_equal_confidence_prefers_synced_when_unsynced_allowed(tmp_path):
"""Tie on confidence should still prefer synced over unsynced."""
dual = MockFetcher(
"dual",
_fr(
synced=_synced("dual", confidence=70.0),
unsynced=_unsynced("dual", confidence=70.0),
),
)
manager = make_manager(tmp_path)
with patch("lrx_cli.core.build_plan", return_value=[[dual]]):
result = manager.fetch_for_track(_track(), allow_unsynced=True)
assert result is not None
assert result.status == CacheStatus.SUCCESS_SYNCED
def test_unsynced_only_returns_none_when_not_allowed(tmp_path):
"""When allow_unsynced=False, unsynced-only pipeline result must be rejected."""
only_unsynced = MockFetcher(
"u",
_fr(unsynced=_unsynced("u", confidence=95.0)),
)
manager = make_manager(tmp_path)
with patch("lrx_cli.core.build_plan", return_value=[[only_unsynced]]):
result = manager.fetch_for_track(_track(), allow_unsynced=False)
assert result is None
def test_allow_unsynced_flag_controls_return_type(tmp_path):
"""With both slots available, allow_unsynced controls whether unsynced can be returned."""
dual = MockFetcher(
"dual",
_fr(
synced=_synced("dual", confidence=55.0),
unsynced=_unsynced("dual", confidence=95.0),
),
)
manager = make_manager(tmp_path)
with patch("lrx_cli.core.build_plan", return_value=[[dual]]):
synced_only = manager.fetch_for_track(_track(), allow_unsynced=False)
assert synced_only is not None
assert synced_only.status == CacheStatus.SUCCESS_SYNCED
with patch("lrx_cli.core.build_plan", return_value=[[dual]]):
allow_unsynced = manager.fetch_for_track(_track(), allow_unsynced=True)
assert allow_unsynced is not None
assert allow_unsynced.status == CacheStatus.SUCCESS_UNSYNCED
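Across these four tests the slot-selection rule is: synced wins confidence ties, unsynced only wins on strictly higher confidence, and allow_unsynced=False discards the unsynced slot entirely. As a sketch, with slots modelled as (confidence, payload) tuples rather than real LyricResult objects:

```python
def select_result(synced, unsynced, allow_unsynced):
    # Each slot is a (confidence, payload) tuple or None.
    if not allow_unsynced or unsynced is None:
        return synced
    if synced is None or unsynced[0] > synced[0]:
        return unsynced
    return synced  # ties prefer synced
```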
# Cache interaction
def test_cache_negative_skips_fetch(tmp_path):
"""A cached NOT_FOUND entry must prevent the fetcher from being called."""
fetcher = MockFetcher("src", _fr(synced=_synced("src")))
manager = make_manager(tmp_path)
track = _track()
manager.cache.set(track, "src", _not_found(), ttl_seconds=3600)
with patch("lrx_cli.core.build_plan", return_value=[[fetcher]]):
result = manager.fetch_for_track(track)
assert not fetcher.called
assert result is None
def test_cache_trusted_synced_no_fetch(tmp_path):
"""A trusted synced cache hit must be returned without calling the fetcher."""
fetcher = MockFetcher("src", _fr())
manager = make_manager(tmp_path)
track = _track()
manager.cache.set(track, "src", _synced("src"), ttl_seconds=3600)
with patch("lrx_cli.core.build_plan", return_value=[[fetcher]]):
result = manager.fetch_for_track(track)
assert not fetcher.called
assert result is not None
assert result.status == CacheStatus.SUCCESS_SYNCED
def test_cached_slots_support_strategy_switch_without_refetch(
tmp_path,
):
"""When both slots are cached, strategy switch should reuse cache without re-fetch."""
fetcher = MockFetcher(
"src",
_fr(
synced=_synced("src", confidence=85.0),
unsynced=_unsynced("src", confidence=95.0),
),
)
manager = make_manager(tmp_path)
track = _track()
# First request: permissive strategy, unsynced wins and is cached.
with patch("lrx_cli.core.build_plan", return_value=[[fetcher]]):
first = manager.fetch_for_track(track, allow_unsynced=True)
assert first is not None
assert first.status == CacheStatus.SUCCESS_UNSYNCED
fetcher.called = False
# Second request: stricter strategy should use synced cache slot directly.
with patch("lrx_cli.core.build_plan", return_value=[[fetcher]]):
second = manager.fetch_for_track(track, allow_unsynced=False)
assert not fetcher.called
assert second is not None
assert second.status == CacheStatus.SUCCESS_SYNCED
def test_unsynced_cache_only_still_fetches_when_unsynced_disallowed(tmp_path):
"""If only unsynced cache slot exists, allow_unsynced=False must still fetch synced."""
fetcher = MockFetcher("src", _fr(synced=_synced("src", confidence=88.0)))
manager = make_manager(tmp_path)
track = _track()
manager.cache.set(
track,
"src",
_unsynced("src", confidence=95.0),
ttl_seconds=3600,
positive_kind=SLOT_UNSYNCED,
)
with patch("lrx_cli.core.build_plan", return_value=[[fetcher]]):
result = manager.fetch_for_track(track, allow_unsynced=False)
assert fetcher.called
assert result is not None
assert result.status == CacheStatus.SUCCESS_SYNCED
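The slot behaviour exercised by the cache tests above can be summarised with a small sketch. This is a hypothetical helper (names are illustrative, not the real manager API), and it ignores confidence ranking and TTLs:

```python
def resolve_from_slots(slots, allow_unsynced, fetch_synced):
    # slots: dict with optional "synced"/"unsynced" cached entries.
    # A strict request (allow_unsynced=False) only accepts the synced slot;
    # when just the unsynced slot is cached, a fresh fetch is still required.
    if not allow_unsynced:
        if "synced" in slots:
            return slots["synced"], False  # cache hit, no fetch
        return fetch_synced(), True        # must fetch a synced result
    # Permissive: any cached slot can satisfy the request.
    if "unsynced" in slots:
        return slots["unsynced"], False
    if "synced" in slots:
        return slots["synced"], False
    return fetch_synced(), True
```

With both slots cached, a strategy switch between requests reuses the cache without re-fetching, matching the tests above.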
# manual_insert
def test_manual_insert_synced_stored_with_correct_status(tmp_path):
manager = make_manager(tmp_path)
manager.manual_insert(_track(), "[00:01.00]Hello\n[00:03.00]World\n")
rows = manager.cache.query_track(_track())
assert any(r["status"] == CacheStatus.SUCCESS_SYNCED.value for r in rows)
def test_manual_insert_unsynced_stored_with_correct_status(tmp_path):
manager = make_manager(tmp_path)
manager.manual_insert(_track(), "Hello\nWorld\n")
rows = manager.cache.query_track(_track())
assert any(r["status"] == CacheStatus.SUCCESS_UNSYNCED.value for r in rows)
def test_manual_insert_source_and_ttl(tmp_path):
manager = make_manager(tmp_path)
manager.manual_insert(_track(), "[00:01.00]line\n")
rows = manager.cache.query_track(_track())
assert all(r["source"] == "manual" for r in rows)
assert all(r["expires_at"] is None for r in rows)
def test_manual_insert_overwrites_previous_entry(tmp_path):
manager = make_manager(tmp_path)
track = _track()
manager.manual_insert(track, "[00:01.00]old\n")
manager.manual_insert(track, "[00:01.00]new\n")
best = manager.cache.get_best(track, ["manual"])
assert best is not None
assert str(best.lyrics) == "[00:01.00]new"
def test_manual_insert_is_returned_by_fetch(tmp_path):
manager = make_manager(tmp_path)
track = _track()
manager.manual_insert(track, "[00:01.00]cached\n")
result = manager.fetch_for_track(track)
assert result is not None
assert result.lyrics is not None
assert str(result.lyrics) == "[00:01.00]cached"
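The manual-insert tests imply that lyrics text is classified as synced or unsynced by the presence of LRC time tags. A minimal sketch of such a heuristic (`looks_synced` is a hypothetical name, not the project's actual function):

```python
import re

# Matches an LRC time tag such as [00:01.00] at the start of a line.
LRC_TAG = re.compile(r"^\[\d{2}:\d{2}(?:\.\d{2,3})?\]")

def looks_synced(text: str) -> bool:
    # Treat the text as synced when any line begins with an LRC time tag.
    return any(LRC_TAG.match(line) for line in text.splitlines())
```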
@@ -1,512 +0,0 @@
from __future__ import annotations
from lrx_cli.fetchers.selection import (
SearchCandidate,
select_best,
select_ranked,
_score_candidate,
_text_similarity,
MIN_CONFIDENCE,
)
def test_text_similarity_exact() -> None:
assert _text_similarity("my love", "my love") == 1.0
def test_text_similarity_empty() -> None:
assert _text_similarity("", "anything") == 0.0
assert _text_similarity("anything", "") == 0.0
def test_text_similarity_no_overlap() -> None:
assert _text_similarity("hello", "world") == 0.0
def test_text_similarity_containment() -> None:
# "my love" is contained in "my love album version"
score = _text_similarity("my love", "my love album version")
assert 0.0 < score < 1.0
assert score == len("my love") / len("my love album version")
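The four `_text_similarity` tests above pin down exact match, empty input, no overlap, and containment. A plausible reconstruction consistent with those assertions (hypothetical; the real implementation may award partial-overlap credit the tests don't exercise):

```python
def text_similarity_sketch(a: str, b: str) -> float:
    # Exact match -> 1.0, empty input -> 0.0,
    # containment -> length ratio of shorter to longer, otherwise 0.0.
    if not a or not b:
        return 0.0
    if a == b:
        return 1.0
    shorter, longer = (a, b) if len(a) <= len(b) else (b, a)
    if shorter in longer:
        return len(shorter) / len(longer)
    return 0.0
```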
def test_score_perfect_match() -> None:
"""Exact metadata + close duration + synced = 100."""
c = SearchCandidate(
item="x",
duration_ms=232000.0,
is_synced=True,
title="My Love",
artist="Westlife",
album="Coast To Coast",
)
score = _score_candidate(c, "My Love", "Westlife", "Coast To Coast", 232000)
assert score == 100.0
def test_score_no_metadata_match() -> None:
"""Completely wrong metadata should score very low."""
c = SearchCandidate(
item="x",
duration_ms=192000.0,
is_synced=True,
title="Let My Love Be Your Pillow (Live)",
artist="Ronnie Milsap",
album="The Essential Ronnie Milsap",
)
score = _score_candidate(c, "My Love", "Westlife", "Coast To Coast", 232000)
assert score < MIN_CONFIDENCE
def test_score_missing_both_sides_neutral() -> None:
    """If neither ref nor candidate has any field, only the constant baseline applies."""
    c = SearchCandidate(item="x", is_synced=True)
    score = _score_candidate(c, None, None, None, None)
    # No comparable fields → metadata = 0, baseline = 10
    assert score == 10.0
def test_score_missing_one_side_gives_zero_for_field() -> None:
    """If ref has title but candidate doesn't, title gets 0 and its weight still counts."""
    c = SearchCandidate(item="x", title=None, is_synced=True)
    # Only title is in play (weight=40), candidate missing → raw=0, rescaled=0, + baseline=10
    score = _score_candidate(c, "My Love", None, None, None)
    assert score == 10.0
def test_synced_state_does_not_affect_score() -> None:
base = SearchCandidate(item="x", title="My Love", is_synced=False)
synced = SearchCandidate(item="x", title="My Love", is_synced=True)
diff = _score_candidate(synced, "My Love", None, None, None) - _score_candidate(
base, "My Love", None, None, None
)
assert diff == 0.0
def test_score_duration_linear_decay() -> None:
"""Duration score decays linearly; ratios between exact/half/edge are preserved."""
exact = SearchCandidate(item="x", duration_ms=232000.0)
score_exact = _score_candidate(exact, None, None, None, 232000)
half_tol = SearchCandidate(item="x", duration_ms=232000.0 + 1500.0)
score_half = _score_candidate(half_tol, None, None, None, 232000)
at_tol = SearchCandidate(item="x", duration_ms=232000.0 + 3000.0)
score_edge = _score_candidate(at_tol, None, None, None, 232000)
# Only duration is comparable → metadata spans 0-90, plus a constant baseline +10
# exact=100, half=55, edge=10
assert score_exact == 100.0
assert score_half == 55.0
assert score_edge == 10.0
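The decay test above fixes three points (exact=100, half=55, edge=10), which a linear model reproduces exactly. A sketch of the implied duration scoring when duration is the only comparable field — the comparable weights are rescaled to span the 0-90 metadata band on top of the +10 baseline (constants and helper name are reconstructions, not the module's API):

```python
DURATION_TOLERANCE_MS = 3000.0
BASELINE = 10.0
METADATA_SPAN = 90.0

def duration_only_score(candidate_ms: float, ref_ms: float) -> float:
    # Linear decay: full span at an exact match, zero at the tolerance edge.
    diff = abs(candidate_ms - ref_ms)
    closeness = max(0.0, 1.0 - diff / DURATION_TOLERANCE_MS)
    return BASELINE + METADATA_SPAN * closeness
```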
def test_duration_hard_filter_rejects_all_mismatched() -> None:
"""All candidates outside duration tolerance are filtered before scoring."""
candidates = [
SearchCandidate(
item="wrong", duration_ms=180000.0, title="My Love", artist="Westlife"
),
SearchCandidate(
item="also-wrong", duration_ms=300000.0, title="My Love", artist="Westlife"
),
]
best, _ = select_best(candidates, 232000, title="My Love", artist="Westlife")
assert best is None
def test_duration_neutral_when_ref_has_no_duration() -> None:
"""Candidate duration does not penalise when the reference has no duration."""
# Candidate A: title only (no duration)
c_no_dur = SearchCandidate(item="no-dur", title="My Love")
# Candidate B: same title + a duration (ref has none)
c_with_dur = SearchCandidate(item="with-dur", title="My Love", duration_ms=232000.0)
score_no_dur = _score_candidate(c_no_dur, "My Love", None, None, None)
score_with_dur = _score_candidate(c_with_dur, "My Love", None, None, None)
assert score_no_dur == score_with_dur
def test_score_case_insensitive_title() -> None:
c = SearchCandidate(item="x", title="my love")
s1 = _score_candidate(c, "My Love", None, None, None)
s2 = _score_candidate(c, "my love", None, None, None)
assert s1 == s2
def test_score_artist_normalization() -> None:
"""'Westlife feat. Someone' should still match 'Westlife'."""
c = SearchCandidate(item="x", artist="Westlife feat. Someone")
# normalize_artist strips feat. → both become "westlife"
score = _score_candidate(c, None, "Westlife", None, None)
    assert score >= 30.0  # at least the full artist weight (30); rescaling may raise it further
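The normalization the comment describes can be sketched as follows (`normalize_artist_sketch` is a hypothetical stand-in for the project's `normalize_artist`; the real function may strip more variants such as "ft." or "&"):

```python
import re

def normalize_artist_sketch(name: str) -> str:
    # Drop a trailing "feat. X" clause and lowercase, so
    # "Westlife feat. Someone" compares equal to "Westlife".
    return re.sub(r"\s+feat\..*$", "", name, flags=re.IGNORECASE).strip().lower()
```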
# Reference track: Westlife - My Love, album Coast To Coast, ~232s
_REF_TITLE = "My Love"
_REF_ARTIST = "Westlife"
_REF_ALBUM = "Coast To Coast"
_REF_LENGTH = 232000 # ms
def _lrclib_candidates() -> list[SearchCandidate[dict]]:
"""Fixtures from real LRCLIB search results."""
raw = [
{
"trackName": "My Love",
"artistName": "Westlife",
"albumName": "null",
"duration": 232.0,
"synced": True,
},
{
"trackName": "My Love",
"artistName": "Westlife",
"albumName": "null",
"duration": 180.0,
"synced": True,
},
{
"trackName": "My love",
"artistName": "Westlife",
"albumName": "moments",
"duration": 235.327,
"synced": True,
},
{
"trackName": "My Love",
"artistName": "Westlife",
"albumName": "Unbreakable",
"duration": 233.026,
"synced": True,
},
{
"trackName": "My Love",
"artistName": "Westlife",
"albumName": "Coast To Coast",
"duration": 231.847,
"synced": True,
},
{
"trackName": "Hello My Love",
"artistName": "Westlife",
"albumName": "Spectrum",
"duration": 216.0,
"synced": True,
},
{
"trackName": "My Love",
"artistName": "Westlife",
"albumName": "Hitzone 13",
"duration": 231.0,
"synced": True,
},
]
return [
SearchCandidate(
item=r,
duration_ms=r["duration"] * 1000,
is_synced=r["synced"],
title=r["trackName"],
artist=r["artistName"],
album=r["albumName"],
)
for r in raw
]
def _lrclib_noisy_candidates() -> list[SearchCandidate[dict]]:
"""Fixtures from LRCLIB title-only search — lots of wrong artists."""
raw = [
{
"trackName": "Let My Love Be Your Pillow (Live)",
"artistName": "Ronnie Milsap",
"albumName": "The Essential Ronnie Milsap",
"duration": 192.0,
"synced": True,
},
{
"trackName": "My Love",
"artistName": "Little Texas",
"albumName": "Big Time",
"duration": 248.0,
"synced": True,
},
{
"trackName": "My Love (Album Version)",
"artistName": "Little Texas",
"albumName": "Greatest Hits",
"duration": 248.0,
"synced": True,
},
{
"trackName": "My Love - Digitally Remastered '89",
"artistName": "Sonny James",
"albumName": "Capitol Collectors Series",
"duration": 169.0,
"synced": False,
},
{
"trackName": "My Love",
"artistName": "Westlife",
"albumName": "Coast To Coast",
"duration": 231.847,
"synced": True,
},
]
return [
SearchCandidate(
item=r,
duration_ms=r["duration"] * 1000,
is_synced=r["synced"],
title=r["trackName"],
artist=r["artistName"],
album=r["albumName"],
)
for r in raw
]
def _netease_candidates() -> list[SearchCandidate[int]]:
"""Fixtures from real Netease search results."""
raw = [
{
"id": 2080607,
"name": "My Love",
"artist": "Westlife",
"album": "Unbreakable, Vol. 1 - The Greatest Hits",
"dt": 231941,
},
{
"id": 2080749,
"name": "My Love (Radio Edit)",
"artist": "Westlife",
"album": "World Of Our Own - No. 1 Hits Plus (EP)",
"dt": 232920,
},
{
"id": 29809886,
"name": "My Love (Live)",
"artist": "Westlife",
"album": "The Farewell Tour: Live at Croke Park",
"dt": 262000,
},
{
"id": 572412968,
"name": "My Love",
"artist": "Westlife",
"album": "Pure... Love",
"dt": 231000,
},
{
"id": 20707713,
"name": "You Raise Me Up",
"artist": "Westlife",
"album": "You Raise Me Up",
"dt": 241116,
},
]
return [
SearchCandidate(
item=r["id"],
duration_ms=float(r["dt"]),
title=r["name"],
artist=r["artist"],
album=r["album"],
)
for r in raw
]
def test_lrclib_picks_exact_album_match() -> None:
"""With full metadata, should pick the Coast To Coast entry."""
candidates = _lrclib_candidates()
best, score = select_best(
candidates,
_REF_LENGTH,
title=_REF_TITLE,
artist=_REF_ARTIST,
album=_REF_ALBUM,
)
assert best is not None
assert best["albumName"] == "Coast To Coast"
assert score >= MIN_CONFIDENCE
def test_lrclib_noisy_picks_westlife() -> None:
"""In noisy title-only results, artist matching should filter to Westlife."""
candidates = _lrclib_noisy_candidates()
best, _ = select_best(
candidates,
_REF_LENGTH,
title=_REF_TITLE,
artist=_REF_ARTIST,
album=_REF_ALBUM,
)
assert best is not None
assert best["artistName"] == "Westlife"
def test_lrclib_noisy_title_only_prefers_matching_title() -> None:
    """Without a ref artist, wrong-artist candidates may still win, but the right title should rank higher."""
candidates = _lrclib_noisy_candidates()
best, _ = select_best(
candidates,
_REF_LENGTH,
title=_REF_TITLE,
)
# Should pick a "My Love" over "Let My Love Be Your Pillow"
assert best is not None
    assert best["trackName"].startswith("My Love")
def test_netease_picks_closest_duration() -> None:
candidates = _netease_candidates()
best, _ = select_best(
candidates,
_REF_LENGTH,
title=_REF_TITLE,
artist=_REF_ARTIST,
album=_REF_ALBUM,
)
# 2080607 has dt=231941 (diff=59ms), closest to 232000
assert best == 2080607
def test_netease_rejects_wrong_title() -> None:
"""'You Raise Me Up' should not be selected."""
candidates = _netease_candidates()
best, _ = select_best(
candidates,
_REF_LENGTH,
title=_REF_TITLE,
artist=_REF_ARTIST,
)
assert best != 20707713
def test_netease_without_ref_metadata_rejects_below_confidence() -> None:
"""Without any ref metadata, candidates with one-sided fields score low and get rejected."""
candidates = _netease_candidates()
best, _ = select_best(candidates, _REF_LENGTH)
# Candidates have title/artist/album but ref has None for all → 0 for text fields
# Only duration (max 10) contributes → below MIN_CONFIDENCE (25)
assert best is None
def test_empty_candidates_returns_none() -> None:
assert select_best([], track_length_ms=5000) == (None, 0.0)
assert select_best([], track_length_ms=None) == (None, 0.0)
def test_all_below_min_confidence_returns_none() -> None:
"""If all candidates score below threshold, return None."""
candidates = [
SearchCandidate(
item="x",
title="Completely Different Song",
artist="Unknown Artist",
album="Unknown Album",
duration_ms=999999.0,
),
]
result, _ = select_best(
candidates,
232000,
title="My Love",
artist="Westlife",
album="Coast To Coast",
min_confidence=90.0,
)
assert result is None
def test_generic_type_preserved() -> None:
int_candidates = [SearchCandidate(item=42, duration_ms=5000.0, title="x")]
best, _ = select_best(int_candidates, 5000, title="x")
assert best == 42
dict_candidates = [SearchCandidate(item={"id": 1}, title="x")]
best, _ = select_best(dict_candidates, title="x")
assert best == {"id": 1}
def test_select_ranked_empty_input() -> None:
assert select_ranked([]) == []
def test_select_ranked_all_below_confidence() -> None:
"""All candidates below threshold → empty list."""
candidates = [
SearchCandidate(item="x", title="Completely Different", duration_ms=999999.0)
]
result = select_ranked(
candidates, 232000, title="My Love", artist="Westlife", min_confidence=90.0
)
assert result == []
def test_select_ranked_sorted_descending() -> None:
"""Results are ordered highest score first."""
candidates = _netease_candidates()
ranked = select_ranked(
candidates,
_REF_LENGTH,
title=_REF_TITLE,
artist=_REF_ARTIST,
album=_REF_ALBUM,
)
assert len(ranked) >= 2
scores = [score for _, score in ranked]
assert scores == sorted(scores, reverse=True)
def test_select_ranked_respects_max_results() -> None:
candidates = _netease_candidates()
ranked = select_ranked(
candidates,
_REF_LENGTH,
title=_REF_TITLE,
artist=_REF_ARTIST,
album=_REF_ALBUM,
max_results=2,
)
assert len(ranked) <= 2
def test_select_ranked_consistent_with_select_best() -> None:
"""First result of select_ranked matches select_best."""
candidates = _netease_candidates()
kwargs = dict(title=_REF_TITLE, artist=_REF_ARTIST, album=_REF_ALBUM)
ranked = select_ranked(candidates, _REF_LENGTH, **kwargs) # type: ignore
best_item, best_score = select_best(candidates, _REF_LENGTH, **kwargs) # type: ignore
assert ranked[0] == (best_item, best_score)
def test_select_ranked_duration_hard_filter_applies() -> None:
"""Candidates outside duration tolerance are excluded from ranked results."""
candidates = _netease_candidates()
ranked = select_ranked(
candidates,
_REF_LENGTH,
title=_REF_TITLE,
artist=_REF_ARTIST,
album=_REF_ALBUM,
)
ids = [item for item, _ in ranked]
# 29809886 (dt=262000, diff=30000ms) and 20707713 (dt=241116, diff=9116ms)
# both exceed DURATION_TOLERANCE_MS=3000 → must not appear
assert 29809886 not in ids
assert 20707713 not in ids
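The hard filter the comment describes can be illustrated directly against the Netease fixture durations (helper name and the neutral-on-missing behaviour are assumptions based on the tests above):

```python
DURATION_TOLERANCE_MS = 3000.0

def passes_duration_filter(candidate_ms, ref_ms):
    # Neutral (kept) when either side lacks a duration; otherwise a hard cut.
    if candidate_ms is None or ref_ms is None:
        return True
    return abs(candidate_ms - ref_ms) <= DURATION_TOLERANCE_MS

# Durations from the Netease fixtures, against the 232000 ms reference:
durations = {29809886: 262000.0, 20707713: 241116.0, 2080607: 231941.0}
kept = [tid for tid, d in durations.items() if passes_duration_filter(d, 232000)]
```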
def test_select_ranked_netease_top_is_best_duration_match() -> None:
"""2080607 (diff=59ms) should rank first over 572412968 (diff=1000ms)."""
candidates = _netease_candidates()
ranked = select_ranked(
candidates,
_REF_LENGTH,
title=_REF_TITLE,
artist=_REF_ARTIST,
album=_REF_ALBUM,
)
assert ranked[0][0] == 2080607
@@ -1,684 +0,0 @@
from __future__ import annotations
import asyncio
from pathlib import Path
from typing import Optional
from lrx_cli.lrc import LRCData
from lrx_cli.models import TrackMeta
from lrx_cli.watch.control import ControlClient, ControlServer, parse_delta
from lrx_cli.watch.view import BaseOutput, LyricView, WatchState, WatchStatus
from lrx_cli.watch.view.pipe import PipeOutput
from lrx_cli.watch.view.print import PrintOutput
from lrx_cli.watch.player import ActivePlayerSelector, PlayerState, PlayerTarget
from lrx_cli.config import AppConfig
from lrx_cli.watch.tracker import PositionTracker
from lrx_cli.watch.session import WatchCoordinator
TEST_CONFIG = AppConfig()
BUS = "org.mpris.MediaPlayer2.spotify"
def test_parse_delta_supports_plus_minus_and_reset() -> None:
assert parse_delta("+200") == (True, 200, None)
assert parse_delta("-150") == (True, -150, None)
assert parse_delta("0") == (True, 0, None)
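A minimal sketch of the `(ok, delta_ms, error)` contract these assertions imply — hypothetical reconstruction; the real `parse_delta` may validate units or ranges:

```python
def parse_delta_sketch(text):
    # "+200"/"-150" are relative offsets in ms; "0" doubles as a reset.
    try:
        return (True, int(text), None)
    except ValueError:
        return (False, 0, "invalid delta")
```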
# PlayerTarget
def test_player_target_allows_all_when_hint_empty() -> None:
target = PlayerTarget()
assert target.allows("org.mpris.MediaPlayer2.spotify") is True
assert target.allows("org.mpris.MediaPlayer2.mpd") is True
def test_player_target_filters_by_case_insensitive_substring() -> None:
target = PlayerTarget("Spot")
assert target.allows("org.mpris.MediaPlayer2.spotify") is True
assert target.allows("org.mpris.MediaPlayer2.mpd") is False
def test_player_target_hint_allows_regardless_of_blacklist() -> None:
# --player bypasses PLAYER_BLACKLIST; PlayerTarget.allows() reflects the hint only
target = PlayerTarget("spot")
assert target.allows("org.mpris.MediaPlayer2.spotify") is True
# ActivePlayerSelector
def _ps(bus: str, status: str = "Playing") -> PlayerState:
return PlayerState(bus_name=bus, status=status, track=TrackMeta(title="T"))
def test_active_player_selector_returns_none_when_no_players() -> None:
assert ActivePlayerSelector.select({}, None, "spotify") is None
def test_active_player_selector_prefers_single_playing() -> None:
players = {
"org.mpris.MediaPlayer2.foo": _ps("org.mpris.MediaPlayer2.foo", "Paused"),
"org.mpris.MediaPlayer2.bar": _ps("org.mpris.MediaPlayer2.bar", "Playing"),
}
assert (
ActivePlayerSelector.select(players, None, "spotify")
== "org.mpris.MediaPlayer2.bar"
)
def test_active_player_selector_prefers_keyword_among_multiple_playing() -> None:
players = {
"org.mpris.MediaPlayer2.foo": _ps("org.mpris.MediaPlayer2.foo"),
"org.mpris.MediaPlayer2.spotify": _ps("org.mpris.MediaPlayer2.spotify"),
}
assert (
ActivePlayerSelector.select(players, None, "spotify")
== "org.mpris.MediaPlayer2.spotify"
)
def test_active_player_selector_uses_last_active_when_no_playing() -> None:
players = {
"org.mpris.MediaPlayer2.foo": _ps("org.mpris.MediaPlayer2.foo", "Paused"),
"org.mpris.MediaPlayer2.bar": _ps("org.mpris.MediaPlayer2.bar", "Stopped"),
}
assert (
ActivePlayerSelector.select(players, "org.mpris.MediaPlayer2.bar", "spotify")
== "org.mpris.MediaPlayer2.bar"
)
def test_active_player_selector_falls_back_to_first_when_no_preference() -> None:
players = {
"org.mpris.MediaPlayer2.foo": _ps("org.mpris.MediaPlayer2.foo", "Paused"),
}
result = ActivePlayerSelector.select(players, None, "")
assert result == "org.mpris.MediaPlayer2.foo"
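The selection priority these tests pin down — a playing player (preferring the keyword when several are playing), then the last active player, then the first listed — can be sketched with statuses as plain strings (the real selector receives `PlayerState` objects):

```python
def select_active_sketch(players, last_active, keyword):
    # players: dict of bus name -> playback status string.
    playing = [bus for bus, status in players.items() if status == "Playing"]
    if playing:
        for bus in playing:
            if keyword and keyword.lower() in bus.lower():
                return bus
        return playing[0]
    if last_active in players:
        return last_active
    return next(iter(players), None)
```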
# PositionTracker
def test_position_tracker_seeked_calibrates_immediately() -> None:
async def _run() -> None:
tracker = PositionTracker(lambda _: asyncio.sleep(0, result=1200), TEST_CONFIG)
await tracker.start()
await tracker.set_active_player(BUS, "Playing", "track-A")
await tracker.on_seeked(BUS, 3500)
pos = await tracker.get_position_ms()
await tracker.stop()
assert pos >= 3500
asyncio.run(_run())
def test_position_tracker_pause_stops_position_growth() -> None:
async def _run() -> None:
tracker = PositionTracker(lambda _: asyncio.sleep(0, result=0), TEST_CONFIG)
await tracker.start()
await tracker.set_active_player(BUS, "Playing", "track-A")
await asyncio.sleep(0.08)
before = await tracker.get_position_ms()
await tracker.on_playback_status(BUS, "Paused")
await asyncio.sleep(0.08)
after = await tracker.get_position_ms()
await tracker.stop()
assert before > 0
assert after - before < 20
asyncio.run(_run())
def test_position_tracker_resume_via_playback_status_calibrates() -> None:
async def _run() -> None:
tracker = PositionTracker(lambda _: asyncio.sleep(0, result=50000), TEST_CONFIG)
await tracker.start()
await tracker.set_active_player(BUS, "Paused", "track-A")
await tracker.on_playback_status(BUS, "Playing")
pos = await tracker.get_position_ms()
await tracker.stop()
assert pos >= 50000
asyncio.run(_run())
def test_position_tracker_paused_start_calibrates_initial_position() -> None:
"""set_active_player with Paused must still calibrate position — player may be mid-song."""
async def _run() -> None:
tracker = PositionTracker(lambda _: asyncio.sleep(0, result=45000), TEST_CONFIG)
await tracker.start()
await tracker.set_active_player(BUS, "Paused", "track-A")
pos = await tracker.get_position_ms()
await tracker.stop()
assert pos >= 45000
asyncio.run(_run())
def test_position_tracker_resume_via_set_active_player_calibrates() -> None:
async def _run() -> None:
tracker = PositionTracker(lambda _: asyncio.sleep(0, result=42000), TEST_CONFIG)
await tracker.start()
await tracker.set_active_player(BUS, "Paused", "track-A")
await tracker.set_active_player(BUS, "Playing", "track-A")
pos = await tracker.get_position_ms()
await tracker.stop()
assert pos >= 42000
asyncio.run(_run())
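The tracker behaviour exercised above (position advances with wall-clock time only while playing; pause freezes it; resume re-calibrates from the player) can be modelled synchronously. A hypothetical sketch, without the async machinery or the D-Bus position callback:

```python
import time
from typing import Optional

class PositionSketch:
    # Position = calibrated base + elapsed wall-clock time while playing.
    def __init__(self, start_ms: float = 0.0) -> None:
        self._base_ms = start_ms
        self._anchor = time.monotonic()
        self._playing = True

    def pause(self) -> None:
        if self._playing:
            self._base_ms += (time.monotonic() - self._anchor) * 1000.0
            self._playing = False

    def resume(self, calibrated_ms: Optional[float] = None) -> None:
        if calibrated_ms is not None:
            self._base_ms = calibrated_ms  # re-calibrate from the player
        self._anchor = time.monotonic()
        self._playing = True

    def position_ms(self) -> float:
        if not self._playing:
            return self._base_ms
        return self._base_ms + (time.monotonic() - self._anchor) * 1000.0
```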
# ControlServer and ControlClient
def test_control_server_and_client_roundtrip(tmp_path: Path) -> None:
async def _run() -> None:
class _Session:
def __init__(self) -> None:
self.offset = 0
def handle_offset(self, delta: int) -> dict:
self.offset += delta
return {"ok": True, "offset_ms": self.offset}
def handle_status(self) -> dict:
return {"ok": True, "offset_ms": self.offset, "lyrics_status": "idle"}
socket_path = tmp_path / "watch.sock"
server = ControlServer(socket_path=str(socket_path))
await server.start(_Session()) # type: ignore
client = ControlClient(socket_path=str(socket_path))
r1 = await client._send_async({"cmd": "offset", "delta": 200})
r2 = await client._send_async({"cmd": "status"})
await server.stop()
assert r1 == {"ok": True, "offset_ms": 200}
assert r2["ok"] is True
assert r2["offset_ms"] == 200
asyncio.run(_run())
# PipeOutput
def _pipe_state(
status: WatchStatus,
lyrics: Optional[LRCData] = None,
position_ms: int = 0,
offset_ms: int = 0,
track: Optional[TrackMeta] = None,
) -> WatchState:
return WatchState(
track=track,
lyrics=LyricView.from_lrc(lyrics) if lyrics else None,
position_ms=position_ms,
offset_ms=offset_ms,
status=status,
)
def test_pipe_output_fetching_renders_status_window(capsys) -> None:
asyncio.run(
PipeOutput(before=1, after=1).on_state(_pipe_state(WatchStatus.FETCHING))
)
assert capsys.readouterr().out == "\n[fetching...]\n\n"
def test_pipe_output_no_lyrics_renders_status_window(capsys) -> None:
asyncio.run(
PipeOutput(before=1, after=1).on_state(_pipe_state(WatchStatus.NO_LYRICS))
)
assert capsys.readouterr().out == "\n[no lyrics]\n\n"
def test_pipe_output_idle_renders_status_window(capsys) -> None:
asyncio.run(PipeOutput(before=1, after=1).on_state(_pipe_state(WatchStatus.IDLE)))
assert capsys.readouterr().out == "\n[idle]\n\n"
def test_pipe_output_no_newline_mode(capsys) -> None:
asyncio.run(
PipeOutput(before=0, after=0, no_newline=True).on_state(
_pipe_state(WatchStatus.FETCHING)
)
)
assert capsys.readouterr().out == "[fetching...]"
def test_pipe_output_default_window_shows_current_line(capsys) -> None:
lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
asyncio.run(
PipeOutput().on_state(_pipe_state(WatchStatus.OK, lrc, position_ms=2100))
)
assert capsys.readouterr().out == "b\n"
def test_pipe_output_context_window(capsys) -> None:
lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
asyncio.run(
PipeOutput(before=1, after=1).on_state(
_pipe_state(WatchStatus.OK, lrc, position_ms=2100)
)
)
assert capsys.readouterr().out == "a\nb\nc\n"
def test_pipe_output_before_region_empty_at_first_line(capsys) -> None:
lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
asyncio.run(
PipeOutput(before=1, after=1).on_state(
_pipe_state(WatchStatus.OK, lrc, position_ms=1100)
)
)
assert capsys.readouterr().out == "\na\nb\n"
def test_pipe_output_after_region_empty_at_last_line(capsys) -> None:
lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
asyncio.run(
PipeOutput(before=1, after=1).on_state(
_pipe_state(WatchStatus.OK, lrc, position_ms=3100)
)
)
assert capsys.readouterr().out == "b\nc\n\n"
def test_pipe_output_upcoming_lines_before_first_timestamp(capsys) -> None:
lrc = LRCData("[00:02.00]a\n[00:03.00]b")
asyncio.run(
PipeOutput(before=1, after=1).on_state(
_pipe_state(WatchStatus.OK, lrc, position_ms=0)
)
)
assert capsys.readouterr().out == "\n\na\n"
def test_pipe_output_offset_ms_shifts_effective_position(capsys) -> None:
lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
asyncio.run(
PipeOutput().on_state(
_pipe_state(WatchStatus.OK, lrc, position_ms=1000, offset_ms=1500)
)
)
# effective = 2500 ms → line b
assert capsys.readouterr().out == "b\n"
def test_pipe_output_repeated_text_uses_correct_timed_occurrence(capsys) -> None:
lrc = LRCData("[00:01.00]A\n[00:02.00]X\n[00:03.00]B\n[00:04.00]X\n[00:05.00]C")
asyncio.run(
PipeOutput(before=1, after=1).on_state(
_pipe_state(WatchStatus.OK, lrc, position_ms=4100)
)
)
assert capsys.readouterr().out == "B\nX\nC\n"
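The window logic these PipeOutput tests encode — pick the last line whose timestamp is at or before the position, then pad the before/after context with blanks at the edges — can be sketched as a pure function (a reconstruction; the real implementation works on `LyricView`, not parallel lists):

```python
def render_window(lines, timestamps, position_ms, before, after):
    # Current line = last timestamp <= position (or -1 before the first tag).
    idx = -1
    for i, t in enumerate(timestamps):
        if t <= position_ms:
            idx = i
    out = []
    for j in range(idx - before, idx + after + 1):
        out.append(lines[j] if 0 <= j < len(lines) else "")
    return out
```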
# PrintOutput
def _ok_state(lyrics: LRCData, track: Optional[TrackMeta] = None) -> WatchState:
return WatchState(
track=track or TrackMeta(title="Song", artist="Artist"),
lyrics=LyricView.from_lrc(lyrics),
position_ms=0,
offset_ms=0,
status=WatchStatus.OK,
)
def _status_state(status: WatchStatus, track: Optional[TrackMeta] = None) -> WatchState:
return WatchState(
track=track or TrackMeta(title="Song", artist="Artist"),
lyrics=None,
position_ms=0,
offset_ms=0,
status=status,
)
def test_print_output_emits_lrc_on_ok(capsys) -> None:
asyncio.run(
PrintOutput().on_state(_ok_state(LRCData("[00:01.00]Hello\n[00:02.00]World")))
)
assert capsys.readouterr().out.startswith("[00:01.00]")
def test_print_output_plain_strips_tags(capsys) -> None:
asyncio.run(
PrintOutput(plain=True).on_state(
_ok_state(LRCData("[00:01.00]Hello\n[00:02.00]World"))
)
)
out = capsys.readouterr().out
assert "[" not in out
assert "Hello" in out
def test_print_output_plain_with_unsynced_lyrics(capsys) -> None:
asyncio.run(PrintOutput(plain=True).on_state(_ok_state(LRCData("Hello\nWorld"))))
out = capsys.readouterr().out
assert "Hello" in out
assert "[" not in out
def test_print_output_no_lyrics_emits_blank_line(capsys) -> None:
asyncio.run(PrintOutput().on_state(_status_state(WatchStatus.NO_LYRICS)))
assert capsys.readouterr().out == "\n"
def test_print_output_fetching_emits_nothing(capsys) -> None:
asyncio.run(PrintOutput().on_state(_status_state(WatchStatus.FETCHING)))
assert capsys.readouterr().out == ""
def test_print_output_idle_emits_nothing(capsys) -> None:
asyncio.run(PrintOutput().on_state(_status_state(WatchStatus.IDLE)))
assert capsys.readouterr().out == ""
def test_print_output_is_stateless(capsys) -> None:
"""View has no internal deduplication — emits on every call."""
output = PrintOutput()
state = _ok_state(LRCData("[00:01.00]Hello"))
asyncio.run(output.on_state(state))
asyncio.run(output.on_state(state))
lines = [ln for ln in capsys.readouterr().out.splitlines() if ln]
assert len(lines) == 2
def test_print_output_position_sensitive_is_false() -> None:
assert PrintOutput.position_sensitive is False
# WatchCoordinator
class _CaptureFetcher:
def __init__(self) -> None:
self.requested: list[str] = []
def request(self, track: TrackMeta) -> None:
self.requested.append(track.display_name())
async def stop(self) -> None:
pass
def _make_coordinator(output: Optional[BaseOutput] = None) -> WatchCoordinator:
class _Manager:
def fetch_for_track(self, *_a, **_kw):
return None
class _NullOutput(BaseOutput):
async def on_state(self, state: WatchState) -> None:
pass
session = WatchCoordinator(
_Manager(), # type: ignore
output or _NullOutput(),
player_hint=None,
config=TEST_CONFIG,
)
session._tracker = PositionTracker(
lambda _bus: asyncio.sleep(0, result=0),
TEST_CONFIG,
)
return session
def _pstate(status: str = "Playing", title: str = "Song") -> PlayerState:
return PlayerState(
bus_name=BUS,
status=status,
track=TrackMeta(title=title, artist="Artist"),
)
def test_coordinator_fetches_on_initial_player() -> None:
async def _run() -> None:
session = _make_coordinator()
fetcher = _CaptureFetcher()
session._fetcher = fetcher # type: ignore[assignment]
session._player_monitor.players = {BUS: _pstate("Playing")}
session._on_player_change()
await asyncio.sleep(0)
assert fetcher.requested == ["Artist - Song"]
assert session._model.status == WatchStatus.FETCHING
asyncio.run(_run())
def test_coordinator_fetches_while_paused() -> None:
"""Fetch starts immediately even when player is paused — no wait for resume."""
async def _run() -> None:
session = _make_coordinator()
fetcher = _CaptureFetcher()
session._fetcher = fetcher # type: ignore[assignment]
session._player_monitor.players = {BUS: _pstate("Paused")}
session._on_player_change()
await asyncio.sleep(0)
assert fetcher.requested == ["Artist - Song"]
asyncio.run(_run())
def test_coordinator_paused_start_emits_correct_line_after_fetch() -> None:
"""After fetch completes with a mid-song paused player, the current lyric line must render."""
async def _run() -> None:
received: list[WatchState] = []
class _CaptureOutput(BaseOutput):
position_sensitive = True
async def on_state(self, state: WatchState) -> None:
received.append(state)
class _Manager:
def fetch_for_track(self, *_a, **_kw):
return None
PAUSED_MS = 45000
lrc = LRCData("[00:43.00]a\n[00:44.00]b\n[00:46.00]c")
session = WatchCoordinator(
_Manager(), # type: ignore
_CaptureOutput(),
player_hint=None,
config=TEST_CONFIG,
)
session._tracker = PositionTracker(
lambda _bus: asyncio.sleep(0, result=PAUSED_MS),
TEST_CONFIG,
)
await session._tracker.start()
# Calibrate tracker directly (tracker-level behavior already covered by
# test_position_tracker_paused_start_calibrates_initial_position)
await session._tracker.set_active_player(BUS, "Paused", "Artist - Song")
# Put model in the state _on_player_change would have produced
session._model.active_player = BUS
session._model.active_track_key = "Artist - Song"
session._model.status = WatchStatus.FETCHING
session._player_monitor.players = {BUS: _pstate("Paused")}
session._last_emit_signature = (
"status",
WatchStatus.FETCHING,
BUS,
"Artist - Song",
)
await session._on_lyrics_update(lrc)
last_ok = next(
(s for s in reversed(received) if s.status == WatchStatus.OK), None
)
assert last_ok is not None, "no OK state emitted after lyrics loaded"
assert last_ok.position_ms >= PAUSED_MS
await session._tracker.stop()
asyncio.run(_run())


def test_coordinator_fetches_on_track_change() -> None:
    async def _run() -> None:
        session = _make_coordinator()
        session._model.active_player = BUS
        session._model.active_track_key = "Artist - Old Song"
        session._model.set_lyrics(LRCData("[00:01.00]old"))
        session._model.status = WatchStatus.OK
        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Playing", title="New Song")}
        session._on_player_change()
        await asyncio.sleep(0)
        assert fetcher.requested == ["Artist - New Song"]
        assert session._model.lyrics is None

    asyncio.run(_run())


def test_coordinator_no_refetch_on_calibration_no_lyrics() -> None:
    """Calibration with same player/track and no_lyrics must NOT trigger a second fetch."""

    async def _run() -> None:
        session = _make_coordinator()
        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Playing")}
        session._on_player_change()
        await asyncio.sleep(0)
        assert len(fetcher.requested) == 1
        session._model.status = WatchStatus.NO_LYRICS
        session._on_player_change()
        await asyncio.sleep(0)
        assert len(fetcher.requested) == 1

    asyncio.run(_run())


def test_coordinator_no_fetch_when_lyrics_present() -> None:
    async def _run() -> None:
        session = _make_coordinator()
        session._model.active_player = BUS
        session._model.active_track_key = "Artist - Song"
        session._model.set_lyrics(LRCData("[00:01.00]line"))
        session._model.status = WatchStatus.OK
        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Playing")}
        session._on_player_change()
        await asyncio.sleep(0)
        assert fetcher.requested == []
        assert session._model.status == WatchStatus.OK

    asyncio.run(_run())


def test_coordinator_player_disappears_goes_idle() -> None:
    async def _run() -> None:
        session = _make_coordinator()
        session._model.active_player = BUS
        session._model.active_track_key = "Artist - Song"
        session._model.set_lyrics(LRCData("[00:01.00]line"))
        session._model.status = WatchStatus.OK
        session._player_monitor.players = {}
        session._on_player_change()
        await asyncio.sleep(0)
        assert session._model.status == WatchStatus.IDLE
        assert session._model.lyrics is None
        assert session._model.active_player is None

    asyncio.run(_run())


def test_coordinator_no_fetch_when_track_is_none() -> None:
    """Player present but reports no track metadata → no fetch, status NO_LYRICS."""

    async def _run() -> None:
        session = _make_coordinator()
        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {
            BUS: PlayerState(bus_name=BUS, status="Playing", track=None)
        }
        session._on_player_change()
        await asyncio.sleep(0)
        assert fetcher.requested == []
        assert session._model.status == WatchStatus.NO_LYRICS

    asyncio.run(_run())


def test_coordinator_emit_deduplicates_on_same_cursor() -> None:
    async def _run() -> None:
        counts = [0]

        class _CountOutput(BaseOutput):
            async def on_state(self, state: WatchState) -> None:
                counts[0] += 1

        session = _make_coordinator(_CountOutput())
        track = TrackMeta(title="Song", artist="Artist")
        session._model.active_player = BUS
        session._player_monitor.players = {
            BUS: PlayerState(bus_name=BUS, status="Playing", track=track)
        }
        session._model.set_lyrics(LRCData("[00:01.00]a\n[00:03.00]b"))
        session._model.status = WatchStatus.OK
        await session._tracker.set_active_player(BUS, "Playing", "Artist - Song")
        await session._emit_state()  # emits
        await session._emit_state()  # same cursor → no emit
        assert counts[0] == 1
        await session._tracker.on_seeked(BUS, 3200)
        await session._emit_state()  # cursor advanced → emits
        assert counts[0] == 2

    asyncio.run(_run())


def test_coordinator_position_insensitive_output_ignores_seeks() -> None:
    """With position_sensitive=False, seek events do not trigger re-emit."""

    async def _run() -> None:
        counts = [0]

        class _CountPrint(PrintOutput):
            async def on_state(self, state: WatchState) -> None:
                counts[0] += 1

        session = _make_coordinator(_CountPrint())
        track = TrackMeta(title="Song", artist="Artist")
        session._model.active_player = BUS
        session._player_monitor.players = {
            BUS: PlayerState(bus_name=BUS, status="Playing", track=track)
        }
        session._model.set_lyrics(LRCData("[00:01.00]a\n[00:03.00]b"))
        session._model.status = WatchStatus.OK
        await session._emit_state()  # emits once
        assert counts[0] == 1
        await session._tracker.on_seeked(BUS, 3200)
        await session._emit_state()  # position fixed at 0 → same signature → no re-emit
        assert counts[0] == 1

    asyncio.run(_run())
Generated
+122 -151
@@ -2,6 +2,15 @@ version = 1
revision = 3
requires-python = ">=3.13"
+[[package]]
+name = "annotated-types"
+version = "0.7.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/ee/67/531ea369ba64dcff5ec9c3402f9f51bf748cec26dde048a2f973a4eea7f5/annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89", size = 16081, upload-time = "2024-05-20T21:33:25.928Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53", size = 13643, upload-time = "2024-05-20T21:33:24.1Z" },
+]
[[package]]
name = "anyio"
version = "4.13.0"
@@ -43,7 +52,7 @@ wheels = [
[[package]]
name = "cyclopts"
-version = "4.10.2"
+version = "4.10.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "attrs" },
@@ -51,9 +60,9 @@ dependencies = [
{ name = "rich" },
{ name = "rich-rst" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/66/2c/fced34890f6e5a93a4b7afb2c71e8eee2a0719fb26193a0abf159ecb714d/cyclopts-4.10.2.tar.gz", hash = "sha256:d7b950457ef2563596d56331f80cbbbf86a2772535fb8b315c4f03bc7e6127f1", size = 166664, upload-time = "2026-04-08T23:57:45.805Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/6c/c4/2ce2ca1451487dc7d59f09334c3fa1182c46cfcf0a2d5f19f9b26d53ac74/cyclopts-4.10.1.tar.gz", hash = "sha256:ad4e4bb90576412d32276b14a76f55d43353753d16217f2c3cd5bdceba7f15a0", size = 166623, upload-time = "2026-03-23T14:43:01.098Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/b4/bd/05055d8360cef0757d79367157f3b15c0a0715e81e08f86a04018ec045f0/cyclopts-4.10.2-py3-none-any.whl", hash = "sha256:a1f2d6f8f7afac9456b48f75a40b36658778ddc9c6d406b520d017ae32c990fe", size = 204314, upload-time = "2026-04-08T23:57:46.969Z" },
+    { url = "https://files.pythonhosted.org/packages/8a/0b/2261922126b2e50c601fe22d7ff5194e0a4d50e654836260c0665e24d862/cyclopts-4.10.1-py3-none-any.whl", hash = "sha256:35f37257139380a386d9fe4475e1e7c87ca7795765ef4f31abba579fcfcb6ecd", size = 204331, upload-time = "2026-03-23T14:43:02.625Z" },
]
[[package]]
@@ -129,15 +138,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
]
-[[package]]
-name = "iniconfig"
-version = "2.3.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
-]
[[package]]
name = "loguru"
version = "0.7.3"
@@ -152,8 +152,8 @@ wheels = [
]
[[package]]
-name = "lrx-cli"
-version = "0.7.9"
+name = "lrcfetch"
+version = "0.1.5"
source = { editable = "." }
dependencies = [
{ name = "cyclopts" },
@@ -162,13 +162,12 @@ dependencies = [
{ name = "loguru" },
{ name = "mutagen" },
{ name = "platformdirs" },
+    { name = "pydantic" },
+    { name = "python-dotenv" },
]
[package.dev-dependencies]
dev = [
-    { name = "poethepoet" },
-    { name = "pyright" },
-    { name = "pytest" },
{ name = "ruff" },
]
@@ -179,16 +178,13 @@ requires-dist = [
{ name = "httpx", specifier = ">=0.28.1" },
{ name = "loguru", specifier = ">=0.7.3" },
{ name = "mutagen", specifier = ">=1.47.0" },
-    { name = "platformdirs", specifier = ">=4.9.6" },
+    { name = "platformdirs", specifier = ">=4.9.4" },
+    { name = "pydantic", specifier = ">=2.12.5" },
+    { name = "python-dotenv", specifier = ">=1.2.2" },
]
[package.metadata.requires-dev]
-dev = [
-    { name = "poethepoet", specifier = ">=0.44.0" },
-    { name = "pyright", specifier = ">=1.1.406" },
-    { name = "pytest", specifier = ">=9.0.2" },
-    { name = "ruff", specifier = ">=0.15.8" },
-]
+dev = [{ name = "ruff", specifier = ">=0.15.8" }]
[[package]]
name = "markdown-it-py"
@@ -220,136 +216,99 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b0/7a/620f945b96be1f6ee357d211d5bf74ab1b7fe72a9f1525aafbfe3aee6875/mutagen-1.47.0-py3-none-any.whl", hash = "sha256:edd96f50c5907a9539d8e5bba7245f62c9f520aef333d13392a79a4f70aca719", size = 194391, upload-time = "2023-09-03T16:33:29.955Z" },
]
-[[package]]
-name = "nodeenv"
-version = "1.10.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/24/bf/d1bda4f6168e0b2e9e5958945e01910052158313224ada5ce1fb2e1113b8/nodeenv-1.10.0.tar.gz", hash = "sha256:996c191ad80897d076bdfba80a41994c2b47c68e224c542b48feba42ba00f8bb", size = 55611, upload-time = "2025-12-20T14:08:54.006Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/88/b2/d0896bdcdc8d28a7fc5717c305f1a861c26e18c05047949fb371034d98bd/nodeenv-1.10.0-py2.py3-none-any.whl", hash = "sha256:5bb13e3eed2923615535339b3c620e76779af4cb4c6a90deccc9e36b274d3827", size = 23438, upload-time = "2025-12-20T14:08:52.782Z" },
-]
-[[package]]
-name = "packaging"
-version = "26.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/65/ee/299d360cdc32edc7d2cf530f3accf79c4fca01e96ffc950d8a52213bd8e4/packaging-26.0.tar.gz", hash = "sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4", size = 143416, upload-time = "2026-01-21T20:50:39.064Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/b7/b9/c538f279a4e237a006a2c98387d081e9eb060d203d8ed34467cc0f0b9b53/packaging-26.0-py3-none-any.whl", hash = "sha256:b36f1fef9334a5588b4166f8bcd26a14e521f2b55e6b9de3aaa80d3ff7a37529", size = 74366, upload-time = "2026-01-21T20:50:37.788Z" },
-]
-[[package]]
-name = "pastel"
-version = "0.2.1"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/76/f1/4594f5e0fcddb6953e5b8fe00da8c317b8b41b547e2b3ae2da7512943c62/pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d", size = 7555, upload-time = "2020-09-16T19:21:12.43Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/aa/18/a8444036c6dd65ba3624c63b734d3ba95ba63ace513078e1580590075d21/pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364", size = 5955, upload-time = "2020-09-16T19:21:11.409Z" },
-]
[[package]]
name = "platformdirs"
-version = "4.9.6"
+version = "4.9.4"
source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/9f/4a/0883b8e3802965322523f0b200ecf33d31f10991d0401162f4b23c698b42/platformdirs-4.9.6.tar.gz", hash = "sha256:3bfa75b0ad0db84096ae777218481852c0ebc6c727b3168c1b9e0118e458cf0a", size = 29400, upload-time = "2026-04-09T00:04:10.812Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/19/56/8d4c30c8a1d07013911a8fdbd8f89440ef9f08d07a1b50ab8ca8be5a20f9/platformdirs-4.9.4.tar.gz", hash = "sha256:1ec356301b7dc906d83f371c8f487070e99d3ccf9e501686456394622a01a934", size = 28737, upload-time = "2026-03-05T18:34:13.271Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/75/a6/a0a304dc33b49145b21f4808d763822111e67d1c3a32b524a1baf947b6e1/platformdirs-4.9.6-py3-none-any.whl", hash = "sha256:e61adb1d5e5cb3441b4b7710bea7e4c12250ca49439228cc1021c00dcfac0917", size = 21348, upload-time = "2026-04-09T00:04:09.463Z" },
+    { url = "https://files.pythonhosted.org/packages/63/d7/97f7e3a6abb67d8080dd406fd4df842c2be0efaf712d1c899c32a075027c/platformdirs-4.9.4-py3-none-any.whl", hash = "sha256:68a9a4619a666ea6439f2ff250c12a853cd1cbd5158d258bd824a7df6be2f868", size = 21216, upload-time = "2026-03-05T18:34:12.172Z" },
]
-[[package]]
-name = "pluggy"
-version = "1.6.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
-]
[[package]]
-name = "poethepoet"
-version = "0.44.0"
+name = "pydantic"
+version = "2.12.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
-    { name = "pastel" },
-    { name = "pyyaml" },
+    { name = "annotated-types" },
+    { name = "pydantic-core" },
+    { name = "typing-extensions" },
+    { name = "typing-inspection" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/a1/a4/e487662f12a5ecd2ac4d77f7697e4bda481953bb80032b158e5ab55173d4/poethepoet-0.44.0.tar.gz", hash = "sha256:c2667b513621788fb46482e371cdf81c0b04344e0e0bcb7aa8af45f84c2fce7b", size = 96040, upload-time = "2026-04-06T19:40:58.908Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/69/44/36f1a6e523abc58ae5f928898e4aca2e0ea509b5aa6f6f392a5d882be928/pydantic-2.12.5.tar.gz", hash = "sha256:4d351024c75c0f085a9febbb665ce8c0c6ec5d30e903bdb6394b7ede26aebb49", size = 821591, upload-time = "2025-11-26T15:11:46.471Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/80/b7/503b7d3a51b0de9a329f1323048d166e309a97bb31bdc60e6acd11d2c71f/poethepoet-0.44.0-py3-none-any.whl", hash = "sha256:36d3d834708ed069ac1e4f8ed77915c55265b7b6e01aeb2fe617c9fe9cfd524a", size = 122873, upload-time = "2026-04-06T19:40:57.369Z" },
+    { url = "https://files.pythonhosted.org/packages/5a/87/b70ad306ebb6f9b585f114d0ac2137d792b48be34d732d60e597c2f8465a/pydantic-2.12.5-py3-none-any.whl", hash = "sha256:e561593fccf61e8a20fc46dfc2dfe075b8be7d0188df33f221ad1f0139180f9d", size = 463580, upload-time = "2025-11-26T15:11:44.605Z" },
]
+[[package]]
+name = "pydantic-core"
+version = "2.41.5"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "typing-extensions" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/71/70/23b021c950c2addd24ec408e9ab05d59b035b39d97cdc1130e1bce647bb6/pydantic_core-2.41.5.tar.gz", hash = "sha256:08daa51ea16ad373ffd5e7606252cc32f07bc72b28284b6bc9c6df804816476e", size = 460952, upload-time = "2025-11-04T13:43:49.098Z" }
+wheels = [
{ url = "https://files.pythonhosted.org/packages/87/06/8806241ff1f70d9939f9af039c6c35f2360cf16e93c2ca76f184e76b1564/pydantic_core-2.41.5-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:941103c9be18ac8daf7b7adca8228f8ed6bb7a1849020f643b3a14d15b1924d9", size = 2120403, upload-time = "2025-11-04T13:40:25.248Z" },
{ url = "https://files.pythonhosted.org/packages/94/02/abfa0e0bda67faa65fef1c84971c7e45928e108fe24333c81f3bfe35d5f5/pydantic_core-2.41.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:112e305c3314f40c93998e567879e887a3160bb8689ef3d2c04b6cc62c33ac34", size = 1896206, upload-time = "2025-11-04T13:40:27.099Z" },
{ url = "https://files.pythonhosted.org/packages/15/df/a4c740c0943e93e6500f9eb23f4ca7ec9bf71b19e608ae5b579678c8d02f/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0cbaad15cb0c90aa221d43c00e77bb33c93e8d36e0bf74760cd00e732d10a6a0", size = 1919307, upload-time = "2025-11-04T13:40:29.806Z" },
{ url = "https://files.pythonhosted.org/packages/9a/e3/6324802931ae1d123528988e0e86587c2072ac2e5394b4bc2bc34b61ff6e/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:03ca43e12fab6023fc79d28ca6b39b05f794ad08ec2feccc59a339b02f2b3d33", size = 2063258, upload-time = "2025-11-04T13:40:33.544Z" },
{ url = "https://files.pythonhosted.org/packages/c9/d4/2230d7151d4957dd79c3044ea26346c148c98fbf0ee6ebd41056f2d62ab5/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:dc799088c08fa04e43144b164feb0c13f9a0bc40503f8df3e9fde58a3c0c101e", size = 2214917, upload-time = "2025-11-04T13:40:35.479Z" },
{ url = "https://files.pythonhosted.org/packages/e6/9f/eaac5df17a3672fef0081b6c1bb0b82b33ee89aa5cec0d7b05f52fd4a1fa/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:97aeba56665b4c3235a0e52b2c2f5ae9cd071b8a8310ad27bddb3f7fb30e9aa2", size = 2332186, upload-time = "2025-11-04T13:40:37.436Z" },
{ url = "https://files.pythonhosted.org/packages/cf/4e/35a80cae583a37cf15604b44240e45c05e04e86f9cfd766623149297e971/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:406bf18d345822d6c21366031003612b9c77b3e29ffdb0f612367352aab7d586", size = 2073164, upload-time = "2025-11-04T13:40:40.289Z" },
{ url = "https://files.pythonhosted.org/packages/bf/e3/f6e262673c6140dd3305d144d032f7bd5f7497d3871c1428521f19f9efa2/pydantic_core-2.41.5-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b93590ae81f7010dbe380cdeab6f515902ebcbefe0b9327cc4804d74e93ae69d", size = 2179146, upload-time = "2025-11-04T13:40:42.809Z" },
{ url = "https://files.pythonhosted.org/packages/75/c7/20bd7fc05f0c6ea2056a4565c6f36f8968c0924f19b7d97bbfea55780e73/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:01a3d0ab748ee531f4ea6c3e48ad9dac84ddba4b0d82291f87248f2f9de8d740", size = 2137788, upload-time = "2025-11-04T13:40:44.752Z" },
{ url = "https://files.pythonhosted.org/packages/3a/8d/34318ef985c45196e004bc46c6eab2eda437e744c124ef0dbe1ff2c9d06b/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:6561e94ba9dacc9c61bce40e2d6bdc3bfaa0259d3ff36ace3b1e6901936d2e3e", size = 2340133, upload-time = "2025-11-04T13:40:46.66Z" },
{ url = "https://files.pythonhosted.org/packages/9c/59/013626bf8c78a5a5d9350d12e7697d3d4de951a75565496abd40ccd46bee/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:915c3d10f81bec3a74fbd4faebe8391013ba61e5a1a8d48c4455b923bdda7858", size = 2324852, upload-time = "2025-11-04T13:40:48.575Z" },
{ url = "https://files.pythonhosted.org/packages/1a/d9/c248c103856f807ef70c18a4f986693a46a8ffe1602e5d361485da502d20/pydantic_core-2.41.5-cp313-cp313-win32.whl", hash = "sha256:650ae77860b45cfa6e2cdafc42618ceafab3a2d9a3811fcfbd3bbf8ac3c40d36", size = 1994679, upload-time = "2025-11-04T13:40:50.619Z" },
{ url = "https://files.pythonhosted.org/packages/9e/8b/341991b158ddab181cff136acd2552c9f35bd30380422a639c0671e99a91/pydantic_core-2.41.5-cp313-cp313-win_amd64.whl", hash = "sha256:79ec52ec461e99e13791ec6508c722742ad745571f234ea6255bed38c6480f11", size = 2019766, upload-time = "2025-11-04T13:40:52.631Z" },
{ url = "https://files.pythonhosted.org/packages/73/7d/f2f9db34af103bea3e09735bb40b021788a5e834c81eedb541991badf8f5/pydantic_core-2.41.5-cp313-cp313-win_arm64.whl", hash = "sha256:3f84d5c1b4ab906093bdc1ff10484838aca54ef08de4afa9de0f5f14d69639cd", size = 1981005, upload-time = "2025-11-04T13:40:54.734Z" },
{ url = "https://files.pythonhosted.org/packages/ea/28/46b7c5c9635ae96ea0fbb779e271a38129df2550f763937659ee6c5dbc65/pydantic_core-2.41.5-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:3f37a19d7ebcdd20b96485056ba9e8b304e27d9904d233d7b1015db320e51f0a", size = 2119622, upload-time = "2025-11-04T13:40:56.68Z" },
{ url = "https://files.pythonhosted.org/packages/74/1a/145646e5687e8d9a1e8d09acb278c8535ebe9e972e1f162ed338a622f193/pydantic_core-2.41.5-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1d1d9764366c73f996edd17abb6d9d7649a7eb690006ab6adbda117717099b14", size = 1891725, upload-time = "2025-11-04T13:40:58.807Z" },
{ url = "https://files.pythonhosted.org/packages/23/04/e89c29e267b8060b40dca97bfc64a19b2a3cf99018167ea1677d96368273/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25e1c2af0fce638d5f1988b686f3b3ea8cd7de5f244ca147c777769e798a9cd1", size = 1915040, upload-time = "2025-11-04T13:41:00.853Z" },
{ url = "https://files.pythonhosted.org/packages/84/a3/15a82ac7bd97992a82257f777b3583d3e84bdb06ba6858f745daa2ec8a85/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:506d766a8727beef16b7adaeb8ee6217c64fc813646b424d0804d67c16eddb66", size = 2063691, upload-time = "2025-11-04T13:41:03.504Z" },
{ url = "https://files.pythonhosted.org/packages/74/9b/0046701313c6ef08c0c1cf0e028c67c770a4e1275ca73131563c5f2a310a/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4819fa52133c9aa3c387b3328f25c1facc356491e6135b459f1de698ff64d869", size = 2213897, upload-time = "2025-11-04T13:41:05.804Z" },
{ url = "https://files.pythonhosted.org/packages/8a/cd/6bac76ecd1b27e75a95ca3a9a559c643b3afcd2dd62086d4b7a32a18b169/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2b761d210c9ea91feda40d25b4efe82a1707da2ef62901466a42492c028553a2", size = 2333302, upload-time = "2025-11-04T13:41:07.809Z" },
{ url = "https://files.pythonhosted.org/packages/4c/d2/ef2074dc020dd6e109611a8be4449b98cd25e1b9b8a303c2f0fca2f2bcf7/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:22f0fb8c1c583a3b6f24df2470833b40207e907b90c928cc8d3594b76f874375", size = 2064877, upload-time = "2025-11-04T13:41:09.827Z" },
{ url = "https://files.pythonhosted.org/packages/18/66/e9db17a9a763d72f03de903883c057b2592c09509ccfe468187f2a2eef29/pydantic_core-2.41.5-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2782c870e99878c634505236d81e5443092fba820f0373997ff75f90f68cd553", size = 2180680, upload-time = "2025-11-04T13:41:12.379Z" },
{ url = "https://files.pythonhosted.org/packages/d3/9e/3ce66cebb929f3ced22be85d4c2399b8e85b622db77dad36b73c5387f8f8/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:0177272f88ab8312479336e1d777f6b124537d47f2123f89cb37e0accea97f90", size = 2138960, upload-time = "2025-11-04T13:41:14.627Z" },
{ url = "https://files.pythonhosted.org/packages/a6/62/205a998f4327d2079326b01abee48e502ea739d174f0a89295c481a2272e/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_armv7l.whl", hash = "sha256:63510af5e38f8955b8ee5687740d6ebf7c2a0886d15a6d65c32814613681bc07", size = 2339102, upload-time = "2025-11-04T13:41:16.868Z" },
{ url = "https://files.pythonhosted.org/packages/3c/0d/f05e79471e889d74d3d88f5bd20d0ed189ad94c2423d81ff8d0000aab4ff/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:e56ba91f47764cc14f1daacd723e3e82d1a89d783f0f5afe9c364b8bb491ccdb", size = 2326039, upload-time = "2025-11-04T13:41:18.934Z" },
{ url = "https://files.pythonhosted.org/packages/ec/e1/e08a6208bb100da7e0c4b288eed624a703f4d129bde2da475721a80cab32/pydantic_core-2.41.5-cp314-cp314-win32.whl", hash = "sha256:aec5cf2fd867b4ff45b9959f8b20ea3993fc93e63c7363fe6851424c8a7e7c23", size = 1995126, upload-time = "2025-11-04T13:41:21.418Z" },
{ url = "https://files.pythonhosted.org/packages/48/5d/56ba7b24e9557f99c9237e29f5c09913c81eeb2f3217e40e922353668092/pydantic_core-2.41.5-cp314-cp314-win_amd64.whl", hash = "sha256:8e7c86f27c585ef37c35e56a96363ab8de4e549a95512445b85c96d3e2f7c1bf", size = 2015489, upload-time = "2025-11-04T13:41:24.076Z" },
{ url = "https://files.pythonhosted.org/packages/4e/bb/f7a190991ec9e3e0ba22e4993d8755bbc4a32925c0b5b42775c03e8148f9/pydantic_core-2.41.5-cp314-cp314-win_arm64.whl", hash = "sha256:e672ba74fbc2dc8eea59fb6d4aed6845e6905fc2a8afe93175d94a83ba2a01a0", size = 1977288, upload-time = "2025-11-04T13:41:26.33Z" },
{ url = "https://files.pythonhosted.org/packages/92/ed/77542d0c51538e32e15afe7899d79efce4b81eee631d99850edc2f5e9349/pydantic_core-2.41.5-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:8566def80554c3faa0e65ac30ab0932b9e3a5cd7f8323764303d468e5c37595a", size = 2120255, upload-time = "2025-11-04T13:41:28.569Z" },
{ url = "https://files.pythonhosted.org/packages/bb/3d/6913dde84d5be21e284439676168b28d8bbba5600d838b9dca99de0fad71/pydantic_core-2.41.5-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:b80aa5095cd3109962a298ce14110ae16b8c1aece8b72f9dafe81cf597ad80b3", size = 1863760, upload-time = "2025-11-04T13:41:31.055Z" },
{ url = "https://files.pythonhosted.org/packages/5a/f0/e5e6b99d4191da102f2b0eb9687aaa7f5bea5d9964071a84effc3e40f997/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3006c3dd9ba34b0c094c544c6006cc79e87d8612999f1a5d43b769b89181f23c", size = 1878092, upload-time = "2025-11-04T13:41:33.21Z" },
{ url = "https://files.pythonhosted.org/packages/71/48/36fb760642d568925953bcc8116455513d6e34c4beaa37544118c36aba6d/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:72f6c8b11857a856bcfa48c86f5368439f74453563f951e473514579d44aa612", size = 2053385, upload-time = "2025-11-04T13:41:35.508Z" },
{ url = "https://files.pythonhosted.org/packages/20/25/92dc684dd8eb75a234bc1c764b4210cf2646479d54b47bf46061657292a8/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5cb1b2f9742240e4bb26b652a5aeb840aa4b417c7748b6f8387927bc6e45e40d", size = 2218832, upload-time = "2025-11-04T13:41:37.732Z" },
{ url = "https://files.pythonhosted.org/packages/e2/09/f53e0b05023d3e30357d82eb35835d0f6340ca344720a4599cd663dca599/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bd3d54f38609ff308209bd43acea66061494157703364ae40c951f83ba99a1a9", size = 2327585, upload-time = "2025-11-04T13:41:40Z" },
{ url = "https://files.pythonhosted.org/packages/aa/4e/2ae1aa85d6af35a39b236b1b1641de73f5a6ac4d5a7509f77b814885760c/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ff4321e56e879ee8d2a879501c8e469414d948f4aba74a2d4593184eb326660", size = 2041078, upload-time = "2025-11-04T13:41:42.323Z" },
{ url = "https://files.pythonhosted.org/packages/cd/13/2e215f17f0ef326fc72afe94776edb77525142c693767fc347ed6288728d/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d0d2568a8c11bf8225044aa94409e21da0cb09dcdafe9ecd10250b2baad531a9", size = 2173914, upload-time = "2025-11-04T13:41:45.221Z" },
{ url = "https://files.pythonhosted.org/packages/02/7a/f999a6dcbcd0e5660bc348a3991c8915ce6599f4f2c6ac22f01d7a10816c/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_aarch64.whl", hash = "sha256:a39455728aabd58ceabb03c90e12f71fd30fa69615760a075b9fec596456ccc3", size = 2129560, upload-time = "2025-11-04T13:41:47.474Z" },
{ url = "https://files.pythonhosted.org/packages/3a/b1/6c990ac65e3b4c079a4fb9f5b05f5b013afa0f4ed6780a3dd236d2cbdc64/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_armv7l.whl", hash = "sha256:239edca560d05757817c13dc17c50766136d21f7cd0fac50295499ae24f90fdf", size = 2329244, upload-time = "2025-11-04T13:41:49.992Z" },
{ url = "https://files.pythonhosted.org/packages/d9/02/3c562f3a51afd4d88fff8dffb1771b30cfdfd79befd9883ee094f5b6c0d8/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_x86_64.whl", hash = "sha256:2a5e06546e19f24c6a96a129142a75cee553cc018ffee48a460059b1185f4470", size = 2331955, upload-time = "2025-11-04T13:41:54.079Z" },
{ url = "https://files.pythonhosted.org/packages/5c/96/5fb7d8c3c17bc8c62fdb031c47d77a1af698f1d7a406b0f79aaa1338f9ad/pydantic_core-2.41.5-cp314-cp314t-win32.whl", hash = "sha256:b4ececa40ac28afa90871c2cc2b9ffd2ff0bf749380fbdf57d165fd23da353aa", size = 1988906, upload-time = "2025-11-04T13:41:56.606Z" },
{ url = "https://files.pythonhosted.org/packages/22/ed/182129d83032702912c2e2d8bbe33c036f342cc735737064668585dac28f/pydantic_core-2.41.5-cp314-cp314t-win_amd64.whl", hash = "sha256:80aa89cad80b32a912a65332f64a4450ed00966111b6615ca6816153d3585a8c", size = 1981607, upload-time = "2025-11-04T13:41:58.889Z" },
{ url = "https://files.pythonhosted.org/packages/9f/ed/068e41660b832bb0b1aa5b58011dea2a3fe0ba7861ff38c4d4904c1c1a99/pydantic_core-2.41.5-cp314-cp314t-win_arm64.whl", hash = "sha256:35b44f37a3199f771c3eaa53051bc8a70cd7b54f333531c59e29fd4db5d15008", size = 1974769, upload-time = "2025-11-04T13:42:01.186Z" },
]
[[package]]
name = "pygments"
-version = "2.20.0"
+version = "2.19.2"
source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/c3/b2/bc9c9196916376152d655522fdcebac55e66de6603a76a02bca1b6414f6c/pygments-2.20.0.tar.gz", hash = "sha256:6757cd03768053ff99f3039c1a36d6c0aa0b263438fcab17520b30a303a82b5f", size = 4955991, upload-time = "2026-03-29T13:29:33.898Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/f4/7e/a72dd26f3b0f4f2bf1dd8923c85f7ceb43172af56d63c7383eb62b332364/pygments-2.20.0-py3-none-any.whl", hash = "sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176", size = 1231151, upload-time = "2026-03-29T13:29:30.038Z" },
+    { url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
]
[[package]]
name = "pyright"
version = "1.1.408"
name = "python-dotenv"
version = "1.2.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nodeenv" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/74/b2/5db700e52554b8f025faa9c3c624c59f1f6c8841ba81ab97641b54322f16/pyright-1.1.408.tar.gz", hash = "sha256:f28f2321f96852fa50b5829ea492f6adb0e6954568d1caa3f3af3a5f555eb684", size = 4400578, upload-time = "2026-01-08T08:07:38.795Z" }
sdist = { url = "https://files.pythonhosted.org/packages/82/ed/0301aeeac3e5353ef3d94b6ec08bbcabd04a72018415dcb29e588514bba8/python_dotenv-1.2.2.tar.gz", hash = "sha256:2c371a91fbd7ba082c2c1dc1f8bf89ca22564a087c2c287cd9b662adde799cf3", size = 50135, upload-time = "2026-03-01T16:00:26.196Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/0c/82/a2c93e32800940d9573fb28c346772a14778b84ba7524e691b324620ab89/pyright-1.1.408-py3-none-any.whl", hash = "sha256:090b32865f4fdb1e0e6cd82bf5618480d48eecd2eb2e70f960982a3d9a4c17c1", size = 6399144, upload-time = "2026-01-08T08:07:37.082Z" },
]
-[[package]]
-name = "pytest"
-version = "9.0.3"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "colorama", marker = "sys_platform == 'win32'" },
-    { name = "iniconfig" },
-    { name = "packaging" },
-    { name = "pluggy" },
-    { name = "pygments" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/7d/0d/549bd94f1a0a402dc8cf64563a117c0f3765662e2e668477624baeec44d5/pytest-9.0.3.tar.gz", hash = "sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c", size = 1572165, upload-time = "2026-04-07T17:16:18.027Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/d4/24/a372aaf5c9b7208e7112038812994107bc65a84cd00e0354a88c2c77a617/pytest-9.0.3-py3-none-any.whl", hash = "sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9", size = 375249, upload-time = "2026-04-07T17:16:16.13Z" },
-]
[[package]]
name = "pyyaml"
version = "6.0.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/05/8e/961c0007c59b8dd7729d542c61a4d537767a59645b82a0b521206e1e25c2/pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f", size = 130960, upload-time = "2025-09-25T21:33:16.546Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/11/0fd08f8192109f7169db964b5707a2f1e8b745d4e239b784a5a1dd80d1db/pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8", size = 181669, upload-time = "2025-09-25T21:32:23.673Z" },
{ url = "https://files.pythonhosted.org/packages/b1/16/95309993f1d3748cd644e02e38b75d50cbc0d9561d21f390a76242ce073f/pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1", size = 173252, upload-time = "2025-09-25T21:32:25.149Z" },
{ url = "https://files.pythonhosted.org/packages/50/31/b20f376d3f810b9b2371e72ef5adb33879b25edb7a6d072cb7ca0c486398/pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c", size = 767081, upload-time = "2025-09-25T21:32:26.575Z" },
{ url = "https://files.pythonhosted.org/packages/49/1e/a55ca81e949270d5d4432fbbd19dfea5321eda7c41a849d443dc92fd1ff7/pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5", size = 841159, upload-time = "2025-09-25T21:32:27.727Z" },
{ url = "https://files.pythonhosted.org/packages/74/27/e5b8f34d02d9995b80abcef563ea1f8b56d20134d8f4e5e81733b1feceb2/pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6", size = 801626, upload-time = "2025-09-25T21:32:28.878Z" },
{ url = "https://files.pythonhosted.org/packages/f9/11/ba845c23988798f40e52ba45f34849aa8a1f2d4af4b798588010792ebad6/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6", size = 753613, upload-time = "2025-09-25T21:32:30.178Z" },
{ url = "https://files.pythonhosted.org/packages/3d/e0/7966e1a7bfc0a45bf0a7fb6b98ea03fc9b8d84fa7f2229e9659680b69ee3/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be", size = 794115, upload-time = "2025-09-25T21:32:31.353Z" },
{ url = "https://files.pythonhosted.org/packages/de/94/980b50a6531b3019e45ddeada0626d45fa85cbe22300844a7983285bed3b/pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26", size = 137427, upload-time = "2025-09-25T21:32:32.58Z" },
{ url = "https://files.pythonhosted.org/packages/97/c9/39d5b874e8b28845e4ec2202b5da735d0199dbe5b8fb85f91398814a9a46/pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c", size = 154090, upload-time = "2025-09-25T21:32:33.659Z" },
{ url = "https://files.pythonhosted.org/packages/73/e8/2bdf3ca2090f68bb3d75b44da7bbc71843b19c9f2b9cb9b0f4ab7a5a4329/pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb", size = 140246, upload-time = "2025-09-25T21:32:34.663Z" },
{ url = "https://files.pythonhosted.org/packages/9d/8c/f4bd7f6465179953d3ac9bc44ac1a8a3e6122cf8ada906b4f96c60172d43/pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac", size = 181814, upload-time = "2025-09-25T21:32:35.712Z" },
{ url = "https://files.pythonhosted.org/packages/bd/9c/4d95bb87eb2063d20db7b60faa3840c1b18025517ae857371c4dd55a6b3a/pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310", size = 173809, upload-time = "2025-09-25T21:32:36.789Z" },
{ url = "https://files.pythonhosted.org/packages/92/b5/47e807c2623074914e29dabd16cbbdd4bf5e9b2db9f8090fa64411fc5382/pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7", size = 766454, upload-time = "2025-09-25T21:32:37.966Z" },
{ url = "https://files.pythonhosted.org/packages/02/9e/e5e9b168be58564121efb3de6859c452fccde0ab093d8438905899a3a483/pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788", size = 836355, upload-time = "2025-09-25T21:32:39.178Z" },
{ url = "https://files.pythonhosted.org/packages/88/f9/16491d7ed2a919954993e48aa941b200f38040928474c9e85ea9e64222c3/pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5", size = 794175, upload-time = "2025-09-25T21:32:40.865Z" },
{ url = "https://files.pythonhosted.org/packages/dd/3f/5989debef34dc6397317802b527dbbafb2b4760878a53d4166579111411e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764", size = 755228, upload-time = "2025-09-25T21:32:42.084Z" },
{ url = "https://files.pythonhosted.org/packages/d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35", size = 789194, upload-time = "2025-09-25T21:32:43.362Z" },
{ url = "https://files.pythonhosted.org/packages/23/20/bb6982b26a40bb43951265ba29d4c246ef0ff59c9fdcdf0ed04e0687de4d/pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac", size = 156429, upload-time = "2025-09-25T21:32:57.844Z" },
{ url = "https://files.pythonhosted.org/packages/f4/f4/a4541072bb9422c8a883ab55255f918fa378ecf083f5b85e87fc2b4eda1b/pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3", size = 143912, upload-time = "2025-09-25T21:32:59.247Z" },
{ url = "https://files.pythonhosted.org/packages/7c/f9/07dd09ae774e4616edf6cda684ee78f97777bdd15847253637a6f052a62f/pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3", size = 189108, upload-time = "2025-09-25T21:32:44.377Z" },
{ url = "https://files.pythonhosted.org/packages/4e/78/8d08c9fb7ce09ad8c38ad533c1191cf27f7ae1effe5bb9400a46d9437fcf/pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba", size = 183641, upload-time = "2025-09-25T21:32:45.407Z" },
{ url = "https://files.pythonhosted.org/packages/7b/5b/3babb19104a46945cf816d047db2788bcaf8c94527a805610b0289a01c6b/pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c", size = 831901, upload-time = "2025-09-25T21:32:48.83Z" },
{ url = "https://files.pythonhosted.org/packages/8b/cc/dff0684d8dc44da4d22a13f35f073d558c268780ce3c6ba1b87055bb0b87/pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702", size = 861132, upload-time = "2025-09-25T21:32:50.149Z" },
{ url = "https://files.pythonhosted.org/packages/b1/5e/f77dc6b9036943e285ba76b49e118d9ea929885becb0a29ba8a7c75e29fe/pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c", size = 839261, upload-time = "2025-09-25T21:32:51.808Z" },
{ url = "https://files.pythonhosted.org/packages/ce/88/a9db1376aa2a228197c58b37302f284b5617f56a5d959fd1763fb1675ce6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065", size = 805272, upload-time = "2025-09-25T21:32:52.941Z" },
{ url = "https://files.pythonhosted.org/packages/da/92/1446574745d74df0c92e6aa4a7b0b3130706a4142b2d1a5869f2eaa423c6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65", size = 829923, upload-time = "2025-09-25T21:32:54.537Z" },
{ url = "https://files.pythonhosted.org/packages/f0/7a/1c7270340330e575b92f397352af856a8c06f230aa3e76f86b39d01b416a/pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9", size = 174062, upload-time = "2025-09-25T21:32:55.767Z" },
{ url = "https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" },
]
[[package]]
@@ -380,27 +339,27 @@ wheels = [
[[package]]
name = "ruff"
-version = "0.15.10"
+version = "0.15.8"
source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/e7/d9/aa3f7d59a10ef6b14fe3431706f854dbf03c5976be614a9796d36326810c/ruff-0.15.10.tar.gz", hash = "sha256:d1f86e67ebfdef88e00faefa1552b5e510e1d35f3be7d423dc7e84e63788c94e", size = 4631728, upload-time = "2026-04-09T14:06:09.884Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/14/b0/73cf7550861e2b4824950b8b52eebdcc5adc792a00c514406556c5b80817/ruff-0.15.8.tar.gz", hash = "sha256:995f11f63597ee362130d1d5a327a87cb6f3f5eae3094c620bcc632329a4d26e", size = 4610921, upload-time = "2026-03-26T18:39:38.675Z" }
wheels = [
-{ url = "https://files.pythonhosted.org/packages/eb/00/a1c2fdc9939b2c03691edbda290afcd297f1f389196172826b03d6b6a595/ruff-0.15.10-py3-none-linux_armv6l.whl", hash = "sha256:0744e31482f8f7d0d10a11fcbf897af272fefdfcb10f5af907b18c2813ff4d5f", size = 10563362, upload-time = "2026-04-09T14:06:21.189Z" },
-{ url = "https://files.pythonhosted.org/packages/5c/15/006990029aea0bebe9d33c73c3e28c80c391ebdba408d1b08496f00d422d/ruff-0.15.10-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b1e7c16ea0ff5a53b7c2df52d947e685973049be1cdfe2b59a9c43601897b22e", size = 10951122, upload-time = "2026-04-09T14:06:02.236Z" },
-{ url = "https://files.pythonhosted.org/packages/f2/c0/4ac978fe874d0618c7da647862afe697b281c2806f13ce904ad652fa87e4/ruff-0.15.10-py3-none-macosx_11_0_arm64.whl", hash = "sha256:93cc06a19e5155b4441dd72808fdf84290d84ad8a39ca3b0f994363ade4cebb1", size = 10314005, upload-time = "2026-04-09T14:06:00.026Z" },
-{ url = "https://files.pythonhosted.org/packages/da/73/c209138a5c98c0d321266372fc4e33ad43d506d7e5dd817dd89b60a8548f/ruff-0.15.10-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83e1dd04312997c99ea6965df66a14fb4f03ba978564574ffc68b0d61fd3989e", size = 10643450, upload-time = "2026-04-09T14:05:42.137Z" },
-{ url = "https://files.pythonhosted.org/packages/ec/76/0deec355d8ec10709653635b1f90856735302cb8e149acfdf6f82a5feb70/ruff-0.15.10-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8154d43684e4333360fedd11aaa40b1b08a4e37d8ffa9d95fee6fa5b37b6fab1", size = 10379597, upload-time = "2026-04-09T14:05:49.984Z" },
-{ url = "https://files.pythonhosted.org/packages/dc/be/86bba8fc8798c081e28a4b3bb6d143ccad3fd5f6f024f02002b8f08a9fa3/ruff-0.15.10-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ab88715f3a6deb6bde6c227f3a123410bec7b855c3ae331b4c006189e895cef", size = 11146645, upload-time = "2026-04-09T14:06:12.246Z" },
-{ url = "https://files.pythonhosted.org/packages/a8/89/140025e65911b281c57be1d385ba1d932c2366ca88ae6663685aed8d4881/ruff-0.15.10-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a768ff5969b4f44c349d48edf4ab4f91eddb27fd9d77799598e130fb628aa158", size = 12030289, upload-time = "2026-04-09T14:06:04.776Z" },
-{ url = "https://files.pythonhosted.org/packages/88/de/ddacca9545a5e01332567db01d44bd8cf725f2db3b3d61a80550b48308ea/ruff-0.15.10-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ee3ef42dab7078bda5ff6a1bcba8539e9857deb447132ad5566a038674540d0", size = 11496266, upload-time = "2026-04-09T14:05:55.485Z" },
-{ url = "https://files.pythonhosted.org/packages/bc/bb/7ddb00a83760ff4a83c4e2fc231fd63937cc7317c10c82f583302e0f6586/ruff-0.15.10-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:51cb8cc943e891ba99989dd92d61e29b1d231e14811db9be6440ecf25d5c1609", size = 11256418, upload-time = "2026-04-09T14:05:57.69Z" },
-{ url = "https://files.pythonhosted.org/packages/dc/8d/55de0d35aacf6cd50b6ee91ee0f291672080021896543776f4170fc5c454/ruff-0.15.10-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:e59c9bdc056a320fb9ea1700a8d591718b8faf78af065484e801258d3a76bc3f", size = 11288416, upload-time = "2026-04-09T14:05:44.695Z" },
-{ url = "https://files.pythonhosted.org/packages/68/cf/9438b1a27426ec46a80e0a718093c7f958ef72f43eb3111862949ead3cc1/ruff-0.15.10-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:136c00ca2f47b0018b073f28cb5c1506642a830ea941a60354b0e8bc8076b151", size = 10621053, upload-time = "2026-04-09T14:05:52.782Z" },
-{ url = "https://files.pythonhosted.org/packages/4c/50/e29be6e2c135e9cd4cb15fbade49d6a2717e009dff3766dd080fcb82e251/ruff-0.15.10-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:8b80a2f3c9c8a950d6237f2ca12b206bccff626139be9fa005f14feb881a1ae8", size = 10378302, upload-time = "2026-04-09T14:06:14.361Z" },
-{ url = "https://files.pythonhosted.org/packages/18/2f/e0b36a6f99c51bb89f3a30239bc7bf97e87a37ae80aa2d6542d6e5150364/ruff-0.15.10-py3-none-musllinux_1_2_i686.whl", hash = "sha256:e3e53c588164dc025b671c9df2462429d60357ea91af7e92e9d56c565a9f1b07", size = 10850074, upload-time = "2026-04-09T14:06:16.581Z" },
-{ url = "https://files.pythonhosted.org/packages/11/08/874da392558ce087a0f9b709dc6ec0d60cbc694c1c772dab8d5f31efe8cb/ruff-0.15.10-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:b0c52744cf9f143a393e284125d2576140b68264a93c6716464e129a3e9adb48", size = 11358051, upload-time = "2026-04-09T14:06:18.948Z" },
-{ url = "https://files.pythonhosted.org/packages/e4/46/602938f030adfa043e67112b73821024dc79f3ab4df5474c25fa4c1d2d14/ruff-0.15.10-py3-none-win32.whl", hash = "sha256:d4272e87e801e9a27a2e8df7b21011c909d9ddd82f4f3281d269b6ba19789ca5", size = 10588964, upload-time = "2026-04-09T14:06:07.14Z" },
-{ url = "https://files.pythonhosted.org/packages/25/b6/261225b875d7a13b33a6d02508c39c28450b2041bb01d0f7f1a83d569512/ruff-0.15.10-py3-none-win_amd64.whl", hash = "sha256:28cb32d53203242d403d819fd6983152489b12e4a3ae44993543d6fe62ab42ed", size = 11745044, upload-time = "2026-04-09T14:05:39.473Z" },
-{ url = "https://files.pythonhosted.org/packages/58/ed/dea90a65b7d9e69888890fb14c90d7f51bf0c1e82ad800aeb0160e4bacfd/ruff-0.15.10-py3-none-win_arm64.whl", hash = "sha256:601d1610a9e1f1c2165a4f561eeaa2e2ea1e97f3287c5aa258d3dab8b57c6188", size = 11035607, upload-time = "2026-04-09T14:05:47.593Z" },
+{ url = "https://files.pythonhosted.org/packages/4a/92/c445b0cd6da6e7ae51e954939cb69f97e008dbe750cfca89b8cedc081be7/ruff-0.15.8-py3-none-linux_armv6l.whl", hash = "sha256:cbe05adeba76d58162762d6b239c9056f1a15a55bd4b346cfd21e26cd6ad7bc7", size = 10527394, upload-time = "2026-03-26T18:39:41.566Z" },
+{ url = "https://files.pythonhosted.org/packages/eb/92/f1c662784d149ad1414cae450b082cf736430c12ca78367f20f5ed569d65/ruff-0.15.8-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:d3e3d0b6ba8dca1b7ef9ab80a28e840a20070c4b62e56d675c24f366ef330570", size = 10905693, upload-time = "2026-03-26T18:39:30.364Z" },
+{ url = "https://files.pythonhosted.org/packages/ca/f2/7a631a8af6d88bcef997eb1bf87cc3da158294c57044aafd3e17030613de/ruff-0.15.8-py3-none-macosx_11_0_arm64.whl", hash = "sha256:6ee3ae5c65a42f273f126686353f2e08ff29927b7b7e203b711514370d500de3", size = 10323044, upload-time = "2026-03-26T18:39:33.37Z" },
+{ url = "https://files.pythonhosted.org/packages/67/18/1bf38e20914a05e72ef3b9569b1d5c70a7ef26cd188d69e9ca8ef588d5bf/ruff-0.15.8-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fdce027ada77baa448077ccc6ebb2fa9c3c62fd110d8659d601cf2f475858d94", size = 10629135, upload-time = "2026-03-26T18:39:44.142Z" },
+{ url = "https://files.pythonhosted.org/packages/d2/e9/138c150ff9af60556121623d41aba18b7b57d95ac032e177b6a53789d279/ruff-0.15.8-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:12e617fc01a95e5821648a6df341d80456bd627bfab8a829f7cfc26a14a4b4a3", size = 10348041, upload-time = "2026-03-26T18:39:52.178Z" },
+{ url = "https://files.pythonhosted.org/packages/02/f1/5bfb9298d9c323f842c5ddeb85f1f10ef51516ac7a34ba446c9347d898df/ruff-0.15.8-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:432701303b26416d22ba696c39f2c6f12499b89093b61360abc34bcc9bf07762", size = 11121987, upload-time = "2026-03-26T18:39:55.195Z" },
+{ url = "https://files.pythonhosted.org/packages/10/11/6da2e538704e753c04e8d86b1fc55712fdbdcc266af1a1ece7a51fff0d10/ruff-0.15.8-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d910ae974b7a06a33a057cb87d2a10792a3b2b3b35e33d2699fdf63ec8f6b17a", size = 11951057, upload-time = "2026-03-26T18:39:19.18Z" },
+{ url = "https://files.pythonhosted.org/packages/83/f0/c9208c5fd5101bf87002fed774ff25a96eea313d305f1e5d5744698dc314/ruff-0.15.8-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2033f963c43949d51e6fdccd3946633c6b37c484f5f98c3035f49c27395a8ab8", size = 11464613, upload-time = "2026-03-26T18:40:06.301Z" },
+{ url = "https://files.pythonhosted.org/packages/f8/22/d7f2fabdba4fae9f3b570e5605d5eb4500dcb7b770d3217dca4428484b17/ruff-0.15.8-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f29b989a55572fb885b77464cf24af05500806ab4edf9a0fd8977f9759d85b1", size = 11257557, upload-time = "2026-03-26T18:39:57.972Z" },
+{ url = "https://files.pythonhosted.org/packages/71/8c/382a9620038cf6906446b23ce8632ab8c0811b8f9d3e764f58bedd0c9a6f/ruff-0.15.8-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:ac51d486bf457cdc985a412fb1801b2dfd1bd8838372fc55de64b1510eff4bec", size = 11169440, upload-time = "2026-03-26T18:39:22.205Z" },
+{ url = "https://files.pythonhosted.org/packages/4d/0d/0994c802a7eaaf99380085e4e40c845f8e32a562e20a38ec06174b52ef24/ruff-0.15.8-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:c9861eb959edab053c10ad62c278835ee69ca527b6dcd72b47d5c1e5648964f6", size = 10605963, upload-time = "2026-03-26T18:39:46.682Z" },
+{ url = "https://files.pythonhosted.org/packages/19/aa/d624b86f5b0aad7cef6bbf9cd47a6a02dfdc4f72c92a337d724e39c9d14b/ruff-0.15.8-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:8d9a5b8ea13f26ae90838afc33f91b547e61b794865374f114f349e9036835fb", size = 10357484, upload-time = "2026-03-26T18:39:49.176Z" },
+{ url = "https://files.pythonhosted.org/packages/35/c3/e0b7835d23001f7d999f3895c6b569927c4d39912286897f625736e1fd04/ruff-0.15.8-py3-none-musllinux_1_2_i686.whl", hash = "sha256:c2a33a529fb3cbc23a7124b5c6ff121e4d6228029cba374777bd7649cc8598b8", size = 10830426, upload-time = "2026-03-26T18:40:03.702Z" },
+{ url = "https://files.pythonhosted.org/packages/f0/51/ab20b322f637b369383adc341d761eaaa0f0203d6b9a7421cd6e783d81b9/ruff-0.15.8-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:75e5cd06b1cf3f47a3996cfc999226b19aa92e7cce682dcd62f80d7035f98f49", size = 11345125, upload-time = "2026-03-26T18:39:27.799Z" },
+{ url = "https://files.pythonhosted.org/packages/37/e6/90b2b33419f59d0f2c4c8a48a4b74b460709a557e8e0064cf33ad894f983/ruff-0.15.8-py3-none-win32.whl", hash = "sha256:bc1f0a51254ba21767bfa9a8b5013ca8149dcf38092e6a9eb704d876de94dc34", size = 10571959, upload-time = "2026-03-26T18:39:36.117Z" },
+{ url = "https://files.pythonhosted.org/packages/1f/a2/ef467cb77099062317154c63f234b8a7baf7cb690b99af760c5b68b9ee7f/ruff-0.15.8-py3-none-win_amd64.whl", hash = "sha256:04f79eff02a72db209d47d665ba7ebcad609d8918a134f86cb13dd132159fc89", size = 11743893, upload-time = "2026-03-26T18:39:25.01Z" },
+{ url = "https://files.pythonhosted.org/packages/15/e2/77be4fff062fa78d9b2a4dea85d14785dac5f1d0c1fb58ed52331f0ebe28/ruff-0.15.8-py3-none-win_arm64.whl", hash = "sha256:cf891fa8e3bb430c0e7fac93851a5978fc99c8fa2c053b57b118972866f8e5f2", size = 11048175, upload-time = "2026-03-26T18:40:01.06Z" },
]
[[package]]
@@ -412,6 +371,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
]
+[[package]]
+name = "typing-inspection"
+version = "0.4.2"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+{ name = "typing-extensions" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/55/e3/70399cb7dd41c10ac53367ae42139cf4b1ca5f36bb3dc6c9d33acdb43655/typing_inspection-0.4.2.tar.gz", hash = "sha256:ba561c48a67c5958007083d386c3295464928b01faa735ab8547c5692e87f464", size = 75949, upload-time = "2025-10-01T02:14:41.687Z" }
+wheels = [
+{ url = "https://files.pythonhosted.org/packages/dc/9b/47798a6c91d8bdb567fe2698fe81e0c6b7cb7ef4d13da4114b41d239f65d/typing_inspection-0.4.2-py3-none-any.whl", hash = "sha256:4ed1cacbdc298c220f1bd249ed5287caa16f34d44ef4e9c3d0cbad5b521545e7", size = 14611, upload-time = "2025-10-01T02:14:40.154Z" },
+]
[[package]]
name = "win32-setctime"
version = "1.2.0"