Compare commits: `1ed51fdbdb..master` (46 commits)
`.gitignore`:

````diff
@@ -9,3 +9,7 @@ wheels/
 .*
 !.gitignore
 !.python-version
+
+TODO.md
+PENDING.md
+SOLVED.md
````
License (MIT → BSD 3-Clause):

````diff
@@ -1,7 +1,11 @@
-Copyright 2026 Uyanide me@uyani.de
+Copyright 2026 Uyanide
 
-Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
 
-The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
 
-THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
````
README:

````diff
@@ -1,26 +1,40 @@
 > [!WARNING]
 >
-> This project is provided for educational and experimental purposes only. It is not intended for production or commercial use and may violate the terms of service of third‑party music platforms. Use of this software is at your own risk; the authors provide no warranties and accept no liability for any consequences arising from its use.
+> This project is primarily provided for educational and experimental purposes.
+> It is not yet ready for production or commercial use and may violate the terms
+> of service (ToS) of third‑party music platforms. Use of this software is at
+> your own risk; the authors provide no warranties and accept no liability for
+> any consequences arising from its use.
 
 # LRX-CLI
 
-A CLI tool for fetching LRC lyrics on Linux. Automatically detects the currently playing track via MPRIS/DBus and retrieves the best-matching lyrics from multiple sources, ranked by confidence scoring.
+A CLI tool for fetching LRC lyrics on Linux. Automatically detects the currently
+playing track via MPRIS/DBus and retrieves the best-matching lyrics from
+multiple sources, ranked by confidence scoring.
 
 ## Sources
 
-Sources are queried in order. High-confidence results (exact match or manual insert) terminate the pipeline early; otherwise all sources are tried and the highest-confidence result wins.
+Sources are queried in order. High-confidence results (exact match or manual
+insert) terminate the pipeline early; otherwise all sources are tried and the
+highest-confidence result wins.
 
 1. **Local** — sidecar `.lrc` files or embedded audio metadata (FLAC, MP3)
 2. **Cache Search** — fuzzy cross-album lookup in local cache
-3. **Spotify** — synced lyrics via Spotify's API (requires `SPOTIFY_SP_DC` and Spotify trackid)
-4. **LRCLIB** — exact match from [lrclib.net](https://lrclib.net) (requires full metadata)
-5. **Musixmatch (Spotify)** — Musixmatch API with Spotify trackid (requires `MUSIXMATCH_USERTOKEN` and Spotify trackid)
+3. **Spotify** — synced lyrics via Spotify's API
+   (requires `credentials.spotify_sp_dc` and Spotify trackid)
+4. **LRCLIB** — exact match from [lrclib.net](https://lrclib.net)
+   (requires full metadata)
+5. **Musixmatch (Spotify)** — Musixmatch API with Spotify trackid
+   (requires Spotify trackid)
 6. **LRCLIB Search** — fuzzy search from lrclib.net (requires at least a title)
-7. **Musixmatch** — Musixmatch API with metadata search (requires `MUSIXMATCH_USERTOKEN` and at least a title)
+7. **Musixmatch** — Musixmatch API with metadata search (requires at least a title)
 8. **Netease** — Netease Cloud Music public API
-9. **QQ Music** — QQ Music via self-hosted API proxy (requires `QQ_MUSIC_API_URL` that provides the same interface as [tooplick/qq-music-api](https://github.com/tooplick/qq-music-api))
+9. **QQ Music** — QQ Music via self-hosted API proxy
+   (requires `credentials.qq_music_api_url`; compatible with [tooplick/qq-music-api](https://github.com/tooplick/qq-music-api))
 
-> I'm aware that Spotify's lyrics are provided by Musixmatch, but the fact is that Musixmatch's own search will yield different (and more) results than Spotify's, so I treat them as separate sources.
+> I'm aware that Spotify's lyrics are provided by Musixmatch, but the fact is
+> that Musixmatch's own search will yield different (and more) results than
+> Spotify's, so I treat them as separate sources.
 
 ## Usage
 
````
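For illustration, the early-exit/best-of behaviour described in the Sources section can be sketched in a few lines of Python. Everything here (`Result`, `run_pipeline`, the threshold of 100) is a hypothetical stand-in, not LRX-CLI's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Result:
    text: str
    confidence: float  # 0-100

HIGH_CONFIDENCE = 100  # hypothetical early-exit threshold

def run_pipeline(sources: list[Callable[[], Optional[Result]]]) -> Optional[Result]:
    """Query sources in order; exit early on a high-confidence hit,
    otherwise keep the best result seen so far."""
    best: Optional[Result] = None
    for fetch in sources:
        result = fetch()
        if result is None:
            continue
        if result.confidence >= HIGH_CONFIDENCE:
            return result  # exact match: terminate the pipeline early
        if best is None or result.confidence > best.confidence:
            best = result
    return best

# Example: the second source is an exact match, so the third is never queried.
calls = []
def make(name, conf):
    def fetch():
        calls.append(name)
        return Result(name, conf)
    return fetch

best = run_pipeline([make("local", 60), make("lrclib", 100), make("netease", 80)])
```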
````diff
@@ -32,10 +46,10 @@ See `lrx --help` for full command reference. Common use cases:
 lrx fetch
 ```
 
-targeting a specific player and a source to fetch from:
+targeting a specific player and source:
 
 ```bash
-lrx --player mpd fetch --method lrclib-search
+lrx fetch --player mpd --method lrclib-search
 ```
 
 - Search by metadata (bypasses MPRIS):
````
````diff
@@ -59,42 +73,121 @@ See `lrx --help` for full command reference. Common use cases:
 lrx export --output /path/to/lyrics.lrc
 ```
 
+- Watch active player and stream lyrics continuously to stdout:
+
+```bash
+lrx watch pipe
+lrx watch pipe --before 1 --after 2  # show context lines
+```
+
+Control a running watch session:
+
+```bash
+lrx watch ctl status       # print session status as JSON
+lrx watch ctl offset +200  # shift lyrics forward 200 ms
+lrx watch ctl offset -150
+```
+
 - Cache management:
 
 ```bash
-lrx cache stats  # statistics with source×status table and confidence distribution
+lrx cache stats                   # statistics
 lrx cache query                   # inspect cache entries for current track
 lrx cache clear                   # clear cache of current track
 lrx cache clear --all             # clear entire cache
 lrx cache confidence spotify 100  # manually set confidence for a source
 ```
 
-## Configuration
-
-Set credentials via environment variable or `.env` file:
-
-- `~/.config/lrx/.env` — user-level
-- `.env` in working directory — project-local
-- Shell environment — highest priority
-
-```env
-SPOTIFY_SP_DC=your_cookie_value
-MUSIXMATCH_USERTOKEN=your_musixmatch_usertoken
-QQ_MUSIC_API_URL=https://api.example.com
-PREFERRED_PLAYER=spotify
-```
-
-- `SPOTIFY_SP_DC` — required for Spotify source. Defaults to empty (disabled Spotify source).
-- `MUSIXMATCH_USERTOKEN` — required for Musixmatch sources ([Curators Settings Page](https://curators.musixmatch.com/settings) -> Login (if required) -> "Copy debug info")
-- `QQ_MUSIC_API_URL` — required for QQ Music source. Defaults to empty (disabled QQ Music source).
-- `PREFERRED_PLAYER` — preferred MPRIS player when multiple are active. Defaults to `spotify`. Only used when no `--player` flag is given and more than one player (or none of them) is currently playing.
-
 Shell completion (zsh/fish/bash):
 
 ```bash
 lrx --install-completion
 ```
 
+## Configuration
+
+Configuration is read from `~/.config/lrx-cli/config.toml`. The file is
+optional; all values have defaults. Unknown keys are rejected with an error.
+
+```toml
+[general]
+preferred_player = ""  # preferred MPRIS player when multiple are active
+player_blacklist = ["firefox", "zen", "chrome", "chromium", "vivaldi", "edge", "opera", "mpv"]  # bypassed by --player/-p
+http_timeout = 10.0    # seconds
+
+[credentials]
+spotify_sp_dc = ""         # required for Spotify source
+musixmatch_usertoken = ""  # optional; anonymous token fetched if empty
+qq_music_api_url = ""      # required for QQ Music source
+
+[watch]
+debounce_ms = 400             # ms to wait after a track change before fetching
+calibration_interval_s = 3.0  # seconds between full MPRIS position recalibrations
+position_tick_ms = 50         # ms between local position ticks
+socket_path = ""              # Unix socket path; defaults to <cache_dir>/watch.sock
+```
+
+**Credentials:**
+
+- `spotify_sp_dc` — `SP_DC` cookie from a logged-in Spotify web session. Required
+  for the Spotify source; leave empty to disable it.
+- `musixmatch_usertoken` — found at
+  [Curators Settings Page](https://curators.musixmatch.com/settings) → Login → "Copy debug info".
+  If empty, an anonymous token is fetched at runtime, which is more likely to
+  hit rate limits.
+- `qq_music_api_url` — base URL of a self-hosted
+  [qq-music-api](https://github.com/tooplick/qq-music-api)-compatible instance. Required
+  for the QQ Music source; leave empty to disable it.
+
+## Development
+
+Clone this repository:
+
+```bash
+git clone https://github.com/Uyanide/lrx-cli.git
+cd lrx-cli
+```
+
+Create a virtual environment and install dependencies (for example, using uv):
+
+```bash
+uv venv .venv
+uv sync
+```
+
+Run tests (without network access):
+
+```bash
+uv run poe test
+```
+
+Run tests including **REAL EXTERNAL** API calls. Some of them will be skipped
+if the required credentials are not configured as [above](#configuration). This
+is useful to verify whether the lyric sources are still valid and working as
+expected:
+
+```bash
+uv run poe test-api
+```
+
+Other unified tasks:
+
+```bash
+uv run poe fmt   # ruff format
+uv run poe lint  # ruff check + pyright
+```
+
+Run the CLI:
+
+```bash
+uv run lrx --help
+```
+
+Install to user level (optional):
+
+```bash
+uv tool install .
+```
+
 ## Credits
 
 - [lrclib.net](https://lrclib.net)
````
````diff
@@ -102,4 +195,6 @@ lrx --install-completion
 - [librelyrics-spotify](https://github.com/libre-lyrics/librelyrics-spotify)
 - [NeteaseCloudMusicAPI](https://www.npmjs.com/package/NeteaseCloudMusicApi?activeTab=readme)
 - [qq-music-api](https://github.com/tooplick/qq-music-api)
+- [LyricsMPRIS-Rust](https://github.com/BEST8OY/LyricsMPRIS-Rust)
+- [onetagger](https://github.com/Marekkon5/onetagger)
 - [Rise Media Player](https://github.com/theimpactfulcompany/Rise-Media-Player)
````
Package `__main__` entry point (removed):

````diff
@@ -1,4 +0,0 @@
-from lrx_cli.cli import run
-
-if __name__ == "__main__":
-    run()
````
SQLite lyric cache module (removed, `@@ -1,519 +0,0 @@`; shown with indentation restored, truncated in the original diff view partway through `stats()`):

```python
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 10:18:03
Description: SQLite-based lyric cache with per-source storage and TTL expiration
"""

import sqlite3
import hashlib
import time
from typing import Optional
from loguru import logger

from .lrc import LRCData
from .normalize import normalize_for_match as _normalize_for_match
from .normalize import normalize_artist as _normalize_artist
from .config import (
    DURATION_TOLERANCE_MS,
    LEGACY_CONFIDENCE_SYNCED,
    LEGACY_CONFIDENCE_UNSYNCED,
)
from .models import TrackMeta, LyricResult, CacheStatus


def _generate_key(track: TrackMeta, source: str) -> str:
    """Generate a unique cache key from track metadata and source.

    The key is scoped by source so that different fetchers can cache
    independently for the same track (e.g. Spotify synced vs Netease unsynced).
    """
    # Spotify tracks always use their track ID as the primary identifier
    if track.trackid and source == "spotify":
        return f"spotify:{track.trackid}"

    parts = []
    if track.artist:
        parts.append(track.artist)
    if track.title:
        parts.append(track.title)
    if track.album:
        parts.append(track.album)
    if track.length:
        parts.append(str(track.length))

    # Fall back to URL for local files
    if not parts and track.url:
        return f"{source}:url:{track.url}"

    if not parts:
        raise ValueError("Insufficient metadata to generate cache key")

    raw = "|".join(parts)
    digest = hashlib.sha256(raw.encode()).hexdigest()
    return f"{source}:{digest}"


class CacheEngine:
    def __init__(self, db_path: str):
        self.db_path = db_path
        self._init_db()

    def _init_db(self) -> None:
        """Create or migrate the cache table."""
        with sqlite3.connect(self.db_path) as conn:
            conn.execute("""
                CREATE TABLE IF NOT EXISTS cache (
                    key TEXT PRIMARY KEY,
                    source TEXT NOT NULL,
                    status TEXT NOT NULL,
                    lyrics TEXT,
                    created_at INTEGER NOT NULL,
                    expires_at INTEGER,
                    artist TEXT,
                    title TEXT,
                    album TEXT
                )
            """)
            # Migrations
            cols = {r[1] for r in conn.execute("PRAGMA table_info(cache)").fetchall()}
            if "length" not in cols:
                conn.execute("ALTER TABLE cache ADD COLUMN length INTEGER")
            if "confidence" not in cols:
                conn.execute("ALTER TABLE cache ADD COLUMN confidence REAL")
            conn.commit()

    # Read

    def get(self, track: TrackMeta, source: str) -> Optional[LyricResult]:
        """Look up a cached result for *track* from *source*.

        Returns None on cache miss or expiration.
        """
        try:
            key = _generate_key(track, source)
        except ValueError:
            return None

        with sqlite3.connect(self.db_path) as conn:
            row = conn.execute(
                "SELECT status, lyrics, source, expires_at, length, confidence FROM cache WHERE key = ?",
                (key,),
            ).fetchone()

            if not row:
                logger.debug(f"Cache miss: {source} / {track.display_name()}")
                return None

            status_str, lyrics, src, expires_at, cached_length, confidence = row

            # Check TTL expiration
            if expires_at and expires_at < int(time.time()):
                logger.debug(f"Cache expired: {source} / {track.display_name()}")
                conn.execute("DELETE FROM cache WHERE key = ?", (key,))
                conn.commit()
                return None

            # Backfill length if the cached row is missing it
            if cached_length is None and track.length is not None:
                conn.execute(
                    "UPDATE cache SET length = ? WHERE key = ?",
                    (track.length, key),
                )
                conn.commit()

            remaining = expires_at - int(time.time()) if expires_at else None
            logger.debug(
                f"Cache hit: {source} / {track.display_name()} "
                f"[{status_str}, ttl={remaining}s]"
            )
            status = CacheStatus(status_str)
            if confidence is None:
                if status == CacheStatus.SUCCESS_SYNCED:
                    confidence = LEGACY_CONFIDENCE_SYNCED
                elif status == CacheStatus.SUCCESS_UNSYNCED:
                    confidence = LEGACY_CONFIDENCE_UNSYNCED
                else:
                    confidence = 0.0  # negative statuses: no confidence

            return LyricResult(
                status=status,
                lyrics=LRCData(lyrics) if lyrics else None,
                source=src,
                ttl=remaining,
                confidence=confidence,
            )

    def get_best(self, track: TrackMeta, sources: list[str]) -> Optional[LyricResult]:
        """Return the best cached result across *sources* by confidence.

        Skips negative statuses (NOT_FOUND, NETWORK_ERROR) — those are only
        consulted per-source to avoid redundant fetches.
        """
        best: Optional[LyricResult] = None
        for src in sources:
            cached = self.get(track, src)
            if not cached:
                continue
            if cached.status not in (
                CacheStatus.SUCCESS_SYNCED,
                CacheStatus.SUCCESS_UNSYNCED,
            ):
                continue
            if best is None:
                best = cached
            elif cached.confidence > best.confidence:
                best = cached
            elif (
                cached.confidence == best.confidence
                and cached.status == CacheStatus.SUCCESS_SYNCED
                and best.status != CacheStatus.SUCCESS_SYNCED
            ):
                best = cached
        return best

    # Write

    def set(
        self,
        track: TrackMeta,
        source: str,
        result: LyricResult,
        ttl_seconds: Optional[int] = None,
    ) -> None:
        """Store a lyric result in the cache."""
        try:
            key = _generate_key(track, source)
        except ValueError:
            logger.warning("Cannot cache: insufficient track metadata.")
            return

        now = int(time.time())
        expires_at = now + ttl_seconds if ttl_seconds else None

        with sqlite3.connect(self.db_path) as conn:
            conn.execute(
                """INSERT OR REPLACE INTO cache
                   (key, source, status, lyrics, created_at, expires_at,
                    artist, title, album, length, confidence)
                   VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
                (
                    key,
                    source,
                    result.status.value,
                    str(result.lyrics) if result.lyrics else None,
                    now,
                    expires_at,
                    track.artist,
                    track.title,
                    track.album,
                    track.length,
                    result.confidence,
                ),
            )
            conn.commit()
        logger.debug(
            f"Cached: {source} / {track.display_name()} "
            f"[{result.status.value}, ttl={ttl_seconds}s]"
        )

    # Delete

    def clear_all(self) -> None:
        """Remove every entry from the cache."""
        with sqlite3.connect(self.db_path) as conn:
            conn.execute("DELETE FROM cache")
            conn.commit()
        logger.info("Cache cleared.")

    def clear_track(self, track: TrackMeta) -> None:
        """Remove all cached entries (every source) for a single track."""
        conditions, params = self._track_where(track)
        if not conditions:
            logger.info(f"No cache entries found for {track.display_name()}.")
            return
        where = " AND ".join(conditions)
        with sqlite3.connect(self.db_path) as conn:
            cur = conn.execute(f"DELETE FROM cache WHERE {where}", params)
            conn.commit()
        if cur.rowcount:
            logger.info(
                f"Cleared {cur.rowcount} cache entries for {track.display_name()}."
            )
        else:
            logger.info(f"No cache entries found for {track.display_name()}.")

    def prune(self) -> int:
        """Remove all expired entries. Returns the number of rows deleted."""
        with sqlite3.connect(self.db_path) as conn:
            cur = conn.execute(
                "DELETE FROM cache WHERE expires_at IS NOT NULL AND expires_at < ?",
                (int(time.time()),),
            )
            conn.commit()
        count = cur.rowcount
        logger.info(f"Pruned {count} expired cache entries.")
        return count

    @staticmethod
    def _track_where(track: TrackMeta) -> tuple[list[str], list[str]]:
        """Build WHERE conditions to match a track across all sources."""
        conditions: list[str] = []
        params: list[str] = []
        if track.artist:
            conditions.append("artist = ?")
            params.append(track.artist)
        if track.title:
            conditions.append("title = ?")
            params.append(track.title)
        if track.album:
            conditions.append("album = ?")
            params.append(track.album)
        return conditions, params

    # Exact cross-source search

    def find_best_positive(self, track: TrackMeta) -> Optional[LyricResult]:
        """Find the best positive (synced/unsynced) cache entry for *track*.

        Uses exact metadata match (artist + title + album) across all sources.
        Returns the highest-confidence entry, or None.
        """
        conditions, params = self._track_where(track)
        if not conditions:
            return None

        now = int(time.time())
        conditions.append("status IN (?, ?)")
        params.extend(
            [CacheStatus.SUCCESS_SYNCED.value, CacheStatus.SUCCESS_UNSYNCED.value]
        )
        conditions.append("(expires_at IS NULL OR expires_at > ?)")
        params.append(str(now))

        where = " AND ".join(conditions)
        with sqlite3.connect(self.db_path) as conn:
            conn.row_factory = sqlite3.Row
            rows = conn.execute(
                f"SELECT status, lyrics, source, confidence FROM cache WHERE {where} "
                "ORDER BY COALESCE(confidence, "
                "  CASE status WHEN ? THEN ? ELSE ? END"
                ") DESC, "
                "CASE status WHEN ? THEN 0 ELSE 1 END, "
                "created_at DESC LIMIT 1",
                params
                + [
                    CacheStatus.SUCCESS_SYNCED.value,
                    LEGACY_CONFIDENCE_SYNCED,
                    LEGACY_CONFIDENCE_UNSYNCED,
                    CacheStatus.SUCCESS_SYNCED.value,
                ],
            ).fetchall()

        if not rows:
            return None

        row = dict(rows[0])
        confidence = row["confidence"]
        if confidence is None:
            confidence = (
                LEGACY_CONFIDENCE_SYNCED
                if row["status"] == CacheStatus.SUCCESS_SYNCED.value
                else LEGACY_CONFIDENCE_UNSYNCED
            )
        return LyricResult(
            status=CacheStatus(row["status"]),
            lyrics=LRCData(row["lyrics"]) if row["lyrics"] else None,
            source="cache-search",
            confidence=confidence,
        )

    # Fuzzy search

    def search_by_meta(
        self,
        artist: Optional[str],
        title: Optional[str],
        length: Optional[int] = None,
    ) -> list[dict]:
        """Search cache for lyrics matching artist/title with fuzzy normalization.

        Ignores album and source. Only returns positive results (synced/unsynced)
        that have not expired. When *length* is provided, filters by duration
        tolerance and sorts by closest match.
        """
        if not title:
            return []

        now = int(time.time())
        with sqlite3.connect(self.db_path) as conn:
            conn.row_factory = sqlite3.Row
            rows = conn.execute(
                """SELECT * FROM cache
                   WHERE status IN (?, ?)
                   AND (expires_at IS NULL OR expires_at > ?)""",
                (
                    CacheStatus.SUCCESS_SYNCED.value,
                    CacheStatus.SUCCESS_UNSYNCED.value,
                    now,
                ),
            ).fetchall()

        norm_title = _normalize_for_match(title)
        norm_artist = _normalize_artist(artist) if artist else None

        matches: list[dict] = []
        for row in rows:
            row_dict = dict(row)
            # Title must match
            row_title = row_dict.get("title") or ""
            if _normalize_for_match(row_title) != norm_title:
                continue
            # Artist must match if provided
            if norm_artist:
                row_artist = row_dict.get("artist") or ""
                if _normalize_artist(row_artist) != norm_artist:
                    continue
            matches.append(row_dict)

        # Duration filtering
        if length is not None and matches:
            scored = []
            for m in matches:
                row_len = m.get("length")
                if row_len is not None:
                    diff = abs(row_len - length)
                    if diff <= DURATION_TOLERANCE_MS:
                        scored.append((diff, m))
                else:
                    # No duration info in cache — still a candidate but lower priority
                    scored.append((DURATION_TOLERANCE_MS, m))
            scored.sort(
                key=lambda x: (
                    x[0],
                    -(x[1].get("confidence") or 0),
                    x[1].get("status") != CacheStatus.SUCCESS_SYNCED.value,
                    -(x[1].get("created_at") or 0),
                )
            )
            matches = [m for _, m in scored]

        return matches

    # Update

    def update_confidence(
        self,
        track: TrackMeta,
        confidence: float,
        source: str,
    ) -> int:
        """Update confidence for a specific source's cache entry matching *track*.

        Returns the number of rows updated.
        """
        conditions, params = self._track_where(track)
        if not conditions:
            return 0
        conditions.append("source = ?")
        params.append(source)
        where = " AND ".join(conditions)
        with sqlite3.connect(self.db_path) as conn:
            cur = conn.execute(
                f"UPDATE cache SET confidence = ? WHERE {where}",
                [confidence] + params,
            )
            conn.commit()
        return cur.rowcount

    # Query / inspect

    def query_track(self, track: TrackMeta) -> list[dict]:
        """Return all cached rows for a given track (across all sources)."""
        conditions, params = self._track_where(track)
        if not conditions:
            return []
        where = " AND ".join(conditions)
        with sqlite3.connect(self.db_path) as conn:
            conn.row_factory = sqlite3.Row
            return [
                dict(r)
                for r in conn.execute(
                    f"SELECT * FROM cache WHERE {where}", params
                ).fetchall()
            ]

    def query_all(self) -> list[dict]:
        """Return every row in the cache table."""
        with sqlite3.connect(self.db_path) as conn:
            conn.row_factory = sqlite3.Row
            return [dict(r) for r in conn.execute("SELECT * FROM cache").fetchall()]

    def stats(self) -> dict:
        """Return aggregate cache statistics."""
        now = int(time.time())
        with sqlite3.connect(self.db_path) as conn:
            total = conn.execute("SELECT COUNT(*) FROM cache").fetchone()[0]
            expired = conn.execute(
                "SELECT COUNT(*) FROM cache WHERE expires_at IS NOT NULL AND expires_at < ?",
                (now,),
            ).fetchone()[0]
            by_status = dict(
                conn.execute(
                    "SELECT status, COUNT(*) FROM cache GROUP BY status"
                ).fetchall()
            )
            by_source = dict(
                conn.execute(
                    "SELECT source, COUNT(*) FROM cache GROUP BY source"
                ).fetchall()
            )
            # Source × Status cross-tabulation
            source_status = conn.execute(
                "SELECT source, status, COUNT(*) FROM cache GROUP BY source, status"
            ).fetchall()
            # Confidence buckets (only for positive statuses)
            confidence_rows = conn.execute(
                "SELECT confidence FROM cache WHERE status IN (?, ?)",
                (
                    CacheStatus.SUCCESS_SYNCED.value,
                    CacheStatus.SUCCESS_UNSYNCED.value,
                ),
            ).fetchall()

        # Build source×status table: {source: {status: count}}
        source_status_table: dict[str, dict[str, int]] = {}
        for src, status, count in source_status:
            source_status_table.setdefault(src, {})[status] = count

        # Build confidence buckets
        buckets = {
            "legacy (NULL)": 0,
            "0-24": 0,
            "25-49": 0,
            "50-79": 0,
            "80-99": 0,
            "100": 0,
        }
        for (conf,) in confidence_rows:
            if conf is None:
                buckets["legacy (NULL)"] += 1
            elif conf >= 100:
                buckets["100"] += 1
            elif conf >= 80:
                buckets["80-99"] += 1
            elif conf >= 50:
                buckets["50-79"] += 1
            elif conf >= 25:
                buckets["25-49"] += 1
            else:
```
||||||
buckets["0-24"] += 1
|
|
||||||
|
|
||||||
return {
|
|
||||||
"total": total,
|
|
||||||
"expired": expired,
|
|
||||||
"active": total - expired,
|
|
||||||
"by_status": by_status,
|
|
||||||
"by_source": by_source,
|
|
||||||
"source_status": source_status_table,
|
|
||||||
"confidence_buckets": buckets,
|
|
||||||
}
|
|
||||||
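The bucketing in `stats()` assigns each confidence score to a left-inclusive range, with `NULL` reserved for legacy rows that predate confidence scoring. A standalone sketch of the same thresholds (the `bucket_confidence` name is hypothetical, used only for this illustration):

```python
# Mirrors the bucket logic in stats(): left-inclusive ranges, NULL = legacy row.
def bucket_confidence(conf):
    if conf is None:
        return "legacy (NULL)"
    if conf >= 100:
        return "100"
    if conf >= 80:
        return "80-99"
    if conf >= 50:
        return "50-79"
    if conf >= 25:
        return "25-49"
    return "0-24"

# Boundary checks: 80.0 already counts as "80-99", 79.9 does not.
print(bucket_confidence(None))   # legacy (NULL)
print(bucket_confidence(79.9))   # 50-79
print(bucket_confidence(80.0))   # 80-99
```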
@@ -1,114 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 10:17:56
Description: Global configuration constants and logger setup
"""

import os
import sys
from pathlib import Path
from platformdirs import user_cache_dir, user_config_dir
from dotenv import load_dotenv
from loguru import logger
from importlib.metadata import version

# Application
APP_NAME = "lrx-cli"
APP_AUTHOR = "Uyanide"
APP_VERSION = version(APP_NAME)

# Paths
CACHE_DIR = user_cache_dir(APP_NAME, APP_AUTHOR)
DB_PATH = os.path.join(CACHE_DIR, "cache.db")

# .env loading
_config_env = Path(user_config_dir(APP_NAME, APP_AUTHOR)) / ".env"
load_dotenv(_config_env)  # ~/.config/lrx-cli/.env
load_dotenv()  # .env in cwd (does NOT override existing vars)

# HTTP
HTTP_TIMEOUT = 10.0

# Cache TTLs (seconds)
TTL_SYNCED = None  # never expires
TTL_UNSYNCED = 86400  # 1 day
TTL_NOT_FOUND = 86400 * 3  # 3 days
TTL_NETWORK_ERROR = 3600  # 1 hour

# Search
DURATION_TOLERANCE_MS = 3000  # max duration mismatch for search matching

# Confidence scoring weights (sum to 100)
SCORE_W_TITLE = 40.0
SCORE_W_ARTIST = 30.0
SCORE_W_ALBUM = 10.0
SCORE_W_DURATION = 10.0
SCORE_W_SYNCED = 10.0

# Confidence thresholds
MIN_CONFIDENCE = 25.0  # below this, candidate is rejected
HIGH_CONFIDENCE = 80.0  # at or above this, stop searching early

# Multi-candidate fetching
MULTI_CANDIDATE_LIMIT = 3  # max candidates to try per search-based fetcher
MULTI_CANDIDATE_DELAY_S = 0.2  # delay between sequential lyric fetches

# Legacy cache rows (no confidence stored) get a base score by sync status
LEGACY_CONFIDENCE_SYNCED = 50.0
LEGACY_CONFIDENCE_UNSYNCED = 40.0

# Spotify related
SPOTIFY_TOKEN_URL = "https://open.spotify.com/api/token"
SPOTIFY_LYRICS_URL = "https://spclient.wg.spotify.com/color-lyrics/v2/track/"
SPOTIFY_SERVER_TIME_URL = "https://open.spotify.com/api/server-time"
SPOTIFY_SECRET_URL = (
    "https://raw.githubusercontent.com/xyloflake/spot-secrets-go"
    "/refs/heads/main/secrets/secrets.json"
)
SPOTIFY_SP_DC = os.environ.get("SPOTIFY_SP_DC", "")
SPOTIFY_TOKEN_CACHE_FILE = os.path.join(CACHE_DIR, "spotify_token.json")

# Netease api
NETEASE_SEARCH_URL = "https://music.163.com/api/cloudsearch/pc"
NETEASE_LYRIC_URL = "https://interface3.music.163.com/api/song/lyric"

# LRCLIB api
LRCLIB_API_URL = "https://lrclib.net/api/get"
LRCLIB_SEARCH_URL = "https://lrclib.net/api/search"

# QQ Music API (self-hosted proxy)
QQ_MUSIC_API_URL = os.environ.get("QQ_MUSIC_API_URL", "").rstrip("/")

# Musixmatch desktop API
MUSIXMATCH_USERTOKEN = os.environ.get("MUSIXMATCH_USERTOKEN", "")
MUSIXMATCH_SEARCH_URL = "https://apic-desktop.musixmatch.com/ws/1.1/track.search"
MUSIXMATCH_MACRO_URL = "https://apic-desktop.musixmatch.com/ws/1.1/macro.subtitles.get"
MUSIXMATCH_TRACK_MATCH_URL = (
    "https://apic-desktop.musixmatch.com/ws/1.1/matcher.track.get"
)

# Player preference (used when multiple MPRIS players are active)
PREFERRED_PLAYER = os.environ.get("PREFERRED_PLAYER", "spotify")

# User-Agents
UA_BROWSER = "Mozilla/5.0 (X11; Linux x86_64; rv:149.0) Gecko/20100101 Firefox/149.0"
UA_LRX = f"LRX-CLI {APP_VERSION} (https://github.com/Uyanide/lrx-cli)"

os.makedirs(CACHE_DIR, exist_ok=True)

# Logger
_LOG_FORMAT = (
    "<green>{time:YYYY-MM-DD HH:mm:ss}</green> | "
    "<level>{level: <8}</level> | "
    "<cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - "
    "<level>{message}</level>"
)

logger.remove()
logger.add(sys.stderr, format=_LOG_FORMAT, level="INFO")


def enable_debug() -> None:
    """Switch logger to DEBUG level."""
    logger.remove()
    logger.add(sys.stderr, format=_LOG_FORMAT, level="DEBUG")
@@ -1,230 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 11:09:53
Description: Core orchestrator — coordinates fetchers with cache-aware fallback
"""

import asyncio
from typing import Optional
from loguru import logger

from .fetchers import FetcherMethodType, build_plan, create_fetchers
from .fetchers.base import BaseFetcher
from .cache import CacheEngine
from .lrc import LRCData
from .config import (
    TTL_SYNCED,
    TTL_UNSYNCED,
    TTL_NOT_FOUND,
    TTL_NETWORK_ERROR,
    HIGH_CONFIDENCE,
)
from .models import TrackMeta, LyricResult, CacheStatus
from .enrichers import enrich_track


# Maps CacheStatus to the default TTL used when storing results
_STATUS_TTL: dict[CacheStatus, Optional[int]] = {
    CacheStatus.SUCCESS_SYNCED: TTL_SYNCED,
    CacheStatus.SUCCESS_UNSYNCED: TTL_UNSYNCED,
    CacheStatus.NOT_FOUND: TTL_NOT_FOUND,
    CacheStatus.NETWORK_ERROR: TTL_NETWORK_ERROR,
}


def _is_better(new: LyricResult, old: LyricResult) -> bool:
    """Compare two results: higher confidence wins; synced breaks ties."""
    if new.confidence != old.confidence:
        return new.confidence > old.confidence
    # Equal confidence — prefer synced as tiebreaker
    return (
        new.status == CacheStatus.SUCCESS_SYNCED
        and old.status != CacheStatus.SUCCESS_SYNCED
    )


def _normalize_result(result: LyricResult) -> LyricResult:
    """Normalize unsynced lyrics before returning."""
    if result.status == CacheStatus.SUCCESS_UNSYNCED and result.lyrics:
        return LyricResult(
            status=result.status,
            lyrics=result.lyrics.normalize_unsynced(),
            source=result.source,
            ttl=result.ttl,
            confidence=result.confidence,
        )
    return result


class LrcManager:
    """Main entry point for fetching lyrics with caching."""

    def __init__(self, db_path: str) -> None:
        self.cache = CacheEngine(db_path=db_path)
        self.fetchers = create_fetchers(self.cache)

    async def _run_group(
        self,
        group: list[BaseFetcher],
        track: TrackMeta,
        bypass_cache: bool,
    ) -> list[tuple[str, LyricResult]]:
        """Run one group: cache-check first, then parallel-fetch uncached. Returns (source, result) pairs."""
        cached_results: list[tuple[str, LyricResult]] = []
        need_fetch: list[BaseFetcher] = []

        for fetcher in group:
            source = fetcher.source_name
            if not bypass_cache and not fetcher.self_cached:
                cached = self.cache.get(track, source)
                if cached:
                    if cached.status in (
                        CacheStatus.NOT_FOUND,
                        CacheStatus.NETWORK_ERROR,
                    ):
                        logger.debug(
                            f"[{source}] cache hit: {cached.status.value}, skipping"
                        )
                        continue
                    is_trusted = cached.confidence >= HIGH_CONFIDENCE
                    logger.info(
                        f"[{source}] cache hit: {cached.status.value}"
                        f" (confidence={cached.confidence:.0f})"
                    )
                    cached_results.append((source, cached))
                    # Return immediately on trusted synced cache hit
                    if cached.status == CacheStatus.SUCCESS_SYNCED and is_trusted:
                        return cached_results
                    continue
            elif not fetcher.self_cached:
                logger.debug(f"[{source}] cache bypassed")
            need_fetch.append(fetcher)

        if need_fetch:
            task_map: dict[asyncio.Task, BaseFetcher] = {
                asyncio.create_task(f.fetch(track, bypass_cache=bypass_cache)): f
                for f in need_fetch
            }
            pending = set(task_map)

            while pending:
                done, pending = await asyncio.wait(
                    pending, return_when=asyncio.FIRST_COMPLETED
                )
                found_trusted = False
                for task in done:
                    fetcher = task_map[task]
                    source = fetcher.source_name
                    try:
                        result = task.result()
                    except Exception as e:
                        logger.error(f"[{source}] fetch raised: {e}")
                        continue

                    if result is None:
                        logger.debug(f"[{source}] returned None")
                        continue

                    if not fetcher.self_cached and not bypass_cache:
                        ttl = result.ttl or _STATUS_TTL.get(
                            result.status, TTL_NOT_FOUND
                        )
                        self.cache.set(track, source, result, ttl_seconds=ttl)

                    if result.status in (
                        CacheStatus.SUCCESS_SYNCED,
                        CacheStatus.SUCCESS_UNSYNCED,
                    ):
                        logger.info(
                            f"[{source}] got {result.status.value} lyrics"
                            f" (confidence={result.confidence:.0f})"
                        )
                        cached_results.append((source, result))

                        if (
                            result.status == CacheStatus.SUCCESS_SYNCED
                            and result.confidence >= HIGH_CONFIDENCE
                        ):
                            found_trusted = True

                if found_trusted:
                    for t in pending:
                        t.cancel()
                    await asyncio.gather(*pending, return_exceptions=True)
                    break

        return cached_results

    async def _fetch_for_track(
        self,
        track: TrackMeta,
        force_method: Optional[FetcherMethodType],
        bypass_cache: bool,
    ) -> Optional[LyricResult]:
        track = await enrich_track(track)
        logger.info(f"Fetching lyrics for: {track.display_name()}")

        plan = build_plan(self.fetchers, track, force_method)
        if not plan:
            return None

        best_result: Optional[LyricResult] = None

        for group in plan:
            group_results = await self._run_group(group, track, bypass_cache)

            for source, result in group_results:
                if result.status not in (
                    CacheStatus.SUCCESS_SYNCED,
                    CacheStatus.SUCCESS_UNSYNCED,
                ):
                    continue

                is_trusted = result.confidence >= HIGH_CONFIDENCE

                # Trusted synced → return immediately
                if result.status == CacheStatus.SUCCESS_SYNCED and is_trusted:
                    logger.info(
                        f"Returning {result.status.value} lyrics from {source}"
                        f" (confidence={result.confidence:.0f})"
                    )
                    return _normalize_result(result)

                if best_result is None or _is_better(result, best_result):
                    best_result = result

        if best_result:
            logger.info(
                f"Returning {best_result.status.value} lyrics from {best_result.source}"
            )
            return _normalize_result(best_result)

        logger.info(f"No lyrics found for {track.display_name()}")
        return None

    def fetch_for_track(
        self,
        track: TrackMeta,
        force_method: Optional[FetcherMethodType] = None,
        bypass_cache: bool = False,
    ) -> Optional[LyricResult]:
        """Fetch lyrics for *track* using the group-based parallel pipeline."""
        return asyncio.run(self._fetch_for_track(track, force_method, bypass_cache))

    def manual_insert(
        self,
        track: TrackMeta,
        lyrics: str,
    ) -> None:
        """Manually insert lyrics into the cache for a track."""
        track = asyncio.run(enrich_track(track))
        logger.info(f"Manually inserting lyrics for: {track.display_name()}")
        lrc = LRCData(lyrics)
        result = LyricResult(
            status=lrc.detect_sync_status(),
            lyrics=lrc,
            source="manual",
            ttl=None,
        )
        self.cache.set(track, "manual", result, ttl_seconds=None)
        logger.info("Lyrics inserted into cache.")
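The `_is_better` ordering in the manager (confidence first, sync status only as a tiebreaker) can be illustrated with a stand-in result type. `Result`, `SYNCED`, and `UNSYNCED` below are hypothetical substitutes for `LyricResult` and `CacheStatus`, used only so the sketch runs on its own:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for LyricResult / CacheStatus.
SYNCED, UNSYNCED = "success_synced", "success_unsynced"


@dataclass
class Result:
    confidence: float
    status: str


def is_better(new: Result, old: Result) -> bool:
    # Higher confidence wins outright.
    if new.confidence != old.confidence:
        return new.confidence > old.confidence
    # Equal confidence: synced beats unsynced.
    return new.status == SYNCED and old.status != SYNCED


# Confidence dominates sync status...
print(is_better(Result(90, UNSYNCED), Result(85, SYNCED)))  # True
# ...and synced only breaks exact ties.
print(is_better(Result(85, SYNCED), Result(85, UNSYNCED)))  # True
print(is_better(Result(85, UNSYNCED), Result(85, SYNCED)))  # False
```

Note the consequence: a high-confidence unsynced match is preferred over a lower-confidence synced one.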
@@ -1,35 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 02:33:26
Description: Base fetcher class and common interfaces
"""

from abc import ABC, abstractmethod
from typing import Optional

from ..models import TrackMeta, LyricResult


class BaseFetcher(ABC):
    @property
    @abstractmethod
    def source_name(self) -> str:
        """Name of the fetcher source."""
        pass

    @property
    def self_cached(self) -> bool:
        """True if this fetcher manages its own cache (skip per-source cache check)."""
        return False

    @abstractmethod
    def is_available(self, track: TrackMeta) -> bool:
        """Check if the fetcher is available for the given track (e.g. has required metadata)."""
        pass

    @abstractmethod
    async def fetch(
        self, track: TrackMeta, bypass_cache: bool = False
    ) -> Optional[LyricResult]:
        """Fetch lyrics for the given track. Returns None if unable to fetch."""
        pass
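A new source plugs in by subclassing `BaseFetcher` and implementing the three abstract members. A minimal sketch against a local mirror of the interface (the real class imports from the package and returns a `LyricResult`; the `EchoFetcher` name, the dict-shaped track, and the dict return value are all illustrative assumptions):

```python
import asyncio
from abc import ABC, abstractmethod
from typing import Optional


# Local mirror of the BaseFetcher interface, so this sketch runs standalone.
class BaseFetcher(ABC):
    @property
    @abstractmethod
    def source_name(self) -> str: ...

    @property
    def self_cached(self) -> bool:
        return False

    @abstractmethod
    def is_available(self, track) -> bool: ...

    @abstractmethod
    async def fetch(self, track, bypass_cache: bool = False) -> Optional[dict]: ...


class EchoFetcher(BaseFetcher):
    """Toy fetcher: 'finds' lyrics only when the track has a title."""

    @property
    def source_name(self) -> str:
        return "echo"

    def is_available(self, track) -> bool:
        return bool(track.get("title"))

    async def fetch(self, track, bypass_cache: bool = False) -> Optional[dict]:
        if not self.is_available(track):
            return None
        return {"source": self.source_name, "lyrics": f"la la {track['title']}"}


result = asyncio.run(EchoFetcher().fetch({"title": "Song"}))
print(result["source"])  # echo
```

Fetchers that manage their own lookup in the shared cache (like `CacheSearchFetcher` below) additionally override `self_cached` to return `True`, which makes the manager skip its per-source cache check.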
@@ -1,101 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-28 05:57:46
Description: Cache-search fetcher — cross-album fuzzy lookup in the local cache
"""

"""
Searches existing cache entries by artist + title with fuzzy normalization,
ignoring album and source. Useful when the same track appears on different
albums or is played from different players.
"""

from typing import Optional
from loguru import logger

from .base import BaseFetcher
from .selection import SearchCandidate, select_best
from ..models import TrackMeta, LyricResult, CacheStatus
from ..cache import CacheEngine
from ..lrc import LRCData


class CacheSearchFetcher(BaseFetcher):
    def __init__(self, cache: CacheEngine) -> None:
        self._cache = cache

    @property
    def source_name(self) -> str:
        return "cache-search"

    @property
    def self_cached(self) -> bool:
        return True

    def is_available(self, track: TrackMeta) -> bool:
        return bool(track.title)

    async def fetch(
        self, track: TrackMeta, bypass_cache: bool = False
    ) -> Optional[LyricResult]:
        if bypass_cache:
            logger.debug("Cache-search: bypassed by caller")
            return None

        if not track.title:
            logger.debug("Cache-search: skipped — no title")
            return None

        # Fast path: exact metadata match (artist+title+album), single SQL query
        exact = self._cache.find_best_positive(track)
        if exact:
            logger.info(f"Cache-search: exact hit ({exact.status.value})")
            return exact

        # Slow path: fuzzy cross-album search
        matches = self._cache.search_by_meta(
            artist=track.artist,
            title=track.title,
            length=track.length,
        )

        if not matches:
            logger.debug(f"Cache-search: no match for {track.display_name()}")
            return None

        # Pick best by confidence scoring
        candidates = [
            SearchCandidate(
                item=m,
                duration_ms=float(m["length"]) if m.get("length") else None,
                is_synced=m.get("status") == CacheStatus.SUCCESS_SYNCED.value,
                title=m.get("title"),
                artist=m.get("artist"),
                album=m.get("album"),
            )
            for m in matches
            if m.get("lyrics")
        ]
        best, confidence = select_best(
            candidates,
            track.length,
            title=track.title,
            artist=track.artist,
            album=track.album,
        )

        if not best:
            return None

        status = CacheStatus(best["status"])
        logger.info(
            f"Cache-search: fuzzy hit from [{best.get('source')}] "
            f"album={best.get('album')!r} ({status.value}, confidence={confidence:.0f})"
        )
        return LyricResult(
            status=status,
            lyrics=LRCData(best["lyrics"]),
            source=self.source_name,
            confidence=confidence,
        )
@@ -1,104 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-26 02:08:41
Description: Local fetcher — reads lyrics from .lrc sidecar files or embedded audio metadata
"""

"""
Priority:
1. Same-directory .lrc file (e.g. /path/to/track.lrc)
2. Embedded lyrics in audio metadata (FLAC, MP3 USLT/SYLT tags)
"""

from typing import Optional
from loguru import logger
from mutagen._file import File
from mutagen.flac import FLAC

from .base import BaseFetcher
from ..models import TrackMeta, LyricResult
from ..lrc import get_audio_path, get_sidecar_path, LRCData


class LocalFetcher(BaseFetcher):
    @property
    def source_name(self) -> str:
        return "local"

    def is_available(self, track: TrackMeta) -> bool:
        return track.is_local

    async def fetch(
        self, track: TrackMeta, bypass_cache: bool = False
    ) -> Optional[LyricResult]:
        """Attempt to read lyrics from local filesystem."""
        if not track.is_local or not track.url:
            return None

        audio_path = get_audio_path(track.url, ensure_exists=False)
        if not audio_path:
            logger.debug(f"Local: audio URL is not a valid file path: {track.url}")
            return None

        lrc_path = get_sidecar_path(
            track.url, ensure_audio_exists=False, ensure_exists=True
        )
        if lrc_path:
            try:
                with open(lrc_path, "r", encoding="utf-8") as f:
                    content = f.read().strip()
                    if content:
                        lrc = LRCData(content)
                        status = lrc.detect_sync_status()
                        logger.info(
                            f"Local: found .lrc sidecar ({status.value}) for {audio_path.name}"
                        )
                        return LyricResult(
                            status=status,
                            lyrics=lrc,
                            source=self.source_name,
                        )
            except Exception as e:
                logger.error(f"Local: error reading {lrc_path}: {e}")
        else:
            logger.debug(f"Local: no .lrc sidecar found for {audio_path}")

        # Embedded metadata
        if not audio_path.exists():
            logger.debug(f"Local: audio file does not exist: {audio_path}")
            return None
        try:
            audio = File(audio_path)
            if audio is not None:
                lyrics = None

                if isinstance(audio, FLAC):
                    # FLAC stores lyrics in vorbis comment tags
                    lyrics = (
                        audio.get("lyrics") or audio.get("unsynclyrics") or [None]
                    )[0]
                elif hasattr(audio, "tags") and audio.tags:
                    # MP3 / other: look for USLT or SYLT ID3 frames
                    for key in audio.tags.keys():
                        if key.startswith("USLT") or key.startswith("SYLT"):
                            lyrics = str(audio.tags[key])
                            break

                if lyrics:
                    lrc = LRCData(lyrics)
                    status = lrc.detect_sync_status()
                    logger.info(
                        f"Local: found embedded lyrics ({status.value}) for {audio_path.name}"
                    )
                    return LyricResult(
                        status=status,
                        lyrics=lrc,
                        source=f"{self.source_name} (embedded)",
                    )
                else:
                    logger.debug("Local: no embedded lyrics found")
        except Exception as e:
            logger.error(f"Local: error reading metadata for {audio_path}: {e}")

        logger.debug(f"Local: no lyrics found for {audio_path}")
        return None
@@ -1,104 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 05:23:38
Description: LRCLIB fetcher — queries lrclib.net for synced/plain lyrics
"""

"""
Requires complete track metadata (artist, title, album, duration).
"""

from typing import Optional
import httpx
from loguru import logger
from urllib.parse import urlencode

from .base import BaseFetcher
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import LRCData
from ..config import (
    HTTP_TIMEOUT,
    TTL_UNSYNCED,
    TTL_NOT_FOUND,
    TTL_NETWORK_ERROR,
    LRCLIB_API_URL,
    UA_LRX,
)


class LrclibFetcher(BaseFetcher):
    @property
    def source_name(self) -> str:
        return "lrclib"

    def is_available(self, track: TrackMeta) -> bool:
        return track.is_complete

    async def fetch(
        self, track: TrackMeta, bypass_cache: bool = False
    ) -> Optional[LyricResult]:
        """Fetch lyrics from LRCLIB. Requires complete metadata."""
        if not track.is_complete:
            logger.debug("LRCLIB: skipped — incomplete metadata")
            return None

        params = {
            "track_name": track.title,
            "artist_name": track.artist,
            "album_name": track.album,
            "duration": track.length / 1000.0 if track.length else 0,
        }
        url = f"{LRCLIB_API_URL}?{urlencode(params)}"
        logger.info(f"LRCLIB: fetching lyrics for {track.display_name()}")

        try:
            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
                resp = await client.get(url, headers={"User-Agent": UA_LRX})

            if resp.status_code == 404:
                logger.debug(f"LRCLIB: not found for {track.display_name()}")
                return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)

            if resp.status_code != 200:
                logger.error(f"LRCLIB: API returned {resp.status_code}")
                return LyricResult(
                    status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
                )

            data = resp.json()
            if not isinstance(data, dict):
                logger.error(f"LRCLIB: unexpected response type: {type(data).__name__}")
                return LyricResult(
                    status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
                )

            synced = data.get("syncedLyrics")
            unsynced = data.get("plainLyrics")

            if isinstance(synced, str) and synced.strip():
                lyrics = LRCData(synced)
                logger.info(f"LRCLIB: got synced lyrics ({len(lyrics)} lines)")
                return LyricResult(
                    status=CacheStatus.SUCCESS_SYNCED,
                    lyrics=lyrics,
                    source=self.source_name,
                )
            elif isinstance(unsynced, str) and unsynced.strip():
                lyrics = LRCData(unsynced)
                logger.info(f"LRCLIB: got unsynced lyrics ({len(lyrics)} lines)")
                return LyricResult(
                    status=CacheStatus.SUCCESS_UNSYNCED,
                    lyrics=lyrics,
                    source=self.source_name,
                    ttl=TTL_UNSYNCED,
                )
            else:
                logger.debug(f"LRCLIB: empty response for {track.display_name()}")
                return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)

        except httpx.HTTPError as e:
            logger.error(f"LRCLIB: HTTP error: {e}")
            return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)
        except Exception as e:
            logger.error(f"LRCLIB: unexpected error: {e}")
            return None
@@ -1,317 +0,0 @@
|
|||||||
"""
|
|
||||||
Author: Uyanide pywang0608@foxmail.com
|
|
||||||
Date: 2026-04-04 15:28:34
|
|
||||||
Description: Musixmatch fetchers (desktop API, usertoken auth)
|
|
||||||
"""
|
|
||||||
|
|
||||||
"""
|
|
||||||
Uses the Musixmatch desktop API (apic-desktop.musixmatch.com).
|
|
||||||
Requires MUSIXMATCH_USERTOKEN from https://curators.musixmatch.com/settings
|
|
||||||
→ "Copy debug info" → find UserToken.
|
|
||||||
|
|
||||||
Two fetchers:
|
|
||||||
musixmatch-spotify — direct lookup by Spotify track ID (exact, no search)
|
|
||||||
musixmatch — metadata search + multi-candidate fallback
|
|
||||||
"""
|
|
||||||
|
|
||||||
import json
|
|
||||||
from typing import Optional
|
|
||||||
from urllib.parse import urlencode
|
|
||||||
|
|
||||||
import httpx
|
|
||||||
from loguru import logger
|
|
||||||
|
|
||||||
from .base import BaseFetcher
|
|
||||||
from .selection import SearchCandidate, select_best
|
|
||||||
from ..lrc import LRCData
|
|
||||||
from ..models import CacheStatus, LyricResult, TrackMeta
|
|
||||||
from ..config import (
|
|
||||||
HTTP_TIMEOUT,
|
|
||||||
MUSIXMATCH_MACRO_URL,
|
|
||||||
MUSIXMATCH_SEARCH_URL,
|
|
||||||
MUSIXMATCH_USERTOKEN,
|
|
||||||
TTL_NETWORK_ERROR,
|
|
||||||
TTL_NOT_FOUND,
|
|
||||||
)
|
|
||||||
|
|
||||||
_MXM_HEADERS = {"Cookie": "x-mxm-token-guid="}
|
|
||||||
|
|
||||||
_MXM_MACRO_BASE_PARAMS: dict[str, str] = {
|
|
||||||
"format": "json",
|
|
||||||
"namespace": "lyrics_richsynched",
|
|
||||||
"subtitle_format": "mxm",
|
|
||||||
"optional_calls": "track.richsync",
|
|
||||||
"app_id": "web-desktop-app-v1.0",
|
|
||||||
}
|
|
||||||
|
|
||||||
|
|
||||||
def _format_ts(s: float) -> str:
|
|
||||||
mm = int(s) // 60
|
|
||||||
ss = int(s) % 60
|
|
||||||
cs = min(round((s % 1) * 100), 99)
|
|
||||||
return f"[{mm:02d}:{ss:02d}.{cs:02d}]"
|
|
||||||
|
|
||||||
|
|
||||||
def _parse_richsync(body: str) -> Optional[str]:
|
|
||||||
"""Parse richsync JSON body → LRC text. Each entry: {"ts": float, "x": str}."""
|
|
||||||
try:
|
|
||||||
data = json.loads(body)
|
|
||||||
if not isinstance(data, list):
|
|
||||||
return None
|
|
||||||
lines = []
|
|
||||||
for entry in data:
|
|
||||||
if not isinstance(entry, dict):
|
|
||||||
continue
|
|
||||||
ts = entry.get("ts")
|
|
||||||
x = entry.get("x")
|
|
||||||
if not isinstance(ts, (int, float)) or not isinstance(x, str):
|
|
||||||
continue
|
|
||||||
lines.append(f"{_format_ts(float(ts))}{x}")
|
|
||||||
return "\n".join(lines) if lines else None
|
|
||||||
except Exception:
|
|
||||||
return None
|
|
||||||
|
|
||||||
|
|
||||||
def _parse_subtitle(body: str) -> Optional[str]:
|
|
||||||
"""Parse subtitle JSON body → LRC text. Each entry: {"text": str, "time": {"total": float}}."""
|
|
||||||
try:
|
|
||||||
data = json.loads(body)
|
|
||||||
if not isinstance(data, list):
|
|
||||||
return None
|
|
||||||
lines = []
|
|
||||||
for entry in data:
|
|
||||||
if not isinstance(entry, dict):
|
|
||||||
continue
|
|
||||||
text = entry.get("text")
|
|
||||||
time_obj = entry.get("time")
|
|
||||||
if not isinstance(text, str) or not isinstance(time_obj, dict):
|
|
||||||
continue
|
|
||||||
total = time_obj.get("total")
|
|
||||||
if not isinstance(total, (int, float)):
|
|
||||||
continue
|
|
||||||
lines.append(f"{_format_ts(float(total))}{text}")
|
|
||||||
return "\n".join(lines) if lines else None
|
|
||||||
except Exception:
|
|
||||||
return None
|
|
||||||
|
|
||||||
|
|
||||||
async def _fetch_macro(
    client: httpx.AsyncClient,
    params: dict[str, str],
) -> Optional[LRCData]:
    """
    Call macro.subtitles.get with given params merged onto base params.
    Returns LRCData on success (richsync preferred over subtitle),
    None when the API returns no usable lyrics.
    Raises on HTTP/network errors.
    """
    merged = {**_MXM_MACRO_BASE_PARAMS, **params}
    url = f"{MUSIXMATCH_MACRO_URL}?{urlencode(merged)}"
    logger.debug(f"Musixmatch: macro call with {list(params.keys())}")

    resp = await client.get(url, headers=_MXM_HEADERS)
    resp.raise_for_status()

    data = resp.json()
    # Musixmatch returns body=[] (not {}) when the track is not found
    body = data.get("message", {}).get("body", {})
    if not isinstance(body, dict):
        return None
    macro_calls = body.get("macro_calls", {})
    if not isinstance(macro_calls, dict):
        return None

    # Prefer richsync (word-level timing)
    richsync_msg = macro_calls.get("track.richsync.get", {}).get("message", {})
    if (
        isinstance(richsync_msg, dict)
        and richsync_msg.get("header", {}).get("status_code") == 200
    ):
        richsync_body = (
            richsync_msg.get("body", {}).get("richsync", {}).get("richsync_body")
        )
        if isinstance(richsync_body, str):
            lrc_text = _parse_richsync(richsync_body)
            if lrc_text:
                lrc = LRCData(lrc_text)
                if lrc:
                    logger.debug("Musixmatch: got richsync lyrics")
                    return lrc

    # Fall back to subtitle (line-level timing)
    subtitle_msg = macro_calls.get("track.subtitles.get", {}).get("message", {})
    if (
        isinstance(subtitle_msg, dict)
        and subtitle_msg.get("header", {}).get("status_code") == 200
    ):
        subtitle_list = subtitle_msg.get("body", {}).get("subtitle_list", [])
        if isinstance(subtitle_list, list) and subtitle_list:
            subtitle_body = subtitle_list[0].get("subtitle", {}).get("subtitle_body")
            if isinstance(subtitle_body, str):
                lrc_text = _parse_subtitle(subtitle_body)
                if lrc_text:
                    lrc = LRCData(lrc_text)
                    if lrc:
                        logger.debug("Musixmatch: got subtitle lyrics")
                        return lrc

    logger.debug("Musixmatch: no usable lyrics in macro response")
    return None

class MusixmatchSpotifyFetcher(BaseFetcher):
    """Direct lookup by Spotify track ID — no search, single request."""

    @property
    def source_name(self) -> str:
        return "musixmatch-spotify"

    def is_available(self, track: TrackMeta) -> bool:
        return bool(track.trackid) and bool(MUSIXMATCH_USERTOKEN)

    async def fetch(
        self, track: TrackMeta, bypass_cache: bool = False
    ) -> Optional[LyricResult]:
        logger.info(f"Musixmatch-Spotify: fetching lyrics for {track.display_name()}")
        try:
            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
                lrc = await _fetch_macro(
                    client,
                    {
                        "track_spotify_id": track.trackid,  # type: ignore[dict-item]
                        "usertoken": MUSIXMATCH_USERTOKEN,
                    },
                )
        except Exception as e:
            logger.error(f"Musixmatch-Spotify: fetch failed: {e}")
            return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)

        if lrc is None:
            logger.debug(
                f"Musixmatch-Spotify: no lyrics found for {track.display_name()}"
            )
            return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)

        logger.info(f"Musixmatch-Spotify: got SUCCESS_SYNCED lyrics ({len(lrc)} lines)")
        return LyricResult(
            status=CacheStatus.SUCCESS_SYNCED,
            lyrics=lrc,
            source=self.source_name,
        )

class MusixmatchFetcher(BaseFetcher):
    """Metadata search + multi-candidate fallback."""

    @property
    def source_name(self) -> str:
        return "musixmatch"

    def is_available(self, track: TrackMeta) -> bool:
        return bool(track.title) and bool(MUSIXMATCH_USERTOKEN)

    async def _search(self, track: TrackMeta) -> tuple[Optional[int], float]:
        params: dict[str, str] = {
            "format": "json",
            "app_id": "web-desktop-app-v1.0",
            "q_track": track.title or "",
            "usertoken": MUSIXMATCH_USERTOKEN,
            "page_size": "10",
            "f_has_lyrics": "1",
        }
        if track.artist:
            params["q_artist"] = track.artist
        if track.album:
            params["q_album"] = track.album

        url = f"{MUSIXMATCH_SEARCH_URL}?{urlencode(params)}"
        logger.debug(f"Musixmatch: searching for '{track.display_name()}'")

        try:
            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
                resp = await client.get(url, headers=_MXM_HEADERS)
                resp.raise_for_status()
                data = resp.json()

                track_list = data.get("message", {}).get("body", {}).get("track_list", [])
                if not isinstance(track_list, list) or not track_list:
                    logger.debug("Musixmatch: search returned 0 results")
                    return None, 0.0

                logger.debug(f"Musixmatch: search returned {len(track_list)} candidates")

                candidates = [
                    SearchCandidate(
                        item=int(t["commontrack_id"]),
                        duration_ms=(
                            float(t["track_length"]) * 1000
                            if t.get("track_length")
                            else None
                        ),
                        is_synced=bool(t.get("has_subtitles") or t.get("has_richsync")),
                        title=t.get("track_name"),
                        artist=t.get("artist_name"),
                        album=t.get("album_name"),
                    )
                    for item in track_list
                    if isinstance(item, dict)
                    and isinstance(t := item.get("track", {}), dict)
                    and isinstance(t.get("commontrack_id"), int)
                    and not t.get("instrumental")
                ]

                best_id, confidence = select_best(
                    candidates,
                    track.length,
                    title=track.title,
                    artist=track.artist,
                    album=track.album,
                )
                if best_id is not None:
                    logger.debug(
                        f"Musixmatch: best candidate id={best_id} ({confidence:.0f})"
                    )
                else:
                    logger.debug("Musixmatch: no suitable candidate found")
                return best_id, confidence

        except Exception as e:
            logger.error(f"Musixmatch: search failed: {e}")
            return None, 0.0

    async def fetch(
        self, track: TrackMeta, bypass_cache: bool = False
    ) -> Optional[LyricResult]:
        logger.info(f"Musixmatch: fetching lyrics for {track.display_name()}")
        commontrack_id, confidence = await self._search(track)
        if commontrack_id is None:
            logger.debug(f"Musixmatch: no match found for {track.display_name()}")
            return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)

        try:
            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
                lrc = await _fetch_macro(
                    client,
                    {
                        "commontrack_id": str(commontrack_id),
                        "usertoken": MUSIXMATCH_USERTOKEN,
                    },
                )
        except Exception as e:
            logger.error(f"Musixmatch: fetch failed: {e}")
            return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)

        if lrc is None:
            logger.debug(f"Musixmatch: no lyrics for commontrack_id={commontrack_id}")
            return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)

        logger.info(
            f"Musixmatch: got SUCCESS_SYNCED lyrics "
            f"for commontrack_id={commontrack_id} ({len(lrc)} lines)"
        )
        return LyricResult(
            status=CacheStatus.SUCCESS_SYNCED,
            lyrics=lrc,
            source=self.source_name,
            confidence=confidence,
        )
@@ -1,204 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 11:04:51
Description: Netease Cloud Music fetcher
"""

"""
Uses the public cloudsearch API for searching and the song/lyric API for
retrieving lyrics. No authentication required.

Search results are filtered by duration when the track has a known length
to avoid returning lyrics for the wrong version of a song.
"""

import asyncio
from typing import Optional
import httpx
from loguru import logger

from .base import BaseFetcher
from .selection import SearchCandidate, select_ranked
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import LRCData
from ..config import (
    HTTP_TIMEOUT,
    TTL_NOT_FOUND,
    TTL_NETWORK_ERROR,
    MULTI_CANDIDATE_DELAY_S,
    NETEASE_SEARCH_URL,
    NETEASE_LYRIC_URL,
    UA_BROWSER,
)

_HEADERS = {
    "User-Agent": UA_BROWSER,
    "Referer": "https://music.163.com/",
}


class NeteaseFetcher(BaseFetcher):
    @property
    def source_name(self) -> str:
        return "netease"

    def is_available(self, track: TrackMeta) -> bool:
        return bool(track.title)

    async def _search(
        self, track: TrackMeta, limit: int = 10
    ) -> list[tuple[int, float]]:
        query = f"{track.artist or ''} {track.title or ''}".strip()
        if not query:
            return []

        logger.debug(f"Netease: searching for '{query}' (limit={limit})")

        try:
            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
                resp = await client.post(
                    NETEASE_SEARCH_URL,
                    headers=_HEADERS,
                    data={"s": query, "type": "1", "limit": str(limit), "offset": "0"},
                )
                resp.raise_for_status()
                result = resp.json()

                if not isinstance(result, dict):
                    logger.error(
                        f"Netease: search returned non-dict: {type(result).__name__}"
                    )
                    return []

                result_body = result.get("result")
                if not isinstance(result_body, dict):
                    logger.debug("Netease: search 'result' field missing or invalid")
                    return []

                songs = result_body.get("songs")
                if not isinstance(songs, list) or len(songs) == 0:
                    logger.debug("Netease: search returned 0 results")
                    return []

                logger.debug(f"Netease: search returned {len(songs)} candidates")

                candidates = [
                    SearchCandidate(
                        item=song_id,
                        duration_ms=float(song["dt"])
                        if isinstance(song.get("dt"), int)
                        else None,
                        title=song.get("name"),
                        artist=", ".join(a.get("name", "") for a in song.get("ar", []))
                        or None,
                        album=(song.get("al") or {}).get("name"),
                    )
                    for song in songs
                    if isinstance(song, dict) and isinstance(song_id := song.get("id"), int)
                ]
                ranked = select_ranked(
                    candidates,
                    track.length,
                    title=track.title,
                    artist=track.artist,
                    album=track.album,
                )
                if ranked:
                    logger.debug(
                        "Netease: top candidates: "
                        + ", ".join(f"id={i} ({c:.0f})" for i, c in ranked)
                    )
                else:
                    logger.debug("Netease: no suitable candidate found")
                return ranked

        except Exception as e:
            logger.error(f"Netease: search failed: {e}")
            return []

    async def _get_lyric(
        self, song_id: int, confidence: float = 0.0
    ) -> Optional[LyricResult]:
        logger.debug(f"Netease: fetching lyrics for song_id={song_id}")

        try:
            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
                resp = await client.post(
                    NETEASE_LYRIC_URL,
                    headers=_HEADERS,
                    data={
                        "id": str(song_id),
                        "cp": "false",
                        "tv": "0",
                        "lv": "0",
                        "rv": "0",
                        "kv": "0",
                        "yv": "0",
                        "ytv": "0",
                        "yrv": "0",
                    },
                )
                resp.raise_for_status()
                data = resp.json()

                if not isinstance(data, dict):
                    logger.error(
                        f"Netease: lyric response is not dict: {type(data).__name__}"
                    )
                    return LyricResult(
                        status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
                    )

                lrc_obj = data.get("lrc")
                if not isinstance(lrc_obj, dict):
                    logger.debug(
                        f"Netease: no 'lrc' object in response for song_id={song_id}"
                    )
                    return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)

                lrc: str = lrc_obj.get("lyric", "")
                if not isinstance(lrc, str) or not lrc.strip():
                    logger.debug(f"Netease: empty lyrics for song_id={song_id}")
                    return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)

                lrcdata = LRCData(lrc)
                status = lrcdata.detect_sync_status()
                logger.info(
                    f"Netease: got {status.value} lyrics for song_id={song_id} "
                    f"({len(lrcdata)} lines)"
                )
                return LyricResult(
                    status=status,
                    lyrics=lrcdata,
                    source=self.source_name,
                    confidence=confidence,
                )

        except Exception as e:
            logger.error(f"Netease: lyric fetch failed for song_id={song_id}: {e}")
            return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)

    async def fetch(
        self, track: TrackMeta, bypass_cache: bool = False
    ) -> Optional[LyricResult]:
        query = f"{track.artist or ''} {track.title or ''}".strip()
        if not query:
            logger.debug("Netease: skipped — insufficient metadata")
            return None

        logger.info(f"Netease: fetching lyrics for {track.display_name()}")
        candidates = await self._search(track)
        if not candidates:
            logger.debug(f"Netease: no match found for {track.display_name()}")
            return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)

        for i, (song_id, confidence) in enumerate(candidates):
            if i > 0:
                await asyncio.sleep(MULTI_CANDIDATE_DELAY_S)
            result = await self._get_lyric(song_id, confidence=confidence)
            if result is None or result.status == CacheStatus.NETWORK_ERROR:
                return result
            if result.status != CacheStatus.NOT_FOUND:
                return result

        return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
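The module docstring promises duration filtering, which this fetcher delegates to `select_ranked` from `.selection` (not shown in this diff). A minimal sketch of how such duration-based ranking could work — `Candidate`, `rank_by_duration`, and the scoring/tolerance values are all hypothetical, not the project's actual `select_ranked`:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Candidate:
    item: int
    duration_ms: Optional[float]


def rank_by_duration(
    candidates: list[Candidate],
    target_ms: Optional[float],
    tolerance_ms: float = 5000.0,
) -> list[tuple[int, float]]:
    """Hypothetical stand-in for select_ranked: closer duration -> higher confidence."""
    if target_ms is None:
        # No known track length: keep search order with a neutral confidence
        return [(c.item, 50.0) for c in candidates]
    scored = []
    for c in candidates:
        if c.duration_ms is None:
            continue
        diff = abs(c.duration_ms - target_ms)
        if diff > tolerance_ms:
            continue  # likely a different version (live, remix, extended cut)
        scored.append((c.item, 100.0 * (1.0 - diff / tolerance_ms)))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored
```

This is the shape the fetchers consume: a list of `(id, confidence)` pairs to try in order.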
@@ -1,171 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-31 01:54:02
Description: QQ Music fetcher via self-hosted API proxy
"""

"""
Requires a running qq-music-api instance.
The base URL is read from the QQ_MUSIC_API_URL environment variable.

Search → pick best match by duration → fetch LRC lyrics.
"""

import asyncio
from typing import Optional
import httpx
from loguru import logger

from .base import BaseFetcher
from .selection import SearchCandidate, select_ranked
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import LRCData
from ..config import (
    HTTP_TIMEOUT,
    TTL_NOT_FOUND,
    TTL_NETWORK_ERROR,
    MULTI_CANDIDATE_DELAY_S,
    QQ_MUSIC_API_URL,
)


class QQMusicFetcher(BaseFetcher):
    @property
    def source_name(self) -> str:
        return "qqmusic"

    def is_available(self, track: TrackMeta) -> bool:
        return bool(track.title) and bool(QQ_MUSIC_API_URL)

    async def _search(
        self, track: TrackMeta, limit: int = 10
    ) -> list[tuple[str, float]]:
        query = f"{track.artist or ''} {track.title or ''}".strip()
        if not query:
            return []

        logger.debug(f"QQMusic: searching for '{query}' (limit={limit})")

        try:
            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
                resp = await client.get(
                    f"{QQ_MUSIC_API_URL}/api/search",
                    params={"keyword": query, "type": "song", "num": limit},
                )
                resp.raise_for_status()
                data = resp.json()

                if data.get("code") != 0:
                    logger.error(f"QQMusic: search API error: {data}")
                    return []

                songs = data.get("data", {}).get("list", [])
                if not songs:
                    logger.debug("QQMusic: search returned 0 results")
                    return []

                logger.debug(f"QQMusic: search returned {len(songs)} candidates")

                candidates = [
                    SearchCandidate(
                        item=mid,
                        duration_ms=float(song["interval"]) * 1000
                        if isinstance(song.get("interval"), int)
                        else None,
                        title=song.get("name"),
                        artist=", ".join(s.get("name", "") for s in song.get("singer", []))
                        or None,
                        album=(song.get("album") or {}).get("name"),
                    )
                    for song in songs
                    if isinstance(song, dict) and isinstance(mid := song.get("mid"), str)
                ]
                ranked = select_ranked(
                    candidates,
                    track.length,
                    title=track.title,
                    artist=track.artist,
                    album=track.album,
                )
                if ranked:
                    logger.debug(
                        "QQMusic: top candidates: "
                        + ", ".join(f"mid={m} ({c:.0f})" for m, c in ranked)
                    )
                else:
                    logger.debug("QQMusic: no suitable candidate found")
                return ranked

        except Exception as e:
            logger.error(f"QQMusic: search failed: {e}")
            return []

    async def _get_lyric(
        self, mid: str, confidence: float = 0.0
    ) -> Optional[LyricResult]:
        logger.debug(f"QQMusic: fetching lyrics for mid={mid}")

        try:
            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
                resp = await client.get(
                    f"{QQ_MUSIC_API_URL}/api/lyric",
                    params={"mid": mid},
                )
                resp.raise_for_status()
                data = resp.json()

                if data.get("code") != 0:
                    logger.error(f"QQMusic: lyric API error: {data}")
                    return LyricResult(
                        status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
                    )

                lrc = data.get("data", {}).get("lyric", "")
                if not isinstance(lrc, str) or not lrc.strip():
                    logger.debug(f"QQMusic: empty lyrics for mid={mid}")
                    return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)

                lrcdata = LRCData(lrc)
                status = lrcdata.detect_sync_status()
                logger.info(
                    f"QQMusic: got {status.value} lyrics for mid={mid} ({len(lrcdata)} lines)"
                )
                return LyricResult(
                    status=status,
                    lyrics=lrcdata,
                    source=self.source_name,
                    confidence=confidence,
                )

        except Exception as e:
            logger.error(f"QQMusic: lyric fetch failed for mid={mid}: {e}")
            return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)

    async def fetch(
        self, track: TrackMeta, bypass_cache: bool = False
    ) -> Optional[LyricResult]:
        if not QQ_MUSIC_API_URL:
            logger.debug("QQMusic: skipped — QQ_MUSIC_API_URL not configured")
            return None

        query = f"{track.artist or ''} {track.title or ''}".strip()
        if not query:
            logger.debug("QQMusic: skipped — insufficient metadata")
            return None

        logger.info(f"QQMusic: fetching lyrics for {track.display_name()}")
        candidates = await self._search(track)
        if not candidates:
            logger.debug(f"QQMusic: no match found for {track.display_name()}")
            return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)

        for i, (mid, confidence) in enumerate(candidates):
            if i > 0:
                await asyncio.sleep(MULTI_CANDIDATE_DELAY_S)
            result = await self._get_lyric(mid, confidence=confidence)
            if result is None or result.status == CacheStatus.NETWORK_ERROR:
                return result
            if result.status != CacheStatus.NOT_FOUND:
                return result

        return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
@@ -1,352 +0,0 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 10:43:21
Description: Spotify fetcher — obtains synced lyrics via Spotify's internal color-lyrics API.
"""

"""
Authentication flow:
1. Fetch server time from Spotify
2. Fetch TOTP secret
3. Generate a TOTP code and exchange it (with SP_DC cookie) for an access token
4. Request lyrics using the access token

The secret and token are cached on the instance to avoid redundant network
calls within the same session.

Requires SPOTIFY_SP_DC environment variable to be set.
"""

import httpx
import json
import time
import struct
import hmac
import hashlib
from typing import Optional, Tuple
from loguru import logger

from .base import BaseFetcher
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import LRCData
from ..config import (
    HTTP_TIMEOUT,
    TTL_NOT_FOUND,
    TTL_NETWORK_ERROR,
    SPOTIFY_TOKEN_URL,
    SPOTIFY_LYRICS_URL,
    SPOTIFY_SERVER_TIME_URL,
    SPOTIFY_SECRET_URL,
    SPOTIFY_SP_DC,
    SPOTIFY_TOKEN_CACHE_FILE,
    UA_BROWSER,
)

_SPOTIFY_BASE_HEADERS = {
    "Referer": "https://open.spotify.com/",
    "Origin": "https://open.spotify.com",
    "App-Platform": "WebPlayer",
    "Spotify-App-Version": "1.2.88.21.g8e037c8f",
}

class SpotifyFetcher(BaseFetcher):
    def __init__(self) -> None:
        # Session-level caches to avoid refetching within the same run
        self._cached_secret: Optional[Tuple[str, int]] = None
        self._cached_token: Optional[str] = None
        self._token_expires_at: float = 0.0

    @property
    def source_name(self) -> str:
        return "spotify"

    def is_available(self, track: TrackMeta) -> bool:
        return bool(track.trackid) and bool(SPOTIFY_SP_DC)

    @staticmethod
    def _generate_totp(server_time_s: int, secret: str) -> str:
        """Generate a 6-digit TOTP code compatible with Spotify's auth.

        Uses HMAC-SHA1 with a 30-second period, matching the Go reference.
        """
        counter = server_time_s // 30
        counter_bytes = struct.pack(">Q", counter)

        mac = hmac.new(secret.encode(), counter_bytes, hashlib.sha1).digest()

        offset = mac[-1] & 0x0F
        binary_code = (
            (mac[offset] & 0x7F) << 24
            | (mac[offset + 1] & 0xFF) << 16
            | (mac[offset + 2] & 0xFF) << 8
            | (mac[offset + 3] & 0xFF)
        )

        code = binary_code % (10**6)
        return str(code).zfill(6)

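`_generate_totp` is the standard RFC 4226 dynamic truncation applied to an HMAC-SHA1 of the 30-second time counter (RFC 6238 with default parameters). The same math as a standalone function:

```python
import hmac
import struct
import hashlib


def generate_totp(server_time_s: int, secret: str, period: int = 30) -> str:
    """6-digit TOTP via HMAC-SHA1 dynamic truncation (RFC 4226 §5.3)."""
    # 8-byte big-endian time counter
    counter = struct.pack(">Q", server_time_s // period)
    mac = hmac.new(secret.encode(), counter, hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks the offset
    offset = mac[-1] & 0x0F
    code = (
        (mac[offset] & 0x7F) << 24
        | mac[offset + 1] << 16
        | mac[offset + 2] << 8
        | mac[offset + 3]
    ) % 10**6
    return str(code).zfill(6)
```

With the RFC 6238 test secret `"12345678901234567890"` and time 59 (counter 1), this yields the published HOTP test vector `287082`, which confirms the truncation logic.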
    def _load_cached_token(self) -> Optional[str]:
        """Try to load a valid token from the persistent cache file."""
        try:
            with open(SPOTIFY_TOKEN_CACHE_FILE, "r") as f:
                data = json.load(f)
            expires_ms = data.get("accessTokenExpirationTimestampMs", 0)
            if expires_ms <= int(time.time() * 1000):
                logger.debug("Spotify: persisted token expired")
                return None
            token = data.get("accessToken", "")
            if not token:
                return None
            self._cached_token = token
            self._token_expires_at = expires_ms / 1000.0
            logger.debug("Spotify: loaded token from cache file")
            return token
        except (FileNotFoundError, json.JSONDecodeError, KeyError):
            return None

    def _save_token(self, body: dict) -> None:
        """Persist the token response to disk."""
        try:
            with open(SPOTIFY_TOKEN_CACHE_FILE, "w") as f:
                json.dump(body, f)
            logger.debug("Spotify: token saved to cache file")
        except Exception as e:
            logger.warning(f"Spotify: failed to write token cache: {e}")

    @staticmethod
    def _format_lrc_line(start_ms: int, words: str) -> str:
        """Format a single lyric line as LRC ``[mm:ss.cc]text``."""
        minutes = start_ms // 60000
        seconds = (start_ms // 1000) % 60
        # Clamp to 99 so 995–999 ms rounds to ".99" instead of ".100"
        centiseconds = min(round((start_ms % 1000) / 10.0), 99)
        return f"[{minutes:02d}:{seconds:02d}.{centiseconds:02.0f}]{words}"

    @staticmethod
    def _is_truly_synced(lines: list[dict]) -> bool:
        """Check if lyrics are actually synced (not all timestamps zero)."""
        for line in lines:
            try:
                ms = int(line.get("startTimeMs", "0"))
                if ms > 0:
                    return True
            except (ValueError, TypeError):
                continue
        return False

    async def _get_server_time(self, client: httpx.AsyncClient) -> Optional[int]:
        try:
            res = await client.get(SPOTIFY_SERVER_TIME_URL, timeout=HTTP_TIMEOUT)
            res.raise_for_status()
            data = res.json()
            if not isinstance(data, dict) or "serverTime" not in data:
                logger.error(f"Spotify: unexpected server-time response: {data}")
                return None
            server_time = data["serverTime"]
            logger.debug(f"Spotify: server time = {server_time}")
            return server_time
        except Exception as e:
            logger.error(f"Spotify: failed to fetch server time: {e}")
            return None

    async def _get_secret(self, client: httpx.AsyncClient) -> Optional[Tuple[str, int]]:
        if self._cached_secret is not None:
            logger.debug("Spotify: using cached TOTP secret")
            return self._cached_secret

        try:
            res = await client.get(SPOTIFY_SECRET_URL, timeout=HTTP_TIMEOUT)
            res.raise_for_status()
            data = res.json()

            if not isinstance(data, list) or len(data) == 0:
                logger.error(
                    f"Spotify: unexpected secrets response (type={type(data).__name__}, len={len(data) if isinstance(data, list) else '?'})"
                )
                return None

            last = data[-1]
            if "secret" not in last or "version" not in last:
                logger.error(f"Spotify: malformed secret entry: {list(last.keys())}")
                return None

            secret_raw = last["secret"]
            version = last["version"]

            parts = []
            for i, char in enumerate(secret_raw):
                parts.append(str(ord(char) ^ ((i % 33) + 9)))
            secret = "".join(parts)

            logger.debug(f"Spotify: decoded secret v{version} (len={len(secret)})")
            self._cached_secret = (secret, version)
            return self._cached_secret

        except Exception as e:
            logger.error(f"Spotify: failed to fetch secret: {e}")
            return None

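The decode step in `_get_secret` XORs each character code with a position-dependent key (`(i % 33) + 9`) and concatenates the resulting decimal values into the TOTP key string. The same transform as a one-liner, for reference (`decode_secret` is an illustrative name, not project API):

```python
def decode_secret(secret_raw: str) -> str:
    """XOR each char code with a position-dependent key, then join the
    decimal values — mirrors the loop in _get_secret above."""
    return "".join(str(ord(ch) ^ ((i % 33) + 9)) for i, ch in enumerate(secret_raw))
```

For example, `"AB"` decodes to `"7272"`: `ord('A') ^ 9 == 72` and `ord('B') ^ 10 == 72`.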
    async def _get_token(self) -> Optional[str]:
        if self._cached_token and time.time() < self._token_expires_at - 30:
            logger.debug("Spotify: using in-memory cached token")
            return self._cached_token

        disk_token = self._load_cached_token()
        if disk_token and time.time() < self._token_expires_at - 30:
            return disk_token

        if not SPOTIFY_SP_DC:
            logger.error(
                "Spotify: SPOTIFY_SP_DC env var not set — cannot authenticate with Spotify"
            )
            return None

        headers = {
            "User-Agent": UA_BROWSER,
            "Accept": "*/*",
            "Cookie": f"sp_dc={SPOTIFY_SP_DC}",
            **_SPOTIFY_BASE_HEADERS,
        }

        async with httpx.AsyncClient(headers=headers) as client:
            server_time = await self._get_server_time(client)
            if server_time is None:
                return None

            secret_data = await self._get_secret(client)
            if secret_data is None:
                return None

            secret, version = secret_data
            totp = self._generate_totp(server_time, secret)
            logger.debug(f"Spotify: generated TOTP v{version}: {totp}")

            params = {
                "reason": "init",
                "productType": "web-player",
                "totp": totp,
                "totpVer": str(version),
                "totpServer": totp,
            }

            try:
                res = await client.get(
                    SPOTIFY_TOKEN_URL, params=params, timeout=HTTP_TIMEOUT
                )
                if res.status_code != 200:
                    logger.error(f"Spotify: token request returned {res.status_code}")
                    return None

                body = res.json()

                if not isinstance(body, dict) or "accessToken" not in body:
                    logger.error(
                        f"Spotify: unexpected token response keys: {list(body.keys()) if isinstance(body, dict) else type(body).__name__}"
                    )
                    return None

                token = body["accessToken"]
                is_anonymous = body.get("isAnonymous", False)
                if is_anonymous:
                    logger.warning(
                        "Spotify: received anonymous token — SP_DC may be invalid"
                    )

                expires_ms = body.get("accessTokenExpirationTimestampMs", 0)
                if expires_ms and expires_ms > int(time.time() * 1000):
                    self._token_expires_at = expires_ms / 1000.0
                else:
                    logger.warning("Spotify: token expiry missing or invalid")
                    self._token_expires_at = time.time() + 3600

                self._cached_token = token
                self._save_token(body)
                logger.debug("Spotify: obtained access token")
                return token

            except Exception as e:
                logger.error(f"Spotify: token request failed: {e}")
                return None

async def fetch(
|
|
||||||
self, track: TrackMeta, bypass_cache: bool = False
|
|
||||||
) -> Optional[LyricResult]:
|
|
||||||
if not track.trackid:
|
|
||||||
logger.debug("Spotify: skipped — no trackid in metadata")
|
|
||||||
return None
|
|
||||||
|
|
||||||
logger.info(f"Spotify: fetching lyrics for trackid={track.trackid}")
|
|
||||||
|
|
||||||
token = await self._get_token()
|
|
||||||
if not token:
|
|
||||||
logger.error("Spotify: cannot fetch lyrics without a token")
|
|
||||||
return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)
|
|
||||||
|
|
||||||
url = f"{SPOTIFY_LYRICS_URL}{track.trackid}?format=json&vocalRemoval=false&market=from_token"
|
|
||||||
headers = {
|
|
||||||
"User-Agent": UA_BROWSER,
|
|
||||||
"Accept": "application/json",
|
|
||||||
"Authorization": f"Bearer {token}",
|
|
||||||
**_SPOTIFY_BASE_HEADERS,
|
|
||||||
}
|
|
||||||
|
|
||||||
try:
|
|
||||||
async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
|
|
||||||
res = await client.get(url, headers=headers)
|
|
||||||
|
|
||||||
if res.status_code == 404:
|
|
||||||
logger.debug(f"Spotify: 404 for trackid={track.trackid}")
|
|
||||||
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
|
|
||||||
|
|
||||||
if res.status_code != 200:
|
|
||||||
logger.error(f"Spotify: lyrics API returned {res.status_code}")
|
|
||||||
return LyricResult(
|
|
||||||
status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
|
|
||||||
)
|
|
||||||
|
|
||||||
data = res.json()
|
|
||||||
|
|
||||||
if not isinstance(data, dict) or "lyrics" not in data:
|
|
||||||
logger.error("Spotify: unexpected lyrics response structure")
|
|
||||||
return LyricResult(
|
|
||||||
status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
|
|
||||||
)
|
|
||||||
|
|
||||||
lyrics_data = data["lyrics"]
|
|
||||||
sync_type = lyrics_data.get("syncType", "")
|
|
||||||
lines = lyrics_data.get("lines", [])
|
|
||||||
|
|
||||||
if not isinstance(lines, list) or len(lines) == 0:
|
|
||||||
logger.debug("Spotify: response contained no lyric lines")
|
|
||||||
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
|
|
||||||
|
|
||||||
is_synced = sync_type == "LINE_SYNCED" and self._is_truly_synced(lines)
|
|
||||||
|
|
||||||
lrc_lines: list[str] = []
|
|
||||||
for line in lines:
|
|
||||||
words = line.get("words", "")
|
|
||||||
if not isinstance(words, str):
|
|
||||||
continue
|
|
||||||
try:
|
|
||||||
ms = int(line.get("startTimeMs", "0"))
|
|
||||||
except (ValueError, TypeError):
|
|
||||||
ms = 0
|
|
||||||
|
|
||||||
if is_synced:
|
|
||||||
lrc_lines.append(self._format_lrc_line(ms, words))
|
|
||||||
else:
|
|
||||||
lrc_lines.append(f"[00:00.00]{words}")
|
|
||||||
|
|
||||||
content = LRCData("\n".join(lrc_lines))
|
|
||||||
status = (
|
|
||||||
CacheStatus.SUCCESS_SYNCED
|
|
||||||
if is_synced
|
|
||||||
else CacheStatus.SUCCESS_UNSYNCED
|
|
||||||
)
|
|
||||||
|
|
||||||
logger.info(f"Spotify: got {status.value} lyrics ({len(lrc_lines)} lines)")
|
|
||||||
return LyricResult(status=status, lyrics=content, source=self.source_name)
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Spotify: lyrics fetch failed: {e}")
|
|
||||||
return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)
|
|
||||||
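The fetch loop above renders each synced line via `self._format_lrc_line(ms, words)`, whose body is outside this hunk. As a rough sketch of what such a millisecond-to-`[mm:ss.cc]` formatter typically does (the function name and the round-and-clamp behavior here are assumptions, not the project's actual code):

```python
def format_lrc_line(ms: int, words: str) -> str:
    """Render a millisecond timestamp as a standard [mm:ss.cc] LRC line.

    Hypothetical stand-in for the fetcher's _format_lrc_line helper;
    centiseconds are rounded and clamped to 99 so the tag stays two digits.
    """
    mm, rem = divmod(max(ms, 0), 60_000)   # whole minutes
    ss, msec = divmod(rem, 1_000)          # whole seconds + leftover ms
    cs = min(round(msec / 10), 99)         # centiseconds, clamped
    return f"[{mm:02d}:{ss:02d}.{cs:02d}]{words}"


print(format_lrc_line(62_500, "One last kiss"))  # [01:02.50]One last kiss
```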
@@ -1,316 +0,0 @@
"""
|
|
||||||
Author: Uyanide pywang0608@foxmail.com
|
|
||||||
Date: 2026-03-25 21:54:01
|
|
||||||
Description: Shared LRC time-tag utilities (definitely overengineered)
|
|
||||||
"""
|
|
||||||
|
|
||||||
import re
|
|
||||||
from pathlib import Path
|
|
||||||
from typing import Optional
|
|
||||||
from urllib.parse import unquote
|
|
||||||
|
|
||||||
from .models import CacheStatus
|
|
||||||
|
|
||||||
# Parses any time tag input format:
|
|
||||||
# [mm:ss], [mm:ss.c], [mm:ss.cc], [mm:ss.ccc], [mm:ss:cc], …
|
|
||||||
_RAW_TAG_RE = re.compile(r"\[(\d{2,}):(\d{2})(?:[.:](\d{1,3}))?\]")
|
|
||||||
|
|
||||||
# Standard format after normalization: [mm:ss.cc]
|
|
||||||
# _STD_TAG_RE = re.compile(r"\[\d{2,}:\d{2}\.\d{2}\]")
|
|
||||||
|
|
||||||
# Standard format with capture groups
|
|
||||||
_STD_TAG_CAPTURE_RE = re.compile(r"\[(\d{2,}):(\d{2})\.(\d{2})\]")
|
|
||||||
|
|
||||||
# [offset:+/-xxx] tag — value in milliseconds
|
|
||||||
_OFFSET_RE = re.compile(r"^\[offset:\s*([+-]?\d+)\]\s*$", re.MULTILINE | re.IGNORECASE)
|
|
||||||
|
|
||||||
# Any number of ID/Time tags at the start of a line
|
|
||||||
_LINE_START_TAGS_RE = re.compile(r"^(?:\[[^\]]*\])+", re.MULTILINE)
|
|
||||||
|
|
||||||
# Any number of standard time tags at the start of a line
|
|
||||||
_LINE_START_STD_TAGS_RE = re.compile(r"^(?:\[\d{2,}:\d{2}\.\d{2}\])+", re.MULTILINE)
|
|
||||||
|
|
||||||
# Word-level sync tags
|
|
||||||
# <mm:ss>, <mm:ss.c>, <mm:ss.cc>, <mm:ss:cc>, <xx,yy,zz>
|
|
||||||
_WORD_SYNC_TAG_RE = re.compile(r"<\d{2,}:\d{2}(?:[.:]\d{1,3})?>|<\d+,\d+,\d+>")
|
|
||||||
|
|
||||||
# QRC is totally a completely different matter. Since they are still providing standard LRC APIs,
|
|
||||||
# it might be a good idea to leave this mass to the future :)
|
|
||||||
|
|
||||||
|
|
||||||
def _remove_pattern(text: str, pattern: re.Pattern) -> str:
    """Remove all occurrences of pattern from text, then strip leading/trailing whitespace."""
    return pattern.sub("", text).strip()


def _raw_tag_to_ms(mm: str, ss: str, frac: Optional[str]) -> int:
    """Convert parsed time tag components to total milliseconds."""
    if frac is None:
        ms = 0
    else:
        n = len(frac)
        if n == 1:
            ms = int(frac) * 100
        elif n == 2:
            ms = int(frac) * 10
        else:
            ms = int(frac)
    return (int(mm) * 60 + int(ss)) * 1000 + ms


def _raw_tag_to_cs(mm: str, ss: str, frac: Optional[str]) -> str:
    """Convert parsed time tag components to a standard [mm:ss.cc] string."""
    if frac is None:
        ms = 0
    else:
        # cc in [mm:ss:cc] is also treated as centiseconds, per LRC spec
        #          ^
        # why does this format even exist, idk
        n = len(frac)
        if n == 1:
            ms = int(frac) * 100
        elif n == 2:
            ms = int(frac) * 10
        else:
            ms = int(frac)
    cs = min(round(ms / 10), 99)
    return f"[{mm}:{ss}.{cs:02d}]"


def _sanitize_lyric_text(text: str) -> str:
    """Remove possible word-sync time tags from a lyric line.

    Assumes the normal line-sync time tags are already stripped.
    """
    return _remove_pattern(text, _WORD_SYNC_TAG_RE)


def _reformat(text: str) -> list[str]:
    """Parse each line and reformat to standard [mm:ss.cc]...content form.

    Handles any mix of time tag formats on input. Lines with no time tags
    are stripped of leading/trailing whitespace and passed through unchanged.
    """
    out: list[str] = []
    for line in text.splitlines():
        line = line.strip()
        pos = 0
        tags: list[str] = []
        while True:
            while pos < len(line) and line[pos].isspace():
                pos += 1
            m = _RAW_TAG_RE.match(line, pos)
            # Non-time tags are passed through as-is, except for leading/trailing whitespace which is stripped.
            if not m:
                # No more tags on this line
                break
            tags.append(_raw_tag_to_cs(m.group(1), m.group(2), m.group(3)))
            pos = m.end()
        if tags:
            # This could break lyric lines of some kinds of word-synced LRC formats, e.g.
            #   [00:01.00]Lyric [00:02.00]line
            # but such formats were not planned to be supported in the first place, so…
            out.append(_sanitize_lyric_text("".join(tags) + line[pos:]))
        else:
            out.append(line)
            # Empty lines with no tags are also preserved

    # Remove empty lines at the start and end of the whole text, but preserve blank lines in the middle
    while out and not out[0].strip():
        out.pop(0)
    while out and not out[-1].strip():
        out.pop()

    return out
class LRCData:
    _lines: list[str]

    def __init__(self, text: str | None = None) -> None:
        if not text:
            self._lines = []
            return
        self._lines = _reformat(text)
        self._apply_offset()

    def __str__(self) -> str:
        return "\n".join(self._lines)

    def __repr__(self) -> str:
        return f"LRCData(lines={self._lines!r})"

    def __bool__(self) -> bool:
        return len(self._lines) > 0

    def __len__(self) -> int:
        return len(self._lines)

    def _apply_offset(self):
        """Parse [offset:±ms] and shift all standard [mm:ss.cc] tags accordingly.

        Per LRC spec, positive offset = lyrics appear sooner (subtract from timestamps).
        """
        m: Optional[re.Match] = None
        for i, line in enumerate(self._lines):
            m = _OFFSET_RE.search(line)
            if m:
                self._lines.pop(i)
                break
        if not m:
            return
        offset_ms = int(m.group(1))
        if offset_ms == 0:
            return

        def _shift(match: re.Match) -> str:
            total_ms = max(
                0,
                (int(match.group(1)) * 60 + int(match.group(2))) * 1000
                + int(match.group(3)) * 10
                - offset_ms,
            )
            new_mm = total_ms // 60000
            new_ss = (total_ms % 60000) // 1000
            new_cs = min(round((total_ms % 1000) / 10), 99)
            return f"[{new_mm:02d}:{new_ss:02d}.{new_cs:02d}]"

        self._lines = [_STD_TAG_CAPTURE_RE.sub(_shift, line) for line in self._lines]

    def is_synced(self) -> bool:
        """Check whether the text contains non-zero LRC time tags.

        Assumes text has been normalized (standard [mm:ss.cc] format).
        """
        for line in self._lines:
            for m in _STD_TAG_CAPTURE_RE.finditer(line):
                if m.group(1) != "00" or m.group(2) != "00" or m.group(3) != "00":
                    return True
        return False

    def detect_sync_status(self) -> CacheStatus:
        """Determine whether lyrics contain meaningful LRC time tags.

        Assumes text has been normalized.
        """
        return (
            CacheStatus.SUCCESS_SYNCED
            if self.is_synced()
            else CacheStatus.SUCCESS_UNSYNCED
        )

    def normalize_unsynced(self):
        """Normalize unsynced lyrics so every line has a [00:00.00] tag.

        Assumes lyrics have been normalized.
        - Lines that already have time tags: replace them with [00:00.00]
        - Lines without leading tags: prepend [00:00.00]
        - Blank lines in the middle are converted to [00:00.00]
        """
        out: list[str] = []
        first = True
        for line in self._lines:
            stripped = line.strip()
            if not stripped and not first:
                out.append("[00:00.00]")
                continue
            elif not stripped:
                # Skip leading blank lines
                continue
            first = False
            cleaned = _remove_pattern(line, _LINE_START_STD_TAGS_RE)
            out.append(f"[00:00.00]{cleaned}")
        ret = LRCData()
        ret._lines = out
        return ret

    def to_plain(
        self,
        deduplicate: bool = False,
    ) -> str:
        """Convert lyrics to plain text with all tags stripped.

        If deduplicate is True, only keep the first of consecutive lines with the same
        lyric text (after stripping tags). Otherwise, lines with multiple time tags are
        duplicated once per tag.
        Assumes text has been normalized.
        """

        if not self.is_synced():
            return "\n".join(
                _remove_pattern(line, _LINE_START_TAGS_RE) for line in self._lines
            ).strip("\n")

        tagged_lines = []
        for line in self._lines:
            pos = 0
            tag_ms = []
            while True:
                # Only match strictly repeated standard time tags at the start of the line.
                # Lines without any time tags are ignored.
                # Lyric lines are assumed to be already stripped of whitespace, so no strips here.
                m = _STD_TAG_CAPTURE_RE.match(line, pos)
                if not m:
                    lyric = line[pos:]
                    for tag in tag_ms:
                        tagged_lines.append((tag, lyric))
                    break
                tag_ms.append(_raw_tag_to_ms(m.group(1), m.group(2), m.group(3)))
                pos = m.end()

        sorted_lines = [lyric for _, lyric in sorted(tagged_lines, key=lambda x: x[0])]

        if deduplicate:
            # Remove consecutive duplicates
            deduped_lines = []
            prev_line = None
            for line in sorted_lines:
                if line != prev_line:
                    deduped_lines.append(line)
                    prev_line = line
            sorted_lines = deduped_lines

        return "\n".join(sorted_lines).strip()

    def print_lyrics(
        self,
        plain: bool = False,
    ) -> None:
        """Print lyrics, optionally stripping tags.

        Assumes text has been normalized.
        """
        if plain:
            print(self.to_plain())
        else:
            print("\n".join(self._lines))
def get_audio_path(audio_url: str, ensure_exists: bool = False) -> Optional[Path]:
    """Convert a file:// URL to a Path; return None if invalid or (if ensure_exists) the file doesn't exist."""
    if not audio_url.startswith("file://"):
        return None
    file_path = unquote(audio_url.replace("file://", "", 1))
    path = Path(file_path)
    if ensure_exists and not path.exists():
        return None
    return path


def get_sidecar_path(
    audio_url: str,
    ensure_audio_exists: bool = False,
    ensure_exists: bool = False,
    extension: str = ".lrc",
) -> Optional[Path]:
    """Given a file:// URL, return the corresponding .lrc sidecar path.

    If ensure_audio_exists is True, return None if the audio file does not exist.
    If ensure_exists is True, return None if the .lrc file does not exist.
    """
    audio_path = get_audio_path(audio_url, ensure_exists=ensure_audio_exists)
    if not audio_path:
        return None
    lrc_path = audio_path.with_suffix(extension)
    if ensure_exists and not lrc_path.exists():
        return None
    return lrc_path
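The fractional-digit rule in `_raw_tag_to_ms` / `_raw_tag_to_cs` above (one digit = tenths, two = centiseconds, three = milliseconds, and `[mm:ss:cc]` treated as centiseconds) can be exercised in isolation. A self-contained restatement of that rule, for illustration only (`TAG_RE` and `tag_to_ms` are stand-in names, not the module's):

```python
import re

# Same shape as the module's _RAW_TAG_RE: [mm:ss], [mm:ss.c], [mm:ss.cc], [mm:ss.ccc], [mm:ss:cc]
TAG_RE = re.compile(r"\[(\d{2,}):(\d{2})(?:[.:](\d{1,3}))?\]")


def tag_to_ms(tag: str) -> int:
    """Parse one LRC time tag into total milliseconds, mirroring _raw_tag_to_ms."""
    m = TAG_RE.fullmatch(tag)
    if m is None:
        raise ValueError(f"not a time tag: {tag!r}")
    mm, ss, frac = m.group(1), m.group(2), m.group(3)
    if frac is None:
        ms = 0
    else:
        # 1 digit = tenths, 2 digits = centiseconds, 3 digits = milliseconds
        ms = int(frac) * {1: 100, 2: 10, 3: 1}[len(frac)]
    return (int(mm) * 60 + int(ss)) * 1000 + ms


print(tag_to_ms("[01:02.5]"))   # 62500
print(tag_to_ms("[01:02:50]"))  # 62500 ([mm:ss:cc] read as centiseconds)
```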
@@ -0,0 +1,2 @@
*
!.gitignore
@@ -0,0 +1,343 @@
from __future__ import annotations

import argparse
import asyncio
import json
import traceback
from dataclasses import asdict
from pathlib import Path
from typing import Any, Awaitable, Callable

import httpx

from lrx_cli.authenticators import create_authenticators
from lrx_cli.cache import CacheEngine
from lrx_cli.config import AppConfig, load_config
from lrx_cli.fetchers import (
    create_fetchers,
    LrclibFetcher,
    LrclibSearchFetcher,
    NeteaseFetcher,
    SpotifyFetcher,
    QQMusicFetcher,
    MusixmatchFetcher,
    MusixmatchSpotifyFetcher,
)
from lrx_cli.models import TrackMeta


SAMPLE_TRACK = TrackMeta(
    title="One Last Kiss",
    artist="Hikaru Utada",
    album="One Last Kiss",
    length=252026,
    trackid="5RhWszHMSKzb7KiXk4Ae0M",
    url="https://open.spotify.com/track/5RhWszHMSKzb7KiXk4Ae0M",
)


def _jsonable(value: Any) -> Any:
    if isinstance(value, (str, int, float, bool)) or value is None:
        return value
    if isinstance(value, dict):
        return {str(k): _jsonable(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [_jsonable(v) for v in value]
    if isinstance(value, bytes):
        try:
            return value.decode("utf-8")
        except Exception:
            return value.hex()
    if hasattr(value, "model_dump"):
        return _jsonable(value.model_dump())
    if hasattr(value, "__dict__"):
        return _jsonable(vars(value))
    return repr(value)


def _write_json(path: Path, payload: Any) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        json.dumps(_jsonable(payload), ensure_ascii=False, indent=2) + "\n",
        encoding="utf-8",
    )


def _clear_output_files(out_dir: Path) -> None:
    for pattern in ("*.json", "*.db"):
        for path in out_dir.glob(pattern):
            if path.is_file():
                path.unlink()


def _new_runtime(config: AppConfig, db_path: Path):
    cache = CacheEngine(str(db_path))
    authenticators = create_authenticators(cache, config)
    fetchers = create_fetchers(cache, authenticators, config)
    return fetchers, authenticators


async def _response_dump(resp: httpx.Response) -> dict[str, Any]:
    out: dict[str, Any] = {
        "status_code": resp.status_code,
        "headers": dict(resp.headers),
        "url": str(resp.request.url),
        "method": resp.request.method,
    }
    try:
        out["json"] = resp.json()
    except Exception:
        out["text"] = resp.text
    return out


def _decode_body(content: bytes) -> str:
    if not content:
        return ""
    try:
        return content.decode("utf-8")
    except Exception:
        return content.hex()


def _dump_request(req: httpx.Request) -> dict[str, Any]:
    query_params = {k: v for k, v in req.url.params.multi_items()}
    return {
        "method": req.method,
        "url": str(req.url),
        "headers": dict(req.headers),
        "query_params": query_params,
        "body": _decode_body(req.content),
    }
async def run_capture(out_dir: Path, timeout: float, strict: bool) -> int:
    out_dir.mkdir(parents=True, exist_ok=True)
    _clear_output_files(out_dir)

    # Use isolated cache DBs to avoid polluting normal runtime cache.
    anon_fetchers, _ = _new_runtime(AppConfig(), out_dir / ".capture-anon.db")
    cred_fetchers, _ = _new_runtime(load_config(), out_dir / ".capture-cred.db")

    calls: list[tuple[str, dict[str, Any], Callable[[], Awaitable[Any]]]] = []

    captured_requests: list[dict[str, Any]] = []
    original_send = httpx.AsyncClient.send

    async def _patched_send(
        self: httpx.AsyncClient,
        request: httpx.Request,
        *args: Any,
        **kwargs: Any,
    ) -> httpx.Response:
        captured_requests.append(_dump_request(request))
        return await original_send(self, request, *args, **kwargs)

    httpx.AsyncClient.send = _patched_send  # type: ignore[method-assign]

    async with httpx.AsyncClient(timeout=timeout) as client:
        # LRCLIB
        lrclib = anon_fetchers["lrclib"]
        assert isinstance(lrclib, LrclibFetcher)
        calls.append(
            (
                "lrclib_get",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: lrclib._api_get(client, SAMPLE_TRACK),
            )
        )

        lrclib_search = anon_fetchers["lrclib-search"]
        assert isinstance(lrclib_search, LrclibSearchFetcher)
        calls.append(
            (
                "lrclib_search_candidates",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: lrclib_search._api_candidates(client, SAMPLE_TRACK),
            )
        )

        # Netease
        netease = anon_fetchers["netease"]
        assert isinstance(netease, NeteaseFetcher)
        calls.append(
            (
                "netease_search_track",
                {"track": asdict(SAMPLE_TRACK), "limit": 5},
                lambda: netease._api_search_track(client, SAMPLE_TRACK, 5),
            )
        )
        calls.append(
            (
                "netease_lyric_track",
                {"track": asdict(SAMPLE_TRACK), "limit": 5},
                lambda: netease._api_lyric_track(client, SAMPLE_TRACK, 5),
            )
        )

        # Spotify (credentialed runtime)
        spotify = cred_fetchers["spotify"]
        assert isinstance(spotify, SpotifyFetcher)
        calls.append(
            (
                "spotify_lyrics",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: spotify._api_lyrics(SAMPLE_TRACK),
            )
        )

        # QQMusic (credentialed runtime)
        qq = cred_fetchers["qqmusic"]
        assert isinstance(qq, QQMusicFetcher)
        calls.append(
            (
                "qqmusic_search_track",
                {"track": asdict(SAMPLE_TRACK), "limit": 10},
                lambda: qq._api_search(SAMPLE_TRACK, 10),
            )
        )
        calls.append(
            (
                "qqmusic_lyric_track",
                {"track": asdict(SAMPLE_TRACK), "limit": 10},
                lambda: qq._api_lyric_track(SAMPLE_TRACK, 10),
            )
        )

        # Musixmatch anonymous
        mxm_anon = anon_fetchers["musixmatch"]
        mxm_sp_anon = anon_fetchers["musixmatch-spotify"]
        assert isinstance(mxm_anon, MusixmatchFetcher)
        assert isinstance(mxm_sp_anon, MusixmatchSpotifyFetcher)
        calls.append(
            (
                "musixmatch_anonymous_search_track",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: mxm_anon._api_search_track(SAMPLE_TRACK),
            )
        )
        calls.append(
            (
                "musixmatch_anonymous_macro_track",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: mxm_anon._api_macro_track(SAMPLE_TRACK),
            )
        )
        calls.append(
            (
                "musixmatch_spotify_anonymous_macro_track",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: mxm_sp_anon._api_macro_track(SAMPLE_TRACK),
            )
        )

        # Musixmatch credentialed (if a token is configured, this uses it)
        mxm_cred = cred_fetchers["musixmatch"]
        mxm_sp_cred = cred_fetchers["musixmatch-spotify"]
        assert isinstance(mxm_cred, MusixmatchFetcher)
        assert isinstance(mxm_sp_cred, MusixmatchSpotifyFetcher)
        calls.append(
            (
                "musixmatch_token_search_track",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: mxm_cred._api_search_track(SAMPLE_TRACK),
            )
        )
        calls.append(
            (
                "musixmatch_token_macro_track",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: mxm_cred._api_macro_track(SAMPLE_TRACK),
            )
        )
        calls.append(
            (
                "musixmatch_spotify_token_macro_track",
                {"track": asdict(SAMPLE_TRACK)},
                lambda: mxm_sp_cred._api_macro_track(SAMPLE_TRACK),
            )
        )

        failures = 0
        try:
            for idx, (name, request_payload, fn) in enumerate(calls, start=1):
                stem = f"{idx:03d}_{name}"
                req_path = out_dir / f"{stem}.request.json"
                resp_path = out_dir / f"{stem}.response.json"

                captured_requests.clear()

                try:
                    result = await fn()
                    if isinstance(result, httpx.Response):
                        payload = await _response_dump(result)
                    else:
                        payload = _jsonable(result)
                    _write_json(
                        req_path,
                        {
                            "call": name,
                            "input": request_payload,
                            "http_requests": _jsonable(captured_requests),
                        },
                    )
                    _write_json(resp_path, {"ok": True, "response": payload})
                except Exception as exc:
                    failures += 1
                    _write_json(
                        req_path,
                        {
                            "call": name,
                            "input": request_payload,
                            "http_requests": _jsonable(captured_requests),
                        },
                    )
                    _write_json(
                        resp_path,
                        {
                            "ok": False,
                            "error": str(exc),
                            "traceback": traceback.format_exc(),
                        },
                    )
                    if strict:
                        break
        finally:
            httpx.AsyncClient.send = original_send  # type: ignore[method-assign]

    return failures
def main() -> int:
    parser = argparse.ArgumentParser(
        description=(
            "Call external provider APIs with sample data and save request/response "
            "pairs for API reference."
        )
    )
    parser.add_argument(
        "--out-dir",
        type=Path,
        default=Path("misc/api_ref"),
        help="Output directory for request/response files.",
    )
    parser.add_argument(
        "--timeout",
        type=float,
        default=20.0,
        help="HTTP timeout in seconds.",
    )
    parser.add_argument(
        "--strict",
        action="store_true",
        help="Stop on first failed call.",
    )
    args = parser.parse_args()

    failures = asyncio.run(run_capture(args.out_dir, args.timeout, args.strict))
    print(f"capture finished: failures={failures}, out_dir={args.out_dir}")
    return 1 if (args.strict and failures > 0) else 0


if __name__ == "__main__":
    raise SystemExit(main())
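`_jsonable` in the capture script falls back to `vars(value)` for arbitrary objects, which is what lets dataclass instances such as `SAMPLE_TRACK` serialize cleanly; for plain dataclasses, `dataclasses.asdict` does the same job directly. A stdlib-only sketch of that round-trip (the `TrackMeta` here is a trimmed two-field stand-in for `lrx_cli.models.TrackMeta`):

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class TrackMeta:
    # Trimmed stand-in for lrx_cli.models.TrackMeta
    title: str
    artist: str


track = TrackMeta(title="One Last Kiss", artist="Hikaru Utada")
# asdict() converts the dataclass to a plain dict, which json.dumps can serialize
payload = json.dumps(asdict(track), ensure_ascii=False, indent=2)
print(payload)
```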
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

 [project]
 name = "lrx-cli"
-version = "0.4.6"
+version = "0.7.9"
 description = "Fetch line-synced lyrics for your music player."
 readme = "README.md"
 requires-python = ">=3.13"
@@ -14,8 +14,7 @@ dependencies = [
     "httpx>=0.28.1",
     "loguru>=0.7.3",
     "mutagen>=1.47.0",
-    "platformdirs>=4.9.4",
+    "platformdirs>=4.9.6",
-    "python-dotenv>=1.2.2",
 ]

 [project.scripts]
@@ -25,4 +24,20 @@ lrx = "lrx_cli.cli:run"
 ignore = ["E402"] # Since there are headers

 [dependency-groups]
-dev = ["pytest>=9.0.2", "ruff>=0.15.8"]
+dev = [
+    "poethepoet>=0.44.0",
+    "pyright>=1.1.406",
+    "pytest>=9.0.2",
+    "ruff>=0.15.8",
+]
+
+[tool.poe.tasks]
+fmt = "ruff format ."
+lint = { shell = "ruff check . && pyright" }
+test = "pytest"
+test-api = "pytest -m 'network or not network'"
+
+[tool.pyright]
+pythonVersion = "3.13"
+include = ["src", "tests", "misc"]
+typeCheckingMode = "standard"
@@ -0,0 +1,3 @@
[pytest]
addopts = -m "not network"
markers = network: marks tests that require real network access to external APIs
@@ -0,0 +1,180 @@
# This file was autogenerated by uv via the following command:
# uv export
-e .
anyio==4.13.0 \
    --hash=sha256:08b310f9e24a9594186fd75b4f73f4a4152069e3853f1ed8bfbf58369f4ad708 \
    --hash=sha256:334b70e641fd2221c1505b3890c69882fe4a2df910cba14d97019b90b24439dc
    # via httpx
attrs==26.1.0 \
    --hash=sha256:c647aa4a12dfbad9333ca4e71fe62ddc36f4e63b2d260a37a8b83d2f043ac309 \
    --hash=sha256:d03ceb89cb322a8fd706d4fb91940737b6642aa36998fe130a9bc96c985eff32
    # via cyclopts
certifi==2026.2.25 \
    --hash=sha256:027692e4402ad994f1c42e52a4997a9763c646b73e4096e4d5d6db8af1d6f0fa \
    --hash=sha256:e887ab5cee78ea814d3472169153c2d12cd43b14bd03329a39a9c6e2e80bfba7
    # via
    #   httpcore
    #   httpx
colorama==0.4.6 ; sys_platform == 'win32' \
    --hash=sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44 \
    --hash=sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6
    # via
    #   loguru
    #   pytest
cyclopts==4.10.2 \
    --hash=sha256:a1f2d6f8f7afac9456b48f75a40b36658778ddc9c6d406b520d017ae32c990fe \
    --hash=sha256:d7b950457ef2563596d56331f80cbbbf86a2772535fb8b315c4f03bc7e6127f1
    # via lrx-cli
dbus-next==0.2.3 \
    --hash=sha256:58948f9aff9db08316734c0be2a120f6dc502124d9642f55e90ac82ffb16a18b \
    --hash=sha256:f4eae26909332ada528c0a3549dda8d4f088f9b365153952a408e28023a626a5
    # via lrx-cli
docstring-parser==0.17.0 \
    --hash=sha256:583de4a309722b3315439bb31d64ba3eebada841f2e2cee23b99df001434c912 \
    --hash=sha256:cf2569abd23dce8099b300f9b4fa8191e9582dda731fd533daf54c4551658708
    # via cyclopts
docutils==0.22.4 \
    --hash=sha256:4db53b1fde9abecbb74d91230d32ab626d94f6badfc575d6db9194a49df29968 \
    --hash=sha256:d0013f540772d1420576855455d050a2180186c91c15779301ac2ccb3eeb68de
    # via rich-rst
h11==0.16.0 \
    --hash=sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1 \
    --hash=sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86
    # via httpcore
httpcore==1.0.9 \
    --hash=sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55 \
    --hash=sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8
    # via httpx
httpx==0.28.1 \
    --hash=sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc \
    --hash=sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad
    # via lrx-cli
idna==3.11 \
    --hash=sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea \
    --hash=sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902
    # via
    #   anyio
    #   httpx
iniconfig==2.3.0 \
    --hash=sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730 \
    --hash=sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12
    # via pytest
loguru==0.7.3 \
    --hash=sha256:19480589e77d47b8d85b2c827ad95d49bf31b0dcde16593892eb51dd18706eb6 \
    --hash=sha256:31a33c10c8e1e10422bfd431aeb5d351c7cf7fa671e3c4df004162264b28220c
    # via lrx-cli
markdown-it-py==4.0.0 \
    --hash=sha256:87327c59b172c5011896038353a81343b6754500a08cd7a4973bb48c6d578147 \
    --hash=sha256:cb0a2b4aa34f932c007117b194e945bd74e0ec24133ceb5bac59009cda1cb9f3
    # via rich
mdurl==0.1.2 \
    --hash=sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8 \
    --hash=sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba
    # via markdown-it-py
mutagen==1.47.0 \
    --hash=sha256:719fadef0a978c31b4cf3c956261b3c58b6948b32023078a2117b1de09f0fc99 \
    --hash=sha256:edd96f50c5907a9539d8e5bba7245f62c9f520aef333d13392a79a4f70aca719
    # via lrx-cli
nodeenv==1.10.0 \
    --hash=sha256:5bb13e3eed2923615535339b3c620e76779af4cb4c6a90deccc9e36b274d3827 \
    --hash=sha256:996c191ad80897d076bdfba80a41994c2b47c68e224c542b48feba42ba00f8bb
    # via pyright
packaging==26.0 \
    --hash=sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4 \
    --hash=sha256:b36f1fef9334a5588b4166f8bcd26a14e521f2b55e6b9de3aaa80d3ff7a37529
|
||||||
|
# via pytest
|
||||||
|
pastel==0.2.1 \
|
||||||
|
--hash=sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364 \
|
||||||
|
--hash=sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d
|
||||||
|
# via poethepoet
|
||||||
|
platformdirs==4.9.6 \
|
||||||
|
--hash=sha256:3bfa75b0ad0db84096ae777218481852c0ebc6c727b3168c1b9e0118e458cf0a \
|
||||||
|
--hash=sha256:e61adb1d5e5cb3441b4b7710bea7e4c12250ca49439228cc1021c00dcfac0917
|
||||||
|
# via lrx-cli
|
||||||
|
pluggy==1.6.0 \
|
||||||
|
--hash=sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3 \
|
||||||
|
--hash=sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746
|
||||||
|
# via pytest
|
||||||
|
poethepoet==0.44.0 \
|
||||||
|
--hash=sha256:36d3d834708ed069ac1e4f8ed77915c55265b7b6e01aeb2fe617c9fe9cfd524a \
|
||||||
|
--hash=sha256:c2667b513621788fb46482e371cdf81c0b04344e0e0bcb7aa8af45f84c2fce7b
|
||||||
|
pygments==2.20.0 \
|
||||||
|
--hash=sha256:6757cd03768053ff99f3039c1a36d6c0aa0b263438fcab17520b30a303a82b5f \
|
||||||
|
--hash=sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176
|
||||||
|
# via
|
||||||
|
# pytest
|
||||||
|
# rich
|
||||||
|
pyright==1.1.408 \
|
||||||
|
--hash=sha256:090b32865f4fdb1e0e6cd82bf5618480d48eecd2eb2e70f960982a3d9a4c17c1 \
|
||||||
|
--hash=sha256:f28f2321f96852fa50b5829ea492f6adb0e6954568d1caa3f3af3a5f555eb684
|
||||||
|
pytest==9.0.3 \
|
||||||
|
--hash=sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9 \
|
||||||
|
--hash=sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c
|
||||||
|
pyyaml==6.0.3 \
|
||||||
|
--hash=sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c \
|
||||||
|
--hash=sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3 \
|
||||||
|
--hash=sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6 \
|
||||||
|
--hash=sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65 \
|
||||||
|
--hash=sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1 \
|
||||||
|
--hash=sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310 \
|
||||||
|
--hash=sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac \
|
||||||
|
--hash=sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9 \
|
||||||
|
--hash=sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7 \
|
||||||
|
--hash=sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35 \
|
||||||
|
--hash=sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb \
|
||||||
|
--hash=sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065 \
|
||||||
|
--hash=sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c \
|
||||||
|
--hash=sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c \
|
||||||
|
--hash=sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764 \
|
||||||
|
--hash=sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac \
|
||||||
|
--hash=sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8 \
|
||||||
|
--hash=sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3 \
|
||||||
|
--hash=sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5 \
|
||||||
|
--hash=sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702 \
|
||||||
|
--hash=sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788 \
|
||||||
|
--hash=sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba \
|
||||||
|
--hash=sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5 \
|
||||||
|
--hash=sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26 \
|
||||||
|
--hash=sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f \
|
||||||
|
--hash=sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b \
|
||||||
|
--hash=sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be \
|
||||||
|
--hash=sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c \
|
||||||
|
--hash=sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6
|
||||||
|
# via poethepoet
|
||||||
|
rich==14.3.3 \
|
||||||
|
--hash=sha256:793431c1f8619afa7d3b52b2cdec859562b950ea0d4b6b505397612db8d5362d \
|
||||||
|
--hash=sha256:b8daa0b9e4eef54dd8cf7c86c03713f53241884e814f4e2f5fb342fe520f639b
|
||||||
|
# via
|
||||||
|
# cyclopts
|
||||||
|
# rich-rst
|
||||||
|
rich-rst==1.3.2 \
|
||||||
|
--hash=sha256:a1196fdddf1e364b02ec68a05e8ff8f6914fee10fbca2e6b6735f166bb0da8d4 \
|
||||||
|
--hash=sha256:a99b4907cbe118cf9d18b0b44de272efa61f15117c61e39ebdc431baf5df722a
|
||||||
|
# via cyclopts
|
||||||
|
ruff==0.15.10 \
|
||||||
|
--hash=sha256:0744e31482f8f7d0d10a11fcbf897af272fefdfcb10f5af907b18c2813ff4d5f \
|
||||||
|
--hash=sha256:0ee3ef42dab7078bda5ff6a1bcba8539e9857deb447132ad5566a038674540d0 \
|
||||||
|
--hash=sha256:136c00ca2f47b0018b073f28cb5c1506642a830ea941a60354b0e8bc8076b151 \
|
||||||
|
--hash=sha256:28cb32d53203242d403d819fd6983152489b12e4a3ae44993543d6fe62ab42ed \
|
||||||
|
--hash=sha256:51cb8cc943e891ba99989dd92d61e29b1d231e14811db9be6440ecf25d5c1609 \
|
||||||
|
--hash=sha256:601d1610a9e1f1c2165a4f561eeaa2e2ea1e97f3287c5aa258d3dab8b57c6188 \
|
||||||
|
--hash=sha256:8154d43684e4333360fedd11aaa40b1b08a4e37d8ffa9d95fee6fa5b37b6fab1 \
|
||||||
|
--hash=sha256:83e1dd04312997c99ea6965df66a14fb4f03ba978564574ffc68b0d61fd3989e \
|
||||||
|
--hash=sha256:8ab88715f3a6deb6bde6c227f3a123410bec7b855c3ae331b4c006189e895cef \
|
||||||
|
--hash=sha256:8b80a2f3c9c8a950d6237f2ca12b206bccff626139be9fa005f14feb881a1ae8 \
|
||||||
|
--hash=sha256:93cc06a19e5155b4441dd72808fdf84290d84ad8a39ca3b0f994363ade4cebb1 \
|
||||||
|
--hash=sha256:a768ff5969b4f44c349d48edf4ab4f91eddb27fd9d77799598e130fb628aa158 \
|
||||||
|
--hash=sha256:b0c52744cf9f143a393e284125d2576140b68264a93c6716464e129a3e9adb48 \
|
||||||
|
--hash=sha256:b1e7c16ea0ff5a53b7c2df52d947e685973049be1cdfe2b59a9c43601897b22e \
|
||||||
|
--hash=sha256:d1f86e67ebfdef88e00faefa1552b5e510e1d35f3be7d423dc7e84e63788c94e \
|
||||||
|
--hash=sha256:d4272e87e801e9a27a2e8df7b21011c909d9ddd82f4f3281d269b6ba19789ca5 \
|
||||||
|
--hash=sha256:e3e53c588164dc025b671c9df2462429d60357ea91af7e92e9d56c565a9f1b07 \
|
||||||
|
--hash=sha256:e59c9bdc056a320fb9ea1700a8d591718b8faf78af065484e801258d3a76bc3f
|
||||||
|
typing-extensions==4.15.0 \
|
||||||
|
--hash=sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466 \
|
||||||
|
--hash=sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548
|
||||||
|
# via pyright
|
||||||
|
win32-setctime==1.2.0 ; sys_platform == 'win32' \
|
||||||
|
--hash=sha256:95d644c4e708aba81dc3704a116d8cbc974d70b3bdb8be1d150e36be6e9d1390 \
|
||||||
|
--hash=sha256:ae1fdf948f5640aae05c511ade119313fb6a30d7eabe25fef9764dca5873c4c0
|
||||||
|
# via loguru
|
||||||
@@ -0,0 +1,21 @@
from .config import AppConfig, GeneralConfig, CredentialConfig, load_config
from .core import LrcManager
from .models import CacheStatus, TrackMeta, LyricResult
from .lrc import LRCData, LyricLine
from .fetchers import FetcherMethodType
from .utils import get_sidecar_path

__all__ = [
    "AppConfig",
    "GeneralConfig",
    "CredentialConfig",
    "load_config",
    "LrcManager",
    "CacheStatus",
    "TrackMeta",
    "LRCData",
    "LyricLine",
    "LyricResult",
    "FetcherMethodType",
    "get_sidecar_path",
]
@@ -0,0 +1,12 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-06 08:19:54
Description: The entry point.
"""

from __future__ import annotations

from .cli import run

if __name__ == "__main__":
    run()
@@ -0,0 +1,35 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-06 08:21:01
Description: Credential authenticators for third-party provider APIs
"""

from __future__ import annotations

from lrx_cli.authenticators.qqmusic import QQMusicAuthenticator

from .base import BaseAuthenticator
from .spotify import SpotifyAuthenticator
from .musixmatch import MusixmatchAuthenticator
from .dummy import DummyAuthenticator
from ..config import AppConfig

__all__ = [
    "BaseAuthenticator",
    "SpotifyAuthenticator",
    "MusixmatchAuthenticator",
    "QQMusicAuthenticator",
    "DummyAuthenticator",
]


def create_authenticators(cache, config: AppConfig) -> dict[str, BaseAuthenticator]:
    """Factory function to create authenticators with injected config."""
    return {
        "dummy": DummyAuthenticator(cache, config.credentials, config.general),
        "spotify": SpotifyAuthenticator(cache, config.credentials, config.general),
        "musixmatch": MusixmatchAuthenticator(
            cache, config.credentials, config.general
        ),
        "qqmusic": QQMusicAuthenticator(cache, config.credentials, config.general),
    }
@@ -0,0 +1,44 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-05 03:18:14
Description: Base class for credential authenticators.
"""

from __future__ import annotations

from abc import ABC, abstractmethod
from typing import Optional

from ..cache import CacheEngine
from ..config import CredentialConfig, GeneralConfig


class BaseAuthenticator(ABC):
    """Manages obtaining, caching, and refreshing a credential for one provider."""

    def __init__(
        self, cache: CacheEngine, credentials: CredentialConfig, general: GeneralConfig
    ) -> None:
        self._cache = cache
        self._credentials = credentials
        self._general = general

    @property
    @abstractmethod
    def name(self) -> str: ...

    def is_configured(self) -> bool:
        """True if the prerequisite config (e.g. env var) is present.

        Default is True — authenticators that can obtain credentials anonymously
        should not override this.
        """
        return True

    @abstractmethod
    async def authenticate(self) -> Optional[str]:
        """Return current valid credential string, refreshing if needed.

        Returns None if unavailable (misconfigured or network failure).
        """
        ...
@@ -0,0 +1,21 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-05 03:36:44
Description: A dummy authenticator that does nothing and always reports as configured.
"""

from __future__ import annotations

from .base import BaseAuthenticator


class DummyAuthenticator(BaseAuthenticator):
    @property
    def name(self) -> str:
        return "dummy"

    def is_configured(self) -> bool:
        return True

    async def authenticate(self) -> None:
        return None
@@ -0,0 +1,168 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-05 03:27:56
Description: Musixmatch authenticator — token management, 401 retry, and cooldown.
"""

from __future__ import annotations

import time
from typing import Optional
from urllib.parse import urlencode
import httpx
from loguru import logger

from .base import BaseAuthenticator
from ..cache import CacheEngine
from ..config import CredentialConfig, GeneralConfig, MUSIXMATCH_COOLDOWN_MS

_MUSIXMATCH_TOKEN_URL = "https://apic-desktop.musixmatch.com/ws/1.1/token.get"

_MXM_HEADERS = {"Cookie": "x-mxm-token-guid="}
_MXM_BASE_PARAMS = {
    "format": "json",
    "app_id": "web-desktop-app-v1.0",
}


def _new_mxm_client(timeout: float) -> httpx.AsyncClient:
    """Build Musixmatch client without httpx default User-Agent header."""
    client = httpx.AsyncClient(timeout=timeout, headers=_MXM_HEADERS)
    client.headers.pop("User-Agent", None)
    return client


class MusixmatchAuthenticator(BaseAuthenticator):
    def __init__(
        self, cache: CacheEngine, credentials: CredentialConfig, general: GeneralConfig
    ) -> None:
        super().__init__(cache, credentials, general)
        self._cached_token: Optional[str] = None
        self._cooldown_until_ms: int = 0

    @property
    def name(self) -> str:
        return "musixmatch"

    def is_configured(self) -> bool:
        return True  # anonymous token always available

    def is_cooldown(self) -> bool:
        """Return True if Musixmatch requests are blocked due to repeated auth failure."""
        now_ms = int(time.time() * 1000)
        if self._cooldown_until_ms > now_ms:
            return True
        data = self._cache.get_credential("musixmatch_cooldown")
        if data:
            until = data.get("until_ms", 0)
            if until > now_ms:
                self._cooldown_until_ms = until
                return True
        return False

    def _set_cooldown(self) -> None:
        now_ms = int(time.time() * 1000)
        until_ms = now_ms + MUSIXMATCH_COOLDOWN_MS
        self._cooldown_until_ms = until_ms
        self._cache.set_credential(
            "musixmatch_cooldown",
            {"until_ms": until_ms},
            expires_at_ms=until_ms,
        )
        logger.warning("Musixmatch: token unavailable, entering cooldown")

    def _invalidate_token(self) -> None:
        """Discard the current token from memory and DB."""
        self._cached_token = None
        # Store with an already-expired timestamp so get_credential returns None
        self._cache.set_credential("musixmatch", {"token": ""}, expires_at_ms=1)

    async def _fetch_new_token(self) -> Optional[str]:
        """Call token.get and persist the result. Returns token string or None."""
        params = {
            **_MXM_BASE_PARAMS,
            "user_language": "en",
            "t": str(int(time.time() * 1000)),
        }
        url = f"{_MUSIXMATCH_TOKEN_URL}?{urlencode(params)}"
        logger.debug("Musixmatch: fetching anonymous token")

        try:
            async with _new_mxm_client(self._general.http_timeout) as client:
                resp = await client.get(url)
                resp.raise_for_status()
                data = resp.json()
        except Exception as e:
            logger.warning(f"Musixmatch: token fetch failed: {e}")
            return None

        token = (
            data.get("message", {}).get("body", {}).get("user_token")
            if isinstance(data, dict)
            else None
        )
        if not isinstance(token, str) or not token:
            logger.warning("Musixmatch: unexpected token.get response structure")
            return None

        self._cached_token = token
        # No expiry — token is valid until we get a 401
        self._cache.set_credential("musixmatch", {"token": token}, expires_at_ms=None)
        logger.debug("Musixmatch: obtained anonymous token")
        return token

    async def _get_token(self) -> Optional[str]:
        """Return a valid token: env var > memory > DB > fresh fetch."""
        if self._credentials.musixmatch_usertoken:
            return self._credentials.musixmatch_usertoken

        if self._cached_token:
            return self._cached_token

        data = self._cache.get_credential("musixmatch")
        if data and isinstance(data.get("token"), str) and data["token"]:
            self._cached_token = data["token"]
            return self._cached_token

        return await self._fetch_new_token()

    async def authenticate(self) -> Optional[str]:
        if self.is_cooldown():
            logger.debug("Musixmatch: authenticate called during cooldown")
            return None
        return await self._get_token()

    async def get_json(self, url_base: str, params: dict) -> Optional[dict]:
        """Authenticated GET to a Musixmatch endpoint.

        - Injects format, app_id, and usertoken automatically.
        - On 401: invalidates token, fetches a fresh one, retries once.
        - On failed token fetch (initial or retry): sets cooldown, returns None.
        - On network / HTTP error: raises (callers map this to NETWORK_ERROR).
        - Returns None if cooldown is active.
        """
        if self.is_cooldown():
            logger.debug("Musixmatch: request blocked by cooldown")
            return None

        token = await self._get_token()
        if not token:
            self._set_cooldown()
            return None

        async with _new_mxm_client(self._general.http_timeout) as client:
            url = f"{url_base}?{urlencode({**_MXM_BASE_PARAMS, **params, 'usertoken': token})}"
            resp = await client.get(url)

            if resp.status_code == 401:
                logger.debug("Musixmatch: 401 received, refreshing token")
                self._invalidate_token()
                token = await self._fetch_new_token()
                if not token:
                    self._set_cooldown()
                    return None
                url = f"{url_base}?{urlencode({**_MXM_BASE_PARAMS, **params, 'usertoken': token})}"
                resp = await client.get(url)

            resp.raise_for_status()
            return resp.json()
@@ -0,0 +1,74 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-05 03:47:30
Description: QQ Music API authenticator - currently only a proxy.
"""

from __future__ import annotations

from typing import Optional
import httpx
from loguru import logger

from .base import BaseAuthenticator
from ..cache import CacheEngine
from ..config import CredentialConfig, GeneralConfig


class QQMusicAuthenticator(BaseAuthenticator):
    def __init__(
        self, cache: CacheEngine, credentials: CredentialConfig, general: GeneralConfig
    ) -> None:
        super().__init__(cache, credentials, general)

    @property
    def name(self) -> str:
        return "qqmusic"

    def is_configured(self) -> bool:
        return bool(self._credentials.qq_music_api_url)

    async def authenticate(self) -> Optional[str]:
        return self._credentials.qq_music_api_url.rstrip("/") or None

    async def search(self, keyword: str, num: int) -> dict | None:
        """Call qq-music-api search endpoint and return raw JSON payload."""
        base_url = await self.authenticate()
        if not base_url:
            return None

        try:
            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
                resp = await client.get(
                    f"{base_url}/api/search",
                    params={"keyword": keyword, "type": "song", "num": num},
                )
                resp.raise_for_status()
                data = resp.json()
                if not isinstance(data, dict):
                    return None
                return data
        except Exception as e:
            logger.error(f"QQMusic: search request failed: {e}")
            return None

    async def get_lyric(self, mid: str) -> dict | None:
        """Call qq-music-api lyric endpoint and return raw JSON payload."""
        base_url = await self.authenticate()
        if not base_url:
            return None

        try:
            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
                resp = await client.get(
                    f"{base_url}/api/lyric",
                    params={"mid": mid},
                )
                resp.raise_for_status()
                data = resp.json()
                if not isinstance(data, dict):
                    return None
                return data
        except Exception as e:
            logger.error(f"QQMusic: lyric request failed for mid={mid}: {e}")
            return None
@@ -0,0 +1,245 @@
|
|||||||
|
"""
|
||||||
|
Author: Uyanide pywang0608@foxmail.com
|
||||||
|
Date: 2026-04-05 03:18:14
|
||||||
|
Description: Spotify authenticator — TOTP-based access token via SP_DC cookie.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import hashlib
|
||||||
|
import hmac
|
||||||
|
import struct
|
||||||
|
import time
|
||||||
|
from typing import Optional, Tuple
|
||||||
|
import httpx
|
||||||
|
from loguru import logger
|
||||||
|
|
||||||
|
from .base import BaseAuthenticator
|
||||||
|
from ..cache import CacheEngine
|
||||||
|
from ..config import CredentialConfig, GeneralConfig, UA_BROWSER
|
||||||
|
|
||||||
|
_SPOTIFY_TOKEN_URL = "https://open.spotify.com/api/token"
|
||||||
|
_SPOTIFY_SERVER_TIME_URL = "https://open.spotify.com/api/server-time"
|
||||||
|
_SPOTIFY_LYRICS_URL = "https://spclient.wg.spotify.com/color-lyrics/v2/track/"
|
||||||
|
_SPOTIFY_SECRET_URL = (
|
||||||
|
"https://raw.githubusercontent.com/xyloflake/spot-secrets-go"
|
||||||
|
"/refs/heads/main/secrets/secrets.json"
|
||||||
|
)
|
||||||
|
SPOTIFY_BASE_HEADERS = {
|
||||||
|
"User-Agent": UA_BROWSER,
|
||||||
|
"Referer": "https://open.spotify.com/",
|
||||||
|
"Origin": "https://open.spotify.com",
|
||||||
|
"App-Platform": "WebPlayer",
|
||||||
|
"Spotify-App-Version": "1.2.88.21.g8e037c8f",
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
class SpotifyAuthenticator(BaseAuthenticator):
|
||||||
|
def __init__(
|
||||||
|
self, cache: CacheEngine, credentials: CredentialConfig, general: GeneralConfig
|
||||||
|
) -> None:
|
||||||
|
super().__init__(cache, credentials, general)
|
||||||
|
self._cached_secret: Optional[Tuple[str, int]] = None
|
||||||
|
self._cached_token: Optional[str] = None
|
||||||
|
self._token_expires_at: float = 0.0
|
||||||
|
|
||||||
|
@property
|
||||||
|
def name(self) -> str:
|
||||||
|
return "spotify"
|
||||||
|
|
||||||
|
def is_configured(self) -> bool:
|
||||||
|
return bool(self._credentials.spotify_sp_dc)
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def _generate_totp(server_time_s: int, secret: str) -> str:
|
||||||
|
counter = server_time_s // 30
|
||||||
|
counter_bytes = struct.pack(">Q", counter)
|
||||||
|
mac = hmac.new(secret.encode(), counter_bytes, hashlib.sha1).digest()
|
||||||
|
offset = mac[-1] & 0x0F
|
||||||
|
binary_code = (
|
||||||
|
(mac[offset] & 0x7F) << 24
|
||||||
|
| (mac[offset + 1] & 0xFF) << 16
|
||||||
|
| (mac[offset + 2] & 0xFF) << 8
|
||||||
|
| (mac[offset + 3] & 0xFF)
|
||||||
|
)
|
||||||
|
return str(binary_code % (10**6)).zfill(6)
|
||||||
|
|
||||||
|
def _load_cached_token(self) -> Optional[str]:
|
||||||
|
data = self._cache.get_credential("spotify")
|
||||||
|
if not data:
|
||||||
|
return None
|
||||||
|
expires_ms = data.get("accessTokenExpirationTimestampMs", 0)
|
||||||
|
if expires_ms <= int(time.time() * 1000):
|
||||||
|
logger.debug("Spotify: persisted token expired")
|
||||||
|
return None
|
||||||
|
token = data.get("accessToken", "")
|
||||||
|
if not token:
|
||||||
|
return None
|
||||||
|
self._cached_token = token
|
||||||
|
self._token_expires_at = expires_ms / 1000.0
|
||||||
|
logger.debug("Spotify: loaded token from DB cache")
|
||||||
|
return token
|
||||||
|
|
||||||
|
def _save_token(self, body: dict) -> None:
|
||||||
|
expires_ms = body.get("accessTokenExpirationTimestampMs")
|
||||||
|
self._cache.set_credential("spotify", body, expires_ms)
|
||||||
|
logger.debug("Spotify: token saved to DB cache")
|
||||||
|
|
||||||
|
async def _get_server_time(self, client: httpx.AsyncClient) -> Optional[int]:
|
||||||
|
try:
|
||||||
|
res = await client.get(
|
||||||
|
_SPOTIFY_SERVER_TIME_URL, timeout=self._general.http_timeout
|
||||||
|
)
|
||||||
|
res.raise_for_status()
|
||||||
|
data = res.json()
|
||||||
|
if not isinstance(data, dict) or "serverTime" not in data:
|
||||||
|
logger.error(f"Spotify: unexpected server-time response: {data}")
|
||||||
|
return None
|
||||||
|
server_time = data["serverTime"]
|
||||||
|
logger.debug(f"Spotify: server time = {server_time}")
|
||||||
|
return server_time
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Spotify: failed to fetch server time: {e}")
|
||||||
|
return None
|
||||||
|
|
||||||
|
async def _get_secret(self, client: httpx.AsyncClient) -> Optional[Tuple[str, int]]:
|
||||||
|
if self._cached_secret is not None:
|
||||||
|
logger.debug("Spotify: using cached TOTP secret")
|
||||||
|
return self._cached_secret
|
||||||
|
try:
|
||||||
|
res = await client.get(
|
||||||
|
_SPOTIFY_SECRET_URL, timeout=self._general.http_timeout
|
||||||
|
)
|
||||||
|
res.raise_for_status()
|
||||||
|
data = res.json()
|
||||||
|
if not isinstance(data, list) or len(data) == 0:
|
||||||
|
logger.error(
|
||||||
|
f"Spotify: unexpected secrets response (type={type(data).__name__})"
|
||||||
|
)
|
||||||
|
return None
|
||||||
|
last = data[-1]
|
||||||
|
if "secret" not in last or "version" not in last:
|
||||||
|
logger.error(f"Spotify: malformed secret entry: {list(last.keys())}")
|
||||||
|
return None
|
||||||
|
secret_raw = last["secret"]
|
||||||
|
version = last["version"]
|
||||||
|
secret = "".join(
|
||||||
|
str(ord(c) ^ ((i % 33) + 9)) for i, c in enumerate(secret_raw)
|
||||||
|
)
|
||||||
|
logger.debug(f"Spotify: decoded secret v{version} (len={len(secret)})")
|
||||||
|
self._cached_secret = (secret, version)
|
||||||
|
return self._cached_secret
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Spotify: failed to fetch secret: {e}")
|
||||||
|
return None
|
||||||
|
|
||||||
|
async def authenticate(self) -> Optional[str]:
|
||||||
|
if self._cached_token and time.time() < self._token_expires_at - 30:
|
||||||
|
logger.debug("Spotify: using in-memory cached token")
|
||||||
|
return self._cached_token
|
||||||
|
|
||||||
|
db_token = self._load_cached_token()
|
||||||
|
if db_token and time.time() < self._token_expires_at - 30:
|
||||||
|
return db_token
|
||||||
|
|
||||||
|
if not self._credentials.spotify_sp_dc:
|
||||||
|
logger.error("Spotify: spotify_sp_dc not configured — cannot authenticate")
|
||||||
|
return None
|
||||||
|
|
||||||
|
headers = {
|
||||||
|
"Accept": "*/*",
|
||||||
|
"Cookie": f"sp_dc={self._credentials.spotify_sp_dc}",
|
||||||
|
**SPOTIFY_BASE_HEADERS,
|
||||||
|
}
|
||||||
|
|
||||||
|
async with httpx.AsyncClient(headers=headers) as client:
|
||||||
|
server_time = await self._get_server_time(client)
|
||||||
|
        if server_time is None:
            return None

        secret_data = await self._get_secret(client)
        if secret_data is None:
            return None

        secret, version = secret_data
        totp = self._generate_totp(server_time, secret)
        logger.debug(f"Spotify: generated TOTP v{version}: {totp}")

        params = {
            "reason": "init",
            "productType": "web-player",
            "totp": totp,
            "totpVer": str(version),
            "totpServer": totp,
        }

        try:
            res = await client.get(
                _SPOTIFY_TOKEN_URL,
                params=params,
                timeout=self._general.http_timeout,
            )
            if res.status_code != 200:
                logger.error(f"Spotify: token request returned {res.status_code}")
                return None

            body = res.json()
            if not isinstance(body, dict) or "accessToken" not in body:
                logger.error(
                    f"Spotify: unexpected token response keys: {list(body.keys()) if isinstance(body, dict) else type(body).__name__}"
                )
                return None

            token = body["accessToken"]
            if body.get("isAnonymous", False):
                logger.warning(
                    "Spotify: received anonymous token — SP_DC may be invalid"
                )

            expires_ms = body.get("accessTokenExpirationTimestampMs", 0)
            if expires_ms and expires_ms > int(time.time() * 1000):
                self._token_expires_at = expires_ms / 1000.0
            else:
                logger.warning("Spotify: token expiry missing or invalid")
                self._token_expires_at = time.time() + 3600

            self._cached_token = token
            self._save_token(body)
            logger.debug("Spotify: obtained access token")
            return token

        except Exception as e:
            logger.error(f"Spotify: token request failed: {e}")
            return None

    async def get_lyrics(self, track_id: str) -> dict | None:
        """Fetch raw lyrics JSON payload for a Spotify track."""
        token = await self.authenticate()
        if not token:
            return None

        url = (
            f"{_SPOTIFY_LYRICS_URL}{track_id}"
            "?format=json&vocalRemoval=false&market=from_token"
        )
        headers = {
            "Accept": "application/json",
            "Authorization": f"Bearer {token}",
            **SPOTIFY_BASE_HEADERS,
        }

        try:
            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
                res = await client.get(url, headers=headers)
                if res.status_code == 404:
                    return None
                if res.status_code != 200:
                    logger.error(f"Spotify: lyrics API returned {res.status_code}")
                    return None
                data = res.json()
                if not isinstance(data, dict):
                    return None
                return data
        except Exception as e:
            logger.error(f"Spotify: lyrics fetch failed: {e}")
            return None
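The token-expiry fallback above can be restated as a small self-contained sketch. `compute_expiry` is a hypothetical name for illustration; the real code sets `self._token_expires_at` directly.

```python
# Hypothetical restatement of the expiry handling: trust the server's
# millisecond timestamp only when it lies in the future, otherwise
# schedule a refresh one hour out.
def compute_expiry(body: dict, now_ms: int) -> float:
    expires_ms = body.get("accessTokenExpirationTimestampMs", 0)
    if expires_ms and expires_ms > now_ms:
        return expires_ms / 1000.0  # server value, converted ms -> s
    return now_ms / 1000.0 + 3600  # fallback: one hour from now
```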
@@ -0,0 +1,701 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 10:18:03
Description: SQLite-based lyric cache with per-source slot rows, TTL expiration,
             and schema migrations (confidence versioning + slot migration).
"""

from __future__ import annotations

import json
import sqlite3
import hashlib
import time
from typing import Optional
from loguru import logger

from .lrc import LRCData
from .normalize import normalize_for_match as _normalize_for_match
from .config import (
    DURATION_TOLERANCE_MS,
    LEGACY_CONFIDENCE,
    CONFIDENCE_ALGO_VERSION,
    SLOT_SYNCED,
    SLOT_UNSYNCED,
)
from .models import TrackMeta, LyricResult, CacheStatus
from .utils import is_positive_status, select_best_positive


_ALL_SLOTS = (SLOT_SYNCED, SLOT_UNSYNCED)


# Fixed WHERE clause for exact track matching. Column names are hardcoded
# literals; only the values come from user-supplied params — no injection risk.
_TRACK_WHERE = (
    "(? IS NULL OR artist = ?) AND "
    "(? IS NULL OR title = ?) AND "
    "(? IS NULL OR album = ?)"
)


def _track_where_params(track: TrackMeta) -> list:
    return [
        track.artist,
        track.artist,
        track.title,
        track.title,
        track.album,
        track.album,
    ]


def _generate_key(track: TrackMeta, source: str) -> str:
    """Generate a unique cache key from track metadata and source.

    The key is scoped by source so that different fetchers can cache
    independently for the same track (e.g. Spotify synced vs Netease unsynced).
    """
    # Spotify tracks always use their track ID as the primary identifier
    if track.trackid and source == "spotify":
        return f"spotify:{track.trackid}"

    parts = []
    if track.artist:
        parts.append(track.artist)
    if track.title:
        parts.append(track.title)
    if track.album:
        parts.append(track.album)
    if track.length:
        parts.append(str(track.length))

    # Fall back to URL for local files
    if not parts and track.url:
        return f"{source}:url:{track.url}"

    if not parts:
        raise ValueError("Insufficient metadata to generate cache key")

    raw = "|".join(parts)
    digest = hashlib.sha256(raw.encode()).hexdigest()
    return f"{source}:{digest}"

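The key-generation precedence can be sketched stand-alone. This is a simplified stand-in for `_generate_key` that takes plain keyword arguments instead of a `TrackMeta`; the precedence mirrors the helper above: Spotify track IDs short-circuit, metadata parts are hashed, and the URL is a last resort for local files.

```python
import hashlib

# Hypothetical, simplified stand-in for _generate_key.
def generate_key(source, trackid=None, artist=None, title=None,
                 album=None, length=None, url=None) -> str:
    if trackid and source == "spotify":
        return f"spotify:{trackid}"
    parts = [p for p in (artist, title, album,
                         str(length) if length else None) if p]
    if not parts and url:
        return f"{source}:url:{url}"
    if not parts:
        raise ValueError("Insufficient metadata to generate cache key")
    digest = hashlib.sha256("|".join(parts).encode()).hexdigest()
    return f"{source}:{digest}"
```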
class CacheEngine:
    def __init__(self, db_path: str):
        self.db_path = db_path
        self._init_db()

    def _connect(self) -> sqlite3.Connection:
        conn = sqlite3.connect(self.db_path)
        conn.execute("PRAGMA journal_mode=WAL")
        conn.execute("PRAGMA busy_timeout=5000")
        return conn

    def _init_db(self) -> None:
        """Create cache tables and run one-time slot/cache migrations."""
        with self._connect() as conn:
            conn.execute("""
                CREATE TABLE IF NOT EXISTS credentials (
                    name TEXT PRIMARY KEY,
                    data TEXT NOT NULL,
                    expires_at INTEGER
                )
            """)
            cache_exists = conn.execute(
                "SELECT 1 FROM sqlite_master WHERE type='table' AND name='cache'"
            ).fetchone()
            if not cache_exists:
                self._create_cache_table(conn)
                conn.commit()
                return

            cols = {r[1] for r in conn.execute("PRAGMA table_info(cache)").fetchall()}

            if "positive_kind" not in cols:
                # Normalize legacy shape first so migration SQL can safely read all columns.
                if "length" not in cols:
                    conn.execute("ALTER TABLE cache ADD COLUMN length INTEGER")
                if "confidence" not in cols:
                    conn.execute("ALTER TABLE cache ADD COLUMN confidence REAL")
                if "confidence_version" not in cols:
                    conn.execute(
                        "ALTER TABLE cache ADD COLUMN confidence_version INTEGER"
                    )
                self._migrate_legacy_to_slot_cache(conn)
                cols = {
                    r[1] for r in conn.execute("PRAGMA table_info(cache)").fetchall()
                }

            if "confidence_version" not in cols:
                conn.execute("ALTER TABLE cache ADD COLUMN confidence_version INTEGER")
                conn.execute(
                    """
                    UPDATE cache
                    SET confidence = MIN(100.0, COALESCE(confidence, ?) + 10.0)
                    WHERE status = ? AND positive_kind = ?
                    """,
                    (
                        LEGACY_CONFIDENCE,
                        CacheStatus.SUCCESS_UNSYNCED.value,
                        SLOT_UNSYNCED,
                    ),
                )
                conn.execute(
                    "UPDATE cache SET confidence_version = ? WHERE confidence_version IS NULL",
                    (CONFIDENCE_ALGO_VERSION,),
                )
            conn.commit()

    def _create_cache_table(self, conn: sqlite3.Connection) -> None:
        conn.execute("""
            CREATE TABLE IF NOT EXISTS cache (
                key TEXT NOT NULL,
                positive_kind TEXT NOT NULL,
                source TEXT NOT NULL,
                status TEXT NOT NULL,
                lyrics TEXT,
                created_at INTEGER NOT NULL,
                expires_at INTEGER,
                artist TEXT,
                title TEXT,
                album TEXT,
                length INTEGER,
                confidence REAL,
                confidence_version INTEGER,
                PRIMARY KEY (key, positive_kind)
            )
        """)

    def _migrate_legacy_to_slot_cache(self, conn: sqlite3.Connection) -> None:
        """One-time migration from single-row cache to slot-scoped cache rows."""
        conn.execute("ALTER TABLE cache RENAME TO cache_legacy")
        self._create_cache_table(conn)

        positive_statuses = (
            CacheStatus.SUCCESS_SYNCED.value,
            CacheStatus.SUCCESS_UNSYNCED.value,
        )
        negative_statuses = (
            CacheStatus.NOT_FOUND.value,
            CacheStatus.NETWORK_ERROR.value,
        )

        conn.execute(
            """
            INSERT INTO cache (
                key, positive_kind, source, status, lyrics, created_at, expires_at,
                artist, title, album, length, confidence, confidence_version
            )
            SELECT
                key,
                CASE
                    WHEN status = ? THEN ?
                    WHEN status = ? THEN ?
                    ELSE ?
                END,
                source, status, lyrics, created_at, expires_at, artist, title, album, length,
                CASE
                    WHEN status = ? THEN MIN(100.0, COALESCE(confidence, ?) + 10.0)
                    WHEN status = ? THEN COALESCE(confidence, ?)
                    ELSE COALESCE(confidence, 0.0)
                END,
                COALESCE(confidence_version, ?)
            FROM cache_legacy
            WHERE status IN (?, ?)
            """,
            (
                CacheStatus.SUCCESS_SYNCED.value,
                SLOT_SYNCED,
                CacheStatus.SUCCESS_UNSYNCED.value,
                SLOT_UNSYNCED,
                SLOT_SYNCED,
                CacheStatus.SUCCESS_UNSYNCED.value,
                LEGACY_CONFIDENCE,
                CacheStatus.SUCCESS_SYNCED.value,
                LEGACY_CONFIDENCE,
                CONFIDENCE_ALGO_VERSION,
                positive_statuses[0],
                positive_statuses[1],
            ),
        )

        for slot in _ALL_SLOTS:
            conn.execute(
                """
                INSERT INTO cache (
                    key, positive_kind, source, status, lyrics, created_at, expires_at,
                    artist, title, album, length, confidence, confidence_version
                )
                SELECT
                    key, ?, source, status, lyrics, created_at, expires_at, artist, title,
                    album, length,
                    COALESCE(confidence, 0.0),
                    COALESCE(confidence_version, ?)
                FROM cache_legacy
                WHERE status IN (?, ?)
                """,
                (
                    slot,
                    CONFIDENCE_ALGO_VERSION,
                    negative_statuses[0],
                    negative_statuses[1],
                ),
            )

        conn.execute("DROP TABLE cache_legacy")

    @staticmethod
    def _slot_for_status(status: CacheStatus) -> str:
        if status == CacheStatus.SUCCESS_SYNCED:
            return SLOT_SYNCED
        if status == CacheStatus.SUCCESS_UNSYNCED:
            return SLOT_UNSYNCED
        raise ValueError(f"Status {status.value} requires explicit slot")

    # Read

    def get_all(self, track: TrackMeta, source: str) -> list[LyricResult]:
        """Return all non-expired cached slot rows for track/source."""
        try:
            key = _generate_key(track, source)
        except ValueError:
            return []

        now = int(time.time())
        with self._connect() as conn:
            conn.execute(
                "DELETE FROM cache WHERE key = ? AND expires_at IS NOT NULL AND expires_at < ?",
                (key, now),
            )
            conn.commit()
            rows = conn.execute(
                """
                SELECT status, lyrics, source, expires_at, length, confidence
                FROM cache
                WHERE key = ? AND (expires_at IS NULL OR expires_at > ?)
                ORDER BY positive_kind
                """,
                (key, now),
            ).fetchall()

            if not rows:
                logger.debug(f"Cache miss: {source} / {track.display_name()}")
                return []

            # Backfill missing length for all slot rows under the same key.
            if track.length is not None:
                conn.execute(
                    "UPDATE cache SET length = ? WHERE key = ? AND length IS NULL",
                    (track.length, key),
                )
                conn.commit()

        results: list[LyricResult] = []
        for status_str, lyrics, src, expires_at, _cached_length, confidence in rows:
            remaining = expires_at - now if expires_at else None
            status = CacheStatus(status_str)
            if confidence is None:
                if is_positive_status(status):
                    confidence = LEGACY_CONFIDENCE
                else:
                    confidence = 0.0
            results.append(
                LyricResult(
                    status=status,
                    lyrics=LRCData(lyrics) if lyrics else None,
                    source=src,
                    ttl=remaining,
                    confidence=confidence,
                )
            )

        return results

    def get_best(self, track: TrackMeta, sources: list[str]) -> Optional[LyricResult]:
        """Return best positive cached result across sources.

        Negative statuses are ignored by ranking.
        """
        positives: list[LyricResult] = []
        for src in sources:
            rows = self.get_all(track, src)
            positives.extend(r for r in rows if is_positive_status(r.status))

        return select_best_positive(positives, allow_unsynced=True)

    # Write

    def set(
        self,
        track: TrackMeta,
        source: str,
        result: LyricResult,
        ttl_seconds: Optional[int] = None,
        positive_kind: Optional[str] = None,
    ) -> None:
        """Store a lyric result in the cache.

        New/updated rows are tagged with the current confidence algorithm
        version so future migrations can be applied deterministically.
        """
        try:
            key = _generate_key(track, source)
        except ValueError:
            logger.warning("Cannot cache: insufficient track metadata.")
            return

        now = int(time.time())
        expires_at = now + ttl_seconds if ttl_seconds else None

        kinds: list[str]
        if positive_kind is not None:
            kinds = [positive_kind]
        elif result.status in (
            CacheStatus.SUCCESS_SYNCED,
            CacheStatus.SUCCESS_UNSYNCED,
        ):
            kinds = [self._slot_for_status(result.status)]
        else:
            # Convenience for callers that still pass a single negative result.
            kinds = [SLOT_SYNCED, SLOT_UNSYNCED]

        with self._connect() as conn:
            for kind in kinds:
                conn.execute(
                    """INSERT OR REPLACE INTO cache
                    (key, positive_kind, source, status, lyrics, created_at, expires_at,
                     artist, title, album, length, confidence, confidence_version)
                    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
                    (
                        key,
                        kind,
                        source,
                        result.status.value,
                        str(result.lyrics) if result.lyrics else None,
                        now,
                        expires_at,
                        track.artist,
                        track.title,
                        track.album,
                        track.length,
                        result.confidence,
                        CONFIDENCE_ALGO_VERSION,
                    ),
                )
            conn.commit()
        logger.debug(
            f"Cached: {source} / {track.display_name()} "
            f"[{result.status.value}, ttl={ttl_seconds}s]"
        )

    # Delete

    def clear_all(self) -> None:
        """Remove every entry from the cache."""
        with self._connect() as conn:
            conn.execute("DELETE FROM cache")
            conn.commit()
        logger.info("Cache cleared.")

    def clear_track(self, track: TrackMeta) -> None:
        """Remove all cached entries (every source) for a single track."""
        if not self._track_has_meta(track):
            logger.info(f"No cache entries found for {track.display_name()}.")
            return
        with self._connect() as conn:
            cur = conn.execute(
                f"DELETE FROM cache WHERE {_TRACK_WHERE}",
                _track_where_params(track),
            )
            conn.commit()
        if cur.rowcount:
            logger.info(
                f"Cleared {cur.rowcount} cache entries for {track.display_name()}."
            )
        else:
            logger.info(f"No cache entries found for {track.display_name()}.")

    def prune(self) -> int:
        """Remove all expired entries. Returns the number of rows deleted."""
        with self._connect() as conn:
            cur = conn.execute(
                "DELETE FROM cache WHERE expires_at IS NOT NULL AND expires_at < ?",
                (int(time.time()),),
            )
            conn.commit()
        count = cur.rowcount
        logger.info(f"Pruned {count} expired cache entries.")
        return count

    @staticmethod
    def _track_has_meta(track: TrackMeta) -> bool:
        return bool(track.artist or track.title or track.album)

    # Exact cross-source search

    def find_best_positive(
        self, track: TrackMeta, status: CacheStatus
    ) -> Optional[LyricResult]:
        """Find the best positive (synced/unsynced) cache entry for track.

        Uses exact metadata match (artist + title + album) across all sources.
        Returns the highest-confidence entry, or None.
        """
        if not self._track_has_meta(track):
            return None

        now = int(time.time())
        with self._connect() as conn:
            conn.row_factory = sqlite3.Row
            rows = conn.execute(
                f"SELECT status, lyrics, source, confidence FROM cache"
                f" WHERE {_TRACK_WHERE}"
                " AND status = ?"
                " AND positive_kind = ?"
                " AND (expires_at IS NULL OR expires_at > ?)"
                " ORDER BY COALESCE(confidence, ?) DESC,"
                " CASE status WHEN ? THEN 0 ELSE 1 END,"
                " created_at DESC",
                _track_where_params(track)
                + [
                    status.value,
                    self._slot_for_status(status),
                    now,
                    LEGACY_CONFIDENCE,
                    CacheStatus.SUCCESS_SYNCED.value,
                ],
            ).fetchall()

        if not rows:
            return None

        row = dict(rows[0])
        confidence = row["confidence"]
        if confidence is None:
            confidence = LEGACY_CONFIDENCE
        return LyricResult(
            status=CacheStatus(row["status"]),
            lyrics=LRCData(row["lyrics"]) if row["lyrics"] else None,
            source="cache-search",
            confidence=confidence,
        )

    # Fuzzy search

    def search_by_meta(
        self,
        title: Optional[str],
        length: Optional[int] = None,
    ) -> list[dict]:
        """Search cache for lyrics matching title with fuzzy normalization.

        Artist is intentionally not filtered here — artist names can differ
        significantly across languages (e.g. Japanese romanization vs. kanji),
        making hard artist filtering unreliable for cross-language queries.

        Ignores artist, album and source. Only returns positive results
        (synced/unsynced) that have not expired. When length is provided,
        filters by duration tolerance and sorts by closest match.
        """
        if not title:
            return []

        now = int(time.time())
        with self._connect() as conn:
            conn.row_factory = sqlite3.Row
            rows = conn.execute(
                """SELECT * FROM cache
                WHERE status IN (?, ?)
                AND (expires_at IS NULL OR expires_at > ?)""",
                (
                    CacheStatus.SUCCESS_SYNCED.value,
                    CacheStatus.SUCCESS_UNSYNCED.value,
                    now,
                ),
            ).fetchall()

        norm_title = _normalize_for_match(title)

        matches: list[dict] = []
        for row in rows:
            row_dict = dict(row)
            # Title must match
            row_title = row_dict.get("title") or ""
            if _normalize_for_match(row_title) != norm_title:
                continue
            matches.append(row_dict)

        # Duration filtering
        if length is not None and matches:
            scored = []
            for m in matches:
                row_len = m.get("length")
                if row_len is not None:
                    diff = abs(row_len - length)
                    if diff <= DURATION_TOLERANCE_MS:
                        scored.append((diff, m))
                else:
                    # No duration info in cache — still a candidate but lower priority
                    scored.append((DURATION_TOLERANCE_MS, m))
            scored.sort(
                key=lambda x: (
                    x[0],
                    -(x[1].get("confidence") or 0),
                    x[1].get("status") != CacheStatus.SUCCESS_SYNCED.value,
                    -(x[1].get("created_at") or 0),
                )
            )
            matches = [m for _, m in scored]

        return matches
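The candidate ordering used in the duration filter above can be exercised in isolation: smallest duration difference first, then higher confidence, synced before unsynced, newest first. The status strings below are assumed values, not taken from the project's enum.

```python
# Stand-alone sketch of the search_by_meta sort key (names hypothetical).
SYNCED = "success_synced"

def sort_key(item):
    diff, row = item
    return (
        diff,
        -(row.get("confidence") or 0),
        row.get("status") != SYNCED,   # False (synced) sorts before True
        -(row.get("created_at") or 0),
    )

scored = [
    (500, {"confidence": 90, "status": "success_unsynced", "created_at": 10}),
    (500, {"confidence": 90, "status": SYNCED, "created_at": 5}),
    (100, {"confidence": 50, "status": "success_unsynced", "created_at": 1}),
]
scored.sort(key=sort_key)
```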
    # Update

    def update_confidence(
        self,
        track: TrackMeta,
        confidence: float,
        source: str,
    ) -> int:
        """Update confidence for a specific source's cache entry matching track.

        Returns the number of rows updated.
        """
        if not self._track_has_meta(track):
            return 0
        with self._connect() as conn:
            cur = conn.execute(
                f"UPDATE cache SET confidence = ? WHERE {_TRACK_WHERE} AND source = ?",
                [confidence] + _track_where_params(track) + [source],
            )
            conn.commit()
            return cur.rowcount

    # Query / inspect

    def query_track(self, track: TrackMeta) -> list[dict]:
        """Return all cached rows for a given track (across all sources)."""
        if not self._track_has_meta(track):
            return []
        with self._connect() as conn:
            conn.row_factory = sqlite3.Row
            return [
                dict(r)
                for r in conn.execute(
                    f"SELECT * FROM cache WHERE {_TRACK_WHERE}",
                    _track_where_params(track),
                ).fetchall()
            ]

    # Credentials

    def get_credential(self, name: str) -> Optional[dict]:
        """Return cached credential data if present and not expired."""
        now_ms = int(time.time() * 1000)
        with self._connect() as conn:
            conn.row_factory = sqlite3.Row
            row = conn.execute(
                "SELECT data FROM credentials WHERE name = ? AND (expires_at IS NULL OR expires_at > ?)",
                (name, now_ms),
            ).fetchone()
        if row is None:
            return None
        try:
            return json.loads(row["data"])
        except (json.JSONDecodeError, KeyError):
            return None

    def set_credential(
        self, name: str, data: dict, expires_at_ms: Optional[int] = None
    ) -> None:
        """Persist credential data, optionally with an expiry timestamp (Unix ms)."""
        with self._connect() as conn:
            conn.execute(
                "INSERT OR REPLACE INTO credentials (name, data, expires_at) VALUES (?, ?, ?)",
                (name, json.dumps(data), expires_at_ms),
            )
            conn.commit()

    def query_all(self) -> list[dict]:
        """Return every row in the cache table."""
        with self._connect() as conn:
            conn.row_factory = sqlite3.Row
            return [dict(r) for r in conn.execute("SELECT * FROM cache").fetchall()]

    def stats(self) -> dict:
        """Return aggregate cache statistics."""
        now = int(time.time())
        with self._connect() as conn:
            total = conn.execute("SELECT COUNT(*) FROM cache").fetchone()[0]
            expired = conn.execute(
                "SELECT COUNT(*) FROM cache WHERE expires_at IS NOT NULL AND expires_at < ?",
                (now,),
            ).fetchone()[0]
            by_status = dict(
                conn.execute(
                    "SELECT status, COUNT(*) FROM cache GROUP BY status"
                ).fetchall()
            )
            by_source = dict(
                conn.execute(
                    "SELECT source, COUNT(*) FROM cache GROUP BY source"
                ).fetchall()
            )
            by_slot = dict(
                conn.execute(
                    "SELECT positive_kind, COUNT(*) FROM cache GROUP BY positive_kind"
                ).fetchall()
            )
            # Source × Status cross-tabulation
            source_status = conn.execute(
                "SELECT source, status, COUNT(*) FROM cache GROUP BY source, status"
            ).fetchall()
            # Confidence buckets (only for positive statuses)
            confidence_rows = conn.execute(
                "SELECT confidence FROM cache WHERE status IN (?, ?)",
                (
                    CacheStatus.SUCCESS_SYNCED.value,
                    CacheStatus.SUCCESS_UNSYNCED.value,
                ),
            ).fetchall()

        # Build source×status table: {source: {status: count}}
        source_status_table: dict[str, dict[str, int]] = {}
        for src, status, count in source_status:
            source_status_table.setdefault(src, {})[status] = count

        # Build confidence buckets
        buckets = {
            "legacy (NULL)": 0,
            "0-24": 0,
            "25-49": 0,
            "50-79": 0,
            "80-99": 0,
            "100": 0,
        }
        for (conf,) in confidence_rows:
            if conf is None:
                buckets["legacy (NULL)"] += 1
            elif conf >= 100:
                buckets["100"] += 1
            elif conf >= 80:
                buckets["80-99"] += 1
            elif conf >= 50:
                buckets["50-79"] += 1
            elif conf >= 25:
                buckets["25-49"] += 1
            else:
                buckets["0-24"] += 1

        return {
            "total": total,
            "expired": expired,
            "active": total - expired,
            "by_status": by_status,
            "by_source": by_source,
            "by_slot": by_slot,
            "source_status": source_status_table,
            "confidence_buckets": buckets,
        }
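The confidence histogram in `stats()` maps each positive row into a fixed set of buckets, with NULL confidence marking rows written before versioning was introduced. A direct, stand-alone restatement of that bucketing:

```python
# Sketch of the stats() confidence bucketing (function name is illustrative).
def bucket(conf) -> str:
    if conf is None:
        return "legacy (NULL)"
    if conf >= 100:
        return "100"
    if conf >= 80:
        return "80-99"
    if conf >= 50:
        return "50-79"
    if conf >= 25:
        return "25-49"
    return "0-24"
```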
@@ -1,24 +1,37 @@
|
|||||||
"""
|
"""
|
||||||
Author: Uyanide pywang0608@foxmail.com
|
Author: Uyanide pywang0608@foxmail.com
|
||||||
Date: 2026-03-26 02:04:39
|
Date: 2026-03-26 02:04:39
|
||||||
Description: CLI interface
|
Description: CLI interface.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
import sys
|
import sys
|
||||||
import time
|
import time
|
||||||
import os
|
import os
|
||||||
|
import asyncio
|
||||||
|
import json
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from typing import Annotated
|
from typing import Annotated
|
||||||
from urllib.parse import quote
|
from urllib.parse import quote
|
||||||
import cyclopts
|
import cyclopts
|
||||||
from loguru import logger
|
from loguru import logger
|
||||||
|
|
||||||
from .config import DB_PATH, enable_debug
|
from .config import (
|
||||||
from .models import TrackMeta, CacheStatus
|
DB_PATH,
|
||||||
|
AppConfig,
|
||||||
|
load_config,
|
||||||
|
enable_debug,
|
||||||
|
)
|
||||||
|
from .utils import get_sidecar_path
|
||||||
|
from .models import TrackMeta
|
||||||
from .mpris import get_current_track
|
from .mpris import get_current_track
|
||||||
from .core import LrcManager
|
from .core import LrcManager
|
||||||
from .fetchers import FetcherMethodType
|
from .fetchers import FetcherMethodType
|
||||||
from .lrc import get_sidecar_path
|
from .watch import WatchCoordinator
|
||||||
|
from .watch.control import ControlClient, parse_delta
|
||||||
|
from .watch.view.pipe import PipeOutput
|
||||||
|
from .watch.view.print import PrintOutput
|
||||||
|
|
||||||
|
|
||||||
app = cyclopts.App(
|
app = cyclopts.App(
|
||||||
@@ -29,10 +42,17 @@ app.register_install_completion_command()
|
|||||||
cache_app = cyclopts.App(name="cache", help="Manage the local SQLite cache.")
|
cache_app = cyclopts.App(name="cache", help="Manage the local SQLite cache.")
|
||||||
app.command(cache_app)
|
app.command(cache_app)
|
||||||
|
|
||||||
|
watch_app = cyclopts.App(name="watch", help="Watch MPRIS and output lyrics.")
|
||||||
|
app.command(watch_app)
|
||||||
|
|
||||||
|
ctl_app = cyclopts.App(name="ctl", help="Control a running watch session.")
|
||||||
|
watch_app.command(ctl_app)
|
||||||
|
|
||||||
|
|
||||||
# Global state set by the meta launcher
|
# Global state set by the meta launcher
|
||||||
_player: str | None = None
|
_player: str | None = None
|
||||||
_db_path: str | None = None
|
_db_path: str | None = None
|
||||||
|
_app_config: AppConfig = AppConfig()
|
||||||
|
|
||||||
# Will be initialized before any command runs, safe to set to None here
|
# Will be initialized before any command runs, safe to set to None here
|
||||||
manager: LrcManager = None # type: ignore
|
manager: LrcManager = None # type: ignore
|
||||||
@@ -51,7 +71,7 @@ def launcher(
|
|||||||
str | None,
|
str | None,
|
||||||
cyclopts.Parameter(
|
cyclopts.Parameter(
|
||||||
name=["--player", "-p"],
|
name=["--player", "-p"],
|
||||||
help="Target a specific MPRIS player using its DBus name or a portion thereof.",
|
help="Target a specific MPRIS player using its DBus name or a portion thereof. Bypasses player_blacklist.",
|
||||||
),
|
),
|
||||||
] = None,
|
] = None,
|
||||||
db_path: Annotated[
|
db_path: Annotated[
|
||||||
@@ -62,13 +82,13 @@ def launcher(
|
|||||||
),
|
),
|
||||||
] = None,
|
] = None,
|
||||||
):
|
):
|
||||||
global _player, _db_path
|
global _player, _db_path, _app_config, manager
|
||||||
if debug:
|
if debug:
|
||||||
enable_debug()
|
enable_debug()
|
||||||
_player = player
|
_player = player
|
||||||
_db_path = str(Path(db_path).resolve()) if db_path else DB_PATH
|
_db_path = str(Path(db_path).resolve()) if db_path else DB_PATH
|
||||||
global manager
|
_app_config = load_config()
|
||||||
manager = LrcManager(db_path=_db_path)
|
manager = LrcManager(db_path=_db_path, config=_app_config)
|
||||||
app(tokens)
|
app(tokens)
|
||||||
|
|
||||||
|
|
||||||
@@ -88,21 +108,37 @@ def fetch(
|
|||||||
name="--no-cache", negative="", help="Bypass the cache for this request."
|
name="--no-cache", negative="", help="Bypass the cache for this request."
|
||||||
),
|
),
|
||||||
] = False,
|
] = False,
|
||||||
only_synced: Annotated[
|
allow_unsynced: Annotated[
|
||||||
bool,
|
bool,
|
||||||
cyclopts.Parameter(
|
cyclopts.Parameter(
|
||||||
name="--only-synced", negative="", help="Only accept synced (timed) lyrics."
|
name="--allow-unsynced",
|
||||||
|
negative="",
|
||||||
|
help="Allow unsynced lyrics (will be displayed with all time tags set to [00:00.00]).",
|
||||||
),
|
),
|
||||||
] = False,
|
] = False,
|
||||||
plain: Annotated[
|
plain: Annotated[
|
||||||
bool,
|
bool,
|
||||||
cyclopts.Parameter(
|
cyclopts.Parameter(
|
||||||
name="--plain", negative="", help="Output only the raw lyrics without tags."
|
name="--plain",
|
||||||
|
negative="",
|
||||||
|
help="Output only plain lyrics without tags (highest priority over --normalize).",
|
||||||
|
),
|
||||||
|
] = False,
|
||||||
|
normalize: Annotated[
|
||||||
|
bool,
|
||||||
|
cyclopts.Parameter(
|
||||||
|
name="--normalize",
|
||||||
|
negative="",
|
||||||
|
help="Output normalized LRC (ignored when --plain is also set).",
|
||||||
),
|
),
|
||||||
] = False,
|
] = False,
|
||||||
):
|
):
|
||||||
"""Fetch and print lyrics for the currently playing track."""
|
"""Fetch and print lyrics for the currently playing track."""
|
||||||
track = get_current_track(_player)
|
track = get_current_track(
|
||||||
|
_player,
|
||||||
|
preferred_player=_app_config.general.preferred_player,
|
||||||
|
player_blacklist=_app_config.general.player_blacklist,
|
||||||
|
)
|
||||||
|
|
||||||
if not track:
|
if not track:
|
||||||
logger.error("No active playing track found.")
|
logger.error("No active playing track found.")
|
||||||
@@ -110,17 +146,23 @@ def fetch(
 
     logger.info(f"Track: {track.display_name()}")
 
-    result = manager.fetch_for_track(track, force_method=method, bypass_cache=no_cache)
+    result = manager.fetch_for_track(
+        track,
+        force_method=method,
+        bypass_cache=no_cache,
+        allow_unsynced=allow_unsynced,
+    )
 
     if not result or not result.lyrics:
         logger.error("No lyrics found.")
         sys.exit(1)
 
-    if only_synced and result.status != CacheStatus.SUCCESS_SYNCED:
-        logger.error("Only unsynced lyrics available (--only-synced requested).")
-        sys.exit(1)
-    result.lyrics.print_lyrics(plain=plain)
+    if plain:
+        print(result.lyrics.to_plain())
+    elif normalize:
+        print(result.lyrics.to_normalized_text())
+    else:
+        print(result.lyrics.to_text())
 
 
 # search
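The new output flags follow a fixed precedence spelled out in the help text: `--plain` wins over `--normalize`, and with neither set the full tagged LRC is printed. A minimal sketch of that branch order (the helper name `pick_output` is hypothetical; the method names come from the diff):

```python
def pick_output(plain: bool, normalize: bool) -> str:
    # Mirrors the if/elif/else chain in fetch/search: --plain has
    # highest priority, --normalize is ignored when --plain is set.
    if plain:
        return "to_plain"
    if normalize:
        return "to_normalized_text"
    return "to_text"

print(pick_output(plain=True, normalize=True))  # --plain wins: to_plain
```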
@@ -165,16 +207,28 @@ def search(
             name="--no-cache", negative="", help="Bypass the cache for this request."
         ),
     ] = False,
-    only_synced: Annotated[
+    allow_unsynced: Annotated[
         bool,
         cyclopts.Parameter(
-            name="--only-synced", negative="", help="Only accept synced (timed) lyrics."
+            name="--allow-unsynced",
+            negative="",
+            help="Allow unsynced lyrics (will be displayed with all time tags set to [00:00.00]).",
         ),
     ] = False,
     plain: Annotated[
         bool,
         cyclopts.Parameter(
-            name="--plain", negative="", help="Output only the raw lyrics without tags."
+            name="--plain",
+            negative="",
+            help="Output only plain lyrics without tags (highest priority over --normalize).",
+        ),
+    ] = False,
+    normalize: Annotated[
+        bool,
+        cyclopts.Parameter(
+            name="--normalize",
+            negative="",
+            help="Output normalized LRC (ignored when --plain is also set).",
         ),
     ] = False,
 ):
@@ -198,17 +252,23 @@ def search(
 
     logger.info(f"Track: {track.display_name()}")
 
-    result = manager.fetch_for_track(track, force_method=method, bypass_cache=no_cache)
+    result = manager.fetch_for_track(
+        track,
+        force_method=method,
+        bypass_cache=no_cache,
+        allow_unsynced=allow_unsynced,
+    )
 
     if not result or not result.lyrics:
         logger.error("No lyrics found.")
         sys.exit(1)
 
-    if only_synced and result.status != CacheStatus.SUCCESS_SYNCED:
-        logger.error("Only unsynced lyrics available (--only-synced requested).")
-        sys.exit(1)
-    result.lyrics.print_lyrics(plain=plain)
+    if plain:
+        print(result.lyrics.to_plain())
+    elif normalize:
+        print(result.lyrics.to_normalized_text())
+    else:
+        print(result.lyrics.to_text())
 
 
 # export
@@ -236,20 +296,47 @@ def export(
             name=["--overwrite", "-f"], negative="", help="Overwrite existing file."
         ),
     ] = False,
+    allow_unsynced: Annotated[
+        bool,
+        cyclopts.Parameter(
+            name="--allow-unsynced",
+            negative="",
+            help="Allow unsynced lyrics (will be exported with all time tags set to [00:00.00] if --plain is not present).",
+        ),
+    ] = False,
     plain: Annotated[
         bool,
         cyclopts.Parameter(
-            name="--plain", negative="", help="Export only the raw lyrics without tags."
+            name="--plain",
+            negative="",
+            help="Export only plain lyrics (.txt, highest priority over --normalize).",
+        ),
+    ] = False,
+    normalize: Annotated[
+        bool,
+        cyclopts.Parameter(
+            name="--normalize",
+            negative="",
+            help="Export normalized LRC output (ignored when --plain is also set).",
         ),
     ] = False,
 ):
     """Export lyrics of the current track to a .lrc file."""
-    track = get_current_track(_player)
+    track = get_current_track(
+        _player,
+        preferred_player=_app_config.general.preferred_player,
+        player_blacklist=_app_config.general.player_blacklist,
+    )
     if not track:
         logger.error("No active playing track found.")
         sys.exit(1)
 
-    result = manager.fetch_for_track(track, force_method=method, bypass_cache=no_cache)
+    result = manager.fetch_for_track(
+        track,
+        force_method=method,
+        bypass_cache=no_cache,
+        allow_unsynced=allow_unsynced,
+    )
     if not result or not result.lyrics:
         logger.error("No lyrics available to export.")
         sys.exit(1)
@@ -288,14 +375,124 @@ def export(
         with open(output, "w", encoding="utf-8") as f:
             if plain:
                 f.write(result.lyrics.to_plain())
+            elif normalize:
+                f.write(result.lyrics.to_normalized_text())
             else:
-                f.write(str(result.lyrics))
+                f.write(result.lyrics.to_text())
         logger.info(f"Exported lyrics to {output}")
     except Exception as e:
         logger.error(f"Failed to write file: {e}")
         sys.exit(1)
 
 
+# watch subcommands
+
+
+@watch_app.command
+def pipe(
+    before: Annotated[
+        int,
+        cyclopts.Parameter(
+            name=["--before", "-b"],
+            help="Number of lyric lines to show before current line.",
+        ),
+    ] = 0,
+    after: Annotated[
+        int,
+        cyclopts.Parameter(
+            name=["--after", "-a"],
+            help="Number of lyric lines to show after current line.",
+        ),
+    ] = 0,
+    no_newline: Annotated[
+        bool,
+        cyclopts.Parameter(
+            name=["--no-newline", "-n"],
+            negative="",
+            help="Do not append a new line after the lyric output.",
+        ),
+    ] = False,
+):
+    """Watch active player and continuously print lyric window to stdout."""
+    logger.info(
+        "Starting watch pipe (player filter: {})",
+        _player or "<none>",
+    )
+    output = PipeOutput(
+        before=max(0, before), after=max(0, after), no_newline=no_newline
+    )
+    try:
+        session = WatchCoordinator(
+            manager,
+            output,
+            player_hint=_player,
+            config=_app_config,
+        )
+        success = asyncio.run(session.run())
+        if not success:
+            sys.exit(1)
+    except KeyboardInterrupt:
+        logger.info("Watch stopped.")
+
+
+@watch_app.command(name="print")
+def watch_print(
+    plain: Annotated[
+        bool,
+        cyclopts.Parameter(
+            name="--plain",
+            negative="",
+            help="Output plain text (strips all tags). Takes priority over --normalize.",
+        ),
+    ] = False,
+) -> None:
+    """Watch active player and print all lyrics to stdout once per track change."""
+    logger.info(
+        "Starting watch print (player filter: {})",
+        _player or "<none>",
+    )
+    output = PrintOutput(plain=plain)
+    try:
+        session = WatchCoordinator(
+            manager,
+            output,
+            player_hint=_player,
+            config=_app_config,
+        )
+        success = asyncio.run(session.run())
+        if not success:
+            sys.exit(1)
+    except KeyboardInterrupt:
+        logger.info("Watch stopped.")
+
+
+@ctl_app.command
+def offset(delta: str) -> None:
+    """Adjust watch offset. Examples: +200, -200, 0."""
+    parsed_ok, parsed_delta, parse_error = parse_delta(delta)
+    if not parsed_ok or parsed_delta is None:
+        logger.error(parse_error or "Invalid offset delta")
+        sys.exit(1)
+
+    response = ControlClient(_app_config.watch.socket_path).send(
+        {"cmd": "offset", "delta": parsed_delta}
+    )
+    if not response.get("ok"):
+        logger.error(response.get("error", "Unknown error"))
+        sys.exit(1)
+    print(json.dumps(response, indent=2, ensure_ascii=False))
+
+
+@ctl_app.command
+def status() -> None:
+    """Print current watch session status as JSON."""
+    response = ControlClient(_app_config.watch.socket_path).send({"cmd": "status"})
+    if not response.get("ok"):
+        logger.error(response.get("error", "Unknown error"))
+        sys.exit(1)
+    print(json.dumps(response, indent=2, ensure_ascii=False))
+
+
 # cache subcommands
 
 
@@ -318,7 +515,11 @@ def query(
         print()
         return
 
-    track = get_current_track(_player)
+    track = get_current_track(
+        _player,
+        preferred_player=_app_config.general.preferred_player,
+        player_blacklist=_app_config.general.player_blacklist,
+    )
     if not track:
         logger.error("No active playing track found.")
         sys.exit(1)
@@ -338,7 +539,11 @@ def clear(
         manager.cache.clear_all()
         return
 
-    track = get_current_track(_player)
+    track = get_current_track(
+        _player,
+        preferred_player=_app_config.general.preferred_player,
+        player_blacklist=_app_config.general.player_blacklist,
+    )
     if not track:
         logger.error("No active playing track found.")
         sys.exit(1)
@@ -360,6 +565,13 @@ def stats():
     print(f"Active : {s['active']}")
     print(f"Expired : {s['expired']}")
 
+    by_slot = s.get("by_slot", {})
+    if by_slot:
+        print(
+            "Slots : "
+            + ", ".join(f"{k}={v}" for k, v in sorted(by_slot.items()))
+        )
+
     # Source × Status table
     table = s.get("source_status", {})
     if table:
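The added `stats` block joins per-slot counters into a single line, sorted by slot key. A quick check of that expression with hypothetical counts (the dict literal below is illustrative, not real cache output):

```python
# Same join expression as the added "Slots : " line in stats():
# each slot rendered as key=value, keys in sorted order.
by_slot = {"UNSYNCED": 2, "SYNCED": 5}
line = "Slots : " + ", ".join(f"{k}={v}" for k, v in sorted(by_slot.items()))
print(line)  # Slots : SYNCED=5, UNSYNCED=2
```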
@@ -421,7 +633,11 @@ def confidence(
         logger.error("Score must be between 0 and 100.")
         sys.exit(1)
 
-    track = get_current_track(_player)
+    track = get_current_track(
+        _player,
+        preferred_player=_app_config.general.preferred_player,
+        player_blacklist=_app_config.general.player_blacklist,
+    )
     if not track:
         logger.error("No active playing track found.")
         sys.exit(1)
@@ -445,7 +661,11 @@ def insert(
     ] = None,
 ):
     """Manually insert lyrics into the cache for the current track."""
-    track = get_current_track(_player)
+    track = get_current_track(
+        _player,
+        preferred_player=_app_config.general.preferred_player,
+        player_blacklist=_app_config.general.player_blacklist,
+    )
     if not track:
         logger.error("No active playing track found.")
         sys.exit(1)
@@ -490,6 +710,7 @@ def _print_cache_row(row: dict, indent: str = "") -> None:
     """Pretty-print a single cache row."""
    now = int(time.time())
     source = row.get("source", "?")
+    slot = row.get("positive_kind", "?")
     status = row.get("status", "?")
     artist = row.get("artist", "")
     title = row.get("title", "")
@@ -500,7 +721,7 @@ def _print_cache_row(row: dict, indent: str = "") -> None:
     confidence = row.get("confidence")
 
     name = f"{artist} - {title}" if artist and title else row.get("key", "?")
-    print(f"{indent}[{source}] {name}")
+    print(f"{indent}[{source}/{slot}] {name}")
     if album:
         print(f"{indent} Album : {album}")
     print(f"{indent} Status : {status}")
@@ -0,0 +1,208 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-03-25 10:17:56
+Description: Global configuration constants, typed config dataclasses, and logger setup.
+"""
+
+from __future__ import annotations
+
+import dataclasses
+import os
+import sys
+import tomllib
+from dataclasses import dataclass, field
+from pathlib import Path
+from typing import Any, get_type_hints
+
+from platformdirs import user_cache_dir, user_config_dir
+from loguru import logger
+from importlib.metadata import version
+
+# Application
+APP_NAME = "lrx-cli"
+APP_AUTHOR = "Uyanide"
+APP_VERSION = version(APP_NAME)
+
+# Paths
+CACHE_DIR = user_cache_dir(APP_NAME, APP_AUTHOR)
+DB_PATH = os.path.join(CACHE_DIR, "cache.db")
+# Slot identifiers used by per-slot cache rows.
+SLOT_SYNCED = "SYNCED"
+SLOT_UNSYNCED = "UNSYNCED"
+
+_WATCH_SOCKET_PATH = str(Path(CACHE_DIR) / "watch.sock")
+
+# Cache TTLs (seconds)
+TTL_SYNCED = None  # never expires
+TTL_UNSYNCED = None  # never expires
+TTL_NOT_FOUND = 86400 * 3  # 3 days
+TTL_NETWORK_ERROR = 3600  # 1 hour
+
+# Search
+DURATION_TOLERANCE_MS = 3000  # max duration mismatch for search matching
+
+# Confidence scoring weights (sum to 100)
+SCORE_W_TITLE = 40.0
+SCORE_W_ARTIST = 30.0
+SCORE_W_ALBUM = 10.0
+SCORE_W_DURATION = 10.0
+SCORE_W_SYNCED = 10.0
+CONFIDENCE_ALGO_VERSION = 1
+
+# Confidence thresholds
+MIN_CONFIDENCE = 40.0  # below this, candidate is rejected
+HIGH_CONFIDENCE = 80.0  # at or above this, stop searching early
+
+# Multi-candidate fetching
+MULTI_CANDIDATE_LIMIT = 3  # max candidates to try per search-based fetcher
+MULTI_CANDIDATE_DELAY_S = 0.2  # delay between sequential lyric fetches
+
+# Legacy cache rows (no confidence stored) get a base score by sync status
+LEGACY_CONFIDENCE = 50.0
+
+# User-Agents
+UA_BROWSER = "Mozilla/5.0 (X11; Linux x86_64; rv:149.0) Gecko/20100101 Firefox/149.0"
+UA_LRX = f"LRX-CLI {APP_VERSION} (https://github.com/Uyanide/lrx-cli)"
+
+MUSIXMATCH_COOLDOWN_MS = 600_000  # 10 minutes
+
+os.makedirs(CACHE_DIR, exist_ok=True)
+
+
+DEFAULT_PREFERRED_PLAYER = ""
+DEFAULT_PLAYER_BLACKLIST: tuple[str, ...] = (
+    "firefox",
+    "zen",
+    "chrome",
+    "chromium",
+    "vivaldi",
+    "edge",
+    "opera",
+    "mpv",
+)
+
+
+@dataclass(frozen=True)
+class GeneralConfig:
+    preferred_player: str = DEFAULT_PREFERRED_PLAYER
+    player_blacklist: tuple[str, ...] = DEFAULT_PLAYER_BLACKLIST
+    http_timeout: float = 10.0
+
+
+@dataclass(frozen=True)
+class CredentialConfig:
+    spotify_sp_dc: str = ""
+    musixmatch_usertoken: str = ""
+    qq_music_api_url: str = ""
+
+
+@dataclass(frozen=True)
+class WatchConfig:
+    debounce_ms: int = 400
+    calibration_interval_s: float = 3.0
+    position_tick_ms: int = 50
+    socket_path: str = field(default_factory=lambda: _WATCH_SOCKET_PATH)
+
+
+@dataclass(frozen=True)
+class AppConfig:
+    general: GeneralConfig = field(default_factory=GeneralConfig)
+    credentials: CredentialConfig = field(default_factory=CredentialConfig)
+    watch: WatchConfig = field(default_factory=WatchConfig)
+
+
+_CONFIG_PATH = Path(user_config_dir(APP_NAME, APP_AUTHOR)) / "config.toml"
+
+
+def _coerce(val: Any, hint: Any, section: str, name: str) -> Any:
+    """Coerce and validate one TOML value against its declared field type."""
+    if hint is str:
+        if not isinstance(val, str):
+            raise ValueError(
+                f"[{section}].{name}: expected str, got {type(val).__name__}"
+            )
+        return val
+    if hint is int:
+        if not isinstance(val, int) or isinstance(val, bool):
+            raise ValueError(
+                f"[{section}].{name}: expected int, got {type(val).__name__}"
+            )
+        return val
+    if hint is float:
+        if isinstance(val, bool):
+            raise ValueError(f"[{section}].{name}: expected float, got bool")
+        if isinstance(val, (int, float)):
+            return float(val)
+        raise ValueError(
+            f"[{section}].{name}: expected float, got {type(val).__name__}"
+        )
+    origin = getattr(hint, "__origin__", None)
+    if origin is tuple:
+        if not isinstance(val, list):
+            raise ValueError(
+                f"[{section}].{name}: expected array, got {type(val).__name__}"
+            )
+        for i, item in enumerate(val):
+            if not isinstance(item, str):
+                raise ValueError(
+                    f"[{section}].{name}[{i}]: expected str, got {type(item).__name__}"
+                )
+        return tuple(val)
+    raise ValueError(f"[{section}].{name}: unsupported field type {hint!r}")
+
+
+def _parse_section(raw: dict[str, Any], cls: type, section: str) -> Any:
+    """Parse one TOML section dict into a frozen dataclass, rejecting unknown keys."""
+    fields_map = {f.name: f for f in dataclasses.fields(cls)}
+    hints = get_type_hints(cls)
+
+    unknown = set(raw) - set(fields_map)
+    if unknown:
+        raise ValueError(
+            f"Unknown config keys in [{section}]: {', '.join(sorted(unknown))}"
+        )
+
+    kwargs: dict[str, Any] = {}
+    for name, f in fields_map.items():
+        if name not in raw:
+            if f.default is not dataclasses.MISSING:
+                kwargs[name] = f.default
+            elif f.default_factory is not dataclasses.MISSING:  # type: ignore[misc]
+                kwargs[name] = f.default_factory()
+            continue
+        kwargs[name] = _coerce(raw[name], hints[name], section, name)
+
+    return cls(**kwargs)
+
+
+def load_config(path: Path | None = None) -> AppConfig:
+    """Load AppConfig from TOML file; return all-defaults when file is absent."""
+    resolved = path or _CONFIG_PATH
+    if not resolved.exists():
+        return AppConfig()
+    with open(resolved, "rb") as f:
+        data = tomllib.load(f)
+    return AppConfig(
+        general=_parse_section(data.get("general", {}), GeneralConfig, "general"),
+        credentials=_parse_section(
+            data.get("credentials", {}), CredentialConfig, "credentials"
+        ),
+        watch=_parse_section(data.get("watch", {}), WatchConfig, "watch"),
+    )
+
+
+_LOG_FORMAT = (
+    "<green>{time:YYYY-MM-DD HH:mm:ss}</green> | "
+    "<level>{level: <8}</level> | "
+    "<cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - "
+    "<level>{message}</level>"
+)
+
+logger.remove()
+logger.add(sys.stderr, format=_LOG_FORMAT, level="INFO")
+
+
+def enable_debug() -> None:
+    """Switch logger to DEBUG level."""
+    logger.remove()
+    logger.add(sys.stderr, format=_LOG_FORMAT, level="DEBUG")
@@ -0,0 +1,307 @@
|
|||||||
|
"""
|
||||||
|
Author: Uyanide pywang0608@foxmail.com
|
||||||
|
Date: 2026-03-25 11:09:53
|
||||||
|
Description: Core orchestrator — coordinates fetchers with cache-aware fallback.
|
||||||
|
Also handles enrichers & authenticators & …
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
from typing import Optional
|
||||||
|
from loguru import logger
|
||||||
|
|
||||||
|
from .fetchers import FetcherMethodType, build_plan, create_fetchers
|
||||||
|
from .fetchers.base import BaseFetcher, FetchResult
|
||||||
|
from .authenticators import create_authenticators
|
||||||
|
from .cache import CacheEngine
|
||||||
|
from .lrc import LRCData
|
||||||
|
from .config import (
|
||||||
|
TTL_SYNCED,
|
||||||
|
TTL_UNSYNCED,
|
||||||
|
TTL_NOT_FOUND,
|
||||||
|
TTL_NETWORK_ERROR,
|
||||||
|
HIGH_CONFIDENCE,
|
||||||
|
SLOT_SYNCED,
|
||||||
|
SLOT_UNSYNCED,
|
||||||
|
AppConfig,
|
||||||
|
)
|
||||||
|
from .models import TrackMeta, LyricResult, CacheStatus
|
||||||
|
from .enrichers import create_enrichers, enrich_track
|
||||||
|
from .utils import is_better_result, select_best_positive
|
||||||
|
|
||||||
|
|
||||||
|
# Maps CacheStatus to the default TTL used when storing results
|
||||||
|
_STATUS_TTL: dict[CacheStatus, Optional[int]] = {
|
||||||
|
CacheStatus.SUCCESS_SYNCED: TTL_SYNCED,
|
||||||
|
CacheStatus.SUCCESS_UNSYNCED: TTL_UNSYNCED,
|
||||||
|
CacheStatus.NOT_FOUND: TTL_NOT_FOUND,
|
||||||
|
CacheStatus.NETWORK_ERROR: TTL_NETWORK_ERROR,
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def _pick_for_return(
|
||||||
|
result: FetchResult,
|
||||||
|
allow_unsynced: bool,
|
||||||
|
) -> Optional[LyricResult]:
|
||||||
|
"""Pick best positive slot for final selection under current strategy."""
|
||||||
|
candidates: list[LyricResult] = []
|
||||||
|
if result.synced and result.synced.status == CacheStatus.SUCCESS_SYNCED:
|
||||||
|
candidates.append(result.synced)
|
||||||
|
if (
|
||||||
|
allow_unsynced
|
||||||
|
and result.unsynced
|
||||||
|
and result.unsynced.status == CacheStatus.SUCCESS_UNSYNCED
|
||||||
|
):
|
||||||
|
candidates.append(result.unsynced)
|
||||||
|
|
||||||
|
return select_best_positive(candidates, allow_unsynced=True)
|
||||||
|
|
||||||
|
|
||||||
|
def _iter_slot_results(result: FetchResult) -> list[tuple[str, LyricResult]]:
|
||||||
|
"""Return all non-None slot results with their cache slot key."""
|
||||||
|
out: list[tuple[str, LyricResult]] = []
|
||||||
|
if result.synced is not None:
|
||||||
|
out.append((SLOT_SYNCED, result.synced))
|
||||||
|
if result.unsynced is not None:
|
||||||
|
out.append((SLOT_UNSYNCED, result.unsynced))
|
||||||
|
return out
|
||||||
|
|
||||||
|
|
||||||
|
def _pick_cached_for_return(
|
||||||
|
cached_rows: list[LyricResult],
|
||||||
|
allow_unsynced: bool,
|
||||||
|
) -> Optional[LyricResult]:
|
||||||
|
"""Convert cached slot rows into FetchResult-like view and select return candidate."""
|
||||||
|
fr = FetchResult()
|
||||||
|
for row in cached_rows:
|
||||||
|
if row.status == CacheStatus.SUCCESS_SYNCED:
|
||||||
|
fr = FetchResult(synced=row, unsynced=fr.unsynced)
|
||||||
|
elif row.status == CacheStatus.SUCCESS_UNSYNCED:
|
||||||
|
fr = FetchResult(synced=fr.synced, unsynced=row)
|
||||||
|
return _pick_for_return(fr, allow_unsynced)
|
||||||
|
|
||||||
|
|
||||||
|
def _has_negative_for_both_slots(cached_rows: list[LyricResult]) -> bool:
|
||||||
|
"""True when both slot rows are present and both are negative."""
|
||||||
|
if len(cached_rows) < 2:
|
||||||
|
return False
|
||||||
|
return all(
|
||||||
|
r.status in (CacheStatus.NOT_FOUND, CacheStatus.NETWORK_ERROR)
|
||||||
|
for r in cached_rows
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class LrcManager:
|
||||||
|
"""Main entry point for fetching lyrics with caching."""
|
||||||
|
|
||||||
|
def __init__(self, db_path: str, config: AppConfig = AppConfig()) -> None:
|
||||||
|
self.cache = CacheEngine(db_path=db_path)
|
||||||
|
self.authenticators = create_authenticators(self.cache, config)
|
||||||
|
self.fetchers = create_fetchers(self.cache, self.authenticators, config)
|
||||||
|
self.enrichers = create_enrichers(self.authenticators)
|
||||||
|
|
||||||
|
async def _run_group(
|
||||||
|
self,
|
||||||
|
group: list[BaseFetcher],
|
||||||
|
track: TrackMeta,
|
||||||
|
bypass_cache: bool,
|
||||||
|
allow_unsynced: bool,
|
||||||
|
) -> list[tuple[str, LyricResult]]:
|
||||||
|
"""Run one group with slot-aware cache check then parallel fetch uncached sources."""
|
||||||
|
cached_results: list[tuple[str, LyricResult]] = []
|
||||||
|
need_fetch: list[BaseFetcher] = []
|
||||||
|
|
||||||
|
for fetcher in group:
|
||||||
|
source = fetcher.source_name
|
||||||
|
if not bypass_cache and not fetcher.self_cached:
|
||||||
|
cached_rows = self.cache.get_all(track, source)
|
||||||
|
if cached_rows:
|
||||||
|
if _has_negative_for_both_slots(cached_rows):
|
||||||
|
logger.debug(
|
||||||
|
f"[{source}] cache hit: all slots negative, skipping"
|
||||||
|
)
|
||||||
|
continue
|
||||||
|
|
||||||
|
cached_for_return = _pick_cached_for_return(
|
||||||
|
cached_rows, allow_unsynced
|
||||||
|
)
|
||||||
|
if cached_for_return is not None:
|
||||||
|
is_trusted = cached_for_return.confidence >= HIGH_CONFIDENCE
|
||||||
|
logger.info(
|
||||||
|
f"[{source}] cache hit: {cached_for_return.status.value}"
|
||||||
|
f" (confidence={cached_for_return.confidence:.0f})"
|
||||||
|
)
|
||||||
|
cached_results.append((source, cached_for_return))
|
||||||
|
# Return immediately on trusted synced cache hit
|
||||||
|
if (
|
||||||
|
cached_for_return.status == CacheStatus.SUCCESS_SYNCED
|
||||||
|
and is_trusted
|
||||||
|
):
|
||||||
|
return cached_results
|
||||||
|
continue
|
||||||
|
elif not fetcher.self_cached:
|
||||||
|
logger.debug(f"[{source}] cache bypassed")
|
||||||
|
need_fetch.append(fetcher)
|
||||||
|
|
||||||
|
if need_fetch:
|
||||||
|
task_map: dict[asyncio.Task, BaseFetcher] = {
|
||||||
|
asyncio.create_task(f.fetch(track, bypass_cache=bypass_cache)): f
|
||||||
|
for f in need_fetch
|
||||||
|
}
|
||||||
|
pending = set(task_map)
|
||||||
|
|
||||||
|
while pending:
|
||||||
|
done, pending = await asyncio.wait(
|
||||||
|
pending, return_when=asyncio.FIRST_COMPLETED
|
||||||
|
)
|
||||||
|
found_trusted = False
|
||||||
|
for task in done:
|
||||||
|
fetcher = task_map[task]
|
||||||
|
source = fetcher.source_name
|
||||||
|
try:
|
||||||
|
result = task.result()
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"[{source}] fetch raised: {e}")
|
||||||
|
continue
|
||||||
|
|
||||||
|
if result is None:
|
||||||
|
logger.debug(f"[{source}] returned None")
|
||||||
|
continue
|
||||||
|
|
||||||
|
return_result = _pick_for_return(result, allow_unsynced)
|
||||||
|
|
||||||
|
if not fetcher.self_cached and not bypass_cache:
|
||||||
|
for slot_kind, slot_result in _iter_slot_results(result):
|
||||||
|
ttl = slot_result.ttl or _STATUS_TTL.get(
|
||||||
|
slot_result.status, TTL_NOT_FOUND
|
||||||
|
)
|
||||||
|
self.cache.set(
|
||||||
|
track,
|
||||||
|
source,
|
||||||
|
slot_result,
|
||||||
|
ttl_seconds=ttl,
|
||||||
|
positive_kind=slot_kind,
|
||||||
|
)
|
||||||
|
|
||||||
|
if return_result is not None:
|
||||||
|
logger.info(
|
||||||
|
f"[{source}] got {return_result.status.value} lyrics"
|
||||||
|
f" (confidence={return_result.confidence:.0f})"
|
||||||
|
)
|
||||||
|
cached_results.append((source, return_result))
|
||||||
|
|
||||||
|
if (
|
||||||
|
return_result is not None
|
||||||
|
and return_result.status == CacheStatus.SUCCESS_SYNCED
|
||||||
|
and return_result.confidence >= HIGH_CONFIDENCE
|
||||||
|
):
|
||||||
|
found_trusted = True
|
||||||
|
|
||||||
|
if found_trusted:
|
||||||
|
for t in pending:
|
||||||
+                    t.cancel()
+                await asyncio.gather(*pending, return_exceptions=True)
+                break
+
+        return cached_results
+
+    async def _fetch_for_track(
+        self,
+        track: TrackMeta,
+        force_method: Optional[FetcherMethodType],
+        bypass_cache: bool,
+        allow_unsynced: bool,
+    ) -> Optional[LyricResult]:
+        track = await enrich_track(track, self.enrichers)
+        logger.info(f"Fetching lyrics for: {track.display_name()}")
+
+        plan = build_plan(self.fetchers, track, force_method)
+        if not plan:
+            return None
+
+        best_result: Optional[LyricResult] = None
+
+        for group in plan:
+            group_results = await self._run_group(
+                group,
+                track,
+                bypass_cache,
+                allow_unsynced,
+            )
+
+            for source, result in group_results:
+                if result.status not in (
+                    CacheStatus.SUCCESS_SYNCED,
+                    CacheStatus.SUCCESS_UNSYNCED,
+                ):
+                    continue
+
+                is_trusted = result.confidence >= HIGH_CONFIDENCE
+
+                # Trusted synced → return immediately
+                if result.status == CacheStatus.SUCCESS_SYNCED and is_trusted:
+                    logger.info(
+                        f"Returning {result.status.value} lyrics from {source}"
+                        f" (confidence={result.confidence:.0f})"
+                    )
+                    return result
+
+                if best_result is None or is_better_result(
+                    result,
+                    best_result,
+                    allow_unsynced=allow_unsynced,
+                ):
+                    best_result = result
+
+        if best_result:
+            if (
+                best_result.status == CacheStatus.SUCCESS_UNSYNCED
+                and not allow_unsynced
+            ):
+                logger.info(
+                    f"Unsynced lyrics found from {best_result.source}, but unsynced results are not allowed"
+                )
+                return None
+            logger.info(
+                f"Returning {best_result.status.value} lyrics from {best_result.source}"
+            )
+            return best_result
+
+        logger.info(f"No lyrics found for {track.display_name()}")
+        return None
+
+    def fetch_for_track(
+        self,
+        track: TrackMeta,
+        force_method: Optional[FetcherMethodType] = None,
+        bypass_cache: bool = False,
+        allow_unsynced: bool = False,
+    ) -> Optional[LyricResult]:
+        """Fetch lyrics for track using the group-based parallel pipeline."""
+        return asyncio.run(
+            self._fetch_for_track(
+                track,
+                force_method,
+                bypass_cache,
+                allow_unsynced,
+            )
+        )
+
+    def manual_insert(
+        self,
+        track: TrackMeta,
+        lyrics: str,
+    ) -> None:
+        """Manually insert lyrics into the cache for a track."""
+        track = asyncio.run(enrich_track(track, self.enrichers))
+        logger.info(f"Manually inserting lyrics for: {track.display_name()}")
+        lrc = LRCData(lyrics)
+        result = LyricResult(
+            status=lrc.detect_sync_status(),
+            lyrics=lrc,
+            source="manual",
+            ttl=None,
+        )
+        self.cache.set(track, "manual", result, ttl_seconds=None)
+        logger.info("Lyrics inserted into cache.")
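The selection loop above follows a small policy: a trusted synced hit returns immediately, otherwise the best candidate seen so far is kept, preferring synced over unsynced and then higher confidence, and an unsynced-only outcome is rejected unless allowed. A minimal stand-in sketch under stated assumptions — the `Result` type, the threshold value, and the tuple ranking are illustrative; the real code uses `LyricResult`, `HIGH_CONFIDENCE`, and `is_better_result`:

```python
from dataclasses import dataclass

HIGH_CONFIDENCE = 80.0  # assumed threshold; the real value comes from config


@dataclass
class Result:  # stand-in for LyricResult
    synced: bool
    confidence: float


def pick(results, allow_unsynced=False):
    """Sketch of the selection policy in _fetch_for_track."""
    best = None
    for r in results:
        # trusted synced -> short-circuit, like the early return above
        if r.synced and r.confidence >= HIGH_CONFIDENCE:
            return r
        # otherwise prefer synced over unsynced, then higher confidence
        if best is None or (r.synced, r.confidence) > (best.synced, best.confidence):
            best = r
    if best and not best.synced and not allow_unsynced:
        return None  # unsynced-only outcome rejected unless allowed
    return best
```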
@@ -4,30 +4,41 @@ Date: 2026-03-31 06:09:11
 Description: Metadata enrichment pipeline
 """
 
+from __future__ import annotations
+
 from loguru import logger
 
 from .base import BaseEnricher
 from .audio_tag import AudioTagEnricher
 from .file_name import FileNameEnricher
 from .musixmatch import MusixmatchSpotifyEnricher
+from ..authenticators import BaseAuthenticator, MusixmatchAuthenticator
 from ..models import TrackMeta
 
 # Enrichers run in order; earlier ones have higher priority.
 # There are only a few of them, so we can just call them sequentially without worrying about async concurrency or batching.
-_ENRICHERS: list[BaseEnricher] = [
-    AudioTagEnricher(),
-    FileNameEnricher(),
-    MusixmatchSpotifyEnricher(),
-]
-
-
-async def enrich_track(track: TrackMeta) -> TrackMeta:
+def create_enrichers(
+    authenticators: dict[str, BaseAuthenticator],
+) -> list[BaseEnricher]:
+    """Instantiate all enrichers."""
+    mxm_auth = authenticators["musixmatch"]
+    assert isinstance(mxm_auth, MusixmatchAuthenticator)
+    return [
+        AudioTagEnricher(),
+        FileNameEnricher(),
+        MusixmatchSpotifyEnricher(mxm_auth),
+    ]
+
+
+async def enrich_track(track: TrackMeta, enrichers: list[BaseEnricher]) -> TrackMeta:
     """Run all enrichers and return a track with missing fields filled in.
 
     Each enricher sees the cumulative state (earlier enrichers' results
     are already applied). A field is only set if it is currently None.
     """
-    for enricher in _ENRICHERS:
+    for enricher in enrichers:
         try:
             # Skip if all provided fields are already filled
             if all(
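The fill-only-missing contract described in the docstring — each enricher sees the cumulative state and a field is set only while it is still `None` — can be sketched with plain dicts (field names here are illustrative; the real code works on a `TrackMeta`):

```python
def merge_enrichments(track: dict, enrichments: list[dict]) -> dict:
    """Apply enricher results in priority order, filling only missing fields."""
    track = dict(track)
    for found in enrichments:  # earlier enrichers have higher priority
        for key, value in found.items():
            # a field is only set while it is currently None
            if track.get(key) is None and value is not None:
                track[key] = value
    return track
```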
@@ -1,16 +1,18 @@
 """
 Author: Uyanide pywang0608@foxmail.com
 Date: 2026-03-31 06:11:27
-Description: Enricher that reads metadata from audio file tags (mutagen)
+Description: Enricher that reads metadata from audio file tags.
 """
 
+from __future__ import annotations
+
 from typing import Optional
 from loguru import logger
 from mutagen._file import File, FileType
 
 from .base import BaseEnricher
 from ..models import TrackMeta
-from ..lrc import get_audio_path
+from ..utils import get_audio_path
 
 
 class AudioTagEnricher(BaseEnricher):
@@ -1,9 +1,11 @@
 """
 Author: Uyanide pywang0608@foxmail.com
 Date: 2026-03-31 06:08:16
-Description: Base class for metadata enrichers
+Description: Base class for metadata enrichers.
 """
 
+from __future__ import annotations
+
 from abc import ABC, abstractmethod
 from typing import Optional
 
@@ -1,16 +1,18 @@
 """
 Author: Uyanide pywang0608@foxmail.com
 Date: 2026-03-31 06:08:44
-Description: Enricher that parses metadata from the audio file path
+Description: Enricher that parses metadata from the audio file path.
 """
 
+from __future__ import annotations
+
 import re
 from typing import Optional
 from loguru import logger
 
 from .base import BaseEnricher
 from ..models import TrackMeta
-from ..lrc import get_audio_path
+from ..utils import get_audio_path
 
 
 # Common track-number prefixes: "01 - ", "01. ", "1 - ", etc.
@@ -1,34 +1,29 @@
 """
 Author: Uyanide pywang0608@foxmail.com
 Date: 2026-04-05 02:13:49
-Description: Musixmatch metadata enricher (matcher.track.get by Spotify track ID)
+Description: Musixmatch metadata enricher (matcher.track.get by Spotify track ID).
 """
 
-from typing import Optional
-from urllib.parse import urlencode
+from __future__ import annotations
 
-import httpx
+from typing import Optional
 from loguru import logger
 
 from .base import BaseEnricher
+from ..authenticators.musixmatch import MusixmatchAuthenticator
 from ..models import TrackMeta
-from ..config import (
-    HTTP_TIMEOUT,
-    MUSIXMATCH_TRACK_MATCH_URL,
-    MUSIXMATCH_USERTOKEN,
-)
 
-_MXM_HEADERS = {"Cookie": "x-mxm-token-guid="}
-_MXM_TRACK_MATCH_BASE_PARAMS = {
-    "format": "json",
-    "app_id": "web-desktop-app-v1.0",
-    "usertoken": MUSIXMATCH_USERTOKEN,
-}
+_MUSIXMATCH_TRACK_MATCH_URL = (
+    "https://apic-desktop.musixmatch.com/ws/1.1/matcher.track.get"
+)
 
 
 class MusixmatchSpotifyEnricher(BaseEnricher):
     """Fill title, artist, album, and length from Musixmatch using Spotify track ID."""
 
+    def __init__(self, auth: MusixmatchAuthenticator) -> None:
+        self.auth = auth
+
     @property
     def name(self) -> str:
         return "musixmatch"
@@ -38,25 +33,23 @@ class MusixmatchSpotifyEnricher(BaseEnricher):
         return {"title", "artist", "album", "length"}
 
     async def enrich(self, track: TrackMeta) -> Optional[dict]:
-        if not track.trackid or not MUSIXMATCH_USERTOKEN:
+        if not track.trackid:
             return None
 
-        params = {
-            **_MXM_TRACK_MATCH_BASE_PARAMS,
-            "track_spotify_id": track.trackid,
-        }
-        url = f"{MUSIXMATCH_TRACK_MATCH_URL}?{urlencode(params)}"
         logger.debug(f"Musixmatch enricher: looking up trackid={track.trackid}")
 
         try:
-            async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
-                resp = await client.get(url, headers=_MXM_HEADERS)
-                resp.raise_for_status()
-                data = resp.json()
+            data = await self.auth.get_json(
+                _MUSIXMATCH_TRACK_MATCH_URL,
+                {"track_spotify_id": track.trackid},
+            )
         except Exception as e:
             logger.warning(f"Musixmatch enricher: request failed: {e}")
             return None
 
+        if data is None:
+            return None
+
         body = data.get("message", {}).get("body")
         t = body.get("track") if isinstance(body, dict) else None
         if not isinstance(t, dict):
@@ -1,9 +1,11 @@
 """
 Author: Uyanide pywang0608@foxmail.com
 Date: 2026-03-25 02:33:26
-Description: Fetcher pipeline — registry and types
+Description: Fetcher pipeline — registry and types.
 """
 
+from __future__ import annotations
+
 from typing import Literal, Optional
 from loguru import logger
 
@@ -16,7 +18,14 @@ from .lrclib_search import LrclibSearchFetcher
 from .musixmatch import MusixmatchFetcher, MusixmatchSpotifyFetcher
 from .netease import NeteaseFetcher
 from .qqmusic import QQMusicFetcher
+from ..authenticators import (
+    BaseAuthenticator,
+    SpotifyAuthenticator,
+    MusixmatchAuthenticator,
+    QQMusicAuthenticator,
+)
 from ..cache import CacheEngine
+from ..config import AppConfig
 from ..models import TrackMeta
 
 FetcherMethodType = Literal[
@@ -43,20 +52,30 @@ _FETCHER_GROUPS: list[list[FetcherMethodType]] = [
 ]
 
 
-def create_fetchers(cache: CacheEngine) -> dict[FetcherMethodType, BaseFetcher]:
+def create_fetchers(
+    cache: CacheEngine,
+    authenticators: dict[str, BaseAuthenticator],
+    config: AppConfig,
+) -> dict[FetcherMethodType, BaseFetcher]:
     """Instantiate all fetchers. Returns a dict keyed by source name."""
-    fetchers: dict[FetcherMethodType, BaseFetcher] = {
-        "local": LocalFetcher(),
+    spotify_auth = authenticators["spotify"]
+    mxm_auth = authenticators["musixmatch"]
+    qqmusic_auth = authenticators["qqmusic"]
+    assert isinstance(spotify_auth, SpotifyAuthenticator)
+    assert isinstance(mxm_auth, MusixmatchAuthenticator)
+    assert isinstance(qqmusic_auth, QQMusicAuthenticator)
+    g = config.general
+    return {
+        "local": LocalFetcher(g),
         "cache-search": CacheSearchFetcher(cache),
-        "spotify": SpotifyFetcher(),
-        "lrclib": LrclibFetcher(),
-        "musixmatch-spotify": MusixmatchSpotifyFetcher(),
-        "lrclib-search": LrclibSearchFetcher(),
-        "netease": NeteaseFetcher(),
-        "qqmusic": QQMusicFetcher(),
-        "musixmatch": MusixmatchFetcher(),
+        "spotify": SpotifyFetcher(g, spotify_auth),
+        "lrclib": LrclibFetcher(g),
+        "musixmatch-spotify": MusixmatchSpotifyFetcher(g, mxm_auth),
+        "lrclib-search": LrclibSearchFetcher(g),
+        "netease": NeteaseFetcher(g),
+        "qqmusic": QQMusicFetcher(g, qqmusic_auth),
+        "musixmatch": MusixmatchFetcher(g, mxm_auth),
     }
-    return fetchers
 
 
 def build_plan(
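`build_plan` itself is cut off by this hunk, but its role — turning the ordered `_FETCHER_GROUPS` into a per-track plan — can be sketched as follows. This is a hypothetical stand-in: the function body, and the rule that a forced method collapses the plan to that single source, are assumptions from context:

```python
def build_plan_sketch(groups, available, force=None):
    """Keep only available sources per ordered group; a forced method
    collapses the plan to that single source (if it is available)."""
    if force is not None:
        return [[force]] if force in available else []
    plan = [[s for s in g if s in available] for g in groups]
    return [g for g in plan if g]  # drop groups with nothing to run
```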
@@ -0,0 +1,70 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-03-25 02:33:26
+Description: Base fetcher class and common interfaces.
+"""
+
+from __future__ import annotations
+
+from abc import ABC, abstractmethod
+from typing import Optional
+from dataclasses import dataclass
+
+from ..authenticators.base import BaseAuthenticator
+from ..config import GeneralConfig
+from ..models import CacheStatus, TrackMeta, LyricResult
+
+
+@dataclass(frozen=True, slots=True)
+class FetchResult:
+    synced: Optional[LyricResult] = None
+    unsynced: Optional[LyricResult] = None
+
+    @staticmethod
+    def from_not_found() -> "FetchResult":
+        return FetchResult(
+            synced=LyricResult(status=CacheStatus.NOT_FOUND, lyrics=None, source=None),
+            unsynced=LyricResult(
+                status=CacheStatus.NOT_FOUND, lyrics=None, source=None
+            ),
+        )
+
+    @staticmethod
+    def from_network_error() -> "FetchResult":
+        return FetchResult(
+            synced=LyricResult(
+                status=CacheStatus.NETWORK_ERROR, lyrics=None, source=None
+            ),
+            unsynced=LyricResult(
+                status=CacheStatus.NETWORK_ERROR, lyrics=None, source=None
+            ),
+        )
+
+
+class BaseFetcher(ABC):
+    def __init__(
+        self, general: GeneralConfig, auth: Optional[BaseAuthenticator] = None
+    ) -> None:
+        self._general = general
+        self._auth = auth
+
+    @property
+    @abstractmethod
+    def source_name(self) -> str:
+        """Name of the fetcher source."""
+        pass
+
+    @property
+    def self_cached(self) -> bool:
+        """True if this fetcher manages its own cache (skip per-source cache check)."""
+        return False
+
+    @abstractmethod
+    def is_available(self, track: TrackMeta) -> bool:
+        """Check if the fetcher is available for the given track (e.g. has required metadata)."""
+        pass
+
+    @abstractmethod
+    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
+        """Fetch lyrics for the given track. Returns an empty FetchResult if unable to fetch."""
+        pass
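A self-contained illustration of the interface above, with simplified stand-in types (the real subclasses take a `GeneralConfig` and optional authenticator). Note the decorator order on `source_name`: `@property` must sit above `@abstractmethod` for the abstract property to work:

```python
import asyncio
from abc import ABC, abstractmethod


class MiniFetcher(ABC):
    # @property above @abstractmethod, as in BaseFetcher.source_name
    @property
    @abstractmethod
    def source_name(self) -> str: ...

    @abstractmethod
    async def fetch(self, track: dict) -> dict: ...


class EchoFetcher(MiniFetcher):
    @property
    def source_name(self) -> str:
        return "echo"

    async def fetch(self, track: dict) -> dict:
        # toy fetch: report the title back as "unsynced" lyrics
        return {"synced": None, "unsynced": track.get("title")}
```

Instantiating `MiniFetcher` directly raises `TypeError`, which is how the ABC enforces that every fetcher implements both members.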
@@ -0,0 +1,121 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-03-28 05:57:46
+Description: Cache-search fetcher — cross-album fuzzy lookup in the local cache.
+
+Searches existing cache entries by artist + title with fuzzy normalization,
+ignoring album and source. Useful when the same track appears on different
+albums or is played from different players.
+"""
+
+from __future__ import annotations
+
+from typing import Optional
+from loguru import logger
+
+from .base import BaseFetcher, FetchResult
+from .selection import SearchCandidate, select_best
+from ..models import TrackMeta, LyricResult, CacheStatus
+from ..cache import CacheEngine
+from ..lrc import LRCData
+
+
+class CacheSearchFetcher(BaseFetcher):
+    def __init__(self, cache: CacheEngine) -> None:
+        self._cache = cache
+
+    @property
+    def source_name(self) -> str:
+        return "cache-search"
+
+    @property
+    def self_cached(self) -> bool:
+        return True
+
+    def is_available(self, track: TrackMeta) -> bool:
+        return bool(track.title)
+
+    def _get_exact(self, track: TrackMeta, synced: bool) -> Optional[LyricResult]:
+        exact = self._cache.find_best_positive(
+            track,
+            CacheStatus.SUCCESS_SYNCED if synced else CacheStatus.SUCCESS_UNSYNCED,
+        )
+        if exact and exact.lyrics is not None:
+            logger.info(
+                f"Cache-search: exact {'synced' if synced else 'unsynced'} hit ({exact.status.value})"
+            )
+            return exact
+        return None
+
+    def _get_fuzzy(
+        self, matches: list, track: TrackMeta, synced: bool
+    ) -> Optional[LyricResult]:
+        filtered = [
+            SearchCandidate(
+                item=m,
+                duration_ms=float(m["length"]) if m.get("length") else None,
+                is_synced=synced,
+                title=m.get("title"),
+                artist=m.get("artist"),
+                album=m.get("album"),
+            )
+            for m in matches
+            if m.get("lyrics")
+            and (
+                (synced and m.get("status") == CacheStatus.SUCCESS_SYNCED.value)
+                or (not synced and m.get("status") == CacheStatus.SUCCESS_UNSYNCED.value)
+            )
+        ]
+
+        best, confidence = select_best(
+            filtered,
+            track.length,
+            title=track.title,
+            artist=track.artist,
+            album=track.album,
+        )
+        if best and best.get("lyrics") is not None:
+            status = (
+                CacheStatus.SUCCESS_SYNCED if synced else CacheStatus.SUCCESS_UNSYNCED
+            )
+            logger.info(
+                f"Cache-search: fuzzy {'synced' if synced else 'unsynced'} hit from "
+                f"[{best.get('source')}] album={best.get('album')!r} (confidence={confidence:.0f})"
+            )
+            return LyricResult(
+                status=status,
+                lyrics=LRCData(best["lyrics"]),
+                source=self.source_name,
+                confidence=confidence,
+            )
+        return None
+
+    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
+        if bypass_cache:
+            logger.debug("Cache-search: bypassed by caller")
+            return FetchResult()
+
+        if not track.title:
+            logger.debug("Cache-search: skipped — no title")
+            return FetchResult()
+
+        res_synced: Optional[LyricResult] = None
+        res_unsynced: Optional[LyricResult] = None
+
+        # Fast path: exact metadata match (artist+title+album), single SQL query
+        res_synced = self._get_exact(track, synced=True)
+        res_unsynced = self._get_exact(track, synced=False)
+        if res_synced and res_unsynced:
+            return FetchResult(synced=res_synced, unsynced=res_unsynced)
+
+        # Slow path: fuzzy cross-album search
+        matches = self._cache.search_by_meta(title=track.title, length=track.length)
+
+        if not matches:
+            logger.debug(f"Cache-search: no match for {track.display_name()}")
+            return FetchResult(synced=res_synced, unsynced=res_unsynced)
+
+        if not res_synced:
+            res_synced = self._get_fuzzy(matches, track, synced=True)
+        if not res_unsynced:
+            res_unsynced = self._get_fuzzy(matches, track, synced=False)
+
+        return FetchResult(synced=res_synced, unsynced=res_unsynced)
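`select_best` is defined in `.selection`, which this diff does not show. As a rough illustration of duration-based candidate ranking, here is a hypothetical stand-in: the tolerance, the linear scoring, and the neutral confidence for unknown durations are pure assumptions — the real scorer also weighs title/artist/album similarity:

```python
def select_best_sketch(candidates, target_ms, tolerance_ms=3000):
    """Pick the candidate whose duration is closest to target_ms.

    Returns (best_candidate, confidence) with confidence in [0, 100];
    candidates are plain dicts here, standing in for SearchCandidate.
    """
    best, best_conf = None, 0.0
    for c in candidates:
        d = c.get("duration_ms")
        if target_ms and d is not None:
            diff = abs(d - target_ms)
            # linear falloff: exact match -> 100, beyond tolerance -> 0
            conf = max(0.0, 100.0 * (1 - diff / tolerance_ms))
        else:
            conf = 50.0  # unknown duration: neutral confidence
        if conf > best_conf:
            best, best_conf = c, conf
    return best, best_conf
```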
@@ -0,0 +1,119 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-03-26 02:08:41
+Description: Local fetcher — reads lyrics from .lrc sidecar files or embedded audio metadata.
+Priority:
+1. Same-directory .lrc file (e.g. /path/to/track.lrc)
+2. Embedded lyrics in audio metadata (FLAC, MP3 USLT/SYLT tags)
+"""
+
+from __future__ import annotations
+
+from typing import Optional
+from loguru import logger
+from mutagen._file import File
+from mutagen.flac import FLAC
+
+from .base import BaseFetcher, FetchResult
+from ..models import CacheStatus, TrackMeta, LyricResult
+from ..lrc import LRCData
+from ..utils import get_audio_path, get_sidecar_path
+
+
+class LocalFetcher(BaseFetcher):
+    @property
+    def source_name(self) -> str:
+        return "local"
+
+    def is_available(self, track: TrackMeta) -> bool:
+        return track.is_local
+
+    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
+        """Attempt to read lyrics from local filesystem."""
+        if not track.is_local or not track.url:
+            return FetchResult()
+
+        audio_path = get_audio_path(track.url, ensure_exists=False)
+        if not audio_path:
+            logger.debug(f"Local: audio URL is not a valid file path: {track.url}")
+            return FetchResult()
+
+        synced_result: Optional[LyricResult] = None
+        unsynced_result: Optional[LyricResult] = None
+
+        lrc_path = get_sidecar_path(
+            track.url, ensure_audio_exists=False, ensure_exists=True
+        )
+        if lrc_path:
+            try:
+                with open(lrc_path, "r", encoding="utf-8") as f:
+                    content = f.read().strip()
+                if content:
+                    lrc = LRCData(content)
+                    status = lrc.detect_sync_status()
+                    logger.info(
+                        f"Local: found .lrc sidecar ({status.value}) for {audio_path.name}"
+                    )
+                    if status == CacheStatus.SUCCESS_SYNCED:
+                        synced_result = LyricResult(
+                            status=status,
+                            lyrics=lrc,
+                            source=f"{self.source_name} (sidecar)",
+                        )
+                    else:
+                        unsynced_result = LyricResult(
+                            status=status,
+                            lyrics=lrc,
+                            source=f"{self.source_name} (sidecar)",
+                        )
+            except Exception as e:
+                logger.error(f"Local: error reading {lrc_path}: {e}")
+        else:
+            logger.debug(f"Local: no .lrc sidecar found for {audio_path}")
+
+        # Embedded metadata
+        if not audio_path.exists():
+            logger.debug(f"Local: audio file does not exist: {audio_path}")
+        else:
+            try:
+                audio = File(audio_path)
+                if audio is not None:
+                    lyrics = None
+
+                    if isinstance(audio, FLAC):
+                        # FLAC stores lyrics in vorbis comment tags
+                        lyrics = (
+                            audio.get("lyrics") or audio.get("unsynclyrics") or [None]
+                        )[0]
+                    elif hasattr(audio, "tags") and audio.tags:
+                        # MP3 / other: look for USLT or SYLT ID3 frames
+                        for key in audio.tags.keys():
+                            if key.startswith("USLT") or key.startswith("SYLT"):
+                                lyrics = str(audio.tags[key])
+                                break
+
+                    if lyrics:
+                        lrc = LRCData(lyrics)
+                        status = lrc.detect_sync_status()
+                        logger.info(
+                            f"Local: found embedded lyrics ({status.value}) for {audio_path.name}"
+                        )
+                        if status == CacheStatus.SUCCESS_SYNCED and not synced_result:
+                            synced_result = LyricResult(
+                                status=status,
+                                lyrics=lrc,
+                                source=f"{self.source_name} (embedded)",
+                            )
+                        elif not unsynced_result:
+                            unsynced_result = LyricResult(
+                                status=status,
+                                lyrics=lrc,
+                                source=f"{self.source_name} (embedded)",
+                            )
+                    else:
+                        logger.debug("Local: no embedded lyrics found")
+            except Exception as e:
+                logger.error(f"Local: error reading metadata for {audio_path}: {e}")
+
+        return FetchResult(synced=synced_result, unsynced=unsynced_result)
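Step 1 of the priority list above resolves a sidecar path next to the audio file. `get_sidecar_path` lives in `..utils` and is not shown in this diff; a minimal sketch of what it presumably computes, before its existence checks:

```python
from pathlib import PurePosixPath


def sidecar_path(audio: str) -> PurePosixPath:
    """Same directory, same stem, .lrc extension.

    e.g. /music/01 - Song.flac -> /music/01 - Song.lrc
    """
    return PurePosixPath(audio).with_suffix(".lrc")
```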
@@ -0,0 +1,121 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-03-25 05:23:38
+Description: LRCLIB fetcher — queries lrclib.net for synced/plain lyrics.
+Requires complete track metadata (artist, title, album, duration).
+"""
+
+from __future__ import annotations
+
+import httpx
+from loguru import logger
+from urllib.parse import urlencode
+
+from .base import BaseFetcher, FetchResult
+from ..models import TrackMeta, LyricResult, CacheStatus
+from ..lrc import LRCData
+from ..config import (
+    TTL_UNSYNCED,
+    TTL_NOT_FOUND,
+    UA_LRX,
+)
+
+_LRCLIB_API_URL = "https://lrclib.net/api/get"
+
+
+def _parse_lrclib_response(data: dict) -> FetchResult:
+    """Parse LRCLIB JSON response into synced/unsynced fetch result."""
+    synced = data.get("syncedLyrics")
+    unsynced = data.get("plainLyrics")
+
+    res_synced: LyricResult = LyricResult(
+        status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
+    )
+    res_unsynced: LyricResult = LyricResult(
+        status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
+    )
+
+    if isinstance(synced, str) and synced.strip():
+        lyrics = LRCData(synced)
+        res_synced = LyricResult(
+            status=CacheStatus.SUCCESS_SYNCED,
+            lyrics=lyrics,
+            source="lrclib",
+        )
+
+    if isinstance(unsynced, str) and unsynced.strip():
+        lyrics = LRCData(unsynced)
+        res_unsynced = LyricResult(
+            status=CacheStatus.SUCCESS_UNSYNCED,
+            lyrics=lyrics,
+            source="lrclib",
+            ttl=TTL_UNSYNCED,
+        )
+
+    return FetchResult(synced=res_synced, unsynced=res_unsynced)
+
+
+class LrclibFetcher(BaseFetcher):
+    @property
+    def source_name(self) -> str:
+        return "lrclib"
+
+    def is_available(self, track: TrackMeta) -> bool:
+        return track.is_complete
+
+    async def _api_get(
+        self,
+        client: httpx.AsyncClient,
+        track: TrackMeta,
+    ) -> httpx.Response:
+        """Issue one LRCLIB get request using the same path as production fetch."""
+        params = {
+            "track_name": track.title,
+            "artist_name": track.artist,
+            "album_name": track.album,
+            "duration": track.length / 1000.0 if track.length else 0,
+        }
+        url = f"{_LRCLIB_API_URL}?{urlencode(params)}"
+        return await client.get(url, headers={"User-Agent": UA_LRX})
+
+    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
+        """Fetch lyrics from LRCLIB. Requires complete metadata."""
+        if not track.is_complete:
+            logger.debug("LRCLIB: skipped — incomplete metadata")
+            return FetchResult()
+
+        logger.info(f"LRCLIB: fetching lyrics for {track.display_name()}")
+
+        try:
+            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
+                resp = await self._api_get(client, track)
+
+                if resp.status_code == 404:
+                    logger.debug(f"LRCLIB: not found for {track.display_name()}")
+                    return FetchResult.from_not_found()
+
+                if resp.status_code != 200:
+                    logger.error(f"LRCLIB: API returned {resp.status_code}")
+                    return FetchResult.from_network_error()
+
+                data = resp.json()
+                if not isinstance(data, dict):
+                    logger.error(f"LRCLIB: unexpected response type: {type(data).__name__}")
+                    return FetchResult.from_network_error()
+                result = _parse_lrclib_response(data)
+                if result.synced and result.synced.lyrics:
+                    logger.info(
+                        f"LRCLIB: got synced lyrics ({len(result.synced.lyrics)} lines)"
+                    )
+                if result.unsynced and result.unsynced.lyrics:
+                    logger.info(
+                        f"LRCLIB: got unsynced lyrics ({len(result.unsynced.lyrics)} lines)"
+                    )
+                return result
+
+        except httpx.HTTPError as e:
+            logger.error(f"LRCLIB: HTTP error: {e}")
+            return FetchResult.from_network_error()
+        except Exception as e:
+            logger.error(f"LRCLIB: unexpected error: {e}")
+            return FetchResult()
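The parsing step above hinges on LRCLIB's `/api/get` response carrying both a `syncedLyrics` and a `plainLyrics` field, either of which may be null or blank. A standalone restatement of that logic (plain strings instead of `LRCData`/`LyricResult`, for illustration):

```python
def parse_lrclib(data: dict) -> dict:
    """Split an LRCLIB get-response into synced/unsynced lyrics strings."""
    out = {"synced": None, "unsynced": None}
    synced = data.get("syncedLyrics")
    if isinstance(synced, str) and synced.strip():
        out["synced"] = synced
    plain = data.get("plainLyrics")
    if isinstance(plain, str) and plain.strip():
        out["unsynced"] = plain
    return out
```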
@@ -1,33 +1,47 @@
 """
 Author: Uyanide pywang0608@foxmail.com
 Date: 2026-03-25 05:30:50
-Description: LRCLIB search fetcher — fuzzy search via lrclib.net /api/search
+Description: LRCLIB search fetcher — fuzzy search via lrclib.net /api/search.
+Used when metadata is incomplete (no album or duration) but title is available.
 """
 
-"""
-Used when metadata is incomplete (no album or duration) but title is available.
-Selects the best match by duration when track length is known.
-"""
+from __future__ import annotations
 
 import asyncio
 import httpx
-from typing import Optional
 from loguru import logger
 from urllib.parse import urlencode
 
-from .base import BaseFetcher
+from .base import BaseFetcher, FetchResult
 from .selection import SearchCandidate, select_best
 from ..models import TrackMeta, LyricResult, CacheStatus
 from ..lrc import LRCData
 from ..config import (
-    HTTP_TIMEOUT,
     TTL_UNSYNCED,
     TTL_NOT_FOUND,
-    TTL_NETWORK_ERROR,
-    LRCLIB_SEARCH_URL,
     UA_LRX,
 )
 
+_LRCLIB_SEARCH_URL = "https://lrclib.net/api/search"
+
+
+def _parse_lrclib_search_results(items: list[dict]) -> list[SearchCandidate[dict]]:
+    """Map LRCLIB search JSON items to normalized SearchCandidate entries."""
+    return [
+        SearchCandidate(
+            item=item,
+            duration_ms=item["duration"] * 1000
+            if isinstance(item.get("duration"), (int, float))
+            else None,
+            is_synced=isinstance(item.get("syncedLyrics"), str)
+            and bool(item["syncedLyrics"].strip()),
+            title=item.get("trackName"),
+            artist=item.get("artistName"),
+            album=item.get("albumName"),
+        )
+        for item in items
+    ]
+
+
 class LrclibSearchFetcher(BaseFetcher):
     @property
@@ -65,79 +79,76 @@ class LrclibSearchFetcher(BaseFetcher):
|
|||||||
|
|
||||||
return queries
|
return queries
|
||||||
|
|
||||||
async def fetch(
|
async def _api_query(
|
||||||
self, track: TrackMeta, bypass_cache: bool = False
|
self,
|
||||||
) -> Optional[LyricResult]:
|
client: httpx.AsyncClient,
|
||||||
if not track.title:
|
params: dict[str, str],
|
||||||
logger.debug("LRCLIB-search: skipped — no title")
|
) -> tuple[list[dict], bool]:
|
||||||
return None
|
"""Issue one LRCLIB search query using production request path."""
|
||||||
|
url = f"{_LRCLIB_SEARCH_URL}?{urlencode(params)}"
|
||||||
|
logger.debug(f"LRCLIB-search: query {params}")
|
||||||
|
try:
|
||||||
|
resp = await client.get(url, headers={"User-Agent": UA_LRX})
|
||||||
|
except httpx.HTTPError as e:
|
||||||
|
logger.error(f"LRCLIB-search: HTTP error: {e}")
|
||||||
|
return [], True
|
||||||
|
if resp.status_code != 200:
|
||||||
|
logger.error(f"LRCLIB-search: API returned {resp.status_code}")
|
||||||
|
return [], True
|
||||||
|
data = resp.json()
|
||||||
|
if not isinstance(data, list):
|
||||||
|
return [], False
|
||||||
|
return [item for item in data if isinstance(item, dict)], False
|
||||||
|
|
||||||
|
async def _api_candidates(
|
||||||
|
self,
|
||||||
|
client: httpx.AsyncClient,
|
||||||
|
track: TrackMeta,
|
||||||
|
) -> tuple[list[dict], bool]:
|
||||||
|
"""Request and merge LRCLIB-search candidates using built-in query strategy."""
|
||||||
queries = self._build_queries(track)
|
queries = self._build_queries(track)
|
||||||
logger.info(f"LRCLIB-search: searching for {track.display_name()}")
|
all_results = await asyncio.gather(
|
||||||
|
*(self._api_query(client, p) for p in queries)
|
||||||
|
)
|
||||||
|
|
||||||
seen_ids: set[int] = set()
|
seen_ids: set[int] = set()
|
||||||
candidates: list[dict] = []
|
candidates: list[dict] = []
|
||||||
had_error = False
|
had_error = False
|
||||||
|
for items, err in all_results:
|
||||||
|
if err:
|
||||||
|
had_error = True
|
||||||
|
for item in items:
|
||||||
|
item_id = item.get("id")
|
||||||
|
if item_id is not None and item_id in seen_ids:
|
||||||
|
continue
|
||||||
|
if item_id is not None:
|
||||||
|
seen_ids.add(item_id)
|
||||||
|
candidates.append(item)
|
||||||
|
return candidates, had_error
|
||||||
|
|
||||||
|
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
|
||||||
|
if not track.title:
|
||||||
|
logger.debug("LRCLIB-search: skipped — no title")
|
||||||
|
return FetchResult()
|
||||||
|
|
||||||
|
logger.info(f"LRCLIB-search: searching for {track.display_name()}")
|
||||||
|
|
||||||
try:
|
try:
|
||||||
async with httpx.AsyncClient(timeout=HTTP_TIMEOUT) as client:
|
async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
|
||||||
|
candidates, had_error = await self._api_candidates(client, track)
|
||||||
async def _query(params: dict[str, str]) -> tuple[list[dict], bool]:
|
|
||||||
url = f"{LRCLIB_SEARCH_URL}?{urlencode(params)}"
|
|
||||||
logger.debug(f"LRCLIB-search: query {params}")
|
|
||||||
try:
|
|
||||||
resp = await client.get(url, headers={"User-Agent": UA_LRX})
|
|
||||||
except httpx.HTTPError as e:
|
|
||||||
logger.error(f"LRCLIB-search: HTTP error: {e}")
|
|
||||||
return [], True
|
|
||||||
if resp.status_code != 200:
|
|
||||||
logger.error(f"LRCLIB-search: API returned {resp.status_code}")
|
|
||||||
return [], True
|
|
||||||
data = resp.json()
|
|
||||||
if not isinstance(data, list):
|
|
||||||
return [], False
|
|
||||||
return [item for item in data if isinstance(item, dict)], False
|
|
||||||
|
|
||||||
all_results = await asyncio.gather(*(_query(p) for p in queries))
|
|
||||||
|
|
||||||
for items, err in all_results:
|
|
||||||
if err:
|
|
||||||
had_error = True
|
|
||||||
for item in items:
|
|
||||||
item_id = item.get("id")
|
|
||||||
if item_id is not None and item_id in seen_ids:
|
|
||||||
continue
|
|
||||||
if item_id is not None:
|
|
||||||
seen_ids.add(item_id)
|
|
||||||
candidates.append(item)
|
|
||||||
|
|
||||||
if not candidates:
|
if not candidates:
|
||||||
if had_error:
|
if had_error:
|
||||||
return LyricResult(
|
return FetchResult.from_network_error()
|
||||||
status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR
|
|
||||||
)
|
|
||||||
logger.debug(f"LRCLIB-search: no results for {track.display_name()}")
|
logger.debug(f"LRCLIB-search: no results for {track.display_name()}")
|
||||||
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
|
return FetchResult.from_not_found()
|
||||||
|
|
||||||
logger.debug(
|
logger.debug(
|
||||||
f"LRCLIB-search: got {len(candidates)} unique candidates "
|
f"LRCLIB-search: got {len(candidates)} unique candidates "
|
||||||
f"from {len(queries)} queries"
|
f"from {len(self._build_queries(track))} queries"
|
||||||
)
|
)
|
||||||
|
|
||||||
mapped = [
|
mapped = _parse_lrclib_search_results(candidates)
|
||||||
SearchCandidate(
|
|
||||||
item=item,
|
|
||||||
duration_ms=item["duration"] * 1000
|
|
||||||
if isinstance(item.get("duration"), (int, float))
|
|
||||||
else None,
|
|
||||||
is_synced=isinstance(item.get("syncedLyrics"), str)
|
|
||||||
and bool(item["syncedLyrics"].strip()),
|
|
||||||
title=item.get("trackName"),
|
|
||||||
artist=item.get("artistName"),
|
|
||||||
album=item.get("albumName"),
|
|
||||||
)
|
|
||||||
for item in candidates
|
|
||||||
]
|
|
||||||
best, confidence = select_best(
|
best, confidence = select_best(
|
||||||
mapped,
|
mapped,
|
||||||
track.length,
|
track.length,
|
||||||
@@ -147,41 +158,48 @@ class LrclibSearchFetcher(BaseFetcher):
|
|||||||
)
|
)
|
||||||
if best is None:
|
if best is None:
|
||||||
logger.debug("LRCLIB-search: no valid candidate found")
|
logger.debug("LRCLIB-search: no valid candidate found")
|
||||||
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
|
return FetchResult.from_not_found()
|
||||||
|
|
||||||
synced = best.get("syncedLyrics")
|
synced = best.get("syncedLyrics")
|
||||||
unsynced = best.get("plainLyrics")
|
unsynced = best.get("plainLyrics")
|
||||||
|
|
||||||
|
res_synced: LyricResult = LyricResult(
|
||||||
|
status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
|
||||||
|
)
|
||||||
|
res_unsynced: LyricResult = LyricResult(
|
||||||
|
status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
|
||||||
|
)
|
||||||
|
|
||||||
if isinstance(synced, str) and synced.strip():
|
if isinstance(synced, str) and synced.strip():
|
||||||
lyrics = LRCData(synced)
|
lyrics = LRCData(synced)
|
||||||
logger.info(
|
logger.info(
|
||||||
f"LRCLIB-search: got synced lyrics ({len(lyrics)} lines, confidence={confidence:.0f})"
|
f"LRCLIB-search: got synced lyrics ({len(lyrics)} lines, confidence={confidence:.0f})"
|
||||||
)
|
)
|
||||||
return LyricResult(
|
res_synced = LyricResult(
|
||||||
status=CacheStatus.SUCCESS_SYNCED,
|
status=CacheStatus.SUCCESS_SYNCED,
|
||||||
lyrics=lyrics,
|
lyrics=lyrics,
|
||||||
source=self.source_name,
|
source=self.source_name,
|
||||||
confidence=confidence,
|
confidence=confidence,
|
||||||
)
|
)
|
||||||
elif isinstance(unsynced, str) and unsynced.strip():
|
|
||||||
|
if isinstance(unsynced, str) and unsynced.strip():
|
||||||
lyrics = LRCData(unsynced)
|
lyrics = LRCData(unsynced)
|
||||||
logger.info(
|
logger.info(
|
||||||
f"LRCLIB-search: got unsynced lyrics ({len(lyrics)} lines, confidence={confidence:.0f})"
|
f"LRCLIB-search: got unsynced lyrics ({len(lyrics)} lines, confidence={confidence:.0f})"
|
||||||
)
|
)
|
||||||
return LyricResult(
|
res_unsynced = LyricResult(
|
||||||
status=CacheStatus.SUCCESS_UNSYNCED,
|
status=CacheStatus.SUCCESS_UNSYNCED,
|
||||||
lyrics=lyrics,
|
lyrics=lyrics,
|
||||||
source=self.source_name,
|
source=self.source_name,
|
||||||
ttl=TTL_UNSYNCED,
|
ttl=TTL_UNSYNCED,
|
||||||
confidence=confidence,
|
confidence=confidence,
|
||||||
)
|
)
|
||||||
else:
|
|
||||||
logger.debug("LRCLIB-search: best candidate has empty lyrics")
|
return FetchResult(synced=res_synced, unsynced=res_unsynced)
|
||||||
return LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
|
|
||||||
|
|
||||||
except httpx.HTTPError as e:
|
except httpx.HTTPError as e:
|
||||||
logger.error(f"LRCLIB-search: HTTP error: {e}")
|
logger.error(f"LRCLIB-search: HTTP error: {e}")
|
||||||
return LyricResult(status=CacheStatus.NETWORK_ERROR, ttl=TTL_NETWORK_ERROR)
|
return FetchResult.from_network_error()
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"LRCLIB-search: unexpected error: {e}")
|
logger.error(f"LRCLIB-search: unexpected error: {e}")
|
||||||
return None
|
return FetchResult()
|
||||||
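The `_api_candidates` loop in the diff above merges the per-query result lists while deduplicating by the LRCLIB item `id` (items without an `id` are always kept, and an error flag from any query is propagated). Extracted as a standalone function, the merge logic is:

```python
def merge_candidates(all_results):
    """Merge (items, err) pairs from several search queries.

    Deduplicates dict items by their "id" key; items lacking an "id"
    are kept unconditionally. Returns (candidates, had_error), where
    had_error is True if any query reported an error.
    """
    seen_ids = set()
    candidates = []
    had_error = False
    for items, err in all_results:
        if err:
            had_error = True
        for item in items:
            item_id = item.get("id")
            if item_id is not None and item_id in seen_ids:
                continue  # duplicate across queries, skip
            if item_id is not None:
                seen_ids.add(item_id)
            candidates.append(item)
    return candidates, had_error
```

Note the asymmetry: a failed query contributes no items but still lets results from the other queries through, so one flaky request degrades the candidate pool rather than failing the whole search.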
@@ -0,0 +1,366 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-04-04 15:28:34
+Description: Musixmatch fetchers (desktop API, anonymous or usertoken auth).
+
+Uses the Musixmatch desktop API (apic-desktop.musixmatch.com).
+Token and all HTTP calls are managed by MusixmatchAuthenticator.
+
+Two fetchers:
+    musixmatch-spotify — direct lookup by Spotify track ID (exact, no search)
+    musixmatch — metadata search + best-candidate fallback
+"""
+
+from __future__ import annotations
+
+import json
+from typing import Optional
+from loguru import logger
+
+from .base import BaseFetcher, FetchResult
+from .selection import SearchCandidate, select_best
+from ..authenticators.musixmatch import MusixmatchAuthenticator
+from ..config import GeneralConfig
+from ..lrc import LRCData
+from ..models import CacheStatus, LyricResult, TrackMeta
+
+_MUSIXMATCH_MACRO_URL = "https://apic-desktop.musixmatch.com/ws/1.1/macro.subtitles.get"
+_MUSIXMATCH_SEARCH_URL = "https://apic-desktop.musixmatch.com/ws/1.1/track.search"
+
+# Macro-specific params (format/app_id injected by authenticator)
+_MXM_MACRO_PARAMS = {
+    "namespace": "lyrics_richsynched",
+    "subtitle_format": "mxm",
+    "optional_calls": "track.richsync",
+}
+
+
+def _format_ts(s: float) -> str:
+    mm = int(s) // 60
+    ss = int(s) % 60
+    cs = min(round((s % 1) * 100), 99)
+    return f"[{mm:02d}:{ss:02d}.{cs:02d}]"
+
+
+def _parse_richsync(body: str) -> Optional[str]:
+    """Parse richsync JSON body → LRC text. Each entry: {"ts": float, "x": str}."""
+    try:
+        data = json.loads(body)
+        if not isinstance(data, list):
+            return None
+        lines = []
+        for entry in data:
+            if not isinstance(entry, dict):
+                continue
+            ts = entry.get("ts")
+            x = entry.get("x")
+            if not isinstance(ts, (int, float)) or not isinstance(x, str):
+                continue
+            lines.append(f"{_format_ts(float(ts))}{x}")
+        return "\n".join(lines) if lines else None
+    except Exception:
+        return None
+
+
+def _parse_subtitle(body: str) -> Optional[str]:
+    """Parse subtitle JSON body → LRC text. Each entry: {"text": str, "time": {"total": float}}."""
+    try:
+        data = json.loads(body)
+        if not isinstance(data, list):
+            return None
+        lines = []
+        for entry in data:
+            if not isinstance(entry, dict):
+                continue
+            text = entry.get("text")
+            time_obj = entry.get("time")
+            if not isinstance(text, str) or not isinstance(time_obj, dict):
+                continue
+            total = time_obj.get("total")
+            if not isinstance(total, (int, float)):
+                continue
+            lines.append(f"{_format_ts(float(total))}{text}")
+        return "\n".join(lines) if lines else None
+    except Exception:
+        return None
+
+
+def _parse_mxm_macro(data: dict) -> LRCData | None:
+    """Parse macro.subtitles.get payload into LRCData (richsync preferred)."""
+    body = data.get("message", {}).get("body", {})
+    if not isinstance(body, dict):
+        return None
+    macro_calls = body.get("macro_calls", {})
+    if not isinstance(macro_calls, dict):
+        return None
+
+    richsync_msg = macro_calls.get("track.richsync.get", {}).get("message", {})
+    if (
+        isinstance(richsync_msg, dict)
+        and richsync_msg.get("header", {}).get("status_code") == 200
+    ):
+        richsync_body = (
+            richsync_msg.get("body", {}).get("richsync", {}).get("richsync_body")
+        )
+        if isinstance(richsync_body, str):
+            lrc_text = _parse_richsync(richsync_body)
+            if lrc_text:
+                lrc = LRCData(lrc_text)
+                if lrc:
+                    return lrc
+
+    subtitle_msg = macro_calls.get("track.subtitles.get", {}).get("message", {})
+    if (
+        isinstance(subtitle_msg, dict)
+        and subtitle_msg.get("header", {}).get("status_code") == 200
+    ):
+        subtitle_list = subtitle_msg.get("body", {}).get("subtitle_list", [])
+        if isinstance(subtitle_list, list) and subtitle_list:
+            subtitle_body = subtitle_list[0].get("subtitle", {}).get("subtitle_body")
+            if isinstance(subtitle_body, str):
+                lrc_text = _parse_subtitle(subtitle_body)
+                if lrc_text:
+                    lrc = LRCData(lrc_text)
+                    if lrc:
+                        return lrc
+
+    return None
+
+
+def _parse_mxm_search(data: dict) -> list[SearchCandidate[int]]:
+    """Parse track.search payload to normalized candidates."""
+    track_list = data.get("message", {}).get("body", {}).get("track_list", [])
+    if not isinstance(track_list, list) or not track_list:
+        return []
+
+    return [
+        SearchCandidate(
+            item=int(t["commontrack_id"]),
+            duration_ms=(
+                float(t["track_length"]) * 1000 if t.get("track_length") else None
+            ),
+            is_synced=bool(t.get("has_subtitles") or t.get("has_richsync")),
+            title=t.get("track_name"),
+            artist=t.get("artist_name"),
+            album=t.get("album_name"),
+        )
+        for item in track_list
+        if isinstance(item, dict)
+        and isinstance(t := item.get("track", {}), dict)
+        and isinstance(t.get("commontrack_id"), int)
+        and not t.get("instrumental")
+    ]
+
+
+class MusixmatchSpotifyFetcher(BaseFetcher):
+    """Direct lookup by Spotify track ID — no search, single request."""
+
+    _auth: MusixmatchAuthenticator
+
+    def __init__(self, general: GeneralConfig, auth: MusixmatchAuthenticator) -> None:
+        super().__init__(general, auth)
+
+    @property
+    def source_name(self) -> str:
+        return "musixmatch-spotify"
+
+    def is_available(self, track: TrackMeta) -> bool:
+        return bool(track.trackid) and not self._auth.is_cooldown()
+
+    async def _api_macro(self, params: dict) -> dict | None:
+        """Request macro payload through authenticator using production path."""
+        return await self._auth.get_json(
+            _MUSIXMATCH_MACRO_URL, {**_MXM_MACRO_PARAMS, **params}
+        )
+
+    async def _api_macro_track(self, track: TrackMeta) -> dict | None:
+        """Request macro payload for one track using Spotify ID lookup path."""
+        if not track.trackid:
+            return None
+        return await self._api_macro({"track_spotify_id": track.trackid})
+
+    async def _fetch_macro(self, params: dict) -> LRCData | None:
+        """Request and parse Musixmatch macro lyrics payload."""
+        logger.debug(f"Musixmatch: macro call with {list(params.keys())}")
+        data = await self._api_macro(params)
+        if data is None:
+            return None
+        lrc = _parse_mxm_macro(data)
+        if lrc is None:
+            logger.debug("Musixmatch: no usable lyrics in macro response")
+            return None
+        logger.debug("Musixmatch: parsed macro lyrics")
+        return lrc
+
+    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
+        logger.info(f"Musixmatch-Spotify: fetching lyrics for {track.display_name()}")
+
+        try:
+            lrc = await self._fetch_macro({"track_spotify_id": track.trackid})  # type: ignore[dict-item]
+        except AttributeError:
+            return FetchResult.from_not_found()
+        except Exception as e:
+            logger.error(f"Musixmatch-Spotify: fetch failed: {e}")
+            return FetchResult.from_network_error()
+
+        if lrc is None:
+            logger.debug(
+                f"Musixmatch-Spotify: no lyrics found for {track.display_name()}"
+            )
+            return FetchResult.from_not_found()
+
+        logger.info(f"Musixmatch-Spotify: got SUCCESS_SYNCED lyrics ({len(lrc)} lines)")
+        return FetchResult(
+            synced=LyricResult(
+                status=CacheStatus.SUCCESS_SYNCED,
+                lyrics=lrc,
+                source=self.source_name,
+            ),
+            # Fetching unsynced lyrics is not possible with current endpoint,
+            # so no need to cache NOT_FOUND to avoid repeated failed attempts
+            unsynced=None,
+        )
+
+
+class MusixmatchFetcher(BaseFetcher):
+    """Metadata search + best-candidate lyric fetch."""
+
+    _auth: MusixmatchAuthenticator
+
+    def __init__(self, general: GeneralConfig, auth: MusixmatchAuthenticator) -> None:
+        super().__init__(general, auth)
+
+    @property
+    def source_name(self) -> str:
+        return "musixmatch"
+
+    @property
+    def requires_auth(self) -> str:
+        return "musixmatch"
+
+    def is_available(self, track: TrackMeta) -> bool:
+        return bool(track.title) and not self._auth.is_cooldown()
+
+    async def _api_search(self, params: dict) -> dict | None:
+        """Request search payload through authenticator using production path."""
+        return await self._auth.get_json(_MUSIXMATCH_SEARCH_URL, params)
+
+    def _build_search_params(self, track: TrackMeta) -> dict[str, str]:
+        """Build Musixmatch search params for one track."""
+        params: dict[str, str] = {
+            "q_track": track.title or "",
+            "page_size": "10",
+            "f_has_lyrics": "1",
+        }
+        if track.artist:
+            params["q_artist"] = track.artist
+        if track.album:
+            params["q_album"] = track.album
+        return params
+
+    async def _api_search_track(self, track: TrackMeta) -> dict | None:
+        """Request search payload for one track using production path."""
+        return await self._api_search(self._build_search_params(track))
+
+    async def _api_macro(self, params: dict) -> dict | None:
+        """Request macro payload through authenticator using production path."""
+        return await self._auth.get_json(
+            _MUSIXMATCH_MACRO_URL, {**_MXM_MACRO_PARAMS, **params}
+        )
+
+    async def _api_macro_track(self, track: TrackMeta) -> dict | None:
+        """Request macro payload for top-ranked search candidate of one track."""
+        search_data = await self._api_search_track(track)
+        if search_data is None:
+            return None
+
+        candidates = _parse_mxm_search(search_data)
+        if not candidates:
+            return None
+
+        commontrack_id, _confidence = select_best(
+            candidates,
+            track.length,
+            title=track.title,
+            artist=track.artist,
+            album=track.album,
+        )
+        if commontrack_id is None:
+            return None
+
+        return await self._api_macro({"commontrack_id": str(commontrack_id)})
+
+    async def _fetch_macro(self, params: dict) -> LRCData | None:
+        """Request and parse Musixmatch macro lyrics payload."""
+        logger.debug(f"Musixmatch: macro call with {list(params.keys())}")
+        data = await self._api_macro(params)
+        if data is None:
+            return None
+        lrc = _parse_mxm_macro(data)
+        if lrc is None:
+            logger.debug("Musixmatch: no usable lyrics in macro response")
+            return None
+        logger.debug("Musixmatch: parsed macro lyrics")
+        return lrc
+
+    async def _search(self, track: TrackMeta) -> tuple[Optional[int], float]:
+        """Search for track metadata. Raises on network/HTTP errors."""
+        logger.debug(f"Musixmatch: searching for '{track.display_name()}'")
+        data = await self._api_search_track(track)
+        if data is None:
+            return None, 0.0
+
+        candidates = _parse_mxm_search(data)
+        if not candidates:
+            logger.debug("Musixmatch: search returned 0 results")
+            return None, 0.0
+
+        logger.debug(f"Musixmatch: search returned {len(candidates)} candidates")
+
+        best_id, confidence = select_best(
+            candidates,
+            track.length,
+            title=track.title,
+            artist=track.artist,
+            album=track.album,
+        )
+        if best_id is not None:
+            logger.debug(f"Musixmatch: best candidate id={best_id} ({confidence:.0f})")
+        else:
+            logger.debug("Musixmatch: no suitable candidate found")
+        return best_id, confidence
+
+    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
+        logger.info(f"Musixmatch: fetching lyrics for {track.display_name()}")
+
+        try:
+            commontrack_id, confidence = await self._search(track)
+            if commontrack_id is None:
+                logger.debug(f"Musixmatch: no match found for {track.display_name()}")
+                return FetchResult.from_not_found()
+
+            lrc = await self._fetch_macro({"commontrack_id": str(commontrack_id)})
+        except AttributeError:
+            return FetchResult.from_not_found()
+        except Exception as e:
+            logger.error(f"Musixmatch: fetch failed: {e}")
+            return FetchResult.from_network_error()
+
+        if lrc is None:
+            logger.debug(f"Musixmatch: no lyrics for commontrack_id={commontrack_id}")
+            return FetchResult.from_not_found()
+
+        logger.info(
+            f"Musixmatch: got SUCCESS_SYNCED lyrics "
+            f"for commontrack_id={commontrack_id} ({len(lrc)} lines)"
+        )
+        return FetchResult(
+            synced=LyricResult(
+                status=CacheStatus.SUCCESS_SYNCED,
+                lyrics=lrc,
+                source=self.source_name,
+                confidence=confidence,
+            ),
+            # Same as above
+            unsynced=None,
+        )
@@ -0,0 +1,298 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-03-25 11:04:51
+Description: Netease Cloud Music fetcher.
+
+Uses the public cloudsearch API for searching and the song/lyric API for
+retrieving lyrics. No authentication required.
+"""
+
+from __future__ import annotations
+
+import asyncio
+import httpx
+from loguru import logger
+
+from .base import BaseFetcher, FetchResult
+from .selection import SearchCandidate, select_ranked
+from ..models import TrackMeta, LyricResult, CacheStatus
+from ..lrc import LRCData
+from ..config import (
+    TTL_NOT_FOUND,
+    MULTI_CANDIDATE_DELAY_S,
+    UA_BROWSER,
+)
+
+_NETEASE_SEARCH_URL = "https://music.163.com/api/cloudsearch/pc"
+_NETEASE_LYRIC_URL = "https://interface3.music.163.com/api/song/lyric"
+_NETEASE_BASE_HEADERS = {
+    "User-Agent": UA_BROWSER,
+    "Referer": "https://music.163.com/",
+    "Origin": "https://music.163.com",
+}
+
+
+def _parse_netease_search(data: dict) -> list[SearchCandidate[int]]:
+    """Parse Netease search response into scored candidates."""
+    result_body = data.get("result")
+    if not isinstance(result_body, dict):
+        return []
+
+    songs = result_body.get("songs")
+    if not isinstance(songs, list) or len(songs) == 0:
+        return []
+
+    return [
+        SearchCandidate(
+            item=song_id,
+            duration_ms=float(song["dt"]) if isinstance(song.get("dt"), int) else None,
+            title=song.get("name"),
+            artist=", ".join(a.get("name", "") for a in song.get("ar", [])) or None,
+            album=(song.get("al") or {}).get("name"),
+        )
+        for song in songs
+        if isinstance(song, dict) and isinstance(song_id := song.get("id"), int)
+    ]
+
+
+def _parse_netease_lyrics(data: dict) -> LRCData | None:
+    """Parse Netease lyric response to LRCData."""
+    lrc_obj = data.get("lrc")
+    if not isinstance(lrc_obj, dict):
+        return None
+
+    lrc = lrc_obj.get("lyric", "")
+    if not isinstance(lrc, str) or not lrc.strip():
+        return None
+
+    return LRCData(lrc)
+
+
+class NeteaseFetcher(BaseFetcher):
+    @property
+    def source_name(self) -> str:
+        return "netease"
+
+    def is_available(self, track: TrackMeta) -> bool:
+        return bool(track.title)
+
+    async def _api_search(
+        self,
+        client: httpx.AsyncClient,
+        query: str,
+        limit: int,
+    ) -> dict | None:
+        """Issue one Netease search request and return JSON payload."""
+        resp = await client.post(
+            _NETEASE_SEARCH_URL,
+            headers=_NETEASE_BASE_HEADERS,
+            data={"s": query, "type": "1", "limit": str(limit), "offset": "0"},
+        )
+        resp.raise_for_status()
+        data = resp.json()
+        if not isinstance(data, dict):
+            return None
+        return data
+
+    async def _api_search_track(
+        self,
+        client: httpx.AsyncClient,
+        track: TrackMeta,
+        limit: int,
+    ) -> dict | None:
+        """Request Netease search payload for one track using production query strategy."""
+        query = f"{track.artist or ''} {track.title or ''}".strip()
+        if not query:
+            return None
+        return await self._api_search(client, query, limit)
+
+    async def _api_lyric(
+        self,
+        client: httpx.AsyncClient,
+        song_id: int,
+    ) -> dict | None:
+        """Issue one Netease lyric request and return JSON payload."""
+        resp = await client.post(
+            _NETEASE_LYRIC_URL,
+            headers=_NETEASE_BASE_HEADERS,
+            data={
+                "id": str(song_id),
+                "cp": "false",
+                "tv": "0",
+                "lv": "0",
+                "rv": "0",
+                "kv": "0",
+                "yv": "0",
+                "ytv": "0",
+                "yrv": "0",
+            },
+        )
+        resp.raise_for_status()
+        data = resp.json()
+        if not isinstance(data, dict):
+            return None
+        return data
+
+    async def _api_lyric_track(
+        self,
+        client: httpx.AsyncClient,
+        track: TrackMeta,
+        limit: int,
+    ) -> dict | None:
+        """Request lyric payload for top-ranked candidate of a track."""
+        search_data = await self._api_search_track(client, track, limit)
+        if search_data is None:
+            return None
+        candidates = _parse_netease_search(search_data)
+        if not candidates:
+            return None
+        ranked = select_ranked(
+            candidates,
+            track.length,
+            title=track.title,
+            artist=track.artist,
+            album=track.album,
+        )
+        if not ranked:
+            return None
+        top_song_id = ranked[0][0]
+        return await self._api_lyric(client, top_song_id)
+
+    async def _search(
+        self, track: TrackMeta, limit: int = 10
+    ) -> list[tuple[int, float]]:
+        query = f"{track.artist or ''} {track.title or ''}".strip()
+        if not query:
+            return []
+
+        logger.debug(f"Netease: searching for '{query}' (limit={limit})")
+
+        try:
+            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
+                result = await self._api_search_track(client, track, limit)
+
+            if result is None:
+                logger.error("Netease: search returned non-dict payload")
+                return []
+
+            candidates = _parse_netease_search(result)
+            if not candidates:
+                logger.debug("Netease: search returned 0 results")
+                return []
+
+            logger.debug(f"Netease: search returned {len(candidates)} candidates")
+            ranked = select_ranked(
+                candidates,
+                track.length,
+                title=track.title,
+                artist=track.artist,
+                album=track.album,
+            )
+            if ranked:
+                logger.debug(
+                    "Netease: top candidates: "
+                    + ", ".join(f"id={i} ({c:.0f})" for i, c in ranked)
+                )
+            else:
+                logger.debug("Netease: no suitable candidate found")
+            return ranked
+
+        except Exception as e:
+            logger.error(f"Netease: search failed: {e}")
+            return []
+
+    async def _get_lyric(self, song_id: int, confidence: float = 0.0) -> FetchResult:
+        logger.debug(f"Netease: fetching lyrics for song_id={song_id}")
+
+        try:
+            async with httpx.AsyncClient(timeout=self._general.http_timeout) as client:
+                data = await self._api_lyric(client, song_id)
+
+            if data is None:
+                logger.error("Netease: lyric response is not dict")
+                return FetchResult.from_network_error()
+
+            lrcdata = _parse_netease_lyrics(data)
+            if lrcdata is None:
+                logger.debug(f"Netease: empty lyrics for song_id={song_id}")
+                return FetchResult.from_not_found()
+            status = lrcdata.detect_sync_status()
+            logger.info(
+                f"Netease: got {status.value} lyrics for song_id={song_id} "
+                f"({len(lrcdata)} lines)"
+            )
+            not_found = LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
+            if status == CacheStatus.SUCCESS_SYNCED:
+                return FetchResult(
+                    synced=LyricResult(
+                        status=CacheStatus.SUCCESS_SYNCED,
+                        lyrics=lrcdata,
+                        source=self.source_name,
+                        confidence=confidence,
+                    ),
+                    unsynced=not_found,
+                )
+            return FetchResult(
+                synced=not_found,
+                unsynced=LyricResult(
+                    status=CacheStatus.SUCCESS_UNSYNCED,
+                    lyrics=lrcdata,
+                    source=self.source_name,
+                    confidence=confidence,
+                ),
+            )
+
+        except Exception as e:
+            logger.error(f"Netease: lyric fetch failed for song_id={song_id}: {e}")
+            return FetchResult.from_network_error()
+
+    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
+        query = f"{track.artist or ''} {track.title or ''}".strip()
+        if not query:
+            logger.debug("Netease: skipped — insufficient metadata")
+            return FetchResult()
+
+        logger.info(f"Netease: fetching lyrics for {track.display_name()}")
+        candidates = await self._search(track)
+        if not candidates:
+            logger.debug(f"Netease: no match found for {track.display_name()}")
+            return FetchResult.from_not_found()
+
+        res_synced: LyricResult = LyricResult(
+            status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
+        )
+        res_unsynced: LyricResult = LyricResult(
+            status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
+        )
+
+        for i, (song_id, confidence) in enumerate(candidates):
+            if i > 0:
+                await asyncio.sleep(MULTI_CANDIDATE_DELAY_S)
+            result = await self._get_lyric(song_id, confidence=confidence)
+            if result.synced and result.synced.status == CacheStatus.NETWORK_ERROR:
|
||||||
|
return result
|
||||||
|
if result.unsynced and result.unsynced.status == CacheStatus.NETWORK_ERROR:
|
||||||
|
return result
|
||||||
|
|
||||||
|
if (
|
||||||
|
res_synced.status == CacheStatus.NOT_FOUND
|
||||||
|
and result.synced
|
||||||
|
and result.synced.status == CacheStatus.SUCCESS_SYNCED
|
||||||
|
):
|
||||||
|
res_synced = result.synced
|
||||||
|
if (
|
||||||
|
res_unsynced.status == CacheStatus.NOT_FOUND
|
||||||
|
and result.unsynced
|
||||||
|
and result.unsynced.status == CacheStatus.SUCCESS_UNSYNCED
|
||||||
|
):
|
||||||
|
res_unsynced = result.unsynced
|
||||||
|
|
||||||
|
# Netease API is quite expensive, so we stop after finding synced lyrics,
|
||||||
|
# instead of trying to find both synced and unsynced versions
|
||||||
|
if (
|
||||||
|
res_synced.status == CacheStatus.SUCCESS_SYNCED
|
||||||
|
# and res_unsynced.status == CacheStatus.SUCCESS_UNSYNCED
|
||||||
|
):
|
||||||
|
break
|
||||||
|
|
||||||
|
return FetchResult(synced=res_synced, unsynced=res_unsynced)
|
||||||
@@ -0,0 +1,249 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-31 01:54:02
Description: QQ Music fetcher via self-hosted API proxy.

Requires a running qq-music-api instance.
The base URL is read from the QQ_MUSIC_API_URL environment variable.

Search → pick best match → fetch LRC lyrics.
"""

from __future__ import annotations

import asyncio

from loguru import logger

from .base import BaseFetcher, FetchResult
from .selection import SearchCandidate, select_ranked
from ..authenticators import QQMusicAuthenticator
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import LRCData
from ..config import (
    GeneralConfig,
    TTL_NOT_FOUND,
    MULTI_CANDIDATE_DELAY_S,
)


def _parse_qq_search(data: dict) -> list[SearchCandidate[str]]:
    """Parse QQMusic search response into normalized candidates."""
    if data.get("code") != 0:
        return []

    songs = data.get("data", {}).get("list", [])
    if not isinstance(songs, list):
        return []

    return [
        SearchCandidate(
            item=mid,
            duration_ms=float(song["interval"]) * 1000
            if isinstance(song.get("interval"), int)
            else None,
            title=song.get("name"),
            artist=", ".join(s.get("name", "") for s in song.get("singer", [])) or None,
            album=(song.get("album") or {}).get("name"),
        )
        for song in songs
        if isinstance(song, dict) and isinstance(mid := song.get("mid"), str)
    ]


def _parse_qq_lyrics(data: dict) -> LRCData | None:
    """Parse QQMusic lyric response to LRCData."""
    if data.get("code") != 0:
        return None

    lrc = data.get("data", {}).get("lyric", "")
    if not isinstance(lrc, str) or not lrc.strip():
        return None
    return LRCData(lrc)


class QQMusicFetcher(BaseFetcher):
    _auth: QQMusicAuthenticator

    def __init__(self, general: GeneralConfig, auth: QQMusicAuthenticator) -> None:
        super().__init__(general, auth)

    @property
    def source_name(self) -> str:
        return "qqmusic"

    def is_available(self, track: TrackMeta) -> bool:
        return bool(track.title) and self._auth.is_configured()

    async def _api_search(
        self,
        track: TrackMeta,
        limit: int,
    ) -> dict | None:
        """Return raw QQMusic search payload for one track."""
        query = f"{track.artist or ''} {track.title or ''}".strip()
        if not query:
            return None
        data = await self._auth.search(query, limit)
        if not isinstance(data, dict):
            return None
        return data

    async def _api_lyric(
        self,
        mid: str,
    ) -> dict | None:
        """Return raw QQMusic lyric payload for one song MID."""
        data = await self._auth.get_lyric(mid)
        if not isinstance(data, dict):
            return None
        return data

    async def _api_lyric_track(
        self,
        track: TrackMeta,
        limit: int,
    ) -> dict | None:
        """Return raw QQMusic lyric payload for top-ranked search candidate."""
        search_data = await self._api_search(track, limit)
        if search_data is None:
            return None

        candidates = _parse_qq_search(search_data)
        if not candidates:
            return None

        ranked = select_ranked(
            candidates,
            track.length,
            title=track.title,
            artist=track.artist,
            album=track.album,
        )
        if not ranked:
            return None

        mid = ranked[0][0]
        return await self._api_lyric(mid)

    async def _search(
        self, track: TrackMeta, limit: int = 10
    ) -> list[tuple[str, float]]:
        search_data = await self._api_search(track, limit)
        if search_data is None:
            return []

        query = f"{track.artist or ''} {track.title or ''}".strip()
        logger.debug(f"QQMusic: searching for '{query}' (limit={limit})")

        candidates = _parse_qq_search(search_data)
        if not candidates:
            logger.debug("QQMusic: search returned 0 results")
            return []

        logger.debug(f"QQMusic: search returned {len(candidates)} candidates")
        ranked = select_ranked(
            candidates,
            track.length,
            title=track.title,
            artist=track.artist,
            album=track.album,
        )
        if ranked:
            logger.debug(
                "QQMusic: top candidates: "
                + ", ".join(f"mid={m} ({c:.0f})" for m, c in ranked)
            )
        else:
            logger.debug("QQMusic: no suitable candidate found")
        return ranked

    async def _get_lyric(self, mid: str, confidence: float = 0.0) -> FetchResult:
        logger.debug(f"QQMusic: fetching lyrics for mid={mid}")
        data = await self._api_lyric(mid)
        if data is None:
            return FetchResult.from_network_error()

        lrcdata = _parse_qq_lyrics(data)
        if lrcdata is None:
            logger.debug(f"QQMusic: empty lyrics for mid={mid}")
            return FetchResult.from_not_found()

        status = lrcdata.detect_sync_status()
        logger.info(
            f"QQMusic: got {status.value} lyrics for mid={mid} ({len(lrcdata)} lines)"
        )
        not_found = LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
        if status == CacheStatus.SUCCESS_SYNCED:
            return FetchResult(
                synced=LyricResult(
                    status=CacheStatus.SUCCESS_SYNCED,
                    lyrics=lrcdata,
                    source=self.source_name,
                    confidence=confidence,
                ),
                unsynced=not_found,
            )
        return FetchResult(
            synced=not_found,
            unsynced=LyricResult(
                status=CacheStatus.SUCCESS_UNSYNCED,
                lyrics=lrcdata,
                source=self.source_name,
                confidence=confidence,
            ),
        )

    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
        if not self._auth.is_configured():
            logger.debug("QQMusic: skipped — auth not configured")
            return FetchResult()

        query = f"{track.artist or ''} {track.title or ''}".strip()
        if not query:
            logger.debug("QQMusic: skipped — insufficient metadata")
            return FetchResult()

        logger.info(f"QQMusic: fetching lyrics for {track.display_name()}")
        candidates = await self._search(track)
        if not candidates:
            logger.debug(f"QQMusic: no match found for {track.display_name()}")
            return FetchResult.from_not_found()

        res_synced: LyricResult = LyricResult(
            status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
        )
        res_unsynced: LyricResult = LyricResult(
            status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND
        )

        for i, (mid, confidence) in enumerate(candidates):
            if i > 0:
                await asyncio.sleep(MULTI_CANDIDATE_DELAY_S)
            result = await self._get_lyric(mid, confidence=confidence)
            if result.synced and result.synced.status == CacheStatus.NETWORK_ERROR:
                return result
            if result.unsynced and result.unsynced.status == CacheStatus.NETWORK_ERROR:
                return result

            if (
                res_synced.status == CacheStatus.NOT_FOUND
                and result.synced
                and result.synced.status == CacheStatus.SUCCESS_SYNCED
            ):
                res_synced = result.synced
            if (
                res_unsynced.status == CacheStatus.NOT_FOUND
                and result.unsynced
                and result.unsynced.status == CacheStatus.SUCCESS_UNSYNCED
            ):
                res_unsynced = result.unsynced

            # QQMusic API is quite expensive, so we stop after finding synced lyrics,
            # instead of trying to find both synced and unsynced versions
            if (
                res_synced.status == CacheStatus.SUCCESS_SYNCED
                # and res_unsynced.status == CacheStatus.SUCCESS_UNSYNCED
            ):
                break

        return FetchResult(synced=res_synced, unsynced=res_unsynced)
@@ -1,11 +1,15 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-04 11:32:23
Description: Shared candidate-selection logic for search-based fetchers.

Each fetcher maps its API-specific results to SearchCandidate, then calls
select_best() which scores candidates by metadata similarity, duration
proximity, and sync status.
"""

from __future__ import annotations

from dataclasses import dataclass
from typing import Generic, Optional, TypeVar
@@ -68,14 +72,12 @@ def _score_candidate(

    Scoring works in two tiers:

    Metadata score — computed from fields available on both sides,
    then rescaled to fill the 0-90 range so that missing fields don't
    inflate the score. Fields missing on both sides are simply excluded
    from the calculation (neutral). Fields present on only one side
    contribute 0 to the numerator but their weight still counts in the
    denominator (penalty for asymmetric absence).

    Synced bonus — a flat 10 pts, always applied independently.

    Field weights (before rescaling):
    - Title: 40
@@ -139,7 +141,10 @@ def _score_candidate(
        metadata_score = 0.0

    # Synced bonus (always 10 pts, independent of metadata)
    # synced_score = _W_SYNCED if c.is_synced else 0.0
    # EDIT: synced or not should not affect the score that indicates metadata similarity.
    # Always apply synced bonus regardless of is_synced.
    synced_score = _W_SYNCED

    return metadata_score + synced_score
@@ -0,0 +1,129 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 10:43:21
Description: Spotify fetcher — obtains synced lyrics via Spotify's internal color-lyrics API.
"""

from __future__ import annotations

from loguru import logger

from .base import BaseFetcher, FetchResult
from ..authenticators.spotify import SpotifyAuthenticator
from ..models import TrackMeta, LyricResult, CacheStatus
from ..lrc import LRCData
from ..config import GeneralConfig, TTL_NOT_FOUND

def _format_lrc_line(start_ms: int, words: str) -> str:
    minutes = start_ms // 60000
    seconds = (start_ms // 1000) % 60
    # Clamp so that e.g. 995 ms does not round up to 100 and break the tag width.
    centiseconds = min(round((start_ms % 1000) / 10.0), 99)
    return f"[{minutes:02d}:{seconds:02d}.{centiseconds:02d}]{words}"

def _is_truly_synced(lines: list[dict]) -> bool:
    for line in lines:
        try:
            ms = int(line.get("startTimeMs", "0"))
            if ms > 0:
                return True
        except (ValueError, TypeError):
            continue
    return False


def _parse_spotify_lyrics(data: dict) -> LRCData | None:
    """Parse Spotify color-lyrics payload to LRCData."""
    lyrics_data = data.get("lyrics")
    if not isinstance(lyrics_data, dict):
        return None

    sync_type = lyrics_data.get("syncType", "")
    lines = lyrics_data.get("lines", [])
    if not isinstance(lines, list) or len(lines) == 0:
        return None

    is_synced = sync_type == "LINE_SYNCED" and _is_truly_synced(lines)

    lrc_lines: list[str] = []
    for line in lines:
        if not isinstance(line, dict):
            continue
        words = line.get("words", "")
        if not isinstance(words, str):
            continue
        try:
            ms = int(line.get("startTimeMs", "0"))
        except (ValueError, TypeError):
            ms = 0

        if is_synced:
            lrc_lines.append(_format_lrc_line(ms, words))
        else:
            lrc_lines.append(f"[00:00.00]{words}")

    if not lrc_lines:
        return None
    return LRCData("\n".join(lrc_lines))


class SpotifyFetcher(BaseFetcher):
    _auth: SpotifyAuthenticator

    def __init__(self, general: GeneralConfig, auth: SpotifyAuthenticator) -> None:
        super().__init__(general, auth)

    @property
    def source_name(self) -> str:
        return "spotify"

    def is_available(self, track: TrackMeta) -> bool:
        return bool(track.trackid) and self._auth.is_configured()

    async def _api_lyrics(self, track: TrackMeta) -> dict | None:
        """Return raw Spotify lyrics payload for one track using production auth path."""
        if not track.trackid:
            return None
        data = await self._auth.get_lyrics(track.trackid)
        if not isinstance(data, dict):
            return None
        return data

    async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
        if not track.trackid:
            logger.debug("Spotify: skipped — no trackid in metadata")
            return FetchResult()

        logger.info(f"Spotify: fetching lyrics for trackid={track.trackid}")

        data = await self._api_lyrics(track)
        if data is None:
            logger.debug(f"Spotify: no lyrics payload for trackid={track.trackid}")
            return FetchResult.from_not_found()

        content = _parse_spotify_lyrics(data)
        if content is None:
            logger.debug("Spotify: response contained no parseable lyric lines")
            return FetchResult.from_not_found()

        status = content.detect_sync_status()
        logger.info(f"Spotify: got {status.value} lyrics ({len(content)} lines)")
        not_found = LyricResult(status=CacheStatus.NOT_FOUND, ttl=TTL_NOT_FOUND)
        if status == CacheStatus.SUCCESS_SYNCED:
            return FetchResult(
                synced=LyricResult(
                    status=CacheStatus.SUCCESS_SYNCED,
                    lyrics=content,
                    source=self.source_name,
                ),
                unsynced=not_found,
            )
        return FetchResult(
            synced=not_found,
            unsynced=LyricResult(
                status=CacheStatus.SUCCESS_UNSYNCED,
                lyrics=content,
                source=self.source_name,
            ),
        )
@@ -0,0 +1,465 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-03-25 21:54:01
Description: LRC parsing, modeling, and serialization helpers.
"""

from __future__ import annotations

from abc import ABC, abstractmethod
from dataclasses import dataclass, field
import re
from typing import Optional

from .models import CacheStatus

# Parses any time tag input format:
# [mm:ss], [mm:ss.c], [mm:ss.cc], [mm:ss.ccc], [mm:ss:cc], …
_RAW_TAG_RE = re.compile(r"\[(\d{2,}):(\d{2})(?:[.:](\d{1,3}))?\]")

# One or more leading bracket tags at line start.
# Used to strip start tags in plain-mode fallback.
_LINE_START_TAGS_RE = re.compile(r"^(?:\[[^\]]*\])+", re.MULTILINE)

# Timed word-sync tags: <mm:ss>, <mm:ss.c>, <mm:ss.cc>, <mm:ss:cc>
_WORD_SYNC_TAG_RE = re.compile(r"<(\d{2,}):(\d{2})(?:[.:](\d{1,3}))?>")

# A single doc-level tag line: [key:value].
# Disallow nested [] in value so multi-tag lines are not treated as doc tags.
_DOC_TAG_RE = re.compile(r"^\[([^:\]\[]+):([^\[\]]*)\]$")

# QRC uses a different format and is intentionally out of scope here.


def _remove_pattern(text: str, pattern: re.Pattern) -> str:
    """Remove all occurrences of pattern from text, then strip leading/trailing whitespace."""
    return pattern.sub("", text).strip()


def _raw_tag_to_ms(mm: str, ss: str, frac: Optional[str]) -> int:
    """Convert parsed time tag components to total milliseconds."""
    if frac is None:
        ms = 0
    else:
        n = len(frac)
        if n == 1:
            ms = int(frac) * 100
        elif n == 2:
            ms = int(frac) * 10
        else:
            ms = int(frac)
    return (int(mm) * 60 + int(ss)) * 1000 + ms


def _ms_to_std_tag(total_ms: int) -> str:
    mm = max(0, total_ms) // 60000
    ss = (max(0, total_ms) % 60000) // 1000
    cs = min(round((max(0, total_ms) % 1000) / 10), 99)
    return f"[{mm:02d}:{ss:02d}.{cs:02d}]"


def _ms_to_word_tag(total_ms: int) -> str:
    mm = max(0, total_ms) // 60000
    ss = (max(0, total_ms) % 60000) // 1000
    cs = min(round((max(0, total_ms) % 1000) / 10), 99)
    return f"<{mm:02d}:{ss:02d}.{cs:02d}>"


@dataclass(frozen=True)
class LrcWordSegment:
    text: str
    time_ms: Optional[int] = None
    duration_ms: Optional[int] = None


class BaseLine(ABC):
    """Common line interface for rendering and text extraction."""

    @property
    @abstractmethod
    def text(self) -> str:
        """Return plain text content for this line."""

    @abstractmethod
    def to_text(self, include_word_sync: bool) -> str:
        """Return full serialized line text."""

    @abstractmethod
    def to_plain_unsynced(self) -> Optional[str]:
        """Return this line's plain-text contribution in unsynced mode."""

    @abstractmethod
    def timed_plain_entries(self) -> list[tuple[int, str]]:
        """Return (timestamp_ms, text) entries for synced plain-mode output."""

    def has_nonzero_timestamp(self) -> bool:
        return any(ts > 0 for ts, _ in self.timed_plain_entries())


@dataclass
class DocTagLine(BaseLine):
    """Represents a single doc tag line like [ar:Artist]."""

    key: str
    value: str

    @property
    def text(self) -> str:
        return f"[{self.key}:{self.value}]"

    def to_text(self, include_word_sync: bool) -> str:
        return self.text

    def to_plain_unsynced(self) -> Optional[str]:
        return None

    def timed_plain_entries(self) -> list[tuple[int, str]]:
        return []


@dataclass
class LyricLine(BaseLine):
    """Lyric line with optional line-level timestamps."""

    line_times_ms: list[int] = field(default_factory=list)
    words: list[LrcWordSegment] = field(default_factory=list)

    @property
    def text(self) -> str:
        return "".join(seg.text for seg in self.words)

    def to_text(self, include_word_sync: bool) -> str:
        prefix = "".join(_ms_to_std_tag(ms) for ms in self.line_times_ms)
        return prefix + self.text

    def to_plain_unsynced(self) -> Optional[str]:
        return _remove_pattern(self.text, _LINE_START_TAGS_RE)

    def timed_plain_entries(self) -> list[tuple[int, str]]:
        return [(tag_ms, self.text) for tag_ms in self.line_times_ms]


@dataclass
class WordSyncLyricLine(LyricLine):
    """Lyric line that can render per-word sync tags when requested."""

    def to_text(self, include_word_sync: bool) -> str:
        prefix = "".join(_ms_to_std_tag(ms) for ms in self.line_times_ms)
        if not include_word_sync:
            return prefix + self.text
        parts: list[str] = []
        for seg in self.words:
            if seg.time_ms is not None:
                parts.append(_ms_to_word_tag(seg.time_ms))
            parts.append(seg.text)
        return prefix + "".join(parts)


def _split_trimmed_lines(text: str) -> list[str]:
    """Split text into lines, strip each line, and drop outer blank lines."""

    lines = [line.strip() for line in text.splitlines()]
    while lines and not lines[0].strip():
        lines.pop(0)
    while lines and not lines[-1].strip():
        lines.pop()
    return lines


def _extract_leading_line_tags(line: str) -> tuple[list[int], str]:
    """Parse leading line-sync tags and return (times_ms, lyric_part).

    Spaces between consecutive leading tags are dropped. If non-space text
    appears, parsing of leading tags stops and the remainder is lyric text.
    """
    pos = 0
    tags_ms: list[int] = []
    while True:
        m = _RAW_TAG_RE.match(line, pos)
        if not m:
            break
        tags_ms.append(_raw_tag_to_ms(m.group(1), m.group(2), m.group(3)))
        pos = m.end()

        # Allow spaces only between consecutive leading tags.
        # We only check for '[' here; the next loop decides whether it is a valid time tag.
        scan = pos
        while scan < len(line) and line[scan].isspace():
            scan += 1
        if scan < len(line) and line[scan] == "[":
            pos = scan
            continue
        pos = scan
        break
    return tags_ms, line[pos:]


def _parse_word_segments(lyric_part: str) -> tuple[list[LrcWordSegment], bool]:
    """Parse timed word-sync tags while preserving all lyric text exactly."""
    segments: list[LrcWordSegment] = []
    cursor = 0
    current_time: Optional[int] = None
    has_word_sync = False

    for m in _WORD_SYNC_TAG_RE.finditer(lyric_part):
        piece = lyric_part[cursor : m.start()]
        if piece:
            segments.append(LrcWordSegment(text=piece, time_ms=current_time))
        current_time = _raw_tag_to_ms(m.group(1), m.group(2), m.group(3))
        has_word_sync = True
        cursor = m.end()

    tail = lyric_part[cursor:]
    if tail or not segments:
        segments.append(
            LrcWordSegment(
                text=tail,
                time_ms=current_time if has_word_sync else None,
            )
        )
    return segments, has_word_sync


def _is_single_doc_tag_line(line: str) -> Optional[tuple[str, str]]:
    """Return (key, value) only for standalone single doc-tag lines."""

    if _RAW_TAG_RE.fullmatch(line):
        return None
    m = _DOC_TAG_RE.fullmatch(line)
    if not m:
        return None
    key = m.group(1).strip()
    value = m.group(2).strip()
    return key, value


def _parse_offset_value(value: str) -> Optional[int]:
    """Parse doc offset value in milliseconds, returning None for invalid values."""
    try:
        return int(value.strip())
    except ValueError:
        return None


class LRCData:
    _lines: list[BaseLine]
    _doc_tags: dict[str, str]

    def __init__(self, text: Optional[str] = None) -> None:
        self._doc_tags = {}
        if not text:
            self._lines = []
            return

        raw_lines = _split_trimmed_lines(text)
        parsed: list[BaseLine] = []

        for raw in raw_lines:
            maybe_tag = _is_single_doc_tag_line(raw)
            if maybe_tag is not None:
                key, value = maybe_tag
                self._doc_tags[key] = value
                parsed.append(DocTagLine(key=key, value=value))
                continue

            tags_ms, lyric_part = _extract_leading_line_tags(raw)
            words, has_word_sync = _parse_word_segments(lyric_part if tags_ms else raw)

            if has_word_sync:
                parsed.append(WordSyncLyricLine(line_times_ms=tags_ms, words=words))
            else:
                parsed.append(LyricLine(line_times_ms=tags_ms, words=words))

        self._lines = parsed

    def __str__(self) -> str:
        return self._serialize_lines(self._lines, include_word_sync=True)

    def __repr__(self) -> str:
        return f"LRCData(doc_tags={self._doc_tags!r}, lines={self._lines!r})"

    def __len__(self) -> int:
        return len(self._lines)

    @property
    def tags(self) -> dict[str, str]:
        return self._doc_tags

    @property
    def lines(self) -> list[BaseLine]:
        return self._lines

    def is_synced(self) -> bool:
        """Return True if any lyric line contains a non-zero line timestamp."""
        return any(line.has_nonzero_timestamp() for line in self._lines)

    def detect_sync_status(self) -> CacheStatus:
        """Map sync detection result to cache status."""
        return (
            CacheStatus.SUCCESS_SYNCED
            if self.is_synced()
            else CacheStatus.SUCCESS_UNSYNCED
        )

    def normalize_unsynced(self) -> "LRCData":
        """Convert lyrics into unsynced LRC form with [00:00.00] tags.

        - Leading blank lyric lines are skipped.
        - Middle blank lyric lines are preserved as empty synced lines.
        - Doc-tag lines are preserved unchanged.
        """
        out: list[BaseLine] = []
        first = True
        for line in self._lines:
            if isinstance(line, DocTagLine):
                out.append(DocTagLine(key=line.key, value=line.value))
                continue

            assert isinstance(line, LyricLine)

            stripped = line.text.strip()
            if not stripped and not first:
                out.append(
                    LyricLine(line_times_ms=[0], words=[LrcWordSegment(text="")])
                )
                continue
            elif not stripped:
                continue
            first = False
            out.append(
                LyricLine(
                    line_times_ms=[0],
                    words=[LrcWordSegment(text=line.text)],
                )
            )
        ret = LRCData()
        ret._lines = out
        ret._doc_tags = dict(self._doc_tags)
        return ret

    def normalize(self) -> "LRCData":
        """Normalize LRC for decode/export oriented output.

        Rules:
        - Move all doc tags to the beginning, preserving line order and duplicates.
        - Keep doc tags unchanged except removing all offset tags.
        - Remove word-sync tags.
        - Convert untagged non-empty lyric lines to [00:00.00] lyrics.
        - Drop empty lyric lines.
        - Expand lyric lines with multiple time tags into one line per tag.
        - Apply offset (ms) to lyric timestamps and sort by timestamp.
        """
        out_doc_tags: list[DocTagLine] = []
        lyric_entries: list[tuple[int, str]] = []
        offset_ms = 0

        # Resolve offset first so it applies to all lyric lines, independent of tag position.
        for line in self._lines:
            if isinstance(line, DocTagLine) and line.key.strip().lower() == "offset":
                parsed_offset = _parse_offset_value(line.value)
                if parsed_offset is not None:
                    offset_ms = parsed_offset

        for line in self._lines:
            if isinstance(line, DocTagLine):
                if line.key.strip().lower() == "offset":
                    continue
                out_doc_tags.append(DocTagLine(key=line.key, value=line.value))
                continue

            assert isinstance(line, LyricLine)

            lyric_text = line.text
            if not lyric_text.strip():
                continue

            line_times = line.line_times_ms if line.line_times_ms else [0]
            for time_ms in line_times:
|
||||||
|
shifted = max(0, time_ms + offset_ms)
|
||||||
|
lyric_entries.append((shifted, lyric_text))
|
||||||
|
|
||||||
|
# Sort by timestamp; original index as tiebreaker so equal-time entries
|
||||||
|
# retain the order they appeared in the input.
|
||||||
|
lyric_entries = [
|
||||||
|
e
|
||||||
|
for _, e in sorted(enumerate(lyric_entries), key=lambda x: (x[1][0], x[0]))
|
||||||
|
]
|
||||||
|
|
||||||
|
out_lyrics: list[LyricLine] = [
|
||||||
|
LyricLine(line_times_ms=[time_ms], words=[LrcWordSegment(text=text)])
|
||||||
|
for time_ms, text in lyric_entries
|
||||||
|
]
|
||||||
|
|
||||||
|
ret = LRCData()
|
||||||
|
ret._lines = [*out_doc_tags, *out_lyrics]
|
||||||
|
ret._doc_tags = {line.key: line.value for line in out_doc_tags}
|
||||||
|
return ret
|
||||||
|
|
||||||
|
def to_plain(
|
||||||
|
self,
|
||||||
|
deduplicate: bool = False,
|
||||||
|
) -> str:
|
||||||
|
"""Convert lyrics to plain text with all tags stripped.
|
||||||
|
|
||||||
|
If synced, output is sorted by line timestamp and duplicated for multi-tag lines.
|
||||||
|
If not synced, leading bracket tags are stripped per line and original order is kept.
|
||||||
|
If deduplicate is True, only consecutive duplicate plain lines are collapsed.
|
||||||
|
"""
|
||||||
|
|
||||||
|
if not self.is_synced():
|
||||||
|
plain_lines = [
|
||||||
|
text
|
||||||
|
for text in (line.to_plain_unsynced() for line in self._lines)
|
||||||
|
if text is not None
|
||||||
|
]
|
||||||
|
return "\n".join(plain_lines).strip("\n")
|
||||||
|
|
||||||
|
tagged_lines: list[tuple[int, str]] = []
|
||||||
|
for line in self._lines:
|
||||||
|
tagged_lines.extend(line.timed_plain_entries())
|
||||||
|
|
||||||
|
sorted_lines = [
|
||||||
|
lyric
|
||||||
|
for _, (_, lyric) in sorted(
|
||||||
|
enumerate(tagged_lines), key=lambda x: (x[1][0], x[0])
|
||||||
|
)
|
||||||
|
]
|
||||||
|
|
||||||
|
if deduplicate:
|
||||||
|
# Remove consecutive duplicates
|
||||||
|
deduped_lines = []
|
||||||
|
prev_line = None
|
||||||
|
for line in sorted_lines:
|
||||||
|
if line != prev_line:
|
||||||
|
deduped_lines.append(line)
|
||||||
|
prev_line = line
|
||||||
|
sorted_lines = deduped_lines
|
||||||
|
|
||||||
|
return "\n".join(sorted_lines).strip()
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def _serialize_lines(lines: list[BaseLine], include_word_sync: bool) -> str:
|
||||||
|
return "\n".join(
|
||||||
|
line.to_text(include_word_sync=include_word_sync) for line in lines
|
||||||
|
)
|
||||||
|
|
||||||
|
def to_text(
|
||||||
|
self,
|
||||||
|
include_word_sync: bool = False,
|
||||||
|
) -> str:
|
||||||
|
"""Serialize to non-normalized LRC text.
|
||||||
|
|
||||||
|
- Unsynced lyrics are converted to [00:00.00]-tagged form.
|
||||||
|
- include_word_sync only controls rendering of per-word tags.
|
||||||
|
- This method does not apply normalize() rules.
|
||||||
|
"""
|
||||||
|
res = self if self.is_synced() else self.normalize_unsynced()
|
||||||
|
return self._serialize_lines(res._lines, include_word_sync=include_word_sync)
|
||||||
|
|
||||||
|
def to_normalized_text(self) -> str:
|
||||||
|
"""Serialize using normalize() rules.
|
||||||
|
|
||||||
|
Normalized output always strips word-sync tags.
|
||||||
|
"""
|
||||||
|
normalized = self.normalize()
|
||||||
|
return self._serialize_lines(normalized._lines, include_word_sync=False)
|
||||||
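Both normalize() and to_plain() above use the same enumerate-based sort to order entries by timestamp while keeping input order for ties. A minimal standalone sketch (with made-up entries, not lrx_cli data):

```python
# Sort (timestamp, text) entries by timestamp, using the original index as
# tiebreaker so equal-time entries keep the order they appeared in the input.
entries = [(5000, "line B"), (0, "line A"), (5000, "line C")]

sorted_entries = [
    e
    for _, e in sorted(enumerate(entries), key=lambda x: (x[1][0], x[0]))
]

print(sorted_entries)
# [(0, 'line A'), (5000, 'line B'), (5000, 'line C')]
```

Note that Python's sorted() is already stable, so the explicit index tiebreaker mainly documents the intent.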
@@ -1,7 +1,7 @@
 """
 Author: Uyanide pywang0608@foxmail.com
 Date: 2026-03-25 04:09:36
-Description: Data models
+Description: Data models.
 """

 from __future__ import annotations
@@ -1,21 +1,24 @@
 """
 Author: Uyanide pywang0608@foxmail.com
 Date: 2026-03-25 04:44:15
-Description: MPRIS integration for fetching track metadata
+Description: MPRIS integration for fetching track metadata.
 """

+from __future__ import annotations
+
 import asyncio
 from dbus_next.aio.message_bus import MessageBus
 from dbus_next.constants import BusType
 from dbus_next.message import Message
-from lrx_cli.models import TrackMeta
-from lrx_cli.config import PREFERRED_PLAYER
 from loguru import logger
 from typing import Optional, List, Any

+from .config import DEFAULT_PLAYER_BLACKLIST, DEFAULT_PREFERRED_PLAYER
+from .models import TrackMeta
+

 async def _list_mpris_players(bus: MessageBus) -> List[str]:
-    """List all MPRIS player bus names."""
+    """List all MPRIS player bus names without any filtering."""
     try:
         reply = await bus.call(
             Message(
@@ -52,47 +55,79 @@ async def _get_playback_status(bus: MessageBus, player_name: str) -> Optional[st
     return None


+def pick_active_player(
+    all_names: list[str],
+    playing: list[str],
+    preferred: str,
+    last_active: str | None = None,
+) -> str | None:
+    """Select the best MPRIS player by play state, preferred keyword, and continuity.
+
+    Priority: single playing > preferred keyword among playing > preferred keyword
+    among all candidates > last active > first candidate.
+    """
+    if not all_names:
+        return None
+    if len(playing) == 1:
+        return playing[0]
+    candidates = playing if playing else all_names
+    preferred_lower = preferred.lower().strip()
+    if preferred_lower:
+        for name in candidates:
+            if preferred_lower in name.lower():
+                return name
+    if last_active and last_active in all_names:
+        return last_active
+    return candidates[0] if candidates else None
+
+
 async def _select_player(
-    bus: MessageBus, specific_player: Optional[str] = None
+    bus: MessageBus,
+    specific_player: Optional[str],
+    preferred_player: str,
+    player_blacklist: tuple[str, ...],
 ) -> Optional[str]:
     """Select the best MPRIS player.

-    When specific_player is given, filter by name match.
+    When specific_player is given, it bypasses player_blacklist and filters by name.
     Otherwise: prefer the currently playing player. If multiple are playing,
-    prefer the one matching PREFERRED_PLAYER env var (default: spotify).
+    prefer the one matching preferred_player (default: spotify).
     """
-    players = await _list_mpris_players(bus)
-    if not players:
+    all_names = await _list_mpris_players(bus)
+    if not all_names:
         return None

     if specific_player:
-        players = [p for p in players if specific_player.lower() in p.lower()]
-        return players[0] if players else None
+        # --player bypasses player_blacklist so the user can target any player
+        matched = [p for p in all_names if specific_player.lower() in p.lower()]
+        return matched[0] if matched else None

-    # Check playback status for each player
-    playing = []
-    for p in players:
+    # auto-selection: apply blacklist before choosing
+    # candidates = []
+    # for p in all_names:
+    #     if any(x.lower() in p.lower() for x in player_blacklist):
+    #         logger.info(f"Excluding blacklisted player: {p}")
+    #     else:
+    #         candidates.append(p)
+    candidates = [
+        p
+        for p in all_names
+        if not any(x.lower() in p.lower() for x in player_blacklist)
+    ]
+    playing: list[str] = []
+    for p in candidates:
         status = await _get_playback_status(bus, p)
         logger.debug(f"Player {p}: {status}")
         if status == "Playing":
             playing.append(p)

-    candidates = playing if playing else players
-
-    if len(candidates) == 1:
-        return candidates[0]
-
-    # Multiple candidates: prefer PREFERRED_PLAYER
-    preferred = PREFERRED_PLAYER.lower()
-    if preferred:
-        for p in candidates:
-            if preferred in p.lower():
-                return p
-    return candidates[0]
+    return pick_active_player(candidates, playing, preferred_player)


 async def _fetch_metadata_dbus(
-    specific_player: Optional[str] = None,
+    specific_player: Optional[str],
+    preferred_player: str,
+    player_blacklist: tuple[str, ...],
 ) -> Optional[TrackMeta]:
     bus = None
     try:
@@ -102,7 +137,9 @@ async def _fetch_metadata_dbus(
         return None

     try:
-        player_name = await _select_player(bus, specific_player)
+        player_name = await _select_player(
+            bus, specific_player, preferred_player, player_blacklist
+        )
         if not player_name:
             logger.debug(
                 f"No active MPRIS players found via DBus{' for ' + specific_player if specific_player else ''}."
@@ -182,9 +219,15 @@ async def _fetch_metadata_dbus(
         bus.disconnect()


-def get_current_track(player_name: Optional[str] = None) -> Optional[TrackMeta]:
+def get_current_track(
+    player_name: Optional[str] = None,
+    preferred_player: str = DEFAULT_PREFERRED_PLAYER,
+    player_blacklist: tuple[str, ...] = DEFAULT_PLAYER_BLACKLIST,
+) -> Optional[TrackMeta]:
     try:
-        return asyncio.run(_fetch_metadata_dbus(player_name))
+        return asyncio.run(
+            _fetch_metadata_dbus(player_name, preferred_player, player_blacklist)
+        )
     except Exception as e:
         logger.error(f"DBus async loop failed: {e}")
         return None
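The selection priority that pick_active_player encodes can be exercised without DBus. A standalone re-implementation of the function from the diff, with made-up bus names for illustration:

```python
# Same priority as in the diff: single playing player wins; otherwise the
# preferred keyword decides among playing (or all) candidates; then the
# last active player; then the first candidate.
def pick_active_player(all_names, playing, preferred, last_active=None):
    if not all_names:
        return None
    if len(playing) == 1:
        return playing[0]
    candidates = playing if playing else all_names
    preferred_lower = preferred.lower().strip()
    if preferred_lower:
        for name in candidates:
            if preferred_lower in name.lower():
                return name
    if last_active and last_active in all_names:
        return last_active
    return candidates[0] if candidates else None


names = ["org.mpris.MediaPlayer2.vlc", "org.mpris.MediaPlayer2.spotify"]
# nothing playing: the preferred keyword decides among all candidates
print(pick_active_player(names, [], "spotify"))
# exactly one player playing: it wins regardless of preference
print(pick_active_player(names, ["org.mpris.MediaPlayer2.vlc"], "spotify"))
```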
@@ -1,8 +1,11 @@
 """
-Shared text normalization utilities for fuzzy matching.
-
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-04-02 05:24:27
+Description: Shared text normalization utilities for fuzzy matching.
 Used by cache key generation, cache search, and candidate selection scoring.
 """

+from __future__ import annotations
+
 import re
 import unicodedata
@@ -0,0 +1,111 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-04-10 17:06:37
+Description: Utility functions
+"""
+
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Optional
+from urllib.parse import unquote
+from pathlib import Path
+
+from .models import CacheStatus
+
+if TYPE_CHECKING:
+    from .models import LyricResult
+
+
+# Paths
+
+
+def get_audio_path(audio_url: str, ensure_exists: bool = False) -> Optional[Path]:
+    """Convert file:// URL to Path, return None if invalid or (if ensure_exists) file doesn't exist."""
+    if not audio_url.startswith("file://"):
+        return None
+    file_path = unquote(audio_url.replace("file://", "", 1))
+    path = Path(file_path)
+    if ensure_exists and not path.exists():
+        return None
+    return path
+
+
+def get_sidecar_path(
+    audio_url: str,
+    ensure_audio_exists: bool = False,
+    ensure_exists: bool = False,
+    extension: str = ".lrc",
+) -> Optional[Path]:
+    """Given a file:// URL, return the corresponding .lrc sidecar path.
+
+    If ensure_audio_exists is True, return None if the audio file does not exist.
+    If ensure_exists is True, return None if the .lrc file does not exist.
+    """
+    audio_path = get_audio_path(audio_url, ensure_exists=ensure_audio_exists)
+    if not audio_path:
+        return None
+    lrc_path = audio_path.with_suffix(extension)
+    if ensure_exists and not lrc_path.exists():
+        return None
+    return lrc_path
+
+
+# Ranking
+
+
+def is_positive_status(status: CacheStatus) -> bool:
+    return status in (CacheStatus.SUCCESS_SYNCED, CacheStatus.SUCCESS_UNSYNCED)
+
+
+def is_better_result(
+    new: LyricResult,
+    old: LyricResult,
+    *,
+    allow_unsynced: bool,
+) -> bool:
+    """Return True when new should rank above old.
+
+    Ordering rules (highest first):
+    1) Positive statuses always beat negative statuses.
+    2) When allow_unsynced=False, SUCCESS_SYNCED always beats SUCCESS_UNSYNCED.
+    3) Higher confidence beats lower confidence.
+    4) On equal confidence, SUCCESS_SYNCED beats SUCCESS_UNSYNCED.
+    """
+    new_positive = is_positive_status(new.status)
+    old_positive = is_positive_status(old.status)
+
+    if not new_positive:
+        return False
+    if not old_positive:
+        return True
+
+    new_synced = new.status == CacheStatus.SUCCESS_SYNCED
+    old_synced = old.status == CacheStatus.SUCCESS_SYNCED
+
+    if not allow_unsynced and new_synced != old_synced:
+        return new_synced
+
+    if new.confidence != old.confidence:
+        return new.confidence > old.confidence
+
+    return new_synced and not old_synced
+
+
+def select_best_positive(
+    candidates: list[LyricResult],
+    *,
+    allow_unsynced: bool,
+) -> Optional[LyricResult]:
+    """Pick best positive LyricResult from candidates.
+
+    Negative statuses are ignored.
+    """
+    positives = [c for c in candidates if is_positive_status(c.status)]
+    if not positives:
+        return None
+
+    best = positives[0]
+    for c in positives[1:]:
+        if is_better_result(c, best, allow_unsynced=allow_unsynced):
+            best = c
+    return best
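The ordering rules in is_better_result can be checked in isolation. A minimal sketch using simplified stand-ins for CacheStatus and LyricResult (the real models in lrx_cli carry more fields; the statuses and field names here only mirror what the diff shows):

```python
from dataclasses import dataclass
from enum import Enum, auto


class CacheStatus(Enum):
    SUCCESS_SYNCED = auto()
    SUCCESS_UNSYNCED = auto()
    FAILED = auto()


@dataclass
class LyricResult:
    status: CacheStatus
    confidence: float


def is_positive_status(status):
    return status in (CacheStatus.SUCCESS_SYNCED, CacheStatus.SUCCESS_UNSYNCED)


def is_better_result(new, old, *, allow_unsynced):
    # same rule order as the diff: positivity, sync preference, confidence,
    # then synced-beats-unsynced as the final tiebreaker
    if not is_positive_status(new.status):
        return False
    if not is_positive_status(old.status):
        return True
    new_synced = new.status == CacheStatus.SUCCESS_SYNCED
    old_synced = old.status == CacheStatus.SUCCESS_SYNCED
    if not allow_unsynced and new_synced != old_synced:
        return new_synced
    if new.confidence != old.confidence:
        return new.confidence > old.confidence
    return new_synced and not old_synced


synced_low = LyricResult(CacheStatus.SUCCESS_SYNCED, 0.6)
unsynced_high = LyricResult(CacheStatus.SUCCESS_UNSYNCED, 0.9)

# rule 2: with allow_unsynced=False, synced wins despite lower confidence
print(is_better_result(synced_low, unsynced_high, allow_unsynced=False))
# rule 3: with allow_unsynced=True, confidence decides instead
print(is_better_result(synced_low, unsynced_high, allow_unsynced=True))
```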
@@ -0,0 +1,5 @@
+from __future__ import annotations
+
+from .session import WatchCoordinator
+
+__all__ = ["WatchCoordinator"]
@@ -0,0 +1,154 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-04-10 08:14:58
+Description: Unix-socket control channel for communicating with a running watch session.
+"""
+
+from __future__ import annotations
+
+
+import asyncio
+import json
+from pathlib import Path
+from typing import TYPE_CHECKING
+
+from loguru import logger
+
+if TYPE_CHECKING:
+    from .session import WatchCoordinator
+
+
+class ControlServer:
+    """Control server that handles offset/status commands over a Unix socket."""
+
+    _socket_path: Path
+    _server: asyncio.AbstractServer | None
+
+    def __init__(self, socket_path: str) -> None:
+        """Initialize control server with socket path from config or explicit override."""
+        self._socket_path = Path(socket_path)
+        self._server: asyncio.AbstractServer | None = None
+
+    async def start(self, session: "WatchCoordinator") -> bool:
+        """Start listening for control requests and bind session handlers."""
+        if not await self._prepare_socket_path():
+            return False
+
+        self._socket_path.parent.mkdir(parents=True, exist_ok=True)
+        self._server = await asyncio.start_unix_server(
+            lambda r, w: self._handle(session, r, w),
+            path=str(self._socket_path),
+        )
+        return True
+
+    async def _prepare_socket_path(self) -> bool:
+        """Ensure socket path is usable and reject when another session is active."""
+        if not self._socket_path.exists():
+            return True
+
+        try:
+            # probe the socket to distinguish a live session from a stale socket file
+            reader, writer = await asyncio.open_unix_connection(str(self._socket_path))
+            writer.close()
+            await writer.wait_closed()
+            # connection succeeded → another watch session is actively listening
+            logger.error(
+                "A watch session is already running. Use 'lrx watch ctl status'."
+            )
+            return False
+        except Exception:
+            # connection refused / file is stale → safe to remove and reuse
+            try:
+                self._socket_path.unlink(missing_ok=True)
+            except Exception:
+                pass
+            return True
+
+    async def stop(self) -> None:
+        """Stop control server and remove stale socket path."""
+        if self._server is not None:
+            self._server.close()
+            await self._server.wait_closed()
+            self._server = None
+        try:
+            self._socket_path.unlink(missing_ok=True)
+        except Exception:
+            pass
+
+    async def _handle(
+        self,
+        session: "WatchCoordinator",
+        reader: asyncio.StreamReader,
+        writer: asyncio.StreamWriter,
+    ) -> None:
+        """Handle one control request and send JSON response."""
+        resp: dict[str, object] = {"ok": False, "error": "internal error"}
+        try:
+            line = await reader.readline()
+            if not line:
+                resp = {"ok": False, "error": "empty request"}
+            else:
+                req = json.loads(line.decode("utf-8"))
+                cmd = req.get("cmd")
+                if cmd == "offset":
+                    delta = int(req.get("delta", 0))
+                    resp = session.handle_offset(delta)
+                elif cmd == "status":
+                    resp = session.handle_status()
+                else:
+                    resp = {"ok": False, "error": "unknown command"}
+        except Exception as e:
+            resp = {"ok": False, "error": str(e)}
+        finally:
+            writer.write((json.dumps(resp) + "\n").encode("utf-8"))
+            await writer.drain()
+            writer.close()
+            await writer.wait_closed()
+
+
+class ControlClient:
+    """Control client used by CLI commands to talk to active watch session."""
+
+    _socket_path: Path
+
+    def __init__(self, socket_path: str) -> None:
+        """Initialize control client with socket path from config or explicit override."""
+        self._socket_path = Path(socket_path)
+
+    async def _send_async(self, cmd: dict[str, object]) -> dict[str, object]:
+        """Send one JSON command to control server and return JSON response."""
+        if not self._socket_path.exists():
+            return {"ok": False, "error": "No watch session running."}
+
+        try:
+            reader, writer = await asyncio.open_unix_connection(str(self._socket_path))
+        except Exception:
+            return {"ok": False, "error": "No watch session running."}
+
+        writer.write((json.dumps(cmd) + "\n").encode("utf-8"))
+        await writer.drain()
+        line = await reader.readline()
+        writer.close()
+        await writer.wait_closed()
+        if not line:
+            return {"ok": False, "error": "Empty response."}
+        return json.loads(line.decode("utf-8"))
+
+    def send(self, cmd: dict[str, object]) -> dict[str, object]:
+        """Synchronous wrapper around async control request."""
+        return asyncio.run(self._send_async(cmd))
+
+
+def parse_delta(raw: str) -> tuple[bool, int | None, str | None]:
+    """Parse signed millisecond offset delta string for ctl offset command."""
+    value = raw.strip()
+    try:
+        if value.startswith("+"):
+            return True, int(value[1:]), None
+        if value.startswith("-"):
+            # keep the sign by negating; bare int() would accept "-123" too but
+            # explicit split is clearer about intent and avoids double-negative edge cases
+            return True, -int(value[1:]), None
+        return True, int(value), None
+    except ValueError:
+        return False, None, f"Invalid offset delta: {raw}"
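parse_delta accepts three input forms for the ctl offset argument. A standalone copy of the function from the diff, showing each form:

```python
# Parse a signed millisecond offset delta: "+250", "-250", or a bare "250".
def parse_delta(raw: str):
    value = raw.strip()
    try:
        if value.startswith("+"):
            return True, int(value[1:]), None
        if value.startswith("-"):
            return True, -int(value[1:]), None
        return True, int(value), None
    except ValueError:
        return False, None, f"Invalid offset delta: {raw}"


print(parse_delta("+250"))  # (True, 250, None)
print(parse_delta("-250"))  # (True, -250, None)
print(parse_delta("250"))   # (True, 250, None)
print(parse_delta("oops"))  # (False, None, 'Invalid offset delta: oops')
```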
@@ -0,0 +1,89 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-04-10 08:14:41
+Description: Debounced lyric fetch orchestration for watch session.
+"""
+
+from __future__ import annotations
+
+
+import asyncio
+from typing import Awaitable, Callable, Optional
+
+from ..lrc import LRCData
+from ..models import TrackMeta
+
+
+class LyricFetcher:
+    """Debounces track updates and runs at most one lyric fetch task at a time."""
+
+    _watch_debounce_ms: int
+    _fetch_func: Callable[[TrackMeta], Awaitable[Optional[LRCData]]]
+    _on_fetching: Callable[[], Awaitable[None] | None]
+    _on_result: Callable[[Optional[LRCData]], Awaitable[None] | None]
+    _debounce_task: asyncio.Task | None
+    _fetch_task: asyncio.Task | None
+    _pending_track: TrackMeta | None
+
+    def __init__(
+        self,
+        fetch_func: Callable[[TrackMeta], Awaitable[Optional[LRCData]]],
+        on_fetching: Callable[[], Awaitable[None] | None],
+        on_result: Callable[[Optional[LRCData]], Awaitable[None] | None],
+        watch_debounce_ms: int,
+    ) -> None:
+        """Initialize fetch callbacks and runtime options."""
+        self._watch_debounce_ms = watch_debounce_ms
+        self._fetch_func = fetch_func
+        self._on_fetching = on_fetching
+        self._on_result = on_result
+        self._debounce_task: asyncio.Task | None = None
+        self._fetch_task: asyncio.Task | None = None
+        self._pending_track: TrackMeta | None = None
+
+    async def stop(self) -> None:
+        """Cancel and await all in-flight debounce/fetch tasks."""
+        for task in (self._debounce_task, self._fetch_task):
+            if task is not None:
+                task.cancel()
+        await asyncio.gather(
+            *[t for t in (self._debounce_task, self._fetch_task) if t is not None],
+            return_exceptions=True,
+        )
+        self._debounce_task = None
+        self._fetch_task = None
+
+    def request(self, track: TrackMeta) -> None:
+        """Request lyrics for track with debounce collapsing."""
+        self._pending_track = track
+        if self._debounce_task is not None:
+            # cancel any pending debounce window; the new request supersedes it
+            self._debounce_task.cancel()
+        self._debounce_task = asyncio.create_task(self._debounce_then_fetch())
+
+    async def _debounce_then_fetch(self) -> None:
+        """Wait debounce window then start a fresh fetch task for latest pending track."""
+        await asyncio.sleep(self._watch_debounce_ms / 1000.0)
+        track = self._pending_track
+        if track is None:
+            return
+
+        if self._fetch_task is not None:
+            # abort any in-flight fetch for a previous track before starting the new one
+            self._fetch_task.cancel()
+            await asyncio.gather(self._fetch_task, return_exceptions=True)
+
+        self._fetch_task = asyncio.create_task(self._do_fetch(track))
+
+    async def _do_fetch(self, track: TrackMeta) -> None:
+        """Execute fetch lifecycle callbacks and fetch lyrics for a track."""
+        # callbacks may be plain functions or coroutines; handle both
+        fetching_callback_result = self._on_fetching()
+        if asyncio.iscoroutine(fetching_callback_result):
+            await fetching_callback_result
+
+        lyrics = await self._fetch_func(track)
+
+        result_callback_result = self._on_result(lyrics)
+        if asyncio.iscoroutine(result_callback_result):
+            await result_callback_result
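The collapsing behavior of LyricFetcher.request() can be shown with a toy debouncer (names and timings are illustrative, not lrx_cli code): rapid-fire requests cancel the pending window, so only the last value is fetched.

```python
import asyncio


class Debouncer:
    """Toy debounce: each request restarts the window; only the last survives."""

    def __init__(self, delay_s: float) -> None:
        self.delay_s = delay_s
        self.pending = None
        self.task: asyncio.Task | None = None
        self.fetched: list[str] = []

    def request(self, value: str) -> None:
        self.pending = value
        if self.task is not None:
            self.task.cancel()  # the new request supersedes the pending window
        self.task = asyncio.create_task(self._later())

    async def _later(self) -> None:
        await asyncio.sleep(self.delay_s)
        self.fetched.append(self.pending)


async def main() -> list[str]:
    d = Debouncer(0.05)
    for track in ("track A", "track B", "track C"):
        d.request(track)  # three quick updates within one debounce window
    await asyncio.sleep(0.2)  # let the surviving task fire
    return d.fetched


print(asyncio.run(main()))  # ['track C']
```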
@@ -0,0 +1,402 @@
+"""
+Author: Uyanide pywang0608@foxmail.com
+Date: 2026-04-10 08:14:27
+Description: Player discovery, state monitoring, and active-player selection for watch mode.
+"""
+
+from __future__ import annotations
+
+
+from dataclasses import dataclass
+from typing import Callable, Optional
+import asyncio
+
+from dbus_next.aio.message_bus import MessageBus
+from dbus_next.constants import BusType
+from dbus_next.message import Message
+from loguru import logger
+
+from ..models import TrackMeta
+from ..mpris import pick_active_player
+
+
+def _variant_value(item: object) -> object | None:
+    """Extract .value from DBus variant-like objects when available."""
+    if hasattr(item, "value"):
+        return getattr(item, "value")
+    return None
+
+
+@dataclass(slots=True)
+class PlayerState:
+    """Current observable state for one MPRIS player."""
+
+    bus_name: str
+    status: str
+    track: Optional[TrackMeta]
+
+
+@dataclass(frozen=True, slots=True)
+class PlayerTarget:
+    """Constraint for choosing which players are visible to watch."""
+
+    hint: Optional[str] = None
+
+    @property
+    def normalized_hint(self) -> str:
+        """Return normalized lowercase player hint string."""
+        return (self.hint or "").strip().lower()
+
+    def allows(self, bus_name: str) -> bool:
+        """Return whether given MPRIS bus name passes this target constraint."""
+        normalized_hint = self.normalized_hint
+        if not normalized_hint:
+            return True
+        return _keyword_match(bus_name, normalized_hint)
+
+
+def _keyword_match(text: str, keyword: str) -> bool:
+    """Return True when keyword exists in text, case-insensitively."""
+    return keyword.strip().lower() in text.lower()
+
+
+class PlayerMonitor:
+    """Tracks MPRIS players and forwards signal-driven state updates to session callbacks."""
+
+    _player_blacklist: tuple[str, ...]
+    _on_players_changed: Callable[[], None]
+    _on_seeked: Callable[[str, int], None]
+    _on_playback_status: Callable[[str, str], None]
+    _target: PlayerTarget
+    players: dict[str, PlayerState]
+    _bus: MessageBus | None
+    _props_cache: dict[str, object]
+
+    def __init__(
+        self,
+        on_players_changed: Callable[[], None],
+        on_seeked: Callable[[str, int], None],
+        on_playback_status: Callable[[str, str], None],
+        player_blacklist: tuple[str, ...],
+        target: Optional[PlayerTarget] = None,
+    ) -> None:
+        """Initialize monitor callbacks, runtime options, and player target filter."""
+        self._player_blacklist = player_blacklist
+        self._on_players_changed = on_players_changed
+        self._on_seeked = on_seeked
+        self._on_playback_status = on_playback_status
+        self._target = target or PlayerTarget()
+        self.players: dict[str, PlayerState] = {}
+        self._bus: MessageBus | None = None
+        self._props_cache: dict[str, object] = {}
+
+    async def start(self) -> None:
+        """Start DBus monitoring and populate initial player snapshot."""
+        self._bus = await MessageBus(bus_type=BusType.SESSION).connect()
+        self._bus.add_message_handler(self._on_message)
+        await self._add_match_rules()
+        await self.refresh()
+
+    async def close(self) -> None:
+        """Stop DBus monitoring and close bus connection."""
+        self._props_cache.clear()
+        if self._bus:
+            self._bus.disconnect()
+            self._bus = None
+
+    async def _get_player_props(self, bus_name: str) -> object | None:
+        """Return cached DBus Properties interface for player, creating it if missing."""
+        if not self._bus:
+            return None
+        if bus_name in self._props_cache:
+            return self._props_cache[bus_name]
+
+        try:
+            introspection = await self._bus.introspect(
+                bus_name, "/org/mpris/MediaPlayer2"
+            )
+            proxy = self._bus.get_proxy_object(
+                bus_name, "/org/mpris/MediaPlayer2", introspection
+            )
+            props = proxy.get_interface("org.freedesktop.DBus.Properties")
+            self._props_cache[bus_name] = props
+            return props
+        except Exception as e:
+            logger.debug(f"Failed to prepare DBus props for {bus_name}: {e}")
+            self._props_cache.pop(bus_name, None)
+            return None
+
+    async def _add_match_rules(self) -> None:
+        """Register signal subscriptions needed by monitor."""
+        if not self._bus:
+            return
+        rules = [
+            "type='signal',interface='org.freedesktop.DBus',member='NameOwnerChanged'",
+            "type='signal',interface='org.freedesktop.DBus.Properties',member='PropertiesChanged'",
+            "type='signal',interface='org.mpris.MediaPlayer2.Player',member='Seeked'",
+        ]
+        for rule in rules:
+            try:
+                await self._bus.call(
+                    Message(
+                        destination="org.freedesktop.DBus",
+                        path="/org/freedesktop/DBus",
+                        interface="org.freedesktop.DBus",
+                        member="AddMatch",
+                        signature="s",
+                        body=[rule],
+                    )
+                )
+            except Exception as e:
+                logger.debug(f"Failed to add DBus match rule {rule}: {e}")
+
+    async def _list_mpris_players(self) -> list[str]:
|
||||||
|
"""List visible MPRIS players after applying target filter and optional blacklist.
|
||||||
|
|
||||||
|
The blacklist is skipped when an explicit player hint is active so that
|
||||||
|
``--player`` can target any player regardless of PLAYER_BLACKLIST.
|
||||||
|
"""
|
||||||
|
if not self._bus:
|
||||||
|
return []
|
||||||
|
try:
|
||||||
|
reply = await self._bus.call(
|
||||||
|
Message(
|
||||||
|
destination="org.freedesktop.DBus",
|
||||||
|
path="/org/freedesktop/DBus",
|
||||||
|
interface="org.freedesktop.DBus",
|
||||||
|
member="ListNames",
|
||||||
|
)
|
||||||
|
)
|
||||||
|
if not reply or not reply.body:
|
||||||
|
return []
|
||||||
|
out: list[str] = []
|
||||||
|
hint_active = bool(self._target.normalized_hint)
|
||||||
|
for name in reply.body[0]:
|
||||||
|
if not name.startswith("org.mpris.MediaPlayer2."):
|
||||||
|
continue
|
||||||
|
# --player bypasses the blacklist; only filter when no hint is given
|
||||||
|
if not hint_active and any(
|
||||||
|
x.lower() in name.lower() for x in self._player_blacklist
|
||||||
|
):
|
||||||
|
# logger.info(f"Excluding blacklisted player: {name}")
|
||||||
|
continue
|
||||||
|
if not self._target.allows(name):
|
||||||
|
continue
|
||||||
|
out.append(name)
|
||||||
|
return out
|
||||||
|
except Exception as e:
|
||||||
|
logger.debug(f"Failed to list mpris players: {e}")
|
||||||
|
return []
|
||||||
|
|
||||||
|
async def _fetch_player_state(self, bus_name: str) -> Optional[PlayerState]:
|
||||||
|
"""Read current playback status and metadata from one player service."""
|
||||||
|
props = await self._get_player_props(bus_name)
|
||||||
|
if props is None:
|
||||||
|
return None
|
||||||
|
try:
|
||||||
|
status_var = await getattr(props, "call_get")(
|
||||||
|
"org.mpris.MediaPlayer2.Player", "PlaybackStatus"
|
||||||
|
)
|
||||||
|
metadata_var = await getattr(props, "call_get")(
|
||||||
|
"org.mpris.MediaPlayer2.Player", "Metadata"
|
||||||
|
)
|
||||||
|
status = status_var.value if status_var else "Stopped"
|
||||||
|
track = self._track_from_metadata(
|
||||||
|
metadata_var.value if metadata_var else {}
|
||||||
|
)
|
||||||
|
return PlayerState(bus_name=bus_name, status=status, track=track)
|
||||||
|
except Exception as e:
|
||||||
|
logger.debug(f"Failed to read state for {bus_name}: {e}")
|
||||||
|
self._props_cache.pop(bus_name, None)
|
||||||
|
return None
|
||||||
|
|
||||||
|
def _track_from_metadata(self, metadata: dict[str, object]) -> Optional[TrackMeta]:
|
||||||
|
"""Build TrackMeta object from MPRIS metadata map."""
|
||||||
|
if not metadata:
|
||||||
|
return None
|
||||||
|
trackid = metadata.get("mpris:trackid")
|
||||||
|
if trackid is not None:
|
||||||
|
trackid = _variant_value(trackid)
|
||||||
|
# normalize Spotify track IDs — the raw MPRIS value varies by client version
|
||||||
|
if isinstance(trackid, str) and trackid.startswith("spotify:track:"):
|
||||||
|
trackid = trackid.removeprefix("spotify:track:")
|
||||||
|
elif isinstance(trackid, str) and trackid.startswith("/com/spotify/track/"):
|
||||||
|
trackid = trackid.removeprefix("/com/spotify/track/")
|
||||||
|
elif not isinstance(trackid, str):
|
||||||
|
trackid = None
|
||||||
|
|
||||||
|
length = metadata.get("mpris:length")
|
||||||
|
length_ms = None
|
||||||
|
length_value = _variant_value(length) if length is not None else None
|
||||||
|
if isinstance(length_value, int):
|
||||||
|
# MPRIS reports length in microseconds; convert to milliseconds
|
||||||
|
length_ms = length_value // 1000
|
||||||
|
|
||||||
|
artist = metadata.get("xesam:artist")
|
||||||
|
artist_v = None
|
||||||
|
artist_value = _variant_value(artist) if artist is not None else None
|
||||||
|
if isinstance(artist_value, list) and artist_value:
|
||||||
|
# xesam:artist is a list; take the first entry as primary artist
|
||||||
|
artist_v = artist_value[0]
|
||||||
|
|
||||||
|
title = metadata.get("xesam:title")
|
||||||
|
album = metadata.get("xesam:album")
|
||||||
|
url = metadata.get("xesam:url")
|
||||||
|
|
||||||
|
title_value = _variant_value(title) if title is not None else None
|
||||||
|
album_value = _variant_value(album) if album is not None else None
|
||||||
|
url_value = _variant_value(url) if url is not None else None
|
||||||
|
|
||||||
|
return TrackMeta(
|
||||||
|
trackid=trackid,
|
||||||
|
length=length_ms,
|
||||||
|
album=album_value if isinstance(album_value, str) else None,
|
||||||
|
artist=artist_v,
|
||||||
|
title=title_value if isinstance(title_value, str) else None,
|
||||||
|
url=url_value if isinstance(url_value, str) else None,
|
||||||
|
)
|
||||||
|
|
||||||
|
async def refresh(self) -> None:
|
||||||
|
"""Refresh full player snapshot and notify session when visible set changes."""
|
||||||
|
players = await self._list_mpris_players()
|
||||||
|
updated: dict[str, PlayerState] = {}
|
||||||
|
for bus_name in players:
|
||||||
|
st = await self._fetch_player_state(bus_name)
|
||||||
|
if st is not None:
|
||||||
|
updated[bus_name] = st
|
||||||
|
|
||||||
|
before = set(self.players.keys())
|
||||||
|
after = set(updated.keys())
|
||||||
|
added = sorted(after - before)
|
||||||
|
removed = sorted(before - after)
|
||||||
|
|
||||||
|
for bus_name in removed:
|
||||||
|
self._props_cache.pop(bus_name, None)
|
||||||
|
|
||||||
|
self.players = updated
|
||||||
|
|
||||||
|
if added or removed:
|
||||||
|
logger.info(
|
||||||
|
"MPRIS players updated: added={}, removed={}",
|
||||||
|
added,
|
||||||
|
removed,
|
||||||
|
)
|
||||||
|
|
||||||
|
self._on_players_changed()
|
||||||
|
|
||||||
|
async def _resolve_well_known_name(self, unique_sender: str) -> str | None:
|
||||||
|
"""Map a DBus unique sender (e.g. :1.42) to a tracked MPRIS bus name."""
|
||||||
|
if unique_sender in self.players:
|
||||||
|
# sender is already a well-known name we track (unlikely but fast path)
|
||||||
|
return unique_sender
|
||||||
|
if not self._bus:
|
||||||
|
return None
|
||||||
|
|
||||||
|
# Seeked signals arrive with the unique connection name (:1.N), not the
|
||||||
|
# well-known bus name (org.mpris.MediaPlayer2.X). Ask D-Bus which
|
||||||
|
# well-known name owns that unique name.
|
||||||
|
for bus_name in self.players:
|
||||||
|
try:
|
||||||
|
reply = await self._bus.call(
|
||||||
|
Message(
|
||||||
|
destination="org.freedesktop.DBus",
|
||||||
|
path="/org/freedesktop/DBus",
|
||||||
|
interface="org.freedesktop.DBus",
|
||||||
|
member="GetNameOwner",
|
||||||
|
signature="s",
|
||||||
|
body=[bus_name],
|
||||||
|
)
|
||||||
|
)
|
||||||
|
if reply and reply.body and str(reply.body[0]) == unique_sender:
|
||||||
|
return bus_name
|
||||||
|
except Exception:
|
||||||
|
continue
|
||||||
|
return None
|
||||||
|
|
||||||
|
async def _handle_seeked_signal(self, sender: str, position_ms: int) -> None:
|
||||||
|
"""Route Seeked signal to session using well-known bus name when possible."""
|
||||||
|
bus_name = await self._resolve_well_known_name(sender)
|
||||||
|
if bus_name is not None:
|
||||||
|
self._on_seeked(bus_name, position_ms)
|
||||||
|
return
|
||||||
|
|
||||||
|
# If we cannot map sender reliably, force a state refresh to converge.
|
||||||
|
await self.refresh()
|
||||||
|
|
||||||
|
def _on_message(self, message: Message) -> bool:
|
||||||
|
"""Low-level DBus signal handler for player lifecycle/status/seek events."""
|
||||||
|
try:
|
||||||
|
if (
|
||||||
|
message.interface == "org.freedesktop.DBus"
|
||||||
|
and message.member == "NameOwnerChanged"
|
||||||
|
):
|
||||||
|
# a player appeared or disappeared — rescan the full player list
|
||||||
|
if message.body and str(message.body[0]).startswith(
|
||||||
|
"org.mpris.MediaPlayer2."
|
||||||
|
):
|
||||||
|
asyncio.create_task(self.refresh())
|
||||||
|
return False
|
||||||
|
|
||||||
|
if (
|
||||||
|
message.interface == "org.freedesktop.DBus.Properties"
|
||||||
|
and message.member == "PropertiesChanged"
|
||||||
|
):
|
||||||
|
# message.sender is a unique connection name, not the well-known bus
|
||||||
|
# name, so we can't filter by sender here — match by object path and
|
||||||
|
# interface instead to scope it to MPRIS Player properties only
|
||||||
|
path_ok = message.path == "/org/mpris/MediaPlayer2"
|
||||||
|
iface = message.body[0] if message.body else None
|
||||||
|
if path_ok and iface == "org.mpris.MediaPlayer2.Player":
|
||||||
|
asyncio.create_task(self.refresh())
|
||||||
|
return False
|
||||||
|
|
||||||
|
if (
|
||||||
|
message.interface == "org.mpris.MediaPlayer2.Player"
|
||||||
|
and message.member == "Seeked"
|
||||||
|
):
|
||||||
|
sender = message.sender or ""
|
||||||
|
if sender and message.body:
|
||||||
|
# MPRIS Seeked position is in microseconds; convert to ms
|
||||||
|
position_us = int(message.body[0])
|
||||||
|
asyncio.create_task(
|
||||||
|
self._handle_seeked_signal(
|
||||||
|
sender,
|
||||||
|
max(0, position_us // 1000),
|
||||||
|
)
|
||||||
|
)
|
||||||
|
return False
|
||||||
|
except Exception as e:
|
||||||
|
logger.debug(f"PlayerMonitor signal handling error: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
async def get_position_ms(self, bus_name: str) -> Optional[int]:
|
||||||
|
"""Read player-reported position in milliseconds."""
|
||||||
|
props = await self._get_player_props(bus_name)
|
||||||
|
if props is None:
|
||||||
|
return None
|
||||||
|
try:
|
||||||
|
position_var = await getattr(props, "call_get")(
|
||||||
|
"org.mpris.MediaPlayer2.Player", "Position"
|
||||||
|
)
|
||||||
|
if position_var is None:
|
||||||
|
return None
|
||||||
|
return max(0, int(position_var.value) // 1000)
|
||||||
|
except Exception as e:
|
||||||
|
logger.debug(f"Failed to read position from {bus_name}: {e}")
|
||||||
|
self._props_cache.pop(bus_name, None)
|
||||||
|
return None
|
||||||
|
|
||||||
|
|
||||||
|
class ActivePlayerSelector:
|
||||||
|
@staticmethod
|
||||||
|
def select(
|
||||||
|
players: dict[str, PlayerState],
|
||||||
|
last_active: str | None,
|
||||||
|
preferred_player: str,
|
||||||
|
) -> str | None:
|
||||||
|
"""Select active player by playing state, preferred keyword, and continuity."""
|
||||||
|
if not players:
|
||||||
|
return None
|
||||||
|
all_names = list(players.keys())
|
||||||
|
playing = [name for name, st in players.items() if st.status == "Playing"]
|
||||||
|
return pick_active_player(all_names, playing, preferred_player, last_active)
|
||||||
@@ -0,0 +1,390 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:10:52
Description: Watch orchestration with explicit MVVM role boundaries.

- Model: WatchModel stores domain state.
- ViewModel: WatchViewModel projects model to output-facing state/signature.
- Coordinator: WatchCoordinator wires services and drives async workflows.
"""

from __future__ import annotations

import asyncio
from dataclasses import asdict
from typing import Optional

from loguru import logger

from ..core import LrcManager
from ..lrc import LRCData
from ..models import TrackMeta
from .control import ControlServer
from .fetcher import LyricFetcher
from ..config import AppConfig
from .view import BaseOutput, LyricView, WatchState, WatchStatus
from .player import ActivePlayerSelector, PlayerMonitor, PlayerTarget
from .tracker import PositionTracker


class WatchModel:
    """Model layer that owns watch state and lyric timeline representation."""

    offset_ms: int
    active_player: str | None
    active_track_key: str | None
    status: WatchStatus
    lyrics: LyricView | None

    def __init__(self) -> None:
        self.offset_ms = 0
        self.active_player: str | None = None
        self.active_track_key: str | None = None
        self.status: WatchStatus = WatchStatus.IDLE
        self.lyrics: LyricView | None = None

    def set_lyrics(self, lyrics: LRCData | None) -> None:
        """Update lyrics and rebuild projection once per lyric object change."""
        if lyrics is None:
            self.lyrics = None
            return

        self.lyrics = LyricView.from_lrc(lyrics)

    def state_signature(self, track: TrackMeta | None, position_ms: int) -> tuple:
        """Build dedupe signature from model state and current lyric cursor."""
        # prefer trackid when available; fall back to display name for players
        # that don't expose a stable ID (e.g. some MPRIS implementations)
        track_key = (
            track.trackid
            if track and track.trackid
            else track.display_name()
            if track
            else None
        )

        if self.status != WatchStatus.OK or self.lyrics is None:
            # non-OK states don't have cursor position — discriminate by status alone
            return ("status", self.status, self.active_player, track_key)
        at_ms = position_ms + self.offset_ms
        cursor = self.lyrics.signature_cursor(at_ms)
        return ("lyrics", self.active_player, track_key, cursor)


class WatchViewModel:
    """ViewModel that projects WatchModel into view-consumable snapshots."""

    _model: WatchModel

    def __init__(self, model: WatchModel) -> None:
        self._model = model

    def signature(self, track: TrackMeta | None, position_ms: int) -> tuple:
        """Build dedupe signature for current projected state."""
        return self._model.state_signature(track, position_ms)

    def state(self, track: TrackMeta | None, position_ms: int) -> WatchState:
        """Project model values into immutable WatchState payload."""
        return WatchState(
            track=track,
            lyrics=self._model.lyrics,
            position_ms=position_ms,
            offset_ms=self._model.offset_ms,
            status=self._model.status,
        )


class WatchCoordinator:
    """Application/service orchestration layer for watch runtime."""

    _manager: LrcManager
    _output: BaseOutput
    _config: AppConfig
    _model: WatchModel
    _view_model: WatchViewModel
    _player_hint: str | None
    _last_emit_signature: tuple | None
    _target: PlayerTarget
    _control: ControlServer
    _player_monitor: PlayerMonitor
    _tracker: PositionTracker
    _fetcher: LyricFetcher
    _emit_scheduled: bool
    _calibration_task: asyncio.Task | None

    def __init__(
        self,
        manager: LrcManager,
        output: BaseOutput,
        player_hint: str | None,
        config: AppConfig,
    ) -> None:
        self._manager = manager
        self._output = output
        self._config = config
        self._model = WatchModel()
        self._view_model = WatchViewModel(self._model)
        self._player_hint = player_hint
        self._last_emit_signature: tuple | None = None
        self._emit_scheduled = False
        self._calibration_task = None

        self._target = PlayerTarget(hint=player_hint)

        self._control = ControlServer(socket_path=config.watch.socket_path)
        self._player_monitor = PlayerMonitor(
            on_players_changed=self._on_player_change,
            on_seeked=self._on_seeked,
            on_playback_status=self._on_playback_status,
            player_blacklist=self._config.general.player_blacklist,
            target=self._target,
        )
        self._tracker = PositionTracker(
            poll_position_ms=self._player_monitor.get_position_ms,
            config=self._config,
            on_tick=self._on_tracker_tick,
        )
        self._fetcher = LyricFetcher(
            fetch_func=self._fetch_lyrics,
            on_fetching=self._on_fetching,
            on_result=self._on_lyrics_update,
            watch_debounce_ms=self._config.watch.debounce_ms,
        )

    async def run(self) -> bool:
        """Run watch workflow and return success flag."""
        logger.info(
            "watch session starting (player filter: {})",
            self._player_hint or "<none>",
        )

        if not await self._control.start(self):
            return False
        try:
            await self._player_monitor.start()
            await self._tracker.start()
            self._calibration_task = asyncio.create_task(self._calibration_loop())
            # emit once at startup so outputs don't sit blank until the first event
            self._schedule_emit()
            # block forever; CancelledError from signal handler exits the loop cleanly
            await asyncio.Event().wait()
            return True
        except asyncio.CancelledError:
            return True
        except Exception as exc:
            logger.exception("watch runtime error: {}", exc)
            return False
        finally:
            logger.info("watch session stopping")
            if self._calibration_task is not None:
                self._calibration_task.cancel()
                await asyncio.gather(self._calibration_task, return_exceptions=True)
                self._calibration_task = None
            await self._fetcher.stop()
            await self._tracker.stop()
            await self._player_monitor.close()
            await self._control.stop()

    async def _calibration_loop(self) -> None:
        """Periodically refresh full MPRIS snapshot as fallback calibration."""
        interval = max(0.1, self._config.watch.calibration_interval_s)
        while True:
            await asyncio.sleep(interval)
            try:
                await self._player_monitor.refresh()
            except asyncio.CancelledError:
                raise
            except Exception as exc:
                logger.debug("mpris calibration refresh failed: {}", exc)

    def _active_track(self) -> TrackMeta | None:
        """Return active track metadata from selected player."""
        player = self._player_monitor.players.get(self._model.active_player or "")
        return player.track if player else None

    def _request_fetch_for_active_track(self, reason: str) -> bool:
        """Trigger lyric fetch for active track when needed."""
        track = self._active_track()
        if track is None:
            return False
        if self._model.lyrics is not None:
            # lyrics already loaded — nothing to fetch
            return False
        if self._model.status == WatchStatus.FETCHING:
            # a fetch is already in flight — don't queue another
            return False
        logger.info("fetching lyrics for track ({}): {}", reason, track.display_name())
        self._fetcher.request(track)
        return True

    async def _fetch_lyrics(self, track: TrackMeta) -> Optional[LRCData]:
        """Fetch lyrics in worker thread."""
        result = await asyncio.to_thread(
            self._manager.fetch_for_track,
            track,
            None,
            False,
            False,
        )
        if result and result.lyrics:
            return result.lyrics
        return None

    def _on_player_change(self) -> None:
        """React to monitor player snapshot change."""
        prev_player = self._model.active_player
        prev_track_key = self._model.active_track_key

        selected = ActivePlayerSelector.select(
            self._player_monitor.players,
            self._model.active_player,
            self._config.general.preferred_player,
        )
        self._model.active_player = selected

        if selected != prev_player:
            logger.info(
                "active player changed: {} -> {}",
                prev_player or "<none>",
                selected or "<none>",
            )

        if selected is None:
            self._model.status = WatchStatus.IDLE
            self._model.active_track_key = None
            self._model.set_lyrics(None)
            self._schedule_emit()
            return

        state = self._player_monitor.players.get(selected)
        if state is None:
            self._model.status = WatchStatus.IDLE
            self._model.active_track_key = None
            self._model.set_lyrics(None)
            self._schedule_emit()
            return

        track = state.track
        track_key = (
            track.trackid
            if track and track.trackid
            else track.display_name()
            if track
            else None
        )

        track_changed = track_key != prev_track_key
        player_changed = selected != prev_player
        if track_changed or player_changed:
            # clear stale lyrics immediately so the old track's lines don't flash
            self._model.set_lyrics(None)

        self._model.active_track_key = track_key

        asyncio.create_task(
            self._tracker.set_active_player(
                selected,
                state.status,
                track_key,
            )
        )

        # only fetch on identity change — calibration ticks must not re-trigger fetches
        started_fetch = False
        if track is not None and (player_changed or track_changed):
            started_fetch = self._request_fetch_for_active_track("track-changed")

        # derive status from what actually happened this tick; preserve FETCHING
        # if an in-flight request was started before this snapshot arrived
        if self._model.lyrics is not None:
            self._model.status = WatchStatus.OK
        elif started_fetch:
            self._model.status = WatchStatus.FETCHING
        elif self._model.status != WatchStatus.FETCHING:
            # don't overwrite FETCHING with NO_LYRICS while a request is in flight
            self._model.status = WatchStatus.NO_LYRICS
        self._schedule_emit()

    def _on_seeked(self, bus_name: str, position_ms: int) -> None:
        """Forward seek event to tracker."""
        asyncio.create_task(self._tracker.on_seeked(bus_name, position_ms))

    def _on_playback_status(self, bus_name: str, status: str) -> None:
        """Forward playback status change to position tracker."""
        asyncio.create_task(self._tracker.on_playback_status(bus_name, status))

    def _on_tracker_tick(self) -> None:
        """Emit updates from tracker tick only while lyrics are actively rendering."""
        if self._model.status == WatchStatus.OK and self._output.position_sensitive:
            self._schedule_emit()

    def _schedule_emit(self) -> None:
        """Coalesce frequent events into at most one in-flight emit task."""
        if self._emit_scheduled:
            # a task is already queued; it will pick up the latest model state when it runs
            return
        self._emit_scheduled = True
        asyncio.create_task(self._run_scheduled_emit())

    async def _run_scheduled_emit(self) -> None:
        """Run one coalesced emit and release scheduler gate."""
        try:
            await self._emit_state()
        finally:
            # release the gate even on error so future events can still schedule
            self._emit_scheduled = False

    async def _on_fetching(self) -> None:
        """Mark model as fetching and emit state."""
        self._model.status = WatchStatus.FETCHING
        await self._emit_state()

    async def _on_lyrics_update(self, lyrics: Optional[LRCData]) -> None:
        """Update model with fetched lyrics and emit state."""
        self._model.set_lyrics(lyrics)
        self._model.status = (
            WatchStatus.OK if lyrics is not None else WatchStatus.NO_LYRICS
        )
        logger.info(
            "lyrics update result: {}",
            "found" if lyrics is not None else "not found",
        )
        await self._emit_state()

    async def _emit_state(self) -> None:
        """Emit output state only when semantic signature changes."""
        player = self._player_monitor.players.get(self._model.active_player or "")
        track = player.track if player else None
        # position=0 for non-position-sensitive outputs so the signature is stable
        # across ticks and on_state fires at most once per track+status transition
        position = (
            await self._tracker.get_position_ms()
            if self._output.position_sensitive
            else 0
        )

        signature = self._view_model.signature(track, position)
        if signature == self._last_emit_signature:
            # state hasn't changed semantically — skip redundant render
            return
        self._last_emit_signature = signature
        state = self._view_model.state(track, position)
        await self._output.on_state(state)

    def handle_offset(self, delta: int) -> dict:
        """Apply offset update requested by control channel."""
        self._model.offset_ms += delta
        return {"ok": True, "offset_ms": self._model.offset_ms}

    def handle_status(self) -> dict:
        """Return status payload for control channel."""
        player = self._player_monitor.players.get(self._model.active_player or "")
        track = asdict(player.track) if player and player.track else None
        return {
            "ok": True,
            "offset_ms": self._model.offset_ms,
            "player": self._model.active_player,
            "track": track,
            "position_ms": self._tracker.peek_position_ms(),
            "lyrics_status": self._model.status,
        }
@@ -0,0 +1,156 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:13:35
Description: Playback position tracking utilities for watch mode.
"""

from __future__ import annotations

import asyncio
import time
from typing import Awaitable, Callable, Optional

from ..config import AppConfig


class PositionTracker:
    """Maintains an estimated playback position from seek/status events plus local clock."""

    _config: AppConfig
    _poll_position_ms: Callable[[str], Awaitable[Optional[int]]]
    _active_player: str | None
    _is_playing: bool
    _track_key: str | None
    _position_ms: int
    _last_tick: float
    _fast_task: asyncio.Task | None
    _on_tick: Callable[[], None] | None
    _lock: asyncio.Lock

    def __init__(
        self,
        poll_position_ms: Callable[[str], Awaitable[Optional[int]]],
        config: AppConfig,
        on_tick: Callable[[], None] | None = None,
    ) -> None:
        """Initialize tracker with position polling callback and runtime options."""
        self._config = config
        self._poll_position_ms = poll_position_ms
        self._on_tick = on_tick
        self._active_player: str | None = None
        self._is_playing = False
        self._track_key: str | None = None
        self._position_ms = 0
        self._last_tick = time.monotonic()
        self._fast_task: asyncio.Task | None = None
        self._lock = asyncio.Lock()

    async def start(self) -> None:
        """Start local monotonic position ticking task."""
        self._last_tick = time.monotonic()
        self._fast_task = asyncio.create_task(self._fast_loop())

    async def stop(self) -> None:
        """Stop tracker tasks and await clean cancellation."""
        tasks = [t for t in (self._fast_task,) if t is not None]
        for task in tasks:
            task.cancel()
        if tasks:
            await asyncio.gather(*tasks, return_exceptions=True)
        self._fast_task = None

    async def set_active_player(
        self,
        bus_name: str | None,
        playback_status: str,
        track_key: str | None,
    ) -> None:
        """Switch active source and calibrate position once when entering a new playing track."""
        should_calibrate_now = False
        async with self._lock:
            player_changed = self._active_player != bus_name
            track_changed = self._track_key != track_key
            was_playing = self._is_playing
            self._active_player = bus_name
            self._is_playing = playback_status == "Playing"
            status_changed_to_playing = self._is_playing and not was_playing
            if player_changed or track_changed:
                # reset to 0 so stale position from a previous track doesn't bleed through
                self._position_ms = 0
            # poll MPRIS on any identity change (player, track, or resume) so a paused
            # mid-song player gets its position anchored immediately; calibration-loop
            # ticks are excluded because they pass the same player/track/status
            should_calibrate_now = bool(self._active_player) and (
                player_changed or track_changed or status_changed_to_playing
            )
            self._track_key = track_key
            self._last_tick = time.monotonic()

        if should_calibrate_now and self._active_player:
            await self._calibrate_once(self._active_player)

    async def on_seeked(self, bus_name: str, position_ms: int) -> None:
        """Apply explicit seek position update for active player."""
        async with self._lock:
            if bus_name != self._active_player:
                return
            self._position_ms = max(0, position_ms)
            self._last_tick = time.monotonic()

    async def on_playback_status(self, bus_name: str, playback_status: str) -> None:
        """Update playing state and calibrate once on paused-to-playing transition."""
        should_calibrate_now = False
        async with self._lock:
            if bus_name != self._active_player:
                return
            was_playing = self._is_playing
            self._is_playing = playback_status == "Playing"
            # re-anchor last_tick when resuming so the gap while paused isn't counted
            should_calibrate_now = self._is_playing and not was_playing
            self._last_tick = time.monotonic()

        if should_calibrate_now:
            await self._calibrate_once(bus_name)

    async def _fast_loop(self) -> None:
        """Advance position by monotonic clock while active player is playing."""
        interval = self._config.watch.position_tick_ms / 1000.0
        while True:
            await asyncio.sleep(interval)
            should_notify = False
            async with self._lock:
                now = time.monotonic()
                if self._is_playing and self._active_player:
                    # accumulate elapsed wall-clock time as playback position;
                    # seek events and calibration snapshots correct drift periodically
                    delta_ms = int((now - self._last_tick) * 1000)
                    if delta_ms > 0:
                        self._position_ms += delta_ms
|
||||||
|
should_notify = True
|
||||||
|
# always update last_tick so paused time isn't counted on resume
|
||||||
|
self._last_tick = now
|
||||||
|
|
||||||
|
if should_notify and self._on_tick is not None:
|
||||||
|
self._on_tick()
|
||||||
|
|
||||||
|
async def _calibrate_once(self, bus_name: str) -> None:
|
||||||
|
"""Poll player-reported position once and synchronize local tracker state."""
|
||||||
|
polled = await self._poll_position_ms(bus_name)
|
||||||
|
if polled is None:
|
||||||
|
return
|
||||||
|
async with self._lock:
|
||||||
|
if bus_name != self._active_player:
|
||||||
|
return
|
||||||
|
# Drift correction is signal-assisted; polling is fallback.
|
||||||
|
self._position_ms = max(0, polled)
|
||||||
|
self._last_tick = time.monotonic()
|
||||||
|
|
||||||
|
async def get_position_ms(self) -> int:
|
||||||
|
"""Return current tracked position in milliseconds."""
|
||||||
|
async with self._lock:
|
||||||
|
return max(0, int(self._position_ms))
|
||||||
|
|
||||||
|
def peek_position_ms(self) -> int:
|
||||||
|
"""Return current tracked position without awaiting lock (best-effort snapshot)."""
|
||||||
|
return max(0, int(self._position_ms))
|
||||||
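The tick-and-re-anchor rule in `_fast_loop` above can be sketched in isolation. This is a minimal illustrative sketch, not the project's class: the `PositionSketch` name and the explicit `now` parameter are assumptions made here so the behavior is deterministic to check.

```python
# Minimal sketch of the tracker's tick rule: accumulate wall-clock deltas only
# while playing, but ALWAYS re-anchor last_tick so time spent paused is never
# counted when playback resumes.
class PositionSketch:
    def __init__(self, now: float) -> None:
        self.position_ms = 0
        self.last_tick = now
        self.playing = False

    def tick(self, now: float) -> None:
        if self.playing:
            delta_ms = int((now - self.last_tick) * 1000)
            if delta_ms > 0:
                self.position_ms += delta_ms
        # re-anchor unconditionally, mirroring "always update last_tick"
        self.last_tick = now

p = PositionSketch(now=0.0)
p.tick(now=1.0)        # paused: position stays 0, but last_tick advances
p.playing = True
p.tick(now=1.5)        # playing: +500 ms
print(p.position_ms)   # 500
```

The unconditional re-anchor is the load-bearing part: without it, the first tick after a resume would count the entire paused gap as playback time.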
@@ -0,0 +1,102 @@
"""Output abstraction types for watch mode rendering."""

from __future__ import annotations

from abc import ABC, abstractmethod
from bisect import bisect_right
from dataclasses import dataclass
from enum import Enum
from typing import Optional

from ...lrc import LRCData, LyricLine
from ...models import TrackMeta


class WatchStatus(str, Enum):
    IDLE = "idle"
    FETCHING = "fetching"
    OK = "ok"
    NO_LYRICS = "no_lyrics"


@dataclass(slots=True, frozen=True)
class LyricView:
    """View-ready immutable lyric data projected from one normalized LRC object."""

    normalized: LRCData
    lines: tuple[str, ...]
    timed_line_entries: tuple[tuple[int, int], ...]
    timestamps: tuple[int, ...]

    @staticmethod
    def from_lrc(lyrics: LRCData) -> "LyricView":
        """Build a view projection once from normalized lyrics."""
        normalized = lyrics.normalize()

        lines: list[str] = []
        entries: list[tuple[int, int]] = []

        line_index = 0
        for line in normalized.lines:
            if not isinstance(line, LyricLine):
                # skip metadata/tag lines that carry no renderable text
                continue
            text = line.text
            lines.append(text)
            # use first timestamp; clamp to 0 so bisect always works with non-negative ms
            timestamp = line.line_times_ms[0] if line.line_times_ms else 0
            entries.append((max(0, timestamp), line_index))
            line_index += 1

        # extract timestamps into a flat tuple so bisect_right can binary-search it
        timestamps = tuple(timestamp for timestamp, _ in entries)
        return LyricView(
            normalized=normalized,
            lines=tuple(lines),
            timed_line_entries=tuple(entries),
            timestamps=timestamps,
        )

    def signature_cursor(self, at_ms: int) -> tuple:
        """Build a stable cursor signature for dedupe decisions."""
        if not self.timed_line_entries:
            # untimed lyrics: signature is the full line set; changes only on track change
            return ("plain", self.lines)

        first_ts = self.timed_line_entries[0][0]
        if at_ms < first_ts:
            # playback hasn't reached the first lyric yet; hold until it does
            return ("before_first", first_ts)

        # bisect_right gives the insertion point after equal timestamps, so -1 gives
        # the last line whose timestamp <= at_ms (i.e. the currently active line)
        idx = bisect_right(self.timestamps, at_ms) - 1
        if idx < 0:
            idx = 0

        ts, line_idx = self.timed_line_entries[idx]
        text = self.lines[line_idx] if line_idx < len(self.lines) else ""
        return ("ok", idx, ts, text)


@dataclass(slots=True)
class WatchState:
    """Immutable snapshot payload delivered from session to output implementations."""

    track: Optional[TrackMeta]
    lyrics: Optional[LyricView]
    position_ms: int
    offset_ms: int
    status: WatchStatus


class BaseOutput(ABC):
    # When False, the coordinator passes position=0 for signature computation and
    # skips tracker-tick-driven emits, so on_state fires at most once per
    # track+status transition rather than on every lyric cursor advance.
    position_sensitive: bool = True

    @abstractmethod
    async def on_state(self, state: WatchState) -> None:
        """Render or deliver one watch state frame."""
        ...
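The `bisect_right` cursor rule used by `signature_cursor` above can be checked standalone. A minimal sketch under assumed data: the `active_index` helper and the timestamp list are made up here for illustration, not part of the diff.

```python
from bisect import bisect_right

# bisect_right returns the insertion point AFTER any run of equal timestamps,
# so subtracting 1 selects the last line whose timestamp <= at_ms, i.e. the
# line that should currently be displayed.
timestamps = [0, 1000, 2500, 4000]  # ms, sorted ascending (illustrative data)

def active_index(at_ms: int) -> int:
    idx = bisect_right(timestamps, at_ms) - 1
    return max(idx, 0)  # clamp, matching the `if idx < 0: idx = 0` guard

print(active_index(2500))  # exactly on a timestamp -> that line: 2
print(active_index(2499))  # just before it -> previous line: 1
```

Using `bisect_left` instead would select the previous line at the exact timestamp boundary, which is why the code above consistently uses `bisect_right`.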
@@ -0,0 +1,95 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:15:17
Description: Pipe output implementation for watch mode.
"""

from __future__ import annotations

from bisect import bisect_right
from dataclasses import dataclass
import sys

from . import BaseOutput, WatchState, WatchStatus


@dataclass(slots=True)
class PipeOutput(BaseOutput):
    """Render a fixed lyric context window to stdout for streaming/pipe usage."""

    before: int = 0
    after: int = 0
    no_newline: bool = False

    def _window_size(self) -> int:
        """Return rendered lyric window size."""
        return self.before + 1 + self.after

    def _render_status(self, message: str) -> list[str]:
        """Render centered status line in fixed-size window."""
        lines = [""] * self._window_size()
        lines[self.before] = message
        return lines

    def _render_lyrics(self, state: WatchState) -> list[str]:
        """Render context lines centered on current timed lyric entry."""
        if state.lyrics is None:
            return self._render_status("[no lyrics]")

        all_lines = state.lyrics.lines
        if not all_lines:
            return self._render_status("[no lyrics]")
        entries = state.lyrics.timed_line_entries

        effective_ms = state.position_ms + state.offset_ms
        current_line_idx: int | None
        if entries and effective_ms < entries[0][0]:
            # playback hasn't reached the first lyric yet; treat current slot as empty
            # so the after-window can show upcoming lines without a "current" anchor
            current_line_idx = None
        else:
            if not entries:
                current_line_idx = 0
            else:
                # bisect_right - 1 gives the last entry whose timestamp <= effective_ms
                current_entry_idx = (
                    bisect_right(state.lyrics.timestamps, effective_ms) - 1
                )
                if current_entry_idx < 0:
                    current_entry_idx = 0
                current_line_idx = entries[current_entry_idx][1]

        out: list[str] = []
        for rel in range(-self.before, self.after + 1):
            if current_line_idx is None:
                # before-first-timestamp: before/current slots are empty; after slots
                # show lines starting from index 0 (rel=1 → line 0, rel=2 → line 1, …)
                if rel <= 0:
                    out.append("")
                    continue
                line_idx = rel - 1
            else:
                line_idx = current_line_idx + rel

            if 0 <= line_idx < len(all_lines):
                out.append(all_lines[line_idx])
            else:
                out.append("")

        return out

    async def on_state(self, state: WatchState) -> None:
        """Render and flush one frame for the latest watch state."""
        if state.status == WatchStatus.FETCHING:
            lines = self._render_status("[fetching...]")
        elif state.status == WatchStatus.NO_LYRICS:
            lines = self._render_status("[no lyrics]")
        elif state.status == WatchStatus.IDLE:
            lines = self._render_status("[idle]")
        else:
            lines = self._render_lyrics(state)

        for line in lines:
            # no_newline mode lets callers use \r to overwrite the previous frame in-place
            sys.stdout.write(line + ("\n" if not self.no_newline else ""))
        sys.stdout.flush()
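The fixed window in `_render_lyrics` reduces to a slice-with-padding once the current index is known. A standalone sketch (the `window` helper name and the sample data are illustrative, not from the diff):

```python
# Sketch of PipeOutput's context window: `before` slots above the current
# line, one current slot, `after` slots below; out-of-range slots render as
# empty strings so the frame height is always before + 1 + after.
def window(lines: list[str], current: int, before: int, after: int) -> list[str]:
    out = []
    for rel in range(-before, after + 1):
        idx = current + rel
        out.append(lines[idx] if 0 <= idx < len(lines) else "")
    return out

print(window(["a", "b", "c"], current=1, before=1, after=1))  # ['a', 'b', 'c']
print(window(["a", "b", "c"], current=0, before=1, after=1))  # ['', 'a', 'b']
```

Keeping the frame height constant is what lets pipe consumers overwrite the previous frame line-by-line instead of re-measuring each update.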
@@ -0,0 +1,46 @@
"""
Author: Uyanide pywang0608@foxmail.com
Date: 2026-04-10 08:15:31
Description: Print output implementation for watch mode - one shot per track.
"""

from __future__ import annotations

import sys

from . import BaseOutput, WatchState, WatchStatus


class PrintOutput(BaseOutput):
    """Emit full lyrics to stdout once per track transition, then stay silent.

    Deduplication is delegated to the coordinator via position_sensitive=False:
    the coordinator uses a fixed position for signatures, so on_state fires at
    most once per (status, track_key) transition rather than on every tick.
    """

    # fixed position=0 in signatures → coordinator calls on_state only on
    # track/status transitions, never on lyric cursor advances
    position_sensitive = False

    plain: bool

    def __init__(self, plain: bool = False) -> None:
        self.plain = plain

    async def on_state(self, state: WatchState) -> None:
        if state.status == WatchStatus.FETCHING or state.status == WatchStatus.IDLE:
            return

        if state.status == WatchStatus.NO_LYRICS:
            # emit a blank line as a machine-readable sentinel for "track changed, no lyrics"
            sys.stdout.write("\n")
            sys.stdout.flush()
        elif state.status == WatchStatus.OK and state.lyrics is not None:
            lrc = state.lyrics.normalized
            if self.plain:
                text = lrc.to_plain()
            else:
                text = str(lrc)
            sys.stdout.write(text + "\n")
            sys.stdout.flush()
@@ -0,0 +1,3 @@
from lrx_cli.config import enable_debug

enable_debug()
@@ -0,0 +1,4 @@
{
  "syncedLyrics": "[00:01.00]s1\n[00:02.00]s2",
  "plainLyrics": "p1\np2"
}
@@ -0,0 +1,20 @@
[
  {
    "id": 1,
    "trackName": "My Love",
    "artistName": "Westlife",
    "albumName": "Coast To Coast",
    "duration": 231.847,
    "syncedLyrics": "[00:01.00]hello",
    "plainLyrics": "hello"
  },
  {
    "id": 2,
    "trackName": "My Love (Live)",
    "artistName": "Westlife",
    "albumName": "Live",
    "duration": 262.0,
    "syncedLyrics": "",
    "plainLyrics": "hello"
  }
]
@@ -0,0 +1,28 @@
{
  "message": {
    "body": {
      "macro_calls": {
        "track.richsync.get": {
          "message": {
            "header": {
              "status_code": 200
            },
            "body": {
              "richsync": {
                "richsync_body": "[{\"ts\": 1.2, \"x\": \"hello\"}, {\"ts\": 2.34, \"x\": \"world\"}]"
              }
            }
          }
        },
        "track.subtitles.get": {
          "message": {
            "header": {
              "status_code": 404
            },
            "body": {}
          }
        }
      }
    }
  }
}
@@ -0,0 +1,32 @@
{
  "message": {
    "body": {
      "macro_calls": {
        "track.richsync.get": {
          "message": {
            "header": {
              "status_code": 404
            },
            "body": {}
          }
        },
        "track.subtitles.get": {
          "message": {
            "header": {
              "status_code": 200
            },
            "body": {
              "subtitle_list": [
                {
                  "subtitle": {
                    "subtitle_body": "[{\"text\": \"hello\", \"time\": {\"total\": 1.1}}, {\"text\": \"world\", \"time\": {\"total\": 2.22}}]"
                  }
                }
              ]
            }
          }
        }
      }
    }
  }
}
@@ -0,0 +1,20 @@
{
  "message": {
    "body": {
      "track_list": [
        {
          "track": {
            "commontrack_id": 123,
            "track_length": 232,
            "has_subtitles": 1,
            "has_richsync": 0,
            "track_name": "My Love",
            "artist_name": "Westlife",
            "album_name": "Coast To Coast",
            "instrumental": 0
          }
        }
      ]
    }
  }
}
@@ -0,0 +1,5 @@
{
  "lrc": {
    "lyric": "[00:01.00]line1\n[00:02.00]line2"
  }
}
@@ -0,0 +1,32 @@
{
  "result": {
    "songs": [
      {
        "id": 2080607,
        "name": "My Love",
        "dt": 231941,
        "ar": [
          {
            "name": "Westlife"
          }
        ],
        "al": {
          "name": "Unbreakable"
        }
      },
      {
        "id": 572412968,
        "name": "My Love",
        "dt": 231000,
        "ar": [
          {
            "name": "Westlife"
          }
        ],
        "al": {
          "name": "Pure... Love"
        }
      }
    ]
  }
}
@@ -0,0 +1,6 @@
{
  "code": 0,
  "data": {
    "lyric": "[00:01.00]hello\n[00:02.00]world"
  }
}
@@ -0,0 +1,33 @@
{
  "code": 0,
  "data": {
    "list": [
      {
        "mid": "mid1",
        "interval": 232,
        "name": "My Love",
        "singer": [
          {
            "name": "Westlife"
          }
        ],
        "album": {
          "name": "Coast To Coast"
        }
      },
      {
        "mid": "mid2",
        "interval": 248,
        "name": "My Love (Album Version)",
        "singer": [
          {
            "name": "Little Texas"
          }
        ],
        "album": {
          "name": "Greatest Hits"
        }
      }
    ]
  }
}
@@ -0,0 +1,9 @@
{
  "lyrics": {
    "syncType": "LINE_SYNCED",
    "lines": [
      {"startTimeMs": "1000", "words": "hello"},
      {"startTimeMs": "2500", "words": "world"}
    ]
  }
}
@@ -0,0 +1,9 @@
{
  "lyrics": {
    "syncType": "UNSYNCED",
    "lines": [
      {"startTimeMs": "0", "words": "plain one"},
      {"startTimeMs": "0", "words": "plain two"}
    ]
  }
}
@@ -0,0 +1,18 @@
import pytest

from lrx_cli.config import load_config

_credentials = load_config().credentials

requires_spotify = pytest.mark.skipif(
    not _credentials.spotify_sp_dc,
    reason="requires credentials.spotify_sp_dc in config.toml",
)
requires_qq_music = pytest.mark.skipif(
    not _credentials.qq_music_api_url,
    reason="requires credentials.qq_music_api_url in config.toml",
)
requires_musixmatch_token = pytest.mark.skipif(
    not _credentials.musixmatch_usertoken,
    reason="requires credentials.musixmatch_usertoken in config.toml",
)
@@ -7,6 +7,8 @@ import pytest
 
 from lrx_cli.cache import (
     CacheEngine,
+    SLOT_SYNCED,
+    SLOT_UNSYNCED,
     _generate_key,
 )
 from lrx_cli.config import DURATION_TOLERANCE_MS
@@ -66,6 +68,109 @@ def test_generate_key_raises_when_metadata_missing() -> None:
     )
 
 
+def test_migrate_adds_confidence_version_and_boosts_unsynced(tmp_path: Path) -> None:
+    """Legacy single-row cache is migrated to slot rows.
+
+    Expected behavior:
+    - add positive_kind and confidence_version
+    - boost SUCCESS_UNSYNCED confidence by +10 with cap at 100
+    - keep SUCCESS_SYNCED confidence unchanged
+    """
+    db_path = tmp_path / "legacy-cache.db"
+
+    with sqlite3.connect(db_path) as conn:
+        conn.execute(
+            """
+            CREATE TABLE cache (
+                key TEXT PRIMARY KEY,
+                source TEXT NOT NULL,
+                status TEXT NOT NULL,
+                lyrics TEXT,
+                created_at INTEGER NOT NULL,
+                expires_at INTEGER,
+                artist TEXT,
+                title TEXT,
+                album TEXT,
+                length INTEGER,
+                confidence REAL
+            )
+            """
+        )
+        conn.execute(
+            """
+            INSERT INTO cache
+            (key, source, status, lyrics, created_at, expires_at, artist, title, album, length, confidence)
+            VALUES
+            ('u1', 's1', 'SUCCESS_UNSYNCED', 'u1', 1, NULL, 'A', 'T', 'AL', 180000, 85.0),
+            ('u2', 's2', 'SUCCESS_UNSYNCED', 'u2', 1, NULL, 'A', 'T', 'AL', 180000, 98.0),
+            ('s1', 's3', 'SUCCESS_SYNCED', 's1', 1, NULL, 'A', 'T', 'AL', 180000, 70.0)
+            """
+        )
+        conn.commit()
+
+    CacheEngine(str(db_path))
+
+    with sqlite3.connect(db_path) as conn:
+        cols = {r[1] for r in conn.execute("PRAGMA table_info(cache)").fetchall()}
+        rows = conn.execute(
+            "SELECT key, positive_kind, status, confidence, confidence_version FROM cache ORDER BY key, positive_kind"
+        ).fetchall()
+
+    assert "positive_kind" in cols
+    assert "confidence_version" in cols
+    by_key = {
+        (k, slot): (status, confidence, version)
+        for k, slot, status, confidence, version in rows
+    }
+    assert by_key[("u1", SLOT_UNSYNCED)] == ("SUCCESS_UNSYNCED", 95.0, 1)
+    assert by_key[("u2", SLOT_UNSYNCED)] == ("SUCCESS_UNSYNCED", 100.0, 1)
+    assert by_key[("s1", SLOT_SYNCED)] == ("SUCCESS_SYNCED", 70.0, 1)
+
+
+def test_migrate_negative_row_splits_into_two_slot_rows(tmp_path: Path) -> None:
+    db_path = tmp_path / "legacy-negative.db"
+
+    with sqlite3.connect(db_path) as conn:
+        conn.execute(
+            """
+            CREATE TABLE cache (
+                key TEXT PRIMARY KEY,
+                source TEXT NOT NULL,
+                status TEXT NOT NULL,
+                lyrics TEXT,
+                created_at INTEGER NOT NULL,
+                expires_at INTEGER,
+                artist TEXT,
+                title TEXT,
+                album TEXT,
+                length INTEGER,
+                confidence REAL
+            )
+            """
+        )
+        conn.execute(
+            """
+            INSERT INTO cache
+            (key, source, status, lyrics, created_at, expires_at, artist, title, album, length, confidence)
+            VALUES
+            ('n1', 's1', 'NOT_FOUND', NULL, 1, NULL, 'A', 'T', 'AL', 180000, 0.0)
+            """
+        )
+        conn.commit()
+
+    CacheEngine(str(db_path))
+
+    with sqlite3.connect(db_path) as conn:
+        rows = conn.execute(
+            "SELECT key, positive_kind, status FROM cache ORDER BY positive_kind"
+        ).fetchall()
+
+    assert rows == [
+        ("n1", SLOT_SYNCED, "NOT_FOUND"),
+        ("n1", SLOT_UNSYNCED, "NOT_FOUND"),
+    ]
+
+
 def test_set_and_get_roundtrip_with_ttl(
     monkeypatch: pytest.MonkeyPatch, cache_db: CacheEngine
 ) -> None:
@@ -79,9 +184,10 @@ def test_set_and_get_roundtrip_with_ttl(
         ttl_seconds=120,
     )
 
-    cached = cache_db.get(track, "lrclib")
+    cached_rows = cache_db.get_all(track, "lrclib")
 
-    assert cached is not None
+    assert len(cached_rows) == 1
+    cached = cached_rows[0]
     assert cached.status is CacheStatus.SUCCESS_SYNCED
     assert str(cached.lyrics) == "[00:01.00]line"
     assert cached.source == "lrclib"
@@ -101,12 +207,29 @@ def test_get_expired_entry_returns_none_and_removes_row(
     )
 
     monkeypatch.setattr("lrx_cli.cache.time.time", lambda: 2_000_020)
-    cached = cache_db.get(track, "netease")
+    cached_rows = cache_db.get_all(track, "netease")
 
-    assert cached is None
+    assert cached_rows == []
     assert cache_db.query_all() == []
 
 
+def test_set_negative_without_slot_writes_both_slots(cache_db: CacheEngine) -> None:
+    track = _track()
+    cache_db.set(
+        track, "src", _result(CacheStatus.NOT_FOUND, None, "src"), ttl_seconds=60
+    )
+
+    with sqlite3.connect(cache_db.db_path) as conn:
+        rows = conn.execute(
+            "SELECT positive_kind, status FROM cache ORDER BY positive_kind"
+        ).fetchall()
+
+    assert rows == [
+        (SLOT_SYNCED, CacheStatus.NOT_FOUND.value),
+        (SLOT_UNSYNCED, CacheStatus.NOT_FOUND.value),
+    ]
+
+
 def test_get_backfills_missing_length_when_track_provides_it(
     cache_db: CacheEngine,
 ) -> None:
@@ -130,9 +253,9 @@ def test_get_backfills_missing_length_when_track_provides_it(
         album=None,
         length=200000,
     )
-    cached = cache_db.get(track_with_length, "spotify")
+    cached_rows = cache_db.get_all(track_with_length, "spotify")
 
-    assert cached is not None
+    assert cached_rows
 
     with sqlite3.connect(cache_db.db_path) as conn:
         row = conn.execute("SELECT length FROM cache LIMIT 1").fetchone()
@@ -140,7 +263,7 @@ def test_get_backfills_missing_length_when_track_provides_it(
     assert row[0] == 200000
 
 
-def test_get_best_prefers_higher_confidence_and_skips_negative(
+def test_get_best_prefers_synced_and_skips_negative(
     cache_db: CacheEngine,
 ) -> None:
     track = _track()
@@ -211,20 +334,25 @@ def test_prune_removes_only_expired_rows(
     assert rows[0]["source"] == "s-active"
 
 
-def test_find_best_positive_uses_exact_match_and_prefers_synced(
+def test_find_best_positive_returns_status_specific_results(
     cache_db: CacheEngine,
 ) -> None:
     track = _track(artist="Artist", title="Song", album="Album")
-    cache_db.set(track, "s1", _result(CacheStatus.SUCCESS_UNSYNCED, "u", "s1"))
-    cache_db.set(track, "s2", _result(CacheStatus.SUCCESS_SYNCED, "s", "s2"))
+    cache_db.set(track, "u-high", _result(CacheStatus.SUCCESS_UNSYNCED, "u", "u-high"))
+    cache_db.set(track, "s-low", _result(CacheStatus.SUCCESS_SYNCED, "s", "s-low"))
+    cache_db.update_confidence(track, 95.0, "u-high")
+    cache_db.update_confidence(track, 70.0, "s-low")
 
-    best = cache_db.find_best_positive(track)
-
-    assert best is not None
-    assert best.status is CacheStatus.SUCCESS_SYNCED
-    assert str(best.lyrics) == "s"
-    # find_best_positive always reports cache-search source
-    assert best.source == "cache-search"
+    best_synced = cache_db.find_best_positive(track, CacheStatus.SUCCESS_SYNCED)
+    assert best_synced is not None
+    assert best_synced.status is CacheStatus.SUCCESS_SYNCED
+    assert str(best_synced.lyrics) == "s"
+    assert best_synced.source == "cache-search"
+
+    best_unsynced = cache_db.find_best_positive(track, CacheStatus.SUCCESS_UNSYNCED)
+    assert best_unsynced is not None
+    assert best_unsynced.status is CacheStatus.SUCCESS_UNSYNCED
+    assert str(best_unsynced.lyrics) == "u"
 
 
 def test_search_by_meta_fuzzy_rules_and_duration_sorting(cache_db: CacheEngine) -> None:
@@ -289,7 +417,6 @@ def test_search_by_meta_fuzzy_rules_and_duration_sorting(cache_db: CacheEngine)
     )
 
     rows = cache_db.search_by_meta(
-        artist="B ; A",
         title=" hello world ",
         length=200000,
     )
@@ -297,6 +424,7 @@ def test_search_by_meta_fuzzy_rules_and_duration_sorting(cache_db: CacheEngine)
     sources = [r["source"] for r in rows]
     assert "negative" not in sources
     assert "far-len" not in sources
+    assert "close-unsynced" in sources
     # Sorted by duration diff, then confidence for equal diff.
     assert sources[0] == "seed"
     assert sources[1] == "close-synced"
@@ -315,7 +443,35 @@ def test_update_confidence_targets_specific_source(cache_db: CacheEngine) -> Non
     assert updated == 1
     rows = {r["source"]: r for r in cache_db.query_track(track)}
     assert rows["s1"]["confidence"] == 75.0
-    assert rows["s2"]["confidence"] == 100.0  # unchanged
+    assert rows["s2"]["confidence"] == 100.0  # unchanged default
+
+
+def test_update_confidence_updates_both_slots_for_same_source(
+    cache_db: CacheEngine,
+) -> None:
+    track = _track(artist="A", title="T", album="AL")
+    cache_db.set(
+        track,
+        "src",
+        _result(CacheStatus.SUCCESS_SYNCED, "sync", "src"),
+        positive_kind=SLOT_SYNCED,
+    )
+    cache_db.set(
+        track,
+        "src",
+        _result(CacheStatus.SUCCESS_UNSYNCED, "unsync", "src"),
+        positive_kind=SLOT_UNSYNCED,
+    )
+
+    updated = cache_db.update_confidence(track, 66.0, "src")
+    assert updated == 2
+
+    with sqlite3.connect(cache_db.db_path) as conn:
+        rows = conn.execute(
+            "SELECT positive_kind, confidence FROM cache WHERE source = 'src' ORDER BY positive_kind"
+        ).fetchall()
+
+    assert rows == [(SLOT_SYNCED, 66.0), (SLOT_UNSYNCED, 66.0)]
 
 
 def test_update_confidence_returns_zero_for_missing_source(
@@ -334,6 +490,48 @@ def test_update_confidence_returns_zero_for_empty_track(
     assert cache_db.update_confidence(empty, 50.0, "s1") == 0
 
 
+def test_credential_set_and_get_roundtrip(cache_db: CacheEngine) -> None:
+    cache_db.set_credential("spotify", {"access_token": "tok", "expires_in": 3600})
+    result = cache_db.get_credential("spotify")
+    assert result == {"access_token": "tok", "expires_in": 3600}
+
+
+def test_credential_get_returns_none_on_miss(cache_db: CacheEngine) -> None:
+    assert cache_db.get_credential("nonexistent") is None
+
+
+def test_credential_expires_at_respected(
+    monkeypatch: pytest.MonkeyPatch, cache_db: CacheEngine
+) -> None:
+    # Store with expiry 1000 ms in the future
+    now_ms = 5_000_000_000
+    monkeypatch.setattr("lrx_cli.cache.time.time", lambda: now_ms / 1000)
+    cache_db.set_credential(
+        "musixmatch", {"user_token": "abc"}, expires_at_ms=now_ms + 1000
+    )
+
+    # Still valid
+    assert cache_db.get_credential("musixmatch") == {"user_token": "abc"}
+
+    # Advance past expiry
+    monkeypatch.setattr("lrx_cli.cache.time.time", lambda: (now_ms + 2000) / 1000)
+    assert cache_db.get_credential("musixmatch") is None
+
+
+def test_credential_no_expiry_never_expires(
+    monkeypatch: pytest.MonkeyPatch, cache_db: CacheEngine
+) -> None:
+    cache_db.set_credential("spotify", {"token": "forever"}, expires_at_ms=None)
+    monkeypatch.setattr("lrx_cli.cache.time.time", lambda: 9_999_999_999.0)
+    assert cache_db.get_credential("spotify") == {"token": "forever"}
+
+
+def test_credential_set_overwrites_existing(cache_db: CacheEngine) -> None:
+    cache_db.set_credential("spotify", {"token": "old"})
+    cache_db.set_credential("spotify", {"token": "new"})
+    assert cache_db.get_credential("spotify") == {"token": "new"}
+
+
 def test_query_track_and_stats_return_expected_aggregates(
     cache_db: CacheEngine,
 ) -> None:
@@ -357,3 +555,5 @@ def test_query_track_and_stats_return_expected_aggregates(
     assert stats["expired"] == 0
     assert stats["by_status"][CacheStatus.SUCCESS_SYNCED.value] == 1
     assert stats["by_status"][CacheStatus.SUCCESS_UNSYNCED.value] == 1
+    assert stats["by_slot"][SLOT_SYNCED] == 1
+    assert stats["by_slot"][SLOT_UNSYNCED] == 1
|||||||
@@ -0,0 +1,61 @@
from __future__ import annotations

import pytest

from lrx_cli.config import AppConfig, CredentialConfig, WatchConfig, load_config


def test_missing_file_returns_defaults(tmp_path):
    assert load_config(tmp_path / "nonexistent.toml") == AppConfig()


def test_empty_file_returns_defaults(tmp_path):
    p = tmp_path / "config.toml"
    p.write_text("")
    assert load_config(p) == AppConfig()


def test_partial_section_keeps_other_defaults(tmp_path):
    p = tmp_path / "config.toml"
    p.write_bytes(b"[watch]\ndebounce_ms = 200\n")
    cfg = load_config(p)
    assert cfg.watch.debounce_ms == 200
    assert cfg.watch.calibration_interval_s == WatchConfig().calibration_interval_s


def test_credentials_roundtrip(tmp_path):
    p = tmp_path / "config.toml"
    p.write_bytes(
        b"[credentials]\n"
        b'spotify_sp_dc = "abc"\n'
        b'qq_music_api_url = "http://localhost:3000"\n'
    )
    assert load_config(p).credentials == CredentialConfig(
        spotify_sp_dc="abc", qq_music_api_url="http://localhost:3000"
    )


def test_int_coerced_to_float(tmp_path):
    p = tmp_path / "config.toml"
    p.write_bytes(b"[general]\nhttp_timeout = 5\n")
    assert load_config(p).general.http_timeout == 5.0


def test_unknown_key_raises(tmp_path):
    p = tmp_path / "config.toml"
    p.write_bytes(b"[general]\ntypo_key = 1\n")
    with pytest.raises(ValueError, match="Unknown config keys"):
        load_config(p)


def test_wrong_type_raises(tmp_path):
    p = tmp_path / "config.toml"
    p.write_bytes(b"[watch]\ndebounce_ms = true\n")
    with pytest.raises(ValueError, match="expected int"):
        load_config(p)


def test_app_config_is_frozen():
    cfg = AppConfig()
    with pytest.raises(Exception):
        cfg.general = None  # type: ignore[misc]
@@ -0,0 +1,544 @@
from __future__ import annotations

from dataclasses import replace
import asyncio
import json
from pathlib import Path
from typing import Callable

import httpx
import pytest

from lrx_cli.authenticators import create_authenticators
from lrx_cli.cache import CacheEngine
from lrx_cli.config import AppConfig, load_config
from lrx_cli.core import LrcManager
from lrx_cli.fetchers import FetcherMethodType, create_fetchers
from lrx_cli.fetchers.lrclib import LrclibFetcher, _parse_lrclib_response
from lrx_cli.fetchers.lrclib_search import (
    LrclibSearchFetcher,
    _parse_lrclib_search_results,
)
from lrx_cli.fetchers.musixmatch import (
    MusixmatchFetcher,
    MusixmatchSpotifyFetcher,
    _parse_mxm_macro,
    _parse_mxm_search,
)
from lrx_cli.fetchers.netease import (
    NeteaseFetcher,
    _parse_netease_lyrics,
    _parse_netease_search,
)
from lrx_cli.fetchers.qqmusic import QQMusicFetcher, _parse_qq_lyrics, _parse_qq_search
from lrx_cli.fetchers.spotify import SpotifyFetcher, _parse_spotify_lyrics
from lrx_cli.lrc import LRCData
from lrx_cli.models import CacheStatus, TrackMeta
from tests.marks import requires_musixmatch_token, requires_qq_music, requires_spotify

SAMPLE_TRACK = TrackMeta(
    title="One Last Kiss",
    artist="Hikaru Utada",
    album="One Last Kiss",
    length=252026,
    trackid="5RhWszHMSKzb7KiXk4Ae0M",
    url="https://open.spotify.com/track/5RhWszHMSKzb7KiXk4Ae0M",
)

SAMPLE_TRACK_ALBUM_MODIFIED = replace(SAMPLE_TRACK, album="BADモード")
SAMPLE_TRACK_ARTIST_MODIFIED = replace(SAMPLE_TRACK, artist="宇多田ヒカル")
SAMPLE_TRACK_ALBUM_ARTIST_MODIFIED = replace(
    SAMPLE_TRACK,
    artist="宇多田ヒカル",
    album="BADモード",
)

_FIXTURE_DIR = Path(__file__).parent / "fixtures" / "fetchers"
_NETWORK_TIMEOUT = 20.0

ParserFunc = Callable[[dict], LRCData | None]


@pytest.fixture
def lrc_manager(tmp_path: Path) -> LrcManager:
    return LrcManager(str(tmp_path / "cache.db"), AppConfig())


@pytest.fixture
def cred_lrc_manager(tmp_path: Path) -> LrcManager:
    return LrcManager(str(tmp_path / "cache.db"), load_config())


@pytest.fixture
def fetcher_runtime_anonymous(tmp_path: Path):
    cfg = AppConfig()
    cache = CacheEngine(str(tmp_path / "network-anon-cache.db"))
    authenticators = create_authenticators(cache, cfg)
    fetchers = create_fetchers(cache, authenticators, cfg)
    return fetchers, cfg


@pytest.fixture
def fetcher_runtime_credentialed(tmp_path: Path):
    cfg = load_config()
    cache = CacheEngine(str(tmp_path / "network-cred-cache.db"))
    authenticators = create_authenticators(cache, cfg)
    fetchers = create_fetchers(cache, authenticators, cfg)
    return fetchers, cfg


def _load_fixture(name: str) -> dict | list:
    return json.loads((_FIXTURE_DIR / name).read_text(encoding="utf-8"))


def _assert_shape(actual: object, fixture: object) -> None:
    """Assert actual payload contains fixture structure recursively.

    - dict: all fixture keys must exist with matching nested shape
    - list: actual must contain at least fixture length and each indexed shape must match
    - scalar: runtime type must match fixture type
    """
    if isinstance(fixture, dict):
        assert isinstance(actual, dict)
        for key, value in fixture.items():
            assert key in actual
            _assert_shape(actual[key], value)
        return

    if isinstance(fixture, list):
        assert isinstance(actual, list)
        assert len(actual) >= len(fixture)
        for idx, value in enumerate(fixture):
            _assert_shape(actual[idx], value)
        return

    if fixture is None:
        return

    assert isinstance(actual, type(fixture))


def _fetch_with_method(
    lrc_manager: LrcManager,
    method: FetcherMethodType,
    *,
    bypass_cache: bool = False,
):
    return lrc_manager.fetch_for_track(
        SAMPLE_TRACK,
        force_method=method,
        bypass_cache=bypass_cache,
    )


# Cache-search fetcher behavior


def test_cache_search_no_cache_fails(lrc_manager: LrcManager):
    result = _fetch_with_method(lrc_manager, "cache-search", bypass_cache=False)
    assert result is None


def test_cache_search_exact_hit(lrc_manager: LrcManager):
    expected = "[00:00.01]lyrics"
    lrc_manager.manual_insert(SAMPLE_TRACK, expected)

    result = lrc_manager.fetch_for_track(
        SAMPLE_TRACK,
        force_method="cache-search",
        bypass_cache=False,
    )

    assert result is not None
    assert result.lyrics is not None
    assert result.lyrics.to_text() == expected


@pytest.mark.parametrize(
    "query_track",
    [
        pytest.param(SAMPLE_TRACK_ARTIST_MODIFIED, id="artist_modified"),
        pytest.param(SAMPLE_TRACK_ALBUM_MODIFIED, id="album_modified"),
        pytest.param(SAMPLE_TRACK_ALBUM_ARTIST_MODIFIED, id="album_artist_modified"),
    ],
)
def test_cache_search_fuzzy_hit(lrc_manager: LrcManager, query_track: TrackMeta):
    expected = "[00:00.01]lyrics"
    lrc_manager.manual_insert(SAMPLE_TRACK, expected)

    result = lrc_manager.fetch_for_track(
        query_track,
        force_method="cache-search",
        bypass_cache=False,
    )

    assert result is not None
    assert result.lyrics is not None
    assert result.lyrics.to_text() == expected


def test_cache_search_prefer_better_match(lrc_manager: LrcManager):
    lrc_manager.manual_insert(
        SAMPLE_TRACK_ARTIST_MODIFIED,
        "[00:00.01]artist modified",
    )
    lrc_manager.manual_insert(
        SAMPLE_TRACK_ALBUM_ARTIST_MODIFIED,
        "[00:00.01]artist+album modified",
    )

    result = lrc_manager.fetch_for_track(
        SAMPLE_TRACK,
        force_method="cache-search",
        bypass_cache=False,
    )

    assert result is not None
    assert result.lyrics is not None
    assert result.lyrics.to_text() == "[00:00.01]artist modified"


# API response format for every fetcher


@pytest.mark.network
def test_api_lrclib_response_shape(fetcher_runtime_anonymous):
    fetchers, _cfg = fetcher_runtime_anonymous
    fetcher = fetchers["lrclib"]
    assert isinstance(fetcher, LrclibFetcher)

    async def _run() -> dict:
        async with httpx.AsyncClient(timeout=_NETWORK_TIMEOUT) as client:
            response = await fetcher._api_get(client, SAMPLE_TRACK)
            assert response.status_code == 200
            payload = response.json()
            assert isinstance(payload, dict)
            return payload

    payload = asyncio.run(_run())
    _assert_shape(payload, _load_fixture("lrclib_response.json"))


@pytest.mark.network
def test_api_lrclib_search_response_shape(fetcher_runtime_anonymous):
    fetchers, _cfg = fetcher_runtime_anonymous
    fetcher = fetchers["lrclib-search"]
    assert isinstance(fetcher, LrclibSearchFetcher)

    async def _run() -> list[dict]:
        async with httpx.AsyncClient(timeout=_NETWORK_TIMEOUT) as client:
            items, had_error = await fetcher._api_candidates(client, SAMPLE_TRACK)
            assert had_error is False
            return items

    payload = asyncio.run(_run())
    _assert_shape(payload, _load_fixture("lrclib_search_results.json"))


@pytest.mark.network
def test_api_netease_response_shape(fetcher_runtime_anonymous):
    fetchers, _cfg = fetcher_runtime_anonymous
    fetcher = fetchers["netease"]
    assert isinstance(fetcher, NeteaseFetcher)

    async def _run() -> tuple[dict, dict]:
        async with httpx.AsyncClient(timeout=_NETWORK_TIMEOUT) as client:
            search = await fetcher._api_search_track(client, SAMPLE_TRACK, 5)
            lyric = await fetcher._api_lyric_track(client, SAMPLE_TRACK, 5)
            assert isinstance(search, dict)
            assert isinstance(lyric, dict)
            return search, lyric

    search_payload, lyric_payload = asyncio.run(_run())
    _assert_shape(search_payload, _load_fixture("netease_search.json"))
    _assert_shape(lyric_payload, _load_fixture("netease_lyrics.json"))


@pytest.mark.network
@requires_spotify
def test_api_spotify_response_shape(fetcher_runtime_credentialed):
    fetchers, _cfg = fetcher_runtime_credentialed
    fetcher = fetchers["spotify"]
    assert isinstance(fetcher, SpotifyFetcher)

    async def _run() -> dict:
        payload = await fetcher._api_lyrics(SAMPLE_TRACK)
        assert isinstance(payload, dict)
        return payload

    payload = asyncio.run(_run())
    _assert_shape(payload, _load_fixture("spotify_synced.json"))


@pytest.mark.network
@requires_qq_music
def test_api_qqmusic_response_shape(fetcher_runtime_credentialed):
    fetchers, _cfg = fetcher_runtime_credentialed
    fetcher = fetchers["qqmusic"]
    assert isinstance(fetcher, QQMusicFetcher)

    async def _run() -> tuple[dict, dict]:
        search = await fetcher._api_search(SAMPLE_TRACK, 10)
        lyric = await fetcher._api_lyric_track(SAMPLE_TRACK, 10)
        assert isinstance(search, dict)
        assert isinstance(lyric, dict)
        return search, lyric

    search_payload, lyric_payload = asyncio.run(_run())
    _assert_shape(search_payload, _load_fixture("qq_search.json"))
    _assert_shape(lyric_payload, _load_fixture("qq_lyrics.json"))


@pytest.mark.network
def test_api_musixmatch_anonymous_response_shape(fetcher_runtime_anonymous):
    """Anonymous musixmatch calls must share one cache/auth context in this test."""
    fetchers, _cfg = fetcher_runtime_anonymous
    search_fetcher = fetchers["musixmatch"]
    spotify_fetcher = fetchers["musixmatch-spotify"]
    assert isinstance(search_fetcher, MusixmatchFetcher)
    assert isinstance(spotify_fetcher, MusixmatchSpotifyFetcher)

    async def _run() -> tuple[dict, dict, dict]:
        search = await search_fetcher._api_search_track(SAMPLE_TRACK)
        macro_from_search = await search_fetcher._api_macro_track(SAMPLE_TRACK)
        macro_from_spotify = await spotify_fetcher._api_macro_track(SAMPLE_TRACK)
        assert isinstance(search, dict)
        assert isinstance(macro_from_search, dict)
        assert isinstance(macro_from_spotify, dict)
        return search, macro_from_search, macro_from_spotify

    search_payload, macro_payload, spotify_macro_payload = asyncio.run(_run())
    _assert_shape(search_payload, _load_fixture("musixmatch_search.json"))
    _assert_shape(macro_payload, _load_fixture("musixmatch_macro_richsync.json"))
    _assert_shape(
        spotify_macro_payload, _load_fixture("musixmatch_macro_richsync.json")
    )


@pytest.mark.network
@requires_musixmatch_token
def test_api_musixmatch_token_response_shape(fetcher_runtime_credentialed):
    fetchers, _cfg = fetcher_runtime_credentialed
    search_fetcher = fetchers["musixmatch"]
    spotify_fetcher = fetchers["musixmatch-spotify"]
    assert isinstance(search_fetcher, MusixmatchFetcher)
    assert isinstance(spotify_fetcher, MusixmatchSpotifyFetcher)

    async def _run() -> tuple[dict, dict, dict]:
        search = await search_fetcher._api_search_track(SAMPLE_TRACK)
        macro_from_search = await search_fetcher._api_macro_track(SAMPLE_TRACK)
        macro_from_spotify = await spotify_fetcher._api_macro_track(SAMPLE_TRACK)
        assert isinstance(search, dict)
        assert isinstance(macro_from_search, dict)
        assert isinstance(macro_from_spotify, dict)
        return search, macro_from_search, macro_from_spotify

    search_payload, macro_payload, spotify_macro_payload = asyncio.run(_run())
    _assert_shape(search_payload, _load_fixture("musixmatch_search.json"))
    _assert_shape(macro_payload, _load_fixture("musixmatch_macro_richsync.json"))
    _assert_shape(
        spotify_macro_payload, _load_fixture("musixmatch_macro_richsync.json")
    )


# Parse fixture JSON into real data structures


@pytest.mark.parametrize(
    "fixture_name,parser,expected_status",
    [
        pytest.param(
            "spotify_synced.json",
            _parse_spotify_lyrics,
            "SUCCESS_SYNCED",
            id="spotify-synced",
        ),
        pytest.param(
            "spotify_unsynced.json",
            _parse_spotify_lyrics,
            "SUCCESS_UNSYNCED",
            id="spotify-unsynced",
        ),
    ],
)
def test_parse_spotify_fixture(
    fixture_name: str,
    parser: ParserFunc,
    expected_status: str,
):
    payload = _load_fixture(fixture_name)
    assert isinstance(payload, dict)
    parsed = parser(payload)
    assert parsed is not None
    assert parsed.detect_sync_status().value == expected_status
    if expected_status == "SUCCESS_SYNCED":
        assert parsed.to_text() == "[00:01.00]hello\n[00:02.50]world"
    else:
        assert parsed.to_text() == "[00:00.00]plain one\n[00:00.00]plain two"


def test_parse_qq_search_fixture() -> None:
    payload = _load_fixture("qq_search.json")
    assert isinstance(payload, dict)
    parsed = _parse_qq_search(payload)
    assert len(parsed) == 2

    assert parsed[0].item == "mid1"
    assert parsed[0].title == "My Love"
    assert parsed[0].artist == "Westlife"
    assert parsed[0].duration_ms == 232000.0
    assert parsed[0].album == "Coast To Coast"

    assert parsed[1].item == "mid2"
    assert parsed[1].title == "My Love (Album Version)"
    assert parsed[1].artist == "Little Texas"
    assert parsed[1].duration_ms == 248000.0
    assert parsed[1].album == "Greatest Hits"


def test_parse_qq_lyrics_fixture() -> None:
    payload = _load_fixture("qq_lyrics.json")
    assert isinstance(payload, dict)
    parsed = _parse_qq_lyrics(payload)
    assert parsed is not None
    assert len(parsed) == 2
    assert parsed.detect_sync_status() == CacheStatus.SUCCESS_SYNCED


def test_parse_lrclib_response_fixture() -> None:
    payload = _load_fixture("lrclib_response.json")
    assert isinstance(payload, dict)
    parsed = _parse_lrclib_response(payload)
    assert parsed.synced is not None and parsed.synced.lyrics is not None
    assert parsed.unsynced is not None and parsed.unsynced.lyrics is not None
    assert parsed.synced.status == CacheStatus.SUCCESS_SYNCED
    assert parsed.unsynced.status == CacheStatus.SUCCESS_UNSYNCED
    assert parsed.synced.lyrics.to_text() == "[00:01.00]s1\n[00:02.00]s2"
    assert parsed.unsynced.lyrics.to_text() == "[00:00.00]p1\n[00:00.00]p2"


def test_parse_lrclib_search_results_fixture() -> None:
    payload = _load_fixture("lrclib_search_results.json")
    assert isinstance(payload, list)
    parsed = _parse_lrclib_search_results(payload)
    assert len(parsed) == 2

    assert parsed[0].item.get("id") == 1
    assert parsed[0].duration_ms == 231847.0
    assert parsed[0].is_synced is True
    assert parsed[0].title == "My Love"
    assert parsed[0].artist == "Westlife"
    assert parsed[0].album == "Coast To Coast"

    assert parsed[1].item.get("id") == 2
    assert parsed[1].duration_ms == 262000.0
    assert parsed[1].is_synced is False
    assert parsed[1].title == "My Love (Live)"
    assert parsed[1].artist == "Westlife"
    assert parsed[1].album == "Live"


def test_parse_netease_search_fixture() -> None:
    payload = _load_fixture("netease_search.json")
    assert isinstance(payload, dict)
    parsed = _parse_netease_search(payload)
    assert len(parsed) == 2
    assert parsed[0].item == 2080607
    assert parsed[0].title == "My Love"
    assert parsed[0].artist == "Westlife"
    assert parsed[0].duration_ms == 231941.0
    assert parsed[0].album == "Unbreakable"

    assert parsed[1].item == 572412968
    assert parsed[1].artist == "Westlife"
    assert parsed[1].duration_ms == 231000.0


def test_parse_netease_lyrics_fixture() -> None:
    payload = _load_fixture("netease_lyrics.json")
    assert isinstance(payload, dict)
    parsed = _parse_netease_lyrics(payload)
    assert parsed is not None
    assert len(parsed) == 2
    assert parsed.detect_sync_status() == CacheStatus.SUCCESS_SYNCED
    assert parsed.to_text() == "[00:01.00]line1\n[00:02.00]line2"


def test_parse_musixmatch_search_fixture() -> None:
    payload = _load_fixture("musixmatch_search.json")
    assert isinstance(payload, dict)
    parsed = _parse_mxm_search(payload)
    assert len(parsed) == 1
    assert parsed[0].item == 123
    assert parsed[0].is_synced is True
    assert parsed[0].title == "My Love"
    assert parsed[0].artist == "Westlife"
    assert parsed[0].duration_ms == 232000.0
    assert parsed[0].album == "Coast To Coast"


def test_parse_musixmatch_macro_fixture() -> None:
    payload = _load_fixture("musixmatch_macro_richsync.json")
    assert isinstance(payload, dict)
    parsed = _parse_mxm_macro(payload)
    assert parsed is not None
    assert len(parsed) == 2
    assert parsed.detect_sync_status() == CacheStatus.SUCCESS_SYNCED


def test_parse_musixmatch_macro_subtitle_fallback_fixture() -> None:
    payload = _load_fixture("musixmatch_macro_subtitle.json")
    assert isinstance(payload, dict)
    parsed = _parse_mxm_macro(payload)
    assert parsed is not None
    assert len(parsed) == 2
    assert parsed.detect_sync_status() == CacheStatus.SUCCESS_SYNCED
    assert parsed.to_text() == "[00:01.10]hello\n[00:02.22]world"


# Empty / partial-error response handling


def test_parse_spotify_empty_or_invalid() -> None:
    assert _parse_spotify_lyrics({}) is None
    assert _parse_spotify_lyrics({"lyrics": {"lines": []}}) is None


def test_parse_qq_search_empty_or_error() -> None:
    assert _parse_qq_search({}) == []
    assert _parse_qq_search({"code": 1}) == []
    assert _parse_qq_search({"code": 0, "data": {"list": []}}) == []


def test_parse_qq_lyrics_empty_or_error() -> None:
    assert _parse_qq_lyrics({}) is None
    assert _parse_qq_lyrics({"code": 1}) is None
    assert _parse_qq_lyrics({"code": 0, "data": {"lyric": ""}}) is None


def test_parse_lrclib_response_empty_or_partial() -> None:
    parsed = _parse_lrclib_response({})
    assert parsed.synced is not None
    assert parsed.unsynced is not None
    assert parsed.synced.lyrics is None
    assert parsed.unsynced.lyrics is None

    parsed_partial = _parse_lrclib_response({"syncedLyrics": "[00:01.00]line"})
    assert (
        parsed_partial.synced is not None and parsed_partial.synced.lyrics is not None
    )
    assert parsed_partial.unsynced is not None


def test_parse_netease_empty_or_partial() -> None:
    assert _parse_netease_search({}) == []
    assert _parse_netease_search({"result": {"songs": []}}) == []
    assert _parse_netease_lyrics({}) is None
    assert _parse_netease_lyrics({"lrc": {"lyric": ""}}) is None


def test_parse_musixmatch_empty_or_partial() -> None:
    assert _parse_mxm_search({}) == []
    assert _parse_mxm_search({"message": {"body": {"track_list": []}}}) == []
    assert _parse_mxm_macro({}) is None
    assert _parse_mxm_macro({"message": {"body": []}}) is None
@@ -0,0 +1,123 @@
|
|||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
from lrx_cli.config import AppConfig
|
||||||
|
from lrx_cli.enrichers.audio_tag import AudioTagEnricher
|
||||||
|
from lrx_cli.enrichers.file_name import FileNameEnricher
|
||||||
|
from lrx_cli.models import CacheStatus, TrackMeta
|
||||||
|
from lrx_cli.fetchers.local import LocalFetcher
|
||||||
|
|
||||||
|
_GENERAL = AppConfig().general
|
||||||
|
|
||||||
|
|
||||||
|
def _local_track(path: Path) -> TrackMeta:
|
||||||
|
return TrackMeta(url=f"file://{path}")
|
||||||
|
|
||||||
|
|
||||||
|
def test_local_fetcher_unavailable_for_non_local_track():
|
||||||
|
fetcher = LocalFetcher(_GENERAL)
|
||||||
|
assert not fetcher.is_available(TrackMeta(title="Song", artist="Artist"))
|
||||||
|
|
||||||
|
|
||||||
|
def test_local_fetcher_available_for_local_track(tmp_path):
|
||||||
|
fetcher = LocalFetcher(_GENERAL)
|
||||||
|
assert fetcher.is_available(_local_track(tmp_path / "song.flac"))
|
||||||
|
|
||||||
|
|
||||||
|
def test_local_fetcher_returns_empty_for_non_file_url():
|
||||||
|
fetcher = LocalFetcher(_GENERAL)
|
||||||
|
track = TrackMeta(url="https://example.com/song.mp3")
|
||||||
|
result = asyncio.run(fetcher.fetch(track))
|
||||||
|
assert result.synced is None and result.unsynced is None
|
||||||
|
|
||||||
|
|
||||||
|
def test_local_fetcher_reads_synced_sidecar(tmp_path):
|
||||||
|
audio = tmp_path / "song.flac"
|
||||||
|
lrc = audio.with_suffix(".lrc")
|
||||||
|
lrc.write_text("[00:01.00]Hello\n[00:03.00]World\n")
|
||||||
|
|
||||||
|
fetcher = LocalFetcher(_GENERAL)
|
||||||
|
result = asyncio.run(fetcher.fetch(_local_track(audio)))
|
||||||
|
|
||||||
|
assert result.synced is not None
|
||||||
|
assert result.synced.status == CacheStatus.SUCCESS_SYNCED
|
||||||
|
assert result.synced.source is not None
|
||||||
|
assert "sidecar" in result.synced.source
|
||||||
|
|
||||||
|
|
||||||
|
def test_local_fetcher_reads_unsynced_sidecar(tmp_path):
|
||||||
|
audio = tmp_path / "song.flac"
|
||||||
|
lrc = audio.with_suffix(".lrc")
|
||||||
|
lrc.write_text("Hello\nWorld\n")
|
||||||
|
|
||||||
|
fetcher = LocalFetcher(_GENERAL)
|
||||||
|
result = asyncio.run(fetcher.fetch(_local_track(audio)))
|
||||||
|
|
||||||
|
assert result.unsynced is not None
|
||||||
|
assert result.synced is None
|
||||||
|
|
||||||
|
|
||||||
|
def test_local_fetcher_empty_sidecar_ignored(tmp_path):
|
||||||
|
audio = tmp_path / "song.flac"
|
||||||
|
(audio.with_suffix(".lrc")).write_text(" ")
|
||||||
|
|
||||||
|
fetcher = LocalFetcher(_GENERAL)
|
||||||
|
result = asyncio.run(fetcher.fetch(_local_track(audio)))
|
||||||
|
|
||||||
|
assert result.synced is None and result.unsynced is None
|
||||||
|
|
||||||
|
|
||||||
|
def _enrich(path: str, **existing) -> dict | None:
|
||||||
|
enricher = FileNameEnricher()
|
||||||
|
track = TrackMeta(url=f"file://{path}", **existing)
|
||||||
|
+    return asyncio.run(enricher.enrich(track))
+
+
+def test_filename_enricher_artist_title_split(tmp_path):
+    result = _enrich(str(tmp_path / "Utada Hikaru - First Love.flac"))
+    assert result == {
+        "artist": "Utada Hikaru",
+        "title": "First Love",
+        "album": tmp_path.name,
+    }
+
+
+def test_filename_enricher_track_number_prefix(tmp_path):
+    # "01. Title" — no " - " separator, regex strips leading "01. "
+    result = _enrich(str(tmp_path / "01. First Love.flac"))
+    assert result and result.get("title") == "First Love"
+    assert "artist" not in result
+
+
+def test_filename_enricher_title_only(tmp_path):
+    result = _enrich(str(tmp_path / "First Love.flac"))
+    assert result and result.get("title") == "First Love"
+
+
+def test_filename_enricher_does_not_overwrite_existing_fields(tmp_path):
+    result = _enrich(
+        str(tmp_path / "Artist - Title.flac"),
+        artist="Existing Artist",
+        title="Existing Title",
+    )
+    assert result is None or ("artist" not in result and "title" not in result)
+
+
+def test_filename_enricher_non_local_returns_none():
+    enricher = FileNameEnricher()
+    track = TrackMeta(title="Song", artist="Artist")
+    assert asyncio.run(enricher.enrich(track)) is None
+
+
+def test_audio_tag_enricher_non_local_returns_none():
+    enricher = AudioTagEnricher()
+    track = TrackMeta(title="Song", artist="Artist")
+    assert asyncio.run(enricher.enrich(track)) is None
+
+
+def test_audio_tag_enricher_missing_file_returns_none(tmp_path):
+    enricher = AudioTagEnricher()
+    track = _local_track(tmp_path / "nonexistent.flac")
+    assert asyncio.run(enricher.enrich(track)) is None
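The filename heuristics these tests pin down (split artist/title on " - ", strip a leading track-number prefix, fall back to title-only) can be sketched as a standalone function. This is a minimal re-implementation for illustration only; the function name and regex are assumptions, not the project's actual FileNameEnricher code.

```python
import re
from pathlib import Path


def guess_from_filename(path: str) -> dict:
    """Illustrative filename heuristic matching the behavior the tests describe."""
    stem = Path(path).stem
    # Strip a leading track-number prefix such as "01. " or "07 - ".
    stem = re.sub(r"^\d{1,3}[.\-\s]+", "", stem)
    if " - " in stem:
        artist, title = stem.split(" - ", 1)
        return {"artist": artist.strip(), "title": title.strip()}
    return {"title": stem.strip()}
```

With this sketch, "Utada Hikaru - First Love.flac" yields both fields, while "01. First Love.flac" yields only a title, mirroring the assertions above.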
+295 -54
@@ -4,11 +4,11 @@ from lrx_cli.lrc import LRCData
 from lrx_cli.models import CacheStatus


-def _normalize(text: str) -> str:
+def _reformat(text: str) -> str:
     return str(LRCData(text))


-def test_normalize_tags_supports_all_raw_time_formats() -> None:
+def test_time_tag_formats_are_normalized() -> None:
     raw = "\n".join(
         [
             "[00:01]a",
@@ -19,7 +19,7 @@ def test_normalize_tags_supports_all_raw_time_formats() -> None:
         ]
     )

-    normalized = _normalize(raw)
+    normalized = _reformat(raw)

     assert normalized == "\n".join(
         [
@@ -32,84 +32,59 @@ def test_normalize_tags_supports_all_raw_time_formats() -> None:
     )


-def test_normalize_tags_keeps_non_timed_lines_trimmed_and_unchanged() -> None:
-    raw = " plain line \n\n [ar:Meta Header] "
+def test_non_timed_lines_are_kept_as_lyrics() -> None:
+    raw = " plain line \n\n other line "

-    normalized = _normalize(raw)
+    normalized = _reformat(raw)

-    assert normalized == "plain line\n\n[ar:Meta Header]"
+    assert normalized == "plain line\n\nother line"


-def test_normalize_tags_removes_word_sync_patterns() -> None:
-    raw = (
-        "[00:01.00]<00:01>hello\n"
-        "[00:02.00]<00:02.3>world\n"
-        "[00:03.00]<00:03.45>foo\n"
-        "[00:04.00]<00:04:678>bar\n"
-        "[00:05.00]<1,2,3>baz"
-    )
+def test_word_sync_tags_are_parsed_and_export_controlled() -> None:
+    raw = "[00:01.00]<00:01>he <00:01.50>llo\n[00:02.00]plain"

-    normalized = _normalize(raw)
+    data = LRCData(raw)

-    assert normalized == "\n".join(
-        [
-            "[00:01.00]hello",
-            "[00:02.00]world",
-            "[00:03.00]foo",
-            "[00:04.00]bar",
-            "[00:05.00]baz",
-        ]
-    )
+    assert data.to_text(include_word_sync=False) == "[00:01.00]he llo\n[00:02.00]plain"
+    assert (
+        data.to_text(include_word_sync=True)
+        == "[00:01.00]<00:01.00>he <00:01.50>llo\n[00:02.00]plain"
+    )


-def test_normalize_tags_keeps_midline_timestamps_as_is() -> None:
+def test_midline_line_tags_are_kept_as_plain_text() -> None:
     raw = "[00:01.00]Lyric [00:02.00]line"

-    normalized = _normalize(raw)
+    normalized = _reformat(raw)

     assert normalized == "[00:01.00]Lyric [00:02.00]line"


-def test_normalize_tags_applies_positive_and_negative_offset_per_spec() -> None:
-    positive = _normalize("[offset:+1000]\n[00:10.00]line")
-    negative = _normalize("[offset:-500]\n[00:10.00]line")
-
-    assert positive == "[00:09.00]line"
-    assert negative == "[00:10.50]line"
-
-
-def test_normalize_tags_accepts_leading_spaces_and_tabs_before_tags() -> None:
-    raw = "\t [00:01.2] hello"
+def test_space_between_line_tag_and_lyric_is_consumed() -> None:
+    raw = "[00:01.2] hello"

-    normalized = _normalize(raw)
+    normalized = _reformat(raw)

     assert normalized == "[00:01.20]hello"


-def test_normalize_tags_handles_consecutive_start_tags_with_spaces_between() -> None:
+def test_consecutive_line_sync_tags_with_spaces_are_parsed_as_one_line() -> None:
     raw = "[00:01] [00:02.3] chorus"

-    normalized = _normalize(raw)
+    data = LRCData(raw)
+    assert len(data.lines) == 1

-    assert normalized == "[00:01.00][00:02.30]chorus"
+    assert str(data) == "[00:01.00][00:02.30]chorus"
+    assert data.to_plain() == "chorus\nchorus"


-def test_normalize_tags_preserves_non_leading_raw_like_tags() -> None:
+def test_non_leading_time_like_text_is_plain_lyric() -> None:
     raw = "intro [00:01]line"

-    normalized = _normalize(raw)
+    normalized = _reformat(raw)

     assert normalized == "intro [00:01]line"


-def test_normalize_tags_removes_offset_tag_line_even_without_lyrics() -> None:
-    raw = "[offset:+500]"
-
-    normalized = _normalize(raw)
-
-    assert normalized == ""
-
-
 def test_is_synced_and_detect_sync_status_follow_non_zero_rule() -> None:
     plain_text = "just some lyrics\nwithout tags"
     unsynced_text = "[00:00.00]a\n[00:00.00]b"
@@ -140,7 +115,135 @@ def test_normalize_unsynced_covers_documented_blank_and_tag_rules() -> None:
     )


-def test_to_plain_duplicates_lines_by_leading_repeated_timestamps() -> None:
+def test_normalize_unsynced_preserves_doc_tags_and_middle_blanks() -> None:
+    text = "\n".join(["[ar:Artist]", "", "[00:03.00]line", "[ti:Song]", "", " tail "])
+
+    normalized = LRCData(text).normalize_unsynced()
+
+    assert normalized.tags == {"ar": "Artist", "ti": "Song"}
+    assert str(normalized) == "\n".join(
+        [
+            "[ar:Artist]",
+            "[00:00.00]line",
+            "[ti:Song]",
+            "[00:00.00]",
+            "[00:00.00]tail",
+        ]
+    )
+
+
+def test_normalize_unsynced_strips_word_sync_markup_from_lyric_text() -> None:
+    text = "[00:02.00]<00:01.00>he <00:01.50>llo"
+
+    normalized = str(LRCData(text).normalize_unsynced())
+
+    assert normalized == "[00:00.00]he llo"
+
+
+def test_normalize_unsynced_result_is_always_unsynced() -> None:
+    text = "[00:05.00]a\n[00:10.00]b"
+
+    normalized = LRCData(text).normalize_unsynced()
+
+    assert normalized.is_synced() is False
+    assert normalized.detect_sync_status() is CacheStatus.SUCCESS_UNSYNCED
+
+
+def test_normalize_moves_doc_tags_to_top_and_removes_offset_tag() -> None:
+    text = "\n".join(
+        [
+            "[00:02.00]b",
+            "[ar:Artist]",
+            "[offset:500]",
+            "[00:01.00]a",
+            "[ti:Song]",
+        ]
+    )
+
+    normalized = LRCData(text).to_normalized_text()
+
+    assert normalized == "\n".join(
+        [
+            "[ar:Artist]",
+            "[ti:Song]",
+            "[00:01.50]a",
+            "[00:02.50]b",
+        ]
+    )
+
+
+def test_normalize_expands_multi_time_tags_and_sorts_lyrics() -> None:
+    text = "\n".join(
+        [
+            "[00:03.00]c",
+            "[00:02.00][00:01.00]x",
+        ]
+    )
+
+    normalized = LRCData(text).to_normalized_text()
+
+    assert normalized == "\n".join(["[00:01.00]x", "[00:02.00]x", "[00:03.00]c"])
+
+
+def test_normalize_preserves_input_order_for_equal_timestamps() -> None:
+    text = "\n".join(
+        [
+            "[00:00.00]first",
+            "[00:00.00]second",
+            "[00:00.00]third",
+            "[00:01.00]later",
+        ]
+    )
+
+    normalized = LRCData(text).to_normalized_text()
+
+    assert normalized == "\n".join(
+        ["[00:00.00]first", "[00:00.00]second", "[00:00.00]third", "[00:01.00]later"]
+    )
+
+
+def test_normalize_converts_unsynced_lines_and_removes_word_sync_tags() -> None:
+    text = "\n".join(
+        [
+            "plain",
+            "<00:01.00>he <00:01.50>llo",
+            "[00:02.00]<00:02.20>world",
+            "",
+        ]
+    )
+
+    normalized = LRCData(text).to_normalized_text()
+
+    assert normalized == "\n".join(
+        [
+            "[00:00.00]plain",
+            "[00:00.00]he llo",
+            "[00:02.00]world",
+        ]
+    )
+
+
+def test_to_normalized_text_is_separate_from_plain() -> None:
+    data = LRCData("[offset:500]\n[00:02.00]b\n[00:01.00]a")
+
+    assert data.to_plain() == "a\nb"
+    assert data.to_normalized_text() == "[00:01.50]a\n[00:02.50]b"
+
+
+def test_to_text_default_forces_unsynced_tagging() -> None:
+    data = LRCData("line\nother")
+
+    assert data.to_text() == "[00:00.00]line\n[00:00.00]other"
+
+
+def test_str_is_raw_serializer_while_to_text_converts_unsynced() -> None:
+    data = LRCData("line\nother")
+
+    assert str(data) == "line\nother"
+    assert data.to_text() == "[00:00.00]line\n[00:00.00]other"
+
+
+def test_to_plain_duplicates_lines_for_multi_line_times() -> None:
     text = "\n".join(
         [
             "[00:02.00][00:01.00]hello",
@@ -171,6 +274,21 @@ def test_to_plain_sorts_lines_by_timestamp_across_lines() -> None:
     assert plain == "\n".join(["early", "middle", "late"])


+def test_to_plain_preserves_input_order_for_equal_timestamps() -> None:
+    text = "\n".join(
+        [
+            "[00:00.00]first",
+            "[00:00.00]second",
+            "[00:00.00]third",
+            "[00:01.00]later",
+        ]
+    )
+
+    plain = LRCData(text).to_plain()
+
+    assert plain == "\n".join(["first", "second", "third", "later"])
+
+
 def test_to_plain_deduplicate_collapses_only_consecutive_equals() -> None:
     text = "\n".join(
         [
@@ -188,7 +306,7 @@ def test_to_plain_deduplicate_collapses_only_consecutive_equals() -> None:
     assert plain == "\n".join(["hello", "", "world", "hello"])


-def test_to_plain_fallback_for_non_synced_text_strips_start_tags() -> None:
+def test_to_plain_excludes_doc_tags_and_untagged_lines_in_unsynced_mode() -> None:
     text = "\n".join(["[ar:Artist]", "[00:00.00]only-zero", "plain line"])

     plain = LRCData(text).to_plain()
@@ -196,7 +314,9 @@ def test_to_plain_fallback_for_non_synced_text_strips_start_tags() -> None:
     assert plain == "only-zero\nplain line"


-def test_to_plain_trims_leading_and_trailing_blank_lines() -> None:
+def test_to_plain_outer_blanks_stripped_and_untagged_lines_excluded_in_synced_mode() -> (
+    None
+):
     text = "\n\n[00:01.00]line1\n\n[00:01.00]\n[00:02.00]line2\nline3\n \n"

     plain = LRCData(text).to_plain()
@@ -210,3 +330,124 @@ def test_reformat_pipeline_trims_outer_blanks_and_preserves_inner_blanks() -> No
     normalized = str(LRCData(text))

     assert normalized == "[00:01.00]a\n\n[00:02.00]b"
+
+
+def test_single_doc_tag_line_is_preserved_and_registered() -> None:
+    data = LRCData("[ar:Artist]\n[00:01.00]line")
+
+    assert data.tags == {"ar": "Artist"}
+    assert len(data.lines) == 2
+    assert str(data) == "[ar:Artist]\n[00:01.00]line"
+    assert data.to_plain() == "line"
+
+
+def test_multiple_doc_tags_on_one_line_are_plain_lyrics() -> None:
+    data = LRCData("[ar:Artist][ti:Song]")
+
+    assert data.tags == {}
+    assert len(data.lines) == 1
+    assert data.lines[0].text == "[ar:Artist][ti:Song]"
+
+
+def test_doc_tag_after_lyrics_is_still_recognized_as_doc_tag() -> None:
+    data = LRCData("[00:01.00]line\n[ar:Artist]")
+
+    assert data.tags == {"ar": "Artist"}
+    assert len(data.lines) == 2
+    assert str(data) == "[00:01.00]line\n[ar:Artist]"
+    assert data.to_plain() == "line"
+
+
+def test_unknown_lines_before_lyrics_are_preserved_and_do_not_start_lyrics() -> None:
+    data = LRCData("comment line\n[ar:Artist]\n[00:01.00]line")
+
+    assert data.tags == {"ar": "Artist"}
+    assert len(data.lines) == 3
+    assert str(data) == "comment line\n[ar:Artist]\n[00:01.00]line"
+    assert data.to_plain() == "line"
+
+
+def test_to_plain_excludes_doc_tags_but_keeps_lyrics() -> None:
+    data = LRCData("[ar:Artist]\n[00:01.00]line\n[ti:Song]\nplain")
+
+    assert data.to_plain() == "line"
+
+
+def test_non_space_between_line_tags_stops_tag_parsing() -> None:
+    data = LRCData("[00:01.00]x[00:02.00]tail")
+
+    assert len(data.lines) == 1
+    assert str(data) == "[00:01.00]x[00:02.00]tail"
+    assert data.to_plain() == "x[00:02.00]tail"
+
+
+def test_line_only_time_tag_is_valid_empty_lyric() -> None:
+    data = LRCData("[00:01.00]")
+
+    assert len(data.lines) == 1
+    assert str(data) == "[00:01.00]"
+    assert data.to_plain() == ""
+
+
+def test_word_sync_markup_only_changes_output_when_enabled() -> None:
+    a = LRCData("[00:01.00]<00:00.50>lyric")
+    b = LRCData("[00:01.00]lyric")
+
+    assert a.to_text(include_word_sync=False) == "[00:01.00]lyric"
+    assert b.to_text(include_word_sync=False) == "[00:01.00]lyric"
+    assert a.to_text(include_word_sync=True) == "[00:01.00]<00:00.50>lyric"
+    assert b.to_text(include_word_sync=True) == "[00:01.00]lyric"
+
+
+def test_str_preserves_word_sync_markup() -> None:
+    data = LRCData("[00:01.00]<00:00.50>lyric")
+
+    assert str(data) == "[00:01.00]<00:00.50>lyric"
+
+
+def test_str_preserves_offset_tag_and_does_not_apply_it() -> None:
+    data = LRCData("[offset:500]\n[00:01.00]a")
+
+    assert str(data) == "[offset:500]\n[00:01.00]a"
+    assert data.to_normalized_text() == "[00:01.50]a"
+
+
+def test_str_preserves_doc_tag_order_and_duplicates_exactly() -> None:
+    data = LRCData("[ar:First]\n[ti:Song]\n[ar:Second]\n[00:01.00]line")
+
+    assert str(data) == "[ar:First]\n[ti:Song]\n[ar:Second]\n[00:01.00]line"
+
+
+def test_str_does_not_expand_or_sort_multi_time_lines() -> None:
+    data = LRCData("[00:03.00]c\n[00:02.00][00:01.00]x")
+
+    assert str(data) == "[00:03.00]c\n[00:02.00][00:01.00]x"
+    assert data.to_normalized_text() == "[00:01.00]x\n[00:02.00]x\n[00:03.00]c"
+
+
+def test_str_preserves_plain_text_lines_without_injecting_time_tags() -> None:
+    data = LRCData("plain line\n[ar:Artist]\nother line")
+
+    assert str(data) == "plain line\n[ar:Artist]\nother line"
+    assert data.to_text() == "[00:00.00]plain line\n[ar:Artist]\n[00:00.00]other line"
+
+
+def test_word_sync_line_with_empty_tail_keeps_word_tag_only_when_enabled() -> None:
+    data = LRCData("[00:01.00]<00:02.00>")
+
+    assert data.to_text(include_word_sync=False) == "[00:01.00]"
+    assert data.to_text(include_word_sync=True) == "[00:01.00]<00:02.00>"
+
+
+def test_to_plain_for_doc_only_text_is_empty() -> None:
+    data = LRCData("[ar:Artist]\n[ti:Song]")
+
+    assert data.to_plain() == ""
+
+
+def test_duplicate_doc_tag_key_last_value_wins_but_lines_are_kept() -> None:
+    data = LRCData("[ar:First]\n[ar:Second]\n[00:01.00]line")
+
+    assert data.tags == {"ar": "Second"}
+    assert len(data.lines) == 3
+    assert str(data).startswith("[ar:First]\n[ar:Second]\n")
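The time-tag handling these tests describe (accepting `[m:ss]`, `[mm:ss.x]`, `[mm:ss.xx]`, and `[mm:ss:xxx]` inputs and re-emitting the canonical `[mm:ss.xx]` form) can be sketched with a small regex-based normalizer. This is an illustrative helper under assumed rules, not the library's own LRCData parser.

```python
import re

# Accepts [m:ss], [mm:ss.x], [mm:ss.xx], [mm:ss.xxx], and a ":"-separated
# fractional part, per the raw formats exercised in the tests above.
_TAG = re.compile(r"\[(\d{1,3}):(\d{1,2})(?:[.:](\d{1,3}))?\]")


def normalize_tag(tag: str) -> str:
    """Re-emit a line-time tag in canonical [mm:ss.xx] form (illustrative)."""
    m = _TAG.fullmatch(tag)
    if m is None:
        raise ValueError(f"not a time tag: {tag!r}")
    minutes, seconds = int(m.group(1)), int(m.group(2))
    # Pad/truncate the fractional part to milliseconds, then keep centiseconds.
    millis = int((m.group(3) or "0").ljust(3, "0")[:3])
    return f"[{minutes:02d}:{seconds:02d}.{millis // 10:02d}]"
```

Under this rule, `[00:01]` becomes `[00:01.00]` and `[00:02.3]` becomes `[00:02.30]`, matching the normalized output asserted above.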
|
|||||||
+179
-20
@@ -4,8 +4,9 @@ import asyncio
|
|||||||
from unittest.mock import patch
|
from unittest.mock import patch
|
||||||
|
|
||||||
from lrx_cli.config import HIGH_CONFIDENCE
|
from lrx_cli.config import HIGH_CONFIDENCE
|
||||||
|
from lrx_cli.cache import SLOT_UNSYNCED
|
||||||
from lrx_cli.core import LrcManager
|
from lrx_cli.core import LrcManager
|
||||||
from lrx_cli.fetchers.base import BaseFetcher
|
from lrx_cli.fetchers.base import BaseFetcher, FetchResult
|
||||||
from lrx_cli.lrc import LRCData
|
from lrx_cli.lrc import LRCData
|
||||||
from lrx_cli.models import CacheStatus, LyricResult, TrackMeta
|
from lrx_cli.models import CacheStatus, LyricResult, TrackMeta
|
||||||
|
|
||||||
@@ -41,8 +42,15 @@ def _not_found() -> LyricResult:
|
|||||||
return LyricResult(status=CacheStatus.NOT_FOUND)
|
return LyricResult(status=CacheStatus.NOT_FOUND)
|
||||||
|
|
||||||
|
|
||||||
|
def _fr(
|
||||||
|
synced: LyricResult | None = None,
|
||||||
|
unsynced: LyricResult | None = None,
|
||||||
|
) -> FetchResult:
|
||||||
|
return FetchResult(synced=synced, unsynced=unsynced)
|
||||||
|
|
||||||
|
|
||||||
class MockFetcher(BaseFetcher):
|
class MockFetcher(BaseFetcher):
|
||||||
def __init__(self, name: str, result: LyricResult | None, delay: float = 0.0):
|
def __init__(self, name: str, result: FetchResult, delay: float = 0.0):
|
||||||
self._name = name
|
self._name = name
|
||||||
self._result = result
|
self._result = result
|
||||||
self._delay = delay
|
self._delay = delay
|
||||||
@@ -56,9 +64,7 @@ class MockFetcher(BaseFetcher):
|
|||||||
def is_available(self, track: TrackMeta) -> bool:
|
def is_available(self, track: TrackMeta) -> bool:
|
||||||
return True
|
return True
|
||||||
|
|
||||||
async def fetch(
|
async def fetch(self, track: TrackMeta, bypass_cache: bool = False) -> FetchResult:
|
||||||
self, track: TrackMeta, bypass_cache: bool = False
|
|
||||||
) -> LyricResult | None:
|
|
||||||
self.called = True
|
self.called = True
|
||||||
try:
|
try:
|
||||||
if self._delay:
|
if self._delay:
|
||||||
@@ -78,8 +84,8 @@ def make_manager(tmp_path) -> LrcManager:
|
|||||||
|
|
||||||
def test_unsynced_does_not_stop_next_group(tmp_path):
|
def test_unsynced_does_not_stop_next_group(tmp_path):
|
||||||
"""Unsynced result should not stop the pipeline — next group must still run."""
|
"""Unsynced result should not stop the pipeline — next group must still run."""
|
||||||
a = MockFetcher("a", _unsynced("a"))
|
a = MockFetcher("a", _fr(unsynced=_unsynced("a")))
|
||||||
b = MockFetcher("b", _synced("b"))
|
b = MockFetcher("b", _fr(synced=_synced("b")))
|
||||||
manager = make_manager(tmp_path)
|
manager = make_manager(tmp_path)
|
||||||
with patch("lrx_cli.core.build_plan", return_value=[[a], [b]]):
|
with patch("lrx_cli.core.build_plan", return_value=[[a], [b]]):
|
||||||
result = manager.fetch_for_track(_track())
|
result = manager.fetch_for_track(_track())
|
||||||
@@ -90,8 +96,8 @@ def test_unsynced_does_not_stop_next_group(tmp_path):
|
|||||||
|
|
||||||
def test_trusted_synced_stops_next_group(tmp_path):
|
def test_trusted_synced_stops_next_group(tmp_path):
|
||||||
"""Trusted synced from group1 must prevent group2 from running."""
|
"""Trusted synced from group1 must prevent group2 from running."""
|
||||||
a = MockFetcher("a", _synced("a"))
|
a = MockFetcher("a", _fr(synced=_synced("a")))
|
||||||
b = MockFetcher("b", _synced("b"))
|
b = MockFetcher("b", _fr(synced=_synced("b")))
|
||||||
manager = make_manager(tmp_path)
|
manager = make_manager(tmp_path)
|
||||||
with patch("lrx_cli.core.build_plan", return_value=[[a], [b]]):
|
with patch("lrx_cli.core.build_plan", return_value=[[a], [b]]):
|
||||||
result = manager.fetch_for_track(_track())
|
result = manager.fetch_for_track(_track())
|
||||||
@@ -102,8 +108,8 @@ def test_trusted_synced_stops_next_group(tmp_path):
|
|||||||
|
|
||||||
def test_negative_continues_next_group(tmp_path):
|
def test_negative_continues_next_group(tmp_path):
|
||||||
"""NOT_FOUND from group1 must cause group2 to be tried."""
|
"""NOT_FOUND from group1 must cause group2 to be tried."""
|
||||||
a = MockFetcher("a", _not_found())
|
a = MockFetcher("a", _fr(synced=_not_found(), unsynced=_not_found()))
|
||||||
b = MockFetcher("b", _synced("b"))
|
b = MockFetcher("b", _fr(synced=_synced("b")))
|
||||||
manager = make_manager(tmp_path)
|
manager = make_manager(tmp_path)
|
||||||
with patch("lrx_cli.core.build_plan", return_value=[[a], [b]]):
|
with patch("lrx_cli.core.build_plan", return_value=[[a], [b]]):
|
||||||
result = manager.fetch_for_track(_track())
|
result = manager.fetch_for_track(_track())
|
||||||
@@ -119,8 +125,8 @@ def test_negative_continues_next_group(tmp_path):
|
|||||||
def test_trusted_synced_cancels_sibling(tmp_path):
|
def test_trusted_synced_cancels_sibling(tmp_path):
|
||||||
"""When a fast fetcher returns trusted synced, the slow sibling must be cancelled.
|
"""When a fast fetcher returns trusted synced, the slow sibling must be cancelled.
|
||||||
If cancellation is broken this test will block for 10 seconds."""
|
If cancellation is broken this test will block for 10 seconds."""
|
||||||
fast = MockFetcher("fast", _synced("fast"))
|
fast = MockFetcher("fast", _fr(synced=_synced("fast")))
|
||||||
slow = MockFetcher("slow", _synced("slow"), delay=10.0)
|
slow = MockFetcher("slow", _fr(synced=_synced("slow")), delay=10.0)
|
||||||
manager = make_manager(tmp_path)
|
manager = make_manager(tmp_path)
|
||||||
with patch("lrx_cli.core.build_plan", return_value=[[fast, slow]]):
|
with patch("lrx_cli.core.build_plan", return_value=[[fast, slow]]):
|
||||||
result = manager.fetch_for_track(_track())
|
result = manager.fetch_for_track(_track())
|
||||||
@@ -131,23 +137,73 @@ def test_trusted_synced_cancels_sibling(tmp_path):
|
|||||||
assert result.source == "fast"
|
assert result.source == "fast"
|
||||||
|
|
||||||
|
|
||||||
def test_best_confidence_within_group(tmp_path):
|
def test_allow_unsynced_true_picks_highest_confidence_unsynced(tmp_path):
|
||||||
"""When no trusted synced result, highest-confidence result from group is returned."""
|
"""When allow_unsynced=True and no trusted synced result, highest-confidence unsynced is returned."""
|
||||||
low = MockFetcher("low", _unsynced("low", confidence=40.0))
|
low = MockFetcher("low", _fr(unsynced=_unsynced("low", confidence=40.0)))
|
||||||
high = MockFetcher("high", _unsynced("high", confidence=70.0))
|
high = MockFetcher("high", _fr(unsynced=_unsynced("high", confidence=70.0)))
|
||||||
manager = make_manager(tmp_path)
|
manager = make_manager(tmp_path)
|
||||||
with patch("lrx_cli.core.build_plan", return_value=[[low, high]]):
|
with patch("lrx_cli.core.build_plan", return_value=[[low, high]]):
|
||||||
result = manager.fetch_for_track(_track())
|
result = manager.fetch_for_track(_track(), allow_unsynced=True)
|
||||||
assert result is not None
|
assert result is not None
|
||||||
assert result.source == "high"
|
assert result.source == "high"
|
||||||
|
|
||||||
|
|
||||||
|
def test_equal_confidence_prefers_synced_when_unsynced_allowed(tmp_path):
|
||||||
|
"""Tie on confidence should still prefer synced over unsynced."""
|
||||||
|
dual = MockFetcher(
|
||||||
|
"dual",
|
||||||
|
_fr(
|
||||||
|
synced=_synced("dual", confidence=70.0),
|
||||||
|
unsynced=_unsynced("dual", confidence=70.0),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
manager = make_manager(tmp_path)
|
||||||
|
with patch("lrx_cli.core.build_plan", return_value=[[dual]]):
|
||||||
|
result = manager.fetch_for_track(_track(), allow_unsynced=True)
|
||||||
|
assert result is not None
|
||||||
|
assert result.status == CacheStatus.SUCCESS_SYNCED
|
||||||
|
|
||||||
|
|
||||||
|
def test_unsynced_only_returns_none_when_not_allowed(tmp_path):
|
||||||
|
"""When allow_unsynced=False, unsynced-only pipeline result must be rejected."""
|
||||||
|
only_unsynced = MockFetcher(
|
||||||
|
"u",
|
||||||
|
_fr(unsynced=_unsynced("u", confidence=95.0)),
|
||||||
|
)
|
||||||
|
manager = make_manager(tmp_path)
|
||||||
|
with patch("lrx_cli.core.build_plan", return_value=[[only_unsynced]]):
|
||||||
|
result = manager.fetch_for_track(_track(), allow_unsynced=False)
|
||||||
|
assert result is None
|
||||||
|
|
||||||
|
|
||||||
|
def test_allow_unsynced_flag_controls_return_type(tmp_path):
|
||||||
|
"""With both slots available, allow_unsynced controls whether unsynced can be returned."""
|
||||||
|
dual = MockFetcher(
|
||||||
|
"dual",
|
||||||
|
_fr(
|
||||||
|
synced=_synced("dual", confidence=55.0),
|
||||||
|
unsynced=_unsynced("dual", confidence=95.0),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
manager = make_manager(tmp_path)
|
||||||
|
|
||||||
|
with patch("lrx_cli.core.build_plan", return_value=[[dual]]):
|
||||||
|
synced_only = manager.fetch_for_track(_track(), allow_unsynced=False)
|
||||||
|
assert synced_only is not None
|
||||||
|
assert synced_only.status == CacheStatus.SUCCESS_SYNCED
|
||||||
|
|
||||||
|
with patch("lrx_cli.core.build_plan", return_value=[[dual]]):
|
||||||
|
allow_unsynced = manager.fetch_for_track(_track(), allow_unsynced=True)
|
||||||
|
assert allow_unsynced is not None
|
||||||
|
assert allow_unsynced.status == CacheStatus.SUCCESS_UNSYNCED
|
||||||
|
|
||||||
|
|
||||||
# Cache interaction
|
# Cache interaction
|
||||||
|
|
||||||
|
|
||||||
def test_cache_negative_skips_fetch(tmp_path):
|
def test_cache_negative_skips_fetch(tmp_path):
|
||||||
"""A cached NOT_FOUND entry must prevent the fetcher from being called."""
|
"""A cached NOT_FOUND entry must prevent the fetcher from being called."""
|
||||||
fetcher = MockFetcher("src", _synced("src"))
|
fetcher = MockFetcher("src", _fr(synced=_synced("src")))
|
||||||
manager = make_manager(tmp_path)
|
manager = make_manager(tmp_path)
|
||||||
track = _track()
|
track = _track()
|
||||||
manager.cache.set(track, "src", _not_found(), ttl_seconds=3600)
|
manager.cache.set(track, "src", _not_found(), ttl_seconds=3600)
|
||||||
@@ -159,7 +215,7 @@ def test_cache_negative_skips_fetch(tmp_path):
|
|||||||
|
|
||||||
def test_cache_trusted_synced_no_fetch(tmp_path):
|
def test_cache_trusted_synced_no_fetch(tmp_path):
|
||||||
"""A trusted synced cache hit must be returned without calling the fetcher."""
|
"""A trusted synced cache hit must be returned without calling the fetcher."""
|
||||||
fetcher = MockFetcher("src", None)
|
fetcher = MockFetcher("src", _fr())
|
||||||
manager = make_manager(tmp_path)
|
manager = make_manager(tmp_path)
|
||||||
track = _track()
|
track = _track()
|
||||||
manager.cache.set(track, "src", _synced("src"), ttl_seconds=3600)
|
manager.cache.set(track, "src", _synced("src"), ttl_seconds=3600)
|
||||||
@@ -168,3 +224,106 @@ def test_cache_trusted_synced_no_fetch(tmp_path):
|
|||||||
assert not fetcher.called
|
assert not fetcher.called
|
||||||
assert result is not None
|
assert result is not None
|
||||||
assert result.status == CacheStatus.SUCCESS_SYNCED
|
assert result.status == CacheStatus.SUCCESS_SYNCED
|
||||||
|
|
||||||
|
|
||||||
|
def test_cached_slots_support_strategy_switch_without_refetch(
|
||||||
|
tmp_path,
|
||||||
|
):
|
||||||
|
"""When both slots are cached, strategy switch should reuse cache without re-fetch."""
|
||||||
|
fetcher = MockFetcher(
|
||||||
|
"src",
|
||||||
|
_fr(
|
||||||
|
synced=_synced("src", confidence=85.0),
|
||||||
|
unsynced=_unsynced("src", confidence=95.0),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
manager = make_manager(tmp_path)
|
||||||
|
track = _track()
|
||||||
|
|
||||||
|
# First request: permissive strategy, unsynced wins and is cached.
|
||||||
|
with patch("lrx_cli.core.build_plan", return_value=[[fetcher]]):
|
||||||
|
first = manager.fetch_for_track(track, allow_unsynced=True)
|
||||||
|
assert first is not None
|
||||||
|
assert first.status == CacheStatus.SUCCESS_UNSYNCED
|
||||||
|
|
||||||
|
fetcher.called = False
|
||||||
|
|
||||||
|
# Second request: stricter strategy should use synced cache slot directly.
|
||||||
|
with patch("lrx_cli.core.build_plan", return_value=[[fetcher]]):
|
||||||
|
second = manager.fetch_for_track(track, allow_unsynced=False)
|
||||||
|
|
||||||
|
assert not fetcher.called
|
||||||
|
assert second is not None
|
||||||
|
assert second.status == CacheStatus.SUCCESS_SYNCED
|
||||||
|
|
||||||
|
|
||||||
|
def test_unsynced_cache_only_still_fetches_when_unsynced_disallowed(tmp_path):
|
||||||
|
"""If only unsynced cache slot exists, allow_unsynced=False must still fetch synced."""
|
||||||
|
fetcher = MockFetcher("src", _fr(synced=_synced("src", confidence=88.0)))
|
||||||
|
manager = make_manager(tmp_path)
|
||||||
|
track = _track()
|
||||||
|
|
||||||
|
manager.cache.set(
|
||||||
|
        track,
        "src",
        _unsynced("src", confidence=95.0),
        ttl_seconds=3600,
        positive_kind=SLOT_UNSYNCED,
    )

    with patch("lrx_cli.core.build_plan", return_value=[[fetcher]]):
        result = manager.fetch_for_track(track, allow_unsynced=False)

    assert fetcher.called
    assert result is not None
    assert result.status == CacheStatus.SUCCESS_SYNCED


# manual_insert


def test_manual_insert_synced_stored_with_correct_status(tmp_path):
    manager = make_manager(tmp_path)
    manager.manual_insert(_track(), "[00:01.00]Hello\n[00:03.00]World\n")

    rows = manager.cache.query_track(_track())
    assert any(r["status"] == CacheStatus.SUCCESS_SYNCED.value for r in rows)


def test_manual_insert_unsynced_stored_with_correct_status(tmp_path):
    manager = make_manager(tmp_path)
    manager.manual_insert(_track(), "Hello\nWorld\n")

    rows = manager.cache.query_track(_track())
    assert any(r["status"] == CacheStatus.SUCCESS_UNSYNCED.value for r in rows)


def test_manual_insert_source_and_ttl(tmp_path):
    manager = make_manager(tmp_path)
    manager.manual_insert(_track(), "[00:01.00]line\n")

    rows = manager.cache.query_track(_track())
    assert all(r["source"] == "manual" for r in rows)
    assert all(r["expires_at"] is None for r in rows)


def test_manual_insert_overwrites_previous_entry(tmp_path):
    manager = make_manager(tmp_path)
    track = _track()
    manager.manual_insert(track, "[00:01.00]old\n")
    manager.manual_insert(track, "[00:01.00]new\n")

    best = manager.cache.get_best(track, ["manual"])
    assert best is not None
    assert str(best.lyrics) == "[00:01.00]new"


def test_manual_insert_is_returned_by_fetch(tmp_path):
    manager = make_manager(tmp_path)
    track = _track()
    manager.manual_insert(track, "[00:01.00]cached\n")

    result = manager.fetch_for_track(track)
    assert result is not None
    assert result.lyrics is not None
    assert str(result.lyrics) == "[00:01.00]cached"
+7 -22
@@ -74,14 +74,13 @@ def test_score_missing_one_side_gives_zero_for_field() -> None:
     assert score == 10.0


-def test_score_synced_bonus() -> None:
-    """Synced adds 10 points."""
+def test_synced_state_does_not_affect_score() -> None:
     base = SearchCandidate(item="x", title="My Love", is_synced=False)
     synced = SearchCandidate(item="x", title="My Love", is_synced=True)
     diff = _score_candidate(synced, "My Love", None, None, None) - _score_candidate(
         base, "My Love", None, None, None
     )
-    assert diff == 10.0
+    assert diff == 0.0


 def test_score_duration_linear_decay() -> None:
@@ -95,11 +94,11 @@ def test_score_duration_linear_decay() -> None:
     at_tol = SearchCandidate(item="x", duration_ms=232000.0 + 3000.0)
     score_edge = _score_candidate(at_tol, None, None, None, 232000)

-    # Only duration is comparable → rescaled to fill 0-90
-    # exact=90, half=45, edge=0
-    assert score_exact == 90.0
-    assert score_half == 45.0
-    assert score_edge == 0.0
+    # Only duration is comparable → metadata spans 0-90, plus a constant baseline +10
+    # exact=100, half=55, edge=10
+    assert score_exact == 100.0
+    assert score_half == 55.0
+    assert score_edge == 10.0


 def test_duration_hard_filter_rejects_all_mismatched() -> None:
@@ -333,20 +332,6 @@ def test_lrclib_picks_exact_album_match() -> None:
     assert score >= MIN_CONFIDENCE


-def test_lrclib_rejects_wrong_title() -> None:
-    """'Hello My Love' should not beat 'My Love' entries."""
-    candidates = _lrclib_candidates()
-    best, _ = select_best(
-        candidates,
-        _REF_LENGTH,
-        title=_REF_TITLE,
-        artist=_REF_ARTIST,
-        album=_REF_ALBUM,
-    )
-    assert best is not None
-    assert best["trackName"] != "Hello My Love"
-
-
 def test_lrclib_noisy_picks_westlife() -> None:
     """In noisy title-only results, artist matching should filter to Westlife."""
     candidates = _lrclib_noisy_candidates()
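The revised expectations in the duration-decay test pin down the new scoring rule: comparable metadata is rescaled to span 0-90, a constant baseline of +10 is always added, and synced state no longer contributes. A minimal sketch of that rule, restricted to the duration-only case the test exercises (names and tolerance are illustrative, not the project's API):

```python
# Hypothetical re-implementation of the duration-only scoring rule implied by
# the updated asserts above. Not the project's _score_candidate.

def score_candidate(duration_ms, ref_ms, tolerance_ms=3000):
    """Linear decay inside the tolerance window, plus a constant +10 baseline."""
    if duration_ms is None or ref_ms is None:
        return 10.0  # nothing comparable → baseline only
    error = abs(duration_ms - ref_ms)
    if error >= tolerance_ms:
        return 10.0  # at or beyond the tolerance edge the metadata part is 0
    metadata = 90.0 * (1.0 - error / tolerance_ms)  # rescaled to fill 0-90
    return metadata + 10.0


print(score_candidate(232000.0, 232000))           # exact match → 100.0
print(score_candidate(232000.0 + 1500.0, 232000))  # half tolerance → 55.0
print(score_candidate(232000.0 + 3000.0, 232000))  # at the edge → 10.0
```

The constant baseline explains why the edge case now scores 10.0 instead of 0.0, and why a missing field still yields 10.0.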
@@ -0,0 +1,684 @@
from __future__ import annotations

import asyncio
from pathlib import Path
from typing import Optional

from lrx_cli.lrc import LRCData
from lrx_cli.models import TrackMeta
from lrx_cli.watch.control import ControlClient, ControlServer, parse_delta
from lrx_cli.watch.view import BaseOutput, LyricView, WatchState, WatchStatus
from lrx_cli.watch.view.pipe import PipeOutput
from lrx_cli.watch.view.print import PrintOutput
from lrx_cli.watch.player import ActivePlayerSelector, PlayerState, PlayerTarget
from lrx_cli.config import AppConfig
from lrx_cli.watch.tracker import PositionTracker
from lrx_cli.watch.session import WatchCoordinator


TEST_CONFIG = AppConfig()
BUS = "org.mpris.MediaPlayer2.spotify"


def test_parse_delta_supports_plus_minus_and_reset() -> None:
    assert parse_delta("+200") == (True, 200, None)
    assert parse_delta("-150") == (True, -150, None)
    assert parse_delta("0") == (True, 0, None)
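The test above only pins down three happy-path cases of `parse_delta` (an `(ok, value, error)` triple; per the test name, `"0"` appears to mean "reset"). A minimal sketch consistent with those asserts, purely illustrative since the real `lrx_cli.watch.control.parse_delta` may validate differently:

```python
import re

# Illustrative parse_delta-style helper matching the three asserted cases.
# Assumption: the third tuple element is an error message, None on success.

def parse_delta(text):
    """Return (ok, delta_ms, error_message) for strings like '+200', '-150', '0'."""
    if not re.fullmatch(r"[+-]?\d+", text.strip()):
        return (False, 0, f"invalid delta: {text!r}")
    return (True, int(text), None)


print(parse_delta("+200"))  # (True, 200, None)
print(parse_delta("-150"))  # (True, -150, None)
print(parse_delta("0"))     # (True, 0, None)
```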
# PlayerTarget


def test_player_target_allows_all_when_hint_empty() -> None:
    target = PlayerTarget()
    assert target.allows("org.mpris.MediaPlayer2.spotify") is True
    assert target.allows("org.mpris.MediaPlayer2.mpd") is True


def test_player_target_filters_by_case_insensitive_substring() -> None:
    target = PlayerTarget("Spot")
    assert target.allows("org.mpris.MediaPlayer2.spotify") is True
    assert target.allows("org.mpris.MediaPlayer2.mpd") is False


def test_player_target_hint_allows_regardless_of_blacklist() -> None:
    # --player bypasses PLAYER_BLACKLIST; PlayerTarget.allows() reflects the hint only
    target = PlayerTarget("spot")
    assert target.allows("org.mpris.MediaPlayer2.spotify") is True


# ActivePlayerSelector


def _ps(bus: str, status: str = "Playing") -> PlayerState:
    return PlayerState(bus_name=bus, status=status, track=TrackMeta(title="T"))


def test_active_player_selector_returns_none_when_no_players() -> None:
    assert ActivePlayerSelector.select({}, None, "spotify") is None


def test_active_player_selector_prefers_single_playing() -> None:
    players = {
        "org.mpris.MediaPlayer2.foo": _ps("org.mpris.MediaPlayer2.foo", "Paused"),
        "org.mpris.MediaPlayer2.bar": _ps("org.mpris.MediaPlayer2.bar", "Playing"),
    }
    assert (
        ActivePlayerSelector.select(players, None, "spotify")
        == "org.mpris.MediaPlayer2.bar"
    )


def test_active_player_selector_prefers_keyword_among_multiple_playing() -> None:
    players = {
        "org.mpris.MediaPlayer2.foo": _ps("org.mpris.MediaPlayer2.foo"),
        "org.mpris.MediaPlayer2.spotify": _ps("org.mpris.MediaPlayer2.spotify"),
    }
    assert (
        ActivePlayerSelector.select(players, None, "spotify")
        == "org.mpris.MediaPlayer2.spotify"
    )


def test_active_player_selector_uses_last_active_when_no_playing() -> None:
    players = {
        "org.mpris.MediaPlayer2.foo": _ps("org.mpris.MediaPlayer2.foo", "Paused"),
        "org.mpris.MediaPlayer2.bar": _ps("org.mpris.MediaPlayer2.bar", "Stopped"),
    }
    assert (
        ActivePlayerSelector.select(players, "org.mpris.MediaPlayer2.bar", "spotify")
        == "org.mpris.MediaPlayer2.bar"
    )


def test_active_player_selector_falls_back_to_first_when_no_preference() -> None:
    players = {
        "org.mpris.MediaPlayer2.foo": _ps("org.mpris.MediaPlayer2.foo", "Paused"),
    }
    result = ActivePlayerSelector.select(players, None, "")
    assert result == "org.mpris.MediaPlayer2.foo"
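Taken together, the selector tests describe a priority order: a Playing player wins (keyword match breaks ties among several), then the last-active player, then the first known player, and None when no players exist. A sketch of that policy; the real `ActivePlayerSelector` operates on `PlayerState` objects, while here `players` is simplified to a bus-name → status-string mapping:

```python
# Hypothetical selection policy matching the ActivePlayerSelector tests.
# `players`: dict of bus name → playback status ("Playing", "Paused", ...).

def select_active(players, last_active, keyword):
    playing = [bus for bus, status in players.items() if status == "Playing"]
    if playing:
        # keyword match wins when several players are playing
        for bus in playing:
            if keyword and keyword.lower() in bus.lower():
                return bus
        return playing[0]
    if last_active in players:
        return last_active  # no one playing → stick with the last-active player
    return next(iter(players), None)  # else first known player, or None


print(select_active({}, None, "spotify"))  # None
print(select_active(
    {"org.mpris.MediaPlayer2.foo": "Paused", "org.mpris.MediaPlayer2.bar": "Playing"},
    None,
    "spotify",
))  # org.mpris.MediaPlayer2.bar
```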
# PositionTracker


def test_position_tracker_seeked_calibrates_immediately() -> None:
    async def _run() -> None:
        tracker = PositionTracker(lambda _: asyncio.sleep(0, result=1200), TEST_CONFIG)
        await tracker.start()
        await tracker.set_active_player(BUS, "Playing", "track-A")
        await tracker.on_seeked(BUS, 3500)
        pos = await tracker.get_position_ms()
        await tracker.stop()
        assert pos >= 3500

    asyncio.run(_run())


def test_position_tracker_pause_stops_position_growth() -> None:
    async def _run() -> None:
        tracker = PositionTracker(lambda _: asyncio.sleep(0, result=0), TEST_CONFIG)
        await tracker.start()
        await tracker.set_active_player(BUS, "Playing", "track-A")
        await asyncio.sleep(0.08)
        before = await tracker.get_position_ms()
        await tracker.on_playback_status(BUS, "Paused")
        await asyncio.sleep(0.08)
        after = await tracker.get_position_ms()
        await tracker.stop()
        assert before > 0
        assert after - before < 20

    asyncio.run(_run())


def test_position_tracker_resume_via_playback_status_calibrates() -> None:
    async def _run() -> None:
        tracker = PositionTracker(lambda _: asyncio.sleep(0, result=50000), TEST_CONFIG)
        await tracker.start()
        await tracker.set_active_player(BUS, "Paused", "track-A")
        await tracker.on_playback_status(BUS, "Playing")
        pos = await tracker.get_position_ms()
        await tracker.stop()
        assert pos >= 50000

    asyncio.run(_run())


def test_position_tracker_paused_start_calibrates_initial_position() -> None:
    """set_active_player with Paused must still calibrate position — player may be mid-song."""

    async def _run() -> None:
        tracker = PositionTracker(lambda _: asyncio.sleep(0, result=45000), TEST_CONFIG)
        await tracker.start()
        await tracker.set_active_player(BUS, "Paused", "track-A")
        pos = await tracker.get_position_ms()
        await tracker.stop()
        assert pos >= 45000

    asyncio.run(_run())


def test_position_tracker_resume_via_set_active_player_calibrates() -> None:
    async def _run() -> None:
        tracker = PositionTracker(lambda _: asyncio.sleep(0, result=42000), TEST_CONFIG)
        await tracker.start()
        await tracker.set_active_player(BUS, "Paused", "track-A")
        await tracker.set_active_player(BUS, "Playing", "track-A")
        pos = await tracker.get_position_ms()
        await tracker.stop()
        assert pos >= 42000

    asyncio.run(_run())


# ControlServer and ControlClient


def test_control_server_and_client_roundtrip(tmp_path: Path) -> None:
    async def _run() -> None:
        class _Session:
            def __init__(self) -> None:
                self.offset = 0

            def handle_offset(self, delta: int) -> dict:
                self.offset += delta
                return {"ok": True, "offset_ms": self.offset}

            def handle_status(self) -> dict:
                return {"ok": True, "offset_ms": self.offset, "lyrics_status": "idle"}

        socket_path = tmp_path / "watch.sock"
        server = ControlServer(socket_path=str(socket_path))
        await server.start(_Session())  # type: ignore
        client = ControlClient(socket_path=str(socket_path))
        r1 = await client._send_async({"cmd": "offset", "delta": 200})
        r2 = await client._send_async({"cmd": "status"})
        await server.stop()
        assert r1 == {"ok": True, "offset_ms": 200}
        assert r2["ok"] is True
        assert r2["offset_ms"] == 200

    asyncio.run(_run())


# PipeOutput


def _pipe_state(
    status: WatchStatus,
    lyrics: Optional[LRCData] = None,
    position_ms: int = 0,
    offset_ms: int = 0,
    track: Optional[TrackMeta] = None,
) -> WatchState:
    return WatchState(
        track=track,
        lyrics=LyricView.from_lrc(lyrics) if lyrics else None,
        position_ms=position_ms,
        offset_ms=offset_ms,
        status=status,
    )


def test_pipe_output_fetching_renders_status_window(capsys) -> None:
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(_pipe_state(WatchStatus.FETCHING))
    )
    assert capsys.readouterr().out == "\n[fetching...]\n\n"


def test_pipe_output_no_lyrics_renders_status_window(capsys) -> None:
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(_pipe_state(WatchStatus.NO_LYRICS))
    )
    assert capsys.readouterr().out == "\n[no lyrics]\n\n"


def test_pipe_output_idle_renders_status_window(capsys) -> None:
    asyncio.run(PipeOutput(before=1, after=1).on_state(_pipe_state(WatchStatus.IDLE)))
    assert capsys.readouterr().out == "\n[idle]\n\n"


def test_pipe_output_no_newline_mode(capsys) -> None:
    asyncio.run(
        PipeOutput(before=0, after=0, no_newline=True).on_state(
            _pipe_state(WatchStatus.FETCHING)
        )
    )
    assert capsys.readouterr().out == "[fetching...]"


def test_pipe_output_default_window_shows_current_line(capsys) -> None:
    lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
    asyncio.run(
        PipeOutput().on_state(_pipe_state(WatchStatus.OK, lrc, position_ms=2100))
    )
    assert capsys.readouterr().out == "b\n"


def test_pipe_output_context_window(capsys) -> None:
    lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(
            _pipe_state(WatchStatus.OK, lrc, position_ms=2100)
        )
    )
    assert capsys.readouterr().out == "a\nb\nc\n"


def test_pipe_output_before_region_empty_at_first_line(capsys) -> None:
    lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(
            _pipe_state(WatchStatus.OK, lrc, position_ms=1100)
        )
    )
    assert capsys.readouterr().out == "\na\nb\n"


def test_pipe_output_after_region_empty_at_last_line(capsys) -> None:
    lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(
            _pipe_state(WatchStatus.OK, lrc, position_ms=3100)
        )
    )
    assert capsys.readouterr().out == "b\nc\n\n"


def test_pipe_output_upcoming_lines_before_first_timestamp(capsys) -> None:
    lrc = LRCData("[00:02.00]a\n[00:03.00]b")
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(
            _pipe_state(WatchStatus.OK, lrc, position_ms=0)
        )
    )
    assert capsys.readouterr().out == "\n\na\n"


def test_pipe_output_offset_ms_shifts_effective_position(capsys) -> None:
    lrc = LRCData("[00:01.00]a\n[00:02.00]b\n[00:03.00]c")
    asyncio.run(
        PipeOutput().on_state(
            _pipe_state(WatchStatus.OK, lrc, position_ms=1000, offset_ms=1500)
        )
    )
    # effective = 2500 ms → line b
    assert capsys.readouterr().out == "b\n"


def test_pipe_output_repeated_text_uses_correct_timed_occurrence(capsys) -> None:
    lrc = LRCData("[00:01.00]A\n[00:02.00]X\n[00:03.00]B\n[00:04.00]X\n[00:05.00]C")
    asyncio.run(
        PipeOutput(before=1, after=1).on_state(
            _pipe_state(WatchStatus.OK, lrc, position_ms=4100)
        )
    )
    assert capsys.readouterr().out == "B\nX\nC\n"
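The PipeOutput window tests all reduce to one rule: the effective position is `position_ms + offset_ms`, and the current line is the last timestamp at or before it (index -1 means "before the first timestamp", rendered as blank lines). A sketch of that lookup under those assumptions; `bisect` names and the helper are illustrative, not the project's code:

```python
import bisect

# Hypothetical line-selection rule matching the PipeOutput tests above.

def current_line_index(timestamps_ms, position_ms, offset_ms=0):
    """Index of the last line whose timestamp is <= effective position; -1 if none."""
    effective = position_ms + offset_ms
    return bisect.bisect_right(timestamps_ms, effective) - 1


lines = [(1000, "a"), (2000, "b"), (3000, "c")]
times = [t for t, _ in lines]

print(current_line_index(times, 2100))        # 1 → line "b"
print(current_line_index(times, 1000, 1500))  # effective 2500 ms → 1 → line "b"
print(current_line_index(times, 0))           # -1 → before first timestamp
```

Indexing by timestamp rather than by line text is also what makes the repeated-text test pass: the second `X` at 4 s is a distinct occurrence from the first at 2 s.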
# PrintOutput


def _ok_state(lyrics: LRCData, track: Optional[TrackMeta] = None) -> WatchState:
    return WatchState(
        track=track or TrackMeta(title="Song", artist="Artist"),
        lyrics=LyricView.from_lrc(lyrics),
        position_ms=0,
        offset_ms=0,
        status=WatchStatus.OK,
    )


def _status_state(status: WatchStatus, track: Optional[TrackMeta] = None) -> WatchState:
    return WatchState(
        track=track or TrackMeta(title="Song", artist="Artist"),
        lyrics=None,
        position_ms=0,
        offset_ms=0,
        status=status,
    )


def test_print_output_emits_lrc_on_ok(capsys) -> None:
    asyncio.run(
        PrintOutput().on_state(_ok_state(LRCData("[00:01.00]Hello\n[00:02.00]World")))
    )
    assert capsys.readouterr().out.startswith("[00:01.00]")


def test_print_output_plain_strips_tags(capsys) -> None:
    asyncio.run(
        PrintOutput(plain=True).on_state(
            _ok_state(LRCData("[00:01.00]Hello\n[00:02.00]World"))
        )
    )
    out = capsys.readouterr().out
    assert "[" not in out
    assert "Hello" in out


def test_print_output_plain_with_unsynced_lyrics(capsys) -> None:
    asyncio.run(PrintOutput(plain=True).on_state(_ok_state(LRCData("Hello\nWorld"))))
    out = capsys.readouterr().out
    assert "Hello" in out
    assert "[" not in out


def test_print_output_no_lyrics_emits_blank_line(capsys) -> None:
    asyncio.run(PrintOutput().on_state(_status_state(WatchStatus.NO_LYRICS)))
    assert capsys.readouterr().out == "\n"


def test_print_output_fetching_emits_nothing(capsys) -> None:
    asyncio.run(PrintOutput().on_state(_status_state(WatchStatus.FETCHING)))
    assert capsys.readouterr().out == ""


def test_print_output_idle_emits_nothing(capsys) -> None:
    asyncio.run(PrintOutput().on_state(_status_state(WatchStatus.IDLE)))
    assert capsys.readouterr().out == ""


def test_print_output_is_stateless(capsys) -> None:
    """View has no internal deduplication — emits on every call."""
    output = PrintOutput()
    state = _ok_state(LRCData("[00:01.00]Hello"))
    asyncio.run(output.on_state(state))
    asyncio.run(output.on_state(state))
    lines = [ln for ln in capsys.readouterr().out.splitlines() if ln]
    assert len(lines) == 2


def test_print_output_position_sensitive_is_false() -> None:
    assert PrintOutput.position_sensitive is False


# WatchCoordinator


class _CaptureFetcher:
    def __init__(self) -> None:
        self.requested: list[str] = []

    def request(self, track: TrackMeta) -> None:
        self.requested.append(track.display_name())

    async def stop(self) -> None:
        pass


def _make_coordinator(output: Optional[BaseOutput] = None) -> WatchCoordinator:
    class _Manager:
        def fetch_for_track(self, *_a, **_kw):
            return None

    class _NullOutput(BaseOutput):
        async def on_state(self, state: WatchState) -> None:
            pass

    session = WatchCoordinator(
        _Manager(),  # type: ignore
        output or _NullOutput(),
        player_hint=None,
        config=TEST_CONFIG,
    )
    session._tracker = PositionTracker(
        lambda _bus: asyncio.sleep(0, result=0),
        TEST_CONFIG,
    )
    return session


def _pstate(status: str = "Playing", title: str = "Song") -> PlayerState:
    return PlayerState(
        bus_name=BUS,
        status=status,
        track=TrackMeta(title=title, artist="Artist"),
    )


def test_coordinator_fetches_on_initial_player() -> None:
    async def _run() -> None:
        session = _make_coordinator()
        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Playing")}
        session._on_player_change()
        await asyncio.sleep(0)
        assert fetcher.requested == ["Artist - Song"]
        assert session._model.status == WatchStatus.FETCHING

    asyncio.run(_run())


def test_coordinator_fetches_while_paused() -> None:
    """Fetch starts immediately even when player is paused — no wait for resume."""

    async def _run() -> None:
        session = _make_coordinator()
        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Paused")}
        session._on_player_change()
        await asyncio.sleep(0)
        assert fetcher.requested == ["Artist - Song"]

    asyncio.run(_run())


def test_coordinator_paused_start_emits_correct_line_after_fetch() -> None:
    """After fetch completes with a mid-song paused player, the current lyric line must render."""

    async def _run() -> None:
        received: list[WatchState] = []

        class _CaptureOutput(BaseOutput):
            position_sensitive = True

            async def on_state(self, state: WatchState) -> None:
                received.append(state)

        class _Manager:
            def fetch_for_track(self, *_a, **_kw):
                return None

        PAUSED_MS = 45000
        lrc = LRCData("[00:43.00]a\n[00:44.00]b\n[00:46.00]c")

        session = WatchCoordinator(
            _Manager(),  # type: ignore
            _CaptureOutput(),
            player_hint=None,
            config=TEST_CONFIG,
        )
        session._tracker = PositionTracker(
            lambda _bus: asyncio.sleep(0, result=PAUSED_MS),
            TEST_CONFIG,
        )
        await session._tracker.start()

        # Calibrate tracker directly (tracker-level behavior already covered by
        # test_position_tracker_paused_start_calibrates_initial_position)
        await session._tracker.set_active_player(BUS, "Paused", "Artist - Song")

        # Put model in the state _on_player_change would have produced
        session._model.active_player = BUS
        session._model.active_track_key = "Artist - Song"
        session._model.status = WatchStatus.FETCHING
        session._player_monitor.players = {BUS: _pstate("Paused")}
        session._last_emit_signature = (
            "status",
            WatchStatus.FETCHING,
            BUS,
            "Artist - Song",
        )

        await session._on_lyrics_update(lrc)

        last_ok = next(
            (s for s in reversed(received) if s.status == WatchStatus.OK), None
        )
        assert last_ok is not None, "no OK state emitted after lyrics loaded"
        assert last_ok.position_ms >= PAUSED_MS

        await session._tracker.stop()

    asyncio.run(_run())


def test_coordinator_fetches_on_track_change() -> None:
    async def _run() -> None:
        session = _make_coordinator()
        session._model.active_player = BUS
        session._model.active_track_key = "Artist - Old Song"
        session._model.set_lyrics(LRCData("[00:01.00]old"))
        session._model.status = WatchStatus.OK

        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Playing", title="New Song")}
        session._on_player_change()
        await asyncio.sleep(0)

        assert fetcher.requested == ["Artist - New Song"]
        assert session._model.lyrics is None

    asyncio.run(_run())


def test_coordinator_no_refetch_on_calibration_no_lyrics() -> None:
    """Calibration with same player/track and no_lyrics must NOT trigger a second fetch."""

    async def _run() -> None:
        session = _make_coordinator()
        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Playing")}
        session._on_player_change()
        await asyncio.sleep(0)
        assert len(fetcher.requested) == 1

        session._model.status = WatchStatus.NO_LYRICS
        session._on_player_change()
        await asyncio.sleep(0)
        assert len(fetcher.requested) == 1

    asyncio.run(_run())


def test_coordinator_no_fetch_when_lyrics_present() -> None:
    async def _run() -> None:
        session = _make_coordinator()
        session._model.active_player = BUS
        session._model.active_track_key = "Artist - Song"
        session._model.set_lyrics(LRCData("[00:01.00]line"))
        session._model.status = WatchStatus.OK

        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {BUS: _pstate("Playing")}
        session._on_player_change()
        await asyncio.sleep(0)

        assert fetcher.requested == []
        assert session._model.status == WatchStatus.OK

    asyncio.run(_run())


def test_coordinator_player_disappears_goes_idle() -> None:
    async def _run() -> None:
        session = _make_coordinator()
        session._model.active_player = BUS
        session._model.active_track_key = "Artist - Song"
        session._model.set_lyrics(LRCData("[00:01.00]line"))
        session._model.status = WatchStatus.OK

        session._player_monitor.players = {}
        session._on_player_change()
        await asyncio.sleep(0)

        assert session._model.status == WatchStatus.IDLE
        assert session._model.lyrics is None
        assert session._model.active_player is None

    asyncio.run(_run())


def test_coordinator_no_fetch_when_track_is_none() -> None:
    """Player present but reports no track metadata → no fetch, status NO_LYRICS."""

    async def _run() -> None:
        session = _make_coordinator()
        fetcher = _CaptureFetcher()
        session._fetcher = fetcher  # type: ignore[assignment]
        session._player_monitor.players = {
            BUS: PlayerState(bus_name=BUS, status="Playing", track=None)
        }
        session._on_player_change()
        await asyncio.sleep(0)

        assert fetcher.requested == []
        assert session._model.status == WatchStatus.NO_LYRICS

    asyncio.run(_run())


def test_coordinator_emit_deduplicates_on_same_cursor() -> None:
    async def _run() -> None:
        counts = [0]

        class _CountOutput(BaseOutput):
            async def on_state(self, state: WatchState) -> None:
                counts[0] += 1

        session = _make_coordinator(_CountOutput())
        track = TrackMeta(title="Song", artist="Artist")
        session._model.active_player = BUS
        session._player_monitor.players = {
            BUS: PlayerState(bus_name=BUS, status="Playing", track=track)
        }
        session._model.set_lyrics(LRCData("[00:01.00]a\n[00:03.00]b"))
        session._model.status = WatchStatus.OK
        await session._tracker.set_active_player(BUS, "Playing", "Artist - Song")

        await session._emit_state()  # emits
        await session._emit_state()  # same cursor → no emit
        assert counts[0] == 1

        await session._tracker.on_seeked(BUS, 3200)
        await session._emit_state()  # cursor advanced → emits
        assert counts[0] == 2

    asyncio.run(_run())


def test_coordinator_position_insensitive_output_ignores_seeks() -> None:
    """With position_sensitive=False, seek events do not trigger re-emit."""

    async def _run() -> None:
        counts = [0]

        class _CountPrint(PrintOutput):
            async def on_state(self, state: WatchState) -> None:
                counts[0] += 1

        session = _make_coordinator(_CountPrint())
        track = TrackMeta(title="Song", artist="Artist")
        session._model.active_player = BUS
        session._player_monitor.players = {
            BUS: PlayerState(bus_name=BUS, status="Playing", track=track)
        }
        session._model.set_lyrics(LRCData("[00:01.00]a\n[00:03.00]b"))
        session._model.status = WatchStatus.OK

        await session._emit_state()  # emits once
        assert counts[0] == 1

        await session._tracker.on_seeked(BUS, 3200)
        await session._emit_state()  # position fixed at 0 → same signature → no re-emit
        assert counts[0] == 1

    asyncio.run(_run())
@@ -43,7 +43,7 @@ wheels = [
|
|||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "cyclopts"
|
name = "cyclopts"
|
||||||
version = "4.10.1"
|
version = "4.10.2"
|
||||||
source = { registry = "https://pypi.org/simple" }
|
source = { registry = "https://pypi.org/simple" }
|
||||||
dependencies = [
|
dependencies = [
|
||||||
{ name = "attrs" },
|
{ name = "attrs" },
|
||||||
@@ -51,9 +51,9 @@ dependencies = [
     { name = "rich" },
     { name = "rich-rst" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/6c/c4/2ce2ca1451487dc7d59f09334c3fa1182c46cfcf0a2d5f19f9b26d53ac74/cyclopts-4.10.1.tar.gz", hash = "sha256:ad4e4bb90576412d32276b14a76f55d43353753d16217f2c3cd5bdceba7f15a0", size = 166623, upload-time = "2026-03-23T14:43:01.098Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/66/2c/fced34890f6e5a93a4b7afb2c71e8eee2a0719fb26193a0abf159ecb714d/cyclopts-4.10.2.tar.gz", hash = "sha256:d7b950457ef2563596d56331f80cbbbf86a2772535fb8b315c4f03bc7e6127f1", size = 166664, upload-time = "2026-04-08T23:57:45.805Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/8a/0b/2261922126b2e50c601fe22d7ff5194e0a4d50e654836260c0665e24d862/cyclopts-4.10.1-py3-none-any.whl", hash = "sha256:35f37257139380a386d9fe4475e1e7c87ca7795765ef4f31abba579fcfcb6ecd", size = 204331, upload-time = "2026-03-23T14:43:02.625Z" },
+    { url = "https://files.pythonhosted.org/packages/b4/bd/05055d8360cef0757d79367157f3b15c0a0715e81e08f86a04018ec045f0/cyclopts-4.10.2-py3-none-any.whl", hash = "sha256:a1f2d6f8f7afac9456b48f75a40b36658778ddc9c6d406b520d017ae32c990fe", size = 204314, upload-time = "2026-04-08T23:57:46.969Z" },
 ]
 
 [[package]]
@@ -153,7 +153,7 @@ wheels = [
 
 [[package]]
 name = "lrx-cli"
-version = "0.4.5"
+version = "0.7.9"
 source = { editable = "." }
 dependencies = [
     { name = "cyclopts" },
@@ -162,11 +162,12 @@ dependencies = [
     { name = "loguru" },
     { name = "mutagen" },
     { name = "platformdirs" },
-    { name = "python-dotenv" },
 ]
 
 [package.dev-dependencies]
 dev = [
+    { name = "poethepoet" },
+    { name = "pyright" },
     { name = "pytest" },
     { name = "ruff" },
 ]
@@ -178,12 +179,13 @@ requires-dist = [
     { name = "httpx", specifier = ">=0.28.1" },
     { name = "loguru", specifier = ">=0.7.3" },
     { name = "mutagen", specifier = ">=1.47.0" },
-    { name = "platformdirs", specifier = ">=4.9.4" },
-    { name = "python-dotenv", specifier = ">=1.2.2" },
+    { name = "platformdirs", specifier = ">=4.9.6" },
 ]
 
 [package.metadata.requires-dev]
 dev = [
+    { name = "poethepoet", specifier = ">=0.44.0" },
+    { name = "pyright", specifier = ">=1.1.406" },
     { name = "pytest", specifier = ">=9.0.2" },
     { name = "ruff", specifier = ">=0.15.8" },
 ]
@@ -218,6 +220,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/b0/7a/620f945b96be1f6ee357d211d5bf74ab1b7fe72a9f1525aafbfe3aee6875/mutagen-1.47.0-py3-none-any.whl", hash = "sha256:edd96f50c5907a9539d8e5bba7245f62c9f520aef333d13392a79a4f70aca719", size = 194391, upload-time = "2023-09-03T16:33:29.955Z" },
 ]
 
+[[package]]
+name = "nodeenv"
+version = "1.10.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/24/bf/d1bda4f6168e0b2e9e5958945e01910052158313224ada5ce1fb2e1113b8/nodeenv-1.10.0.tar.gz", hash = "sha256:996c191ad80897d076bdfba80a41994c2b47c68e224c542b48feba42ba00f8bb", size = 55611, upload-time = "2025-12-20T14:08:54.006Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/88/b2/d0896bdcdc8d28a7fc5717c305f1a861c26e18c05047949fb371034d98bd/nodeenv-1.10.0-py2.py3-none-any.whl", hash = "sha256:5bb13e3eed2923615535339b3c620e76779af4cb4c6a90deccc9e36b274d3827", size = 23438, upload-time = "2025-12-20T14:08:52.782Z" },
+]
+
 [[package]]
 name = "packaging"
 version = "26.0"
@@ -228,12 +239,21 @@ wheels = [
 ]
 
 [[package]]
-name = "platformdirs"
-version = "4.9.4"
+name = "pastel"
+version = "0.2.1"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/19/56/8d4c30c8a1d07013911a8fdbd8f89440ef9f08d07a1b50ab8ca8be5a20f9/platformdirs-4.9.4.tar.gz", hash = "sha256:1ec356301b7dc906d83f371c8f487070e99d3ccf9e501686456394622a01a934", size = 28737, upload-time = "2026-03-05T18:34:13.271Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/76/f1/4594f5e0fcddb6953e5b8fe00da8c317b8b41b547e2b3ae2da7512943c62/pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d", size = 7555, upload-time = "2020-09-16T19:21:12.43Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/63/d7/97f7e3a6abb67d8080dd406fd4df842c2be0efaf712d1c899c32a075027c/platformdirs-4.9.4-py3-none-any.whl", hash = "sha256:68a9a4619a666ea6439f2ff250c12a853cd1cbd5158d258bd824a7df6be2f868", size = 21216, upload-time = "2026-03-05T18:34:12.172Z" },
+    { url = "https://files.pythonhosted.org/packages/aa/18/a8444036c6dd65ba3624c63b734d3ba95ba63ace513078e1580590075d21/pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364", size = 5955, upload-time = "2020-09-16T19:21:11.409Z" },
+]
+
+[[package]]
+name = "platformdirs"
+version = "4.9.6"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/9f/4a/0883b8e3802965322523f0b200ecf33d31f10991d0401162f4b23c698b42/platformdirs-4.9.6.tar.gz", hash = "sha256:3bfa75b0ad0db84096ae777218481852c0ebc6c727b3168c1b9e0118e458cf0a", size = 29400, upload-time = "2026-04-09T00:04:10.812Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/75/a6/a0a304dc33b49145b21f4808d763822111e67d1c3a32b524a1baf947b6e1/platformdirs-4.9.6-py3-none-any.whl", hash = "sha256:e61adb1d5e5cb3441b4b7710bea7e4c12250ca49439228cc1021c00dcfac0917", size = 21348, upload-time = "2026-04-09T00:04:09.463Z" },
 ]
 
 [[package]]
@@ -246,17 +266,43 @@ wheels = [
 ]
 
 [[package]]
-name = "pygments"
-version = "2.19.2"
+name = "poethepoet"
+version = "0.44.0"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
+dependencies = [
+    { name = "pastel" },
+    { name = "pyyaml" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/a1/a4/e487662f12a5ecd2ac4d77f7697e4bda481953bb80032b158e5ab55173d4/poethepoet-0.44.0.tar.gz", hash = "sha256:c2667b513621788fb46482e371cdf81c0b04344e0e0bcb7aa8af45f84c2fce7b", size = 96040, upload-time = "2026-04-06T19:40:58.908Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
+    { url = "https://files.pythonhosted.org/packages/80/b7/503b7d3a51b0de9a329f1323048d166e309a97bb31bdc60e6acd11d2c71f/poethepoet-0.44.0-py3-none-any.whl", hash = "sha256:36d3d834708ed069ac1e4f8ed77915c55265b7b6e01aeb2fe617c9fe9cfd524a", size = 122873, upload-time = "2026-04-06T19:40:57.369Z" },
+]
+
+[[package]]
+name = "pygments"
+version = "2.20.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/c3/b2/bc9c9196916376152d655522fdcebac55e66de6603a76a02bca1b6414f6c/pygments-2.20.0.tar.gz", hash = "sha256:6757cd03768053ff99f3039c1a36d6c0aa0b263438fcab17520b30a303a82b5f", size = 4955991, upload-time = "2026-03-29T13:29:33.898Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/f4/7e/a72dd26f3b0f4f2bf1dd8923c85f7ceb43172af56d63c7383eb62b332364/pygments-2.20.0-py3-none-any.whl", hash = "sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176", size = 1231151, upload-time = "2026-03-29T13:29:30.038Z" },
+]
+
+[[package]]
+name = "pyright"
+version = "1.1.408"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "nodeenv" },
+    { name = "typing-extensions" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/74/b2/5db700e52554b8f025faa9c3c624c59f1f6c8841ba81ab97641b54322f16/pyright-1.1.408.tar.gz", hash = "sha256:f28f2321f96852fa50b5829ea492f6adb0e6954568d1caa3f3af3a5f555eb684", size = 4400578, upload-time = "2026-01-08T08:07:38.795Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/0c/82/a2c93e32800940d9573fb28c346772a14778b84ba7524e691b324620ab89/pyright-1.1.408-py3-none-any.whl", hash = "sha256:090b32865f4fdb1e0e6cd82bf5618480d48eecd2eb2e70f960982a3d9a4c17c1", size = 6399144, upload-time = "2026-01-08T08:07:37.082Z" },
 ]
 
 [[package]]
 name = "pytest"
-version = "9.0.2"
+version = "9.0.3"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "colorama", marker = "sys_platform == 'win32'" },
@@ -265,18 +311,45 @@ dependencies = [
     { name = "pluggy" },
     { name = "pygments" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/7d/0d/549bd94f1a0a402dc8cf64563a117c0f3765662e2e668477624baeec44d5/pytest-9.0.3.tar.gz", hash = "sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c", size = 1572165, upload-time = "2026-04-07T17:16:18.027Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" },
+    { url = "https://files.pythonhosted.org/packages/d4/24/a372aaf5c9b7208e7112038812994107bc65a84cd00e0354a88c2c77a617/pytest-9.0.3-py3-none-any.whl", hash = "sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9", size = 375249, upload-time = "2026-04-07T17:16:16.13Z" },
 ]
 
 [[package]]
-name = "python-dotenv"
-version = "1.2.2"
+name = "pyyaml"
+version = "6.0.3"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/82/ed/0301aeeac3e5353ef3d94b6ec08bbcabd04a72018415dcb29e588514bba8/python_dotenv-1.2.2.tar.gz", hash = "sha256:2c371a91fbd7ba082c2c1dc1f8bf89ca22564a087c2c287cd9b662adde799cf3", size = 50135, upload-time = "2026-03-01T16:00:26.196Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/05/8e/961c0007c59b8dd7729d542c61a4d537767a59645b82a0b521206e1e25c2/pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f", size = 130960, upload-time = "2025-09-25T21:33:16.546Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/0b/d7/1959b9648791274998a9c3526f6d0ec8fd2233e4d4acce81bbae76b44b2a/python_dotenv-1.2.2-py3-none-any.whl", hash = "sha256:1d8214789a24de455a8b8bd8ae6fe3c6b69a5e3d64aa8a8e5d68e694bbcb285a", size = 22101, upload-time = "2026-03-01T16:00:25.09Z" },
+    { url = "https://files.pythonhosted.org/packages/d1/11/0fd08f8192109f7169db964b5707a2f1e8b745d4e239b784a5a1dd80d1db/pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8", size = 181669, upload-time = "2025-09-25T21:32:23.673Z" },
+    { url = "https://files.pythonhosted.org/packages/b1/16/95309993f1d3748cd644e02e38b75d50cbc0d9561d21f390a76242ce073f/pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1", size = 173252, upload-time = "2025-09-25T21:32:25.149Z" },
+    { url = "https://files.pythonhosted.org/packages/50/31/b20f376d3f810b9b2371e72ef5adb33879b25edb7a6d072cb7ca0c486398/pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c", size = 767081, upload-time = "2025-09-25T21:32:26.575Z" },
+    { url = "https://files.pythonhosted.org/packages/49/1e/a55ca81e949270d5d4432fbbd19dfea5321eda7c41a849d443dc92fd1ff7/pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5", size = 841159, upload-time = "2025-09-25T21:32:27.727Z" },
+    { url = "https://files.pythonhosted.org/packages/74/27/e5b8f34d02d9995b80abcef563ea1f8b56d20134d8f4e5e81733b1feceb2/pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6", size = 801626, upload-time = "2025-09-25T21:32:28.878Z" },
+    { url = "https://files.pythonhosted.org/packages/f9/11/ba845c23988798f40e52ba45f34849aa8a1f2d4af4b798588010792ebad6/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6", size = 753613, upload-time = "2025-09-25T21:32:30.178Z" },
+    { url = "https://files.pythonhosted.org/packages/3d/e0/7966e1a7bfc0a45bf0a7fb6b98ea03fc9b8d84fa7f2229e9659680b69ee3/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be", size = 794115, upload-time = "2025-09-25T21:32:31.353Z" },
+    { url = "https://files.pythonhosted.org/packages/de/94/980b50a6531b3019e45ddeada0626d45fa85cbe22300844a7983285bed3b/pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26", size = 137427, upload-time = "2025-09-25T21:32:32.58Z" },
+    { url = "https://files.pythonhosted.org/packages/97/c9/39d5b874e8b28845e4ec2202b5da735d0199dbe5b8fb85f91398814a9a46/pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c", size = 154090, upload-time = "2025-09-25T21:32:33.659Z" },
+    { url = "https://files.pythonhosted.org/packages/73/e8/2bdf3ca2090f68bb3d75b44da7bbc71843b19c9f2b9cb9b0f4ab7a5a4329/pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb", size = 140246, upload-time = "2025-09-25T21:32:34.663Z" },
+    { url = "https://files.pythonhosted.org/packages/9d/8c/f4bd7f6465179953d3ac9bc44ac1a8a3e6122cf8ada906b4f96c60172d43/pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac", size = 181814, upload-time = "2025-09-25T21:32:35.712Z" },
+    { url = "https://files.pythonhosted.org/packages/bd/9c/4d95bb87eb2063d20db7b60faa3840c1b18025517ae857371c4dd55a6b3a/pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310", size = 173809, upload-time = "2025-09-25T21:32:36.789Z" },
+    { url = "https://files.pythonhosted.org/packages/92/b5/47e807c2623074914e29dabd16cbbdd4bf5e9b2db9f8090fa64411fc5382/pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7", size = 766454, upload-time = "2025-09-25T21:32:37.966Z" },
+    { url = "https://files.pythonhosted.org/packages/02/9e/e5e9b168be58564121efb3de6859c452fccde0ab093d8438905899a3a483/pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788", size = 836355, upload-time = "2025-09-25T21:32:39.178Z" },
+    { url = "https://files.pythonhosted.org/packages/88/f9/16491d7ed2a919954993e48aa941b200f38040928474c9e85ea9e64222c3/pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5", size = 794175, upload-time = "2025-09-25T21:32:40.865Z" },
+    { url = "https://files.pythonhosted.org/packages/dd/3f/5989debef34dc6397317802b527dbbafb2b4760878a53d4166579111411e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764", size = 755228, upload-time = "2025-09-25T21:32:42.084Z" },
+    { url = "https://files.pythonhosted.org/packages/d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35", size = 789194, upload-time = "2025-09-25T21:32:43.362Z" },
+    { url = "https://files.pythonhosted.org/packages/23/20/bb6982b26a40bb43951265ba29d4c246ef0ff59c9fdcdf0ed04e0687de4d/pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac", size = 156429, upload-time = "2025-09-25T21:32:57.844Z" },
+    { url = "https://files.pythonhosted.org/packages/f4/f4/a4541072bb9422c8a883ab55255f918fa378ecf083f5b85e87fc2b4eda1b/pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3", size = 143912, upload-time = "2025-09-25T21:32:59.247Z" },
+    { url = "https://files.pythonhosted.org/packages/7c/f9/07dd09ae774e4616edf6cda684ee78f97777bdd15847253637a6f052a62f/pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3", size = 189108, upload-time = "2025-09-25T21:32:44.377Z" },
+    { url = "https://files.pythonhosted.org/packages/4e/78/8d08c9fb7ce09ad8c38ad533c1191cf27f7ae1effe5bb9400a46d9437fcf/pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba", size = 183641, upload-time = "2025-09-25T21:32:45.407Z" },
+    { url = "https://files.pythonhosted.org/packages/7b/5b/3babb19104a46945cf816d047db2788bcaf8c94527a805610b0289a01c6b/pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c", size = 831901, upload-time = "2025-09-25T21:32:48.83Z" },
+    { url = "https://files.pythonhosted.org/packages/8b/cc/dff0684d8dc44da4d22a13f35f073d558c268780ce3c6ba1b87055bb0b87/pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702", size = 861132, upload-time = "2025-09-25T21:32:50.149Z" },
+    { url = "https://files.pythonhosted.org/packages/b1/5e/f77dc6b9036943e285ba76b49e118d9ea929885becb0a29ba8a7c75e29fe/pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c", size = 839261, upload-time = "2025-09-25T21:32:51.808Z" },
+    { url = "https://files.pythonhosted.org/packages/ce/88/a9db1376aa2a228197c58b37302f284b5617f56a5d959fd1763fb1675ce6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065", size = 805272, upload-time = "2025-09-25T21:32:52.941Z" },
+    { url = "https://files.pythonhosted.org/packages/da/92/1446574745d74df0c92e6aa4a7b0b3130706a4142b2d1a5869f2eaa423c6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65", size = 829923, upload-time = "2025-09-25T21:32:54.537Z" },
+    { url = "https://files.pythonhosted.org/packages/f0/7a/1c7270340330e575b92f397352af856a8c06f230aa3e76f86b39d01b416a/pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9", size = 174062, upload-time = "2025-09-25T21:32:55.767Z" },
+    { url = "https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" },
 ]
 
 [[package]]
@@ -307,27 +380,36 @@ wheels = [
|
|||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "ruff"
|
name = "ruff"
|
||||||
version = "0.15.8"
|
version = "0.15.10"
|
||||||
source = { registry = "https://pypi.org/simple" }
|
source = { registry = "https://pypi.org/simple" }
|
||||||
sdist = { url = "https://files.pythonhosted.org/packages/14/b0/73cf7550861e2b4824950b8b52eebdcc5adc792a00c514406556c5b80817/ruff-0.15.8.tar.gz", hash = "sha256:995f11f63597ee362130d1d5a327a87cb6f3f5eae3094c620bcc632329a4d26e", size = 4610921, upload-time = "2026-03-26T18:39:38.675Z" }
|
sdist = { url = "https://files.pythonhosted.org/packages/e7/d9/aa3f7d59a10ef6b14fe3431706f854dbf03c5976be614a9796d36326810c/ruff-0.15.10.tar.gz", hash = "sha256:d1f86e67ebfdef88e00faefa1552b5e510e1d35f3be7d423dc7e84e63788c94e", size = 4631728, upload-time = "2026-04-09T14:06:09.884Z" }
|
||||||
wheels = [
|
wheels = [
|
||||||
{ url = "https://files.pythonhosted.org/packages/4a/92/c445b0cd6da6e7ae51e954939cb69f97e008dbe750cfca89b8cedc081be7/ruff-0.15.8-py3-none-linux_armv6l.whl", hash = "sha256:cbe05adeba76d58162762d6b239c9056f1a15a55bd4b346cfd21e26cd6ad7bc7", size = 10527394, upload-time = "2026-03-26T18:39:41.566Z" },
|
{ url = "https://files.pythonhosted.org/packages/eb/00/a1c2fdc9939b2c03691edbda290afcd297f1f389196172826b03d6b6a595/ruff-0.15.10-py3-none-linux_armv6l.whl", hash = "sha256:0744e31482f8f7d0d10a11fcbf897af272fefdfcb10f5af907b18c2813ff4d5f", size = 10563362, upload-time = "2026-04-09T14:06:21.189Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/eb/92/f1c662784d149ad1414cae450b082cf736430c12ca78367f20f5ed569d65/ruff-0.15.8-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:d3e3d0b6ba8dca1b7ef9ab80a28e840a20070c4b62e56d675c24f366ef330570", size = 10905693, upload-time = "2026-03-26T18:39:30.364Z" },
|
{ url = "https://files.pythonhosted.org/packages/5c/15/006990029aea0bebe9d33c73c3e28c80c391ebdba408d1b08496f00d422d/ruff-0.15.10-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b1e7c16ea0ff5a53b7c2df52d947e685973049be1cdfe2b59a9c43601897b22e", size = 10951122, upload-time = "2026-04-09T14:06:02.236Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/ca/f2/7a631a8af6d88bcef997eb1bf87cc3da158294c57044aafd3e17030613de/ruff-0.15.8-py3-none-macosx_11_0_arm64.whl", hash = "sha256:6ee3ae5c65a42f273f126686353f2e08ff29927b7b7e203b711514370d500de3", size = 10323044, upload-time = "2026-03-26T18:39:33.37Z" },
|
{ url = "https://files.pythonhosted.org/packages/f2/c0/4ac978fe874d0618c7da647862afe697b281c2806f13ce904ad652fa87e4/ruff-0.15.10-py3-none-macosx_11_0_arm64.whl", hash = "sha256:93cc06a19e5155b4441dd72808fdf84290d84ad8a39ca3b0f994363ade4cebb1", size = 10314005, upload-time = "2026-04-09T14:06:00.026Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/67/18/1bf38e20914a05e72ef3b9569b1d5c70a7ef26cd188d69e9ca8ef588d5bf/ruff-0.15.8-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fdce027ada77baa448077ccc6ebb2fa9c3c62fd110d8659d601cf2f475858d94", size = 10629135, upload-time = "2026-03-26T18:39:44.142Z" },
|
{ url = "https://files.pythonhosted.org/packages/da/73/c209138a5c98c0d321266372fc4e33ad43d506d7e5dd817dd89b60a8548f/ruff-0.15.10-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83e1dd04312997c99ea6965df66a14fb4f03ba978564574ffc68b0d61fd3989e", size = 10643450, upload-time = "2026-04-09T14:05:42.137Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/d2/e9/138c150ff9af60556121623d41aba18b7b57d95ac032e177b6a53789d279/ruff-0.15.8-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:12e617fc01a95e5821648a6df341d80456bd627bfab8a829f7cfc26a14a4b4a3", size = 10348041, upload-time = "2026-03-26T18:39:52.178Z" },
|
{ url = "https://files.pythonhosted.org/packages/ec/76/0deec355d8ec10709653635b1f90856735302cb8e149acfdf6f82a5feb70/ruff-0.15.10-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8154d43684e4333360fedd11aaa40b1b08a4e37d8ffa9d95fee6fa5b37b6fab1", size = 10379597, upload-time = "2026-04-09T14:05:49.984Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/02/f1/5bfb9298d9c323f842c5ddeb85f1f10ef51516ac7a34ba446c9347d898df/ruff-0.15.8-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:432701303b26416d22ba696c39f2c6f12499b89093b61360abc34bcc9bf07762", size = 11121987, upload-time = "2026-03-26T18:39:55.195Z" },
|
{ url = "https://files.pythonhosted.org/packages/dc/be/86bba8fc8798c081e28a4b3bb6d143ccad3fd5f6f024f02002b8f08a9fa3/ruff-0.15.10-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ab88715f3a6deb6bde6c227f3a123410bec7b855c3ae331b4c006189e895cef", size = 11146645, upload-time = "2026-04-09T14:06:12.246Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/10/11/6da2e538704e753c04e8d86b1fc55712fdbdcc266af1a1ece7a51fff0d10/ruff-0.15.8-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d910ae974b7a06a33a057cb87d2a10792a3b2b3b35e33d2699fdf63ec8f6b17a", size = 11951057, upload-time = "2026-03-26T18:39:19.18Z" },
|
{ url = "https://files.pythonhosted.org/packages/a8/89/140025e65911b281c57be1d385ba1d932c2366ca88ae6663685aed8d4881/ruff-0.15.10-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a768ff5969b4f44c349d48edf4ab4f91eddb27fd9d77799598e130fb628aa158", size = 12030289, upload-time = "2026-04-09T14:06:04.776Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/83/f0/c9208c5fd5101bf87002fed774ff25a96eea313d305f1e5d5744698dc314/ruff-0.15.8-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2033f963c43949d51e6fdccd3946633c6b37c484f5f98c3035f49c27395a8ab8", size = 11464613, upload-time = "2026-03-26T18:40:06.301Z" },
|
{ url = "https://files.pythonhosted.org/packages/88/de/ddacca9545a5e01332567db01d44bd8cf725f2db3b3d61a80550b48308ea/ruff-0.15.10-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ee3ef42dab7078bda5ff6a1bcba8539e9857deb447132ad5566a038674540d0", size = 11496266, upload-time = "2026-04-09T14:05:55.485Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/f8/22/d7f2fabdba4fae9f3b570e5605d5eb4500dcb7b770d3217dca4428484b17/ruff-0.15.8-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f29b989a55572fb885b77464cf24af05500806ab4edf9a0fd8977f9759d85b1", size = 11257557, upload-time = "2026-03-26T18:39:57.972Z" },
|
{ url = "https://files.pythonhosted.org/packages/bc/bb/7ddb00a83760ff4a83c4e2fc231fd63937cc7317c10c82f583302e0f6586/ruff-0.15.10-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:51cb8cc943e891ba99989dd92d61e29b1d231e14811db9be6440ecf25d5c1609", size = 11256418, upload-time = "2026-04-09T14:05:57.69Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/71/8c/382a9620038cf6906446b23ce8632ab8c0811b8f9d3e764f58bedd0c9a6f/ruff-0.15.8-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:ac51d486bf457cdc985a412fb1801b2dfd1bd8838372fc55de64b1510eff4bec", size = 11169440, upload-time = "2026-03-26T18:39:22.205Z" },
|
{ url = "https://files.pythonhosted.org/packages/dc/8d/55de0d35aacf6cd50b6ee91ee0f291672080021896543776f4170fc5c454/ruff-0.15.10-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:e59c9bdc056a320fb9ea1700a8d591718b8faf78af065484e801258d3a76bc3f", size = 11288416, upload-time = "2026-04-09T14:05:44.695Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/4d/0d/0994c802a7eaaf99380085e4e40c845f8e32a562e20a38ec06174b52ef24/ruff-0.15.8-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:c9861eb959edab053c10ad62c278835ee69ca527b6dcd72b47d5c1e5648964f6", size = 10605963, upload-time = "2026-03-26T18:39:46.682Z" },
|
{ url = "https://files.pythonhosted.org/packages/68/cf/9438b1a27426ec46a80e0a718093c7f958ef72f43eb3111862949ead3cc1/ruff-0.15.10-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:136c00ca2f47b0018b073f28cb5c1506642a830ea941a60354b0e8bc8076b151", size = 10621053, upload-time = "2026-04-09T14:05:52.782Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/19/aa/d624b86f5b0aad7cef6bbf9cd47a6a02dfdc4f72c92a337d724e39c9d14b/ruff-0.15.8-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:8d9a5b8ea13f26ae90838afc33f91b547e61b794865374f114f349e9036835fb", size = 10357484, upload-time = "2026-03-26T18:39:49.176Z" },
|
    { url = "https://files.pythonhosted.org/packages/4c/50/e29be6e2c135e9cd4cb15fbade49d6a2717e009dff3766dd080fcb82e251/ruff-0.15.10-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:8b80a2f3c9c8a950d6237f2ca12b206bccff626139be9fa005f14feb881a1ae8", size = 10378302, upload-time = "2026-04-09T14:06:14.361Z" },
    { url = "https://files.pythonhosted.org/packages/18/2f/e0b36a6f99c51bb89f3a30239bc7bf97e87a37ae80aa2d6542d6e5150364/ruff-0.15.10-py3-none-musllinux_1_2_i686.whl", hash = "sha256:e3e53c588164dc025b671c9df2462429d60357ea91af7e92e9d56c565a9f1b07", size = 10850074, upload-time = "2026-04-09T14:06:16.581Z" },
    { url = "https://files.pythonhosted.org/packages/11/08/874da392558ce087a0f9b709dc6ec0d60cbc694c1c772dab8d5f31efe8cb/ruff-0.15.10-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:b0c52744cf9f143a393e284125d2576140b68264a93c6716464e129a3e9adb48", size = 11358051, upload-time = "2026-04-09T14:06:18.948Z" },
    { url = "https://files.pythonhosted.org/packages/e4/46/602938f030adfa043e67112b73821024dc79f3ab4df5474c25fa4c1d2d14/ruff-0.15.10-py3-none-win32.whl", hash = "sha256:d4272e87e801e9a27a2e8df7b21011c909d9ddd82f4f3281d269b6ba19789ca5", size = 10588964, upload-time = "2026-04-09T14:06:07.14Z" },
    { url = "https://files.pythonhosted.org/packages/25/b6/261225b875d7a13b33a6d02508c39c28450b2041bb01d0f7f1a83d569512/ruff-0.15.10-py3-none-win_amd64.whl", hash = "sha256:28cb32d53203242d403d819fd6983152489b12e4a3ae44993543d6fe62ab42ed", size = 11745044, upload-time = "2026-04-09T14:05:39.473Z" },
    { url = "https://files.pythonhosted.org/packages/58/ed/dea90a65b7d9e69888890fb14c90d7f51bf0c1e82ad800aeb0160e4bacfd/ruff-0.15.10-py3-none-win_arm64.whl", hash = "sha256:601d1610a9e1f1c2165a4f561eeaa2e2ea1e97f3287c5aa258d3dab8b57c6188", size = 11035607, upload-time = "2026-04-09T14:05:47.593Z" },
]

[[package]]
name = "typing-extensions"
version = "4.15.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
]

[[package]]