In the previous post, we discussed the hardware deployment of the secondary screen. For me, the core requirement is to display the latest news I am interested in, in near real time. Traditionally, this would require a complex recommendation system; however, with the powerful capabilities of LLMs, we can build a precise, personal recommendation stream with minimal configuration (Almost One-Shot).
To this end, I developed the AI-News-Dashboard project. The implementation strategy involves: recalling news widely from multiple channels via RSS/API, using an LLM to score each item based on my personal preferences, and finally generating a recommendation list that balances timeliness and accuracy using a Time Decay Algorithm.

You can experience the recommendation effects under different preferences via these two Demos:
- Demo 1 (Tech/AI/Space focus): https://kindledash.t0saki.com/
- Demo 2 (Medical/Local Life focus): https://parentdash.t0saki.com/
Technical Implementation Details
1. News Acquisition (Recall Layer)
Currently, content is pulled extensively via the RSS protocol, which remains the most universal web aggregation solution. The main strategies for RSS configuration include:
- Native Support: First check whether the target website offers a direct RSS interface.
- RSSHub: For sites without native interfaces (including many domestic Chinese platforms), use the powerful RSSHub project to convert them into RSS feeds.
- Google News: A highly efficient trick is constructing URL parameters for Google News RSS Feeds. This allows you to directly capture search results for specific keywords, saving the hassle of maintaining specific site lists for volatile topics.
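As an illustration of the Google News trick, a keyword search feed URL can be assembled like this. The `hl`, `gl`, and `ceid` parameters control the UI language, country, and edition of the feed; the query term here is just an example, not one the project necessarily uses:

```python
from urllib.parse import quote_plus

def google_news_rss_url(query: str, lang: str = "en-US", country: str = "US") -> str:
    """Build a Google News RSS search-feed URL for a keyword query."""
    # hl = interface language, gl = country, ceid = country:language edition
    return (
        "https://news.google.com/rss/search?q=" + quote_plus(query)
        + f"&hl={lang}&gl={country}&ceid={country}:{lang.split('-')[0]}"
    )

print(google_news_rss_url("Claude Opus"))
# https://news.google.com/rss/search?q=Claude+Opus&hl=en-US&gl=US&ceid=US:en
```

Because the query is just a URL parameter, tracking a new volatile topic means adding one keyword rather than curating a new site list.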
Acquired news and subsequent processing results are stored in a SQLite database. The main program polls periodically, storing only incremental content for further processing.
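A minimal sketch of this incremental-storage step, assuming a simple `articles` table keyed by item URL (the schema and function names are illustrative, not the project's actual ones):

```python
import sqlite3

def init_db(path: str = "news.db") -> sqlite3.Connection:
    """Create the articles table if it does not exist yet."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS articles (
               url TEXT PRIMARY KEY,  -- feed item link, doubles as the dedup key
               title TEXT,
               published TEXT,
               score REAL             -- filled in later by the LLM passes
           )"""
    )
    return conn

def insert_incremental(conn: sqlite3.Connection, items: list[dict]) -> int:
    """Store only items whose URL is not yet in the table; return the number of new rows."""
    before = conn.total_changes
    conn.executemany(
        "INSERT OR IGNORE INTO articles (url, title, published) "
        "VALUES (:url, :title, :published)",
        items,
    )
    conn.commit()
    return conn.total_changes - before
```

`INSERT OR IGNORE` against the URL primary key is what makes each polling cycle idempotent: re-fetching a feed only adds items not seen before.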
2. Personalized Preference Settings
To enable the LLM to accurately understand my needs, the Prompt design is crucial. In the Prompt, I detailed my areas of interest and provided specific scoring examples (Few-Shot), asking the LLM to mimic my standards. Practice has shown that LLMs perform excellently in this type of task, and adjusting preferences is as flexible as simply modifying the Prompt.
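As a rough illustration of the Few-Shot approach (the wording, interests, and examples below are invented, not the project's actual prompt), such a preference prompt might look like:

```python
# Illustrative only -- not the project's real prompt text.
PREFERENCE_PROMPT = """\
You rate news items from 0-10 for one specific reader.

Interests: AI/LLM research, spaceflight, consumer hardware.
Not interested in: celebrity news, sports, day-to-day stock moves.

Scoring examples (mimic these standards):
- "New open-weights LLM tops reasoning benchmarks" -> 9
- "SpaceX schedules next Starship test flight" -> 8
- "Phone maker ships minor firmware update" -> 3
- "Actor spotted at film premiere" -> 0

For each item, reply with only the integer score.
"""
```

Changing what the feed recommends is then a matter of editing this text, with no retraining or behavioral data involved.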
3. LLM Processing Flow (Coarse & Fine Ranking)
In this workflow, every news item undergoes two rounds of LLM processing, each handling a different task:
- Round 1 (Coarse Ranking): Uses a lightweight model (e.g., gemini-2.5-flash-lite).
  - Task: Quickly flag “potentially interesting” news based on preferences.
  - Goal: Perform preliminary screening of news sources with lower signal-to-noise ratios using a low-cost model, significantly reducing the overhead of subsequent high-precision processing.
- Round 2 (Fine Ranking & Rewriting): Uses a more capable model (e.g., gemini-3-flash-preview).
  - Task: Not only precisely scores the news but also performs Title Rewriting based on the title and abstract.
  - Goal: Address issues such as “clickbait” or low information density often found in Chinese sources, while improving reading efficiency for non-native (English) content. The final output is an optimized title and its recommendation score.
4. Sorting Algorithm (Re-ranking)
After LLM processing, we have a set of scored candidate news. If we sorted solely by score, the list might be dominated by high-scoring but older news. Therefore, I introduced a Gravity Sort algorithm (similar to Hacker News). By tuning hyperparameters to balance “content importance” and “timeliness,” it generates a recommendation list that is both precise and current.
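A minimal version of such a gravity score, using the commonly cited Hacker News formula `score / (age + 2)^gravity`; the offset and exponent here are the standard folklore values, and the project's actual hyperparameters may differ:

```python
def gravity_score(llm_score: float, age_hours: float, gravity: float = 1.8) -> float:
    """Time-decayed ranking score: a higher gravity exponent favours fresher items.

    The +2 offset and 1.8 exponent follow the commonly cited Hacker News
    formula; they are assumptions here, not the project's tuned values.
    """
    return llm_score / ((age_hours + 2) ** gravity)

# With these defaults, a fresh 7/10 item outranks a day-old 9/10 item.
```

Tuning `gravity` is the single knob trading off “content importance” against “timeliness”: lower values let high scores dominate longer, higher values churn the list faster.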
5. Deduplication Mechanism
Since I subscribe to multiple overlapping sources, a single hot topic (like the recent Claude Opus 4.6 release) might flood the feed. The current deduplication mechanism is basic: The Prompt instructs the LLM to merge duplicates only if they appear within the same Batch.
- Limitation: Duplicates across different Batches are hard to handle.
- Future Improvement: Theoretically, this can be solved using Levenshtein distance or text vectorization (Embedding) for similarity matching, enabling cross-batch semantic deduplication.
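As a sketch of what cross-batch deduplication could look like, here is a character-level similarity check using the standard library's `difflib` as a stand-in for Levenshtein distance (the threshold is an arbitrary assumption; an Embedding-based cosine similarity would additionally catch paraphrased duplicates):

```python
from difflib import SequenceMatcher

def is_duplicate(title_a: str, title_b: str, threshold: float = 0.85) -> bool:
    """Cross-batch dedup sketch: flag near-identical titles.

    SequenceMatcher.ratio() gives a 0-1 similarity; 0.85 is an
    illustrative cutoff, not a tuned value from the project.
    """
    return SequenceMatcher(None, title_a.lower(), title_b.lower()).ratio() >= threshold
```

Run against the titles already stored in SQLite, a check like this would catch the same release announcement arriving from several feeds hours apart, which the per-Batch Prompt merging cannot.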
6. Output and Display
The project ultimately generates two JSON files:
- dashboard.json: Contains the full list from the past 24 hours, used for the web dashboard.
- top5.json: A simplified version specifically for the Kindle secondary screen (see Part 1).
The server uses Nginx to host static files, effectively acting as a regularly updated API server. The front-end pages (the Demos linked at the start) render data by requesting this API.
Summary
This project essentially leverages the semantic understanding of modern LLMs to build a Single-User Recommendation System. Its flow perfectly mirrors the four classic stages of recommendation systems: Recall (RSS) → Coarse Ranking (Flash-Lite) → Fine Ranking (Flash-Preview) → Re-ranking (Gravity Sort).
Compared to traditional recommendation systems, this approach has distinct characteristics:
- Pros: No reliance on historical behavior data (Cold-start friendly), fully controllable, and accessible via API.
- Cons: Semantic deduplication is not yet perfect; preference updates rely on manual Prompt adjustments (lacking an automatic feedback loop); high cost for multi-user scaling.
If you also wish to escape the algorithmic “black box” and build a personalized information feed in a controllable, transparent way, the ideas behind this project might provide some inspiration.