
AI Secondary Screen (Part 2): Building a Personalized Multi-Source News Recommendation Feed with LLMs

In the previous post, we discussed the hardware deployment of the secondary screen. For me, the core requirement is to display the latest news I care about in near real-time. Traditionally, this would require a complex recommendation system; with the powerful capabilities of LLMs, however, we can build a precise, personal recommendation stream with minimal configuration (almost one-shot).

To this end, I developed the AI-News-Dashboard project. The implementation strategy involves: recalling news widely from multiple channels via RSS/API, using an LLM to score each item based on my personal preferences, and finally generating a recommendation list that balances timeliness and accuracy using a Time Decay Algorithm.
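The strategy above can be sketched as a single polling cycle. Everything here is illustrative: the function names are placeholders standing in for the project's actual modules, not its real API.

```python
# Illustrative end-to-end flow of one polling cycle; every name here is
# hypothetical, not the AI-News-Dashboard project's real API.
def run_once(fetch, score, rerank, publish):
    items = fetch()                      # recall: pull candidates from RSS/API sources
    for item in items:
        item["score"] = score(item)      # LLM assigns a personal-preference score
    ranked = rerank(items)               # time-decayed re-ranking
    publish(ranked)                      # write the result for the front end
    return ranked
```

Each stage is swappable: `fetch` could be any feed reader, and `score` any LLM client, which is what keeps the configuration minimal.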

AI-News-Demo

You can experience the recommendation effects under different preferences via these two Demos:

Technical Implementation Details

1. News Acquisition (Recall Layer)

Currently, content is pulled extensively via the RSS protocol, which remains the most universal web aggregation solution. The main strategies for RSS configuration include:

Acquired news and subsequent processing results are stored in a SQLite database. The main program polls periodically, storing only incremental content for further processing.
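A minimal sketch of the incremental-storage idea, assuming each feed entry carries a stable identifier (its URL or GUID); the table and column names are illustrative, not the project's actual schema.

```python
# Incremental SQLite storage sketch: only rows not seen before are kept
# for further processing. Schema names are hypothetical.
import sqlite3

def init_db(path="news.db"):
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS items (
            id         TEXT PRIMARY KEY,            -- entry URL or GUID
            title      TEXT,
            fetched_at TEXT DEFAULT CURRENT_TIMESTAMP
        )""")
    return conn

def store_new(conn, entries):
    """Insert only unseen entries; return the fresh ones for the LLM stage."""
    fresh = []
    for e in entries:
        cur = conn.execute(
            "INSERT OR IGNORE INTO items (id, title) VALUES (?, ?)",
            (e["id"], e["title"]))
        if cur.rowcount:      # rowcount is 1 only when the row was inserted
            fresh.append(e)
    conn.commit()
    return fresh
```

`INSERT OR IGNORE` against the primary key makes re-polling the same feeds idempotent, so the poller can run as often as it likes.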

2. Personalized Preference Settings

To enable the LLM to accurately understand my needs, the Prompt design is crucial. In the Prompt, I detailed my areas of interest and provided specific scoring examples (Few-Shot), asking the LLM to mimic my standards. Practice has shown that LLMs perform excellently in this type of task, and adjusting preferences is as flexible as simply modifying the Prompt.
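One way to assemble such a few-shot scoring prompt is sketched below. The interest areas and example scores are placeholders, not the author's actual preference file.

```python
# Hypothetical few-shot prompt builder; interests and example scores are
# stand-ins for the real preference configuration.
FEW_SHOT = [
    ("New open-weights LLM released with benchmark results", 9),
    ("Celebrity gossip roundup", 1),
]

def build_prompt(title, summary):
    examples = "\n".join(f'- "{t}" -> {s}/10' for t, s in FEW_SHOT)
    return (
        "You are my personal news filter. I care about AI research, "
        "open-source tooling, and hardware hacking.\n"
        f"Scoring examples:\n{examples}\n\n"
        "Score this item from 0-10, mimicking the examples above.\n"
        f"Title: {title}\nSummary: {summary}\nScore:"
    )
```

Adjusting preferences then really is just editing `FEW_SHOT` and the interest sentence, with no model retraining involved.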

3. LLM Processing Flow (Coarse & Fine Ranking)

In this workflow, every news item undergoes two rounds of LLM processing, each handling different tasks:
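A common shape for such a two-round flow, assuming a cheap model filters broadly and a stronger model scores the survivors; `call_llm` is a placeholder for any chat-completion client, and the threshold is a made-up default.

```python
# Sketch of a coarse-then-fine scoring pass. Model names, the threshold,
# and call_llm's signature are all assumptions for illustration.
def two_pass(items, call_llm, keep_threshold=5):
    # Pass 1 (coarse): a fast, cheap model assigns rough scores to everything.
    for item in items:
        item["coarse"] = call_llm("cheap-model", item)
    survivors = [i for i in items if i["coarse"] >= keep_threshold]
    # Pass 2 (fine): a stronger model rescoring only survivors keeps cost low.
    for item in survivors:
        item["score"] = call_llm("strong-model", item)
    return survivors
```

The point of the split is economic: the expensive model never sees items the cheap model has already rejected.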

4. Sorting Algorithm (Re-ranking)

After LLM processing, we have a set of scored candidate news items. If we sorted solely by score, the list might be dominated by high-scoring but stale news. Therefore, I introduced a gravity-based ranking algorithm (similar to Hacker News's ranking formula). By tuning hyperparameters to balance "content importance" against "timeliness," it generates a recommendation list that is both precise and current.
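The gravity idea reduces to one formula: importance divided by an age penalty. The 2-hour offset and the gravity exponent below are the tunable hyperparameters; 1.8 is Hacker News's commonly cited default, used here only as an illustrative starting point.

```python
# Hacker News-style gravity ranking: hotness = score / (age + offset)^gravity.
# Larger gravity favors freshness; larger offset dampens the advantage of
# brand-new items.
def gravity_rank(items, gravity=1.8, offset=2.0):
    def hotness(item):
        return item["score"] / (item["age_hours"] + offset) ** gravity
    return sorted(items, key=hotness, reverse=True)
```

With these defaults, a 7/10 story from an hour ago comfortably outranks a 9/10 story from two days ago, which is exactly the precision/timeliness trade-off described above.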

5. Deduplication Mechanism

Since I subscribe to multiple overlapping sources, a single hot topic (like the recent Claude Opus 4.6 release) might flood the feed. The current deduplication mechanism is basic: The Prompt instructs the LLM to merge duplicates only if they appear within the same Batch.
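That batch-level mechanism amounts to a prompt like the sketch below; the exact instruction wording is illustrative. Because merging happens only inside one batch, duplicates that arrive in different polling cycles still slip through, which is the stated limitation.

```python
# Hypothetical batch dedup prompt, matching the mechanism described above:
# the LLM merges duplicates only among the titles it sees in one batch.
def dedup_prompt(batch_titles):
    listing = "\n".join(f"{i}. {t}" for i, t in enumerate(batch_titles))
    return (
        "Below is one batch of news titles. If several items cover the same "
        "story, keep only the most informative one and list the indices of "
        "the duplicates to drop.\n" + listing
    )
```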

6. Output and Display

The project ultimately generates two JSON files:

The server uses Nginx to host static files, effectively acting as a regularly updated API server. The front-end pages (the Demos linked at the start) render data by requesting this API.
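A static-file setup like that needs very little Nginx configuration. The fragment below is a minimal sketch under assumed paths; the document root and file layout are hypothetical.

```nginx
# Illustrative Nginx fragment: serve regularly regenerated JSON files as a
# de facto API. Paths are assumptions, not the project's actual layout.
server {
    listen 80;
    root /var/www/news;

    location ~ \.json$ {
        default_type application/json;
        add_header Cache-Control "no-cache";  # clients revalidate on each poll
    }
}
```

`no-cache` forces the front end to revalidate on every request, so readers always see the latest regenerated feed.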

Summary

This project essentially leverages the semantic understanding of modern LLMs to build a Single-User Recommendation System. Its flow perfectly mirrors the four classic stages of recommendation systems: Recall (RSS) → Coarse Ranking (Flash-Lite) → Fine Ranking (Flash-Preview) → Re-ranking (Gravity Sort).

Compared to traditional recommendation systems, this approach has distinct characteristics:

If you also wish to escape the algorithmic “black box” and build a personalized information feed in a controllable, transparent way, the ideas behind this project might provide some inspiration.

