I’m sharing an open-source project out of our lab called InkSight (code-named inco). Like many of us, our team found ourselves constantly distracted by notifications, glowing monitors, and endless doomscrolling on our phones. We wanted a way to consume high-quality, low-frequency information (like a Stoic quote, a minimalist daily briefing, or a quick recipe) without the cognitive load of a traditional screen.
So, we built InkSight—an open-source "slow tech" desktop companion. It uses an ESP32-C3 and an e-ink display to fetch customized LLM-generated content.
The Tech Stack & Architecture:
Hardware: ESP32-C3, with firmware written in C/C++ (Arduino framework). It supports common 2.13" / 1.54" e-ink panels.
Backend: Python & FastAPI. It acts as the brain, parsing user-defined JSON prompt templates and calling any OpenAI-compatible LLM (OpenAI, DeepSeek, Kimi, etc.).
Web Dashboard: Pure HTML/JS/CSS for easy configuration without digging into the code.
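To make the "user-defined JSON prompt templates" idea concrete, here is a minimal sketch of how the backend might parse one and turn it into an OpenAI-style message list. The field names (`name`, `system`, `user`, `params`) are assumptions for illustration, not the project's actual schema.

```python
import json

# Hypothetical template a user might save from the web dashboard.
template_json = """
{
  "name": "stoic_quote",
  "system": "You are a concise assistant for a small e-ink display.",
  "user": "Give one short Stoic quote with its author, under {max_chars} characters.",
  "params": {"max_chars": 120}
}
"""

def render_template(raw: str) -> dict:
    """Parse a template and substitute its params into the user prompt."""
    t = json.loads(raw)
    return {
        "name": t["name"],
        "messages": [
            {"role": "system", "content": t["system"]},
            {"role": "user", "content": t["user"].format(**t.get("params", {}))},
        ],
    }

payload = render_template(template_json)
print(payload["messages"][1]["content"])
```

The resulting `messages` list is what makes the backend provider-agnostic: it is the common request shape accepted by any OpenAI-compatible API.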
What we think HN might find interesting (The Optimizations):
Instead of the ESP32 waking up, making an API call to the LLM, and waiting 5-10 seconds for a response (which drains the LiPo battery significantly), we built a smart caching layer in the backend.
The backend pre-generates and caches the content. When the ESP32 wakes up, it fetches the cached payload in under a second, updates the e-ink screen, and immediately goes back to deep sleep. This allows it to run for 3 to 6 months on a single charge and stay resilient against temporary network drops.
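A minimal sketch of this caching pattern, kept independent of FastAPI: a scheduler refreshes the content out of band, and the device-facing read path never touches the LLM, so a slow or failed LLM call can't keep the ESP32 awake. The class and method names here are illustrative, not the project's actual API.

```python
import time

class ContentCache:
    """Serve pre-generated content instantly; refresh it out of band."""

    def __init__(self, generate):
        self._generate = generate   # the slow call, e.g. an LLM request
        self._payload = None
        self._updated_at = None

    def refresh(self):
        """Run by a server-side scheduler, never by the device request path."""
        try:
            self._payload = self._generate()
            self._updated_at = time.time()
        except Exception:
            pass  # on LLM/network failure, keep serving the last good payload

    def get(self):
        """The device-facing read: no LLM call, returns immediately."""
        return self._payload


cache = ContentCache(generate=lambda: "Amor fati. - Marcus Aurelius")
cache.refresh()      # pre-generation, done ahead of the device's wake-up
print(cache.get())   # what the ESP32 would fetch before sleeping again
```

Swallowing the exception in `refresh()` is what gives the stated resilience to temporary network drops: the device keeps receiving the last good payload until the next successful refresh.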
Deployment:
We wanted to make it accessible to everyone, so the backend can be 1-click deployed to Vercel (for free), and the hardware uses a Captive Portal for zero-code WiFi and API configuration. Of course, it’s completely self-hostable if you prefer to run it locally.
I’d love to hear your thoughts on the "slow tech" philosophy, any feedback on the architecture, or ideas for new content templates! I'll be hanging around the thread to answer any questions.
Thanks for the insight. We definitely plan to cover these patterns in future updates. Please excuse a slight delay as our team is currently celebrating the Chinese New Year. We'll be back to shipping code right after the holidays. OWO
Huge thanks, dang! I really appreciate you rescuing the post from the filter and switching the URL to the English version. And thanks for pinning the context comment; it helps a lot since the project is quite extensive. We're thrilled it struck a chord.
Thanks for the feedback! I've flagged this for the team member working on that section. We are taking a short break for the Chinese New Year, so updates might be a bit slower than usual. QAQ
That's a great point. I completely agree—'Data Engineering for LLMs' is much more accurate given the content. I'll pass this feedback on to the project lead immediately. Thanks for the suggestion.
Let me clarify the team structure to avoid any misunderstanding.
We are actually three first-year Master's students. This project is indeed a summary of our learning from this past semester, which we rushed to wrap up right before the Chinese New Year break.
When I mentioned 'Project Lead,' I was referring to a senior PhD candidate in our lab. He acts as a mentor to review our code and ensure quality control, but the learning and implementation are very much ours. And yes, to move fast and polish the English, we did utilize LLMs during the writing process.
No, I think it gives the impression that the author is using an LLM without much supervision to not only write the submitted content, but to reply to posts here.
I promise there is a human behind the keyboard! QAQ
My English is not good, so I use GPT to help translate and polish my replies to be polite. Maybe it made them sound too robotic. I am reading every comment myself. Sorry for the wrong impression.
Yes, you are right. We are a team from China and used GPT to help with the English translation. We didn't realize it came across as 'fake warmth.' We appreciate the feedback and will work on making the tone more neutral and concise.
Thanks for the support! We believe that code and engineering challenges are universal languages.
We are pleasantly surprised by the warm reception. We know the project (and our English localization) is still a Work in Progress, but we are committed to improving it to meet the high standards of the HN community. We'll keep shipping updates!
Thanks for the heads-up! We noticed that discrepancy as well and have just updated the README_en.md with the correct English diagram. It should be displaying correctly now.
Repo: https://github.com/datascale-ai/inksight