
Ah, it seems that Tabby does use RAG and context providers for code completion. Interesting:

> During LLM inference, this context is utilized for code completion

Hmmm... I wonder which approach is better. As I'm coding, I jump around and search the relevant parts of the codebase to build up my own context for solving the problem, and I expect that's likely better than RAG. Llama.vim seems to follow this model, while Tabby could theoretically surface things I'm not looking at or haven't looked at recently...
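To make the contrast concrete, here's a minimal sketch of what RAG-style retrieval for code completion might look like: rank repository snippets by similarity to the code being completed and prepend the best matches to the prompt. Everything here is invented for illustration (the toy corpus, the bag-of-words similarity standing in for real embeddings, the prompt format); Tabby's actual pipeline may differ.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Crude tokenizer: identifiers and words only.
    return re.findall(r"[A-Za-z_]+", text.lower())

def cosine(a, b):
    # Cosine similarity over token-count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, corpus, k=2):
    # Return the top-k snippets most similar to the query.
    q = Counter(tokenize(query))
    scored = [(cosine(q, Counter(tokenize(doc))), doc) for doc in corpus]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

# Hypothetical repository snippets.
corpus = [
    "def parse_config(path): return json.load(open(path))",
    "def render_template(name, ctx): ...",
    "def load_config_defaults(): return {'debug': False}",
]

# The code around the cursor supplies the retrieval query.
prefix = "def get_config(path):\n    cfg = parse_config"
context = retrieve(prefix, corpus)

# Retrieved snippets are prepended to the completion prompt.
prompt = "\n".join("# context: " + c for c in context) + "\n" + prefix
print(prompt)
```

The key difference from the "follow what I'm looking at" model is in where `corpus` comes from: here it could span the whole repository, whereas llama.vim-style local context only covers recently viewed chunks.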


