Infinite Context LLMs: Going Beyond RAG with Extended Minds
research
In this blog we discuss how the transformer architecture naturally extends over external memories, and share empirical results that leverage this capability to succeed where retrieval-augmented generation (RAG) has struggled. These methods are innate to the architecture (no fine-tuning required) and outperform popular RAG approaches.
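To give a flavor of what "extending over external memories" means, below is a minimal sketch, not the authors' implementation: each query attends over its local key/value cache plus the top-k most similar key/value pairs retrieved from an external memory. The function name `extended_attention`, the shapes, and the `top_k` parameter are illustrative assumptions.

```python
# Hedged sketch of extended-mind-style attention: local context plus
# retrieved external memories, attended over jointly (no fine-tuning).
import torch
import torch.nn.functional as F

def extended_attention(q, k, v, mem_k, mem_v, top_k=4):
    """q, k, v: (T, d) local queries/keys/values; mem_k, mem_v: (M, d) external memories."""
    T, d = q.shape
    # For each query, select the top_k external memories by dot-product similarity.
    sims = q @ mem_k.T                               # (T, M)
    idx = sims.topk(top_k, dim=-1).indices           # (T, top_k)
    sel_k, sel_v = mem_k[idx], mem_v[idx]            # (T, top_k, d)
    # Concatenate retrieved memories with the local keys/values for every query.
    k_all = torch.cat([sel_k, k.unsqueeze(0).expand(T, -1, -1)], dim=1)  # (T, top_k + T, d)
    v_all = torch.cat([sel_v, v.unsqueeze(0).expand(T, -1, -1)], dim=1)
    # Standard softmax attention over the extended key/value set
    # (causal masking over the local portion is omitted for brevity).
    scores = (q.unsqueeze(1) @ k_all.transpose(1, 2)).squeeze(1) / d**0.5  # (T, top_k + T)
    return (F.softmax(scores, dim=-1).unsqueeze(1) @ v_all).squeeze(1)     # (T, d)
```

Because the retrieved memories are simply additional keys and values, the pretrained attention weights can use them without any retraining, which is the sense in which the capability is innate to the architecture.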