Chain AI Library

Chain AI is a streamlined micro-framework that simplifies LLM application development by providing modular, swappable components for RAG pipelines. It avoids much of the complexity of heavier frameworks such as LangChain while maintaining flexibility through type-safe, GPU-accelerated document processing.

Built With

  • Python

Technical Breakdown

Chain AI eliminates boilerplate with high-level convenience functions that create complete RAG pipelines from files or directories.

  • File-based RAG: Direct pipeline creation from document paths with automatic processing.
  • Directory Ingestion: Batch processing with configurable file type filtering.
  • Smart Defaults: Optimized chunk sizes and retrieval parameters out-of-the-box.
  • Interactive Chat: Built-in chat interface for immediate testing and validation.
# Create RAG from files
from chain.rag_runner import create_rag_from_files

rag = create_rag_from_files(
    file_paths=["manual.txt", "README.md"],
    system_prompt="You are a documentation assistant.",
    chunk_size=500,
    retrieval_k=3
)
rag.run_chat()

# Process entire directories
from chain.rag_runner import create_rag_from_directory

rag = create_rag_from_directory(
    directory="./src",
    file_extensions=['.py', '.md'],
    system_prompt="You are a code assistant."
)