A generalized information-seeking agent system with Large Language Models (LLMs).
[ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
[NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
The official PyVisionAI repository.
MVP of an idea using multiple local LLM models to simulate and play D&D
Fenix, an AI trading bot built with CrewAI and Ollama.
A local chatbot for managing docs
GPU-accelerated LLaMA inference wrapper for legacy Vulkan-capable systems: a Pythonic way to run AI with knowledge (ilm) on fire (Vulkan).
Demo project showcasing Gemma3 function calling capabilities using Ollama. Enables automatic web searches via Serper.dev for up-to-date information and features an interactive Gradio chat interface.
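
A minimal sketch of that function-calling loop, assuming the generic tools interface of the ollama-python client and the real Serper.dev search endpoint. The helper names here are illustrative, and the actual demo may use a prompt-based tool scheme rather than the client's native `tools` parameter:

```python
# Sketch: Gemma3 function calling via Ollama with a Serper.dev web search tool.
# Assumes a recent ollama-python client and a SERPER_API_KEY env var (hypothetical setup).
import os

import ollama
import requests

SERPER_URL = "https://google.serper.dev/search"


def web_search(query: str) -> str:
    """Query Serper.dev and return the top organic snippets as text."""
    resp = requests.post(
        SERPER_URL,
        headers={"X-API-KEY": os.environ["SERPER_API_KEY"]},
        json={"q": query},
        timeout=30,
    )
    resp.raise_for_status()
    hits = resp.json().get("organic", [])[:3]
    return "\n".join(f"{h.get('title', '')}: {h.get('snippet', '')}" for h in hits)


# OpenAI-style function schema, as Ollama's tools interface expects.
tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for up-to-date information.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "Who won the last F1 race?"}]
response = ollama.chat(model="gemma3", messages=messages, tools=tools)

# Two-pass pattern: if the model requested the tool, run it,
# feed the result back as a "tool" message, and ask again.
if response.message.tool_calls:
    messages.append(response.message)
    for call in response.message.tool_calls:
        if call.function.name == "web_search":
            result = web_search(call.function.arguments["query"])
            messages.append(
                {"role": "tool", "content": result, "name": call.function.name}
            )
    response = ollama.chat(model="gemma3", messages=messages)

print(response.message.content)
```

Feeding the tool output back and calling chat a second time is the standard two-pass pattern in Ollama's own tool-calling examples.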
Open WebUI Tools for Google Services: Manage contacts, send emails, and schedule Google Meet meetings directly from your AI assistant using Google APIs.
Open WebUI Tool Template & Skeleton: A starting point and guide for developing custom tools to extend your AI assistant's capabilities in Open WebUI. Includes examples and best practices.
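
For illustration, a minimal tool file in the shape Open WebUI documents: a `Tools` class, a nested pydantic `Valves` model for user-configurable settings, and typed, docstringed methods that the UI turns into a tool spec. The template repo's own naming and examples may differ:

```python
"""
title: Example Tool
author: you
version: 0.1.0
"""
from datetime import datetime, timezone

from pydantic import BaseModel, Field


class Tools:
    class Valves(BaseModel):
        # User-configurable settings surfaced in the Open WebUI admin panel.
        timezone_note: str = Field(
            "UTC", description="Label appended to returned timestamps."
        )

    def __init__(self):
        self.valves = self.Valves()

    def get_current_time(self) -> str:
        """
        Get the current date and time. The docstring and type hints are
        what Open WebUI uses to describe the tool to the model.
        """
        now = datetime.now(timezone.utc).isoformat()
        return f"{now} ({self.valves.timezone_note})"
```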
An automated pipeline to convert e-books into beautifully illustrated scenes using local AI backends ComfyUI and SD Forge.
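
A sketch of the text side of such a pipeline, under assumed details (chunk size, prompt heuristic, and sample text are all illustrative); each generated prompt would then be submitted to ComfyUI's or SD Forge's local HTTP API for rendering:

```python
# Sketch: split an e-book into scene-sized chunks and build image prompts.
def split_into_scenes(book_text: str, max_chars: int = 1200) -> list[str]:
    """Greedy paragraph-based chunking into roughly scene-sized pieces."""
    scenes, current = [], ""
    for para in book_text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            scenes.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        scenes.append(current.strip())
    return scenes


def scene_to_prompt(scene: str) -> str:
    """Compress a scene into a short illustration prompt (heuristic stub)."""
    first_sentence = scene.split(".")[0][:200]
    return f"storybook illustration, detailed, {first_sentence}"


book = "The ship drifted into the harbor at dawn.\n\nA storm rose in the east."
for scene in split_into_scenes(book):
    prompt = scene_to_prompt(scene)
    # Here the pipeline would POST `prompt` to a local ComfyUI or SD Forge endpoint.
    print(prompt)
```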
TinyRag is a minimal Python library for retrieval-augmented generation. It offers easy document ingestion, automatic text extraction, embedding generation, and retrieval with vector stores. Designed for quick setup and flexible provider configuration, TinyRag enables fast, contextual responses from language models.
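
TinyRag's actual API isn't shown here, but the ingest → embed → retrieve → generate loop it describes looks roughly like this generic sketch (the hashed bag-of-words embedding is a stand-in for a real provider-backed embedding model):

```python
# Generic RAG loop: ingest chunks, embed them, retrieve nearest neighbors,
# then build a context-grounded prompt for a local model.
import math
import re
from collections import Counter


def embed(text: str, dims: int = 256) -> list[float]:
    """Toy hashed bag-of-words embedding (stand-in for a real model)."""
    vec = [0.0] * dims
    for token, count in Counter(re.findall(r"\w+", text.lower())).items():
        vec[hash(token) % dims] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))


class VectorStore:
    def __init__(self):
        self.items: list[tuple[list[float], str]] = []

    def ingest(self, chunks: list[str]) -> None:
        self.items += [(embed(c), c) for c in chunks]

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]


store = VectorStore()
store.ingest([
    "Vulkan is a graphics and compute API.",
    "Ollama serves local LLMs over HTTP.",
    "RAG grounds answers in retrieved context.",
])
question = "How do I run a local LLM?"
context = "\n".join(store.retrieve(question))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
# `prompt` would then go to any local model, e.g. via ollama.generate().
print(prompt)
```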