- Oracle | xIBMers
- Ahmedabad
- https://medium.com/ml-research-lab
- in/ashishpatel2604
- https://huggingface.co/ashishpatel26/
Starred repositories
Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
Define and run multi-container applications with Docker
Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of … (a short query sketch follows after this list)
The open source ELT framework powered by Apache Arrow
LLocalSearch is a completely locally running search aggregator using LLM Agents. The user can ask a question and the system will use a chain of LLMs to find the answer. The user can see the progres…
A comprehensive, up-to-date collection of information about several thousand (!) crypto tokens.
Manage multiple AI terminal agents like Claude Code, Aider, Codex, OpenCode, and Amp.
An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
Artificial Intelligence Infrastructure-as-Code Generator.
🔒 Enterprise-grade API gateway that helps you monitor and impose cost or rate limits per API key. Get fine-grained access control and monitoring per user, application, or environment. Supports Open…
The open source, end-to-end computer vision platform. Label, build, train, tune, deploy and automate in a unified platform that runs on any cloud and on-premises.
🕵️‍♂️ Library designed for developers eager to explore the potential of Large Language Models (LLMs) and other generative AI through a clean, effective, and Go-idiomatic approach.
Autoscale LLM (vLLM, SGLang, LMDeploy) inference on Kubernetes (and others)
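As a quick illustration of the "vector search combined with structured filtering" capability mentioned in the Weaviate entry above, here is a minimal sketch using the Weaviate v4 Python client. The `Article` collection, its `category` property, and a locally running instance with a text vectorizer configured are assumptions for illustration, not details from the starred repository itself.

```python
# Minimal sketch: combine semantic (vector) search with a structured filter
# using the Weaviate v4 Python client. Assumes a local Weaviate instance,
# a hypothetical "Article" collection with a "category" property, and a
# text vectorizer module enabled so near_text queries work.
import weaviate
from weaviate.classes.query import Filter

client = weaviate.connect_to_local()  # assumes Weaviate on localhost
try:
    articles = client.collections.get("Article")  # hypothetical collection
    result = articles.query.near_text(
        query="vector databases",  # semantic / vector part of the query
        filters=Filter.by_property("category").equal("engineering"),  # structured filter
        limit=5,
    )
    for obj in result.objects:
        print(obj.properties)
finally:
    client.close()
```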