WiX Toolset 4 and 5 tutorials and samples for authoring custom installation bundles with Burn and .NET
Framework, and for building complete MSI packages or WiX source code from script files written in C# syntax.
Context7 MCP Server -- Up-to-date code documentation for LLMs and AI code editors
An open-source AI agent that brings the power of Gemini directly into your terminal.
5ire is a cross-platform desktop AI assistant and MCP client. It is compatible with major service providers and supports local knowledge bases and tools via Model Context Protocol servers.
An AutoHotkey script for Windows that lets a user switch virtual desktops by pressing CapsLock + <num>.
Model Context Protocol Servers
A collection of extensions for the Windows Command Palette
This app demonstrates the controls available in WinUI and the Fluent Design System.
Microsoft PowerToys is a collection of utilities that help you customize Windows and streamline everyday tasks
Awesome-llm-role-playing-with-persona: a curated list of resources for large language models for role-playing with assigned personas
A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini)
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
There can be more than Notion and Miro. AFFiNE (pronounced [ə'fain]) is a next-gen knowledge base that brings planning, sorting and creating all together. Privacy first, open-source, customizable an…
The Web Data API for AI - Turn entire websites into LLM-ready markdown or structured data 🔥
Generate comic panels using a LLM + SDXL. Powered by Hugging Face 🤗
Accepted as [NeurIPS 2024] Spotlight Presentation Paper
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System
Use an API to call the music generation AI of suno.ai and easily integrate it into agents like GPTs.
Muzic: Music Understanding and Generation with Artificial Intelligence
fastllm is a high-performance LLM inference library with no backend dependencies. It supports both tensor-parallel inference for dense models and mixed-mode inference for MoE models; any GPU with 10 GB+ of VRAM can run the full DeepSeek model. On a dual-socket 9004/9005 server with a single GPU, the original full-precision DeepSeek model runs at 20 tps single-concurrency; the INT4-quantized model reaches 30 tps single-concurrency and 60+ tps under concurrent requests.