diff --git a/.python-version b/.python-version
new file mode 100644
index 0000000..2419ad5
--- /dev/null
+++ b/.python-version
@@ -0,0 +1 @@
+3.11.9
diff --git a/.tool-versions b/.tool-versions
new file mode 100644
index 0000000..8aa451a
--- /dev/null
+++ b/.tool-versions
@@ -0,0 +1 @@
+python 3.11.9
diff --git a/docs/acp/index.md b/docs/acp/index.md
new file mode 100644
index 0000000..99a06fc
--- /dev/null
+++ b/docs/acp/index.md
@@ -0,0 +1,45 @@
+# Agent Client Protocol
+
+**`fast-agent`** has comprehensive support for the Zed Industries [Agent Client Protocol](https://zed.dev/acp). Why use **`fast-agent`**?
+
+- Robust, native LLM Provider infrastructure, with Streaming and Structured Outputs.
+- Comprehensive MCP and Agent Skills support, including Tool Progress Notifications and Sampling.
+- Build custom, multi-agent experiences in a few lines of code.
+- Each Agent or Workflow appears as a "Mode" and transmits workflow events to your ACP Client.
+
+## Getting Started
+
+### No-Install Quick Start
+
+To try it out straight away with your Client, set an API Key environment variable and add:
+
+**Hugging Face**
+
+`export HF_TOKEN=hf_.......`
+
+`uvx fast-agent-mcp@latest serve --transport acp --model <model>`
+
+**OpenAI**
+
+`export OPENAI_API_KEY=......`
+
+`uvx fast-agent-mcp@latest serve --transport acp --model <model>`
+
+**Anthropic**
+
+`export ANTHROPIC_API_KEY=......`
+
+`uvx fast-agent-mcp@latest serve --transport acp --model <model>`
+
+Tip: Use `uvx fast-agent-mcp check` to help diagnose issues.
+
+Note: OAuth keys are stored in your keyring, so `check` may prompt to read the credential store.
+
+### Installing
+
+`uv tool install -U fast-agent-mcp`
+
+Documentation in Progress.
+
+## Shell and File Access
+
+**`fast-agent`** adds the shell tool, plus the read and write tools from the Client, to enable "follow-along" functionality.
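To connect from an ACP client, register the serve command above as a custom agent server. As a sketch, in Zed this goes in `settings.json` under `agent_servers` (the key and field names here follow Zed's external-agent configuration — treat them as assumptions and check the current Zed documentation):

```json
{
  "agent_servers": {
    "fast-agent": {
      "command": "uvx",
      "args": ["fast-agent-mcp@latest", "serve", "--transport", "acp", "--model", "sonnet"],
      "env": { "ANTHROPIC_API_KEY": "sk-ant-..." }
    }
  }
}
```

Zed will then list `fast-agent` as an available agent, with each configured Agent or Workflow appearing as a Mode.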
\ No newline at end of file
diff --git a/docs/agents/instructions.md b/docs/agents/instructions.md
index 48ba389..cc11577 100644
--- a/docs/agents/instructions.md
+++ b/docs/agents/instructions.md
@@ -1,8 +1,52 @@
-# Instructions
+# System Prompts
 
-Agents can have their System Instructions set in a number of flexible ways to make building useful .
+Agents can have their System Instructions set and customised in a number of flexible ways. The default System Prompt caters for Agent Skills, MCP Server Instructions, `AGENTS.md` and Shell access.
 
+## Template Variables
+
+The following variables are available in System Prompt templates:
+
+| Variable | Description | Notes |
+|----------|-------------|-------|
+| `{{file:path}}` | Reads and embeds local file contents (errors if file missing) | **Must be a relative path** (resolved relative to `workspaceRoot`) |
+| `{{file_silent:path}}` | Reads and embeds local file contents (empty if file missing) | **Must be a relative path** (resolved relative to `workspaceRoot`) |
+| `{{url:https://...}}` | Fetches and embeds content from a URL | |
+| `{{serverInstructions}}` | MCP server instructions with available tools | Warning displayed in `/mcp` if Instructions are present and the template variable is missing |
+| `{{agentSkills}}` | Agent skill manifests with descriptions | |
+| `{{workspaceRoot}}` | Current working directory / workspace root | Set by Client in ACP Mode |
+| `{{hostPlatform}}` | Host platform information | |
+| `{{pythonVer}}` | Python version | |
+| `{{env}}` | Formatted environment block with all environment details | |
+| `{{currentDate}}` | Current date in long format | |
+
+**Example `{{env}}` output:**
+
+```
+Environment:
+- Workspace root: /home/user/project
+- Client: Zed 0.232
+- Host platform: Linux-6.6.87.2-microsoft-standard-WSL2
+```
+
+**Note on file templates:** File paths in `{{file:...}}` and `{{file_silent:...}}` must be relative paths. They will be resolved relative to the `workspaceRoot` at runtime. Absolute paths are not allowed and will raise an error.
+
+**Viewing the System Prompt:** The System Prompt can be inspected with the `/system` command from `fast-agent` or the `/status system` Slash Command in ACP Mode.
+
+The default System Prompt used with `fast-agent go` or `fast-agent-acp` is:
+
+```markdown title="Default System Prompt"
+You are a helpful AI Agent.
+
+{{serverInstructions}}
+{{agentSkills}}
+{{file_silent:AGENTS.md}}
+{{env}}
+
+The current date is {{currentDate}}.
+```
+
+## Using Instructions
 
 When defining an Agent, you can load the instruction as either a `String`, `Path` or `AnyUrl`.
@@ -18,8 +62,8 @@ You are a helpful AI Agent.
 ```python title="With current date"
 @fast.agent(name="example", instruction="""
-You are a helpful AI Agent.
-Your reliable knowledge cut-off date is December 2024.
+You are a helpful AI Agent.
+Your reliable knowledge cut-off date is December 2024. Today's date is {{currentDate}}.
 """)
 ```
@@ -29,7 +73,7 @@ Will produce: `You are a helpful AI Agent. Your reliable knowledge cut-off date
 ```python title="With URL"
 @fast.agent(name="mcp-expert", instruction="""
-You are have expert knowledge of the
+You have expert knowledge of the
 MCP (Model Context Protocol) schema.
 
 {{url:https://raw.githubusercontent.com/modelcontextprotocol/modelcontextprotocol/refs/heads/main/schema/2025-06-18/schema.ts}}
@@ -73,7 +117,8 @@ from pydantic import AnyUrl
 You can start an agent with instructions from a file using the `fast-agent` commmand:
 
 ```bash
-fast-agent --instructions=mcp-expert.md
+fast-agent --instructions mcp-expert.md
+fast-agent -i mcp-expert.md
 ```
 
 This can be combined with other options to specify model and available servers:
 
@@ -82,7 +127,7 @@ This can be combined with other options to specify model and available servers:
 ```bash
 fast-agent -i mcp-expert.md --model sonnet --url https://hf.co/mcp
 ```
 
-Starts an interactive agent session, with the MCP Schema loaded, attached to Sonnet with the Hugging Face MCP Server.
+Starts an interactive agent session with the MCP Schema loaded, using Sonnet and the Hugging Face MCP Server.
 
 ![Instructions](instructions.png)
 
@@ -90,4 +135,4 @@ You can even specify multiple models to directly compare their outputs:
 
 ![Instructions Parallel](instructions_parallel.png)
 
-Read more about the `fast-agent` command [here](../ref/go_command.md).
\ No newline at end of file
+Read more about the `fast-agent` command [here](../ref/go_command.md).
diff --git a/docs/agents/skills.md b/docs/agents/skills.md
new file mode 100644
index 0000000..4864f31
--- /dev/null
+++ b/docs/agents/skills.md
@@ -0,0 +1,37 @@
+# Agent Skills
+
+**`fast-agent`** supports Agent Skills, and looks for them in either the `.fast-agent/skills` or `.claude/skills` folder.
+
+When valid SKILL.md files are found:
+
+- The Agent is given access to an `execute` tool for running shell commands, with the working directory set to the skills folder.
+- Skill descriptions from the manifest and path are added to the System Prompt using the `{{agentSkills}}` expansion. A warning is displayed if this is not present in the System Prompt.
+- The `/skills` command lists the available skills.
+
+## Command Line Options
+
+If using **`fast-agent`** interactively from the command line, the `--skills <directory>` switch can be used to specify the location of SKILL.md files.
+
+```bash
+# Specify a skills folder and a model
+fast-agent --skills ~/skill-development/testing/ --model gpt-5-mini.low
+
+# Give fast-agent access to the shell
+fast-agent -x
+```
+
+## Programmatic Usage
+
+Skills directories can be defined on a per-agent basis:
+
+```python
+# Define the agent
+@fast.agent(instruction=default_instruction, skills=["~/source/skills"])
+async def main():
+    # use the --model command line switch or agent arguments to change model
+    async with fast.run() as agent:
+        await agent.interactive()
+```
+
+This allows each individual agent to use a different set of skills if needed.
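For reference, a minimal `SKILL.md` sketch (a hypothetical skill; the YAML frontmatter fields follow the common Agent Skills convention of `name` and `description` - treat the exact schema as an assumption):

```markdown
---
name: changelog-writer
description: Summarise recent git commits into a CHANGELOG entry. Use when the user asks to prepare release notes.
---

# Changelog Writer

1. Collect recent commits with `git log --oneline` via the `execute` tool.
2. Group them into Added / Changed / Fixed sections.
3. Append the entry to `CHANGELOG.md`.
```

Placed at `.fast-agent/skills/changelog-writer/SKILL.md`, the `description` above is what the `{{agentSkills}}` expansion embeds in the System Prompt.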
+
diff --git a/docs/index.md b/docs/index.md
index f72ed97..9f2fdb7 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -13,17 +13,16 @@ hide:
 
     ---
 
-    Install [`fast-agent-mcp`](https://pypi.org/project/fast-agent-mcp/) with [`uv`](https://docs.astral.sh/uv/) and get up
-    and running in minutes
+    Simple installation and setup with [`uv`](https://docs.astral.sh/uv/) to be up and running in minutes. Out-of-the-box examples of Agents, Workflows and MCP Usage.
 
-- :material-battery-charging:{ .lg .middle } __Batteries Included__
+- :material-battery-charging:{ .lg .middle } __Agent Skills__
 
     ---
 
-    Out-of-the box examples of sophisticated Agents, Workflows and advanced MCP Usage.
+    Support for [Agent Skills](https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills) to define context-efficient behaviour for your Agents. Read the documentation [here](./agents/skills.md).
 
 - :material-connection:{ .lg .middle } __New - Elicitation Quickstart Guide__
 
@@ -36,14 +35,14 @@ hide:
 
     ---
 
-    Comprehensive test automation, accelerating delivery and assuring quality
+    Extensive validated [model support](./models/llm_providers/) for Structured Outputs, Tool Calling and Multimodal capabilities
 
 - :material-check-all:{ .lg .middle } __MCP Feature Support__
 
     ---
 
-    First MCP Host to support Tools, Prompts, Resources, Sampling and Roots
+    Full MCP feature support including Elicitation, Sampling and advanced transport diagnostics
 
     [:octicons-arrow-right-24: Reference](mcp/index.md)
diff --git a/docs/mcp/mcp-server.md b/docs/mcp/mcp-server.md
new file mode 100644
index 0000000..f18893d
--- /dev/null
+++ b/docs/mcp/mcp-server.md
@@ -0,0 +1,80 @@
+### Running as an MCP Server
+
+**`fast-agent`** can deploy any configured agents over MCP, letting external MCP clients connect via STDIO, SSE, or HTTP.
+
+Additionally, there is a convenient `serve` command enabling rapid, command line deployment of MCP-enabled agents in a variety of instancing modes.
+
+This feature also works with [Agent Skills](../agents/skills.md), enabling powerful, adaptable behaviours.
+
+#### Using the CLI (fast-agent serve)
+
+```bash
+fast-agent serve [OPTIONS]
+```
+
+Key options:
+
+- `--transport [http|sse|stdio]` (default http)
+- `--port` / `--host` (for HTTP/SSE)
+- `--instance-scope [shared|connection|request]` – choose how agent state is isolated
+  - `shared` (default) – reuses a single agent for all clients
+  - `connection` (sessions) – creates one Agent per MCP session (separate history per client)
+  - `request` (stateless) – creates a new Agent for every tool call and disables MCP Sessions
+- `--description` – customise the MCP tool description (supports `{agent}` placeholder)
+
+Standard CLI flags also apply (e.g. `--config-path`, `--model`, `--servers`, `--stdio`, `--quiet`). This allows **`fast-agent`** to serve any existing MCP Server in "Agent Mode", use custom system prompts and so on.
+
+Examples:
+
+```bash
+fast-agent serve \
+--url https://huggingface.co/mcp \
+--instance-scope connection \
+--description "Interact with the {agent} workflow" \
+--model haiku
+```
+
+This starts a Streamable HTTP MCP Server on port 8000, providing access to an Agent connected to the Hugging Face MCP Server using Anthropic Haiku.
+
+```bash
+fast-agent serve \
+--npx @modelcontextprotocol/server-everything \
+--instance-scope request \
+--description "Ask me anything!" \
+-i system_prompt.md \
+--model kimi
+```
+
+This starts a Streamable HTTP MCP Server on port 8000, providing agent access to the STDIO version of the "Everything Server" with a custom system prompt.
+
+#### Running an agent
+
+If you already have an agent module or workflow (e.g. the generated agent.py), you can start it as a server directly:
+
+```bash
+uv run agent.py --server [OPTIONS]
+```
+
+The embedded CLI parser supports the same server flags as the serve command:
+
+- `--transport`, `--host`, `--port`
+- `--instance-scope [shared|connection|request]`
+- `--description` (tool instructions)
+- `--quiet`, `--model`, and other agent startup options
+
+Example:
+
+```bash
+uv run agent.py \
+--server \
+--transport http \
+--port 8723 \
+--instance-scope request
+```
+
+Both approaches initialise FastAgent with the same config and skill-loading pipeline; choose whichever fits your workflow (one-off CLI invocation vs. packaging an agent as a reusable script).
diff --git a/docs/models/check.png b/docs/models/check.png
index 6ee4ced..ff39b93 100644
Binary files a/docs/models/check.png and b/docs/models/check.png differ
diff --git a/docs/models/llm_providers.md b/docs/models/llm_providers.md
index cd78d8e..7ba22b1 100644
--- a/docs/models/llm_providers.md
+++ b/docs/models/llm_providers.md
@@ -82,6 +82,89 @@ openai:
 | `gpt-5` | `gpt-5` | `gpt-5-mini` | `gpt-5-mini` |
 | `gpt-5-nano` | `gpt-5-nano` | | |
 
+
+## Hugging Face
+
+Use models via [Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers/en/index).
+
+```yaml
+hf:
+  api_key: "${HF_TOKEN}"
+  base_url: "https://router.huggingface.co/v1" # Default
+  default_provider: # Optional: groq, fireworks-ai, cerebras, etc.
+```
+
+**Environment Variables:**
+
+- `HF_TOKEN` - Hugging Face authentication token (required)
+- `HF_DEFAULT_PROVIDER` - Default inference provider (optional)
+
+### Model Syntax
+
+Use `hf.<model>[:provider]` to specify models. If no provider is specified, the model is auto-routed.
+
+**Examples:**
+
+```bash
+# Auto-routed
+fast-agent --model hf.openai/gpt-oss-120b
+fast-agent --model hf.moonshotai/kimi-k2-instruct-0905
+
+# Explicit provider
+fast-agent --model hf.moonshotai/kimi-k2-instruct-0905:groq
+fast-agent --model hf.deepseek-ai/deepseek-v3.1:fireworks-ai
+```
+
+### Model Aliases
+
+Aliased models are verified and tested to work with Structured Outputs and Tool Use. Functionality may vary between providers, or be limited in some situations.
+
+| Alias | Maps to |
+| ------------- | ----------------------------------------- |
+| `kimithink` | `hf.moonshotai/Kimi-K2-Thinking:together` |
+| `kimi` | `hf.moonshotai/Kimi-K2-Instruct-0905` |
+| `gpt-oss` | `hf.openai/gpt-oss-120b` |
+| `gpt-oss-20b` | `hf.openai/gpt-oss-20b` |
+| `glm` | `hf.zai-org/GLM-4.6` |
+| `qwen3` | `hf.Qwen/Qwen3-Next-80B-A3B-Instruct` |
+| `deepseek31` | `hf.deepseek-ai/DeepSeek-V3.1` |
+| `minimax` | `hf.MiniMaxAI/MiniMax-M2` |
+
+**Using Aliases:**
+
+```bash
+fast-agent --model kimi
+fast-agent --model deepseek31
+fast-agent --model kimi:together # provider can be specified with alias
+```
+
+### MCP Server Connections
+
+`HF_TOKEN` is **automatically** applied when connecting to Hugging Face MCP servers.
+
+**Supported domains:**
+
+- `hf.co` / `huggingface.co` - Uses `Authorization: Bearer {HF_TOKEN}`
+- `*.hf.space` - Uses `X-HF-Authorization: Bearer {HF_TOKEN}`
+
+**Examples:**
+
+```yaml
+# fastagent.config.yaml
+mcp:
+  servers:
+    huggingface:
+      url: "https://huggingface.co/mcp"
+      # HF_TOKEN automatically applied!
+```
+
+```bash
+# Command line - HF_TOKEN automatically applied
+fast-agent --model kimi --url https://hf.co/mcp
+fast-agent --url https://my-space.hf.space/mcp
+```
+
 ## Azure OpenAI
 
 ### ⚠️ Check Model and Feature Availability by Region
@@ -179,9 +262,7 @@ groq:
 | Model Alias | Maps to |
 | ----------- | -------------------------- |
-| `kimi` | `moonshotai/kimi-k2-instruct` |
-| `gpt-oss` | `openai/gpt-oss-120b` |
-| `gpt-oss-20b` | `openai/gpt-oss-20b` |
+| `kimigroq` | `moonshotai/kimi-k2-instruct` |
 
 ## DeepSeek
diff --git a/docs/ref/go_command.md b/docs/ref/go_command.md
index f6e51cf..2b91ff2 100644
--- a/docs/ref/go_command.md
+++ b/docs/ref/go_command.md
@@ -33,10 +33,18 @@ fast-agent go [OPTIONS]
 
 ### Examples
 
+Note: you may omit `go` when supplying command line options.
+
 ```bash
 # Basic usage with interactive mode
 fast-agent go --model=haiku
 
+# Basic usage with interactive mode (go omitted)
+fast-agent --model haiku
+
+# Send commands to different LLMs in parallel
+fast-agent --model kimi,gpt-5-mini.low
+
 # Specifying servers from configuration
 fast-agent go --servers=fetch,filesystem --model=haiku
 
@@ -46,11 +54,24 @@ fast-agent go --url=http://localhost:8001/mcp,http://api.example.com/sse
 
 # Connecting to an authenticated API endpoint
 fast-agent go --url=https://api.example.com/mcp --auth=YOUR_API_TOKEN
 
+# Run an NPX package directly
+fast-agent --npx @modelcontextprotocol/server-everything
+
 # Non-interactive mode with a single message
 fast-agent go --message="What is the weather today?" --model=haiku
 
 # Using a prompt file
 fast-agent go --prompt-file=my-prompt.txt --model=haiku
+
+# Specify a system prompt file
+fast-agent go -i my_system_prompt.md
+
+# Specify a skills directory
+fast-agent --skills ~/my-skills/
+
+# Provide the LLM with shell access (use at your own risk)
+fast-agent -x
+
 ```
 
 ### URL Connection Details
diff --git a/mkdocs.yml b/mkdocs.yml
index 6d0d34b..07570eb 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -89,13 +89,17 @@ nav:
       - agents/defining.md
       - agents/running.md
       - agents/prompting.md
-      - agents/instructions.md
+      - System Prompts: agents/instructions.md
+      - agents/skills.md
   - Models:
       - Model Features: models/index.md
       - LLM Providers: models/llm_providers.md
       - Internal Models: models/internal_models.md
+  - ACP:
+      - Using as an Agent: acp/index.md
   - MCP:
       - Configuring Servers: mcp/index.md
+      - Deploying as an MCP Server: mcp/mcp-server.md
       - Inspecting Servers: mcp/mcp_display.md
       - Quickstart - Elicitations: mcp/elicitations.md
       - Quickstart - State Transfer: mcp/state_transfer.md