CodeRunner is an MCP (Model Context Protocol) server that executes AI-generated code in a sandboxed environment on your Mac using Apple's native containers.
Key use case: Process your local files (videos, images, documents, data) with remote LLMs like Claude or ChatGPT without uploading your files to the cloud. The LLM generates Python code or bash scripts that run locally on your machine to analyze, transform, or process your files.
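For example, a prompt like "count the words in my report" might lead the LLM to generate a short script along these lines, which CodeRunner then executes locally (an illustrative sketch; the file name `report.txt` and its contents are hypothetical):

```python
# Illustrative example of the kind of LLM-generated code CodeRunner runs
# locally. The file "report.txt" is created here just for demonstration.
from collections import Counter
from pathlib import Path

path = Path("report.txt")
path.write_text("local files stay local with local processing")  # sample input

words = path.read_text().split()
counts = Counter(words)
print(f"{len(words)} words, most common: {counts.most_common(1)}")
# → 7 words, most common: [('local', 3)]
```

The file never leaves your machine; only the code and the printed result travel between the LLM and the sandbox.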
| Without CodeRunner | With CodeRunner |
|---|---|
| LLM writes code, you run it manually | LLM writes and executes code, returns results |
| Upload files to cloud for AI processing | Files stay on your machine, processed locally |
| Install tools and dependencies yourself | Tools available in sandbox, auto-installs others |
| Copy/paste scripts to run elsewhere | Code runs immediately, shows output/files |
| LLM analyzes text descriptions of files | LLM directly processes your actual files |
| Manage Python environments and packages | Pre-configured environment ready to use |
| Limited to one programming language | Supports both Python and Bash execution |
Prerequisites: a Mac with Apple Silicon (M1/M2/M3/M4) running macOS, and Python 3.10+

```bash
git clone https://github.com/BandarLabs/coderunner.git
cd coderunner
chmod +x install.sh
sudo ./install.sh
```
The MCP server will be available at: `http://coderunner.local:8222/sse`
Install the required packages (use a virtualenv and note the Python path):

```bash
pip install -r examples/requirements.txt
```
Configure Claude Desktop to use CodeRunner as an MCP server:
- Copy the example configuration:

  ```bash
  cd examples
  cp claude_desktop/claude_desktop_config.example.json claude_desktop/claude_desktop_config.json
  ```

- Edit the configuration file and replace the placeholder paths:

  - Replace `/path/to/your/python` with your actual Python path (e.g., `/usr/bin/python3` or `/opt/homebrew/bin/python3`)
  - Replace `/path/to/coderunner` with the actual path to your cloned repository

  Example after editing:

  ```json
  {
    "mcpServers": {
      "coderunner": {
        "command": "/opt/homebrew/bin/python3",
        "args": ["/Users/yourname/coderunner/examples/claude_desktop/mcpproxy.py"]
      }
    }
  }
  ```
- Update Claude Desktop configuration:

  - Open Claude Desktop
  - Go to Settings → Developer
  - Add the MCP server configuration
  - Restart Claude Desktop

- Start using CodeRunner in Claude: you can now ask Claude to execute code, and it will run safely in the sandbox!
Use CodeRunner with OpenAI's Python agents library:
- Set your OpenAI API key:

  ```bash
  export OPENAI_API_KEY="your-openai-api-key-here"
  ```

- Run the client:

  ```bash
  python examples/openai_agents/openai_client.py
  ```

- Start coding: enter prompts like "write python code to generate 100 prime numbers" and watch it execute safely in the sandbox!
Gemini CLI was recently launched by Google. To use CodeRunner with it, add the server to `~/.gemini/settings.json`:
```json
{
  "theme": "Default",
  "selectedAuthType": "oauth-personal",
  "mcpServers": {
    "coderunner": {
      "url": "http://coderunner.local:8222/sse"
    }
  }
}
```
Code runs in an isolated container with VM-level isolation. Your host system and files outside the sandbox remain protected.
From @apple/container:

> Each container has the isolation properties of a full VM, using a minimal set of core utilities and dynamic libraries to reduce resource utilization and attack surface.
CodeRunner consists of:
- Sandbox Container: Isolated execution environment with Python and Bash Jupyter kernels
- MCP Server: Handles communication between AI models and the sandbox
- Multi-Kernel Support: Automatically routes Python and Bash code to appropriate kernels
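The routing step can be pictured as a simple dispatch on the requested language. This is a minimal sketch for illustration only: the real server manages persistent Jupyter kernels over WebSockets, while the "kernels" here are stand-in subprocess calls.

```python
# Minimal sketch of multi-kernel routing: each language maps to its own
# executor. (Illustrative only; not the project's actual implementation.)
import subprocess
import sys

def run_python(code: str) -> str:
    # Run Python code in a subprocess and capture its stdout.
    return subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True).stdout

def run_bash(code: str) -> str:
    # Run a Bash snippet and capture its stdout.
    return subprocess.run(["bash", "-c", code],
                          capture_output=True, text=True).stdout

KERNELS = {"python": run_python, "bash": run_bash}

def execute(language: str, code: str) -> str:
    try:
        kernel = KERNELS[language]
    except KeyError:
        raise ValueError(f"unsupported language: {language}")
    return kernel(code)

print(execute("python", "print(1 + 1)"))  # prints "2"
print(execute("bash", "echo hello"))      # prints "hello"
```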
The `examples/` directory contains:

- `openai_agents/` - Example OpenAI agents integration
- `claude_desktop/` - Example Claude Desktop integration
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set up configuration:

  ```bash
  cp .env.example .env
  # Edit .env with your preferred settings
  ```

- Run tests:

  ```bash
  python -m pytest tests/ -v
  ```

- Run the server:

  ```bash
  python server.py
  ```
CodeRunner provides the following MCP tools for AI models:
- `execute_python_code` - Execute Python code in a persistent Jupyter kernel

  ```python
  execute_python_code(command="print('Hello, World!')")
  ```

- `execute_bash_code` - Execute Bash commands in a persistent Jupyter Bash kernel

  ```python
  execute_bash_code(command="ls -la && echo 'Directory listing complete'")
  ```

- `get_kernel_status` - Check the status of available kernels

  ```python
  get_kernel_status()
  ```
Python Code Execution:

```python
# Data analysis
execute_python_code("""
import pandas as pd
import matplotlib.pyplot as plt

# Create sample data
data = {'x': [1, 2, 3, 4, 5], 'y': [2, 4, 6, 8, 10]}
df = pd.DataFrame(data)
print(df.describe())
""")
```
Bash Script Execution:

```python
# File operations
execute_bash_code("""
# Create directory structure
mkdir -p /tmp/test_dir
cd /tmp/test_dir

# Create files
echo "Hello World" > hello.txt
echo "Goodbye World" > goodbye.txt

# List files with details
ls -la
""")
```
Combined Usage:

```python
# Use bash to prepare data, then Python to analyze
execute_bash_code("curl -o data.csv https://example.com/data.csv")
execute_python_code("""
import pandas as pd

df = pd.read_csv('data.csv')
print(df.head())
""")
```
CodeRunner can be configured via environment variables with the `CODERUNNER_` prefix for consistency across all components (Python application, Docker container, and entrypoint script). See `.env.example` for available options:
- `CODERUNNER_JUPYTER_HOST`: Jupyter server host (default: `127.0.0.1`)
- `CODERUNNER_JUPYTER_PORT`: Jupyter server port (default: `8888`)
- `CODERUNNER_FASTMCP_HOST`: FastMCP server host (default: `0.0.0.0`)
- `CODERUNNER_FASTMCP_PORT`: FastMCP server port (default: `8222`)
- `CODERUNNER_EXECUTION_TIMEOUT`: Code execution timeout in seconds (default: `300`)
- `CODERUNNER_LOG_LEVEL`: Logging level (default: `INFO`)
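A config module following this scheme might read each variable with a fallback default, along these lines (a sketch of the pattern, not the project's actual `config.py`):

```python
# Sketch of CODERUNNER_-prefixed configuration with sensible defaults.
# (Illustrative; the real config.py may differ.)
import os

def get_setting(name: str, default: str) -> str:
    """Read CODERUNNER_<NAME> from the environment, falling back to default."""
    return os.environ.get(f"CODERUNNER_{name}", default)

JUPYTER_HOST = get_setting("JUPYTER_HOST", "127.0.0.1")
JUPYTER_PORT = int(get_setting("JUPYTER_PORT", "8888"))
EXECUTION_TIMEOUT = int(get_setting("EXECUTION_TIMEOUT", "300"))

print(JUPYTER_HOST, JUPYTER_PORT, EXECUTION_TIMEOUT)
```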
Run the test suite:

```bash
# Run all tests
python -m pytest tests/

# Run specific test files
python -m pytest tests/test_config.py -v

# Run tests with coverage (if installed)
python -m pytest tests/ --cov=. --cov-report=html
```
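New tests follow the same pytest conventions; a unit test for an environment-driven setting might look like this (illustrative only: `get_setting` is a hypothetical helper defined inline so the example is self-contained, and `monkeypatch` is pytest's built-in fixture):

```python
# Illustrative pytest unit test for a CODERUNNER_-prefixed setting.
# get_setting is a hypothetical helper, defined here for self-containment.
import os

def get_setting(name: str, default: str) -> str:
    return os.environ.get(f"CODERUNNER_{name}", default)

def test_default_used_when_env_missing(monkeypatch):
    monkeypatch.delenv("CODERUNNER_LOG_LEVEL", raising=False)
    assert get_setting("LOG_LEVEL", "INFO") == "INFO"

def test_env_overrides_default(monkeypatch):
    monkeypatch.setenv("CODERUNNER_LOG_LEVEL", "DEBUG")
    assert get_setting("LOG_LEVEL", "INFO") == "DEBUG"
```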
- Modularized Architecture
  - Extracted Jupyter client logic into `jupyter_client.py`
  - Created centralized configuration system in `config.py`
  - Improved separation of concerns
- Enhanced Configuration Management
  - Environment variable support with `CODERUNNER_` prefix
  - Centralized configuration with sensible defaults
  - Better local development support
- Improved Error Handling
  - Custom exception classes for better error categorization
  - More robust WebSocket connection handling
  - Comprehensive logging and error reporting
- Container Optimizations
  - Multi-stage Docker build for smaller images
  - Proper signal handling with `tini`
  - Better entrypoint script with error handling
  - Unified configuration with `CODERUNNER_` prefix across all components
- Multi-Kernel Support
  - Added Bash kernel support alongside Python
  - New `execute_bash_code` MCP tool for shell commands
  - Kernel status monitoring with `get_kernel_status` tool
- Testing Framework
  - Comprehensive test suite with pytest
  - Unit tests for configuration and Jupyter client
  - Mock-based testing for isolated components
- Code Quality Improvements
  - Pinned dependency versions for reproducible builds
  - Cleaner, more maintainable code structure
  - Better documentation and type hints
```
coderunner/
├── config.py             # Configuration management
├── jupyter_client.py     # Jupyter WebSocket client
├── server.py             # Main FastMCP server
├── requirements.txt      # Pinned dependencies
├── Dockerfile            # Optimized multi-stage build
├── entrypoint.sh         # Improved container entrypoint
├── .env.example          # Configuration template
├── pytest.ini            # Test configuration
└── tests/                # Test suite
    ├── test_config.py
    └── test_jupyter_client.py
```
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.